Center for Applied Postrationality: An Update 2021-04-01T08:13:53.221Z
How many people have died in China from Covid-19? 2020-03-30T03:06:14.016Z
How many people in the Bay Area are infected with Covid-19 as of the end of 2/29/2020? 2020-03-01T05:37:26.751Z
What is a reasonably complete set of the different meanings of "hard work"? 2019-12-07T04:54:30.190Z
Who's an unusual thinker that you recommend following? 2019-09-13T04:11:17.413Z
How would a person go about starting a geoengineering startup? 2019-08-06T07:34:53.451Z
Dony's Shortform Feed 2019-07-24T23:48:05.223Z
What are good practices for using Google Scholar to research answers to LessWrong Questions? 2019-05-18T21:44:20.978Z
Why exactly is the song 'Baby Shark' so catchy? 2019-05-17T06:26:24.824Z
What is good literature on learned helplessness? 2019-05-17T03:15:42.669Z
How to make plans? 2019-04-23T08:29:58.809Z
Announcing the Center for Applied Postrationality 2019-04-02T01:17:12.873Z
What would 10x or 100x better than CFAR look like? 2019-03-31T05:55:06.731Z
If you wrote a letter to your future self every day, what would you put in it? 2019-03-12T09:42:37.185Z
What is learning? 2019-02-08T03:18:10.859Z
What are questions? 2019-01-09T07:37:30.246Z


Comment by DonyChristie on Could you have stopped Chernobyl? · 2021-08-30T21:35:20.472Z · LW · GW

Noting that the more real, second-order disaster resulting from Chernobyl may have been the reduced adoption of nuclear power (assuming the accident fueled antinuclear sentiment). Likewise, I'm guessing the Challenger disaster had a negative influence on the U.S. space program. Covid lockdowns have a similar quality of not tracking the cost-benefit of their continuation. Human reactions to disasters can be worse than the disasters themselves, especially if the costs of those reactions are hidden. I don't know how this translates to AI safety, but it merits thought.

Comment by DonyChristie on Examples of Acausal Trade with an Alien Universe? · 2021-04-01T23:37:16.976Z · LW · GW

Dropping this paper here as what I know to be the canonical text on this subject.

Comment by DonyChristie on TAPs 3: Reductionism · 2021-02-08T02:09:53.603Z · LW · GW

“He has half the deed done who has made a beginning.”
– Horace

Comment by DonyChristie on Simulacrum 3 As Stag-Hunt Strategy · 2021-01-27T02:15:11.235Z · LW · GW

This happens intergenerationally as parents forget to alert their children to the actual reasons for things. Having observed this happen with millennials, I am scared of what we are all collectively missing because older generations literally just forgot to tell us.

What do you think we are missing?

Comment by DonyChristie on Matt Levine on "Fraud is no fun without friends." · 2021-01-23T20:05:10.110Z · LW · GW

A crucial consideration for why destroying the restaurant business is good: factory farming.

Comment by DonyChristie on CollAction history and lessons learned · 2021-01-18T03:04:46.745Z · LW · GW

Hey Ron, I am working on my own version of this (inspired by this Sequence), and would love to get your advice! Right now I am focusing on crowdfunding via dominant assurance contracts on Ethereum.

How did you / would you verify that someone did something? What are specific examples of that happening for different actions? What kinds of evidence can be provided? I have a fuzzy sense of what this looks like right now. The closest sites I can think of just off the top of my head that involve verification are CommitLock (which I made a successful $1000 commitment contract on to get myself to do swim lessons) and DietBet, which requires a photo of your scale (it also has that 'split the pot' feature you mentioned, which I am pretty excited for).
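Since the mechanism in question is a dominant assurance contract, here is a minimal sketch of its settlement logic in Python. This is purely illustrative (the names and numbers are mine, not any platform's actual API): if the funding threshold is met, the pledges are collected; if not, each backer is refunded plus a bonus paid by the entrepreneur, which is what makes pledging a dominant strategy.

```python
def settle_contract(pledges, threshold, refund_bonus):
    """Return (outcome, payouts) for a dominant assurance contract.

    pledges: dict of backer -> amount pledged
    threshold: total funding needed for the project to proceed
    refund_bonus: amount the entrepreneur pays each backer on failure
    """
    total = sum(pledges.values())
    if total >= threshold:
        # Success: the project keeps the pledges; backers get the good.
        return "funded", {backer: 0 for backer in pledges}
    # Failure: each backer gets their pledge back plus the bonus,
    # so pledging is never worse than abstaining.
    return "refunded", {backer: amount + refund_bonus
                        for backer, amount in pledges.items()}

outcome, payouts = settle_contract(
    {"alice": 400, "bob": 300}, threshold=1000, refund_bonus=50)
```

Here the threshold is missed, so both backers come out ahead for having pledged; that asymmetry is the whole point of the design.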

Comment by DonyChristie on Dony's Shortform Feed · 2020-10-13T18:58:58.561Z · LW · GW

I am very interested in practicing steelmanning/Ideological Turing Test with people of any skill level. I have only done it once conversationally and it felt great. I'm sure we can find things to disagree about. You can book a call here.

Comment by DonyChristie on Dony's Shortform Feed · 2020-08-18T00:22:37.985Z · LW · GW
I’ve mentioned previously that I’ve been digging into a pocket of human knowledge in pursuit of explanations for the success of the traditional Chinese businessman. The hope I have is that some of these explanations are directly applicable to my practice.
Here’s my current bet: I think one can get better at trial and error, and that the body of work around instrumental rationality holds some clues as to how.
I’ve argued that the successful Chinese businessmen are probably the ones who are better at trial and error than the lousier ones; I posited that perhaps they needed fewer cycles to learn the right lessons to make their businesses work.
I think the body of research around instrumental rationality tells us how they do so. I’m thankful that Jonathan Baron has written a fairly good overview of the field with the fourth edition of Thinking and Deciding. And I think both Ray Dalio’s and Nassim Nicholas Taleb’s writings have explored the implications of some of these ideas. If I were to summarise the rough thrust of these books:
Don’t do trial and error where error is catastrophic.
Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).
Increase the number of trials you can do in your life. Decrease the length and cost of each trial.
In fields with optionality (i.e. your downside is capped but your upside is large), the more trials you take and the cheaper each trial is, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”
Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5)
Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7)
Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).

Don’t do trial and error where error is catastrophic.

Wearing a mask in a pandemic. Not putting ALL of your money on a roulette wheel. Not balancing on a tightrope without a net between two skyscrapers unless you have extensive training. Not posting about controversial things without much upside. Not posting photos of meat you cooked to Instagram if you want to have good acclaim in 200 years when eating meat is outlawed. Not building AI because it's cool. Not falling in love with people who don't reciprocate.

The unknown unknown risk that hasn't been considered yet. Not having enough slack dedicated to detecting this.

Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).

If you've gone on OkCupid for the past 7 years and still haven't gotten a date from it, maybe try a different strategy. If messaging potential tenants on a 3rd-party site doesn't work, try texting them. If asking questions on Yahoo Answers doesn't get good answers, try a different site.

Increase the number of trials you can do in your life. Decrease the length and cost of each trial.

Talk to 10x the number of people; message using templates and/or simple one-liners. Invest with Other People's Money if asymmetric upside. Write something for 5 minutes using Most Dangerous Writing App then post to 5 subreddits. Posting ideas on Twitter instead of Facebook, rationality content on LessWrong Shortform instead of longform. Yoda Timers. If running for the purpose of a runner's high mood boost, try running 5 times that day as fast as possible. Optimizing standard processes for speed.

In fields with optionality (i.e. your downside is capped but your upside is large), the more trials you take and the cheaper each trial is, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”

Posting content to 10x the people 10x faster generally has huge upside (YMMV). Programming something useful, open-sourcing it, and sharing it.
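Taleb's claim about optionality can be sketched numerically: when the downside per trial is capped and the upside is large, the chance of eventually winning grows with the number of trials you can afford. A toy Monte Carlo sketch (all numbers here are illustrative, not calibrated to anything):

```python
import random

def run_trials(n_trials, cost_per_trial, p_win=0.01, payoff=1000.0, seed=0):
    """Total payoff of n_trials independent cheap trials with capped downside.

    Each trial costs cost_per_trial, and with small probability p_win
    pays out a large fixed payoff (a crude stand-in for a fat-tailed upside).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        total -= cost_per_trial
        if rng.random() < p_win:
            total += payoff
    return total

# At equal total spend, many cheap trials beat a few expensive ones
# in the chance of catching at least one big payoff:
few = run_trials(n_trials=10, cost_per_trial=10.0)
many = run_trials(n_trials=100, cost_per_trial=1.0)
```

With a 1% win rate per trial, 100 cheap trials catch at least one payoff about 63% of the time, versus roughly 10% for 10 trials, even though the total spend is identical.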

Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5)

Roam is good for this, perhaps SuperMemo. Posting things to social media and coming up with examples of the rules is also a good way of learning content. cough

Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7)

Did messaging or posting to X different places work? Try 2X, 5X, etc. 1 to N after successfully going 0 to 1.

Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).

Stating assumptions strongly and clearly so they are disconfirmable, then setting a Yoda Timer to seek counter-examples of the generalization.

Comment by DonyChristie on Bug Hunt 2 · 2020-08-17T17:21:00.089Z · LW · GW

Any updates on this in the past six months?

Comment by DonyChristie on Mati_Roy's Shortform · 2020-07-06T02:04:29.860Z · LW · GW

Mati, would you be interested in having a friendly and open (anti-)debate on here (as a new post) about the value of open information, both for life extension purposes and otherwise (such as Facebook group moderation)? I really support the idea of lifelogging for various purposes such as life extension, but I have a strong disagreement with the general stance that universal access to information is more-or-less always a public good.

Comment by DonyChristie on Dony's Shortform Feed · 2020-07-03T23:31:40.667Z · LW · GW

Sure thing. What would you recommend for learning management?

(I count that as an answer to my other recent question too.)

Comment by DonyChristie on Dony's Shortform Feed · 2020-07-03T22:32:13.310Z · LW · GW

Warning: TVTropes links

When should I outsource something I'm bad at vs leveling up at that skill?

Comment by DonyChristie on Dony's Shortform Feed · 2020-07-03T19:05:33.059Z · LW · GW

How would you instruct a virtual assistant to help you with scheduling your day/week/etc?

Comment by DonyChristie on The Real Standard · 2020-04-01T04:24:32.837Z · LW · GW

Great post! It's like the "what if an alien took control of you" exercise but feels more playful and game-y. I started a Google doc to plan the month of April from Gurgeh's perspective.

See also: Outside.

Comment by DonyChristie on Dony's Shortform Feed · 2020-03-25T01:58:48.764Z · LW · GW

Why does CHAI exclude people who don't have a near-perfect GPA? This doesn't seem like a good way to maximize the amount of alignment work being done. A high GPA won't save the world, and in fact selects for obedience to authority and years of status competition, leading to poor mental health in which to do work and decreasing the total amount of cognitive resources being thrown at the problem.

(Hypothesis 1: "Yes, this is first-order bad but the second-order effect is we have one institutionally prestigious organization, and we need to say we have selective GPA in order to fit in and retain that prestige." [Translator's Note: "We must work with evil in order to do good." (The evil being colleges and grades and most of the economic system.)])

(Hypothesis 2: "GPA is the most convenient way we found to select for intelligence and conscientiousness, and those are the traits we need the most.")

(Hypothesis 3: "The university just literally requires us to do this or we'll be shut down.")

Won't somebody think of the grad students!

Comment by DonyChristie on How many people in the Bay Area are infected with Covid-19 as of the end of 2/29/2020? · 2020-03-03T20:32:45.145Z · LW · GW

No, there has barely been any testing. I think it's more like 200-1000 cases.

Comment by DonyChristie on How many people in the Bay Area are infected with Covid-19 as of the end of 2/29/2020? · 2020-03-03T20:31:56.845Z · LW · GW

I think it's ~1000 for that day. I don't know how long deaths take to bear out - is it 14 days? If so, then ~34 deaths over the next 14 days would suggest this prediction is right.

We probably have a greater than average transmission rate/doubling time because no one is tracking it, and I'm pessimistic enough about the U.S. healthcare system by this point that we may not even be tracking fatalities accurately.

I will update my answer as I think about this more clearly.

Comment by DonyChristie on Dony's Shortform Feed · 2019-12-24T05:53:29.537Z · LW · GW

What was the most valuable habit you had during the past decade?

What is the most valuable habit you could inculcate or strengthen over the next decade?

(Habit here broadly construed as a specific activity that lasts anywhere from a number of seconds to half an hour or more. Example: playing golf each morning. Better example: practicing your driving swing at 6:00am for 30 minutes (but you can give much more detail than that!). Bad example: poorly operationalized vague statements like "being more friendly".)

See: The One Thing

Comment by DonyChristie on Sayan's Braindump · 2019-11-27T06:02:10.082Z · LW · GW

Can you define a post-scarcity economy in terms of what you anticipate the world to look like?

Comment by DonyChristie on bgaesop's Shortform · 2019-10-31T01:52:16.925Z · LW · GW

I am currently very skeptical that the PNSE paper has anything of worth, given that Jeffery Martin's Finder's Course is basically a scam according to this review and some others. (I don't know if the paper is based on Finder's Course participants.) It would be valuable for someone to do a fact check on the paper.

Comment by DonyChristie on Invisible Choices, Made by Default · 2019-10-23T03:40:33.416Z · LW · GW

Actually making the cards is what stops me.

Comment by DonyChristie on What's your big idea? · 2019-10-21T04:46:07.420Z · LW · GW

"Let's finish what Engelbart started"

1. Recursively decompose all the problem(s) (prioritizing the bottleneck(s)) behind AI alignment until they are simple and elementary.

2. Get massive 'training data' by solving each of those problems elsewhere, in many contexts, more than we need, until we have asymptotically reached some threshold of deep understanding of that problem. Also collect wealth from solving others' problems. Force multiplication through parallel collaboration, with less mimetic rivalry creating stagnant deadzones of energy.

3. We now have plenty of slack from which to construct Friendly AI assembly lines and allow for deviations in output along the way. No need to wring our hands with doom anymore as though we were balancing on a tightrope.

In the game Factorio, the goal is to build a rocket from many smaller inputs and escape the planet. I know someone who got up to producing 1 rocket/second. Likewise, we should aim much higher so we can meet minimal standards with monstrous reliability rather than scrambling to avoid losing.

See: Ought

Comment by DonyChristie on Book summary: Unlocking the Emotional Brain · 2019-10-13T00:00:36.552Z · LW · GW

I'm so glad someone did a writeup of this! Part of me has wanted to, I think I have a draft... I remember going through severe depression over four years ago and one of my reprieves was joyfully reading the papers written about coherence psychology. I will definitely be linking this post as a reference.

There are many times I am talking with people and want to reference the conceptual structure of coherence psychology, but there is way too much inferential distance, especially with aspiring rationalists who are not therapy geeks, so I end up mentally flailing my arms in frustration. The theory seems like a better candidate for The One True Psychotherapy than almost any other, and it pains me to see people go about solving their problems without it in their toolkit, and not being able to communicate this to them. E.g. it's frustrating to see people trying to correct the output of emotional schemas without accessing the generating model for disconfirmation: a person may feel uncomfortable with someone else who has low self-esteem, so they will try to correct it verbally without engaging in a process that will change the underlying 'pro-symptom position'.

There's the related problem that there are very few coherence therapists. I don't think most psychologists have heard of this and I find that confusing.

Oh, there's also the fact that I tried a coherence therapist and didn't find it that helpful the way it was done. They were fine to talk to but it seems retrospectively like they were cargo-culting the motions of coherence therapy as outlined by Ecker et al. I haven't had other therapists but I suspect the inefficacy is only very weak evidence pointing against the modality vs other modalities and more a problem with cramming an attempt at powerful introspection into expensive 1-hour blocks. i.e. I think psychotherapeutic structure across the board is broken and when the singularity happens it won't be a problem anymore.

My hope is that we can develop new delivery structures into which we can import psychological techniques and have them deployed at scale while being better than 1-hour weeklies, 8-hour shamanic trips, or that annoying app with the emotionally saccharine bird.

See also: The Method of Levels

Comment by DonyChristie on Dony's Shortform Feed · 2019-08-25T21:40:47.903Z · LW · GW

How might a person develop INCREDIBLY low time preference? (They value their future selves in decades to a century nearly as much as they value their current selves?)

Who are people who have this, or have acquired this, and how did they do it?

Do these concepts make sense or might they be misunderstanding something? Tabooing/decomposing them, what is happening cognitively, experientially, when a human mind does this thing?

What would a literature review say?

Comment by DonyChristie on Dony's Shortform Feed · 2019-08-15T01:18:29.669Z · LW · GW

I'm really noticing how the best life improvements come from purchasing or building better infrastructure, rather than trying permutations of the same set of things and expecting different results. (Much of this results from having more money, granting an expanded sense of possibility to buying useful things.)

The guiding question is, "What upgrades would make my life easier?" In contrast with the question that is more typically asked: "How do I achieve this hard thing?"

It seems like part of what makes this not just immediately obvious is that I feel a sense of resistance (that I don't really identify with). Part of that is a sense of... naughtiness? Like we're supposed to signal how hardworking we are. For me this relates to this fear I have that if I get too powerful, I will break away from others (e.g. skipping restaurants for a Soylent Guzzler Helmet, metaphorically) as I re-engineer my life and thereby invite conflict. There's something like a fear that buying or engaging in nicer things would be an affront to my internalized model of my parents?

The infrastructure guideline relates closely to the observation that to a first approximation we are stimulus-response machines reacting to our environment, and that the best way to improve is to actually change your environment, rather than continuing to throw resources past the point of diminishing marginal returns in adaptation to the current environment. And for the same reasons, the implications can scare me, for it may imply leaving the old environment behind, and it may even imply that the larger the environmental change you make, the more variance you have for a good or bad update to your life. That would mean we should strive for large positive environmental shifts, while minimizing the risk of bad ones.

(This also gives me a small update towards going to Mars being more useful for x-risk, although I may need to still propagate a larger update in the other direction away from space marketing. )

Of course, most of one's upgrades should be tiny and within one's comfort zone. What portfolio of small vs. huge changes one should make in one's life is an open question to me, because while it makes sense to be mostly conservative with one's allocation of one's life resources, I suspect that fear brings people to justify the static zone of safety they've created with their current structure, preventing them from seeking out better states of being that involve jettisoning sunk costs that they identify with. Better coordination infrastructure could make such changes easier if people don't have to risk as much social conflict.

Comment by DonyChristie on Dony's Shortform Feed · 2019-08-12T04:32:57.239Z · LW · GW

You bring to mind a visual of the Power of a Mind as this dense directed cyclic graph of beliefs where updates propagate in one fluid circuit at the speed of thought.

I wonder what formalized measures of [agency, updateability, connectedness, coherence, epistemic unity, whatever sounds related to this general idea] are put forth by different theories (schools of psychotherapy, predictive processing, Buddhism, Bayesian epistemology, sales training manuals, military strategy, machine learning, neuroscience...) related to the mind and how much consilience there is between them. Do we already know how to rigorously describe peak mental functioning?

Comment by DonyChristie on Dony's Shortform Feed · 2019-08-10T05:10:28.597Z · LW · GW

Do humans actually need breaks from working, physiologically? How much of this is a cultural construct? And if it is, can those assumptions be changed? Could a person be trained to enjoyably have 100-hour workweeks? (assume, if the book Deep Work is correct that you have max 4 hours of highly productive work on a domain, that my putative powerhuman is working on 2-4 different skill domains that synergize)

Comment by DonyChristie on Dony's Shortform Feed · 2019-08-02T08:23:35.153Z · LW · GW

I find the question, "What would change my mind?", to be quite powerful, psychotherapeutic even. AKA "singlecruxing". It cuts right through to seeking disconfirmation of one's model, and can make the model more explicit, legible, object. It's proactively seeking out the data rather than trying to reduce the feeling of avoidant deflection associated with shielding a beloved notion from assault. Seems like it comports well with the OODA loop as well. Taken from Raemon's "Keeping Beliefs Cruxy".

I am curious how others ask this question of themselves. What follows is me practicing the question.

What would change my mind about the existence of the moon? Here are some hypotheses:

  • I would look up in the sky every few hours for several days and nights and see that it's not there.
  • I see over a dozen posts on my Facebook feed talking about how it turns out it was just a cardboard cutout and SpaceX accidentally tore a hole in it. They show convincing video of the accident and footage of people reacting such as leaders of the world convening to discuss it.
  • Multiple friends are very concerned about my belief in this luminous, reflective rocky body. They suggest I go see a doctor or the government will throw me in the lunatics' asylum. The doctor prescribes me a pill and I no longer believe.
    • It turns out I was deluded and now I'm relieved to be sane.
    • It turns out they have brainwashed me and now I'm relieved to be sane.
  • I am hit over the head with a rock which permanently damages my ability to form lunar concepts. Or it outright kills me. I think this Goodharts (is that the closest term I'm looking for?) the question but it's interesting to know what are bad/nonepistemic/out-of-context reasons I would stop believing in a thing.

These anticipations were System 2 generated and I'm still uncertain to what extent I can imagine them actually happening and changing my mind. It's probably sane and functional that the mind doesn't just let you update on anything you imagine, though I also hear the apocryphal saying that the mind 80% believes whatever you imagine is real.

Comment by DonyChristie on What makes a scientific fact 'ripe for discovery'? · 2019-05-17T09:17:39.043Z · LW · GW

Initial Braindump (hopefully will edit)

Knowledge dependencies (algebra before calculus)

Necessary tools? (Did people who made simultaneous discoveries use the same tools?)

The research community for that domain? (How much communication is there? How dense are the connections between people?)

How new the field is. Whether there was a sudden jump in the number of researchers.

How frequently discoveries happen in the field.

Whether a major disaster or other event is obstructing scientific progress from being made at the time.

Whether the existence of simultaneous discoveries is just an artifact of cherrypicking biases

Comment by DonyChristie on Why exactly is the song 'Baby Shark' so catchy? · 2019-05-17T08:39:37.359Z · LW · GW

Here I have generated an audio file that precisely fits your description.

Would you agree that this production could acquire several billion views because it is very catchy, with its catchiness being only composed of simplicity, repetitiveness, and doot?

Comment by DonyChristie on Announcing the Center for Applied Postrationality · 2019-04-02T05:08:34.914Z · LW · GW

What is "explied postrationality"?

Comment by DonyChristie on [NeedAdvice]How to stay Focused on a long-term goal? · 2019-03-09T05:01:36.891Z · LW · GW

Most or all of these ideas are things I have tried or am trying, or variations on them:

You could send a recurring email to yourself every day (using Mail Conductor if you use Gmail). As part of the email, ask: how can this recurring email be improved? Have I done [x] that is relevant to domain [y] of my life? For example: Did I exercise today? Have I read a book? Have I written? Etc. And include hopeful beliefs that you want to remind yourself of, or exciting goals you want to accomplish.

Likewise, you could make a personal homepage with these reminders.

You could use the app Tasker if you use an Android phone to pull up one of these things as well, for example whenever you open your phone.

You can track your habits using Beeminder, using the Lights Spreadsheet, or the visual dashboard.

You could post this question to other places on the internet. (There are many such places!)

It's totally possible to build a support network of people online. You could join rationalist Discord servers and build a support network there, perhaps making your own Discord server (pretty easy). There are so many people out there who are in fact willing to talk.

You can carry around a notebook and just write your thoughts in it, or journal your thoughts in a simple text file.

You could write out a bunch of your negative thoughts in a notebook, then write out more functional/truer thoughts, put those on index cards, and review those everyday. Or just put hopeful thoughts/reminders on those cards, or quotes from people you look up to.

Scheduling: You could try setting a 15 minute timer to figure out whatever's blocking you from using a calendar. You could figure out how to use voice commands such that you could be like "Yo calendar, at 5 pm I will spend an hour writing a LessWrong post" like it's a human.

You could try taking the meaningful qualia you feel around the girl you're infatuated with and learning how to autogenerate and attach it to other things. (If you spent 10 minutes on it, could you fall in love with a cup?)

You could spend 15 minutes on the regret-minimization exercise

How long do you go outside? Do you get sunlight? Do you ever go in the woods?

Do you go to any events or meetups? You could try the intention of going to one three times a week.

What activities did you enjoy as a child? You could try doing that several hours every day, guilt free.

Every night, you could write out a story of how you want tomorrow to go.

You could do daily email reports with an accountability partner. I would be willing to be your accountability partner if you wish.

Let me know how things go!

Comment by DonyChristie on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-02-23T08:25:29.413Z · LW · GW

People at Location X and Y move to Location Z.

People in a city switch from using cars to using other forms of transport.

Switching from worse dating sites to something better.

People in a group house switching from defecting on cleaning the house out of expectation others won't help clean, to one in which everyone starts cleaning because they expect others to cooperate in cleaning.

Buying or renting an otherwise too-expensive property.

Switching from the Multiplayer Game That Everyone Is Playing to a Better Multiplayer Game.

Switching to a new religion from Christianity.

Accountability contracts - "I'll engage in Habit X conditional on Y other people also engaging in Habit X"

"I'll stop making political posts if >75% of my friends also agree to stop making political posts", "I'll stop making memes if 1000 other people also agree to stop making them".

Recruiting people to make world record attempts that involve large numbers of people, where it's not worth marginally joining if that doesn't look likely and therefore it doesn't get off the ground.

Recruiting people to do things that would otherwise result in the police arresting them if there were a smaller amount of people, e.g. 1 million people using psychedelics as a protest in Washington D.C.

People attending events in general! Many events don't have a critical mass to seem worthy of joining on the margin.

Starting an exercise circle if there are 5 other people to also start it with.

Going through a course of study if there are 10 other people to also study it with.

Switching people from Bitcoin to whatever is better (how would people decide that?).

Meta: sufficient amount of people coordinating to use the same Kickstarter for Inadequate Equilibria. (I'm sorry)

I can write more specific examples upon request.
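Several of the conditional commitments above ("I'll do X if Y other people also do X") can be settled mechanically: find the largest group of pledgers whose individual thresholds are all satisfied by the group's own size. A minimal sketch in Python (illustrative, not any existing platform's algorithm):

```python
def activated_pledgers(thresholds):
    """Largest self-consistent group of conditional commitments.

    thresholds: dict of person -> minimum number of OTHER participants
    they require before they will act. Returns the set of people whose
    conditions are all met when exactly that set participates.
    """
    # Sort people from least to most demanding.
    people = sorted(thresholds, key=thresholds.get)
    # Try the largest group first, shrinking until it is self-consistent:
    # in a group of size k, each member sees k - 1 others participating.
    for k in range(len(people), 0, -1):
        group = people[:k]
        if all(thresholds[p] <= k - 1 for p in group):
            return set(group)
    return set()
```

Taking the k least-demanding pledgers is safe because if any self-consistent group of size k exists, the group made of the k least-demanding pledgers is also self-consistent; looping k downward then finds the largest such group.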

Comment by DonyChristie on Open Thread January 2019 · 2019-01-21T08:05:39.088Z · LW · GW

What weird experiments?

Comment by DonyChristie on Circling · 2019-01-08T21:38:44.701Z · LW · GW

I was wrong about producing a writeup that qualifies as "a writeup" (I'm not sure exactly where I would have put it after the draft had been finished). I am poorly calibrated in personal action predictions (it may be that I am only tempted to make a prediction that I'll do a thing when I want to signal to myself or others that I will in fact do it, when the outside view says I won't; so I should probably update downward on doing a thing if I find myself trying to predict a probability that I'll do it, over and above the normal downward adjustment for the planning fallacy and Hofstadter's Law).

Thankfully there is satisfactory content on the subject. For instance, "Group Debugging" seems to be the thing-that-is-doing-the-closest-thing-to-this at meetups that is more repeatable and tractable than the original Hamming question (it's basically what the Hamming thing I said I facilitated was), though it is somewhat different from the broad scope of the original (though I don't like the word "Debugging" associated with this exercise, it seems to fetishize using programming metaphors to apply to human psychology, which feels sterile, cliquey, overreliant on usage of "System 2" solutions, and not as obviously descriptive of what is happening as it could be. Maybe "Group Problem-Solving"?).

Comment by DonyChristie on Circling · 2018-02-19T07:39:08.211Z · LW · GW

I facilitated a Hamming Circle two days ago and it looks like I will produce some kind of writeup someday, >50% probability.

Comment by DonyChristie on [deleted post] 2017-10-18T20:04:03.687Z

I believe the Null Hypothesis. This site isn't getting too many comments, so there's plenty of room for variation. I would definitely rule out Hypothesis 7.