Inaccessible finely tuned RNG in humans?

post by Sandi · 2020-10-07T17:04:36.142Z · LW · GW · 7 comments

This is a question post.


It was my impression that humans are bad at random (number) generation. Tell a person to arrange stars on a black canvas randomly, and they'll space them out more or less equidistantly and uniformly. If asked to generate a sequence of coin flips, people avoid the longer runs of the same face that real coins do produce.

The scientific consensus agrees. Here's a study where participants are asked to generate a sequence of 300 digits from 1 to 9. The researchers successfully predicted the next digit 27% of the time (for a truly random sequence, chance level would be about 11%). Scott Aaronson designed a simple program that predicts which of two keys the user will press next with about 70% accuracy on average.
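For intuition, here is a minimal sketch of a predictor in the spirit of Aaronson's oracle (my own toy version, not his actual code): it remembers what tended to follow each recent window of presses and guesses the more frequent continuation.

```python
# Toy next-keypress predictor (illustration only, not Aaronson's code):
# track what has historically followed each length-k window of presses
# and guess the more common continuation.
from collections import defaultdict
import itertools

def play(get_keypress, k=5, rounds=100):
    counts = defaultdict(lambda: [0, 0])  # history -> [times '0' followed, times '1' followed]
    history = ""
    correct = 0
    for _ in range(rounds):
        c0, c1 = counts[history]
        guess = "0" if c0 >= c1 else "1"
        key = get_keypress()               # user presses '0' or '1'
        correct += (guess == key)
        counts[history][int(key)] += 1
        history = (history + key)[-k:]     # slide the k-press window
    return correct / rounds                # ~0.5 means the user looked random

# Example: a "user" who simply alternates keys is predicted almost perfectly.
alternating = itertools.cycle("01").__next__
print(play(alternating))  # well above 0.5
```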

On the other hand, I've seen a number of informal online polls (e.g. on reddit and twitter) where the respondents manage to sort themselves into two groups in a requested ratio with surprising accuracy. See here and here, though I've seen a ton of these with various ratios. I've never seen it fail outright. It does break down for a 1-99 split, but that seems reasonable to me.

The question is, do you know of any research into this phenomenon or do you have a good explanation for it?

Note that the ability is there in some contexts but not in others. An RNG exists within us, but our access to it is sometimes blocked. I've seen other examples of this in psychology, so it doesn't weird me out as much as it used to, but I still want to point it out.

If this seems like an odd phenomenon to fixate on, know that an RNG is useful in many contexts. It's necessary for strong or optimal (mixed) strategies in certain games (poker, rock-paper-scissors) and game-theoretic scenarios. It's useful for coordination without communication. The best known algorithms for some problems are probabilistic.

And while I have your attention: https://www.strawpoll.me/21064106

Answers

answer by Zack_M_Davis · 2020-10-07T18:17:05.301Z · LW(p) · GW(p)

Getting a single random choice (e.g., of poll options) and getting a random sequence (e.g., of numbers, or coin faces, or star locations) are different tasks. Perhaps the problem isn't that people can't make an approximately random choice; it's that they can't make independent random choices. That's why people place the stars equidistantly or avoid repetitions. Having already placed the first star, or already said "Heads", we don't know how to forget. The second star, the second flip, is chosen in a different mental context: not in an unmarked uniform space, but in a space that already has a star on it.

comment by Sandi · 2020-10-07T18:31:09.949Z · LW(p) · GW(p)

Well, I'm quite satisfied with that. Thank you!

answer by Jiro · 2020-10-10T21:17:55.208Z · LW(p) · GW(p)

Generate several "random" numbers in your head; each will fall prey to the usual biases of mental generation. Then add them together and take the sum mod X to produce a result that behaves more like a real random number.
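A quick demonstration of why this works (my own illustration, not Jiro's code): even if every individual draw is biased, the sum of several independent draws taken mod X comes out close to uniform.

```python
# Illustration: summing several biased draws mod X flattens the distribution.
import random
from collections import Counter

def biased_draw(x=10):
    # Stand-in for a human's skewed "random" digit: small digits favoured.
    return random.choices(range(x), weights=[x - i for i in range(x)])[0]

def combined_draw(x=10, k=5):
    # Sum k biased draws and reduce mod x, as the answer suggests.
    return sum(biased_draw(x) for _ in range(k)) % x

print(Counter(biased_draw() for _ in range(10000)))    # visibly skewed
print(Counter(combined_draw() for _ in range(10000)))  # close to uniform
```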

answer by AnthonyC · 2020-10-25T18:11:44.921Z · LW(p) · GW(p)

So, yes, asking people to generate a random sequence doesn't result in a random sequence, but I wonder whether we are random enough that you could extract a fair bit string from our output, the same way you can extract one from a biased coin.
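The standard trick for a biased coin is von Neumann's extractor; a sketch (illustration only), with the caveat that it assumes independent flips, which is exactly what human-generated sequences tend to violate.

```python
# Von Neumann extractor (standard technique, sketched for illustration):
# read the source in pairs and emit the first bit of each unequal pair.
# This removes bias, but only if successive flips are independent.
import random

def von_neumann(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)  # P(1,0) == P(0,1), so the emitted bit is fair
    return out

# Example: a 70/30-biased but independent source still yields fair bits.
biased = [int(random.random() < 0.7) for _ in range(10000)]
fair = von_neumann(biased)
print(sum(fair) / len(fair))  # close to 0.5
```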

I also wonder if simple hacks like "only ask me for one symbol (a digit, a bit, or H/T) every 10 seconds, and record them automatically without letting me see past ones, so I can't really remember recent substrings" could eliminate the biases that come from successive symbols being conditioned on one another.

answer by ike · 2020-10-07T18:26:15.170Z · LW(p) · GW(p)

It seems fairly easy to get a sequence of random-seeming numbers by applying some transform to, e.g., the first ten digits of pi (if you remember those), your birth date, other important dates, etc. As long as you come up with a procedure to convert to the scale you need, it shouldn't be predictable at low sample sizes.

300 random numbers is pushing it, though. For that you'd need a hash function, which most people can't compute mentally without specific training.
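For concreteness, a toy version of the kind of procedure described here (my own illustration, not ike's exact method): combine memorized digits with a personal number and reduce to the required range.

```python
# Toy illustration: mix memorized digits of pi with a personal number,
# then reduce modulo the number of options you need.
PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

def quasi_random(personal_number, n_options, i=0):
    # i-th "draw": offset into the memorized digits plus the personal number
    return (PI_DIGITS[i % len(PI_DIGITS)] + personal_number + i) % n_options

# e.g. someone born on the 23rd choosing among 5 poll options:
print([quasi_random(23, 5, i) for i in range(5)])
```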

comment by Sandi · 2020-10-07T18:44:02.119Z · LW(p) · GW(p)

I seriously doubt the majority of the participants in these casual polls are doing anything like that.

7 comments


comment by abramdemski · 2020-10-07T18:00:41.965Z · LW(p) · GW(p)

People could be using actual random number sources to do this, although I think that's probably not the explanation.

I want to point out that getting these polls right is a very different task from generating a good pseudorandom sequence. For example, people could come equipped with a few pseudorandom numbers stamped onto their soul (e.g., a birthdate), which they use to answer polls like this. This would not help with generating long pseudorandom sequences.

(I don't actually think everyone uses their birthdate as a pseudorandom number, but maybe they use similar things, such as their address. But mainly, there could be other explanations that similarly don't imply people would be good at generating longer sequences.)

Replies from: Sandi, Sandi
comment by Sandi · 2020-10-07T18:52:20.623Z · LW(p) · GW(p)

Hm, could we tell your theory apart from Zack's [LW(p) · GW(p)] by asking a fixed group of people for a sequence of random numbers over a long period of time, with enough delay between each query for them to forget?

comment by Sandi · 2020-10-07T18:42:51.414Z · LW(p) · GW(p)

This occurred to me, but I didn't see how it could work with different ratios. I guess it would work if you have a sample from a uniformly distributed variable with large support (more than ~100 values): e.g. if x is your birthday as a day of the year, then checking whether x/365 < 0.2 sorts you into a roughly 20% group.

It would be interesting to test this with a very large sample where you know a lot of information about the respondents, and then try to predict their choices.
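A sketch of the thresholding idea above (my own illustration): any roughly uniform personal number with enough distinct values can be cut at an arbitrary fraction, so the same source handles any poll ratio.

```python
# Sketch of the birth-date thresholding idea (illustration only): a roughly
# uniform number with enough distinct values can be cut at any fraction.
def vote_group(day_of_year, fraction):
    # day_of_year in 1..365; fraction is the target share of group "A"
    return "A" if day_of_year / 365 <= fraction else "B"

print(vote_group(142, 0.2))  # someone born on day 142 lands in group "B"
```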

comment by Adele Lopez (adele-lopez-1) · 2020-10-07T20:56:01.300Z · LW(p) · GW(p)

My method for doing this is to imagine a spinner partitioned into certain sections (like a pie chart of the intended distribution), and then imagine spinning it rapidly and watching it slow down and seeing where it lands. I don't know how well this actually works, but I adopted it after seeing a poll like the ones you linked asking people to choose based on this method, which ended up being very close. It does take a decent amount of mental effort, as well as a couple of seconds for each round; I'd expect any method that works to have these properties.

Replies from: Sandi, crabman, Radamantis
comment by Sandi · 2020-10-08T08:34:00.928Z · LW(p) · GW(p)

I wonder if, in that case, your brain picks the stopping time, stopping point or "flick" strength using the same RNG source that is used when people just do it by feeling.

What if you tried a 50-50 spinner against Aaronson's oracle, if it's not too exhausting to do it many times in a row? Or write down a sequence here and we can run randomness tests on it. Though I did see some tiny studies indicating that people can improve at generating random sequences.
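For reference, a crude version of such a randomness test (a sketch of the classic runs test, not a full test battery): people who avoid repeats produce more runs than an independent source would.

```python
# Runs test sketch (illustration only): humans avoiding repeats show up as
# *more* runs than chance predicts; |z| much above 2 is suspicious.
from math import sqrt

def runs_z_score(bits):
    n1, n2 = bits.count(1), bits.count(0)
    n = n1 + n2
    runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
    mean = 1 + 2 * n1 * n2 / n
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / sqrt(var)

print(runs_z_score([0, 1] * 50))  # strongly positive: far too many runs
```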

comment by philip_b (crabman) · 2020-10-07T22:32:42.869Z · LW(p) · GW(p)

My method is to come up with a phrase or find a phrase written somewhere nearby, count the syllables or letters, and take this value modulo the number of bins. For the topic starter's poll, I found a sentence on a whiteboard near me, counted its letters modulo 10, got 5, and so voted for 30%, because the bins were like 20% - 30% - 50%.
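A direct sketch of this hack (my own code, following the description): count the letters of a nearby phrase and reduce modulo the number of bins.

```python
# Sketch of the letters-mod-bins hack described above.
def pick_bin(phrase, n_bins):
    letters = sum(c.isalpha() for c in phrase)  # count letters only
    return letters % n_bins                     # index of the chosen bin

print(pick_bin("Don't forget the team meeting at noon", 10))
```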

comment by NunoSempere (Radamantis) · 2020-10-07T21:09:44.219Z · LW(p) · GW(p)

My method is to generate a random sentence and then assign a 0 to letters before m and a 1 to letters afterwards. This is pretty fast.
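A sketch of this method too (my own code, treating "m" itself as a 1):

```python
# Letters-to-bits sketch: a-l -> 0, m-z -> 1 ('m' counted as 1).
def sentence_to_bits(sentence):
    return [int(c.lower() >= "m") for c in sentence if c.isalpha()]

print(sentence_to_bits("The quick brown fox"))  # e.g. [1, 0, ...]
```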