[LINK] Using procedural memory to thwart "rubber-hose cryptanalysis"

post by Shmi (shminux) · 2012-07-19T23:54:08.814Z · LW · GW · Legacy · 17 comments

It's an interesting idea: thwart standard social-engineering attacks by hiding the password from the user himself. In a sense, all the conscious mind ever gets is "********". The paper is called "Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks". Here is a popular write-up and the paper PDF.

Abstract:

Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.
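Concretely, the paper's scheme does not check a typed string at all: the user replays the training game, and the verifier looks for a performance gap between the planted sequence and fresh random sequences. A minimal sketch of that check (the function name, score format, and threshold are illustrative assumptions, not taken from the paper):

```python
def authenticates(trained_scores, untrained_scores, gap_threshold=0.05):
    """Pass if the user plays noticeably better on the planted sequence
    than on fresh random sequences -- evidence that the implicit
    training 'took'. Scores are per-round accuracies in [0, 1]."""
    trained_avg = sum(trained_scores) / len(trained_scores)
    untrained_avg = sum(untrained_scores) / len(untrained_scores)
    return trained_avg - untrained_avg > gap_threshold

# A trained user shows a clear gap on the planted sequence:
authenticates([0.91, 0.88, 0.90], [0.71, 0.69, 0.73])  # True
# An untrained user shows no measurable gap:
authenticates([0.70, 0.72], [0.71, 0.70])              # False
```

The point of this shape is exactly what the abstract claims: there is no secret string anywhere for the user to divulge, only a skill difference the verifier can measure.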

While this approach does nothing against man-in-the-middle attacks, it could probably be evolved into a unique digital signature someday. It would be cheaper than a retinal scan or a fingerprint, and would not require client-side hardware.

17 comments

Comments sorted by top scores.

comment by CronoDAS · 2012-07-20T00:33:52.594Z · LW(p) · GW(p)

Even if you can't divulge the password, you can still enter it... so if someone is actually in a position to coerce you, they're probably also in a position to make you enter the password for them. (It's damn hard to make an ATM that will give you your money when you want it, but also makes it impossible for someone to empty your account by waiting for you at the ATM and pointing a gun at you.)

Replies from: VNKKET, jimmy, army1987, JackV, Adele_L
comment by VNKKET · 2012-07-20T02:08:35.067Z · LW(p) · GW(p)

And after skimming the paper, the only thing I could find in response to your point is:

Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is under duress. Alternately, we might imagine other modes of detection of duress, including video monitoring, voice stress detection, and skin conductance monitoring [8, 16, 1]. The idea here would be to detect by out-of-band techniques the effects of coercion. Together with in-band detection of altered performance, we may be able to reliably detect coerced users.

Replies from: Kaj_Sotala, ViEtArmis, Clippy
comment by Kaj_Sotala · 2012-07-20T08:42:10.232Z · LW(p) · GW(p)

Of course, such changes could also be caused by being stressed in general. Even if you could calibrate your model to separate the effects of "being under duress" from "being generally stressed" in a particular subject, I would presume there's too much variability among people to do this reliably for everyone.

Imagine how people would react to an ATM that gave them their money whenever they wanted it - except when they were in a big hurry and really needed the cash now.

Replies from: handoflixue
comment by handoflixue · 2012-07-24T23:10:08.987Z · LW(p) · GW(p)

(Blind Optimism) They'd learn to meditate!

But then, how do we stop people from being coerced into meditative states... :(

comment by ViEtArmis · 2012-07-20T14:43:25.245Z · LW(p) · GW(p)

Got the flu? Sorry, no email for you today.

comment by Clippy · 2012-07-25T22:38:56.532Z · LW(p) · GW(p)

In addition to what Kaj_Sotala said, there is already a much simpler, more reliable way to detect coercion on authentication: distress passwords!
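The distress-password idea is easy to sketch: both credentials open the session, but the duress one silently raises an alarm while letting the attacker see a normal login. A minimal illustration (all names are hypothetical, and a real system would use a salted KDF rather than bare SHA-256):

```python
import hashlib
import hmac

def check_login(submitted, normal_hash, duress_hash, alert):
    """Both passwords grant access, but the duress one silently
    triggers the alarm callback so the attacker suspects nothing."""
    h = hashlib.sha256(submitted.encode()).hexdigest()
    if hmac.compare_digest(h, normal_hash):
        return "normal"
    if hmac.compare_digest(h, duress_hash):
        alert()          # silent alarm to security/police
        return "duress"  # still grant (possibly limited) access
    return "denied"
```

The design choice worth noting is that the duress branch must be indistinguishable from a normal login on-screen; any visible difference defeats the purpose.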

comment by jimmy · 2012-07-20T19:47:05.480Z · LW(p) · GW(p)

My next step would be to game context dependent memory to make the memory unavailable under duress.

comment by A1987dM (army1987) · 2012-07-20T22:20:13.840Z · LW(p) · GW(p)

I've heard of some kind of security system whereby you can enter either the usual password or a “special” one, and if you enter the latter you're granted access but the police are alerted, or something like that.

Extending that to an ATM might yield one which gives fake bills, takes a picture, and alerts the police if the “fake” PIN is entered.

Replies from: VincentYu
comment by VincentYu · 2012-07-21T02:33:59.052Z · LW(p) · GW(p)

For ATMs, the idea is out there, but it has never been implemented. Snopes on this:

The Credit Card Accountability Responsibility and Disclosure Act of 2009 compelled the Federal Trade Commission to provide an analysis of any technology, either then currently available or under development, which would allow a distressed ATM user to send an electronic alert to a law enforcement agency. The following statements were made in the FTC's April 2010 report in response to that requirement:

FTC staff learned that emergency-PIN technologies have never been deployed at any ATMs.

The respondent banks reported that none of their ATMs currently have installed, or have ever had installed, an emergency-PIN system of any sort. The ATM manufacturer Diebold confirms that, to its knowledge, no ATMs have or have had an emergency-PIN system.

comment by JackV · 2012-07-20T11:18:15.410Z · LW(p) · GW(p)

I don't know if the idea works in general, but if it works as described, I think it would still be useful even though it doesn't meet this objection. I don't foresee any authentication system that can distinguish between "user wants money" and "user has been blackmailed into saying they want money as convincingly as possible and not triggering any hidden panic buttons", but even so, a password you can't tell someone would still be more secure because:

  • you're not vulnerable to people ringing you up and asking what your password is for a security audit, unless they can persuade you to log on to the system for them
  • you're not vulnerable to being kidnapped and coerced remotely, you have to be coerced wherever the log-on system is

I think the "stress detector" idea is one that is unlikely to work unless someone works on it specifically to tell the difference between "hurried" and "coerced", but I don't think the system is useless because it doesn't solve every problem at once.

OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.

Replies from: printing-spoon, billswift
comment by printing-spoon · 2012-07-21T05:33:54.105Z · LW(p) · GW(p)

you're not vulnerable to people ringing you up and asking what your password is for a security audit, unless they can persuade you to log on to the system for them

Easier to avoid with basic instruction.

you're not vulnerable to being kidnapped and coerced remotely, you have to be coerced wherever the log-on system is

If the enemy knows the system, they can copy the login system in your cell.

comment by billswift · 2012-07-20T12:55:21.816Z · LW(p) · GW(p)

OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.

Indeed, for a recent, real-world example, improvements in anti-theft systems that made cars harder to steal led directly to the rise of carjacking in the 1990s.

comment by Adele_L · 2012-07-20T03:04:23.374Z · LW(p) · GW(p)

It still means you need to be physically present and in an able condition.

comment by BlackNoise · 2012-07-20T16:16:38.133Z · LW(p) · GW(p)

Reminded me of this comic

comment by Decius · 2012-07-20T03:13:39.567Z · LW(p) · GW(p)

The biggest flaw I can see is that it becomes trivial to forget your password. The system is thus only as secure as the backup system.

Replies from: Lapsed_Lurker
comment by Lapsed_Lurker · 2012-07-20T08:40:47.039Z · LW(p) · GW(p)

I think the intention is to make forgetting your password as hard as forgetting how to ride a bicycle, though the only figure I remember from reading about this yesterday is '2 weeks'.

Replies from: Decius
comment by Decius · 2012-07-20T17:41:48.735Z · LW(p) · GW(p)

It's only as valid as identifying someone by how they ride their bicycle. Any number of neurological factors, including fatigue, could change how someone enters the 'password' provided.