Security Pie

The ramblings of three security curmudgeons

What is all this about lie and other detectors?


In his latest posting, Sharon refers to a hypothetical detector for lying over email. Now such things exist, and have existed for quite some time. Plotters connected to physiological sensors have been used as lie detectors since around the turn of the last century, evolving over some 40 years and multiple devices. Every so often a handheld lie detector would appear in the classified ads of some local newspaper, an inflight magazine, or SkyMall.

Now everyone knows (or should know) that the jury is out about the accuracy of lie detectors. Now why is that significant?

There are 4 possible outcomes of a lie detector test:

|             | Not caught                    | Caught                    |
|-------------|-------------------------------|---------------------------|
| Did not lie | Not lied and not caught (0,0) | Not lied but caught (0,1) |
| Lied        | Lied and not caught (1,0)     | Lied and caught (1,1)     |

In the case of lie detection testing, the "true results", (0,0) and (1,1), lead to appropriate outcomes. For example, the thief is caught and made to serve time, or the innocent person is acquitted or allowed to continue the hiring process.

However, for the "false results", (1,0) and (0,1), the outcomes can be devastating. Some of the most damaging spies that the world has ever seen (e.g. Aldrich Ames, Robert Hanssen) routinely passed lie detector tests. Similarly, try to imagine a husband who just happens to be overly anxious about participating in Fox's "The Moment of Truth" and is mistakenly outed as having an extramarital affair. Try to imagine explaining that one to a tearful significant other (BTW, one reason to object to programmers' decisions to broadcast these shows to a public that is ignorant of detection theory and statistics).

The dilemma for security: if a person has passed a lie detector test, can you assume they are trustworthy? Can you take considerably less care in securing their access to resources?

The science of detecting a certain quantity or quality of a signal within surrounding noise is called Detection Theory. It has been around for ages and was developed initially for radar. It is what I concentrated on for the first ten years of my engineering life, designing and evaluating SONAR and other detection systems.

In Detection Theory, the same outcomes as in the lie detector table above are used:

|           | Not detected         | Detected             |
|-----------|----------------------|----------------------|
| No signal | True Negative (0,0)  | False Positive (0,1) |
| Signal    | False Negative (1,0) | True Positive (1,1)  |
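A minimal sketch of this classification (the function name and labels are mine, not from any particular library): the four outcomes are just the four combinations of "signal present" and "detector fired".

```python
# Map each (signal_present, detected) pair to its detection-theory label.
def classify(signal_present: bool, detected: bool) -> str:
    """Return the standard detection-theory label for one outcome."""
    if signal_present and detected:
        return "True Positive (1,1)"
    if signal_present and not detected:
        return "False Negative (1,0)"
    if not signal_present and detected:
        return "False Positive (0,1)"
    return "True Negative (0,0)"

# Enumerate all four cells of the table above.
for signal in (False, True):
    for detected in (False, True):
        print(signal, detected, "->", classify(signal, detected))
```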

Given a positive result from the detector: if the probability that the positive is a true positive is α, then the probability that it is a false positive is (1-α). Unless α is quite high, the probability of a false positive will be significant enough that it cannot be ignored.

The same logic works for the True Negative and False Negative pair on the negative side.
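To make the false-positive concern concrete, here is a small Bayes' rule sketch. All the numbers are hypothetical, chosen only for illustration: even a detector that catches most liars produces mostly false alarms when the thing it is hunting for is rare.

```python
# P(signal actually present | detector said positive), via Bayes' rule.
def p_true_positive_given_alarm(p_detect: float,
                                p_false_alarm: float,
                                prevalence: float) -> float:
    """Fraction of positive results that are true positives."""
    alarms_from_signal = p_detect * prevalence          # liars who are flagged
    alarms_from_noise = p_false_alarm * (1 - prevalence)  # honest people flagged
    return alarms_from_signal / (alarms_from_signal + alarms_from_noise)

# Hypothetical polygraph: catches 90% of liars, wrongly flags 10% of
# honest subjects, and only 1 in 100 subjects is actually a spy.
print(round(p_true_positive_given_alarm(0.9, 0.1, 0.01), 3))  # -> 0.083
```

So under these (made-up) numbers, more than 90% of the people the detector flags are honest; this is the base-rate effect that makes screening a large, mostly-innocent population so dangerous.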

So what affects α? What are the parameters that would affect the quality of the results? What would we need to improve the results of the lie detector?

In RADAR, α is related to a mathematical ratio between the signal and the noise (aptly called the SNR, or Signal-to-Noise Ratio). Signal is the property we want to detect, while noise is any other source.

| Detector     | Examples of signal sources       | Examples of background noise sources |
|--------------|----------------------------------|--------------------------------------|
| RADAR        | Reflection from a real airplane  | Reflection from a real cloud         |
| SONAR        | Submarine propeller noise        | School of snapping shrimp            |
| Lie detector | Sweat, anxiety of a lying person | Sweat, anxiety of an (in this case) honest person fearful of an unnatural test |
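For the curious, the SNR is conventionally expressed in decibels; a quick sketch (power values here are arbitrary illustrations, not real RADAR or SONAR figures):

```python
import math

# Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in dB from signal power and noise power (same units)."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(100.0, 1.0))  # strong echo over a quiet background -> 20.0 dB
print(snr_db(2.0, 1.0))    # signal barely above the noise floor -> ~3 dB
```

The lie detector's problem in the table above is exactly a low-SNR problem: the "noise" (an honest person's anxiety) looks physiologically very much like the "signal" (a liar's anxiety).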

So it is imperative that if a detector is to be used, it is fully understood in the context of its usage.

For example, I would assume that lie detectors would be fairly reliable in detecting teenagers who are using drugs. I say this for the following reasons:

1.   Teenagers have probably been using drugs for only a limited amount of time, so the emotional response to lying about it is still strong.

2.   Making a mistake is reasonably ‘acceptable’:

a.   False positive: a blood test will clear that up, or at worst the teenager will go to counseling

b.   False negative: Same result as having not conducted the test

Note: Sure there might be extreme responses to these types of tests, but overall the use is appropriate.

Using the same criteria, I would stipulate that testing whether a long-term FBI agent is an existing spy is a bad use of lie detection:

1.   If the FBI agent is an existing spy, it is more than likely that their false reality has become their reality. Lying is real to them, so they have no emotional response associated with it (it is the least of their "vices")

2.   Making a mistake can lead to disastrous results:

a.   False positive: the agent is "caught" and investigated. Trust is no longer there, and a good person is removed from activity.

b.   False negative: How many American operatives were compromised by Robert Hanssen? Need more be said? Worse, trust in the spy is accentuated and reestablished by the results of the test.

But detection theory is not limited to the military or to lie detectors. As more and more systems are used to make sense of the world around us, to package and repackage information automatically, we see the results of different detectors all around us:

Search engines – What do false positives mean? Well, ask the founders of all the search engines that competed with Google. Relevancy counted, and the results were translated into countless billions of dollars.

Cell phones – Just how much better is digital coding of the transmission? Just compare the modern language of cellular technology ("can you hear me now?") with the past (fading in and out). Sure, you can squeeze more channels in, but analog (like Verizon) is still a better-quality medium for our human detectors.

Security – In the past, security was '1' or '0': either a port was open or not, tapping into a NIC would allow us to step through the traffic, an application was allowed to load or not. Today, we see the first waves of "intelligent systems": systems which try to elevate (detect) the "nasty" stuff by prioritizing the display of activities related to unpatched vulnerabilities among the clutter of less disruptive activity. Similarly, DLP solutions have detectors built into the applications (to distinguish between confidential and other data). Cost of a bad choice? Perhaps 17 FTE…

Auto-defibrillators – yes, those yellow boxes at airports and upscale malls. These detect heart activity and, based on it, can provide defibrillation pulses. For the sake of the users, I hope that automated detector is designed right.

In summary, detection (as opposed to a simple decision) is here to stay, and detector design compromises will mean that certain systems provide better results than others.

Written by assafl

October 14th, 2008 at 6:38 pm