LIE DETECTION PROCEDURES
"I'm not a smart man, but I know what a lie is." (Forrest Gump)
A significant area of forensic expertise is lie detection, and perhaps the best known method of deception detection is the polygraph technique, often inaccurately referred to as the "lie detector test." Other methods exist, however: hypnosis, narcoanalysis ("truth serum"), PSE (psychological stress evaluation), P300 brain wave "fingerprinting," and good, old-fashioned police psychology. This lecture discusses the more advanced techniques, beyond the simple interviewing and interrogation tactics that rely upon monitoring things like eye movement and body language. While lie detection may seem to inhabit the realm of common sense, there are times and reasons for removing the technological blinders and making sure that all which can be done is done.
HISTORY OF THE POLYGRAPH
In 1895, Cesare Lombroso, the founding father of criminology, was the first to experiment with a machine measuring blood pressure and pulse to record the honesty of criminals. He called it a hydrosphygmograph. A similar device was used by Harvard psychologist William Marston during World War I in espionage cases, and it was Marston who brought the technique into American court systems. In 1921, John Larson added the measurement of respiration rate, and by 1939, Leonarde Keeler, one of the founding fathers of forensic science, added skin conductance and an amplifier, thus signaling the birth of the "polygraph" as we know it today.
The polygraph is widely regarded as a sound and reliable technique for detecting deception. The vast majority of studies into the reliability of polygraph testing estimate its accuracy at 90% or higher. Numerous research findings, and works in the field of medicine, have justified the connection between involuntary (sympathetic nervous system) physiological changes and emotional states related to truth-telling or deception. Unfortunately, mostly through historical accident, polygraph exams are not legally admissible unless there is a stipulated agreement prior to trial. There are also questions of validity (whether a test measures what it purports to measure) in the use of the polygraph for noncriminal purposes, such as preemployment screening, drug testing, and so forth. Ironically, polygraphs are more commonly found in these noncriminal, civil law areas. Police departments, for example, make extensive use of them in their personnel policies, and they are also common with sensitive security jobs in government and business. Their use in these contexts ignores the fact that these were not what the tests were designed for (criminological positivism), and instead substitutes the Teddy Roosevelt idea that a public safety employee has nothing to hide anyway (under color of law or presumption of innocence). Privacy rights and protections against self-incrimination are much less guaranteed in civil cases or civil matters.
It must be remembered that this is the area that gave us the Frye test for admissibility (Frye v. United States 1923). It's worth looking at the language of Frye in some detail:
    Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.
The court in Frye decided that the polygraph belonged in the fields of physiology and psychology, but at the time (and to this day still), neither field claims the technique. An interdisciplinary field called "psychophysiology" emerged during the 1970s, but it consists largely of academics with no practical field experience in dealing with criminal suspects. On the whole, their research, while methodologically sound and statistically sophisticated, tends to involve studies of the truthfulness of college students (convenience sampling). The field of psychology tends to treat the polygraph as another instance of psychometric testing, and one of the most fundamental principles of psychometry is that each and every subject should get exactly the same questions in exactly the same way. All of this flies in the face of polygraph examiners who argue, understandably, that in the hands of a "master," with experience in dealing with criminals and flexibility in being able to condition and question suspects differently, the polygraph technique is highly valid and reliable.
Case law since 1923 has been numerous and conflicting, with no definitive Supreme Court ruling. Polygraph tests in the vast majority of jurisdictions follow a stare decisis (let the precedent stand) pattern in which they are not admissible (per se inadmissibility) at any criminal trial unless there has been a prior stipulation by both prosecution and defense agreeing to submit the results of such examination. The logical outcome of this is that both sides should agree on the undisputed expertise of a single examiner, or that the prosecution and defense should each use their own examiner, but the power to choose rests with the defense, a phenomenon referred to as the "friendly examiner" syndrome. Studies have not borne out the truth of this syndrome, but it theoretically refers to the idea that a deceitful suspect who takes a polygraph voluntarily at the request and arrangement of their own attorney will stand a better chance of appearing truthful.
The law is even more complicated than per se inadmissibility. Because of stare decisis and the Fifth Amendment, the state cannot require a criminal suspect to submit to a polygraph, even under some pretrial stipulated agreement. If there is a state-sponsored examination, it must be voluntarily taken, and then if the results are negative, they are generally inadmissible because the test was given under coercive circumstances. The fact of the matter is that only positive test results ever get seen at trial. For this reason, lawmakers in about half the states (most notably North Carolina) have decided to NOT allow polygraph evidence to be admitted in criminal cases at all, for any reason, stipulated agreement or otherwise. A small handful of other states (most notably New Mexico) have gone the opposite direction, permitting polygraph results in criminal cases, pursuant to stipulation and over objection, even when the test results were unfavorable to the defendant.
HOW LIE DETECTION WORKS
Lie detection methods have been used for years by police interrogators. Physiologically, when a suspect lies about their involvement in crime, it's fairly easy to notice a flushed face, throbbing of the carotid artery, dryness of the mouth, and sundry other clues. Psychologically, verbally, and nonverbally, there are other clues and cues. The assumption behind all lie detection methods is that there's a natural interaction between mind and body, and depending upon the individual and their level of involvement, deceptive suspects will utilize certain mental, emotional, and physical defense mechanisms that depend upon the amount of stress they're under or the danger they perceive themselves to be in. Now, that's a big assumption, and the phrase "defense mechanisms" might better be called the "psychological set" to rule out any idea that the technique is psychoanalytically grounded, which it is not. Polygraph exams are believed to offer individual, rather than class, evidence because through the years, developmentally, a person develops set ways of reacting to stressful or threatening situations. During a polygraph, an examiner is always paying attention to these fundamental clues and cues, developing a sense of the suspect's values, beliefs, motives, and attitudes.
The machine part of a polygraph examination is designed to pay attention to the actions of the nervous system, particularly the autonomic nervous system and, within it, the sympathetic branch, which alerts the body to stress or threatening situations. The machine has components that measure the following:
respiration (pneumograph -- pneumatic tubes, assisted by beaded chains, are fastened around the chest and abdomen of the person)
electrodermal skin response (galvanometer -- two electrodes are affixed to two fingers on the same hand, and an imperceptible amount of electricity is run through them)
blood volume and pulse rate (cardiosphygmograph -- a blood pressure cuff, of the type used by physicians, is fastened around the upper arm)
The machine does not simply operate itself. The examiner should be a person of ability, experience, education, intelligence, and integrity who uses the machine in a predetermined manner. There are three (3) phases of the test procedure: (1) a pretest interview; (2) chart recording; and (3) diagnosis. Beforehand, the examiner is provided with all relevant information regarding the case, such as the criminal charges against the person and the statement of facts. They then spend some time alone preparing a pool of test questions that are neither too broad nor too specific. Anything calling for an opinion or belief that can change with time or motivation is ruled out as a possible test question, as is anything vague. The pool of questions should focus on a single incident, the facts, and narrowly defined issues of disputed action, not intention.
During the pretest interview, the examiner will condition the subject by clarifying the purpose of the test, reassuring them about its objectivity, and/or defining terms that will be used. Also, a control question will be developed and selected. A control question is unrelated to any legal issue, but it addresses a related behavior. For example, with a crime of violence, a control question might be "Have you ever lost your temper or done things you regret?" Relevant questions are those that have a direct bearing on the case, and irrelevant questions have no bearing whatsoever but can only be answered truthfully ("Are you sitting in a four-legged chair right now?").
Generally, a series of 9-10 prepared questions are asked, allowing about 10 seconds following an irrelevant question and 15-20 seconds following a relevant or control question. It's also standard to run through all questions a minimum of three times before a diagnosis is attempted.
Diagnosis is made by verifying other clues and cues with the chart. A truthful subject's chart will show emotional attention was paid toward the control questions and deflected away from the relevant questions. A deceptive subject's chart will show emotional attention directed toward relevant questions and away from control questions. The following illustrations might be helpful:
[Chart illustration] The top line is respiration and the second line blood pressure. Questions #4 and #7 were irrelevant, and show the most reaction.
[Chart illustration] Question #4 was irrelevant and question #6 was a control question, the latter showing the most reaction. The relevant questions, #3 and #5, show emotional attention in its most common form, a steady increase or decrease in the baseline.
In cases where the suspect is trying to "beat" the polygraph or confuse the examiner in some way, you need to also look at the third and fourth lines, skin conductance and cardiovascular change. The relevant questions were #4 and #6, which show a response, but there's a lack of response to control questions #5 and #7.
In cases where the suspect is trying to mislead the examiner because they are trying to feign amnesia, mental illness, or other mental block indicative of confusion, the pattern results are as above, with high-ranging plateaus. It is rare to get this kind of pattern from a truthful subject. There are only four known "countermeasures" (ways to "beat" a polygraph test), and these are: controlled breathing; muscle tensing; tongue biting; and mental arithmetic. These techniques have been known for years, are part of training that foreign agents undergo, and anyone who has purchased the publicly-available DVDs on them is likely on a government list in case they ever apply for a security clearance.
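The control-versus-relevant comparison that drives diagnosis can be sketched numerically. The following is a minimal illustration only, not any examiner's actual protocol: the 0-10 reaction scores, the -3 to +3 pair scale, and the cutoff of 6 are hypothetical values loosely modeled on numerical chart scoring, and real scoring weighs the respiration, skin response, and cardiovascular tracings separately.

```python
# Minimal numerical sketch of control-vs-relevant scoring.
# Reaction scores (0-10), the -3..+3 pair scale, and the cutoff of 6
# are hypothetical illustration values, not a field protocol.

def score_pair(control_reaction, relevant_reaction):
    """Score one control/relevant question pair on a -3..+3 scale.

    Positive scores mean the control question drew the larger reaction
    (consistent with truthfulness); negative scores mean the relevant
    question did (consistent with deception).
    """
    diff = control_reaction - relevant_reaction
    return max(-3, min(3, diff))  # clamp into the conventional range

def diagnose(pairs, cutoff=6):
    """Sum pair scores across all chart runs and classify the total."""
    total = sum(score_pair(c, r) for c, r in pairs)
    if total >= cutoff:
        return "no deception indicated"
    if total <= -cutoff:
        return "deception indicated"
    return "inconclusive"

# Three chart runs of three question pairs each (hypothetical scores):
truthful = [(8, 2), (7, 3), (9, 4)] * 3    # bigger control reactions
deceptive = [(2, 8), (3, 7), (4, 9)] * 3   # bigger relevant reactions
```

Here the truthful subject reacts more strongly to control questions across all three chart runs, the deceptive subject more strongly to relevant questions, and uniformly flat reactions would produce an inconclusive result.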
HYPNOSIS AS A METHOD OF LIE DETECTION
Only when an accused person or eyewitness suffers from a clear case of amnesia will the court consider the use of hypnosis. Defense counsel usually calls for it (in the case of suspects), and is under an obligation to have the results or psychiatric testimony corroborated by an independent third party. Hypnotically induced testimony is inadmissible in court, and hypnotically refreshed or recollected memories are highly controversial.
In a 5-4 decision, the Supreme Court in Rock v. Arkansas (1987), ruled that there should be no per se inadmissible approaches to hypnotically refreshed testimony, that these constitute arbitrary restrictions and prohibit a person from being able to testify in their own defense. Most states now follow a case-by-case approach, and controversy is the norm in the scientific literature.
It is far more common to see hypnosis used in the pre-indictment stage of criminal justice, where the police have yet to zero in on a suspect. Eyewitnesses are often involved here. The most popular methods involve past-memory regression and memory enhancement. The first method is usually designed to "unblock" something that is preventing the eyewitness from remembering, and the second method is designed to better "see" some detail, such as a license plate number. The second method is more controversial than the first, and is reminiscent of the psychic technique of "clairvoyance," which ironically was involved back in 1845 when hypnosis was first used in a court of law. Hypnosis, when used to relax or "unblock" the subject's ability to "watch" the event as it happened, has the potential to be of great value to criminal investigation, and is consistent with Dr. Martin Reiser's retrieval theory of hypnosis. The retrieval theory of hypnosis stands in stark opposition, however, to Dr. Martin Orne's construction theory of hypnosis, which states that all sorts of distracting situational factors are stored in memory along with the events themselves.
NARCO-HYPNOSIS AND NARCOANALYSIS (TRUTH SERUM)
Controversy and disagreement reign over the use of various drugs -- scopolamine, sodium amytal, and sodium pentothal -- to reliably produce truthfulness. All such drugs depress the central nervous system and reduce inhibitions. All courts refuse to admit truth serum evidence, the leading case coming from the New Jersey Supreme Court, State v. Pitts (1989), which disallowed the results of a sodium amytal interview, ruling that it is not a valid scientific technique. The problems with drug-induced memory recall illustrate some of the problems with hypnosis in general, and some of those problems are as follows:
hypermnesia or confabulation, where the subject fills in gaps with false material
hypnotic recall, in which something felt or thought at the time of undergoing the drug or hypnotic treatment gets "retroactively" integrated with the original material of memory
memory hardening, where false confidence becomes attached to a bad memory simply because of the extensiveness of the drug or hypnotic procedures carried out
PSYCHOLOGICAL STRESS EVALUATION
PSE, sometimes called voice stress analysis, is based on the use of a certain machine developed in the late 1960s that presumably detects "guilt-revealing" laryngeal microtremors which exist in the voice and are associated with stress and lying. One of the assumptions that PSE makes is in regard to the "startle reaction," which scientists haven't yet agreed upon as being a reflex or an emotion (Ekman & Rosenberg 1998). Most research has produced negative or mixed findings of a relationship between microtremors and deception. Like the polygraph, the key to success is careful preparation of the test questions. A handful of states, like Arkansas and Louisiana, have licensing requirements for PSE evaluators, but it's generally conceded that PSE is not admissible in civil or criminal cases and only useful in investigative settings.
Other methods of PSE involve simple psycholinguistics, spectrographic voice recognition, or the identification of verbal and nonverbal behaviors which signal deception. The following is a short list of those indicators:
language that deflects away from self; increased pitch; overgeneralizations
speech hesitations and pauses; thinking through what to say; no apparent spontaneity
increase in the number of shrugs; leg and foot movements; eye blinking, nervous stroking
hyperventilation, breath holding, sighing, flushed appearance
reduced use of hand gestures; reduced use of visual or sensory images
other unusual nonverbal behavior; other inconsistent verbal behavior
BRAIN WAVE FINGERPRINTING
Neuroscientist Lawrence Farwell, who runs a Brain Wave Institute in Fairfield, Iowa, patented this technique in 1995, and it has attracted the attention of the FBI and CIA as a better way to detect moles. Iowa judges are also fond of admitting the technique, even though Iowa is a state where the polygraph is outlawed. The basic principle is that different regions of the brain light up when people tell the truth or lie, and further, that different regions are activated depending upon the type of lie. Dr. Farwell's research, however, looks at a specific type of electrical brain wave, called P300, which activates when a person sees a familiar object. For example, if a murder suspect is truthfully claiming an alibi, then their P300 wave won't activate when they are shown the murder weapon. The technology is promising in that the research indicates the brain stores visual images. In the P300 test, a subject wears a headband of electrodes and faces a computer screen. In similar tests, a subject wears a helmet of electrodes, and experts try to make interpretations from a record of what areas of the brain "light up" or receive intensive blood flow. The technique doesn't have anything to do with emotions, or whether a person is sweating or not; it simply detects whether certain information is stored in the brain.
Since 1995, brain fingerprinting has been extensively tested by the FBI. For example, FBI agents who would only know certain key words or phrases associated with a crime were tested, and the results had 100% accuracy in distinguishing agents who knew about the crime from those who didn't. Sometime around 2003, brain fingerprinting became admissible in court for use in identifying or exonerating individuals in the U.S. A 2004 case involving the attempted exoneration of a convicted killer in Oklahoma was a critical test case for the technique.
SPECTROGRAPHIC VOICE RECOGNITION
In the 1960s, a scientist named Lawrence Kersta, working with an invention called the sound spectrograph created back in 1944 by Bell Laboratory scientists, claimed that "voiceprints" were a unique way to identify individuals. He trademarked the word "voiceprint", founded his own company (Voiceprint Laboratories), and established a professional association, the International Association of Voice Identification (IAVI) which eventually was absorbed in 1980 by the International Association for Identification (IAI), mainly a group of fingerprint experts. Kersta's brightest followers were Ernest Nash, who helped develop the Michigan State Police crime lab, and Oscar Tosi, a professor who helped develop the Michigan State University Forensic Science program. Together, Kersta, Nash, and Tosi were the leading (and quite possibly only) "voiceprint" experts in the United States. They testified all over the country, and put together training courses and certification programs which are still somewhat sought-after today.
Today, nobody uses the word "voiceprint" anymore because of its erroneous association with "fingerprints." The former is a method of expert interpretation and opinion, while fingerprinting is treated as a matter of absolute certainty and infallibility. To use the word "voiceprint" gives the method more scientific credibility than it deserves. At best, voiceprint identification is like polygraphy (lie detection) and only admissible in 35 states, although it (like polygraphy) makes for a valuable investigative tool to screen potential suspects. Even the phrase "voiceprint identification" may be improper and should probably be abandoned in favor of the broader term, spectrographic voice recognition.
Do NOT for a moment, however, think this is dead technology. Consider, for example, all those software companies trying to perfect speech recognition programs that allow you to talk to computers. Same underlying theory; same practical problems:
Person: It's hard to recognize speech. I want to
Computer: It's hard to wreck a nice beach. I want two chicken.
Consider also the promise and potential of individual voice recognition. If there's one thing police departments have plenty of, it's tape recordings of phone calls. Sure, there's enhanced 911, which displays the address and residency information for a caller, but what about all those anonymous tips, bomb threats, and ransom demands? If the technology were perfected, you could practically eliminate things like terrorism and kidnapping. Furthermore, let's take another criminal justice problem area -- eyewitness identification. It seems hypocritical to allow the "sound of voice" as eyewitness evidence and then exclude scientific methods of voice comparison. Research clearly indicates that "earwitness" evidence is unreliable and can result in wrongful conviction. And (to wear the point out), what about the potential for crowd control? With scientific methods of voice identification, you could find out who those Klansmen are under the hoods or which shouting rioters are looting the grocery store. It's not even a constitutional issue. Speech is protected, but voice isn't. There's nothing prohibiting every single voice in the country from being recorded, indexed, and cataloged in an FBI database.
It all depends, of course, on whether each and every individual voice is different. That is precisely the unproven hypothesis behind the THEORY of spectrographic voice recognition. In order to prove it, you need to have research done by scientists in recognized disciplines of study, like speech, audiology, phonetics, acoustics, physiology, or anatomical kinesiology (exercise science). Unfortunately, no one in any of these disciplines seems to be interested in such research. You can't even find a college course anywhere, at any level, graduate or undergraduate, dealing with the topic. There are also no journals, and very few books, on the subject. It might be fair to say that the scientific community, on the whole, hasn't exactly warmed up to the idea of spectrographic voice recognition.
HOW IT WORKS
The human voice is incapable of producing one pitch at a time. Instead, it produces a simultaneous series of fundamentals and overtones. Some overtones are random and others are multiples of the fundamental, called harmonics. Of all the characteristics of voice, two of the most important are frequency and intensity. Frequency is the rate at which air particles vibrate, measured in cycles per second (cps, or hertz). Humans can only produce and hear frequencies in roughly the 60-16,000 cps range. Intensity is the amount of energy (loudness) in a sound wave or pulse. Variation in intensity does not affect frequency, but no two sound waves (even those produced by the same individual) will have exactly the same frequencies and intensities. That is, a sound, once made (even by the same individual), can never be exactly replicated in all its characteristics.
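The relationship between a fundamental and its harmonics can be demonstrated with a short numerical sketch. The sample rate, the 220 Hz fundamental, and the harmonic amplitudes below are arbitrary illustration values, not anything specific to forensic practice:

```python
import numpy as np

# Sketch: a voiced sound modeled as a fundamental plus harmonics
# (integer multiples of the fundamental). The 220 Hz fundamental,
# sample rate, and amplitudes are arbitrary illustration values.

fs = 16000                       # sample rate in samples per second
t = np.arange(0, 0.5, 1 / fs)    # half a second of signal
f0 = 220.0                       # fundamental frequency (Hz)

# Fundamental plus harmonics at 2x and 3x, with decreasing intensity.
signal = (1.00 * np.sin(2 * np.pi * f0 * t)
          + 0.50 * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# The spectrum recovers frequency (bin position) and intensity (magnitude).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The three strongest components sit at the fundamental and its harmonics.
peaks = sorted(round(f) for f in freqs[np.argsort(spectrum)[-3:]])
```

The recovered peaks fall at the fundamental (220 Hz) and its harmonics (440 and 660 Hz), with magnitudes in the same decreasing order as the intensities that produced them.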
Uniqueness in voice is a product of both physiology and learning. With physiology, the two most important things are resonators (nasal, oral, and pharyngeal passages) and articulators (lips, teeth, tongue, soft palate, and jaw muscles). With learning, the process is mostly imitation and trial and error, but the brain's speech center is actually sending signals to various organs. There's no such thing as spontaneous speech; it's all controlled by the brain. A person may try to disguise their voice, or even learn a foreign language, but the way their brain controls speech habit and especially the way their resonators and articulators are shaped and used cannot be changed. This makes each and every individual voice unique. At the very least, intraspeaker variability is less than interspeaker variability.
It should be noted, however, that some of this has not been empirically established. That is to say, there haven't been exhaustive clinical trials done on all the different ways to disguise, alter, muffle, or mimic the voice. There's also no research on the impact of age, dentures, tooth extraction, respiratory illness, emotional state, and the like. There's some scientific support about foreign language making no difference, however.
It used to be thought (according to Kersta) that there were 10 so-called "cue words" (the, to, and, me, on, is, you, I, it, and a) that lent themselves well to voice comparison, and later that similar "sound samples" had to be obtained (requiring wired undercover operatives to get suspects to say silly things), but today, it's generally agreed that an expert MUST have samples similar in the following respects:
precisely the same language, word-for-word must be compared
precisely the same conditions in terms of emotional state and background noise
precisely the same recording equipment must be used
if none of the above conditions are met, longer sound samples are needed
The sound samples are played (unknown voice first) in a continuous loop through a sound spectrograph, either one of the fairly old analog machines resembling an IBM mainframe or one of the newer digital signal analysis workstations (there are some worries about "enhancements" with the digital PCs). It "reads" the frequencies and intensities, and produces printouts that look a lot like recordings of earthquake tremors. These printouts are sliced into 2.5 second segments called spectrograms, and they portray three dimensions: time (horizontal axis), frequency (vertical axis), and intensity (degree of darkness in the ink).
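What the spectrograph does mechanically, digital workstations do with a short-time Fourier transform: slice the recording into brief windows and measure the intensity at each frequency within each window. The following is a rough sketch under assumed parameters (an 8 kHz sample rate, 256-sample windows, and synthetic test tones), not the output format of any actual forensic workstation:

```python
import numpy as np

# Sketch of spectrogram computation via a short-time FFT.
# Sample rate, window length, and the test tones are illustrative assumptions.

def spectrogram(signal, fs, window=256):
    """Return (times, freqs, magnitudes) from a short-time FFT."""
    hops = len(signal) // window
    frames = signal[:hops * window].reshape(hops, window)
    frames = frames * np.hanning(window)           # taper each window
    mags = np.abs(np.fft.rfft(frames, axis=1))     # intensity per frequency
    freqs = np.fft.rfftfreq(window, 1 / fs)        # frequency axis
    times = np.arange(hops) * window / fs          # time axis
    return times, freqs, mags

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# One second that switches from a 500 Hz tone to a 1500 Hz tone halfway.
tone = np.where(t < 0.5, np.sin(2 * np.pi * 500 * t),
                         np.sin(2 * np.pi * 1500 * t))

times, freqs, mags = spectrogram(tone, fs)
first_peak = freqs[np.argmax(mags[0])]    # dominant frequency, first window
last_peak = freqs[np.argmax(mags[-1])]    # dominant frequency, last window
```

The magnitude array is the digital analogue of the inked printout: rows are time slices, columns are frequencies, and values are intensity, so the dominant frequency can be tracked from window to window.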
The expert then engages in a two-step process of comparison: aural (listening) and visual (looking over the spectrograms). In the aural stage, the analyst is making note of pronunciation similarities or differences, such as if the word "the" is said with a short "a" or long "e" sound. They will also index and splice certain start and stop points in each sample, to create a new playback loop where one sample ends and another begins (this won't usually be admissible in court). Finally, the expert scrutinizes for speech habits, psycholinguistic features, dialect, inflection, syllable grouping, and breath patterns. Like profilers, they are trying to put themselves in the mind of the suspect.
In the visual stage, the 2.5 second spectrograms representing the same sound are visually compared. The analyst studies bandwidth, mean frequencies, trajectories, striations, stops, plosives, and fricatives. Differences as well as similarities are looked at. In fact, accounting for the differences plays a part in one of five standard conclusions the expert arrives at:
positive identification -- at least 20 similarities and all differences accounted for
probable identification -- less than 20 similarities and no unexplained differences
positive elimination -- 20 differences or more exist and cannot be explained away
probable elimination -- when recorded text is limited or of low quality
no decision -- when insufficient information or too few common speech sounds
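The five standard conclusions amount to a simple decision rule over the visual comparison counts. The sketch below encodes the thresholds as stated above; the function name and the sample-quality flag are hypothetical conveniences for illustration, not terms of art in the field:

```python
# Sketch of the five standard conclusions as a decision rule.
# The thresholds (20 similarities, 20 unexplained differences) follow
# the text; the function name and quality flag are hypothetical.

def standard_conclusion(similarities, unexplained_differences,
                        sample_quality="adequate"):
    """Map comparison counts to one of the five standard conclusions."""
    if unexplained_differences >= 20:
        # 20 or more differences that cannot be explained away.
        return "positive elimination"
    if sample_quality == "limited":
        # Recorded text is limited or of low quality.
        return "probable elimination"
    if sample_quality == "insufficient":
        # Too little information or too few common speech sounds.
        return "no decision"
    if unexplained_differences == 0 and similarities >= 20:
        return "positive identification"
    if unexplained_differences == 0:
        # Fewer than 20 similarities, but no unexplained differences.
        return "probable identification"
    return "no decision"
```

Note how the rule is asymmetric: identification requires that every difference be accounted for, while elimination turns on the sheer count of unexplained differences.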
The courts have not been all that accepting of the formulation of these five standard conclusions. The expert can usually expect on cross, redirect, or recross to be asked for a more quantitative conclusion. Particularly with "probable identification," and since some defendant's life may be at stake, the expert will be asked to express in percentage terms how confident they are in a "match." Appeals on the basis of "this is the way our profession does things" probably aren't going to help much when the judge orders them to "answer the question." Then the whole issue of error crops up, and this is a field with an interesting history on that subject.
ERROR AND ADMISSIBILITY
In short, there's an error rate, ranging from 6% to 29%. Back in 1962, Kersta claimed 99% accuracy, but Tosi (who was denigrated in one case for just being a "college professor") would claim error rates far in excess of 1%. Nash and Tosi even bumped heads as opposing experts (unbeknownst to either one) in a California case with positive identification versus probable elimination opinions. The true error rates may never be known, and for that reason, the technique is unlikely to qualify under Daubert, although that depends upon how you interpret proof of "reliability" as spelled out in Daubert, which actually permits novel scientific evidence of known reliability and replicability.
Spectrographic voice recognition hasn't fared any better under Frye rules, either, with its requirement of "general acceptance" in the scientific community. Both the listening and looking stages of the method are somewhat subjective, or at least qualitative in how they're combined. The standard decision reached by one expert may be at odds with the standard decision reached by another. Moreover, the process of becoming an expert is far from being calibrated itself. By laying down guidelines for standard opinions, the profession appears to be working with a Coppolino standard, but the popularity of the "voiceprint" legacy and/or the seriousness of the case lead courts to require a points-of-comparison or probability-estimate approach. Judicial notice and case precedent are mixed, an example of bean-counting.
Several of the major states (like Indiana, Arizona, New York, California, Michigan, Massachusetts, New Jersey, and Florida) accept the methodology. Others (like Pennsylvania and Texas) want to, but are waiting for "general acceptance." Still others (like Maryland and Louisiana) have always rejected Frye standards anyway, and are perhaps typical in using general Rules of Evidence to get this type of scientific expertise on the stand as long as it's relevant and provides some material assistance to the jury. Intelligence agencies, like the CIA and NSA, also regularly use voice recognition in their work, which sometimes involves verifying whether the voice on a tape recording is the terrorist they think it is or not.
Alaska v. Coon (1999)
American Polygraph Association
AntiPolygraph.org's Lie Behind the Lie Detector
Can You Beat a Polygraph?
Farwell Brain Fingerprinting
International Association for Identification
Journal of Credibility Assessment and Witness Psychology
Polygraph Law and Daubert
The Polygraph Place
Scientific Validity of Polygraph Testing
Speech as a Biometric
Steven Cain's Voice Expert Homepage
Voice Identification: The Aural/Spectrographic Method
Burke, T. (1999). "Brain fingerprinting: Latest tool for law enforcement." Law & order 47(6): 28-31.
Comment, C. (1972). "The evidentiary value of spectrographic voice identification" Journal of criminal law, criminology & police science 63: 343.
Ekman, P. & Rosenberg, E. (Eds.) (1998). What the face reveals. NY: Oxford Univ. Press.
Gray, C. & G. Kopp. (1944). Voiceprint identification. Bell Telephone Report, Bell Laboratories.
Hollien, H., L. Geison & J. Hicks. (1987). "Voice stress evaluators and lie detection" Journal of forensic sciences 32(2):405-18.
Hollien, H. (1990). The acoustics of crime: The new science of forensic phonetics. NY: Springer.
Kersta, L. (1962). "Voiceprint identification." Nature 196: 1253.
Koenig, B. (1986). "Spectrographic voice identification" Journal of acoustical society of America 79: 2088.
MacDonald, J. (1955). "Truth serum" Journal of criminal law & criminology 46:259-69.
Matte, J. (1996). Forensic psychophysiology using the polygraph. NY: JAM Publications.
Moenssens, A., J. Starrs, C. Henderson & F. Inbau. (1995). Scientific evidence in civil and criminal cases. Westbury, NY: Foundation Press.
Moore, M., Petrie, C. & Braga, A. (Eds.) (2003). The polygraph and lie detection. NY: National Academies Press.
Nachshon, I. et al. (1985). "Validity of the psychological stress evaluation" Journal of police science and administration 13:275-35.
Nardini, W. (1987). "The polygraph technique: An overview" Journal of police science and administration 15:239-49.
National Academy of Science. (1979). On the theory and practice of voice identification. Washington D.C.
Orlansky, J. (1962). An assessment of lie detection capability. Washington D.C.: Institute for Defense Analysis.
Orne, M. (1984). "Hypnotically induced testimony" in E. Loftus (ed.) Eyewitness testimony. NY: Free.
Rabon, D. & Chapman, T. (2009). Interviewing and interrogation, 2e. Durham: Carolina Academic Press.
Reid, J. & F. Inbau. (1977). Truth and deception: The polygraph technique. Baltimore: Williams & Wilkins.
Tanner, D. & Tanner, M. (2004). Forensic aspects of speech patterns. NY: Lawyers and Judges Publishing Co.
Tosi, O. (1981). "Methods of voice identification for law enforcement agencies" Identification news, April, 6.
Vrij, A. (2000). Detecting lies and deceit. NY: Wiley.
Weinrich, H. (2006). The linguistics of lying. Seattle: Univ. of Washington Press.
Last updated: Nov. 14, 2013
Not an official webpage of APSU, copyright restrictions apply, see Megalinks in Criminal Justice
O'Connor, T. (2013). "Lie Detection Techniques," MegaLinks in Criminal Justice. Retrieved from http://www.drtomoconnor.com/3220/3220lect02b.htm.