The answer is: maybe, but only when it comes to the really obvious stuff.
In this month’s issue of Law and Human Behavior there’s a new paper reporting on a study designed to test jurors’ ability to detect and deal with bad science. It’s titled "I Spy with My Little Eye: Jurors’ Detection of Internal Validity Threats in Expert Evidence". The researchers hypothesized that mock jurors could spot glaring flaws but not subtle though equally fatal ones. They also hypothesized that jurors, when unable to judge the soundness of a scientific paper, would use its publication status as a sort of seal of approval for its validity. Their results confirmed the first hypothesis but only partially supported the second, and along the way seemed to confirm the view that the less people understand about science and the scientific method, the less impressed they are by publication status. Indeed, unpublished science may well be viewed as "cutting edge" and so, like soap powder, "New! And Improved!"
It’s a well done paper, infinitely better than so much of the Empirical Legal Silliness out there. It also provides a measure of hope about jurors’ ability to deal with scientific issues. For example, these mock jurors were able to identify and discredit a study lacking a control group. Using data showing, for example, that 18% of type II diabetes patients taking drug X had heart attacks or strokes over the following decade to prove that a particular plaintiff’s heart attack was caused by drug X is of course just a version of the post hoc ergo propter hoc fallacy, unless there’s some similar group of type II diabetics not taking drug X who didn’t suffer such a high rate of heart disease and stroke.
On the other hand, more subtle but equally invalidating flaws like confounders, experimenter bias and reverse causation went undetected. These observations led the authors to conclude that "[o]ur results indicate that although jurors may be capable of identifying a missing control group, they struggle with more complex internal validity threats such as a confound and experimenter bias. As such, the role of traditional legal safeguards against junk science in court such as cross-examination, opposing expert testimony, and judicial instructions become increasingly important." These findings and others like them underscore the continuing need for judges to act as gatekeepers. Such objective findings also continue to undercut the fact-free reasoning behind Comment c, "Toxic substances and disease," to Section 28, "Burden of Proof," of the Restatement (Third) of Torts and its effort to loosen the standards for admitting expert testimony in toxic tort cases.
But how are our judges doing? Are they starting, at last, to "get" science? Seventeen years ago in Daubert v. Merrell Dow, Chief Justice Rehnquist wrote, concurring in part and dissenting in part, "I defer to no one in my confidence in federal judges, but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its ‘falsifiability,’ and I suspect some of them will be, too." That a Justice of the U.S. Supreme Court could not understand that the demarcation between science and pseudo-science had something to do with whether the theory being advanced could be tested was shocking in 1993. Surely by now judges have grasped a concept first introduced in middle school. Alas, it is not so. One of the references in the paper above is to a study finding that only 5% of state court judges "demonstrated a clear understanding of falsifiability". Worse yet, 80% of those same judges were confident they were up to the task of gatekeeping.
What does it all mean? I think it means that the battle against junk science is far from over, but that lay people are finally becoming a little more skeptical of scientific claims and are at last learning to distinguish between the junk and the science, at least on a rudimentary level.