Suppose for a moment you’re a grocer and you’ve ordered a truckload of apples. You open the crates and find, mixed in with different kinds of apples, a bushel of potatoes, a bushel of pineapples, and a couple of bushels of limes. You call the farmer and ask what happened. But now suppose you couldn’t examine the delivery visually. All you have is a spreadsheet recording the dimensions and weight of each individual item. How could you tell something was wrong with the delivery?
In class actions, the trier of fact and the attorneys face a similar situation. The proposed class is too numerous to scrutinize each member’s personal situation; that is why it’s a class action. The facts in evidence often boil down to various records of numerical data. How can the court decide, on that basis, whether some of these putative class members are fundamentally different from the others?
An old adage applies too often: “People use statistics the way a drunk uses a lamp post: for support instead of illumination.” I’ll argue there’s a better way.
Simple descriptive statistics are compiled by both sides: the equivalent of summaries of the size and weight of the delivered fruit, usually the average (the mean) and the overall range, i.e., the maximum and minimum. One side might argue that the average is all the court needs to be concerned with. The other side might argue that the measurements are all over the place.
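As a minimal sketch of these first-pass summaries, here is how the two sides’ talking points reduce to a few lines of Python. All of the weights are hypothetical stand-ins for the delivery spreadsheet:

```python
from statistics import mean

# Hypothetical per-item weights (grams) recorded in the delivery spreadsheet.
weights = [180, 195, 170, 200, 185, 950, 60, 65, 175, 190]

print(f"average: {mean(weights):.1f} g")              # one side's summary
print(f"range: {min(weights)} g to {max(weights)} g")  # the other side's
```

An average near the weight of an apple can coexist with a range wide enough to hide a pineapple, which is precisely why neither summary settles the question on its own.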
The courts may look to experts to make sense of this. Framed properly, the issue is fairly straightforward:
Given the pattern in the data, are these all apples, even if of different varieties, or are some fundamentally different?
Statistics can help answer this question. (Statistics is also used in other ways, for example, in discrimination cases and in shareholder lawsuits, but that’s a subject for a different article.)
The issue is not simply whether the proposed class is diverse. The issue is whether data with this pattern were generated by the same kind of underlying, real-world process, or whether fundamentally different “populations” are mixed together in the data. In legal parlance, is there a preponderance of facts in common?
As a first step, the diversity of sizes can be examined. A pineapple is much bigger than an apple, and a lime is much smaller. Potatoes could be larger, smaller, or about the same size. Using a visual aid like a frequency diagram (histogram), the expert and non-expert alike can look for outliers in the data. But what do the outliers signify? Errors in record-keeping? A handful of unusual situations? Is either of these enough to reject the class? More arguments ensue.
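One common, concrete rule for flagging outliers is the interquartile-range (IQR) fence: mark any value more than 1.5 times the IQR beyond the quartiles. A short Python sketch, again on hypothetical weights:

```python
from statistics import quantiles

# Hypothetical per-item weights (grams) from the delivery spreadsheet.
weights = [180, 195, 170, 200, 185, 950, 60, 65, 175, 190]

q1, _, q3 = quantiles(weights, n=4)  # lower and upper quartiles
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [w for w in weights if w < low_fence or w > high_fence]
print(outliers)  # the pineapple-sized and lime-sized entries stand out
```

The rule is mechanical, which is its virtue and its limit: it identifies which records are unusual, but not why, so the arguments over record-keeping errors versus genuinely different members remain.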
The next step in answering this question is to compute various “moments about the mean.” Many litigators are experienced in working with (or in opposition to) experts who compute standard deviations and variances. A neglected tool in this type of analysis is the fourth moment about the mean, standardized by the squared variance, which goes by the somewhat exotic name of “kurtosis.” Kurtosis is a unit-free number, which is a useful feature, and it sometimes flags, in layman’s terms, whether there is something odd in the data. Kurtosis has various uses in statistics, some rather technical, but in class certification analyses, excess kurtosis might be a signifier that the data mix together different populations. In our example, kurtosis would tell the grocer: Wait a minute! Different kinds of produce were mixed together.
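A hedged sketch of the computation, using the standard formula for excess kurtosis (the fourth central moment divided by the squared variance, minus 3, so a normal distribution scores near zero). The weights are hypothetical:

```python
from statistics import mean

def excess_kurtosis(xs):
    """Fourth central moment standardized by the squared variance, minus 3."""
    m = mean(xs)
    n = len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n  # variance (population form)
    m4 = sum((x - m) ** 4 for x in xs) / n  # fourth moment about the mean
    return m4 / m2 ** 2 - 3

apples = [175, 180, 185, 178, 182, 176, 184, 179, 181, 177]  # hypothetical apple weights (g)
mixed = apples + [950, 60]  # a pineapple and a lime slip into the crates

print(excess_kurtosis(apples))  # near or below zero: nothing odd
print(excess_kurtosis(mixed))   # large and positive: a warning flag
```

Because the formula divides by the squared variance, the grams cancel out, which is what makes kurtosis unit-free and comparable across very different kinds of records.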
Other tools, such as formal statistical hypothesis testing, may be used in the scientific analysis of class as well. It is even possible to test whether named plaintiffs are statistically representative of the proposed class as a whole. None of these tools is a substitute for the trier of fact, and the tools will not flag every situation: a shipment of oranges, apples, and potatoes could involve a distribution of weights that gives rise to no warning flags. And of course, all three are edible fruits or vegetables. But where statistical methods do generate warning flags, the courts may well wish to drill down into the underlying causes.
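One simple example of such a hypothesis test is a permutation test: repeatedly relabel the pooled data at random and ask how often the gap between the two groups’ averages is at least as large as the gap actually observed. The sketch below uses hypothetical weights for the named plaintiffs and the rest of the class; a real analysis would choose the test statistic to fit the claims at issue:

```python
import random
from statistics import mean

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    """Approximate two-sided permutation test on the difference in means.

    Returns the fraction of random relabelings whose mean difference
    is at least as large as the observed one (an approximate p-value).
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel, then recompute the gap
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical weights: named plaintiffs vs. the rest of the class.
plaintiffs = [176, 181, 179, 184, 178]
rest_of_class = [180, 175, 183, 177, 182, 179, 185, 174]
print(permutation_pvalue(plaintiffs, rest_of_class))  # approximate p-value
```

A large p-value means a gap that size arises routinely by chance, so the test raises no flag about representativeness; a small one is the statistical warning flag the court may wish to investigate.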