Sources of Computational Error in Probabilistic Genotyping Software Used for DNA Mixture Interpretation

Heather Miller Coyle, University of New Haven

© 2014 IRJCS. All rights reserved.

Abstract

Use of DNA for human identification is considered a gold standard for criminal and civil casework. Although DNA typing is a powerful and convincing technology, there is an inherent error rate associated with DNA mixture analysis methods, whether computed manually or with software. The use of probabilistic genotyping software for the analysis of complex DNA mixtures is gaining momentum in certain regions of the United States, yet little information exists in the published literature on sources of error in establishing true contributors to DNA mixtures, as compared with false positive matches from non-contributor reference DNA databases. On review of a forensic software program called the Forensic Statistical Tool (FST), several factors contributing to the high error rate were identified: (a) the percentages or peak height ratios used to establish major and minor components of a mixture, (b) the choice of analytical thresholds, and (c) the empirically derived allele drop-in rates (contamination events) and drop-out rates. All potential pairwise combinations of alleles must be considered at each locus, and under certain computational parameters it is possible to artificially match an individual who is not the source of the evidence at some estimated probability, which becomes the error rate for the method. Based on a brief survey of different computational programs, error rates for two-person DNA mixtures range from 0.005% (e.g., TrueAllele) to 0.02% (e.g., FST). When the number of contributors increases to three, the FST error rate rises correspondingly (to 0.08%), because the number of permutations, or possible combinations of allele arrangements, is greater.
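
The combinatorial growth cited above can be made concrete with a short calculation. The following is a minimal Python sketch, not FST's actual algorithm: it simply counts the unordered genotype combinations a mixture-interpretation program must weigh at a single locus as the number of contributors grows. The locus size of eight observed alleles is a hypothetical value chosen for illustration; real programs additionally weight each combination by drop-in and drop-out probabilities rather than treating all combinations as equally likely.

```python
from math import comb

def genotypes_per_locus(n_alleles: int) -> int:
    """Unordered genotypes (pairs of alleles, repeats allowed) at one locus: n(n+1)/2."""
    return comb(n_alleles + 1, 2)

def contributor_combinations(n_alleles: int, n_contributors: int) -> int:
    """Multisets of contributor genotypes to consider at one locus.

    Counts combinations with repetition: C(g + k - 1, k), where g is the
    number of possible genotypes and k the number of contributors.
    Illustrative count only, not the scoring scheme of FST or TrueAllele.
    """
    g = genotypes_per_locus(n_alleles)
    return comb(g + n_contributors - 1, n_contributors)

if __name__ == "__main__":
    # Hypothetical locus with 8 observed alleles (36 possible genotypes)
    for k in (2, 3, 4):
        print(f"{k} contributors: {contributor_combinations(8, k)} combinations")
```

For this hypothetical locus, the count rises from 666 combinations for two contributors to 8,436 for three, which parallels the abstract's observation that the error rate increases with the number of contributors.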