An Analytical Review of Two Decades of Research Related to PC-Crash Simulation Software - Part I

I'm going to put this article out in bite-size chunks. It'll probably end up being 6 or 7 parts. As always, I welcome your feedback!

Introduction

PC-Crash is a vehicular accident simulation software that is widely used in the accident reconstruction community. Later parts of this article will review the prior literature that has addressed the capabilities of PC-Crash along with its accuracy and reliability for various applications (planar collisions, rollovers, and human motion). I actively use PC-Crash software in my accident reconstruction practice. However, the intent of this study is not to promote PC-Crash over and above the use of other simulation software packages, since other widely used and broadly tested simulation tools are also available and in use by accident reconstructionists.

In this first part, I will begin by defining several terms related to simulation validation that lay the groundwork for the later parts. The definitions and concepts presented here are drawn from articles by Kleijnen [1995], Balci [1997], Robinson [1997], Carson [2002], and Sargent [2003].

Conceptual Model Validation: Like all simulation programs and reconstruction models, PC-Crash uses conceptual models that attempt to mimic real-world systems. Examples would be the tire and suspension models in PC-Crash. The validity of these conceptual models can be evaluated and demonstrated independent of their implementation in PC-Crash, a process that can be termed conceptual model validation. An example would be to compare the PC-Crash tire model to real-world tire data to determine the degree to which the model accurately mimics the behavior of real tires. One important idea to keep in mind here is that the degree to which the model needs to mimic the real-world system depends on the application at hand.
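
The comparison described above can be sketched in a few lines of code. Everything here is hypothetical and for illustration only: the saturating-line "tire model" is a stand-in for whatever conceptual model is under scrutiny (it is not the PC-Crash tire model), and the measured data points are invented.

```python
import math

def tire_model(slip, stiffness=15.0, peak=1.0):
    """Toy conceptual model: normalized tire force rises linearly with
    slip and saturates at a peak value. A stand-in, not the PC-Crash model."""
    return min(stiffness * slip, peak)

# Hypothetical measured tire data: (slip ratio, normalized force)
measured = [(0.02, 0.29), (0.05, 0.71), (0.10, 0.95), (0.20, 0.98)]

# Conceptual model validation: quantify how closely the model tracks the data.
errors = [tire_model(s) - f for s, f in measured]
rms_error = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"RMS error in normalized force: {rms_error:.3f}")
```

Whether an RMS error of this size is acceptable is exactly the point made above: it depends on the application at hand.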

Calibration: Conceptual model validation can also relate to model calibration, where the validity of the model is assumed to have been demonstrated and its use is a matter of entering physically realistic inputs to obtain correct results. An example of this would be to use tire data and curves from Brach’s tire modeling publications [2000, 2005, 2008, 2009] to determine reasonable inputs for the PC-Crash TM-Easy tire model. Again, the issue of using reasonable inputs and calibrating the model to generate realistic outputs is not unique to PC-Crash; it applies to the use of any simulation software or analytical model by accident reconstructionists.
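
Calibration can likewise be illustrated with a minimal sketch: given a model form that is assumed valid, search for the parameter value that best reproduces reference data. The data points and parameter range below are hypothetical, and the brute-force search stands in for whatever fitting procedure an analyst would actually use.

```python
# Hypothetical reference tire data in the linear region: (slip, normalized force)
measured = [(0.02, 0.33), (0.04, 0.62), (0.06, 0.88)]

def model(slip, stiffness):
    return stiffness * slip  # assumed-valid model form; only stiffness is free

def sse(stiffness):
    """Sum of squared errors between model predictions and reference data."""
    return sum((model(s, stiffness) - f) ** 2 for s, f in measured)

# Brute-force search over candidate stiffness values from 10.0 to 20.0.
calibrated = min((k / 10 for k in range(100, 201)), key=sse)
print(f"calibrated stiffness: {calibrated}")
```

The calibrated value would then serve as a physically grounded input to the simulation, rather than a number chosen by intuition.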

Verification: Each of the conceptual models used by PC-Crash also has an implementation in the computer code underlying the software. The process of ensuring that these conceptual models are correctly implemented in the code is referred to as verification. Verification aims to eliminate programming errors. Most validation studies of PC-Crash and other simulation software packages assume the software developers have adequately addressed this task.

Operational Validation: Operational validation involves ensuring that the implemented conceptual models, when they are all combined in the software, produce a sufficient level of accuracy when compared to the real-world process or system the model attempts to replicate. Carson [2002] noted that “sufficient accuracy means that the model can be used as a substitute for the real system for the purposes of experimentation and analysis…” Robinson [1997] notes that “a key concept is the idea of sufficient accuracy. No model is ever 100% accurate…the aim is to ensure that the model is sufficiently accurate…this accuracy is with reference to the purpose for which the model is to be used.” Here again, sufficient accuracy would be defined relative to the purpose for which the model will be utilized. Sufficient accuracy can also be defined in relationship to other available models; the question in this regard would be: “Does PC-Crash produce accuracy levels comparable to other reconstruction models and techniques?”
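
The "sufficient accuracy" idea can be made concrete with a small sketch: compare simulated outputs to crash-test measurements and apply a tolerance chosen for the purpose at hand. All of the numbers below, including the 10% tolerance, are hypothetical.

```python
# Hypothetical crash-test measurements vs. simulation outputs.
measured = {"delta_v_kph": 28.0, "rest_heading_deg": 142.0}
simulated = {"delta_v_kph": 26.5, "rest_heading_deg": 150.0}
tolerance = 0.10  # "sufficient accuracy" threshold: within 10% of measured

def sufficiently_accurate(meas, sim, tol):
    """True if every simulated quantity falls within tol of its measurement."""
    return all(abs(sim[k] - meas[k]) <= tol * abs(meas[k]) for k in meas)

print(sufficiently_accurate(measured, simulated, tolerance))
```

Tightening the tolerance for a more demanding purpose can flip the verdict, which is precisely Robinson's point that accuracy is judged with reference to the model's intended use.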

In relationship to operational validation, another issue that should be acknowledged is the degree to which user skill affects the simulation results. As the PC-Crash website at one time stated: “PC-Crash is technical software for serious users. It’s powerful and gives you a lot of freedom, so that you can simulate unique events. It therefore necessarily leaves you the option to make a mess. Don’t let yourself do this.” This hints at the simple fact that some analysts are more skilled than others. No matter how valid the underlying models in PC-Crash are, the quality of the simulation produced by any particular analyst is not guaranteed. This is true not only of analysis conducted within PC-Crash, but of any simulation software or reconstruction technique. This is an important issue for validation because not all studies that explore the validity of PC-Crash have isolated the validity of the physical models in PC-Crash from the skill of the user applying those models. These studies are still useful, but it should be recognized that the underlying models might actually be stronger and more robust than what any particular user is able to demonstrate.

Day [1989] raised this same issue, noting that “misuse [of computer programs for accident reconstruction] is due to the lack of a thorough understanding of how the programs work…Just as the level of skill varies among investigators, the level of understanding how the programs work also varies. When properly used, these computer programs are an invaluable accident investigation tool. When misused, these programs can produce erroneous results – and a misconception of what actually occurred during the accident.”

Wach [2012] also raises this issue, noting that “it should be emphasized that PC-Crash is only a tool which gives correct results in so far as the data are correct and mathematical models for the investigated physical phenomena are relevant, therefore the final responsibility for the conclusions derived from calculations rests with the user…to use the program correctly knowledge of the dynamics of vehicles and collisions is indispensable…”

An example of demonstrating the operational validity of PC-Crash would be to compare simulations of crash tests to the measured data from the crash tests. One issue that arises when assessing the operational validity of PC-Crash is the validity of the data input into the software. One way to pose the question of operational validity would be, “If we know that all of the inputs are valid, then how well do the models within PC-Crash mimic the dynamics of a real-world crash?” This question can be problematic because some parameters, such as the coefficient of restitution and the intervehicular friction, are not directly measurable from a crash test, and calculation of these parameters can contain considerable uncertainty [Rose, 2007]. Some validation studies also ask the evaluator to use values for input parameters that they would typically use in their accident reconstruction practice, rather than providing the actual known input values. This is one way of setting up a validation study and is likely unavoidable to some extent. However, if a study is set up this way, it should simply be recognized that user skill is entering into the results to some degree. Another way to set up the study would be to provide the analyst with all the known data from the crash test in order to decrease (though certainly not eliminate) the degree to which the user’s knowledge and skill enter into the results.
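
To illustrate why the coefficient of restitution is a derived quantity rather than a direct measurement, the sketch below computes it from pre- and post-impact velocities for a simple central, collinear impact. The velocities here are hypothetical; in a real crash test they would themselves be obtained by integrating accelerometer data, which is where the uncertainty Rose [2007] discusses enters.

```python
def restitution(v1_pre, v2_pre, v1_post, v2_post):
    """Coefficient of restitution for a central, collinear impact:
    relative separation speed divided by relative approach speed."""
    approach = v1_pre - v2_pre
    separation = v2_post - v1_post
    return separation / approach

# Hypothetical values (m/s): striking vehicle at 15.0, struck vehicle at rest.
e = restitution(v1_pre=15.0, v2_pre=0.0, v1_post=6.0, v2_post=7.5)
print(f"e = {e:.2f}")
```

Small errors in any of the four velocities propagate directly into e, which is one reason such parameters carry considerable uncertainty even in instrumented tests.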

Credibility: Another issue related to validation is a simulation software’s credibility. In a validation context, the term credibility refers to the confidence of the community of users in the simulation software. Wide use of a simulation software package like PC-Crash can be an indication that the relevant community accepts it and judges it valid. An indication of this would be the presence of studies in the literature that assume the validity of the simulation software.

References

Balci, Osman, “Verification, Validation and Accreditation of Simulation Models,” Proceedings of the 1997 Winter Simulation Conference.

Brach, Matthew R., Brach, Raymond M., “Modeling Combined Braking and Steering Tire Forces,” SAE Technical Paper 2000-01-0357, 2000, doi:10.4271/2000-01-0357.

Brach, Raymond M., R. Matthew Brach, Vehicle Accident Analysis and Reconstruction Methods, SAE International, ISBN 0768007763, 2005.

Brach, Raymond M., R. Matthew Brach, “Tire Models used in Accident Reconstruction Vehicle Motion Simulation,” presented at the 17th Europäische Vereinigung für Unfallforschung und Unfallanalyse (EVU) Conference, Nice, France, 2008.

Brach, Raymond M., Brach, R. Matthew, “Tire Models for Vehicle Dynamic Simulation and Accident Reconstruction,” SAE Technical Paper 2009-01-0102, 2009, doi:10.4271/2009-01-0102.

Carson II, John S., “Model Verification and Validation,” Proceedings of the 2002 Winter Simulation Conference.

Kleijnen, Jack P.C., “Theory and Methodology: Verification and Validation of Simulation Models,” European Journal of Operational Research 82 (1995) 145-162.

Robinson, Stewart, “Simulation Model Verification and Validation: Increasing the Users’ Confidence,” Proceedings of the 1997 Winter Simulation Conference.

Rose, Nathan A., Beauchamp, Gray, Bortles, Will, “Quantifying the Uncertainty in the Coefficient of Restitution Obtained with Accelerometer Data from a Crash Test,” SAE Technical Paper 2007-01-0730, 2007.

Sargent, Robert G., “Verification and Validation of Simulation Models,” Proceedings of the 2003 Winter Simulation Conference.

Reader Comments

Sam Kodsi (Kodsi Forensic Engineering, 3/18/2017) - Good article on crash simulations, Nate. Although the simulations have been validated primarily for two-vehicle crashes, they ignore the crash pulse duration (i.e., they assume the crash is instantaneous, when research shows that a crash pulse typically lasts between 80 and 200 milliseconds). The “point of impact” and plane of contact, as set by the simulation expert, are placed at maximum engagement between the vehicles (maximum dynamic crush, not static), whereas in reality the area of contact between the vehicles changes continuously during the crash, from initial contact through maximum engagement to the rebound and separation phases.

Simulations also may not model vehicle-specific suspension and steering geometry the way vehicle dynamics software such as CarSim, TruckSim, or MotorcycleSim does, nor the vehicle structure, force-deflection, and crush deformation the way finite element analysis (FEA) software such as LS-DYNA does. As such, there is uncertainty in the simulations, which predict the vehicle trajectories based on the final rest positions (physical evidence) and many assumptions by the expert, including the unknown crash-specific restitution (about 0.07 to 0.20) and vehicle-to-vehicle friction. These are limitations, just like EDR limitations, etc. “Trajectory error” are not dirty words.

#accidentreconstruction #simulation #forensics #forensicengineering #crashreconstruction #expert