Accident reconstruction and visibility studies are enhanced with camera-matched 3D video

Latest technology can recreate environmental factors present at the time of the accident

Jorge Mendoza
March 2008

In court, the admission of video evidence for accident reconstruction and visibility studies requires that the accident scene be substantially similar to its condition at the time of the accident. With the passage of time, however, the scene’s environmental factors can change or disappear: vehicles are destroyed or go missing, trees are cut down, signs are removed, and the roadway configuration can be altered. When these missing environmental factors are the centerpiece of, or critical components in, your case, they need to be accurately reproduced for consideration in the accident reconstruction analysis and visibility study.

First, some terminology: A visibility study records what a reasonably alert driver or witness would have been able to detect under substantially similar accident conditions (see Endnote 1). An accident reconstruction uses scientific methodology to determine what factors contributed to an accident (see Endnote 2).

The credibility and admissibility of the video exhibit depend on how accurately these environmental factors are represented. In accident reconstruction, three main factors contribute to an accident: human factors (the interaction between the driver, vehicle, and surroundings), environmental factors (the roadway, weather, sun position, moon phase, topography, etc.), and vehicle factors (mechanical condition and performance). A clinical psychologist evaluates the human factors, the visibility expert examines the environmental factors, and a reconstructionist analyzes the vehicle factors.

The accident site

During a typical accident site investigation, the experts analyze accident photos, physical evidence and eyewitness testimony to determine what factors may have contributed to the accident. The reconstructionist determines the vehicles’ velocity, acceleration, impact point and position of rest. The visibility expert examines conspicuity, lines of sight and obstructions at crucial points in time. The clinical psychologist uses the combined experts’ analyses to determine a perception-reaction time for the driver and an accident avoidance model.

In an ideal world, the accident site is unchanged and the vehicle factors, environmental factors, and human factors support each other factually. The physical evidence (vehicle crush, skid marks, point of impact, and point of rest) is analyzed and used to solve for vehicle position, velocity and acceleration. Cameras are carefully positioned at the eye level of eyewitnesses, in exemplar vehicles, and around the accident scene to determine visibility issues and scene obstructions. The reconstruction video is then analyzed and used to determine the perception-reaction time for the driver. Finally, a forensic video evidentiary exhibit is produced to show the combined results from all three accident reconstruction factors.
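For concreteness, the familiar skid-to-stop relation used in reconstruction, v = sqrt(2·mu·g·d), converts a measured skid distance and a tire-roadway drag factor into a minimum pre-braking speed. The sketch below is illustrative only; the skid distance and drag factor are assumed values, not case data.

```python
import math

def skid_speed_mph(skid_distance_ft, drag_factor):
    """Minimum pre-braking speed (mph) for a vehicle that skidded to rest,
    from the standard relation v = sqrt(2 * mu * g * d)."""
    g = 32.2  # gravitational acceleration, ft/s^2
    v_fps = math.sqrt(2.0 * drag_factor * g * skid_distance_ft)  # ft/s
    return v_fps * 3600.0 / 5280.0  # ft/s -> mph

# Assumed inputs: 100 ft of skid on dry asphalt (drag factor ~0.7).
print(round(skid_speed_mph(100.0, 0.7), 1))  # -> 45.8 mph
```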

In the real world, there is a high probability that environmental factors have changed or vanished from the accident scene. The only evidence that experts may have to work with is crushed vehicles, accident photos, a police report and witness statements. The accident reconstruction and visibility study video therefore require careful planning.

The first steps are to have the accident scene 3D-surveyed and aerial-photographed. Next, the reconstructionist performs a computer simulation of the accident based on the available evidence and the 3D survey. The visibility expert then creates a 3D computer model of the scene based on the 3D survey. Computer models of the missing environmental factors, e.g., a vehicle, are also constructed based on manufacturers’ specifications. Photographic texture overlays are then applied to the 3D models and the 3D survey, and the 3D scene is lighted using physics-based lighting to match the conditions that existed at the time of the accident. If there are eyewitnesses, virtual cameras are positioned at their locations throughout the 3D scene. Finally, the computer-simulation motion data is applied to the computer models of the vehicles in the scene, and a 3D forensic animation is produced for evaluation.
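As a rough illustration of that final step, simulation output is essentially a table of time-stamped vehicle poses that the animation software interpolates onto the 3D models. The sketch below assumes a simplified planar sample format (SimSample is a hypothetical name) and plain linear interpolation; production animation packages use their own data formats and curve types.

```python
from dataclasses import dataclass

@dataclass
class SimSample:
    t: float        # time (s); reconstruction output arrives at 1/30-s steps
    x: float        # position east of the survey origin (ft)
    y: float        # position north of the survey origin (ft)
    heading: float  # rotation about the vertical axis (degrees)

def pose_at(samples, t):
    """Linearly interpolate position and heading between simulation
    samples so the animation can be rendered at any frame time."""
    for a, b in zip(samples, samples[1:]):
        if a.t <= t <= b.t:
            f = (t - a.t) / (b.t - a.t)
            return (a.x + f * (b.x - a.x),
                    a.y + f * (b.y - a.y),
                    a.heading + f * (b.heading - a.heading))
    raise ValueError("time outside the simulated range")

# Two samples 1/30 s apart; query the pose midway between them.
samples = [SimSample(0.0, 0.0, 0.0, 90.0),
           SimSample(1 / 30, 2.9, 0.1, 89.5)]
print(pose_at(samples, 1 / 60))  # -> (1.45, 0.05, 89.75)
```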

Camera matching procedure

At this point a site visit is made to acquire video from the various eyewitness locations. The forensic animation is used as a guide to assist in placing the video cameras and to identify where the missing environmental factors were located; tennis balls can be used to mark these locations at the accident site. The camera vehicle may have the camera mounted on the exterior for proper height and orientation. The accident reconstruction is done using the same scientific methodology that would be used under ideal conditions, with consideration for the missing environmental factors. After the scene video is acquired, it is processed using specialized 3D camera-matching software (MatchMover Pro, Boujou, 3DEqualizer, etc.). The software processes the video and creates a virtual camera and a 3D “point cloud.” The virtual camera has the same attributes (position, rotation, and focal length) and movement as the real-world camera that acquired the video. The 3D point cloud has a direct correspondence to points on the 3D survey and artifacts in the video. The 3D environmental factors and the camera-matched video are merged so that the 3D environmental factors fall into place on the video. The visibility study, perception-reaction time, and avoidance model are based on this 3D camera-matched video.
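Underneath the virtual-camera solve is a classical computer-vision step: camera resectioning, i.e., recovering the camera pose that best reprojects known 3D points onto their tracked 2D image positions. The commercial packages use their own proprietary solvers; the minimal sketch below illustrates the idea with OpenCV’s solvePnP, using made-up survey coordinates, pixel positions, and camera intrinsics.

```python
import numpy as np
import cv2  # OpenCV

# Surveyed 3D coordinates (ft) of stationary scene features, and the pixel
# locations where those features were tracked in one video frame.
# All numbers are illustrative placeholders, not case data.
object_pts = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [30.0, 12.0, 0.0],
                       [0.0, 12.0, 0.0], [15.0, 6.0, 4.0], [5.0, 2.0, 1.0]])
image_pts = np.array([[410.0, 655.0], [980.0, 640.0], [975.0, 418.0],
                      [402.0, 430.0], [700.0, 520.0], [505.0, 585.0]])

# Assumed pinhole intrinsics: focal length and principal point in pixels.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])

# Solve for the camera pose (the matchmove "virtual camera") that best
# reprojects the surveyed points onto the tracked 2D features.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```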

The camera-matching procedure works because the 3D survey provides a common point of reference that keeps the experts in sync with one another. The camera-matching software creates a 3D link between the 3D survey of the accident scene and the 2D video of the scene, providing a means of working in a 3D environment to accurately position the missing objects on the 3D survey. The software first tracks stationary features, then solves for the camera’s attributes and creates the 3D point cloud. This 3D point cloud is then overlaid on the accident-site survey for calibration. At this point the 2D video can be viewed in the 3D computer environment with the 3D survey and the added environmental factors overlaid. This ability to view 2D video in a 3D environment is extremely important because it is the basis for the accurate placement of the missing 3D environmental factors. Finally, the added 3D environmental factors are rendered over the 2D video for analysis.
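The calibration step, overlaying the solved point cloud on the survey, amounts to fitting the scale, rotation, and translation that best map one set of corresponding 3D points onto the other. The sketch below uses the standard Kabsch/Umeyama fit on synthetic points; it illustrates the geometry, not any particular vendor’s implementation.

```python
import numpy as np

def align_similarity(cloud, survey):
    """Fit scale, rotation, and translation (Kabsch/Umeyama) mapping
    point-cloud coordinates onto surveyed coordinates. Rows of `cloud`
    and `survey` are corresponding 3D points."""
    mu_c, mu_s = cloud.mean(axis=0), survey.mean(axis=0)
    C, S = cloud - mu_c, survey - mu_s
    U, sig, Vt = np.linalg.svd(C.T @ S)   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:         # guard against a reflection
        D[2, 2] = -1.0
    R = (U @ D @ Vt).T                    # rotation: cloud -> survey
    scale = np.trace(np.diag(sig) @ D) / (C ** 2).sum()
    t = mu_s - scale * (R @ mu_c)
    return scale, R, t

# Synthetic check: scale, rotate, and translate some points, then recover.
rng = np.random.default_rng(0)
pts = rng.random((6, 3))
a = 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
target = 2.0 * pts @ Rz.T + np.array([5.0, -1.0, 0.5])
scale, R, t = align_similarity(pts, target)
print(round(scale, 3))  # -> 2.0
```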

It is impossible to add 3D computer elements accurately to 2D video without 3D camera-matching software. However, 2D elements can be added to 2D video by using 2D tracking software (Adobe After Effects, Combustion, Shake, etc.). The 2D tracking software tracks moving artifacts and stabilizes the video (horizontally and vertically) to assist in the placement and animation of 2D elements. The main issue is that the video is a 2D representation of the 3D world: if the added 2D element does not move strictly along the horizontal or vertical video axis, the changing perspective can have a significant impact on a visibility study and accident reconstruction, which are based on 3D objects.
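The perspective problem can be seen directly from the pinhole camera model: an object’s image size varies inversely with its distance from the camera, something a flat 2D overlay cannot mimic. The focal length and object height below are assumed values for illustration.

```python
# Pinhole projection: apparent size falls off inversely with distance
# along the camera axis, which a flat 2D overlay cannot reproduce.
def projected_height_px(object_height_ft, distance_ft, focal_px=1200.0):
    """Apparent image height, in pixels, of an object at a given depth."""
    return focal_px * object_height_ft / distance_ft

for d_ft in (50, 100, 200):   # a roughly car-height (4.5 ft) object
    print(d_ft, round(projected_height_px(4.5, d_ft), 1))
# -> 50: 108.0 px, 100: 54.0 px, 200: 27.0 px (halves as distance doubles)
```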

Another consideration is that the accident reconstruction expert may provide simulation data (sampled at 1/30-second intervals) with translation in the x, y, and z directions and rotation about the x, y, and z axes. Any translation in the z direction must be 3D camera-matched; for 2D elements, only rotation about the z axis is possible. Another potential problem is establishing an accurate velocity or acceleration for a 2D moving object. Velocity is the change in position over time; acceleration is the change in velocity over time. This means that in order to apply a velocity or acceleration to a 2D element, the video must contain features of known distances, and the video frame rate must also be known. The standard playback rate established by the National Television System Committee (NTSC) is 29.97 frames per second for the United States, Canada and parts of South America.
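As a worked example of that requirement, the sketch below recovers an average speed from the number of NTSC video frames an object takes to cover a known distance; the 50-foot spacing and 17-frame count are assumed, not case data.

```python
NTSC_FPS = 29.97  # NTSC playback rate, frames per second

def speed_mph(distance_ft, frame_count, fps=NTSC_FPS):
    """Average speed of an object covering a known distance in a known
    number of video frames."""
    seconds = frame_count / fps
    return (distance_ft / seconds) * 3600.0 / 5280.0  # ft/s -> mph

# Assumed example: a vehicle passes two marks 50 ft apart in 17 frames.
print(round(speed_mph(50.0, 17), 1))  # -> 60.1 mph
```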

Two examples

The first example involves a garbage truck that ran over a ten-year-old boy (5’2”), dragging him 120 feet, as he walked two feet in front of the truck pushing a garbage container. The accident site was surveyed and aerial-photographed in preparation for the accident reconstruction and visibility study. Because the accident vehicle was destroyed, a 3D computer model of the garbage truck had to be built in the computer environment, along with a 3D model of the boy, who was by then 18 years old.

The garbage truck’s interior was a critical element in the visibility study and was digitally modeled based on a similar garbage truck’s dimensions. The camera vehicle had three cameras (providing a panoramic view) mounted at the driver’s eye level (7’2”) for the visibility study. Cameras were also positioned around the scene to capture the time-distance relationship between an exemplar boy and the camera vehicle during the reconstruction. Next, the truck interior was camera-matched into the video for analysis. The visibility study revealed that the driver (standing on the right side of the truck) could not see the boy walking two feet in front of the truck. The accident avoidance model showed that a properly positioned and adjusted mirror could have alerted the driver to the boy’s position.

The second example involves a chemical spill on Interstate 5, south of Colusa County, for which over a mile and a half of roadway was surveyed and aerial-photographed. The number one lane was closed, and warning signs were distributed over approximately 1.5 miles alerting drivers to the closure. A tractor-trailer driver claimed not to have seen any warning signs and crashed into slowing traffic, resulting in one fatality. The Caltrans signs and vehicles had to be modeled in the computer. A computer simulation of the accident was first created; then a site visit was made to acquire video. The video did not include any exemplar vehicles or warning signs. The video camera was mounted on top of a vehicle at the truck driver’s eye position, and the camera vehicle was driven at the accident speed (60 miles an hour) to the impact point. The video was then processed through the camera-matching software, and the missing environmental factors were added according to specifications. The final product was a forensic animation and video showing that the truck driver could have seen the warning signs in time to slow down and avoid the accident.

Latest uses of camera-matching

Camera-matching technology is also being used today to survey accident sites (see Endnote 3), and the French National Institute for Research in Computer Science and Control (INRIA) has used the technology to develop an optical navigation system for a mobile robot (see Endnote 4).

Learning from Hollywood

Steven Spielberg used camera-matching technology in 1993 to add computer-generated dinosaurs to the movie Jurassic Park. Industrial Light and Magic’s (ILM) Dennis Muren won one of his eight visual-effects Oscars for the film (see Endnote 8). It was the first film to feature realistic dinosaurs, and it set the standard for creature effects. Directors’ eyes were opened to 3D, and they soon integrated the technology with animatronics. Today, the motion picture industry has adopted 3D camera-matching technology to add special effects and computer-generated (CG) vehicles, people, buildings, spaceships, etc. (see Endnotes 5 and 7). These special effects and CG elements can be generated without endangering people or damaging property.

Hollywood also uses computer-controlled cameras that record the x, y, and z translation and rotation of the camera during operation. The ability to accurately reproduce this movement provides an alternate method of adding environmental factors to video, through the use of a 3D scene survey and a 3D animation and modeling program such as 3D Studio Max or Maya. The future for this exciting technology is total 3D scene extraction.

Jorge Mendoza

Bio as of July 2017:

For over 24 years, Jorge and Donna Mendoza have operated Litigation Animation, Inc., a consulting firm with offices in San Jose and San Francisco, California, specializing in laser and optical scanning, animation, photo/surveillance video analysis, and day/night visibility studies. Jorge and Donna are both FAA-licensed drone pilots with experience in optically scanning accident sites from the air for greater coverage and cost benefit. Jorge has a BS in Mechanical Engineering and Donna has a BS in Medical Technology. Both are members of the Society of Forensic Engineers and Scientists (SFES). Their work can be viewed at www.litigationanimation.com.

E-mails: litamation4297@gmail.com and litamation@yahoo.com.

Phone: 408-677-6475 and 408-206-4297

http://www.litigationanimation.com

http://www.reachfortheskymapping.com/

 

Figure captions

3D computer simulation showing a side impact between an automobile and a fire truck
Computer simulation on aerial texture overlay of survey
3D camera-matching software tracking 2D features
3D point cloud and virtual camera solution
Animation of a garbage truck about to run over a 10-year-old boy
Digital computer model of the garbage truck interior used in the visibility study
Truck’s interior made semi-transparent to show the pedestrian’s position
Tractor-trailer rear-ending a car on I-5 after failing to slow for warning signs; the view from inside the cab, post-impact

Endnotes

1. SAE Technical Paper 921575, “Visibility Study: Methodologies and Reconstruction,” Ernest Klein and Gregory Stephens (Collision Research and Analysis)

2. The Traffic-Accident Investigation Manual: At-Scene Investigation and Technical Follow-Up, J. Stannard Baker and Lynn B. Fricke (Northwestern University Traffic Institute)

3. SAE Technical Paper 2004-01-1121, “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction,” William T.C. Neale, Steve Fenton, Scott McFadden and Nathan A. Rose (Knott Laboratory, Inc.)

4. Three-Dimensional Computer Vision: A Geometric Viewpoint, Olivier Faugeras

5. Matchmoving: The Invisible Art of Camera Tracking, Tim Dobbert

6. SAE Technical Paper 1999-01-0093, “Using Computer Reverse Projection Photogrammetry to Analyze an Animation,” David J. Massa (Sigma Animation, Inc.)

7. Cinefex, Don Shay, publisher

8. 3D World, February 2008

Copyright © 2016 by the author.
For reprint permission, contact the publisher: www.plaintiffmagazine.com