Recorded at AIHce EXP 2021
Chemical vapors generated in the radioactive waste storage tanks at the Hanford Site are a concern for workers at the site. Worker exposures have been extensively sampled, creating a very large data set for occupational hygiene (OH) exposure assessment. To handle a data set of this size, a scalable data analysis process was developed in collaboration with Pacific Northwest National Laboratory; it reduces the time required for analysis and works with a wide range of data set sizes. The process provides: a) consistent, traceable results; b) improved data quality; and c) outputs that improve risk communication. This session discusses the key components of the process: 1) improving data quality by detecting errors; 2) choosing appropriate statistical analysis methods; and 3) visualizing the data in ways that aid both error detection and communication of results. The presentation covers the development of each of these components in detail, lessons learned, and the choices that were most appropriate for the situation.
Upon completion of the session, the participant will be able to:
- Use high quality data in an exposure assessment.
- Identify data errors in a data set.
- Recognize the importance of the data distribution.
- Compare parametric vs. nonparametric statistical methods.
- Outline methods for dealing with censored data.
- Define the difference between an upper confidence limit (UCL) and an upper tolerance limit (UTL).
- Calculate UTLs using three methods.
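The abstract does not name the three UTL methods covered in the session. As an illustration only, one common trio for a 95%/95% upper tolerance limit in exposure assessment is: (1) a parametric normal-theory UTL using the noncentral-t tolerance factor, (2) the same calculation on log-transformed data for a lognormal fit, and (3) a distribution-free (nonparametric) UTL based on the sample maximum. A minimal sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

def k_factor(n, coverage=0.95, confidence=0.95):
    """One-sided normal tolerance factor from the noncentral t distribution."""
    delta = stats.norm.ppf(coverage) * np.sqrt(n)
    return stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

def utl_normal(x, coverage=0.95, confidence=0.95):
    """Parametric UTL assuming normally distributed data: mean + k * sd."""
    x = np.asarray(x, dtype=float)
    return x.mean() + k_factor(len(x), coverage, confidence) * x.std(ddof=1)

def utl_lognormal(x, coverage=0.95, confidence=0.95):
    """Parametric UTL assuming lognormally distributed data:
    compute the normal-theory UTL on the log scale, then back-transform."""
    return float(np.exp(utl_normal(np.log(x), coverage, confidence)))

def utl_nonparametric(x, coverage=0.95, confidence=0.95):
    """Distribution-free UTL using the sample maximum. Valid only when n is
    large enough that P(max >= coverage quantile) = 1 - coverage**n
    meets the required confidence (n >= 59 for a 95%/95% UTL)."""
    n = len(x)
    if 1 - coverage ** n < confidence:
        raise ValueError(f"n={n} is too small for a distribution-free UTL")
    return float(np.max(x))
```

Exposure data are typically right-skewed, so the lognormal version is often the default choice; the nonparametric version trades efficiency for freedom from any distributional assumption, at the cost of requiring a larger sample.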
Dr. Scott Clingenpeel, CIH
Jill Johnston, MS, CIH
Michael Zabel, CIH