Environmental sampling approaches and technology have come a long way over the last 30 years, and these improvements have increased both the quality of the data and the overall confidence in the results. Despite improvements in sampling equipment, technology, and analysis, “bad data” is still collected, processed, and reported. Bad data, as defined here, is data that cannot be used for its intended purpose; as a result, the end user cannot effectively make decisions based on it. Most often, bad data results from a lack of planning prior to sampling, but bad data can also be collected in the field, even after good planning. There are, of course, situations where good data was collected but improperly used, which can lead to erroneous conclusions…“good data, gone bad.”
With respect to environmental projects, the sampling program must accurately reflect the questions the user is attempting to answer. When developing a sampling program, several key steps are required to produce accurate data that can be relied on for decision-making purposes:
Conceptual Site Model
Prior to initiating any sampling program, a Conceptual Site Model (CSM) should be developed. The CSM describes the characteristics of a site and the processes by which potential contaminants may move from contaminant sources to potential receptors. CSMs facilitate site understanding and help organize site activities and information. CSMs are utilized throughout the lifecycle of investigation/remediation to assist with important decisions and identify potential data gaps, and are considered “living documents” (meaning they can change as new information becomes available). Ultimately, the CSM can help systematically scope and plan investigations (media, contaminants, locations), isolate relevant exposure scenarios, evaluate potential risks to specific receptors, and guide selection of any necessary remedies.
Quality Assurance/Quality Control
Once the CSM is created and the investigation needs are understood, QA/QC procedures must be developed for the project. Specific QA/QC considerations include sampling methodologies, a project responsibilities flow chart, analytical methods, laboratory QA procedures, data quality objectives, data validation, and data usability analysis. These QA/QC processes are often assembled into a Quality Assurance Project Plan (QAPP). A QAPP is required for many larger projects, such as cleanup work under U.S. EPA’s Superfund and hazardous waste programs, and for some state programs. The goal of a QAPP, or any other QA/QC methodology, is to ensure that any data generated from direct measurement activities or collected from other sources (or even compiled from computerized databases and information systems) are collected and analyzed appropriately, following specific procedures. In practice, projects that are largely “presence/absence,” such as some property transaction investigations, will have more limited QA/QC than sites in state voluntary cleanup programs or federal programs (e.g., Superfund and RCRA Corrective Action), which carry much more substantial and rigid QA/QC protocols.
Diligent Execution of Field Work
Once the CSM and the QAPP have been developed, the work must be properly executed in the field. This includes following appropriate health and safety protocols, decontaminating equipment, and conducting the field work in accordance with the scope of work and QA/QC protocols. A number of potential issues can arise during implementation of the field activities. However, it has been our experience that these potential issues can be largely eliminated, no matter how complex the sampling program, if the managers, field team, laboratories, and subcontractors are all “on the same page.” Successful field projects are the direct result of successful project planning and communication.
Data Analysis and Interpretation
The first step in the data analysis process is to ensure that the data were collected, analyzed, and validated following the QA/QC procedures identified at the outset of the project. After the results are verified and tabulated, what do we do with this information? What criteria do we compare the results to? How do we evaluate these results with respect to previous sampling results? What are the risks associated with the site? In simple “presence/absence” investigations, this is a relatively straightforward process. For most larger sampling projects, however, there is always some level of interpretation of the data with respect to delineation, additional data needs, and risk to potential receptors. The tools used for that interpretation, such as geological cross-sections, groundwater flow diagrams, iso-concentration maps, statistical tools, and models, all have limitations. The interpretive nature of these data is why it is important to have multiple lines of evidence to support your interpretations and conclusions.
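As a simple illustration of the criteria-comparison step described above, the sketch below flags analytes whose tabulated results exceed their screening criteria. All analyte names, concentrations, and criteria here are hypothetical placeholders, not actual regulatory values; a real project would use the applicable federal or state standards identified during planning.

```python
# Illustrative sketch: screening tabulated sample results against criteria.
# All values below are hypothetical placeholders for demonstration only.

# Detected concentrations from a hypothetical groundwater sample (ug/L)
results = {"benzene": 7.2, "toluene": 450.0, "xylenes": 85.0}

# Hypothetical screening criteria (ug/L)
criteria = {"benzene": 5.0, "toluene": 1000.0, "xylenes": 10000.0}

def screen(results, criteria):
    """Return each analyte whose result exceeds its screening criterion,
    mapped to a (concentration, criterion) pair."""
    exceedances = {}
    for analyte, conc in results.items():
        limit = criteria.get(analyte)
        if limit is not None and conc > limit:
            exceedances[analyte] = (conc, limit)
    return exceedances

for analyte, (conc, limit) in screen(results, criteria).items():
    print(f"{analyte}: {conc} ug/L exceeds criterion of {limit} ug/L")
```

Even for a simple comparison like this, the screening output is only a starting point; interpretation against the CSM, historical results, and multiple lines of evidence still follows.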
Bottom line. Successful environmental sampling projects require significant upfront planning and good communication throughout. August Mack is here to help guide you through this process so that you can achieve your project goals.
If you would like to learn more about environmental sampling, register for our webinar on March 3rd.