Facts Optional: A Case Study in DEQ’s Data Spin
Published Jan 30, 2026
Katie Murray: The agency is making a decision to include data of questionable quality in their reporting because they just want to. There’s a lot about their approach to the data that is along those same lines. They’re leaving out all of that data below reporting limits because they want to. And some of the responses that I’ve heard is we’re being precautionary. The scientific response to that would be, there are ways to be precautionary. There’s a whole scientific approach around being precautionary, but it doesn’t mean that you just make it up and do what you want.
Chris Edwards: Welcome to Forestry Smart Policy, a podcast produced by the Oregon Forest Industries Council for policymakers and other thought leaders influencing decisions in Oregon. I’m Chris Edwards, your host and president of OFIC. This episode is a little bit different. Today, we handed the mic to Katie Murray, the executive director of Oregonians for Food and Shelter, where Katie hosts a conversation with Isabella Nelson, a recent graduate of Oregon State University, and Dr. Kim Anderson, a professor in the Department of Environmental and Molecular Toxicology at Oregon State University. Isabella recently graduated with an honors bachelor of science degree in environmental chemistry, with a minor in toxicology. The conversation focuses on Isabella’s analysis of work and data from the Oregon Department of Environmental Quality’s Pesticide Stewardship Partnership. It was an extraordinarily revealing conversation, one every policymaker in this space ought to hear, especially in an environment where budgets and accountability are top of mind. The analysis demonstrates how bias and desired outcomes skew the presentation and interpretation of data and ultimately politicize science. We hope you learn as much from this episode as we did. Without further ado, here is Katie Murray hosting a conversation with Isabella Nelson and Dr. Kim Anderson.
Katie Murray: All right. Well, thank you for having me today as a guest host on your podcast. My name is Katie Murray. I’m Executive Director of Oregonians for Food and Shelter. And I’ve done one podcast in the past for OFIC, but I’ve never hosted a podcast, so hopefully, I do a good job with this. Just a brief bit of background on Oregonians for Food and Shelter, or OFS for short. We are a large coalition of natural resource industry members. We have individual farms. We have forestry operations. We have commodity groups and trade associations. And we come together really around three main things: pesticides, fertilizers, and biotechnology. So we are trying to encourage science-based policies around these issues, which tend to be fairly controversial, hot-button issues in the state. So we’re working a lot on state legislative issues and also regulatory issues. And one thing we want to talk about today is something that we’ve been working on at OFS for many years now. A little bit of a deep dive into environmental science and public policy to talk about an issue that, in a nutshell, relates to how flawed data analysis can lead to misguided environmental policies and actually impact what we’re seeing on the ground in terms of results. So this is about Oregon’s Pesticide Stewardship Partnership. This is a program that started in the late 90s in the state, was expanded with state funding in 2013, and has continued to be funded by the state since then. It also has partial funding from fees paid by pesticide registrants. This is a voluntary stewardship program run jointly by our Department of Agriculture and Department of Environmental Quality that pairs monitoring for pesticides in Oregon’s waterways with targeted outreach and education responding to whatever findings come up from that monitoring. So we’re looking in the water, trying to see what we find. 
If we find stuff, we’re going out to pesticide applicators and trying to help them do better and keep pesticides from entering the water. There’s also a goal there of helping to maintain effective pest management for our producers and other applicators so that those pesticides can continue to be used responsibly and safely. So there are many success stories from this program, which we can talk about a little bit, over several decades. We’ve had regions like Walla Walla and Hood River, where we’ve seen great success in reducing certain pesticide findings. Over the last few years, we’ve had some concerns come in with this program, where the data that’s being collected and intended to inform targeted education is now being analyzed in a different way and put back out in ways where it’s taken out of context, using inappropriate analysis methods that are really not scientifically supportable. So they’re cherry-picking data, they’re including data of questionable quality to draw very sweeping conclusions and to paint an overwhelmingly negative picture, when in reality, the actual data set tells a different story. So based on all of these issues and the lack of response that we’ve gotten as we’ve raised the issues, we’ve had meetings, we’ve sent letters, we’ve tried to raise this with agency leadership, we decided to come up with a project for a student at Oregon State University to really just take this data set and analyze it with appropriate methods and make recommendations for this program on how they can improve what they’re doing. Because ultimately, we want to see this program succeed, we want the metrics to be measurable, to be accurate, and we want the results to be successful. We want to see minimized pesticides in our waterways. Before we get into the meat of the program, I wanted to introduce who we have with us today. First, we have our intern, Isabella Nelson. 
She was a student at the time at Oregon State University. She’s since graduated and is now still working in one of the labs there. We also have her supervising professor, Dr. Kim Anderson from Oregon State. Before we go to Isabella, let’s just hear for a second from Kim. I know you’ve worked with this program probably since its inception. You’ve been at OSU for how many years? 27?
Kim Anderson: 27.
Katie Murray: So, just talk a little bit about that. Tell us your thoughts on kind of when you started hearing about these issues and this project and kind of what your thoughts are on where we are.
Kim Anderson: Thanks, Katie. So, this is my first podcast. So, I’m very nervous. Thanks. So, yeah, I actually remember when the program came into being in the late 90s. I was in Idaho at the time and subsequently came to Oregon State, and it was an incredible program. I remember everyone being inspired by Oregon, that they could come together across the spectrum of everyone in the state and find common cause and develop a program that could address those things they could agree on that were really important to all of them. And it was really nothing short of inspiring. And other states, Washington and Idaho, actually adopted this kind of approach, and I thought it was just really very cool that Oregon could lead the way in this. And I think the whole goal was, across the spectrum, everyone agreed, let’s develop good, robust, evidence-based methods to solve the problems we can agree on, that we have concerns about. And it was that very sort of Oregon way that I thought was inspiring, and I’m still inspired. I know I was involved slightly with the Hood River success story, and there’s just so much success that can still happen. I think it’s that desire to get back to those sort of robust approaches so that we can solve what we all agree are common problems.
Katie Murray: Totally agree. I think there’s a lot that we have to build on and I would just like to see this program refocus itself so that we can really dive back into this data set in an appropriate way and start tackling those problems in other regions. Before we kick it over to Isabella, Kim, just tell us a little bit about what your role was with this project because this really was student-led, I would say. Isabella was very independent working on this project, but what role did you play?
Kim Anderson: I didn’t have a role directly in Isabella’s internship. She was hired by OFS when she was a student, but she was getting her honors degree in my laboratory and doing research. So she would come to me and ask questions about, well, how do I handle this kind of data and how do I handle that kind of data? I just told her how to handle data, and it didn’t really occur to me whether she was asking for her honors thesis or for her internship, because the answer was the same. This is the approach we use now in 2025 to handle data. So the answers were always the same, and I just mentored her like I mentor all my students, giving them the best approach, the best scientific methods, the appropriate methodology.
Katie Murray: Great. Well, we certainly appreciate having your input into this. Isabella, why don’t you tell us a little bit about how you started to approach this project when you first started?
Isabella Nelson: Yeah. Thanks, Katie. When I first started this project, I had a lot of catching up to do on the background of the program. I just went through and I read as much information and all of the reports as I could get my hands on, and then I dove into the data. I was able to get the raw data for a select couple of pesticides from DEQ directly, and I put it all in an Excel sheet, and then I just followed the data wherever it took me, followed whatever rabbit holes it led me down, and I ended up finding some pretty big conclusions from that.
Katie Murray: So just to clarify, you decided to cover just a couple of basins and a couple of pesticides to keep the data set manageable. So this was kind of a pilot, aimed at providing the agency with some examples of how they could do things differently with the data.
Isabella Nelson: Yeah. So I chose to look specifically at the Pudding region up around Salem, where we’re recording right now, and the Amazon region down around Eugene. Then within those regions, I looked at five individual pesticides, which have all been identified as pesticides of high concern by the program in the last couple of years.
Katie Murray: Great. So you mentioned you found some pretty big conclusions. So let’s start walking through some of those. What were the most important things that you found?
Isabella Nelson: So the first, and I think one of the most important, issues that I found was how the program is treating data below minimum reporting limits, or MRLs. This is going to get a bit technical, so bear with me. The MRL is the concentration below which a pesticide can still be reliably detected as present in the water, but its exact concentration cannot be reliably quantified. So you cannot say for sure what the concentration is, but you can say it’s there. Anything below this MRL is being excluded by DEQ from the analyses and from being reported in public data. Now, as I mentioned, below the MRL doesn’t mean the pesticide is not there. It just means it’s present at very low levels, well below any safety thresholds for humans or aquatic life. Now, throughout the life of this program, across the regions and the pesticides, the majority of samples being collected and analyzed are below this threshold. Well over 90 percent of the samples are below this threshold, which is a good thing, because it demonstrates that these pesticides are being detected at very low levels in the waterways and they’re not expected to pose risks to humans or aquatic life. However, in the analyses that DEQ is doing and in the data that it’s presenting to the public, all of these values below the minimum reporting limit are being excluded.
Katie Murray: And I think we just need to clarify here, the raw data that DEQ sent for this project didn’t actually include any non-detects, so we’re assuming they’ve excluded all of that data as well as any data that’s above the detection limit but below the reporting limit, or what you’re terming the MRL. Maybe give us an example of this, because this is a little bit complex to understand. What does it mean when they exclude all of this data, which as you said could be upwards of 90 percent of the data they’re finding? What would that look like? How does that change what we’re seeing in the data set?
Isabella Nelson: As a specific example, down in the Amazon region, for imidacloprid, which is one of the pesticides that we looked for in this pilot, there were over 930 samples collected for this pesticide since 2012. However, DEQ reports only 17 samples as being analyzed for imidacloprid, and that is because only these 17 samples had imidacloprid at levels above this minimum reporting limit. So that right there is an exclusion of over 98% of those samples. And what it ultimately means is that any statistical analyses you’re doing with these data are going to be skewed much higher, because you’re excluding, what is that, 913 low samples.
Katie Murray: So the averages that DEQ would display for the concentrations that they’re finding would be artificially much, much higher in their reporting if they leave out all of these very low numbers, which would bring that average down significantly.
Isabella Nelson: Yes. And looking at the data and calculating it with MRLs and without MRLs, you can see a stark contrast in how large those averages are.
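The contrast Isabella describes can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the MRL value, sample counts, and concentrations are invented, not DEQ’s data, and substituting MRL/2 for the censored results is just one simple convention (more rigorous censored-data methods, such as Kaplan-Meier estimation, would be preferable in a real analysis):

```python
import statistics

# Hypothetical illustration: these numbers are invented, not DEQ's actual data.
mrl = 0.05            # assumed minimum reporting limit, in µg/L
n_below_mrl = 93      # samples where the pesticide was detected, but below the MRL
above_mrl = [0.2, 0.5, 1.1, 0.3, 0.8, 0.4, 0.6]  # the few quantified detections

# Approach A (what the transcript describes DEQ doing): drop everything below the MRL.
mean_excluding = statistics.mean(above_mrl)

# Approach B: keep the censored results, substituted here as MRL/2, a simple
# common placeholder. Proper censored-data statistics would be preferable.
all_samples = [mrl / 2] * n_below_mrl + above_mrl
mean_including = statistics.mean(all_samples)

print(f"Mean of above-MRL samples only: {mean_excluding:.3f}")   # 0.557
print(f"Mean including censored samples: {mean_including:.3f}")  # 0.062
```

Even in this toy example, keeping the censored results pulls the mean down by almost an order of magnitude, which is the stark contrast in averages described above.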
Katie Murray: I think one thing that’s good to note here is that all of this data gets presented to what this program terms their local partners. These are Soil and Water Conservation District staff, who are then tasked with developing education and outreach materials that go directly to applicators in their region based on these findings. So we’re presenting data to these local partners, asking them to design whole education programs around data that may not be accurate, and, well, isn’t accurate in its reporting. And that presents a major problem, not just for the metrics of the program, but for its results. How can we get results if we aren’t targeting the right things, or if the things we’re targeting are artificially inflated? So maybe Kim, I don’t know if this is bringing up any thoughts for you about just the basic science around what you do with a data set and numbers like this, but what are your thoughts?
Kim Anderson: First of all, minimum reporting limits are a tremendous amount of work for DEQ to develop, and as Isabella said, a result below the MRL means that they did detect a chemical. They’re just not as certain about the exact concentration. There’s more uncertainty about the exact concentration because it’s very, very low and it’s hard analytically. So I do want to give DEQ a shout-out for doing that work. It’s really important to have the distinction between below detection limit, we did not find it, and below minimum reporting limit, we found it, but there’s more uncertainty in this concentration because it’s so low. I just think it’s a shame that they removed this very useful distinction between below detection limit and below minimum reporting limit. There’s a distinction there. That’s why they went to all that trouble to define those two values, and they’re removing the distinction by lumping everything together as below detection limit. That distinction carries real analytical information. Removing it results in hiding data. It results in distortions of averages. It results in distortions of trends. It changes conclusions. So I think not having a methodologically rigorous approach is really detrimental to the program. We have to tie the original goals of the program to the study design and to how the data analysis happens. Those are inextricably linked, and we have to use data analysis approaches based on the original questions we asked and the study design. And low-level measurements are critically important when we’re doing exposure assessment, environmental monitoring, and toxicology studies. So I think this is just super critical to the program.
Katie Murray: And again here, I just want to emphasize that even data below the detection limit, or what some would call non-detects, is valuable and should never be stripped out of a data set. No data should be excluded from analysis, and certainly not in a data set like this one, where the zeros and the very low numbers are incredibly meaningful. The way the agency is currently approaching this, leaving out the overwhelming number of data points and really just focusing on those highest results, is very misleading, not just to their local partners trying to implement education, but to the public when they put this out through their data tool. It’s misleading to policymakers who might look at this data set and think that every time they go out, they’re finding these high concentrations, and that that is the extent of the data set, when in fact it’s the opposite. Over 90%, maybe even 95% of the time, those detections are so low, or even non-existent, that they’re not going to pose any safety issue. That’s really valuable information, and in many cases is something to celebrate about what our producers are doing out there and how they’re keeping the water safe. So Isabella, why don’t you walk us through the next issue that you identified?
Isabella Nelson: So in addition to the issues with minimum reporting limits, another thing we identified is the consistent inclusion of samples that have failed quality control procedures. Looking at the raw data, there are notations showing which samples have failed QC, quality control. And between 2009 and 2023, in the data that I was looking at, nearly 25% of the samples failed quality control. Now, in laboratory science, as I’m sure Kim will attest, quality control is very, very important. When a sample fails quality control, it means the results aren’t reliable and either shouldn’t be included in analyses or should be included cautiously. These samples could be failing quality control for a variety of reasons. There could be contamination in the sample. There could have been an equipment malfunction. There could be human error. Or, to get into the nitty-gritty of this specific program, a number of these samples were failing because there wasn’t sufficient sample volume collected at these sites, or the water that was collected was too silty to be analyzed accurately. Now, standard scientific practice says you should exclude these failed samples. However, DEQ is including all of them, and when they report them in this public data viewer tool, they don’t differentiate which samples failed and which passed, which could lead to the wrong conclusions from this data. When we exclude these samples from analysis, we ultimately get a more accurate data analysis, which leads us to more accurate results and conclusions.
Katie Murray: This is another important point that we have actually talked to the agency about, because I’ve noticed this in past years in the spreadsheets of data that I’ve received from them. There is that column that says that data failed or didn’t fail quality control, and I have asked and been told that although yes, it did fail quality control, they feel that it’s still useful information. And so the agency is making a decision to include data of questionable quality in their reporting because they just want to. And I think that there’s a lot about their approach to the data that is along those same lines. They’re leaving out all of that data below reporting limits because they want to. And some of the responses that I’ve heard is we’re being precautionary. We want to try to approach this as precautionary as possible. And I think our response to that, and I assume the scientific response to that, would be, you know, there are ways to be precautionary. There’s a whole scientific approach around being precautionary, but it doesn’t mean that you just make it up and do what you want. You really should be following a protocol. Am I wrong?
Kim Anderson: You know, there are times when one might have some type of failed QC. Presumably, there are multiple levels of quality assurance and quality control that go into the program. I’m sure there are. The thing you always have in a rigorous science program, a rigorous quality assurance program plan, is that if something fails QC, prior to it failing, as part of the program plan, you have criteria defined as to why you would allow yourself to use that data or not. So the criteria are defined ahead of time. It may fail because of X. But as the director of the analysis, I’ve already predefined that if it was just outside the limit, I would still include it, though I would maybe flag it as having more uncertainty. So the criteria you would use have to be predefined, and they have to be transparent. And usually, even within that, you would still tag that value as having more uncertainty because it did fail QC. So there is a normal, accepted scientific approach to how you deal with quality control.
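A minimal sketch, in Python, of the predefined-criteria idea Kim outlines. The QC failure categories and the rules for which failures remain usable are hypothetical, invented here for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    concentration: float   # µg/L
    qc_failed: bool
    qc_reason: str = ""

# Criteria defined *before* analysis (hypothetical categories): which QC
# failures still allow the data to be used with an uncertainty flag, and
# which require exclusion.
USABLE_WITH_FLAG = {"slightly_low_volume"}
EXCLUDE = {"contamination", "equipment_malfunction", "too_silty"}

def triage(samples):
    """Sort samples into kept, flagged-uncertain, and excluded bins."""
    keep, flagged, excluded = [], [], []
    for s in samples:
        if not s.qc_failed:
            keep.append(s)        # passed QC: use as-is
        elif s.qc_reason in USABLE_WITH_FLAG:
            flagged.append(s)     # retained, but reported with extra uncertainty
        else:
            excluded.append(s)    # fails the predefined criteria: drop from analysis
    return keep, flagged, excluded

samples = [
    Sample(0.10, qc_failed=False),
    Sample(0.30, qc_failed=True, qc_reason="slightly_low_volume"),
    Sample(0.90, qc_failed=True, qc_reason="too_silty"),
]
keep, flagged, excluded = triage(samples)
print(len(keep), len(flagged), len(excluded))  # 1 1 1
```

The point is not the specific categories but that the rules exist, and are transparent, before any individual result is on the table.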
Katie Murray: Well, I think the list continues to go here. We’ve got another issue that you can tell us about related to how this program presents the data in terms of frequency of detection. So what did you find there with detection frequency?
Isabella Nelson: Yeah. So detection frequencies are simply how many times a pesticide has been detected out of how many times it’s been sampled for. Historically, detection frequencies have been used in this program to draw conclusions about progress that has been made for one individual pesticide within one region. So just looking at, let’s say, imidacloprid in the Hood River region. However, over the past couple of years, this program has started drawing conclusions by comparing detection frequencies between regions, between pesticides, and over the years. And that’s not an appropriate use of the data. And that is because, most importantly, this data comes from targeted sampling efforts. This program goes out and collects samples at the times of year and at the locations where they are expecting to find high concentrations of pesticides, which results in something we call sampling bias. So they may be targeting a specific stream near an agricultural field at the time when they know that one farmer is using a pesticide, expecting to find that pesticide in the stream. This sampling approach does make sense for the program’s original purpose of finding areas where pesticides are high in the waterways, educating applicators in those areas, and then monitoring to see if that education works before moving on to another location where they expect to find the same high concentrations. But if you’re moving around to mainly sample in places where you suspect you’ll find high detections, it is then inappropriate to use how often a pesticide is detected as a metric of concern. And detection frequency is a large portion of how DEQ determines that a pesticide is of concern. 
Another reason why the use of detection frequencies is inappropriate in this specific program is because the number of samples that are being taken varies dramatically between regions, between locations within a region over the years, and between the individual pesticide samples. So, in some cases, a location might be sampled heavily one year for glyphosate. We all know glyphosate. And then they ignore glyphosate in that location the next year. And as a result, this means that the detection frequencies are not comparable between these two years. So you just really can’t make a meaningful comparison, given their methods of going to specific regions, looking for stuff where they know they’re going to find it, and then having this haphazard sampling method where sometimes they look for it, sometimes they don’t.
Katie Murray: In some regions, they’re not even sampling for that thing, but these regions are being compared over time.
Isabella Nelson: Yes, that is ultimately correct. The way this program is set up, it’s great for the original purpose, but it is not set up to use detection frequencies as metrics of comparison or to make determinations about which pesticides are of concern.
Kim Anderson: An example: the way frequency is typically used in environmental monitoring is you have the same 10 sites every year, and you sample those same 10 sites every year, and you ask, is it getting worse or is it getting better? Same 10 sites. In this program, that’s not the original design. The design was we sample 10 sites, and at the high sites, we go in and work with the community and get them lower. Once we determine that those two sites, let’s say, were lower, then we don’t analyze those sites anymore. We go to two different sites looking for other high areas where we could work with the community. So if I’m working with 10 sites every single time, I can do frequency. But if the next year I drop two low sites, because I’ve successfully worked with those two communities, and now I go hunt for areas that are high, I’m biasing those 10 sites to look for high sites, and I’m dropping the successes. We’re going and looking for other areas, but to tell frequency, that’s not the same 10 sites, which is how frequency is used in science.
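Kim’s ten-site example can be made concrete with a toy calculation. The site names and detection outcomes below are invented, purely to illustrate the bias she describes:

```python
# Year 1: ten sites, with a detection (1) or non-detection (0) at each.
year1 = {"s1": 1, "s2": 1, "s3": 0, "s4": 0, "s5": 0,
         "s6": 0, "s7": 0, "s8": 0, "s9": 1, "s10": 1}

def detection_frequency(sites):
    return sum(sites.values()) / len(sites)

# Year 2, fixed panel: resample the SAME ten sites. Suppose outreach worked
# at s1 and s2, so those detections disappear and the frequency honestly falls.
year2_fixed = dict(year1, s1=0, s2=0)

# Year 2, rotating panel: drop two clean sites and replace them with two new
# sites chosen *because* detections are expected there. The frequency stays
# high even though conditions improved at the original sites.
year2_rotated = dict(year2_fixed)
for clean_site in ("s3", "s4"):
    del year2_rotated[clean_site]
year2_rotated.update({"new1": 1, "new2": 1})

print(detection_frequency(year1))          # 0.4
print(detection_frequency(year2_fixed))    # 0.2  <- real improvement is visible
print(detection_frequency(year2_rotated))  # 0.4  <- improvement is masked
```

With the rotating panel, the year-over-year detection frequency is flat even though two communities genuinely improved, which is why frequency comparisons require a fixed sampling design.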
Katie Murray: That’s perfect. And again, as a reminder, all of this is to come up with those pesticides that the agencies are going to deem of highest concern and which are going to get the resources focused on them. They’re going to have our local soil and water districts out there educating on them. And so what we want is to make sure that that list is accurate. And if we’re using inappropriate metrics like detection frequency, and we’re leaving out most of the data, the data that is very valuable and helps us understand the extent of those detections, we’re really not getting the right picture of where the issues are, what the issues are, and how we can then go about solving them. So we’re not done yet. We’re going to talk about one more issue that you found in your analysis. And this is perhaps the most concerning issue, but it’s going to involve a little bit of a science lesson first so that we can really dive into it. This relates to how the PSP program is assessing risk. What do these numbers mean? When we find something in the water, how do we interpret it? Is it safe? Is it not safe? How the agency handles this is another issue. So Isabella, we’ll start this off by talking about benchmarks. What is a benchmark for aquatic life? Acute versus chronic benchmarks, and then we can get into the problem.
Isabella Nelson: Yeah. So benchmarks are all set by EPA. A benchmark is the value below which a pesticide is not expected to pose any concern to aquatic life. A chronic benchmark, think of it as the long-term safety standard for a medication, whereas an acute benchmark is the safety standard for a medication you’re only going to take one dose of. So these are not equivalent values, and the chronic benchmark ends up being much, much lower than the acute benchmark. As an example, you could safely take a high dose of Tylenol once. However, if you were to then take that same dose of Tylenol every day for years, you would start to face some serious issues.
Katie Murray: Okay. So we understand the benchmark. What is acute? What is chronic? Some of this also relates to the sampling method of the PSP program and how they’re out there pulling samples. That has to go then tie back to their analysis method. So Kim, maybe you can shed some light on how that works.
Kim Anderson: Yes. No, that’s exactly right. The way the samples are taken defines how you do the analysis. So in this case, you have acute and chronic. If you pull a sample one time, presumably when there’s a high likelihood of use, that’s one sample; in the example Isabella used, that’s the one dose of Tylenol, that’s acute. But if you were to take samples throughout the year and average those samples, then in the assessment you would use the chronic benchmark, because you’re taking samples throughout the year. Well, you probably want to get a representation of the year, so you would want to take them maybe seasonally, growing season, dormant season, and then maybe on the edges of those seasons. I would say four times, but that would be a discussion to have. But you can’t just take one sample at peak use and use a chronic value, because it doesn’t represent chronic. In fact, earlier I mentioned that Idaho and Washington have similar programs. They use the chronic value, but they sample multiple times a year. So again, goals of the study, study design, and how you do the analysis, they’re all linked. You have to use the analysis appropriate to how the study was designed and how the samples were pulled. In this case, one set of samples was pulled during high use, so you would use acute. In Idaho and Washington, where samples are pulled throughout the year, you would use chronic.
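The principle Kim describes, matching the benchmark to the sampling design, can be reduced to a small decision rule. The function name, the four-sample cutoff, and the benchmark values below are my own assumptions for illustration, not an agency standard:

```python
# Benchmark values are assumed, for illustration only (µg/L).
BENCHMARKS = {"freshwater_acute": 10.0, "freshwater_chronic": 1.0}

def pick_benchmark(samples_per_year: int) -> float:
    """Match the benchmark to the sampling design, per the discussion above."""
    # Repeated sampling across the year (e.g. seasonally, ~4x) can represent
    # a chronic exposure; a single targeted peak-use grab sample cannot.
    if samples_per_year >= 4:   # assumed cutoff, open to discussion
        return BENCHMARKS["freshwater_chronic"]
    return BENCHMARKS["freshwater_acute"]

print(pick_benchmark(1))  # 10.0 -> acute, for one peak-use sample
print(pick_benchmark(4))  # 1.0  -> chronic, for year-round sampling
```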
Katie Murray: So we’ve set this up a little bit. Now, Isabella, tell us about the finding here. What is DEQ doing with these benchmarks for acute and chronic?
Isabella Nelson: Yeah. So DEQ is using these benchmarks to create what they call an aquatic life ratio, or ALR. This ALR is the highest concentration of the pesticide that they found anywhere during the sampling time, divided by the lowest available aquatic life benchmark, and that is verbatim their definition. Now, I said that the chronic benchmark is always lower than the acute. So this lowest available benchmark that they’re using is always going to be the chronic, which, as Kim explained, is not suitable for this sampling design. In addition, benchmarks are set for different species, plant species and others, and separately for freshwater and for marine waters.
Kim Anderson: And so again, it goes back to this: it’s very important to use the appropriate benchmark, in this case freshwater, because all of our streams are freshwater. And I think in some instances saltwater benchmarks have been used. Again, it’s just really important to tie what was done to the data analysis. Those all have to be tied together. So one needs to use freshwater benchmarks rather than, for instance, marine benchmarks.
Isabella Nelson: Actually, frequently the marine benchmark is the lowest available aquatic life benchmark because of the chemistry of how these pesticides behave in saltwater versus freshwater. Frequently, that lowest available aquatic life benchmark is a marine chronic benchmark. Yet none of these samples are being taken from marine waters.
Katie Murray: So I think just to try to draw this all together, in this finding, what we’re looking at is DEQ pulling out the highest peak concentration from a region, or even sometimes they do this across the whole state, what was the highest number they found, and I’ve seen these data sets, I think in a lot of these cases, that’s the one concentration that was above the reporting limit. So they’re taking out this isolated incident, and then they’re dividing it by some very low number benchmark, which is inappropriate for the scenario that they’re measuring. So if it’s saltwater, that’s inappropriate, because they’re looking at freshwater here. What they end up with is a number that is inevitably going to show some concern, because it’s this inflated scenario. It’s the worst case scenario, but to me, it’s an impossible scenario, because it represents something that didn’t actually happen. So picking out this highest concentration and trying to put it over a benchmark that doesn’t represent the exposure to any organism from this sample is, frankly, false. It’s a false conclusion. And it’s misleading the results of the program when that becomes the metric that we’re communicating the impacts with. And we’re sharing this number, which actually I’ve tried to understand in their presentations. And it’s really hard to look at these ratios and understand, what are we actually talking about? So I would suggest that they get rid of this entirely. But just to make it easier for our local partners to really see the data, they need to see the actual data. They should see the whole spread of data so that they can understand that some of these numbers are spikes and they are isolated outliers, we call them, and they should not become the central metric for how these detections are evaluated and interpreted. 
And as Kim mentioned, sampling multiple times per year in the same location could lead you to use chronic benchmarks, but she also mentioned the importance of analyzing all of those results together. And that must include any data below the MRL as well as the non-detects, rather than just picking a single high detection for comparison against a low benchmark. Well, I guess the good news, to shift my tone a little bit, is that you’re not here just identifying problems. What we wanted this project to do is present recommendations back to DEQ and ODA, you know, our agencies are working together on this project, about how they can improve what they’re doing to resolve these issues and help refocus this program on the true issues that we can be analyzing and attacking on the ground.
Isabella Nelson: Yeah, so some recommendations that we were able to come up with based on these issues that we identified first and most importantly is just include all of the data, like include the data below these minimum reporting limits, because you’re going to get a really different picture if you’re looking at 930 imidacloprid samples instead of 17. We would also recommend excluding other data, like failed quality control samples. Include the data below MRLs, exclude the data that has failed quality control, and have a clear and transparent process for this inclusion and exclusion so that the soil and water conservation districts and the public can understand what is happening with this data. And then furthermore, remove the use of detection frequencies for making determinations on what pesticides are of concern, because as we’ve stated previously, they’re not appropriate for this sampling design, and you’re not going to get an accurate picture of which pesticides are actually of concern, which can then lead to misdirected use of resources. And finally, get rid of the use of aquatic life ratios. The more accurate way of doing this would be to compare all of the concentrations that we found in a region instead of just that single high concentration outlier. Compare all of those samples to an appropriate benchmark or two. Like compare all of these concentrations to acute benchmarks for freshwater species. And then you’ll get an accurate picture of what concentrations are actually going to be of risk to these species.
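[Editor's note: the first two recommendations, include data below the minimum reporting limit and exclude failed quality-control samples, can be sketched as a simple filter. All sample records below are hypothetical.]

```python
# Hypothetical sketch of the inclusion/exclusion recommendation.
# Each record: (measured concentration in ug/L, passed QC?, above the MRL?)
samples = [
    (0.004, True, False),   # below the MRL, but still a measured value -> keep
    (0.007, True, False),
    (0.120, True, True),
    (0.900, False, True),   # failed quality control -> exclude
    (0.015, True, False),
]

# Keep everything that passed QC, including below-MRL results and non-detects,
# rather than keeping only the detections above the reporting limit.
usable = [conc for conc, passed_qc, _ in samples if passed_qc]

benchmark = 0.5  # hypothetical freshwater benchmark, ug/L
above = sum(1 for conc in usable if conc > benchmark)
print(f"{len(usable)} usable samples, {above} above the benchmark")
```

Note how the one sample that would have exceeded the benchmark is the one that failed quality control, which is exactly why a transparent inclusion/exclusion process matters.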
Katie Murray: So one of the things that we talked about, Isabella, and you actually created, is a new decision-making tool. This program uses a decision matrix: they plug in the detection frequency and these benchmarks, and that's how they decide, for each region and even across the state, which pesticides are of highest concern. Under that matrix, a pesticide that only reached half of a benchmark, 50 percent of it, still becomes a pesticide of concern. So I know that you had some good recommendations, and you even created a new matrix. You talked about that a little bit already, but is there anything you want to say about the new tool that you're recommending?
Isabella Nelson: Yeah. I can actually use an example. Oxyfluorfen, another pesticide that we looked for in this pilot project down in the Amazon region: plugged into the detection-frequency-and-50-percent-of-a-benchmark matrix that is currently being used, it pings as a pesticide of high concern. But if you look at it with the decision matrix I have been working on, which looks at how many times a sample has actually been detected above a relevant benchmark, it pings as a pesticide of no concern, because not a single sample concentration for this pesticide was ever detected above a relevant aquatic life benchmark.
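[Editor's note: the contrast between the two decision rules can be sketched as follows. The concentrations, benchmark, and thresholds are invented for illustration and are not the actual matrices used by DEQ or proposed in the thesis.]

```python
# Hypothetical comparison of the two decision rules described above.
concentrations = [0.01, 0.02, 0.03, 0.06, 0.04]  # all detections, ug/L
benchmark = 0.10  # a relevant aquatic life benchmark (hypothetical value)

def current_style_rule(values, bench):
    """Flag concern if any detection reaches 50% of the benchmark
    (a simplification of the detection-frequency matrix described)."""
    return any(v >= 0.5 * bench for v in values)

def proposed_rule(values, bench):
    """Flag concern only if samples are actually detected above
    the relevant benchmark."""
    return sum(1 for v in values if v > bench) > 0

print("current-style rule flags concern:", current_style_rule(concentrations, benchmark))
print("proposed rule flags concern:", proposed_rule(concentrations, benchmark))
```

With these numbers the current-style rule flags concern (one detection reaches half the benchmark) while the proposed rule does not, since no sample ever exceeds the benchmark itself, mirroring the oxyfluorfen example.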
Katie Murray: So I think this brings us to what are we doing with these recommendations? And I know that you’ve presented this, Isabella. There was a great opportunity to present this to the Water Quality Pesticide Management Team, which is an interagency team of staff who oversee the water quality programs jointly between ODA, DEQ. There’s also Oregon Health Authority, ODF. There’s a few different natural resource agencies in that group. It’s also been presented in various other contexts. But what has been the reaction so far? What’s your thought on how this is being received?
Isabella Nelson: There’s been an interesting combination. Most people weren’t aware that these were the practices. But a handful of people were pretty stalwart in defense of their practices. They were not happy about, or willing to listen to, these alternatives, or to being told that they’re not following the most accurate scientific practices.
Katie Murray: Kim, I know you were involved in the presentation to the water quality team. What were your thoughts after being part of that discussion?
Kim Anderson: Well, for such an inspiring program, I was disappointed at what appeared to be a less than engaged discussion. I think it’s very important to have civil discourse, and there just didn’t seem to be space for it, space to discuss any of these methodological problems with the data set. It would be really helpful if there were civil discourse about the approaches that have been used with the data.
Katie Murray: Well, I’ll say from the OFS perspective, the reaction has certainly been disappointing. I feel that the agency is just stuck. They’re unwilling to receive this feedback and listen to the scientists, frankly, who are presenting this information to them. From our side, we will continue to try to draw attention to this. I think the basic expectation is that our regulatory agencies adhere to science and follow scientific practice in their programs, even voluntary ones. All of this data gets used for different purposes: it feeds into some regulatory programs, and it also informs the public. We can’t stress enough how important science is in programs like this, and how disappointing it is when we see our agencies blatantly ignoring scientific practice. In this case, that’s what we’re seeing. Our goal here is really to make sure that this program is successful. We want it to be focusing on the right things, and we want to be able to understand the data ourselves and make sure that it’s accurate. That’s really the way we would sustain the program: by having it be appropriately focused. So I think when there’s a discovery that a program like this isn’t following scientific practice, it just leads to more trust issues with our regulatory agencies. It erodes trust in science and the scientific process when we’ve got agencies who are just not willing to improve what they’re doing. I don’t know if Kim or Isabella have other thoughts as we close this up, but to me, this is one example of an area where we would really like to see more focus and discussion on the policy side: how to have a conversation, and how to provide feedback like this and have it be received and acted on in a way that meets everyone’s goals.
Kim Anderson: Yeah, I think the program was so inspiring when it initially started, and it had really clear documented successes. And I think most of the program is still there. The very roots of this program that made it successful in the communities, all those things still exist. All those connections and networking exist. I have a lot of optimism that the very roots of the program still exist, but it’s going to wither if trust isn’t there. And if we don’t connect what we’re doing with the analysis and to bring it to the scientific standards that we use now, the plant’s going to wither, good roots or not. And that will be disappointing because it’s such an inspiring program.
Katie Murray: That’s great, Kim. I completely agree. How about you, Isabella? What are your thoughts? You’re a young student, now graduated, entering the world, and this was really a heavy project for you to take on. And presenting this back to an agency, you know, where, I mean, there’s a lot of tension, and I think you did an awesome job. But how are you feeling through all of this? What are your thoughts on having completed this project?
Isabella Nelson: Yeah, it was pretty eye-opening and honestly a little bit disappointing to learn that these agencies, and these people who are treated as the foremost experts in their fields by students, are not infallible. They have their own biases and agendas that sometimes don’t align with scientific practice, which was a bit of a letdown to learn as a young person just getting into this field.
Katie Murray: Well, I think our hope is that your work will continue to be brought forward, and that ultimately we will be able to have a conversation that moves the needle on this program and helps to refocus on quality control, appropriate statistical methods, clear data presentations, how to have an accurate risk assessment. These are very basic requests that we’re making, and really want to just make sure that this program can realign and refocus and be ultimately the successful program that it has been for the last number of decades. So I just want to say thank you to Kim and Isabella for joining and thanks everyone for listening. I guess the next time that you hear about water quality studies, pesticide stewardship partnership, hopefully you’ll have a better understanding of what good science looks like and why it matters for protecting our environment, protecting our communities, and really helping our existing and ongoing programs thrive and succeed. So thank you very much.
Chris Edwards: I hope you enjoyed this episode. Be sure to check back for new content coming your way soon on the Forestry Smart Policy Podcast. And as always, if you have a question about this episode or something else, just drop us a note at podcast at ofic.com. And who knows, maybe in a future episode, we will address your question or whatever beef you may have with what we have presented.
