Advanced exposure analysis – methods and tools for enhanced human health risk assessment

At the American Chemical Society (ACS) conference, William O'Sullivan of Creme Global presented on methods and tools for enhanced human health risk assessment. You can watch that talk here.

Talk description

The US EPA Office of Pesticide Programs currently avails of methods described in federal government risk assessment guidance to evaluate pesticide exposures. This presentation will discuss how a wholly probabilistic approach can reproduce the results of the existing approach by converging on exposure values for aggregate and acute assessments. Creme Global and the CARES Consortium have developed the Cumulative and Aggregate Risk Evaluation System Next Generation (CARES NG) model, a refined, probabilistic model for performing exposure assessments. The CARES NG model accounts for variable human behavior as well as temporal effects on exposure. In doing so, it produces more accurate estimates of exposure by means of subject-level and eating-event-specific calculations and contribution analysis. Comparisons of current standards with the more robust techniques now available demonstrate that the newer approaches can reproduce conventional results within specified statistical tolerances, setting a reliable benchmark for future exposure calculations that will incorporate temporal components. This could inform regulatory and industry standards and best practices, expanding the scope of evaluations beyond non-temporal (i.e. point estimate) methods. The ability for pesticide manufacturers to test exposure potential before bringing a product to market has an enormous potential upside, adding a layer of consumer protection before a substance is brought forward for federal approval. Federal risk assessment guidelines should avail of the most sophisticated and validated techniques. Accordingly, an investigation into using these new probabilistic methods is underway.

Automated transcript

Hello, my name is William O'Sullivan, and today we'll be talking about advanced exposure analysis, specifically the methods and tools used for enhanced human health risk assessment with respect to pesticide exposure. All right, let's begin. We'll start this presentation with a brief overview of the motive behind improving pesticide exposure assessments, and then give a quick insight into how we go about that improvement using statistical methods.

Then we'll take a moment to review some of the key toxicological terminology, and the maths behind it, that defines acute dietary exposure in particular, before we explore how we've built upon these fundamentals to develop methods and tools for enhanced human health risk analysis. So why do we put all of this effort into exposure analysis?

It might seem like a bit of an obvious question, but it's important to acknowledge that exposure analysis plays a key role in the assessment of risks to human health. And so naturally, providing members of the scientific community with increasingly sophisticated methods and tools for determining these risks can help safeguard people on a day-to-day basis.

Additionally, it can go on to inform business and policy decisions built upon sound, reproducible science. So how do we perform advanced exposure analysis? Well, practices defined by the EPA's Office of Pesticide Programs form a common basis on which exposure assessment tools are built. Our advanced exposure analysis is derived from the application of statistical methods to these fundamentals of exposure analysis.

We construct distributions of acute exposure to, say, determine rolling averages, or to determine exposure evolution at up to minute-by-minute resolution, for either an individual on their own or for an individual with greater dietary variation as modeled by neighbors: people who share similar characteristics, not necessarily geographical neighbors.

Before I go any further, I want to introduce ourselves. Creme Global is a scientific modeling, data analytics and computing company committed to helping organizations make better decisions based on data and modeling, particularly in the exposure space, be it through cosmetics or foods. In this venture, we're the technical partner in the not-for-profit CARES NG development organization, and we're responsible for developing and maintaining the next generation of CARES.

To do so, we develop a probabilistic software model that facilitates multi-source, multi-route, aggregate and cumulative exposure and risk assessments on a cloud-based platform. Let's get into the fundamentals of exposure analysis.

So to start things off, we're going to have a quick look at acute exposure. Acute dietary exposure is quantified as the amount of a substance that a person ingests in one day, expressed in parts per million or milligrams per kilogram body weight. Quite simply, it's a value that outlines their exposure in a 24-hour period, and that single timeframe goes hand in hand with the whole idea of being acute. The acute reference dose, on the other hand, is an estimate of the maximum amount of a substance that a person can be exposed to in one day that does not pose a notable risk to that person's health in their lifetime, accounting for various sources of uncertainty.

And so naturally enough, when you're performing your testing to determine your no observed adverse effect level (NOAEL), there are going to be differences that emerge. Namely, you have your inter-species uncertainty factor: naturally enough, there's going to be a bit of variation between testing on lab animals and testing on humans. You're going to have your intra-species uncertainty factor, as humans are variable in nature and we have variations in our population accordingly. And naturally enough, you have your auxiliary uncertainty factor. This can be any number of things, but just to throw some examples out there: we could be looking at uncertainty introduced by basing your calculations on the lowest observed adverse effect level (LOAEL) instead of your NOAEL, or uncertainty around something like inhibited iodide uptake, which is not an adverse effect in and of itself but acts as a precursor to hypothyroidism, which is an adverse effect.

Next we discuss the concept of the acute population adjusted dose (aPAD), which is rather simply taking your acute reference dose and then accounting for a population-specific uncertainty factor. And then we can take this acute population adjusted dose and express the acute risk itself as a percentage of acute exposure over the acute population adjusted dose. The idea being that as long as this percentage doesn't exceed 100, the risk is considered acceptable.
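To make the arithmetic concrete, here is a minimal sketch of how these quantities chain together; the variable names and example values are illustrative placeholders, not figures from the talk:

```python
# Minimal sketch of the aRfD -> aPAD -> %aPAD chain (all values hypothetical).

NOAEL = 10.0      # no observed adverse effect level, mg/kg bw/day
UF_INTER = 10.0   # inter-species uncertainty factor (lab animals vs humans)
UF_INTRA = 10.0   # intra-species uncertainty factor (human variability)
UF_AUX = 1.0      # auxiliary uncertainty factor (e.g. LOAEL used instead of NOAEL)
UF_POP = 1.0      # population-specific uncertainty factor

# Acute reference dose: the NOAEL scaled down by the uncertainty factors.
aRfD = NOAEL / (UF_INTER * UF_INTRA * UF_AUX)

# Acute population adjusted dose: the aRfD with the population factor applied.
aPAD = aRfD / UF_POP

# Acute risk as a percentage of exposure over the aPAD; acceptable if <= 100.
exposure = 0.05   # estimated acute dietary exposure, mg/kg bw/day
percent_aPAD = 100.0 * exposure / aPAD
print(f"aRfD={aRfD:.3f}, aPAD={aPAD:.3f}, %aPAD={percent_aPAD:.0f}%")
```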

And now we're going to discuss the tiered exposure assessment as defined by the EPA. It's broken into four tiers: tier one is the least refined and tier four is the most refined. And just to quickly describe how this graphic works for y'all:

the percent crop treated there is the bubble on the left-hand side that describes the amount of crop treated for all commodities, regardless of their blending. Then, as we go from left to right along each of the rows, we see the handling based on blending type, which is how these tiered assessments are actually broken down.

And so we see a row for non-blended commodities, like apples in bulk; partially blended commodities, like mixed nuts; and fully blended commodities, like grains that go into making various kinds of bread. And so, with that out of the way, we can talk about what makes the tier one assessment the least refined.

So first things first, it assumes that 100% of the crop is treated with a given pesticide. This is not always the case, and so it's an assumption building towards, you know, figuring out what the worst-case scenario is. Then we see that none of the blending statuses are given any special treatment. In reality, we know that blending a commodity should modify the amount of pesticide residue that one is exposed to. We set the residues at tolerance level, which is the highest permissible level that these pesticides can be at. They're not always applied at tolerance level, naturally enough, but in the case of the tier one assessment, they're assumed to be. And finally, we look at the processing and how it has no impact on the residue: no matter what you do to change the commodity, no amount of processing is going to diminish the residue. So you're assuming that everything that is in the field is going directly into the consumer's diet.

And so we can move on from this least refined assessment to have a look at a slightly more refined, somewhat refined assessment in tier two. Here we still assume that 100% of the crop is treated, but if we go down to the bottom row and have a look at the fully blended commodities, we can see that there is actually a modification to the residue.

And this modification is that the residue assigned is actually the average measurement of the residues from field trials. So unlike the non-blended and partially blended commodities, which are still being assigned tolerance-level residues, the fully blended commodities are now being treated with a little more understanding of the nuance of how this process works,

and they're being given average measurements accordingly. Additionally, in the event that the data is available for the tier, we're able to account for how the residues are modified by processing. This then gives us a better picture of how these residues are being experienced by consumers in the end, because at least the processing is having an impact on the consumed pesticide residue at the end of the production chain.

But now we're able to get into discussing tier three, which is significantly refined. You can see here that there are no holdovers from the tier one assessments; everything is significant and everything matters. And the big jump, I think, is that there's no longer this assumption of 100% crop treated as a default: we understand the nuance that not all crop is going to be treated with the pesticide of interest. We then also see, as we start moving left to right from the top row, that the non-blended commodities do in fact modify residue, and they take a different approach to their partially blended and fully blended counterparts. Their residues are actually determined by taking a composite of individual elements of that commodity.

So in the example of apples, you don't just take the residue from a single apple sample and continue to do that. What you do is take composites, say a two-gallon batch of apples, determine the residues, and build a composite residue distribution. And then what you do is decomposite that distribution to create a distribution from which individual apple-level residues can be sampled.

Naturally enough, as a holdover from tier two, because we're always increasing the refinement of these assessments, processing also has an impact on the residue experienced by the consumer at the end of the process. Once we move on to the middle row, where we discuss the partially blended commodities, naturally enough the residues are being modified here as well.

So these residues are also being derived from composites, funnily enough, but based on the percent crop treated. The idea being that if 40% of your crop is treated and goes into a blended food, say, you take this commodity and apply your average residue to 40% of what you have going out.

And otherwise you assume that it's functionally zero, because it's not falling under the purview of that percent crop treated. And again, the processing also modifies the residue that a consumer would find in, for example, eating mixed nuts. And then in the fully blended environment, similarly enough but improving on the assumptions of tier two, the residues actually take the detect values from the monitoring data, and otherwise assume half the limit of detection, because you're assuming at this level that your commodities are going to be so thoroughly blended that the residues thereon will also be blended throughout the whole batch.

And so even when you are not detecting these residues, you're going to play it safe and assume half the limit of detection. And again, as with tier two, the processing goes on to modify the residues available at the end of that production chain. And this finally brings us to tier four, which is the most refined kind of assessment.

And you'll note that it only builds upon tier three in one significant way, and that's in the addition of further studies. This can include cooking and processing data: in the example of a potato, you'd be looking at taking a potato and considering potential cooking recipes.

So how it is washed, how it is boiled, or how it is washed and then fried: any number of different ways in which you can process it as a consumer (not before it actually gets to the supermarket, but once you have it as a consumer) that can reduce the amount of residue you're exposed to. Further studies can also include market basket studies,

whereby we're actually sampling the residue at the point of purchase, like in a supermarket. Or these further studies can look at residue decomposition in the field: after you've applied your pesticides and taken your residue samples, the residue may diminish further due to environmental reasons or due to chemical breakdown.
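Since the tier graphic itself isn't reproduced here, one compact way to capture the assumptions just described is a small configuration sketch; the structure and key names are my own, purely illustrative, not EPA or CARES NG code:

```python
# Illustrative summary of the four EPA assessment tiers described above.
TIERS = {
    "tier_1": {"percent_crop_treated": "assumed 100%",
               "residues": "tolerance level for all blending categories",
               "processing": "no impact on residue"},
    "tier_2": {"percent_crop_treated": "assumed 100%",
               "residues": "average field-trial values for fully blended; "
                           "tolerance level otherwise",
               "processing": "modifies residue where data are available"},
    "tier_3": {"percent_crop_treated": "actual value used",
               "residues": "decomposited composites (non-blended); composites "
                           "scaled by percent crop treated (partially blended); "
                           "monitoring detects, with half the limit of "
                           "detection for non-detects (fully blended)",
               "processing": "modifies residue"},
    "tier_4": {"percent_crop_treated": "actual value used",
               "residues": "as tier 3",
               "processing": "as tier 3, plus further studies: cooking and "
                             "processing data, market basket studies, and "
                             "residue decomposition in the field"},
}
```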

And so in diminishing that residue, that's something you'd be interested in tracking to increase the refinement of your assessment accordingly. And so, with the tiering system addressed, we're going to move into our discussion of the final equation covered by the fundamentals of exposure analysis.

And so naturally this brings us to the cornerstone of our exposure analysis, which is this wonderful equation here for determining total daily acute exposure. This expression evaluates the total daily exposure of individual i on day j, where we sum over all of the eating events k of a specified food form l, with the residue R_kl there.
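The equation itself isn't reproduced in the transcript, but from this description it takes roughly the following form (notation mine):

$$E_{ij} = \sum_{k}\sum_{l} \frac{C_{ijkl}\, R_{kl}}{BW_i}$$

where $E_{ij}$ is the total daily exposure of individual $i$ on day $j$, $C_{ijkl}$ is the amount of food form $l$ consumed at eating event $k$, $R_{kl}$ is the associated residue, and $BW_i$ is the individual's body weight.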

So based on your eating event k and your actual specified food form l: these residues aren't always necessarily single point values, naturally enough. You can actually sample these residues from a residue distribution, and performing this sampling multiple times, over a number of iterations, develops a distribution of total daily exposure values that then more accurately describes the likelihood of encountering residues in certain concentrations.
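A minimal sketch of that sampling loop, assuming residues are drawn from some per-food distribution; the function names, foods and distribution parameters are illustrative, not the CARES NG implementation:

```python
import random

def total_daily_exposure(events, body_weight, sample_residue):
    """Total daily exposure: consumed amount times a sampled residue,
    summed over the day's eating events and divided by body weight."""
    return sum(amount * sample_residue(food) for food, amount in events) / body_weight

def sample_residue(food):
    # Placeholder residue distribution (mg residue per g of food).
    return random.lognormvariate(-6.0, 1.0)

# One day's eating events for one subject: (food form, grams consumed).
events = [("apple, raw", 150.0), ("bread, wheat", 80.0)]

# Repeating the sampling over many iterations turns a single point estimate
# into a distribution of total daily exposure values.
exposures = sorted(
    total_daily_exposure(events, 70.0, sample_residue) for _ in range(10_000)
)
p95 = exposures[int(0.95 * len(exposures))]  # e.g. the 95th percentile
```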

This is a really reliable, robust kind of approach to determining exposure in the acute case, which then naturally feeds into the calculation of the percentage acute population adjusted dose which, as we mentioned not too long ago, is a nice benchmark for determining whether something poses a risk.

And as long as the percentage aPAD does not exceed a hundred percent, that risk is considered acceptable. All right. So now that we have this great little equation here, we are going to move on to the advanced exposure analysis. And before we get into another round of mathematics, we're actually going to talk about the data that powers these platforms.

So the data used in the CARES NG dietary module comes from a number of sources. The CDC are kind enough to put together the National Health and Nutrition Examination Survey (NHANES), which gives us extensive, statistically representative information about the diet of the US population.

This is critical to have. The FDA then do their part in performing the construction of the Food Commodity Intake Database, giving us recipes for the foods recorded in the NHANES, which allows us to break those foods down to the commodity level. And then finally, last but not least, the USDA

puts together the Pesticide Data Program database, which provides key data about the type and quantity of pesticides that can be found in the foods consumed by the US population. All three of these are critical pieces of data infrastructure to have for creating safe, trustworthy, reproducible and sound dietary risk assessments regarding pesticides.

But what we're going to talk about right now is dietary construction with that enhanced data. And so the NHANES data that we use is comprised of over 24,000 two-day consumption diaries. These consumption diaries are primarily records of eating habits for a given subject, but they capture additional information such as sex, age, weight, ethnicity and pregnancy status.

Typically, acute dietary assessments sample individuals in a population in order to make individual-day combinations, which are in turn used as unique points in creating an exposure distribution in the acute scenario. And standard practice has actually seen us take this two-day consumption diary and turn it into a 365-day diet for use in performing additional analysis.
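As a minimal sketch of that "temporally repeating" construction (purely illustrative, not the CARES NG code):

```python
from itertools import cycle, islice

def temporally_repeating_diet(diary_days, n_days=365):
    """Tile a short consumption diary end to end until it covers n_days."""
    return list(islice(cycle(diary_days), n_days))

# A two-day consumption diary: each day is a list of (food form, grams) events.
diary = [
    [("apple, raw", 150.0), ("bread, wheat", 80.0)],  # recorded day 1
    [("mixed nuts", 30.0)],                           # recorded day 2
]
year = temporally_repeating_diet(diary)  # day i simply repeats diary[i % 2]
```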

Diets constructed out of these consumption diaries in this way are called temporally repeating diets. And so I've put together some graphics to highlight the differences and the significance of the different methods of diet generation, and why we even do it in the first place.

So in the very simple scenario where we assume that 100% of the crop is treated: if you look at the graphic there on the left, you will see a two-day consumption diary for a teal-colored individual, marked over 12 days. This is a 12-day diet rather than a 365-day diet, for ease of use.

This person has a 12-day diet built out of a two-day consumption diary, and so this consumption diary is repeated as many times as required to fill out the 12 days. You can see there, on the second day, they have an exposure event, and so on half of all the 12 days they're having an exposure event accordingly. The thing is, if we look back to the third tier of assessment, we acknowledged that the 100% crop treated is just an assumption we make,

and if we can supplant that with evidence of what the percent crop treated actually is, we can see modifications in the exposure. For example, looking at the differences between the graphics on the left and the right: if we only have 66% of the crop treated rather than 100%, you now see fewer exposure events. You still see exposure events, because this individual is still consuming

the same material, but they're now only experiencing exposure in proportion to the amount of the substance that they'd actually be encountering, which is a more refined examination of pesticides in food. If we're looking to point out where these pesticides are, we want to know, realistically, where they are going to show up

and how tightly they are going to be clustered together. Now, this temporally repeating diet is good, but we can do better, and we do better with CARES NG in our advanced dietary construction using multivariate clustering. So rather than just taking one individual, as described in teal there on the left (I've copied over the temporally repeating diet),

we use clustering to determine similar subjects. These subjects are similar in a number of parameters, as mentioned previously: their sex, age, weight, ethnicity and pregnancy status. But additionally, we look at metrics like their health, educational and financial status, and we find subjects who are similar.

And we measure that similarity with a Gower dissimilarity index, to discard entities that are very, very different. And we can actually pull together diets that are composed of multiple subjects, rather than just two days of dietary information repeated over and over and over again, to create a bit of dietary variety, which is more realistic.
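A minimal sketch of a Gower-style dissimilarity over mixed subject attributes; I'm reading the transcribed "go dissimilarity index" as Gower's, and the attribute names and ranges below are illustrative assumptions:

```python
def gower_dissimilarity(a, b, numeric_ranges):
    """Gower dissimilarity over mixed attributes: numeric fields contribute
    |a - b| / range, categorical fields contribute 0 (match) or 1 (mismatch)."""
    total = 0.0
    for key in a:
        if key in numeric_ranges:                  # numeric attribute
            lo, hi = numeric_ranges[key]
            total += abs(a[key] - b[key]) / (hi - lo)
        else:                                      # categorical attribute
            total += 0.0 if a[key] == b[key] else 1.0
    return total / len(a)                          # average over attributes

# Illustrative subjects with mixed attribute types.
s1 = {"age": 34, "weight_kg": 70.0, "sex": "F", "pregnant": "no"}
s2 = {"age": 31, "weight_kg": 64.0, "sex": "F", "pregnant": "no"}
ranges = {"age": (0, 90), "weight_kg": (2, 200)}

d = gower_dissimilarity(s1, s2, ranges)  # near 0 = similar, near 1 = dissimilar
```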

Absolutely it is: very few people will eat the same thing on a two-day cycle for a full year. A six-day cycle is still not fantastic, but at least somewhat more realistic. And the aim of this is always to improve our assessments, to make them more refined. We want them to mimic and model the real world as best as possible.

And in increasing dietary variety, we do that accordingly. So in taking this 365-day diet, we create a temporally matched diet. Not repeating: it's matched, in that we are matching similar subjects and trying to figure out what a common diet looks like for people who are very, very similar to, say, subject i.

And if we then compare the events, we're also able to note interesting, different traits that we wouldn't have seen based on just one individual. With one individual, your exposure events are, you know, somewhat more predictable; they're a lot easier to forecast, in a way. With these subjects in the temporally matched regime,

we can see, okay, akin to the repeating diet, the teal individual gets their exposure event on that fourth day, but there's no exposure event on the sixth. But then after that, the blue and the orange individuals are having their exposure events back to back, which could be representative of the teal person's diet.

They actually might be more inclined to have the foods that the blue and the orange person are consuming on this given day, and be at a higher chance of exposure accordingly. And so this is a really key refinement, especially when we start getting into the examination of rolling averages based on these temporally matched diets,

and when we get onto the decay of exposure accordingly. Which brings us on to the actual utilization of the temporally matched diets in a mathematical regime. One of the key reasons that we're actually interested in these 365-day diets is to model the evolution of the subjects themselves.

So you'll readily recognize the consumed-amount-times-residue over body weight term there on the right-hand side of this calculation of the multiday, or J-day, average acute exposure. But what you might not notice, unless you're paying very keen attention to these equations, is that the body weight is now not just indexed by the individual i, but also by the day j.

So the key strength of these diets, and why you want to create them, is that you're now able to model the evolution of body weight, which is of key interest in populations like infants and pregnant people. And so when you have these vulnerable elements of your population that you really want to do right by, being able to model the evolution of weights over a certain period is super beneficial.
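The multiday equation isn't shown in the transcript either; from the description, it extends the daily equation above along these lines (notation mine), with body weight now indexed by day as well as by individual:

$$\bar{E}_{i}^{(J)} = \frac{1}{J}\sum_{j=1}^{J}\sum_{k}\sum_{l} \frac{C_{ijkl}\, R_{kl}}{BW_{ij}}$$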

And so this J-day average exposure is able to take clusters of acute exposure over certain timeframes and examine how this plays into larger themes of exposure as well. Are we more likely to see certain bursts of exposure around certain timeframes, around particular individuals?

Are they more prone to clustering? Acute exposure that could in turn actually end up having chronic effects all questions that can kind of, you can start to approach them with these temporarily matched. We can dig even further and with even more nuance by looking at within day acute exposure.

So NHANES collects information on these consumption events on an hour-by-hour, and down to a minute-by-minute, basis, which means that you can do your full within-day exposure modeling accordingly. So I'm going to quickly look at these equations here, and I'll explain them in a bit more detail now, because we're getting into equations that require equations.

And so we have this exposure E_ijkl here: the same story, realistically, as how we were looking at exposure earlier, in that we have our consumed amount times our residue. Of key import here, though, is that we also have the exposure from a previous eating event, modified by an exponential decay factor.

So this exponential decay is marked by the decay constant there, the k with the subscript d, times a time interval. And so the idea being that if it's your first event of the day, you're just taking the consumed amount times the residue. But as you start accumulating multiple events over the one day, you actually find yourself adding the resultant previous exposure to the exposure that's currently in your system in that key 24-hour acute period.
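From that description, the within-day accumulation looks roughly like this (again my notation, not the slide itself):

$$E_{ijk} = C_{ijkl}\, R_{kl} + E_{ij(k-1)}\, e^{-k_d\, \Delta t_k}$$

where $E_{ijk}$ is the exposure in individual $i$'s system just after eating event $k$ on day $j$, $\Delta t_k$ is the time elapsed since the previous eating event, and $k_d$ is the decay constant.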

And so as we look to determine the maximum exposure on a given day, for the calculation of a J-day average of persisting acute exposure, we take that daily maximum exposure and we divide it by your body weight on the given day, because we don't have minute-by-minute body weight; there's no temporal resolution to be had on that aspect.

But we take that daily body weight, we generate your total daily peak exposure, and we take that over the J-day average of interest, and we create that as our data point of interest. So we're creating distributions of multi-day averages of peak exposures as dependent on these eating events.
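Putting the within-day pieces together, here is a minimal sketch; the event times (minutes from midnight), the decay constant and all values are illustrative assumptions, not CARES NG parameters:

```python
import math

def within_day_peak(events, k_decay):
    """Accumulate exposure across one day's timed eating events with
    exponential decay between events; return the day's peak exposure."""
    exposure, last_t, peak = 0.0, None, 0.0
    for t_minutes, amount, residue in events:       # events sorted by time
        if last_t is not None:
            exposure *= math.exp(-k_decay * (t_minutes - last_t))
        exposure += amount * residue                # new eating event
        peak, last_t = max(peak, exposure), t_minutes
    return peak

# Illustrative J-day average of daily peak exposures per unit body weight:
# each day is (timed events as (minutes, grams, residue mg/g), body weight kg).
days = [
    ([(480, 150.0, 0.002), (720, 80.0, 0.001)], 70.0),
    ([(540, 30.0, 0.005)], 70.1),
]
J = len(days)
avg_peak = sum(within_day_peak(ev, k_decay=0.01) / bw for ev, bw in days) / J
```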

And that's going to give us a very tight temporal resolution that is, quite frankly, unmatched and unparalleled; as we get into our closing remarks, we'll loop back to that in just a moment. So, as a quick recap: the advances that we bring to the table, specifically for the purpose of this talk, are in sophisticated, temporally matched diet generation.

So taking these subjects, doing this clustering, putting together all this work and effort into computing which individuals are most likely to be having a similar diet to your particular subject of interest. And then we're able to use different methods to have a look at what matters in your particular assessment of interest.

So if you're willing to look into the effects of a subject evolving, without needing to get down to the hour-by-hour or minute-by-minute temporal resolution, the multiday method is perfect, because that's looking at your subject of interest changing in body weight over time and seeing how that's going to affect their exposure.

And then if you're looking for a much more refined examination, down to the minute-by-minute or hour-by-hour level, then the within-day method increases your temporal resolution beyond anything that's commercially available. Accordingly, the most refined acute exposure assessments currently commercially available are those from CARES NG. And this brings us to our final points, where our fundamentals are defined by well-understood equations and assessed using tiers that are developed and overseen by the EPA's Office of Pesticide Programs.

So they stand over it: we want to work off of good fundamentals, and CARES NG does just that. It then expands the field of acute-based assessments through multiday and within-day assessments, on the strength of those temporally matched diets. So it's with these improved methods and techniques that we can serve risk assessors, who in turn go on to guide practice and policy with regards to safeguarding human health.

Okay, folks, that's all from me. Thank you very much for your attention, and have a wonderful day.
