From Data Pains to Data Chains

Chains of data can mean a number of things. For instance, estimating dietary exposure to pesticides requires knowledge of the farm-to-fork continuum: how pesticides sprayed on crops ultimately end up on the consumer's plate, and how pesticide levels are measured and change through processing and other influences along the supply chain.

In the FACET project (Flavourings, Additives and Food Contact Materials Exposure Task) we developed a model for estimating exposure to contaminants from any food packaging structure on the market. To do this, we gathered data sets on all elements of the packaging industry within the EU and linked them together in a probabilistic model. This involved creating databases of food packaging on supermarket shelves, databases on the construction of the elements of different packs, and databases on the chemical composition of the raw materials that make up the packaging. Given the size and complexity of the European food packaging market, these data sets can never be linked exactly; they can, however, be linked probabilistically, and that is exactly what we did.
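To give a flavour of what probabilistic linking means in practice, here is a minimal Python sketch. The pack types, layer materials, market shares and migration ranges below are invented placeholders, not FACET data; the point is the structure: each Monte Carlo iteration draws a pack type according to market share, draws a migrant concentration for that pack's materials, and turns the result into an exposure.

```python
import random

# Hypothetical, simplified stand-ins for the three linked data sets: market
# shares of pack types for one food, layer construction per pack type, and
# migration ranges per raw material. All names and numbers are invented.
pack_market_share = {"laminate_pouch": 0.6, "coated_can": 0.4}
pack_layers = {
    "laminate_pouch": ["PET_film", "adhesive"],
    "coated_can": ["epoxy_coating"],
}
migration_mg_per_kg_food = {   # illustrative (min, max) migration ranges
    "PET_film": (0.001, 0.01),
    "adhesive": (0.0005, 0.02),
    "epoxy_coating": (0.002, 0.05),
}

def simulate_exposure(food_intake_kg_per_day, n_iter=10_000):
    """Monte Carlo linkage: shelf item -> pack type -> materials -> exposure."""
    packs, shares = zip(*pack_market_share.items())
    exposures = []
    for _ in range(n_iter):
        pack = random.choices(packs, weights=shares)[0]           # market-share draw
        conc = sum(random.uniform(*migration_mg_per_kg_food[m])   # per-layer migration
                   for m in pack_layers[pack])
        exposures.append(conc * food_intake_kg_per_day)           # mg/day this iteration
    return sorted(exposures)

exposures = simulate_exposure(food_intake_kg_per_day=0.2)
print("median:", exposures[len(exposures) // 2])
print("95th percentile:", exposures[int(0.95 * len(exposures))])
```

Because the output is a distribution rather than a single number, the linked data sets yield percentiles of exposure even though no individual pack can be matched exactly to an individual food item.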

Another example comes from microbial risk assessment. This involves using growth models to determine how environmental conditions in the supply chain, such as time and temperature, affect microbial growth in foods and how this ultimately impacts the consumer.
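As a concrete illustration of the kind of growth model involved, the sketch below applies the classic Ratkowsky square-root relationship between temperature and growth rate over a hypothetical time/temperature profile. The parameter values and the profile are illustrative only, not taken from any particular assessment.

```python
def growth_rate_log10_per_hour(temp_c, b=0.023, t_min_c=-1.9):
    """Ratkowsky square-root model: sqrt(rate) = b * (T - Tmin).
    Illustrative parameter values; no growth at or below Tmin."""
    if temp_c <= t_min_c:
        return 0.0
    return (b * (temp_c - t_min_c)) ** 2

# Hypothetical time/temperature profile along the supply chain: (hours, deg C)
supply_chain = [(4, 8.0),     # distribution
                (48, 4.0),    # retail display
                (24, 7.0)]    # domestic fridge

initial_count = 2.0   # log10 CFU/g at packing
growth = sum(hours * growth_rate_log10_per_hour(temp) for hours, temp in supply_chain)
print(f"Predicted count at consumption: {initial_count + growth:.1f} log10 CFU/g")
```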

In October, at the International Society for Exposure Science annual meeting in Seattle, there was a lot of discussion and debate about how to regulate the staggering number of chemicals that consumers are exposed to (over 30,000 and rising all the time). Assessing whether consumers are overexposed to any of these chemicals is a huge challenge: estimating exposure to so many substances is difficult because of the volume of data required, the number of sources of exposure, and the confidential nature of data on the composition of consumer products. The discussion centred on the need for models to be simple, conservative and deterministic, so that they can act as a screening method to flag which chemicals may be of concern (the so-called "Tier 1" approach). One drawback of this approach is that you learn very little about how consumers are actually exposed to the different elements of their chemical environment, and while Tier 1 assessments may work as a screening method, there are alternatives.
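For readers unfamiliar with the jargon, a Tier 1 screen usually boils down to a single conservative calculation along the following lines; the numbers here are invented purely for illustration.

```python
# A deterministic, conservative Tier 1 screen: assume a high daily intake of
# the food and that the chemical is always present at its maximum reported
# concentration. All numbers are invented for illustration.
max_concentration_mg_per_kg = 0.5        # worst-case level in the food
high_intake_kg_per_day = 0.3             # high-percentile daily consumption
body_weight_kg = 60.0
guidance_value_mg_per_kg_bw = 0.01       # e.g. a tolerable daily intake

tier1_exposure = max_concentration_mg_per_kg * high_intake_kg_per_day / body_weight_kg
print(f"Tier 1 estimate: {tier1_exposure:.4f} mg/kg bw/day")
if tier1_exposure > guidance_value_mg_per_kg_bw:
    print("Exceeds the guidance value: refine at a higher tier")
else:
    print("Below the guidance value: screened out at Tier 1")
```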

In FACET, we delivered a food chemical surveillance system that contains confidential information on food packaging, including market shares of packaging materials. All of this data is encrypted within the software and can be used to run a detailed probabilistic exposure assessment for every chemical reported in the project; using this approach, we covered more than 600 chemicals used in food packaging within Europe. In another project, for the Research Institute for Fragrance Materials (RIFM), we developed a probabilistic aggregate exposure tool for fragrances in cosmetics and personal care products. The driving force behind this model is a huge market study of consumer use of different cosmetic and personal care products (over 36,000 subjects). Using this approach and data provided by RIFM, the next phase of the project will cover over 200 compounds in a detailed aggregate exposure assessment.
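The aggregate side of this kind of work can be pictured with one more small sketch: for each simulated subject, decide which products they use and sum the contributions of a single compound across all of them. The product names, use probabilities, application amounts and inclusion levels below are made up, not data from the market study.

```python
import random

# Illustrative aggregate-exposure sketch: each simulated subject uses some
# subset of products; sum one compound's contribution across all of them.
products = {
    #               (P(use), grams applied/day, fraction of compound)
    "shampoo":      (0.90, 10.0, 0.005),
    "body_lotion":  (0.60,  8.0, 0.003),
    "deodorant":    (0.75,  1.5, 0.010),
}

def subject_exposure_mg_per_day():
    total = 0.0
    for p_use, grams_per_day, fraction in products.values():
        if random.random() < p_use:                   # does this subject use it?
            total += grams_per_day * 1000 * fraction  # mg of the compound per day
    return total

simulated = sorted(subject_exposure_mg_per_day() for _ in range(10_000))
print("median:", simulated[len(simulated) // 2])
print("95th percentile:", simulated[int(0.95 * len(simulated))])
```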

So what can we learn from this? While consumers are exposed to numerous chemicals through various routes, there are ways forward. By working with industry groups at a sector level, data can be gathered and chained together to cover each route. This involves data collation, formatting and a lot of number crunching (with some additional help from cloud computing), but it can be done. It takes one large project to create a tool that performs higher-tier assessments covering large numbers of chemicals across the major routes of exposure. Once that is done, answering the question of aggregate exposure becomes a much easier task.

Written by Mark Lambe on November 23, 2012