Demystifying AI (artificial intelligence) and its applications in food safety and food risk prevention

Earlier this month, Creme Global CEO Cronan McNamara was interviewed by Nikos Manouselis, CEO at Agroknow, for the Fireside Chats series. Watch the interview below or read the automated transcript.

Intro and about Agroknow

This is Nikos Manouselis. I have the pleasure and honor of hosting a series of fireside chats on AI and food safety. I think of these as the most relaxed kind of conversations, the type I would have with you over a glass of wine or a cup of coffee or tea, trying to demystify and uncover what is behind artificial intelligence and its applications in food safety and food risk prevention.

I have to admit that you, Cronan, were number one on my list – the top person I wanted to invite to this conversation. So thank you for being here. Cronan McNamara is the founder and CEO of Creme Global, a company widely known for AI models that protect everyone’s health.

About Cronan and Creme Global

What I really like about your work at Creme, Cronan, is that you seem to work on local problems with a global impact, or global problems with a local impact. It’s amazing, at least as I understand it. So, let’s start with who you are and what the company does.

[00:01:41] Cronan: Thanks, Nikos! My background is in physics, maths, and computing. I got involved in food safety research during a post-masters research post at Trinity College Dublin. I found it interesting to apply mathematical modeling and Monte Carlo-type simulation methods to a new area of research with a lot of data, complexity, and challenges. So, I got involved in food data science, food science, and AI modeling back in the early 2000s. I founded Creme Global in 2005, and we started working with data, looking at different aspects of food intake and chemical exposure, and then moving into more machine learning and predictive modeling as we went along. So it will be interesting to talk to you about that today.

[00:02:50] Nikos: Okay. It was all about mathematical modeling and predictive modeling. And how did the name come about?

[00:03:00] Cronan: Oh, that’s a most frequently asked question! So, I’m happy to explain it today. When we were at Trinity College, there was a research project, and it was named by Professor Mike Gibney, who sadly passed away last week. He was an inspirational leader and great at bringing consortia together to do projects. Mike was a Professor of Public Health and Nutrition at Trinity College at the time.

Mike developed a project called the Center for Research and Exposure Modeling Estimates, an acronym for CREME. We worked together on that project at Trinity College. It followed on from the successful EU Monte Carlo project. As we started engaging with industry and government, they got to know the name Creme.

When we formed the company, we thought it would be sensible to keep the name Creme because it would already have some recognition, but we dropped the acronym. That’s the most frequently asked question. We tried to come up with new acronyms, but eventually, we just left it and called it Creme Global.

What Creme Global does

[00:04:05] Nikos: And you started working on exposure modeling problems. What do you do today?

[00:04:11] Cronan: Yes, exposure modeling is still a big part of our work. We work in food safety and agriculture, chemicals, and cosmetics: our three big sectors. I always liked building platforms and products, so we invested time and energy in developing our own cloud-based platform. This platform provides a secure and flexible way of aggregating and gathering data, which we can discuss a little more.

So, we help industry collaborate by sharing data securely and anonymously. Then, that data can be visualized or modeled using predictive analytics and machine learning-type methods. At the same time, it can be put through more mathematical models that we’ve created over the years, which look at exposure. Those exposure models combine consumer habits and practice data, which we get from various sources, either public or through purchasing data, with ingredient and formulation data. 

Those models can look at ingredients in food or cosmetic products at a very detailed level, whether they are consumed in food or applied via cosmetics, and add all of that information up. So, at its core, it’s a simple enough model of adding up lots of stuff, but the complexity comes in handling the uncertainties.

We use what’s called Monte Carlo simulation to model all of the uncertainty. Any number or input into the model can be represented by probabilistic distributions or ranges to help simulate that uncertainty and variability and then give good predictions of exposure or intake. We review high consumers as well as average consumers and provide that kind of information to government agencies and/or industry.
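As an illustration of the Monte Carlo approach described here, the technique can be sketched in a few lines of Python. This is a minimal toy sketch, not Creme Global's actual model: the distributions, parameter values, and units are all invented for the example, and only the general idea (draw every uncertain input from a distribution, then read off average and high-consumer percentiles) reflects the method discussed.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

N = 100_000  # number of simulated consumers

exposures = []
for _ in range(N):
    # Each uncertain input is drawn from a distribution rather than
    # fixed at a single value (all parameter values here are invented):
    intake_g = random.lognormvariate(4.0, 0.5)      # daily intake of a food, g (right-skewed)
    conc_mg_per_g = random.uniform(0.01, 0.05)      # chemical concentration, mg per g of food
    body_weight_kg = max(random.gauss(70, 12), 30)  # body weight, kg (floored at 30 kg)
    exposures.append(intake_g * conc_mg_per_g / body_weight_kg)  # mg/kg bw/day

exposures.sort()
mean_exp = statistics.fmean(exposures)  # the "average consumer"
p95 = exposures[int(0.95 * N)]          # the "high consumer" (95th percentile)

print(f"mean exposure:   {mean_exp:.4f} mg/kg bw/day")
print(f"95th percentile: {p95:.4f} mg/kg bw/day")
```

A single run of the simulation yields both the average and the high-consumer estimate, which is why the same model can answer questions from both industry and government agencies.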

[00:05:54] Nikos: I find the conversation around data sharing and private data sharing fascinating. It’s essential, and it’s one of the topics that I really want to talk about with you.

The Fundamentals of Mathematical Modeling and AI in Food Safety

But you said that you started with mathematical models, and then, in a very sophisticated way, you incorporated the uncertain parts into this mathematical modeling. Yeah, exactly. These are the parts that cannot be determined with certainty. At which point in this journey did AI come into play?

[00:06:43] Cronan: Good question, and it came a bit later in the journey, because our core offering when we started the company was these exposure models built from first principles, using scientific knowledge around intakes and the different factors that matter, absorption factors and things like that. Because you’re never sure of those numbers, you have to use a probabilistic approach. These were the mathematical methods I was using in college. I actually did a bit of work in the financial industry as well, valuing complex derivative options, again using probabilistic methods: you don’t really know the future, you only know the present, and the future will diverge in various ways within limits that are reasonably well known.

The financial industry didn’t always get that correct; they overestimated their confidence in those limits and in what can go wrong. But yes, exactly: you can model any uncertainty using these Monte Carlo methods.

That’s something I found very interesting in college, and I’ve enjoyed applying it to food safety risk assessments and exposure assessments these days.

[00:07:51] Nikos: You’re saying that the starting point is a set of first principles, as you call them, which are the input factors or variables that might affect the model’s outcome.

In my basic understanding, at least in traditional mathematical modeling, they were taken for granted by the scientists in each field. Yeah. They say, guys, we know from our research that these factors correlate with an outcome that you should expect. Yeah. Exactly.

When did AI come into play, and how did this change your perception of the problems?

[00:08:35] Cronan: Yeah, so that’s exactly right. So these are from scientific first principles. We would collaborate with toxicologists and with nutritionists to understand the key issues. And all of these parameters do affect the intake and the exposure estimates.

But the question is, how much is the impact? What are the key drivers? And how confident can we be about that? So that’s one category of models you can use: scientific or mathematical models built from first principles, as you call them, which is correct. And then, I’ve also always had an eye on the machine learning aspects.

I have always been fascinated by that approach but never had cause to apply it in those exposure-type projects, because the first-principles models were working very well, and they’re well understood and more transparent than an AI model, which comes up with predictions without any scientist programming in the different relationships.

But as we went through a number of different projects, we were gathering new data and asking different questions, not just exposure questions but other types of questions: for example, food fraud prediction, or outbreaks of a pathogen in a manufacturing environment or an agricultural region.

Those are different questions, and harder to model from first principles, because a lot of the time we don’t know the first principles. That’s where machine learning methods become really interesting: they can discover correlations and categorizations within the data that would not be obvious to humans just from looking at a visualization, for example. Different challenges require different models, and when we faced those different challenges, we moved into the more machine-learning, AI-type models.

[00:10:28] Nikos: And those would be the types of problems or challenges that require the machine to build the model as it goes.

As it receives new data, it generates this mathematical model on its own. Do I understand correctly?

[00:10:42] Cronan: Yeah, exactly. Even just training it on a lot of historical data. Once you get there, you have a model you’ve trained and tested and are reasonably happy with. Then, as new data comes in, you can retrain and update the model if necessary, and keep testing it as you go to see if it needs retraining.

[00:11:04] Nikos: So you have two scenarios: one where the traditional mathematical modeling approaches still work extremely well, and one where the machine learning approach works best. Maybe some simple examples would help us understand the types of challenges that fit each approach.

[00:11:26] Cronan: Exactly. I suppose, being a physicist, for some problems we know the maths and can just use models. Similarly, in some of our intake or exposure modeling problems, we might be asked to look at a population’s exposure to a pesticide. We have a lot of data on food consumption, we have agricultural data on conversions from foods to raw commodities, and we have data from the pesticide monitoring program. These are three distinct data sets that were never designed to be used together in one calculation, but by using first principles (we eat food, and we measure that) we can combine them.

We know information about the person, maybe their age and body weight, and we know the amount of food they’re eating from the kinds of studies that come from the government, for example the NHANES database. We know the information on the commodities, and we know the pesticides. We really just have to combine those in a sensible way.

And that will create a first-principles model that works. You could eventually train an AI model to do the same thing: if you did enough traditional studies, knew the answers, and then said, “a new pesticide has been developed, here is some data on it,” the AI could learn those patterns. But it’s not really necessary, because the first-principles approach works well and is explainable.
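The first-principles combination walked through here (consumption survey, food-to-commodity conversion, residue monitoring) can be sketched as a plain calculation. All names and numbers below are hypothetical, chosen only to show how three independently collected data sets meet in one exposure estimate:

```python
# Three independently collected data sets, combined from first principles.
# All names and numbers are hypothetical, for illustration only.

# 1. Food consumption survey record (the kind of data NHANES provides)
person = {
    "age": 34,
    "body_weight_kg": 72.0,
    "diary": [("apple juice", 250.0), ("bread", 120.0)],  # grams eaten per day
}

# 2. Conversion from foods as eaten to raw agricultural commodities
#    (grams of commodity per gram of food)
food_to_commodity = {
    "apple juice": [("apples", 1.3)],
    "bread": [("wheat", 0.8)],
}

# 3. Residue levels from a monitoring program (mg pesticide per kg commodity)
residue_mg_per_kg = {"apples": 0.02, "wheat": 0.01}

def daily_exposure(person):
    """Sum pesticide intake across the food diary, per kg of body weight."""
    total_mg = 0.0
    for food, grams in person["diary"]:
        for commodity, factor in food_to_commodity.get(food, []):
            commodity_kg = grams * factor / 1000.0
            total_mg += commodity_kg * residue_mg_per_kg[commodity]
    return total_mg / person["body_weight_kg"]  # mg/kg bw/day

print(f"{daily_exposure(person):.6f} mg/kg bw/day")
```

Each input here would in practice be a distribution fed through the Monte Carlo machinery described earlier, but the core of the model really is, as Cronan puts it, adding up lots of stuff in a sensible way.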

But for other questions, you might need to go to AI and machine learning. There are many questions in science where we don’t know the cause of everything. It could be human factors that apply to things like food fraud and climate change.

Prices of commodities change when different political events happen around the world. There’s no real first-principle model that says when political instability goes up, the risk of fraud goes this way. There’s no physical reason for that or scientific principle you could apply there.

Therefore, you have to measure that using data. In the old days, we used statistics and just said it looked like there was a correlation. And that’s really what machine learning is doing, right? It’s correlation testing on steroids, constantly changing parameters and trying to find the right fit across many different parameters.

And that’s how I see machine learning: just automated statistics on steroids, finding patterns and optimizing things. When you’ve finished that process, you end up with a model that can actually make a prediction based on new data, right?

“Machine learning is just automated statistics on steroids, just finding patterns and optimizing things.”

The Creme Global exposure models

[00:14:12] Nikos: The final outcome, the IP that you’re generating, the company’s knowledge IP, is the model per se, right? The trained model, in both scenarios?

[00:14:30] Cronan: Yes. Part of Creme Global’s IP is the data, or the curated data, let’s say. Access to data is a really important part. Then the trained model itself is IP, and how you go about training it is important.

We also think of our cloud infrastructure as IP, because we’ve engineered a system that can host models and visualizations and help organizations gather and manage data in a collaborative way. So different aspects of that would be considered IP.

[00:15:06] Nikos: So what I hear you describing is that the technology per se, which we would expect from the world of technology companies, is IP in its own right. Data can be your own data or data provided by third parties. However, the trained model that delivers reliable predictions is something that is developed by a company like Creme.

Let’s talk about data. Where is the data that you’re working with coming from?

[00:15:43] Cronan: Yeah, there are different sources of data. There are public data sets that we can access; governments publish these. For example, the CDC in the USA publishes the NHANES database, which is very rich and has lots of information in it.

Other useful data sets come from the Agricultural Research Service at the USDA and from other pesticide and monitoring programs. European, Asian, and South American countries publish similar data sets. So, I always like to start from the public data and see what’s there and what you can access.

Sometimes you have challenges around accessing the data, seeing what permissions are associated with it, and negotiating that process. So that’s a good start. But the thing is, everybody has that data, right? So, for a company, it’s not necessarily a competitive advantage to have it.

You can spend time and effort curating it and making it useful, and there’s some value in that. But the really interesting part comes when you have industry data, when you work with industry clients or even government clients to gather private data, be that around the products they’re creating, their formulations, or the monitoring programs they’re participating in.

There’s a certain value in their own data, but when they start to share that data as an industry group, the real value emerges because you get a bigger picture of what’s going on in the environment in their sector, and you get enough data to train a model.

As we know, machine learning models are hungry for data. The more data you have, the better you can train a model. So, it’s great when organizations start to pool data that can be used to train machine learning models.

[00:17:37] Nikos: So one source of data comes from the public sector, and although everyone can try to access, process, and use it, it’s not as easy as it sounds.

The second source I hear you describing is a customer’s data: an organization coming and sharing private data with you because they want to build something to address one of their use cases. But you do highlight again, I think for the second or third time in our conversation, the value of getting more than one organization pooling data together.

Regarding the private data, how open are they to doing something like this?

[00:18:25] Cronan: Yeah, that’s the challenge, for many reasons, and it’s always a slow start to these kinds of initiatives. Some industries have done this really well. At the upcoming SOT conference, we’re going to do a joint presentation with the Research Institute for Fragrance Materials on how they got it right.

They’ve done this for many years and have shown a very successful case study of industry sharing data. These are fragrance formulations, which are highly secretive: the secret sauce of fancy perfume brands. The fragrance houses don’t want anyone to know how they formulate those mixtures to make these expensive perfumes, which they sell on to manufacturers of other products, like shower gels, shampoos, and body lotions, who use those same fragrances. So that is very proprietary information they would never even share with their customers. But through good science and good trust-building over the years, RIFM convinced them to start sharing that data.

They have gathered data on over 2,800 fragrance ingredients. They’ve got the most comprehensive database of formulations used in the cosmetics and personal care sector. When the government has a question, it goes to RIFM.

When the industry has a question, it goes to RIFM. We work with RIFM to help them gather that data. We’ve put together the model of exposure and risk that’s used to set safe limits for all of those fragrance ingredients in collaboration with RIFM. It’s great when you see success stories like that, and you can use them to your advantage.

It motivates other sectors to try something similar, though they’re always slow to get started. But when they do, as I’ve seen in other projects, they can really build up strong trust and a collaborative community that works well together.

[00:20:31] Nikos: And what I hear you describing again is that the real value is in creating a resource that supports and delivers value to everyone involved. Yeah, exactly. All the stakeholders involved in sharing data.

In your experience, what is the moment when they say, okay, now we get it?

[00:20:54] Cronan: Hopefully, you can deliver that aha moment early in the process, so that they get small wins, or low-hanging fruit as they call it, at the initial stages of a project. Because they’re putting a lot of effort into data collection, and usually there is some effort involved in their internal infrastructure and administration to organize this data.

They’re also worried about sharing the data in the first instance. So if they don’t see results reasonably early, the project can really lose momentum. The aha moment could come from the first visualization you do on the aggregated data, where they’re comparing their own industry, sorry, their own company, with the aggregate.

That’s a win: oh, we’re doing well here, we’re not doing so well there; we’re better than average on this, worse than average on that. That can be just a simple visualization, and sometimes it’s an impactful dashboard that they can interrogate.

And that’s a nice way to start: building up a good bit of data and visualizing it. Then, once you have enough data, you can start trying to train a model. But I wouldn’t go straight to training a model; I’d go to a visualization initially, to give them some value quickly.

When training a model, there’s nothing for them to see initially unless you can visualize the model and its results. And will they really trust the model? Probably not at first; it takes time to trust a model. But they can trust seeing the data, with as much transparency as you can offer in the aggregate world.

You can’t give away information that could be the private information of one of the participants. So you have to be careful with the aggregate visualizations, but then you can give them their own private visualization of their data, which is more detailed. And that’s a nice way to start these kinds of data projects.

[00:22:55] Nikos: So even pulling together the data, the aggregate ones, or even a better version, a more curated and high-quality version of their own internal data and developing some initial visualizations that can help them understand what they see in their own data and benchmark themselves against others is a strong argument.

I thought you would say that when they see the results of the model, they go crazy and buy it. But you said no, it’s difficult for them to trust the model. Why is that?

[00:23:36] Cronan: There are a few reasons. I think one is that they’re already experts in this business. They’ve been in it for many decades, right? And they know what’s going on in terms of food safety. Other industries may be more trusting because they don’t have the intuition.

Maybe food fraud is one of those markets where they might trust a model better, because they don’t really know what’s going on. But if you’re a farmer who has been growing produce for decades, you know the risks and what tends to happen when it rains or when there’s a storm. If you’re predicting those things, they’ll already be saying, “I know better than this model,” and it’s going to take a long time until the model proves itself worthy of their trust.

So I suppose that’s one of the reasons: they’re already experts. And if they’re clever, they’re going to be a bit skeptical, and they should be skeptical of a new technology until it proves itself.

[00:24:33] Nikos: Do you get this kind of skepticism?

[00:24:35] Cronan: I definitely do. You can tell the client, “you’ve tested the model on thousands of records, and you’re getting 95 percent accuracy.” And then they’re like, “great, but I’m going to test it out for a few months before I trust it.”

You might think, that’s exactly what we just did using your historical data, but they still want to test it for themselves.

[00:24:59] Nikos: What you’re describing is creating trust. And creating an environment where people and organizations share is even more difficult: sharing data with each other in a controlled environment, in a group that agrees on data-sharing principles.

Yeah. What would it take, I wonder, to continue this journey and make such insights available to all? How far are we from a future where the valuable knowledge you generate, especially the predictions, can be made useful for the rest of the world?

What do you think?

[00:25:44] Cronan: Yeah, that’s a really interesting part. In some of the projects, it’s already happening: they’re not saying that only the people who participate in the data sharing can use the model. Actually, some of them are saying anyone can use the model.

Now, there is a cost involved in using the model just because of having to maintain it.

But some of the projects we’re involved in have already gone that step: the model can be used by the government, or by companies in other sectors that haven’t necessarily been part of the project or shared data. So that’s definitely possible. It depends on how sensitive the data is.

And maybe in some of the other projects, we could share a subset or an even more anonymized version of the data to give people some insights. But there’s always a risk that some NGO or other organization with an agenda will pick up one or two points in the data that look bad, make a story out of it, and cause a lot of trouble for an industry. So why would they take that risk when they don’t have to? What’s the benefit to them? That’s a challenge to overcome, but I really like the spirit of scientific knowledge-sharing.

Writing scientific papers around the results that come from the data could be very useful for sharing knowledge and learnings that can be translated to other regions or other industries, right?

Using AI to tackle malnutrition

[00:27:19] Nikos: This makes sense. Yeah. Do you have a problem, a challenge, or an area that you haven’t touched yet but are eager to examine? Do you have a dream problem that you’d like to attack?

[00:27:34] Cronan: That’s a good question. I hadn’t thought about it, but yeah, there are lots of ideas. I think nutrition is an interesting area and has a lot of health consequences, probably even more than some of the far more high-profile food safety issues.

So malnutrition, either overnutrition or undernutrition, is very interesting. It is a scientific area that could have a huge impact on health and people’s well-being, and I’d love to do more in that. We did a study many years ago on personalized nutrition called Food for Me, and recently worked on a new program based on a twin study at Stanford that looked at genetic impacts and things like that.

It was interesting to see the study they did with twins: the impact of genetics on people’s diet and lifestyle, along with other factors. I thought it was fascinating, so I’d love to do more in that space.

[00:28:30] Nikos: This is the space and area where you would like to get more data.

You would like to have access to the data to build the models and see what you can predict in terms of expected outcomes, right?

[00:28:41] Cronan: There are better tools now; everybody’s wearing some kind of fitness tracker, like a watch. I’ve got the Apple Watch, and it’s got great data monitoring my health, all these different metrics. You can get genetic data quite easily and cheaply these days, and microbiome information, all this stuff.

I suppose it’s just an interesting challenge, with all that complexity and our lack of understanding of the interactions between these things: how to live a healthy life and optimize your lifestyle under various conditions.

Creme Global’s 10-year goals

[00:29:15] Nikos: If we have this conversation five or 10 years from now, what would you like to have achieved, and be very proud of?

[00:29:24] Cronan: I’m not sure if we’ll get into that area in the next 5 or 10 years. Maybe we will, but I think we’ll continue on our journey around all the other aspects of food safety and chemical exposure and nutrition. We are doing some nutrition work, but that’s a very ambitious grand challenge.

And maybe we will try to set up something like that. Maybe as an EU project or as an even bigger worldwide project. I’d love to do a new project someday where I came up with a grand challenge like that, and I could have the time and space to really work on something like that for five years.

I find that a lot of research projects have some kind of science behind them, but nearly all of it is already known at the start, and what’s going to happen at the end is known. You do your best, and it’s a bit formulaic sometimes, but I’d love to do a more ambitious research project with some good resources behind it.

If I had time and the resources to do it. Yeah.

[00:30:18] Nikos: So you would like to spend five years devoted to solving one of the grand challenges, with the time and resources to do it. I really hope you will be able to tell me such a story when we have this conversation again.

Key takeaway

[00:30:36] Nikos: If you would like our audience to keep one key message from our conversation today, what would that be?

[00:30:53] Cronan: Yeah. I always try to emphasize really good scientific thinking, critically evaluating data, and using strong mathematical methods.

These days we see the emergence of AI systems, like chat models and large language models, that are becoming very powerful. But for now, we still need to be very good critical thinkers in order to interact with and evaluate the output of these models.

Like I said earlier, if you provide a predictive model, even to your customer, they’re going to be skeptical. They’re going to think about it critically, see if it makes sense, and take time to trust it. So I think being able to evaluate things like that, and being good scientists and good mathematicians, will be really important in every industry going forward.

[00:31:53] Nikos: So, stay good critical thinkers and evaluate what such technologies can offer with a critical and scientific and mathematical eye. 

“stay good critical thinkers and evaluate what such technologies can offer with a critical and scientific and mathematical eye.”

[00:32:25] Nikos: Thank you so much. It was a pleasure having you here with me. We talked about amazing things and covered lots of different aspects. Thanks for your time, energy, and openness.

[00:32:39] Cronan: Oh, it was a pleasure. Thank you.
