Like companies in the broader economy, life sciences companies have undergone a mindset shift vis-à-vis artificial intelligence (AI).

Four or five years ago, some expressed interest in adopting AI because it was “the latest innovation” rather than because of a stated business problem. In marketing, they call this “shiny object syndrome.”

As the debuts of OpenAI’s ChatGPT and Google’s Bard have brought the promise of helpful AI closer to reality, pharma and medical device companies have rapidly advanced from expressing optimism about the technology’s potential to making AI a priority.

A recent Deloitte study, for instance, found that 66% of biopharma and medtech manufacturers surveyed are experimenting with generative AI. 

As a result, some legitimate use cases have emerged. Firms are using this technology in areas ranging from supporting compliance and regulatory affairs to automating back-office functions and testing other ideas. 

While roundtables MM+M has convened on the subject have featured anecdotal reports about the launch of genAI systems, participants have voiced a fair number of concerns, too, such as lingering mindset barriers and varying trust in AI-derived insights. 

Recently, I sat down with Saskia Steinacker, who has led global digital transformation at German drugmaker Bayer since 2016, for an in-depth, wide-ranging interview about how the company embraced genAI and addressed some of the aforementioned concerns.

Notably, Saskia stressed that governance, which entails responsible use and AI ethics, is just as important as the operational aspects of implementing AI, and that every deployment must be “fit-for-purpose.” No chasing shiny objects here.

(The following interview has been lightly edited for clarity and brevity.)

MM+M: You on-boarded genAI into the organization centrally and now enable it across the divisions. Can you tell me about the process? 

Steinacker: Sure. As you may know, Bayer is a life science organization with three divisions: pharmaceuticals, consumer health and agriculture. Our overall mission is “Health for all, hunger for none.”

So what we ultimately try to do is to say there are fundamental challenges out there – a growing and aging population, climate change, etc. We’re asking the question, “How can we bring value to our patients, to our consumers, to farmers? How can we use technology to do that?”

When discussing generative AI, we believe it’s important to start from this angle. Otherwise, it’s almost like technology chasing a problem. You’re basically starting from, “What’s your mission? What do you ultimately want to do for your customer?” 

When the excitement for generative AI came along, we had been using digital technologies already and were in the midst of a digital transformation. 

Generative AI fits into that journey. When we started off, we said, “There are three things we want to do.”

The first is to make sure that we establish internal, secure, proprietary genAI tools. That’s happening behind the firewall. We also said we want to make sure that if we use external genAI tools – something off-the-shelf, for instance – we do it in compliance with our rules. We want to be clear about how we do this in terms of data privacy and security, and how we balance those considerations.

The second is having clear engagement criteria. We’ve always selected fit-for-purpose solutions, because you don’t need genAI for everything.

Then third, we said in the end it’s all about the people. We want to help our employees build AI expertise so that, as we’re introducing these tools, they can ultimately use them in their daily work.

MM+M: That sounds like a big lift. Who oversees it?

Steinacker: We established a genAI catalyst team, a federated, cross-enterprise team which steers those efforts across the three divisions and functions we have. This group has a strategic, end-to-end perspective. Its members understand what we are doing, establish principles, then facilitate the adoption. 

What we saw – and this is an important point – is that on the one hand, this federated, cross-enterprise, cross-functional team would oversee the things we want to build centrally, while on the other hand maintaining flexibility and freedom for the divisions and functions.

It’s not a control tower. It’s just a way of saying, “How can we be super-smart in cost, efficiency and speed while at the same time being flexible and maximizing freedom for the divisions and the functions” – the best of both worlds? How can we scale capabilities quickly? 

To understand initially how it was working, we ran a few pilots across our business, keeping the effort low and the risks in check.

For example, in the U.S. we used Microsoft’s GitHub Copilot. We also started developing our own internal ChatGPT, basically a GPT for internal use. GitHub Copilot worked well. We saw that it can augment what we’re doing in coding. It’s super-efficient and productive for our coders; however, it’s always complementary to what people are doing.

That was one of the learnings. The other was that we need to partner with our legal and security functions right from the start. Because one thing is, of course, ensuring that we don’t see code and data winding up in places where they don’t belong, for example, in an external tool. Then we would have data privacy issues.

Something I can’t stress enough, as well, is to engage strategic digital and tech partners – Microsoft, Amazon Web Services and so on. We wanted to leverage their technical and program management expertise and say, “Let’s learn as quickly as possible. Let’s optimize our setup.” 

Bayer was one of the first EU-based companies to be offered the pre-market Microsoft 365 Copilot license. So that was very cool for us. A few hundred Bayer employees got to try that. We developed a model store, which is a Copilot plug-in.

Alongside the other initial use cases, these were our first opportunities to start getting our hands dirty and really understand how it works. Finally, this central catalyst team works intensively with the divisions to understand the use cases and focus on the maximum ROI we can get, the maximum value or output. It’s one thing to be excited, but of course you want to make sure it’s still building toward your strategy, that it’s a focused approach with maximum value contribution.

MM+M: How did you define what the AI output is?

Steinacker: We always look for value and ROI – again, what can we ultimately do to provide the best products and solutions for our customers, patients and farmers? What’s ultimately the value we deliver there? Of course, we always look at classic KPIs: top line, bottom line, what is it actually that we are moving here? 

It’s important that we are clear on what’s the problem we’re solving and how we ultimately deliver against that. Otherwise, it becomes like experimentation without any goal, which is not what we want.

MM+M: Which brands are using it and how, and how did you select where to use it? How infused is it across the organization?

Steinacker: The way we think about it is across our entire portfolio at Bayer. Its application is not limited to a single brand where we say, “This brand uses it, this doesn’t.” We take an end-to-end approach – its applicability across the business – with the ultimate target being hyper-personalization at scale, better targeting and creativity. 

Brands where we specifically see a lot of value in using genAI are those with a higher share of data and e-commerce, where a lot of assets are required. This is an area where AI-generated creative content could enable us to produce those assets quickly for our campaigns.

A second example is BioGPT, a genAI tool for biomedical text generation and mining, which increased the productivity of our medical affairs team almost a hundredfold. By extracting and analyzing data from clinical trials, the tool accelerated the process by which we understand the right health outcomes and match them to molecules that can address what consumers ultimately need.
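
(Editor’s note: BioGPT, referenced above, is a model from Microsoft Research trained on PubMed abstracts; a public checkpoint is available on Hugging Face as microsoft/biogpt. The sketch below shows generic biomedical text generation with that public checkpoint. It is illustrative only and does not reflect Bayer’s internal pipeline or prompts.)

```python
# Minimal sketch: biomedical text generation with the public BioGPT
# checkpoint (microsoft/biogpt).
# Requires: pip install transformers sacremoses torch
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Seed the model with a clinical-trial-style prompt; it completes the text
# in the biomedical register it learned from PubMed abstracts.
prompt = "The primary endpoint of the phase III trial was"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```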

These are genAI examples. I can point to many other uses across Bayer that are more on the AI level: AI in medical imaging to power more accurate and timely diagnosis for patients; AI for demand planning to generate accurate forecasts in consumer health; and AI in R&D to help our agriculture division make crop-related discoveries. 

MM+M: Marketers have been using AI for years (chatbots, IVAs, social media sentiment analysis, etc). Did you draw on past experiences or was this an effort to start from scratch?

Steinacker: We can draw on past experience. We have been using AI at Bayer for years now, in AI products and chatbots that use machine learning, across many of our divisions.

Of course, we always check and see how we can adapt innovative technologies as much as possible and use them across our value chain. So it’s not that this [started with] generative AI. Every time something new comes, we’re able to draw on our past. 

As we apply AI and data/analytics at various stages of the funnel, from customer awareness to engagement all the way down to purchase, we always work with our agency partners or Google to help us understand the behaviors. Then we ask, “How can we personalize the messaging and content, as well as the product?” We’ve seen double-digit percentage growth in sales conversion there.

Now with genAI, one comment I want to make is “fit-for-purpose.” There’s a lot of potential, but you don’t always need genAI as a tool. You could use a smaller AI method, rule-based automation or something else. When thinking about cost as well as sustainability, it’s important to choose the right solution and make sure that you’re solving for that and not just throwing genAI at everything. 

MM+M: How did you get buy-in? Was there a need for a lot of education? Also, how did you deal with AI skepticism, and did this differ between your R&D and commercial colleagues?

Steinacker: In general, people are excited about the potential, but we still do see some skepticism when it comes to the actual output and accuracy. 

One key element for us was the catalyst team, composed of representatives from across the business and functions. These are people who are well-versed in what they’re doing. They were fully on-board. People said, “OK, I trust this team that they know what they’re doing.” 

We basically said, “OK, how can we work with the cross-functional team?” That meant engaging with our scientific community when they had concerns regarding inaccuracy, contradictions to references or data discrepancies, for instance.

Next, we integrated reputable data sources, like PubMed. We also made sure to do manual look-ups and double checks. We work very actively with the scientific community as well as with marketers.
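
(Editor’s note: as an illustration of what “integrating reputable data sources, like PubMed” can look like in practice, the sketch below pulls supporting citations from NCBI’s public E-utilities API. The endpoint is real; the surrounding grounding workflow is a generic assumption, not Bayer’s actual system.)

```python
# Minimal sketch: fetch PubMed IDs that a human reviewer can use for the
# "manual look-up and double check" of an AI-drafted claim.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_ids(query: str, n: int = 5) -> list[str]:
    """Return PubMed IDs for the top-n articles matching a query."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmax": n, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# A reviewer checks the returned citations before the claim is approved.
print(pubmed_ids("aspirin low-dose cardiovascular prevention"))
```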

The marketers embraced it. They pointed out the need to make sure we have purposeful content versus just low-quality material; digital image generation has produced so much content now because it’s easy. They also brought up the need for diversity and inclusion, so that we’re diverse in our thinking and in the way we produce the content.

We informed our general staff in a variety of ways and communicated on a broad scale. We held sessions, built communities and offered many training opportunities. People were so excited that an article posted to our intranet – “What is genAI?” – almost broke all records.

They said, “Oh my god, genAI is coming!” [laughs] It’s one thing to have people excited, but that was also the moment where we had to introduce training and, of course, some guardrails. 

MM+M: In terms of upskilling marketers, how did you ensure a basic understanding of AI concepts in order for them to leverage the AI tools and technologies you were integrating?

Steinacker: Education is key, as is shaping the experience. We ultimately want people to use this in their daily life, so we had different approaches and different sessions. 

For example, “GenAI for marketers” included tips on prompt engineering and information about our policies. They’re very focused sessions. We have an internal IT academy, which is comprehensive when it comes to AI-related skills and expertise, and we’re working with top-notch universities externally.

We’ve also organized larger AI sessions, one of which attracted over 1,000 people. It’s been interesting to see the huge interest in what we do there. We talk about insights into the technology and our AI initiatives, and we touch on ethics and governance, but we also offer hands-on learning opportunities.

It’s important for people to understand how AI prompts work. These sessions also explain our guidelines, being clear that, yes, this is an exciting thing, but what can you do and what can you not do?

In addition, at the executive-committee level, we had many sessions where we introduced the external lens – the outside-in view – to understand what’s possible with genAI and how it links to our strategy. We did some systematic upskilling of our top 200 leaders with dedicated sessions, and encouraged everyone in the broader population via the communication measures I mentioned earlier.

MM+M: Let’s get to the safeguards. Which ones did you put in place to ensure it’s used ethically and then within legal and security frameworks, especially with external partners like agencies tapping in?

Steinacker: Overall, we have defined guiding principles, including those specifically for external AI tools. We want to ensure we have comprehensive guidance on data privacy, intellectual property, cybersecurity and compliance measures, and that people understand what is the correct behavior so that we avoid the risk. We launched that pretty early in the journey. 

This was well-received and was part of the intranet article. It was helpful for people to understand, again, “What can I do versus not do.” At the same time, we minimized any risk for our company.

On top of that, for our internal tools, we also implemented robust lifecycle management for AI applications. That’s something we put a lot of emphasis on. Of course, everything is built within a secure environment; that’s a given.

MM+M: Were there any other impediments or roadblocks that you encountered – i.e., mindset, lack of data science skills, etc.?

Steinacker: You need to have the right skills on-board. We learned we needed to have more people who are data savvy, so we opened a digital hub in Warsaw in July 2021. We doubled down on that and are currently ramping it up. 

We’ve already hired 300 IT experts, mainly software engineers: full-stack engineers, data scientists with expertise in machine learning, and people covering the whole AI and operations research space. The hub works across our three divisions and is part of our global network. Without the right skills, you just have tools and no one who can actually work with them.

Speaking of mindset, we strongly believe that what brings the best AI products is having diverse teams working on it. Also, when it comes to that, we tell people that it’s not only about “I’m doing something and now I use AI to do the same thing.” We want them to think about how they can leverage AI or any of these opportunities to change processes and ways of working – fundamentally re-imagining that.

In other words, the old mindset was, “I want to do something analog; now I do it digitally.” This is saying, “Can we do something better, faster or just with more value, really looking end-to-end?”

I’ll give you a marketing example: It doesn’t help if you say, “I’m now automating the MLR process” when you don’t look at content creation holistically from a marketer’s perspective. When you optimize it end-to-end using AI tools, for example generative AI, that’s where the value comes in.

MM+M: How do you prove the ROI?

Steinacker: You start with what’s the value for those we are serving – patient, consumer, farmer and so on. We measure the ROI in terms of top- and bottom-line contribution, because that’s ultimately what you want to solve for. 

Thinking about the top line, you can run A/B tests of genAI campaigns and measure a campaign’s effectiveness in terms of its contribution to actual product purchases or increased basket size. For the bottom line, it’s more around efficiency.

In terms of our campaign example, we could track what the campaign costs to create and how long it takes, and whether genAI can speed up the process or make it more cost-efficient while leading to the same result. This one is also from the marketing area, but we apply the same thinking across the board.
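
(Editor’s note: the top-line readout Steinacker describes can be formalized as a standard two-proportion test on conversion rates between a control arm and a genAI-assisted arm. The sketch below is generic and uses made-up numbers; it is not Bayer’s measurement stack.)

```python
# Minimal sketch: A/B readout for a genAI campaign via a two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (absolute lift of arm B over arm A, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: control converts 480/20,000; genAI arm 560/20,000.
lift, p = ab_test(480, 20_000, 560, 20_000)
print(f"lift = {lift:.4%}, p = {p:.4f}")
```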

MM+M: In the future, what level of AI competency do you think will be expected of marketers?

Steinacker: First of all, end-to-end thinking is relevant. Marketers will be more accountable for the whole process as well as the content. They need to understand the relevant genAI tool landscape. That’s something they should be up to speed on, including prompt engineering, understanding the ethical aspects and how to navigate genAI accordingly. It’s important, as well, to understand the shortcomings the technology can have.

MM+M: Would that entail making sure that somebody is manually checking it to a certain extent to make sure that the information that’s coming out is not an artifact or blatantly fake?

Steinacker: Exactly. Hallucinations can happen with genAI. You need to double check and fact check what it’s producing. Then there’s the whole privacy and regulatory piece. You need to understand how the data was obtained and make sure there’s no copyright infringement. 

The last thing, which is more of a general comment, is that marketers need to understand the cost of using these tools. Not only the licenses; the AI prompts themselves also come at a cost. As you’re planning your campaign, if you’re using these tools, you need to know how to have them in the mix and manage them sustainably.

MM+M: One final question: Do you have any overarching advice to others looking to set up pilots and establish a roadmap for genAI’s use in commercial operations?

Steinacker: Here are my top four.

First, start by focusing on the high-value use cases and link them to the strategy.

Second, select a fit-for-purpose solution, which can mean balancing off-the-shelf versus internal tools for the same project.

Third, understand the full cost implication, because otherwise you’ll build something and may be very surprised with the cost.

Fourth, and this is especially relevant when it comes to AI: use high-quality data. That means security and privacy, and it’s also where bias plays a big role. Make sure you don’t have bias in the data, because ultimately all your tools and everything you’re doing depend on the data. That’s the make-or-break of everything you’re doing in this area.