In 2018, the race to identify digital biomarkers is on. Karger Publishers defines them as “objective, quantifiable physiological and behavioral data that are collected and measured by means of digital devices such as portables, wearables, implantables, or digestibles.” Academics, pharmas, payers, and big tech are actively collecting and studying this entirely new class of information to understand, influence, or predict health-related outcomes.

“Objective data captured through digital devices can be wide-ranging and not traditionally viewed as health information. However, the intentionality of analysis can make many types of data fall into the realm of health or healthcare,” says Carlos Rodarte, founder and managing director of digital health consultancy Volar Health.

This means learning whether how you type and tap on your smartphone can indicate your mood, whether your likelihood of taking your meds rises or falls based on your engagement with your Facebook account, or whether data from a smart blood pressure cuff can predict that you will develop a comorbidity.

This data can be collected in any number of ways, from targeted studies to “data in the wild” — large-scale sets of data collected without a study intent, but that can retrospectively be mined to identify correlations.

The potential payoff is not insignificant. In Rock Health’s seminal report, The Emerging Influence of Digital Biomarkers on Healthcare, use cases include optimizing clinical trial recruitment, introducing targeted interventions, and individualizing medical policy plans.

But in a world where any data can be considered health data, how are organizations navigating ethical boundaries?

While the term digital biomarker may be new, the concept of digitally enabled behavioral medicine is not. In 2013, Dr. Camille Nebeker’s study had all of the necessary ingredients: willing participants, informed consent, wearable devices — including a camera — and control of the data. Yet the Institutional Review Board (IRB) felt it was too risky.

All human research has risks, and it is the role of the IRB to determine if the benefits outweigh said risks, and if so, what protections should be put in place for human subjects.

As it turns out, the board’s primary concern wasn’t the safety of the participants, but rather the impact on bystanders who might have been in the vicinity and whose images might have been captured by the camera. It occurred to Nebeker, assistant professor at the University of California, San Diego, that being at the very front end of new ways of doing research presented challenges.

An Absence of Standards

Researchers want the tech because it captures information they need. But they don’t know how to address questions of collection, storage, and security. “The questions became, ‘What should we do with the 30,000 images we capture from every person during the study?’ and ‘What do we do with the GPS data?’” Nebeker recalls.

It’s this part — the evolution of informed consent — that presents one of the larger questions for researchers. When your digital footprint also includes information about your family or your social network, what are the protections for these individuals who did not consent?

In 2015, Nebeker’s response was to launch the Connected and Open Research Ethics (CORE) initiative at the University of California, San Diego, funded by the Robert Wood Johnson Foundation. By bringing researchers, ethics board members, technologists, and stakeholders together, CORE aims to build a body of best practices for the use of mobile imaging, pervasive sensing, social media, and location tracking in research.

In the absence of standards, Nebeker is forging a new path for academia. But she is quick to add that new standards are also required for private-sector companies pursuing digital footprint studies, pilots, or trials.

Pharma has rules and regulations, notes Nebeker. “It deals with the FDA. It is accustomed to HIPAA compliance because it is dealing with health data with a patient under care.” So while there is a learning curve, the basic ethics principles are in place.

A number of experts agree this is a place where pharma gets it right. Craig Lipset, head of clinical innovation at Pfizer, shared best practices reflective of an environment that is thoughtful about the deployment of new tech. He suggests limiting data capture, reducing collection to the essential.

“I would avoid using a device that is capturing 20 different types of data if the research team’s need is just for five,” explains Lipset.

“For example, basic actigraphy devices are by far the most common sensor in research studies today for high-fidelity data with a focused purpose.” It is worth noting he considers location data a slippery slope to avoid.

Lipset is also mindful as to how his organization will need to evolve to adapt to changes in tech. He points to the convergence of research-grade and consumer devices. As consumer devices increase in sophistication, studies may move toward using a patient’s own device. “But if it is their own device, then it is really the patient sharing their data with us — not the other way around,” he says.

“It will be interesting to see what consent and permissions look like when people have full control over their data, and how granular patients will want those controls and permissions to be,” adds Lipset.

“Informed consent has a long way to go these days,” agrees Nebeker.

A greater area of concern is the lack of standards within big technology companies, or among those that make use of their data. “Big tech companies are accustomed to capturing a lot of data about individuals, but the moment it becomes ‘health’ data, there are many considerations to account for,” says Rodarte.

As researchers tap into the big data opportunities presented by all of our digital traces — from keystrokes to search data to social networking — they should have agreed-upon standards of practice to guide big data research of human subjects. Some even go so far as to suggest big tech could learn something from pharma.

“As tech enters more and more into health, it will better understand the liabilities and risk that traditional research and healthcare stakeholders have faced,” notes Rodarte.

Nebeker shared the example of a big tech study within an assisted living center to understand cognitive decline. “You know the steps to the toilet, when it flushes. The whole home is sensored,” she explains. When she inquired about the IRB’s stance on resident protections and informed consent, Nebeker says the answer was, “Well, we have a legal team. There is no IRB.”

Between Smoking Jackets and Lawyers

Historically, ethical concerns were the domain of classically trained philosophers and lawyers. For Bray Patrick-Lake, director of stakeholder engagement at Duke Clinical Research Institute, this presents a problem, as there is a big disconnect between those who determine which research moves forward and those who are most likely to benefit.

“As the quality of life decreases, you are willing to make trade-offs. That needs to be considered,” Patrick-Lake says, knowing some patients are willing to say, “Take my data, or I will die.”

But who gets to make that decision in the digital age? “This is not for someone in a smoking jacket, by a fire, to think about what the risks might be,” she adds.

This is not to say research should not have guidance and boundaries, or that the intellectual-philosophical perspective is not welcome.

For Patrick-Lake, it’s about bringing tech into the research space and making sure it is not bogged down with regulatory issues. “But we also need not ignore the history of bioethics. We do need to think through the unintended consequences,” she says. One way to do this is for IRBs to consider the point of view of real people.

“We are going to have to live with this period of discomfort, but we need to proceed thoughtfully. And we need to return to human-centered design,” adds Patrick-Lake, whose work is centered around the engagement of patients, their families, and their communities.

Duke has a bioethics and stakeholder lab that does empirical and formative research. Patrick-Lake describes the importance of informal conversations, roundtables, and town halls, as well as formal research on patient preference in this new digital age.

This is also an approach taken by the Precision Medicine Initiative, where Patrick-Lake serves as co-chair of the National Institutes of Health Advisory Committee to the Director working group. She also believes that pharma, while making great strides in this space, would benefit from engaging and educating stakeholders, but to date, the industry has only “put its toe in the water.”

Patrick-Lake also points to the inevitable evolution of the IRBs themselves. One suggestion is to include more “real people” who mirror the population being served, a sentiment shared by Ifeoma Ajunwa, who holds both a J.D. and a Ph.D. and whose work is at the intersection of ethics and AI.

Ajunwa, who is on the faculty at Cornell Law School and Cornell University’s School of Industrial and Labor Relations (ILR), points out that IRBs and the larger ecosystem have a responsibility to include the voices of women and other underrepresented populations. Case in point: voice-enabled tech.

“If the people who did the testing are focused on a technological effectiveness 95% of the time, that’s great, tech-wise. But if 10% of people are excluded, and they are all of one category, that’s really bad.”

Furthermore, IRB members will need to include people who are trained in health, tech, and ethics, which Patrick-Lake suggests would be a good focus for a graduate program. DJ Patil, former U.S. chief data scientist, takes it a step further: “We need to get to a place where every technologist is trained in ethics and security,” he says. “Ethics can’t be a side class. It needs to be core curriculum.”

Patil is particularly concerned about the lack of attention to security, pointing out that healthcare data breaches are just the tip of the iceberg.

As devices are further embedded into our lives, security flaws can create greater risks for humans, including manipulation of actions and the data itself.

To even consider that scenario, ethicists need to have a greater understanding of new models, including the role of machine learning. “Just because we can, doesn’t mean we should,” cautions Patil.

Will Ethics Make or Break the Industry?

Setting new standards, evolving the makeup of IRBs, and attaining new skills are all works in progress. Patrick-Lake believes we are heading in the right direction, pointing to Genetic Alliance’s rare disease IRB, noted for its inclusion of experts in health, tech, and ethics.

Nebeker is encouraged by big tech’s recent efforts, relaying that Facebook has built a review process and that Microsoft Research has formed an IRB.

But change is hard. Nebeker often advises scientists that ethics isn’t in their wheelhouse and that they would be wise to add an ethics specialist to the grant application with secondary aims of addressing security, informed consent, and participant access.

“It is a foreign concept,” she shares. Even when researchers agree, they push it to the back burner, treating it as an amendment to address after they have won the grant.

Even those who spend considerable time weighing ethics find the waters are still muddy. “It’s confusing,” acknowledges Patil. “There isn’t clear guidance as to what you are allowed to do with this data and who has access to it.”

He has also noted the implications of informed consent in a more connected society with machine learning capabilities. He offers this scenario: If everyone on your street opts in and gives up data about their households, they have consented. If you are the only one on the street not to consent, you are still effectively opted in, because your neighbors’ data allows inferences to be made about your household.

This presents a challenge to the corporations with the potential to translate groundbreaking research into healthcare. With the arrival of big data capabilities and a focus on innovation, Ajunwa explains companies have done a good job of building teams to pilot and test new capabilities. “The next stage,” she continues, “is bringing in techno-ethicists to think through the ramifications of decisions.”

While chief privacy officers, lawyers, and biomedical panels abound, Ajunwa says their domain is too narrow. She recommends an in-house tech ethicist. While she is not aware of any full-time employees with this title, she notes it’s not uncommon for large organizations to bring in consultants.

But Ajunwa sees this as a stopgap, and often one brought in only after a crisis. She warns against losing sight of business basics when chasing shiny objects in search of short-term gain: “If you want long-term success, whether you are in tech or pharma, you need to remember the business basics of goodwill. Ethical issues will kill you if you ignore them.”

In addition to developing the skills across the organization, Ajunwa suggests companies need stronger tech ethics at the highest level — the corporate board. Uber is a cautionary tale of what happens when no one is responsible for pausing, considering ethical issues, and formulating a strategy to protect the client base and the company.

“Companies need someone at the board level,” she says. “Each quarter, they [need to be] reviewing policies, pointing out what looks problematic, and catching small fires.”


Sara Holoubek is CEO of Luminary Labs. 

(Luminary Labs has a commercial relationship with Pfizer.)