Small thoughts on large cohorts

[article index] [@mattmight] [rss]

I attended the NIH workshop on the large (million+ volunteer) research cohort outlined in the President’s address on Precision Medicine.

My high-level reflection is that it’s time for a new model in cohort research:

To operate at the desired scale and at the desired cost per participant, the large national cohort must switch from a “pull” model of medical research to a “push” model of medical research, and it must reimagine participant engagement.

As a rare disease parent, I’m excited about the possibility of using this cohort to find what NIH Director Francis Collins called “resilient” individuals: participants who, according to their genomes, should exhibit a disease phenotype, yet do not.

These individuals will point the way to therapeutic options for many diseases, rare and common.

As a computer scientist, I’m excited about opportunities for making advances in data science, human-computer interaction, mobile computing, and security and privacy in the service of this initiative.

Read below for my notes and reflections, including:

  • “push versus pull”;
  • “Bring Your Own Genome”;
  • the “cloud of crowds”;
  • false dichotomy: de novo versus cohort of cohorts;
  • gamifying engagement;
  • fully federated, peer-to-peer EHRs;
  • “low friction” mobile health;
  • scientifically literate patients; and
  • why we need the children: the human knockout project.

Pull-based research

The pull-based cohort model is researcher-centric.

The researcher designs the study.

The researcher pulls in participants.

The researcher pulls in – and controls – the data.

Push-based research

A push-based model of cohort research is participant-centric.

In a push-based model, participants will own their own data, a view articulated well by Sharon Terry, Anne Wojcicki and several others.

Participants would opt to push their data to the researcher.

Participant’s perspective: Cohort as “app store”

To pull off the push model, the interface to the national cohort might feel more like an “app store” than a cohort in the traditional sense.

Participants could choose among different studies for which they qualify.

Entering a study might feel a lot like installing an app, except that instead of requesting permissions for contacts or SMS, it would request access to prior lab records, genetic data and an annual blood draw.

In addition, each study would have to clearly articulate the value proposition to participants: What would a study participant (or society) gain from participating?
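
As a concrete sketch of the idea, a study listing in such an “app store” might carry a manifest of requested data permissions that gets checked against what a participant has opted to push. Everything below (the study, the field names, the permission labels) is hypothetical, written in Python for illustration.

    # A minimal sketch of an "app store"-style study listing.
    # The study, field names and permission labels are all hypothetical.
    study_manifest = {
        "study_id": "sleep-and-glucose",
        "title": "Sleep patterns and glucose variability",
        "requested_permissions": [
            "prior_lab_records",
            "genomic_data",
            "annual_blood_draw",
        ],
        # The value proposition shown to a prospective participant.
        "value_proposition": "A quarterly report on your own sleep/glucose trends",
    }

    def can_enroll(participant_grants, manifest):
        """Enrollment succeeds only if the participant has agreed to push every
        kind of data the study requests -- analogous to an app permission prompt."""
        return all(p in participant_grants for p in manifest["requested_permissions"])

    # A participant who pushes lab records and genomic data, but not blood draws:
    print(can_enroll({"prior_lab_records", "genomic_data"}, study_manifest))  # False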

Toward this end, the cohort becomes a platform.

PEER (the Platform for Engaging Everyone Responsibly) is an example of what such a platform could look like.

Researcher’s perspective: Cohort as API; Cohort as “cloud of crowds”

Anne Wojcicki also pointed out the importance of APIs.

If the cohort platform exports an API, then anyone (in theory) could design and propose a study.

From the researcher’s view, the cohort becomes a hybridization of both cloud computing and crowd-sourcing:

The large cohort operates like a “cloud of crowds,” to which one dispatches studies in the same way that we currently dispatch computation to Amazon’s cloud or human tasks to its Mechanical Turk.

Critically, the large cohort will inherit the advantages and disadvantages of both frameworks.

And, unlike Mechanical Turk, which has one largely homogeneous crowd, the cohort comprises many heterogeneous crowds.
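
To make the analogy concrete, here is a rough sketch of what dispatching a study through such an API might look like. The endpoint, payload shape and eligibility filter are assumptions for illustration, not a real interface.

    # Hypothetical sketch: submitting a study proposal to a "cloud of crowds" API.
    import json
    import urllib.request

    def propose_study(api_base, study):
        """POST a study proposal to the (hypothetical) cohort platform."""
        req = urllib.request.Request(
            url=f"{api_base}/studies",
            data=json.dumps(study).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    study = {
        "title": "Resilience despite an expected disease phenotype",
        # Because the cohort is many heterogeneous crowds, a proposal targets
        # a subpopulation through eligibility filters rather than "everyone."
        "eligibility": {"genotype": "pathogenic_variant_X", "phenotype": "unaffected"},
        "tasks": ["annual_survey", "blood_draw"],
    }

    # response = propose_study("https://cohort.example.org/api/v1", study)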

Bring your own genome

There seems to be agreement about the need to define the “minimum viable participant” to gain entry to the cohort.

Beyond some genomic data, I don’t sense any broad consensus yet on what the minimum data set required for participation would be.

My preference is for an inclusive model, in which anyone willing to “Bring Your Own Genome” would be admitted as a participant, and those without sequenced genomes could apply for sequencing slots to gain admission.

False dichotomy: A de novo national cohort versus cohort of cohorts

There seem to be two camps arguing whether the cohort should be assembled out of existing cohorts or whether the national cohort should be constructed from scratch.

From the discussion, these two sides seemed in opposition.

I see no inherent conflict in taking both approaches simultaneously.

If there is a concern about unrepresentativeness or bias from aggregating existing cohorts, it seems to me that the designer of a study is best suited to decide whether it’s a valid concern for them.

Researchers will certainly need the ability to filter for qualified participants, and filtering out members assimilated from other cohorts seems like a reasonable thing to do when the science dictates.

For studies where it’s not a concern, the science gets to move sooner.
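
As a small, hypothetical illustration, a study designer worried about bias from assimilated cohorts could simply filter on provenance when selecting participants; the field names here are invented.

    # Hypothetical sketch: keep only participants who enrolled de novo when the
    # science requires it; otherwise, use the whole cohort. Field names invented.
    def de_novo_only(participants):
        return [p for p in participants if p.get("source_cohort") is None]

    participants = [
        {"id": 1, "source_cohort": None},               # enrolled directly
        {"id": 2, "source_cohort": "legacy-cohort-A"},  # assimilated
        {"id": 3, "source_cohort": "legacy-cohort-B"},  # assimilated
    ]
    print([p["id"] for p in de_novo_only(participants)])  # [1]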

An end to the IRB? “Precision IRBs”?

The bottleneck to a scalable “cloud of crowds” approach would be IRB approval of a proposed study.

I can imagine that if the cohort API is sufficiently formal, it could be possible to certify algorithmically that some studies pose either no risk or only well-defined risks to participants.

We may never want to take humans entirely out of the loop for IRBs, even in the simplest cases, but perhaps we can make it easier to review proposals.
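
As a toy sketch of what “certifying algorithmically” might mean: if proposals were expressed in a machine-readable form, a checker could verify that every requested procedure falls inside a pre-approved, minimal-risk envelope before (or instead of) full human review. The categories and policy below are invented.

    # Hypothetical sketch: a machine-checkable risk policy for study proposals.
    MINIMAL_RISK = {"survey", "existing_record_query", "passive_wearable_data"}

    def fast_track_eligible(proposal):
        """True if every procedure in the proposal is pre-certified as minimal risk;
        anything else is routed to a human IRB."""
        return all(proc in MINIMAL_RISK for proc in proposal["procedures"])

    print(fast_track_eligible({"title": "Sleep survey",
                               "procedures": ["survey", "passive_wearable_data"]}))  # True
    print(fast_track_eligible({"title": "New compound",
                               "procedures": ["investigational_drug"]}))             # False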

There is also a contingent arguing that IRBs may no longer be necessary at all – that participants can become their own “precision IRB” when choosing to join a study.

This seems challenging to pull off.

What does seem clear is that a traditional route for IRB approval may not scale.

Sustaining engagement

Sustaining engagement efficiently is going to be one of the biggest challenges of the large cohort effort.

Participant engagement is part of the mandate for my working group (white paper), and Pearl O’Rourke nicely summarized the challenges and opportunities in an engaging presentation.

Engagement through gamification

My view is that the time is right to apply gamification to the problem of sustaining engagement in a large cohort.

Large segments of the multi-billion-dollar gaming industry have become entirely dependent on being able to quantify, sustain and enhance engagement.

The field of gamification is turning serious academic inquiry toward understanding how mundane, tedious or even unpleasant tasks can be restructured into psychologically rewarding “games.”

If people are willing to spend hours growing corn that doesn’t exist or flinging fake birds at pigs, why can’t we get people to spend a few minutes on a task that would benefit their health and science in general?

I think we can.

Engagement through peer-to-peer communities

There is a sense that prior cohort studies succeeded by leveraging the sense of identity that came from belonging to a specific group.

A sense of group identity sustains engagement.

Since there is no defining characteristic for the cohort as a whole, pairing participants with peers would provide a sense of community – synthetic peer networks.

Participants could also receive real-time feedback (and implicit peer pressure) on where they stand relative to that community in terms of engagement.

In fact, I don’t think it matters much what is used as the basis for constructing a synthetic peer network (e.g., zip code, hair color, last four digits of a phone number). Once people have been grouped, it becomes an identity and a community.

I predict that being able to measure and report “engagement” relative to peers – and how one peer network compares to others – will create a friendly spirit of competition within and between groups.
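
A sketch of how synthetic peer networks and relative-engagement feedback might be computed; the grouping key and the scoring are arbitrary choices made up for illustration.

    # Hypothetical sketch: group participants into synthetic peer networks by an
    # arbitrary shared attribute, then report engagement relative to the group.
    from collections import defaultdict
    from statistics import mean

    def peer_groups(participants, key=lambda p: p["zip_code"][:3]):
        groups = defaultdict(list)
        for p in participants:
            groups[key(p)].append(p)
        return groups

    def relative_engagement(participant, group):
        """How far above or below the peer-group average this participant sits."""
        return participant["engagement"] - mean(p["engagement"] for p in group)

    participants = [
        {"id": 1, "zip_code": "84101", "engagement": 12},
        {"id": 2, "zip_code": "84102", "engagement": 7},
        {"id": 3, "zip_code": "30301", "engagement": 9},
    ]
    groups = peer_groups(participants)
    print(relative_engagement(participants[0], groups["841"]))  # 2.5 above the peer average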

“Lastly, cybersecurity.”

I would not expect (or want) cybersecurity to dominate discussion at a meeting like this, and I’m glad it was acknowledged.

I do disagree with the working group report that claimed that utilizing and maintaining the state of the art for cybersecurity would be sufficient.

In cybersecurity, the state of the art is sufficient for no one.

When it comes to mobile collection of sensitive health data, the status quo is hopelessly inadequate.

A cohort of this size and data richness represents an attractive target for cybercriminals (and even hostile nation-states), and it will end up under attack from determined adversaries.

Among other security efforts, I’ve spent the past three years as the PI on a DARPA project geared toward transformative techniques to secure mobile phone applications for the military.

My sense from that experience is that without proper attention to cybersecurity throughout the construction of the cohort infrastructure, it would only take a handful of individuals to pose a large risk to the entire initiative.

Privacy and security

It eventually became clear to me that when people in the medical research community refer to “security,” they’re actually referring to privacy policies and technological protocols for privacy rather than what cybersecurity experts think of as security.

When I think of security, I’m thinking of architecting and designing a system to be resistant (or even impervious) to an aggressive attack.

I think it will be important to consider software development practices for the many apps and databases being proposed, and the strictness of those practices needs to be proportional to the sensitivity of the data handled.

Federated versus centralized EHRs

Another key to the success of the large cohort will be access to electronic health records (EHRs). If patients have to report their health records manually, the system will collapse under the burden placed on participants.

But there is no standard for electronic health records and their interoperability (yet).

Centralizing and standardizing health records is an engineering (and political) challenge that would likely exceed the total effort of the entire precision medicine initiative.

I now accept that a federated model of distributed, heterogeneous EHRs is the only feasible model for the cohort in the near term.

A standard for queries versus a standard for records

Once we accept that EHRs will be federated and heterogeneous, the practical solution becomes to design a common standard for querying databases of EHRs.

Since most databases already export a query language, writing a query translator for each system will likely be the most efficient, feasible and rapid solution in terms of engineering effort.

(In fact, a query translator is so close to my original field of research that I’d be tempted to work on it myself.)

Compared to a standard for EHRs themselves, a common standard for querying EHRs is more feasible.

And, queries can be distributed across a federation of databases more easily than collections of entire EHRs can be pulled across the network.

From a security and privacy perspective, federated queries also mean that information is provided on only a “need to know” basis: if a researcher doesn’t need age and zip code, they won’t get to see age and zip code.
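
To make this concrete, here is a rough sketch of a common, declarative query form being translated into one site’s SQL dialect, with the projection limited to only the fields the researcher requested. The query structure and field names are assumptions for illustration.

    # Hypothetical sketch: one common query form, translated per site and
    # projected down to only the requested ("need to know") fields.
    def to_sql(query):
        fields = ", ".join(query["select"])                       # need-to-know projection
        conditions = " AND ".join(f"{k} = ?" for k in query["where"])
        return (f"SELECT {fields} FROM {query['from']} WHERE {conditions}",
                list(query["where"].values()))

    query = {
        "select": ["hba1c", "medication_list"],   # note: no age, no zip code
        "from": "ehr_observations",
        "where": {"diagnosis_code": "E11"},
    }
    sql, params = to_sql(query)
    print(sql)     # SELECT hba1c, medication_list FROM ehr_observations WHERE diagnosis_code = ?
    print(params)  # ['E11']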

In the limit: Peer-to-peer EHR networks?

I pondered with other participants the possibility of taking the federated model all the way to the limit: allowing individual participants not only to own their EHRs but also to hold that data themselves.

While the infrastructure to do this doesn’t exist yet, it’s conceivable that third-party EHR brokerages could make this manageable.

Taken to the limit, individual participants would even have the capacity to review and approve the requested queries and their results before consenting to participation.

Making participants the brokers of their own data also puts it in the hands of the one person most likely to spot errors in the data: the participant herself.

I don’t think a model of fully distributed EHRs is practical in the short run, but it would provide significant advantages in the long run.
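
To sketch the idea anyway: in a fully peer-to-peer model, the participant (or a broker acting for them) would sit in the query path, reviewing each request before any data leaves their hands. The structures and the approval callback below are invented for illustration.

    # Hypothetical sketch: each query is routed to the participant (or broker),
    # who approves or declines it before any fields are released.
    def answer_query(query, my_record, approve):
        if not approve(query):           # the participant reviews the request
            return None                  # declined: nothing leaves the device
        return {field: my_record.get(field) for field in query["select"]}

    my_record = {"hba1c": 5.4, "medication_list": [], "age": 41, "zip_code": "84101"}
    query = {"study": "diabetes-remission", "select": ["hba1c", "medication_list"]}

    # Here the "approval" is a permissive lambda; in practice it would be an
    # interactive prompt or a standing policy held by a third-party EHR brokerage.
    print(answer_query(query, my_record, approve=lambda q: True))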

Low friction mobile health

There is understandably a big emphasis on mobile health technologies for data collection.

The high penetration of smartphones and their ability to record and transmit data could add a layer of richness never before possible at scale.

As luck would have it, I created one of the first mobile health apps for the iPhone for my son years ago, and I’m a regular user of such apps for tracking diet.

My experience as a developer and user of mobile health apps is that they must be “low friction”: the interaction cost of collecting a datum must be inversely proportional to the frequency with which data is collected.

For example, a nutrition-tracking app that merely requires you to take a picture of your food (which might then be mechanically turked into its nutrients) is lower friction than an app that requires you to manually break down the calories and nutritional content of each food item.

For the app I created to log my son’s medically relevant events (of which there could be 5–10 per hour), I made it so that adding a regularly occurring event took no more than a couple of taps from the time my phone left my pocket, and adding an event that was similar but not identical to a previous one took only a few more.

That ease of entry made it possible to manually collect thousands of data points for subsequent data-mining.
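
A sketch of the low-friction principle: keep the previous event one step away, so a recurring event is a single call and a near-duplicate is a copy of the last event plus a small tweak. The event names and fields are made up.

    # Hypothetical sketch of low-friction logging: recurring events take one "tap"
    # (one call), and near-duplicates reuse the previous event with a small change.
    import time

    log = []

    def quick_log(event_type, **details):
        """Append a timestamped event with minimal required input."""
        log.append({"t": time.time(), "type": event_type, **details})

    def log_like_last(**changes):
        """Re-log the previous event with small modifications (a couple more taps)."""
        prev = {k: v for k, v in log[-1].items() if k != "t"}
        prev.update(changes)
        quick_log(prev.pop("type"), **prev)

    quick_log("symptom_episode", duration_s=20)   # the common case: one tap
    log_like_last(duration_s=35)                  # similar, but not identical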

Finally, because there already exists a thriving ecosystem of health apps, any apps used through the cohort network must be competitive in usability with apps already available on the market.

Aside: Creating engaged patients and patient advocates

After speaking with Claudia Williams, Andrea Downing and several patient advocates in attendance, it’s clear that we need more scientifically literate patients engaging researchers.

To be clear, there are such patients and patient advocates out there, but there are not enough.

Ultimately, patient advocates must be able to ask two questions about any proposed research and to judge the answers:

  • What is the probability of success?

  • What is the impact on patients in the event of success?

If patients can do this, then patient advocates and organizations can manage rational portfolios of research for their disorders.

If patients cannot answer these questions, they may fall prey to academics who are better at marketing themselves than at translating their work.
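
As a back-of-the-envelope illustration of what a rational portfolio could mean, answers to those two questions let an advocacy group rank proposals by expected impact. All of the numbers and project names below are invented.

    # Hypothetical sketch: rank proposed projects by expected impact,
    # i.e., probability of success times impact on patients if it succeeds.
    proposals = [
        {"name": "drug repurposing screen", "p_success": 0.30, "impact": 6},
        {"name": "new animal model",        "p_success": 0.80, "impact": 2},
        {"name": "gene therapy program",    "p_success": 0.05, "impact": 10},
    ]

    for p in sorted(proposals, key=lambda p: p["p_success"] * p["impact"], reverse=True):
        print(f'{p["name"]}: expected impact {p["p_success"] * p["impact"]:.2f}')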

Getting to engagement

I don’t have the answer for how to create scientifically literate patients and patient advocates, but we did come up with a few proposals, including:

  • A rare disease roadmap, covering the journey from diagnosis to basic science to drug screening to drug development to clinical trials and regulatory hurdles.

  • A scientific bootcamp or summer school for patients and patient advocates that could escalate the level of understanding to the point where patients and patient advocates become comfortable engaging researchers – and asking hard questions – directly on the science.

Literate, engaged patients are going to become critical to shaping the course of research and prioritizing projects in the face of scarce resources.

Why include children? The human knockout project

In my own talks on rare disease, I’ve used the term “human knockout project” to describe the combined effect of rare disease, sequencing and patients connecting over the internet.

Francis Collins also invoked the term in his closing remarks.

A cohort of this magnitude could certainly accelerate the distributed and ad hoc human knockout project already underway.

However, I think this aspect of the project will have limited success unless children are included in the cohort.

The grim actuarial reality of rare disease is that many who suffer are children.

While there may be many well-considered reasons for excluding children from a national cohort, one must weigh those against the cost of losing some of the most valuable genotype-phenotype pairings.

The inferential power of rare disease

In terms of the inferential power it grants, one rare disease genotype/phenotype pair is probably worth a hundred (or many more) healthy genotype/phenotype pairs.

If given the choice between 10,000 rare disease genomes and 1,000,000 random healthy genomes, as a scientist, I would choose the 10,000.

If for no other reason, I think that’s a compelling argument to figure out how to include children.

Rare patients already suffer; let them not suffer in vain.