Some of the most frequent questions we get concern why we don’t share what we learn with our study participants. Why won’t we tell them which genetic risk variants they carry? Why won’t we screen their children for them?
The short answer to these frustrated Whys is that we can’t.
Human subjects research (that is, research involving human participants) is tightly regulated by the government. Every step of the research is scrutinized and controlled to ensure the safety of the people involved. Their health, their privacy, and all aspects of their well-being are protected. For example, our latest study needed to be completely vetted by Duke University’s Institutional Review Board (IRB) before we could launch it. IRBs review research studies to make sure that the best protections possible are in place for the people who enroll. They ask the crucial question: Is this ethical?
The need for formal ethics boards was made clear in the 1970s with the public revelation of research abuses, of which the Tuskegee Syphilis Study was a horrifying example. The Tuskegee Syphilis Study was conducted between 1932 and 1972 by the U.S. Public Health Service in collaboration with the Tuskegee Institute (now Tuskegee University). Over that 40-year period, 600 poor Black Alabama sharecroppers were enrolled in the study with the promise of free medical care. The aim of the study was to learn about the progression of syphilis when left untreated. Of the 600 participants, 399 of the men had syphilis and 201 did not. During the course of the study, those with syphilis were never told that they had the disease. Nor were they treated with penicillin, an effective treatment that became widely available in the 1940s. The study was only halted in 1972 because a Public Health Service employee leaked to the press the story that hundreds of people were unnecessarily suffering and dying from a treatable disease.
Following these revelations, a series of congressional hearings on human subjects research led the federal government to pass the National Research Act of 1974. This act established ethics bodies to develop and enforce guidelines regulating research involving human subjects. At the core of these guidelines is the concept of consent: people must be fully informed about a study before enrolling. Informed consent includes being told, in understandable language, about all possible harm that could result from the study. Study participants must also be made aware that they are free to leave a study at any time with no negative repercussions.
Which brings us back to why we can’t share the results of our research with study participants. In the case of the work we do with samples from the MURDOCK study, the MURDOCK study consent assures potential participants that their identities and privacy will be protected, even from the researchers using their samples. And there are plenty of good reasons for that, from concerns about insurance coverage being affected to potentially embarrassing medical histories. Thus, we simply don’t know the identities of the people whose samples we are studying.
And even if we did know who they were and how to contact them, it would not be legal (or ethical!) to share the data we produce. For one, our data are not generated in certified testing facilities. All clinical laboratory testing done for diagnosis, prevention, or treatment of disease must meet federal standards established by the Clinical Laboratory Improvement Amendments (CLIA) of 1988. These standards are in place to ensure that test results are accurate and reliable. For another, the tests themselves must be FDA-approved and rigorously validated for accuracy, precision, sensitivity, and utility.
Imagine the consequences of providing study participants with medical information that has not been fully proven. What if a child of someone with MS shows a metabolic signature that we currently believe predicts MS, but future testing reveals that the signature is flawed and not a good universal predictor? Imagine the needless distress. Or, perhaps worse, what if they believe they are safe but develop the disease anyway? Federal regulators take these concerns seriously, as demonstrated by their well-publicized 2013 order to the genetic testing company 23andMe to stop marketing its personal genome service (PGS). The FDA cited “potential health consequences that could result from false positive or false negative assessments for high-risk indications” and noted that it was “concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA’s regulatory requirements is to ensure that the tests work.”
We hope that in the future our data will provide the basis for tests that WILL provide helpful medical information to people with MS. Research such as our newest study, which is investigating the interplay between genetics and life experiences in the development and progression of MS, could demonstrate relationships between specific risk variants and environmental triggers and provide hard evidence for prevention strategies. Perhaps we’ll uncover relationships between specific risk variants and drug response. Or our metabolic signature will prove to be robust and reproducible and effective. But until that happens, we continue to follow the guidelines set up for us and continue to strive to protect the well-being of our study participants to the best of our ability. For without study participants, and their trust in our protection, our research would not be possible.