You may have noticed the recent excitement over real-world evidence (RWE) in the context of medical devices. Whether you are in the early stages of developing a wearable—like a continuous glucose monitor or a smartwatch with an ECG function—or you are creating a remote patient monitoring device, you might be asking yourself, "Can real-world clinical data from actual use support my submission to the FDA?" and "How can I use that same data for the post-market surveillance (PMS) process?"
Here’s the thing: RWE from wearable medical devices has emerged as a genuine Regulatory pathway and is beginning to transform the landscape of Regulatory submissions.
In 2023, Johnson & Johnson accomplished an unprecedented feat: the first medical device label expansion approved entirely on the basis of RWE (from electronic health records), without a traditional clinical trial.
The FDA also qualified the Apple Watch’s AFib History feature as the first digital health technology to be accepted as a Medical Device Development Tool (MDDT), meaning it can serve as a qualified endpoint measure in clinical trials. These are no longer isolated experiments, but precedents that are beginning to change how medtech companies approach Regulatory submissions and ongoing monitoring for their wearables.
Your device is already generating massive amounts of real-world data every day. Patients are using these devices as they live their lives—in their homes, at work, while exercising, and while sleeping—which yields a volume of information that couldn’t be replicated in a controlled clinical study.
The question is not whether you have data; it is whether you are gathering, validating, and using that data in a manner consistent with the FDA’s evolving standards.
A Regulatory Precedent That Changes Everything
Johnson & Johnson wanted to expand the label for their ThermoCool SmartTouch Catheter—specifically, to add persistent atrial fibrillation to the existing indication for paroxysmal AFib.
The traditional approach to expanding an indication is a new randomized controlled trial: recruiting hundreds of patients, running a multi-year, multi-site study, spending millions of dollars, and then waiting 9 to 12 months for FDA review. Instead, they took a different approach.
J&J partnered with the National Evaluation System for Health Technology (NEST) and utilized electronic health records from Mercy Health and the Mayo Clinic to conduct a retrospective comparative effectiveness study using data from patients who had already received the treatment in routine clinical practice.
They applied the same methodological rigor expected of a traditional RCT—pre-specified protocols, statistical analysis plans, and controls for data integrity—but applied it to real-world data rather than a prospective trial. Ultimately, the FDA approved the label expansion in just 6 months, roughly half the usual time.
The approval, granted in early 2023, wasn’t an unusual exception or a one-off "pilot program." It established that RWE, including data from wearable devices, can serve as the primary evidence for Regulatory decisions when the FDA is satisfied with the data quality and study design.
So, suppose your technology has been used by thousands of patients, generating continuous streams of validated health data. In that case, you are sitting on evidence of enormous potential value: evidence that could support new indications, substantiate safety and effectiveness claims, or validate clinical claims without the time and expense of a traditional clinical trial.
How the FDA Evaluates Real-World Evidence (RWE) from Wearables
First, you have to understand what the FDA is looking for when you submit RWE. The RWE Framework (published in 2018 and later elaborated in a December 2023 draft guidance) boils down to three basic questions:
The first question is: Is your real-world data fit for use?
Here the FDA assesses reliability: how the data was collected, what quality controls were applied, and what validation methods were in place. It also weighs whether the data actually measures the outcomes of interest. For wearables, this typically means demonstrating that your device measures accurately when verified against a clinical-grade reference method.
For instance, when Dexcom submitted its G6 continuous glucose monitor for De Novo classification, it validated the device against the FDA-cleared YSI laboratory analyzer, showing a Mean Absolute Relative Difference (MARD) of 9.0%, with 89.9% of sensor readings falling within 20% of the laboratory reference.
That level of validation, with sensor readings benchmarked against a gold standard, gives the FDA confidence in the dependability of the data. A simplified version of the underlying calculation is sketched below.
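To make these metrics concrete, here is a minimal sketch of how MARD and within-20% agreement can be computed from paired sensor/reference readings. The data and function names are illustrative, not Dexcom’s actual method.

```python
import numpy as np

def mard(sensor, reference):
    """Mean Absolute Relative Difference (%): mean of |sensor - ref| / ref."""
    sensor, reference = np.asarray(sensor, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(sensor - reference) / reference)

def within_pct(sensor, reference, pct=20.0):
    """Fraction of sensor readings within pct% of the paired reference value."""
    sensor, reference = np.asarray(sensor, float), np.asarray(reference, float)
    return np.mean(np.abs(sensor - reference) / reference <= pct / 100.0)

# Illustrative paired readings (mg/dL): wearable sensor vs. laboratory reference
sensor_mgdl = [102, 148, 95, 210, 76, 133]
reference_mgdl = [110, 140, 98, 200, 85, 130]

print(f"MARD: {mard(sensor_mgdl, reference_mgdl):.1f}%")
print(f"Within 20%: {100 * within_pct(sensor_mgdl, reference_mgdl):.1f}%")
```

Note that formal CGM accuracy criteria such as %20/20 switch to an absolute mg/dL threshold at low glucose values; the sketch keeps only the relative form for brevity.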
Second, can your study design provide enough scientific evidence?
The FDA doesn’t favor one study design over another in principle, but it is firm on methodology. You can use pragmatic randomized trials, observational studies with external controls, or retrospective analyses of existing data. But whatever study design you apply and whichever data you use, you will need to control for confounding factors, answer the outcomes specified in your analysis plan, and do it all with clinical-trial-grade transparency.

Even with retrospective data, you should define what will and will not be done with the data before you open it, so that selection bias cannot creep in.
Third, does your study meet FDA Regulatory requirements?
This covers everything from informed consent and IRB approval to data monitoring, quality assurance, and 21 CFR Part 11 compliance for electronic records. The criteria are precisely the same as for a traditional clinical trial, even though the data comes from real-world sources.
These three questions explain why some RWE submissions are approved while others are denied or delayed. It is not enough to have a lot of data; you need the right data, collected in the right way, with a study design that can actually answer the FDA’s Regulatory questions.
Continuous Glucose Monitors: The RWE Success Story
Of all the device categories, none has put RWE to work more successfully than continuous glucose monitors. These devices have generated some of the most validated, clinically relevant real-world data available, and manufacturers have utilized this data in Regulatory submissions and post-market surveillance alike.
Consider Abbott’s FreeStyle Libre system. In support of the Libre 2, the company utilized real-world observational data from Sweden’s National Diabetes Register to demonstrate that CGM use reduces HbA1c levels not only in the insulin-dependent population but also in people with Type 2 diabetes who are not using insulin.
The interesting part is what happens after these devices reach the market: manufacturers receive continuous data streams, with glucose readings every 5 minutes for 10-14 days per sensor, across hundreds of thousands of patients. Dexcom published a real-world usage study of 19,447 patients, examining rates of device feature usage (e.g., alerts, data sharing, analytics platform) and glycemic outcomes.
They found that patients who interacted with the device features had improved glucose control, valuable information that informed product design, educational pathways for patients and providers, and best practices in clinical settings.
In Abbott’s post-market clinical studies of the Libre 2, the system was tested on 60 subjects after a single 1,000 mg dose of vitamin C. They observed a maximum bias of +9.3 mg/dL, which, although not clinically significant for most applications, is substantial enough to matter for FDA clearance of automated insulin dosing (AID) systems, where even minor errors can have a considerable impact.
In contrast, Dexcom subjected its G6 system to acetaminophen interference testing and showed minimal impact (MARD increased by only 1%), information that facilitated its FDA clearance for AID integrations.
Interference studies like these are a crucial part of RWE validation because they characterize how real-world conditions affect device performance. No clinical trial can systematically examine every possible interferent across every patient population. Real-world data, if monitored appropriately, can surface these issues early, before they become safety concerns.
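At its core, an interference assessment is a paired-bias calculation: compare sensor-versus-reference error before and after the interferent is administered. The sketch below illustrates only that arithmetic; it is not Abbott’s or Dexcom’s actual protocol, and the readings are invented.

```python
import numpy as np

def bias_stats(sensor, reference):
    """Per-pair bias (sensor - reference) in mg/dL: return (mean, max)."""
    diffs = np.asarray(sensor, float) - np.asarray(reference, float)
    return diffs.mean(), diffs.max()

# Illustrative paired readings (mg/dL) before and after an interferent dose
baseline_sensor, baseline_ref = [118, 96, 142, 105], [120, 95, 140, 107]
post_dose_sensor, post_dose_ref = [131, 104, 153, 116], [122, 97, 144, 108]

mean_pre, max_pre = bias_stats(baseline_sensor, baseline_ref)
mean_post, max_post = bias_stats(post_dose_sensor, post_dose_ref)

print(f"Baseline bias:  mean {mean_pre:+.1f}, max {max_pre:+.1f} mg/dL")
print(f"Post-dose bias: mean {mean_post:+.1f}, max {max_post:+.1f} mg/dL")
print(f"Shift in mean bias: {mean_post - mean_pre:+.1f} mg/dL")
```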
Cardiac Wearables: The Next RWE Frontier
If glucose monitors were the first RWE leaders among wearables, cardiac monitoring devices are racing down the same path—and in some cases, overcoming even tougher challenges along the way.
Think about it: a glucose sensor sits in one place under the skin, measuring one analyte in a relatively controlled microenvironment. A cardiac wearable sits on your wrist or chest, capturing electrical signals from your heart during all kinds of activity: walking, sweating, sleeping, exercising, or any other physical movement.
Motion artifacts, inconsistent electrode contact, electromagnetic interference—each one generates noise that can create false alarms or obscure genuine cardiac signals. Despite these challenges, however, cardiac wearables are building impressive RWE portfolios.
Beyond Detection: Monitoring and Management
Other companies are putting cardiac wearable RWE to a different use: remote patient monitoring programs for patients with heart failure. Traditionally, these patients had to visit the clinic often for assessment of fluid status, weight gain, and cardiovascular parameters. Now, wearables can continuously record heart rate, activity level, sleep patterns, and even proxy indicators of fluid status.
However, the innovation is not just about capturing data; it is about creating wearable-based monitoring that improves outcomes.
Researchers have used this real-world data to show reductions in hospital readmission rates, earlier recognition of decompensation, and improvements in quality of life. This RWE is now used to support reimbursement, indications for remote patient monitoring programs, and clinical practice guidelines.
The Validation Challenge
Validating cardiac wearables presents a distinctly different challenge from validating CGMs. With glucose monitors, you can draw blood samples and compare sensor readings to laboratory glucose measurements; that is relatively simple. With cardiac monitors, validation requires comparing wearable ECG signals to clinical-grade 12-lead ECGs or Holter monitors, sometimes while the subject performs the very behaviors that degrade signal quality.
Several companies are addressing this with a thorough validation process:
- Multi-site validation studies against gold-standard equipment in diverse patient populations.
- Activity-specific testing to determine potential gaps in accuracy.
- Edge-case analysis of challenging scenarios, such as arrhythmias and patients with tremors.
- Long-term reliability testing to show that accuracy does not degrade over days to weeks of continuous wear.
Each of these validation studies, when conducted with pre-specified endpoints, an appropriate sample size, and transparent reporting, forms part of the foundation of regulatory-grade RWE. A simplified sketch of the epoch-level comparison behind such studies follows.
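One common way to run the wearable-versus-Holter comparison is to align both rhythm annotations into fixed epochs and compute epoch-level sensitivity and positive predictive value for AFib detection. The sketch below assumes that framing; the epoch length, labels, and data are illustrative, not any particular company’s protocol.

```python
import numpy as np

def epoch_agreement(device_afib, reference_afib):
    """Epoch-level sensitivity and PPV for AFib detection.

    Both arguments are boolean arrays with one entry per aligned epoch
    (e.g., 30 s), True where that source labels the epoch as AFib.
    """
    device = np.asarray(device_afib, bool)
    reference = np.asarray(reference_afib, bool)
    tp = np.sum(device & reference)   # both sources flag AFib
    fn = np.sum(~device & reference)  # reference AFib missed by the device
    fp = np.sum(device & ~reference)  # device AFib not confirmed by reference
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sensitivity, ppv

# Illustrative 10-epoch comparison against Holter annotations
device = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1]
reference = [0, 1, 1, 0, 0, 0, 1, 1, 0, 1]
sens, ppv = epoch_agreement(device, reference)
print(f"Sensitivity: {sens:.2f}, PPV: {ppv:.2f}")
```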
Problems to Avoid When Designing Your RWE Plan
Pitfall #1: Failing to Anticipate Data Quality Problems
You have thousands of deployed devices generating data—awesome! But what happens when:
- Patients take their devices off to shower or to charge them, then forget to put them back on.
- Data is not uploaded to the cloud for days or weeks because of connectivity issues.
- Different app versions generate data in slightly different formats.
- Users update device firmware sporadically, leaving cohorts on different algorithm versions.
Build data quality monitoring into your infrastructure from day one. Set thresholds for acceptable data completeness (e.g., "the patient must have worn the device for a minimum of 18 hours each day for 80% of the study duration"). Create automated flags for connectivity loss and data quality issues, as sketched below. Don’t wait for the FDA to find the gaps in your data.
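Here is a minimal sketch of such an automated completeness check. The 18-hour and 80% thresholds come from the example above; the data layout (one wear-hours record per patient per day) is a hypothetical simplification.

```python
from collections import defaultdict

MIN_HOURS_PER_DAY = 18          # minimum daily wear time for a day to count
MIN_COMPLIANT_FRACTION = 0.80   # fraction of study days that must qualify

def flag_incomplete_patients(wear_log, study_days):
    """Return patient IDs whose wear-time completeness falls below threshold.

    wear_log: iterable of (patient_id, day_index, hours_worn) records.
    study_days: total number of days in the study window.
    """
    compliant_days = defaultdict(int)
    seen = set()
    for patient_id, _day, hours in wear_log:
        seen.add(patient_id)
        if hours >= MIN_HOURS_PER_DAY:
            compliant_days[patient_id] += 1
    return [pid for pid in sorted(seen)
            if compliant_days[pid] / study_days < MIN_COMPLIANT_FRACTION]

# Illustrative log: patient "P2" removes the device every other day
log = [("P1", d, 22) for d in range(10)] + \
      [("P2", d, 22 if d % 2 == 0 else 6) for d in range(10)]
print(flag_incomplete_patients(log, study_days=10))  # -> ['P2']
```

In production, a check like this would run continuously against the data pipeline, with flags feeding a review queue rather than a print statement.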
Pitfall #2: Validation Limitations That Undermine Everything
Consider a scenario that occurs all too often: a company validates its wearable against a clinical reference in the lab, using 50 healthy volunteers, all seated and still. Excellent MARD, great correlation; everyone is happy.
Then they collect real-world data from patients who move around, sweat, and live their lives, and the accuracy looks noticeably different.
The FDA’s question is predictable: "Your validation study showed nearly 95% accuracy under controlled conditions; however, the real-world data you provided shows far more variability. How do we know this data can support Regulatory decisions?" So validate under the conditions in which you expect your device to be used. If patients will wear the device during exercise, validate it in an exercise setting.
If the intended user is elderly, include elderly subjects in the validation. If you are designing a device for use in hot, humid climates, ensure that this condition is included in the validation process.
Pitfall #3: Study Design Issues Leading to Regulatory Concerns
Some companies think to themselves, "We have data from 10,000 users! Let’s dig in, analyze, and see what we find!" They spot interesting associations, document them, and submit them to the FDA.
To this, the FDA will respond: "This looks like data dredging. You had no pre-specified analysis plan with specific hypotheses, which means your findings could reflect chance as easily as a true effect.
Additionally, we observe confounding: the comparison groups in this study differ in age, severity of illness, medication use, and other factors. How do you know the differences you see reflect the device and not age or medication use?"
The remedy: even with a retrospective data source, develop a pre-specified analysis plan before opening the data. Define your cohorts, endpoints, and statistical methods before you analyze a single record.
Control for confounders using methods such as propensity score matching or multivariable regression, and demonstrate to the FDA that you apply the same rigor to real-world data as you would to a prospective trial. A minimal sketch of propensity score matching follows.
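For readers unfamiliar with the technique, here is a hedged sketch of propensity score matching: fit a model predicting treatment (here, device use) from baseline covariates, then pair each treated subject with the control whose score is closest. The toy data and greedy 1:1 matching are illustrative only; a production analysis needs caliper choices, balance diagnostics, and a statistician.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative cohort: covariates = [age z-score, baseline severity z-score]
X = rng.normal(size=(200, 2))
# Device use correlates with age, so naive comparisons are confounded
treated = (X[:, 0] + rng.normal(scale=1.0, size=200)) > 0

# Step 1: estimate propensity scores P(treated | covariates)
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the score, no replacement
treated_idx = np.flatnonzero(treated)
control_pool = list(np.flatnonzero(~treated))
pairs = []
for t in treated_idx:
    if not control_pool:
        break
    c = min(control_pool, key=lambda j: abs(scores[j] - scores[t]))
    pairs.append((t, c))
    control_pool.remove(c)

# Balance check: matched controls should resemble the treated group on age
t_ids, c_ids = map(list, zip(*pairs))
print(f"Matched {len(pairs)} pairs")
print(f"Mean age z-score, treated {X[t_ids, 0].mean():+.2f} "
      f"vs. matched controls {X[c_ids, 0].mean():+.2f}")
```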
Pitfall #4: Gaps in Regulatory Compliance
RWE does not mean you can avoid informed consent, IRB approval, or other Regulatory protections. One company learned this the hard way when it attempted to use data obtained from its consumer wellness app—which had only a broad, generic privacy policy—for a Regulatory submission.
The FDA’s response was straightforward: "Did the users provide specific consent for their data to be used for Regulatory purposes? Did an IRB review the data collection and use in any way?"
The answer was no, and the company was unable to use the data.
Even when you are collecting "real-world" data, you need proper informed consent that covers Regulatory use, IRB oversight of any research activities, and monitoring plans, and you will need to comply with 21 CFR Part 11 for the electronic records of data collection and use.
Therefore, design your data collection with Regulatory submission in mind from the outset.
Pitfall #5: Ignoring the "So What?"
The FDA does not simply want to know that your device collects data; it wants to know why that data matters. It wants to see that the data are clinically meaningful and answer a Regulatory question.
Say your wearable tracks 47 different physiological parameters continuously. That’s excellent for users. But which of these parameters correlate with the clinical endpoint you are trying to demonstrate? How does a change in any of these parameters map to a change in patient function? And what is the clinical significance of the differences you identified?
Linking device data to clinical outcomes should run through your entire Regulatory strategy. Do not simply demonstrate that your device can detect changes; show that those changes matter clinically.
Show an association between your wearable data and traditional clinical endpoints such as hospitalizations, medication changes, quality-of-life measures, or diagnoses made by a trained clinician, as in the toy example below. Doing this gives the FDA confidence that your data are not only technically valid but also clinically meaningful.
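As a purely hypothetical illustration (the parameter, endpoint, and data are invented), a first-pass association check might compare a wearable-derived parameter between patients who were and were not hospitalized:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical wearable parameter: mean nightly resting heart rate (bpm)
# in the month before the outcome window, per patient
hr_hospitalized = rng.normal(loc=78, scale=8, size=40)
hr_not_hospitalized = rng.normal(loc=71, scale=8, size=160)

# Nonparametric test: does the parameter differ between outcome groups?
stat, p_value = mannwhitneyu(hr_hospitalized, hr_not_hospitalized,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p_value:.4f}")
print(f"Median HR: hospitalized {np.median(hr_hospitalized):.1f} bpm, "
      f"not hospitalized {np.median(hr_not_hospitalized):.1f} bpm")
```

An exploratory comparison like this is only hypothesis-generating; the pre-specified analysis plan and confounder controls from Pitfall #3 still apply before any such association goes into a submission.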
Don’t accumulate years of real-world data only to discover it won’t support Regulatory submissions because basic elements were missing from the start. Likewise, don’t conduct validation studies or submit RWE packages without addressing the fundamental questions posed by the FDA.
Build your RWE strategy with Regulatory expertise from Freyr Solutions, where we guide medical device companies through each stage and help them avoid these pitfalls. Whether you’re just starting to think about RWE or you’re trying to leverage existing data for Regulatory submissions, we bring the Regulatory intelligence and strategic guidance to maximize your success.