00 · preamble
A while back you asked me what I actually want to do next.
This page is me answering. It's thorough. Scroll, or jump to the part you care about.
01 · the short version
I want to grow into a more senior role in central monitoring.
Directly with Moderna, through my own company, on your team. Because you understand the operational side, and you'd give me the room to actually implement it properly.
That's the short version. The rest of this page is the why.
02 · the gap I keep seeing
Central monitoring and monitoring oversight are looking at the same studies from two angles, and we're barely talking.
Central monitoring sees the KRI signals. Oversight sees the site behavior. But the handoff between them is thin. By the time a signal becomes site-level action, it has usually been translated, summarized, and half-understood by three different people.
The tools exist in pieces. The data exists. But there's no infrastructure connecting the signal to the action. That's the gap, and it's where I think the most value is being lost.
Combining central monitoring and oversight could unlock a lot, both in how fast we catch problems and in what actually happens once we catch them.
And the timing matters. You already know E6(R3) is fully in force now across FDA, EMA, MHRA, and Health Canada. Centralized monitoring isn't optional anymore. It's written into the guidance as a standalone approach, and inspectors are treating it that way. How do sponsors detect risks in real time? How are oversight decisions documented? How do signals become action? That's the standard we're being held to right now, and the infrastructure to answer those questions cleanly doesn't exist yet.
03 · what I'd build for us
Here's what a unified central monitoring system would actually look like. I've thought about this a lot, and I've been building a working version to test the ideas.
The system I'm describing isn't a SaaS product you'd buy. It's an internal process built on your data, your KRIs, your escalation rules. I'd take what we already have and wire it into something that actually connects the pieces. Here's the shape of it, phase by phase.
Centralize the data and make it visible.
Pull KRI data from EDC and IRT into one place. With the right API access from IT and the vendors, this replaces the spreadsheet pull from three different systems the night before a review meeting. One dashboard, all studies, updated in real time.
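To make that concrete, here's a rough sketch of what the pull-and-centralize step could look like. The endpoint URLs, field names, and the kri_observations table are all placeholders, since the real shape depends on what the EDC and IRT vendors actually expose and what IT approves:

```python
# Placeholder endpoints; the real EDC/IRT APIs (and the approvals to
# call them) would dictate the details.
import os

import psycopg2
import requests

EDC_URL = "https://edc.example.com/api/v1/kri-metrics"   # hypothetical
IRT_URL = "https://irt.example.com/api/v1/site-metrics"  # hypothetical

def fetch(url: str) -> list[dict]:
    """Pull one vendor feed; records are assumed to carry study_id,
    site_id, metric, value, and an as-of date."""
    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]

def load(records: list[dict]) -> None:
    """Upsert every record into one Postgres table, so all studies and
    sites are queryable from a single place."""
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
        with conn.cursor() as cur:
            for r in records:
                cur.execute(
                    """
                    INSERT INTO kri_observations
                        (study_id, site_id, metric, value, as_of)
                    VALUES
                        (%(study_id)s, %(site_id)s, %(metric)s,
                         %(value)s, %(as_of)s)
                    ON CONFLICT (study_id, site_id, metric, as_of)
                    DO UPDATE SET value = EXCLUDED.value
                    """,
                    r,
                )

if __name__ == "__main__":
    for url in (EDC_URL, IRT_URL):
        load(fetch(url))
```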
Build the signal engine.
Define KRIs with real thresholds. Run automated statistical monitoring across every site, every week. When a site crosses a threshold, it fires a signal automatically. No waiting for someone to notice in a spreadsheet three months later.
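A toy version of that weekly sweep, just to show how little machinery the core idea needs. The KRI names and thresholds are invented for illustration; in the real system they'd live in config, not code:

```python
# A toy weekly threshold sweep. KRI names and thresholds here are
# invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class KRI:
    metric: str
    upper: float   # the signal fires when a site's value exceeds this

KRI_LIBRARY = [
    KRI(metric="query_rate_per_100_pages", upper=12.0),  # illustrative
    KRI(metric="dropout_rate", upper=0.15),              # illustrative
]

def sweep(observations: list[dict], as_of: date) -> list[dict]:
    """Compare each site's latest value to its KRI threshold and return
    one signal record per breach. Scheduled to run weekly."""
    thresholds = {k.metric: k.upper for k in KRI_LIBRARY}
    signals = []
    for obs in observations:
        limit = thresholds.get(obs["metric"])
        if limit is not None and obs["value"] > limit:
            signals.append({
                "study_id": obs["study_id"],
                "site_id": obs["site_id"],
                "metric": obs["metric"],
                "value": obs["value"],
                "threshold": limit,
                "fired_on": as_of,
            })
    return signals
```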
Close the loop between signal and action.
This is the part that doesn't exist today. When a signal fires, it goes somewhere. It gets assigned. Someone owns it. There's an SLA. Oversight sees it, acts on it, and resolves it. The whole thing is tracked and auditable. No more signals disappearing into a shared drive.
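Here's one way to make "someone owns it, there's an SLA, it's auditable" concrete. The field names, statuses, and the five-day SLA are assumptions for the sketch, not a spec:

```python
# A sketch of the signal lifecycle. Fields, statuses, and the default
# SLA are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum

class Status(Enum):
    OPEN = "open"
    ASSIGNED = "assigned"
    RESOLVED = "resolved"

def _now() -> str:
    return datetime.now(timezone.utc).isoformat()

@dataclass
class Signal:
    signal_id: str
    site_id: str
    metric: str
    status: Status = Status.OPEN
    owner: str | None = None
    due_by: datetime | None = None
    audit_trail: list[str] = field(default_factory=list)

    def assign(self, owner: str, sla: timedelta = timedelta(days=5)) -> None:
        """Assignment is what closes the loop: a named owner and a clock,
        both written to an append-only trail."""
        self.owner = owner
        self.due_by = datetime.now(timezone.utc) + sla
        self.status = Status.ASSIGNED
        self.audit_trail.append(f"{_now()} assigned to {owner}")

    def resolve(self, note: str) -> None:
        """Resolution is recorded, never deleted, so the whole lifecycle
        stays auditable."""
        self.status = Status.RESOLVED
        self.audit_trail.append(f"{_now()} resolved: {note}")
```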
Add the intelligence layer.
Once the data flows and the signals close, you can start asking harder questions. Which sites will have problems next quarter? Why did this site's dropout rate spike? What does the enrollment curve actually predict? Statistical monitoring first, AI-driven root cause analysis second.
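On the statistical side, even something deliberately simple goes a long way. Here's a sketch of flagging a site whose event rate sits far from the study-wide rate, using an approximate two-proportion z-score; the real system would pick methods per KRI:

```python
# An approximate two-proportion z-score: how far a site's event rate
# sits from the pooled study rate. Deliberately simple for illustration.
from math import sqrt

def site_z_score(site_events: int, site_n: int,
                 study_events: int, study_n: int) -> float:
    """Z-score of a site's event proportion against the pooled study
    proportion. |z| above roughly 3 is a reasonable starting flag."""
    p_site = site_events / site_n
    p_pool = study_events / study_n
    se = sqrt(p_pool * (1 - p_pool) * (1 / site_n + 1 / study_n))
    return (p_site - p_pool) / se

# e.g. a site with 9 dropouts among 30 subjects, against 40 dropouts
# among 600 study-wide: site_z_score(9, 30, 40, 600) is about 5.0,
# well worth a look.
```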
04 · how we'd roll it out
Start with one study. Prove it works. Then bring the rest in.
The fastest way to land this isn't to boil the ocean. It's to pick one pilot study, ship the whole loop for that one, and let the results do the arguing for phase two.
This approach de-risks both the technical and the political side. Fewer stakeholders to align means faster iteration. Real data means we can tune KRIs and thresholds before they're load-bearing. And a 90-day pilot with hard numbers is a much easier conversation than "trust me, this will work across the portfolio."
That said, pilots like this fail in predictable ways. Here's how I'd keep it from drifting.
-
Parallel-run. Don't replace.
For the pilot, the new system runs alongside whatever we do today. Compare outputs weekly. This kills the "what if yours misses something ours catches" argument before it starts, and gives us a fallback if we hit a bug at month two.
-
Define success metrics on day one. With numbers.
Pilots die when "success" stays fuzzy. Concrete targets: signal detection latency cut from X days to Y. Hours per week monitors spend pulling data, cut from X to Y. N site issues caught earlier than they otherwise would have been. If we can't measure it, we can't sell phase two.
-
Timebox to 90 days with an explicit go / no-go.
Shorter than 90 days and we don't see enough signal. Longer and momentum dies, scope creeps, and it turns into a side project everyone forgets about. Set the decision date on day one. Calendar invite everyone. No surprises.
-
Pick the pilot study deliberately.
Not the cleanest (won't prove value). Not the messiest (won't finish). Something like 2808: early enrollment, active, and representative of future scale.
-
Plan study #2 on purpose, not by default.
Phase two isn't "scale up to ten studies." It's one more study, chosen to stress-test a dimension the pilot didn't. Different therapeutic area, different data source, different size. That's how we learn whether the system actually generalizes before we commit to it across the portfolio.
05 · the proof I've already built it
Everything I just described comes from a working system I built called VigilNova.
Not a deck. Not wireframes. A full-stack platform with a Postgres database, a FastAPI backend, a React frontend, seeded demo data, and a complete audit trail. I'm not trying to sell this. I'm showing you that I've already done the work to prove the concept. The rest is easier to show than to tell.
If you want a live walkthrough, I'd love to give you one. You'd find a lot of it directly relevant to what we're trying to do.
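To give a flavor of the stack without the walkthrough: here's a minimal FastAPI endpoint in the style of the one a dashboard would poll. This is illustrative, not VigilNova's actual code or schema:

```python
# A minimal FastAPI sketch; models, routes, and data are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="central-monitoring-demo")

class SignalOut(BaseModel):
    signal_id: str
    site_id: str
    metric: str
    status: str

# Stand-in for the Postgres-backed store in the real system.
FAKE_DB = [
    SignalOut(signal_id="sig-001", site_id="site-12",
              metric="dropout_rate", status="open"),
    SignalOut(signal_id="sig-002", site_id="site-07",
              metric="query_rate", status="resolved"),
]

@app.get("/signals/open", response_model=list[SignalOut])
def open_signals() -> list[SignalOut]:
    """The kind of endpoint a dashboard polls: every signal that still
    needs an owner."""
    return [s for s in FAKE_DB if s.status == "open"]

# Run locally with: uvicorn demo:app --reload
```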
06 · the scope I'd want
Concretely, here's the shape of the role.
-
Mature the central monitoring framework across studies.
Including building out a system that actually works. Not another spreadsheet, not another dashboard that nobody looks at. A living framework where the KRIs, the thresholds, and the review cadence all change together as we learn what matters.
-
Own the KRI library, thresholds, and escalation workflows.
End to end. Which metrics we track, where the lines are, what happens when a line is crossed, who gets paged, and how long they have to respond before it rolls up. A real library, not a folder of Excel files. There's a sketch of what one of those rules could look like at the end of this list.
-
Tighten the handoff between central monitoring signals and what oversight actually does with them.
This is the big one. The thing that I think is quietly costing us the most. Close that gap and every other metric gets better, because the signal finally has somewhere to go.
-
Position the system for an eventual in-house model.
If the direction ever moves toward bringing monitoring in-house, CRAs as FSPs directly under Moderna instead of through a CRO, this is the platform they'd land on from day one. Under E6(R3), the documentation burden sits with the sponsor regardless. Having that chain of custody inside Moderna's walls is a compliance advantage either way.
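And the escalation sketch I promised above. Every metric name, role, threshold, and response window here is a placeholder; the point is that rules like these are declarative data the library owns, not logic buried in someone's spreadsheet:

```python
# Declarative escalation rules. All names, thresholds, and windows are
# placeholders for illustration.
ESCALATION_RULES = {
    "dropout_rate": {
        "threshold": 0.15,                      # where the line is
        "first_responder": "central_monitor",   # who gets paged
        "respond_within_days": 5,               # SLA before roll-up
        "rolls_up_to": "oversight_lead",
    },
    "query_rate_per_100_pages": {
        "threshold": 12.0,
        "first_responder": "site_cra",
        "respond_within_days": 3,
        "rolls_up_to": "central_monitor",
    },
}
```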
07 · one last thing
I'd want this structured as a direct contractor engagement with Moderna, not through ICON.
A direct relationship gives us more flexibility on how the role is actually shaped, on what I can own, and on how quickly we can move. It also removes a layer of telephone between what you want and what I build. I think that matters more than it sounds.
VigilNova comes with me. When I come in as a contractor, the system is part of what I bring to the team. Moderna gets a working central monitoring platform on day one, included. I know any tool touching our data would need to go through IT and QA review. I built it with that in mind, and I'm ready for that process.
I've also thought through the IP, continuity, and knowledge transfer questions that come with something like this. Happy to walk through all of it in person.
That's the whole thing. I know it's a lot, but you asked, and I wanted to give you a real answer instead of a quick one. Whenever you're ready to talk through it, I'm here.
Shivam