shivam's lab
private · for Chris

00 · preamble

A while back you asked me what I actually want to do next.

This page is me answering. It's a little long, but I think it's necessary. Scroll, or jump to the part you care about.

01 · the short version

I want to grow into a more senior role in central monitoring.

Under you. And, to be direct about it, only under you. You know what I mean.

That part is easy to say in a sentence. The harder part, and the reason I'm writing this instead of just texting, is the why. The role I'm describing isn't quite the one that exists today, and the shape of it matters.

02 · the gap I keep seeing

Central monitoring and monitoring oversight are looking at the same studies from two angles, and we're barely talking.

Central monitoring sees the KRI signals. Oversight sees the site behavior. But the handoff between them is thin. By the time a signal becomes site-level action, it has usually been translated, summarized, and half-understood by three different people.

The tools exist in pieces. The data exists. But there's no infrastructure connecting the signal to the action. That's the gap, and it's where I think the most value is being lost.

Combining central monitoring and oversight has the potential to unlock a lot. Both in terms of speed, and in terms of actually turning KRI signals into site-level action instead of spending millions on monitoring that changes nothing.

03 · what I'd build for us

Here's what a unified central monitoring system would actually look like. I've thought about this a lot, and I've been building a working version to test the ideas.

The system I'm describing isn't a SaaS product you'd buy. It's an internal process built on your data, your KRIs, your escalation rules. I'd take what we already have and wire it into something that actually closes the loop. Here's the shape of it, phase by phase.

Phase 1 · centralize the data and make it visible

Pull KRI data from EDC, IRT, and safety databases into one place. No more pulling spreadsheets from three different systems and pasting them into a PowerPoint the night before a review meeting. One dashboard, all studies, updated in real time.
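
To make that concrete, here's a minimal sketch of what the import layer could look like, in the same Python stack the prototype uses. The source names, column mappings, CSV inputs, and connection string are illustrative assumptions, not the real EDC/IRT/safety schemas.

```python
# Minimal sketch of the Phase 1 import layer: three source exports,
# one shared schema, one table every dashboard reads from.
# COLUMN_MAP, the CSV inputs, and the DSN are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

COLUMN_MAP = {
    "edc":    {"subj_id": "subject_id", "site": "site_id", "visit_dt": "event_date"},
    "irt":    {"SubjectID": "subject_id", "SiteCode": "site_id", "RandDate": "event_date"},
    "safety": {"usubjid": "subject_id", "siteid": "site_id", "aestdtc": "event_date"},
}

def load_source(path: str, source: str) -> pd.DataFrame:
    """Read one export and rename its columns onto the shared schema."""
    df = pd.read_csv(path).rename(columns=COLUMN_MAP[source])
    df["source"] = source
    return df[["subject_id", "site_id", "event_date", "source"]]

def refresh(paths: dict[str, str], dsn: str) -> None:
    """Rebuild the unified table on every import; dashboards read only from it."""
    unified = pd.concat(
        [load_source(path, source) for source, path in paths.items()],
        ignore_index=True,
    )
    unified.to_sql("unified_events", create_engine(dsn), if_exists="replace", index=False)
```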

[screenshot] portfolio · Every study at a glance. Health scores, enrollment, total sites and subjects, KRI status, site risk distribution.
[screenshot] data import · Subjects, visits, queries, adverse events, protocol deviations, randomizations. Drag, drop, done.

Phase 2 · build the signal engine

Define KRIs with real thresholds. Run automated statistical monitoring across every site, every week. When a site crosses a threshold, it fires a signal automatically. No waiting for someone to notice in a spreadsheet three months later.
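
The core of that engine is small. Here's a hedged sketch; the KRI shape and the signal payload are my own assumptions for illustration, and the real thresholds would live in the KRI library.

```python
# Sketch of the weekly threshold sweep. The KRI definition and signal
# payload are illustrative assumptions, not VigilNova's internal types.
from dataclasses import dataclass

@dataclass
class KRI:
    name: str
    threshold: float
    direction: str  # "above" fires when value > threshold, "below" when value < threshold

def sweep(site_metrics: dict[str, dict[str, float]], kris: list[KRI]) -> list[dict]:
    """Return one signal per (site, KRI) pair that crosses its threshold."""
    signals = []
    for site_id, metrics in site_metrics.items():
        for kri in kris:
            value = metrics.get(kri.name)
            if value is None:
                continue  # site hasn't reported this metric yet
            fired = value > kri.threshold if kri.direction == "above" else value < kri.threshold
            if fired:
                signals.append({"site_id": site_id, "kri": kri.name,
                                "value": value, "threshold": kri.threshold})
    return signals

# Example: a 15% dropout threshold fires at Site 001 but not Site 002.
print(sweep(
    {"site-001": {"dropout_rate": 0.22}, "site-002": {"dropout_rate": 0.08}},
    [KRI("dropout_rate", 0.15, "above")],
))
```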

[screenshot] study dashboard · Enrollment forecast with confidence bands, risk signal counts, outlier sites. All live, all automated.
[screenshot] KRI builder · Custom KRIs built visually. Drag metrics, set thresholds, preview the math. No IT tickets, no SQL.

Phase 3 · close the loop between signal and action

This is the part that doesn't exist today. When a signal fires, it goes somewhere. It gets assigned. Someone owns it. There's an SLA. Oversight sees it, acts on it, and resolves it. The whole thing is tracked and auditable. No more signals disappearing into a shared drive.
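
As a sketch of what "owned, SLA-tracked, auditable" means in code: every status change writes an audit entry, and SLA age is computed, not remembered. The statuses, allowed transitions, and five-day SLA below are placeholder assumptions.

```python
# Sketch of the signal-to-action loop: owned, SLA-tracked, auditable.
# Statuses, transitions, and the 5-day SLA are placeholder assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

TRANSITIONS = {"open": {"assigned"}, "assigned": {"in_review"}, "in_review": {"resolved"}}

@dataclass
class Signal:
    site_id: str
    kri: str
    detected_at: datetime  # must be timezone-aware
    sla: timedelta = timedelta(days=5)
    status: str = "open"
    owner: str | None = None
    audit: list[tuple[datetime, str, str]] = field(default_factory=list)

    def move(self, new_status: str, actor: str) -> None:
        """Advance the signal one step and log who did it; illegal jumps raise."""
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move {self.status} -> {new_status}")
        if new_status == "assigned":
            self.owner = actor
        self.status = new_status
        self.audit.append((datetime.now(timezone.utc), actor, new_status))

    @property
    def sla_age(self) -> timedelta:
        """How long the signal has been open; this is the SLA column."""
        return datetime.now(timezone.utc) - self.detected_at

    @property
    def sla_breached(self) -> bool:
        return self.status != "resolved" and self.sla_age > self.sla

# Example: detected two days ago, assigned today, still inside the 5-day SLA.
sig = Signal("site-001", "dropout_rate", datetime.now(timezone.utc) - timedelta(days=2))
sig.move("assigned", "chris")
print(sig.status, sig.owner, sig.sla_breached)  # assigned chris False
```

The point isn't the twenty lines. It's that "who owns this and how old is it" becomes a query instead of a meeting.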

[screenshot] risk signals · Every signal with severity, status, assignment, detection date, and SLA age. This is the handoff layer.
[screenshot] notifications · Real-time alerts. "High dropout rate at Site 001." "AI analysis complete for abnormal screen failure rate." Nobody has to remember to check.

Phase 4 · add the intelligence layer

Once the data flows and the signal loop closes, you can start asking harder questions. Which sites will have problems next quarter? Why did this site's dropout rate spike? What does the enrollment curve actually predict? Statistical monitoring first, AI-driven root cause analysis second.
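
The statistical half doesn't need to be exotic to be useful. A minimal version of the benchmark check is a cross-site z-score per KRI; the two-standard-deviation cutoff below is an illustrative default, not a validated choice.

```python
# Minimal cross-site outlier check for one KRI: flag any site more than
# two standard deviations from the study mean. Cutoff is illustrative.
from statistics import mean, stdev

def outlier_sites(values: dict[str, float], cutoff: float = 2.0) -> dict[str, float]:
    """Return {site_id: z_score} for sites beyond the cutoff."""
    mu, sigma = mean(values.values()), stdev(values.values())
    if sigma == 0:
        return {}
    return {site: (v - mu) / sigma
            for site, v in values.items()
            if abs(v - mu) / sigma > cutoff}

# Example: screen failure rates across sites; site-007 stands out.
rates = {"site-001": 0.10, "site-002": 0.12, "site-003": 0.09,
         "site-004": 0.11, "site-005": 0.10, "site-006": 0.13, "site-007": 0.34}
print(outlier_sites(rates))  # {'site-007': ~2.2}
```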

[screenshot] benchmarks · Cross-study percentile analysis across 24 sites and 21 metrics. P25, median, P75 for every KRI. Spot a two-standard-deviation outlier before the monitor does.
[screenshot] enrollment forecast · Regression-based projections with confidence intervals. Know early if a trial will miss its timeline.
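
The forecast itself can start as a plain regression over cumulative enrollment with a rough confidence band. A sketch, with made-up weekly counts:

```python
# Sketch of a regression-based enrollment forecast. The weekly counts are
# made up; the band is a simple +/- 1.96 * residual-sigma interval, a
# stand-in for whatever interval the production model uses.
import numpy as np

weeks = np.arange(1, 13)  # 12 weeks observed
enrolled = np.array([3, 7, 12, 15, 21, 24, 30, 33, 39, 41, 47, 50])  # cumulative

slope, intercept = np.polyfit(weeks, enrolled, 1)
residual_sigma = np.std(enrolled - (slope * weeks + intercept))

def forecast(week: int) -> tuple[float, float, float]:
    """Point estimate and a rough 95% band for cumulative enrollment."""
    point = slope * week + intercept
    half_width = 1.96 * residual_sigma
    return point - half_width, point, point + half_width

# Will we hit the 100-subject target by week 24?
lo, point, hi = forecast(24)
print(f"week 24: {lo:.0f} - {hi:.0f} subjects (point {point:.0f})")
```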

04 · how we'd roll it out

Start with one study. Prove it works. Then bring the rest in.

The fastest way to land this isn't to boil the ocean. It's to pick one pilot study, ship the whole loop for that one, and let the results do the arguing for phase two.

This approach de-risks both the technical and the political side. Fewer stakeholders to align means faster iteration. Real data means we can tune KRIs and thresholds before they're load-bearing. And a 90-day pilot with hard numbers is a much easier conversation than "trust me, this will work across the portfolio."

That said, pilots like this fail in predictable ways. Here's how I'd keep it from drifting.

  1. Parallel-run. Don't replace.

    For the pilot, the new system runs alongside whatever we do today. Compare outputs weekly (a sketch of that diff is after this list). This kills the "what if yours misses something ours catches" argument before it starts, and gives us a fallback if we hit a bug at month two.

  2. Define success metrics on day one. With numbers.

    Pilots die when "success" stays fuzzy. Concrete targets: signal detection latency cut from X days to Y. Monitor hours per week pulling data cut from X to Y. N site issues caught earlier than they'd have been caught otherwise. If we can't measure it, we can't sell phase two.

  3. Timebox to 90 days with an explicit go / no-go.

    Shorter than 90 days and we don't see enough signal. Longer and momentum dies, scope creeps, and it turns into a side project everyone forgets about. Set the decision date on day one and put it on everyone's calendar. No surprises.

  4. Pick the pilot study deliberately.

    Not the cleanest (won't prove value). Not the messiest (won't finish). The right one is mid-enrollment, active, representative of future scale, with a PI and CRA lead who will actually give us 15 minutes a week. I'd want the selection criteria written down so the choice isn't a political fight.

  5. Plan study #2 on purpose, not by default.

    Phase two isn't "scale up to ten studies." It's one more study, chosen to stress-test a dimension the pilot didn't. Different therapeutic area, different data source, different size. That's how we learn whether the system actually generalizes before we commit to it across the portfolio.
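
On the parallel-run point from step 1: the weekly comparison can be as simple as diffing the signal sets the two systems fired. A minimal sketch, with made-up signal keys:

```python
# Sketch of the weekly parallel-run diff from rollout step 1: compare the
# signals each system fired and review only the disagreements. The
# (site_id, kri) key shape is an illustrative assumption.
def compare_runs(legacy: set[tuple], pilot: set[tuple]) -> dict[str, set[tuple]]:
    """Bucket signals by agreement; only the two 'only' buckets need review."""
    return {
        "both": legacy & pilot,
        "legacy_only": legacy - pilot,  # did the new system miss something?
        "pilot_only": pilot - legacy,   # or catch something we used to miss?
    }

legacy = {("site-001", "dropout_rate"), ("site-003", "query_age")}
pilot = {("site-001", "dropout_rate"), ("site-007", "screen_failure_rate")}
for bucket, items in compare_runs(legacy, pilot).items():
    print(bucket, sorted(items))
```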

One more thing worth naming. The real blocker on pilots like this is almost never the build. It's data access and legal sign-off. Those conversations start day one, not month two.

05 · the proof I've already built it

Everything you just saw is from a working system I built called VigilNova.

Not a deck. Not wireframes. A full-stack platform with a Postgres database, a FastAPI backend, a React frontend, seeded demo data, and a complete audit trail. Here's the rest of what's in it.
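
To give a feel for the backend's shape, here's an illustrative endpoint in the same stack. The route, response model, and seeded rows are simplified stand-ins for demonstration, not VigilNova's actual code.

```python
# Illustrative sketch of the API shape: one FastAPI route that lists a
# study's risk signals. Route, model, and rows are simplified stand-ins.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SignalOut(BaseModel):
    site_id: str
    kri: str
    severity: str
    status: str

# The real system reads from Postgres; seeded demo rows stand in here.
DEMO_SIGNALS = [
    SignalOut(site_id="site-001", kri="dropout_rate", severity="high", status="open"),
    SignalOut(site_id="site-004", kri="query_age", severity="medium", status="assigned"),
]

@app.get("/studies/{study_id}/signals", response_model=list[SignalOut])
def list_signals(study_id: str, status: str | None = None) -> list[SignalOut]:
    """Return the study's signals, optionally filtered by status."""
    return [s for s in DEMO_SIGNALS if status is None or s.status == status]
```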

[screenshot] study KRIs · Per-study KRI library. Formulas, thresholds, categories, all visible inline. No hidden config files.
[screenshot] subjects · Subject-level detail. Visit compliance, query counts, adverse events, site assignment.
[screenshot] audit trail · Every action logged. Timestamps, users, entities, change details. Inspection-ready.
[screenshot] compliance · SOC 2 controls checklist, 13 of 14 passing. Built-in, not bolted on.
[screenshot] user management · Role-based access. Admin, manager, viewer. Multi-tenant from day one.
[screenshot] settings · Configurable per tenant. Branding, notifications, preferences.

If you want a live walkthrough, I'd love to give you one. A lot of it maps directly onto what we're trying to do.

06 · the scope I'd want

Concretely, here's the shape of the role.

  1. Mature the central monitoring framework across studies.

    Including building out a system that actually works. Not another spreadsheet, not another dashboard that nobody looks at. A living framework where the KRIs, the thresholds, and the review cadence all change together as we learn what matters.

  2. Own the KRI library, thresholds, and escalation workflows.

    End to end. Which metrics we track, where the lines are, what happens when a line is crossed, who gets paged, and how long they have to respond before it rolls up. A real library, not a folder of Excel files.

  3. Tighten the handoff between central monitoring signals and what oversight actually does with them.

    This is the big one. The thing that I think is quietly costing us the most. Close that loop and every other metric gets better, because the signal finally has somewhere to go.

07 · one last thing

I'd want this structured as a direct contractor engagement with Moderna, not through ICON.

A direct relationship gives us more flexibility on how the role is actually shaped, on what I can own, and on how quickly we can move. It also removes a layer of telephone between what you want and what I build. I think that matters more than it sounds.

That's the whole thing. I know it's a lot, but you asked, and I wanted to give you a real answer instead of a hallway one. If any of this resonates, let's grab 30 minutes and I'll walk you through it properly. I'll come to you.

Shivam