Auditable AI with Clinical Precision.
Bridging the gap between Clinical Research and Generative AI. Building systems where the expert remains the architect.
QGuard-Med: AI-supported cardiovascular waitlist prioritization with interpretable scoring
~80% hallucination reduction + 60% faster reporting while maintaining 100% audit compliance at Pangaea Oncology
High-throughput screening identified AMP as novel OPLAH enhancer in heart failure pathway
TGF-α as robust determinant of 5-oxoproline in CKD-HF patients
Moving beyond chatbots. I design systems with strict schema constraints, source grounding, and human-in-the-loop e-signatures to minimize hallucinations in clinical reports.
From local LLMs (Ollama) to secure Cloud (Bedrock). I build reproducible MVPs using FastAPI and Docker that respect patient data privacy boundaries.
PhD research in Heart Failure and oxidative stress. I bring wet-lab methodology (Z', S/B robustness) to AI validation.
Translating protocols into structured visit schedules. LLM-assisted tools that reduce manual workload while maintaining full audit readiness.
Z'-factor optimization (>0.5), high-throughput screening, omics pipelines (R/Python), biomarker discovery workflows
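The Z'-factor referenced above is the standard assay-quality statistic from high-throughput screening (Zhang et al., 1999); a minimal sketch of how it is computed from positive and negative control wells:

```python
from statistics import mean, stdev

def z_prime(pos: list[float], neg: list[float]) -> float:
    """Z'-factor assay quality metric:
    Z' = 1 - 3 * (sd_pos + sd_neg) / |mean_pos - mean_neg|
    Values > 0.5 indicate a robust separation between controls."""
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

def signal_to_background(pos: list[float], neg: list[float]) -> float:
    """S/B ratio: a cruder companion metric to Z'."""
    return mean(pos) / mean(neg)
```

The >0.5 threshold quoted in the skills line is the conventional cut-off for an "excellent" assay window.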
Bias detection across patient populations, representation analysis, clinical impact disparities, equity evaluation
ICH-GCP E6(R3), ALCOA+ principles, audit trail design, regulatory requirements → technical specs
Claude Code, Cursor, Windsurf, Antigravity. Rapid prototyping, Git workflows, Docker, FastAPI microservices
Schema-constrained outputs, chain-of-thought, source citations, edge case testing, ~80% hallucination reduction
Foundation models → orchestration (Bedrock/APIs) → application (RAG/agents) → validation → deployment
Local assistant (Ollama) providing citation-based answers from guidelines without PHI exposure.
On-device voice assistant for stepwise execution of laboratory protocols with audit logging.
Privacy-focused tool for medication follow-up (PrEP) with clear clinical responsibility boundaries.
Designed and executed an independent benchmark to evaluate clinical reasoning, demographic bias, and specialty-level performance of large language models using structured clinical cases. Implemented robust statistical analysis (including difficulty-aware normalization and subgroup analyses) with reproducible local execution.
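One plausible form of the difficulty-aware normalization mentioned above (the benchmark's actual scheme is not specified here, so this is an assumption): macro-average accuracy across difficulty bins, so a model cannot inflate its score by excelling only on easy cases.

```python
from collections import defaultdict
from statistics import mean

def difficulty_normalized_accuracy(results: list[tuple[str, bool]]) -> float:
    """results: (difficulty_bin, correct) per clinical case.
    Returns the macro-average of per-bin accuracies, weighting
    each difficulty level equally regardless of case counts."""
    by_bin: dict[str, list[float]] = defaultdict(list)
    for difficulty, correct in results:
        by_bin[difficulty].append(1.0 if correct else 0.0)
    return mean(mean(scores) for scores in by_bin.values())
```

The same grouping extends naturally to the subgroup (demographic) analyses: replace the difficulty bin with a subgroup key and compare per-group accuracies.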
Built an early MVP for an AI-supported cardiovascular waitlist prioritization workflow, combining clinical criteria and patient-reported inputs into an interpretable prioritization score. Designed with an integration mindset for clinical workflows and future EHR connectivity.
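An interpretable prioritization score of the kind described above can be sketched as a weighted sum that always returns its per-criterion breakdown alongside the total. The criteria, weights, and scaling below are hypothetical placeholders for illustration; real weights would come from clinical consensus and validation, not from this sketch.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights, for illustration only.
WEIGHTS = {"nyha_class": 0.4, "wait_days": 0.3, "symptom_score": 0.3}

@dataclass
class Patient:
    nyha_class: int     # 1-4, clinician-assessed
    wait_days: int      # days on the waitlist
    symptom_score: int  # 0-10, patient-reported

def priority(p: Patient) -> tuple[float, dict[str, float]]:
    """Return the score AND its per-criterion breakdown, so every
    ranking decision is inspectable rather than a black box."""
    parts = {
        "nyha_class": WEIGHTS["nyha_class"] * (p.nyha_class - 1) / 3,
        "wait_days": WEIGHTS["wait_days"] * min(p.wait_days / 180, 1.0),
        "symptom_score": WEIGHTS["symptom_score"] * p.symptom_score / 10,
    }
    return round(sum(parts.values()), 3), parts
```

Exposing the breakdown, not just the score, is what keeps the clinician the architect of the ranking decision.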
Developed a practical framework for validation, risk management, and governance of clinical AI/GenAI systems, focusing on traceability, bias evaluation, safety-by-design, and implementation readiness in regulated healthcare environments.
Conceptualized an LLM-based assistant for lab protocol support, designed to guide stepwise execution, reduce procedural errors, and maintain session continuity across multi-day experiments, with "safe" vs "exploration" usage modes.
Designed a mobile companion concept for PrEP adherence support, including discrete reminders, longitudinal adherence tracking, and patient education features, with a roadmap for clinical-grade monitoring and evaluation.
Proposed a monitoring layer for LLM-enabled clinical tools (e.g., scribes/copilots) to track performance drift, safety signals, and systematic errors over time, aligned with clinical governance and quality-improvement practices.
Applied advanced analytics and ML-informed approaches to support biomarker discovery and validation workflows, emphasizing robustness checks, reproducibility, and clinically interpretable outputs.
"The question isn't whether AI will be adopted, but by whom it will be constrained. I ensure we don't outsource clinical judgment."