Nagraj Gaonkar
The magic you are looking for is in the work you are avoiding.
About Me
I’m someone who enjoys going deep into how things work, whether that’s a piece of code, a dataset, or a full system. I care about thoughtful design and building projects that teach me something new each time.
Selected Projects
SILICON (In Progress)
macOS Process Monitoring Agent (C++)
A modular macOS proctoring agent written in modern C++, built around a monitor-first approach for safe, gradual enforcement. It features tamper-resistant logging, a thread-safe JSON policy engine, and real-time process visibility using sysctl/libproc, with heartbeat checks for reliability. Designed to stay lightweight while providing deep OS-level insight and a clear path to full filesystem and network control.
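A minimal sketch of the libproc enumeration pattern the agent's visibility layer builds on (illustrative only, not the agent's code; the real monitor adds filtering, heartbeats and tamper-resistant logging on top):

```cpp
#include <libproc.h>
#include <sys/types.h>
#include <cstdio>
#include <vector>

// Sketch: enumerate live processes with libproc, then resolve each
// executable path. Error handling and policy checks are omitted.
int main() {
    // A null buffer makes proc_listpids report the number of bytes needed.
    int bytes = proc_listpids(PROC_ALL_PIDS, 0, nullptr, 0);
    std::vector<pid_t> pids(bytes / sizeof(pid_t));
    bytes = proc_listpids(PROC_ALL_PIDS, 0, pids.data(),
                          static_cast<int>(pids.size() * sizeof(pid_t)));

    for (int i = 0; i < bytes / static_cast<int>(sizeof(pid_t)); ++i) {
        if (pids[i] == 0) continue;                  // unused slots come back as 0
        char path[PROC_PIDPATHINFO_MAXSIZE] = {0};
        if (proc_pidpath(pids[i], path, sizeof(path)) > 0)
            std::printf("%d\t%s\n", pids[i], path);
    }
    return 0;
}
```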
The PitWall Spy
Real‑Time Race Simulation Engine
Full‑fledged C++17 engine for Formula‑1 races. Multithreaded producer–consumer design streams 20 ms snapshots through a thread‑safe ring buffer. Models realistic engine output, driver pace, tyre wear and pit stops; computes positions by total distance and renders a live, colour‑coded leaderboard. JSON‑driven configuration and clean thread orchestration showcase both low‑latency and infrastructure expertise.
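A stripped-down sketch of that producer–consumer shape (not the engine's implementation): a mutex-guarded ring buffer that a simulation thread fills roughly every 20 ms while a consumer drains it. The Snapshot struct and capacity are placeholders.

```cpp
#include <array>
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <optional>
#include <thread>

// Hypothetical stand-in for the engine's per-tick race state.
struct Snapshot { int tick; double leaderDistanceM; };

// Fixed-capacity, thread-safe ring buffer: blocks when full, drains on close.
template <typename T, std::size_t N>
class RingBuffer {
public:
    void push(const T& item) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [this] { return count_ < N; });
        buf_[head_] = item;
        head_ = (head_ + 1) % N;
        ++count_;
        notEmpty_.notify_one();
    }
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [this] { return count_ > 0 || done_; });
        if (count_ == 0) return std::nullopt;        // producer finished
        T item = buf_[tail_];
        tail_ = (tail_ + 1) % N;
        --count_;
        notFull_.notify_one();
        return item;
    }
    void close() {
        std::lock_guard<std::mutex> lk(m_);
        done_ = true;
        notEmpty_.notify_all();
    }
private:
    std::array<T, N> buf_{};
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
    bool done_ = false;
    std::mutex m_;
    std::condition_variable notEmpty_, notFull_;
};

int main() {
    RingBuffer<Snapshot, 64> ring;

    std::thread producer([&] {                       // simulation thread
        for (int t = 0; t < 50; ++t) {
            ring.push({t, t * 1.5});
            std::this_thread::sleep_for(std::chrono::milliseconds(20));
        }
        ring.close();
    });

    std::thread consumer([&] {                       // leaderboard / render thread
        while (auto s = ring.pop())
            std::printf("tick %d, leader at %.1f m\n", s->tick, s->leaderDistanceM);
    });

    producer.join();
    consumer.join();
}
```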
No More Circle, Escaping VSM loop
Cranfield Search Engine
Course project for IIT‑Madras NLP. Built a modular search engine that indexes the Cranfield collection using VSM, LSI and k‑means‑based clustering. Implements sentence segmentation, tokenization, stopword removal, lemmatization/stemming, WordNet enhancements, spell check and autocompletion; allows experimenting with different IR models and preprocessing options.
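The VSM core is compact to state: rank documents by the cosine between their TF-IDF vectors and the query's vector. A toy illustration of just that ranking step, written in C++ for consistency with the rest of this page (the course project itself is a separate codebase with a much richer preprocessing pipeline), over a few made-up Cranfield-style sentences:

```cpp
#include <cmath>
#include <cstdio>
#include <map>
#include <sstream>
#include <string>
#include <vector>

using TermVec = std::map<std::string, double>;

// Raw term frequencies for one whitespace-tokenised document (toy preprocessing).
TermVec termFreq(const std::string& text) {
    TermVec tf;
    std::istringstream in(text);
    for (std::string w; in >> w;) tf[w] += 1.0;
    return tf;
}

// Cosine similarity between two sparse term-weight vectors.
double cosine(const TermVec& a, const TermVec& b) {
    double dot = 0, na = 0, nb = 0;
    for (auto& [t, w] : a) {
        na += w * w;
        if (auto it = b.find(t); it != b.end()) dot += w * it->second;
    }
    for (auto& [t, w] : b) nb += w * w;
    return (na == 0 || nb == 0) ? 0 : dot / (std::sqrt(na) * std::sqrt(nb));
}

int main() {
    std::vector<std::string> docs = {
        "wing pressure distribution at supersonic speed",
        "boundary layer transition on a flat plate",
        "heat transfer in supersonic boundary layer flow"};

    // Term frequencies per document plus document frequencies per term.
    std::vector<TermVec> tfs;
    TermVec df;
    for (auto& d : docs) {
        tfs.push_back(termFreq(d));
        for (auto& [t, w] : tfs.back()) df[t] += 1.0;
    }

    // TF-IDF weighting with a smoothed IDF to avoid division by zero.
    auto weight = [&](TermVec v) {
        for (auto& [t, w] : v) {
            auto it = df.find(t);
            double n = (it != df.end()) ? it->second : 0.0;
            w *= std::log(1.0 + docs.size() / (n + 1.0));
        }
        return v;
    };

    TermVec q = weight(termFreq("supersonic boundary layer"));
    for (std::size_t i = 0; i < docs.size(); ++i)
        std::printf("doc %zu: %.3f\n", i, cosine(q, weight(tfs[i])));
}
```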
Open Source & arXiv Pre-Print
LLVM Project
llvm/llvm-project
Enabled X86 SIMD shuffle intrinsics (PSHUFD, PSHUFLW, PSHUFW) to be evaluated in constexpr contexts
Improves compile-time evaluation of vector shuffle intrinsics, enabling advanced constexpr metaprogramming and safer SIMD-heavy code paths.
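Roughly the kind of code this enables, sketched under the assumption of a Clang build that carries the patch and also has constexpr-enabled _mm_set_epi32 and _mm_cvtsi128_si32 (as recent Clang versions do):

```cpp
#include <immintrin.h>

// Sketch: with constexpr-enabled PSHUFD, a lane shuffle can be verified at
// compile time. Assumes an x86-64 target and a Clang new enough that the
// helper intrinsics used here are also constexpr.
constexpr int lowLaneAfterReverse() {
    constexpr __m128i v = _mm_set_epi32(4, 3, 2, 1);   // lanes {1, 2, 3, 4}
    constexpr __m128i r = _mm_shuffle_epi32(v, 0x1B);  // PSHUFD: reverse the lanes
    return _mm_cvtsi128_si32(r);                       // low lane is now 4
}

static_assert(lowLaneAfterReverse() == 4,
              "PSHUFD evaluated entirely at compile time");

int main() { return lowLaneAfterReverse() == 4 ? 0 : 1; }
```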
VERITAS (Pre-Print)
Verification and Explanation of Realness in Images for Transparency in AI Systems
VERITAS upscales a 32×32 image, highlights relevant regions with GradCAM, scores patches using CLIP and then generates human‑readable anomaly descriptions via a vision‑language model.
VERITAS delivers fine‑grained real/fake analysis for tiny images and enhances trust in AI by making decisions transparent.
Technical Engagements
Ebullient Securities - InterIIT Tech Meet 14.0
Oct '25 - Dec '25 | IIT Madras - Quant Contingent
Designed a regime-aware, intraday systematic trading framework operating on large-scale tick data, combining trend-following, mean-reversion, and guardian-based flip logic. Focused on robustness via strict no-lookahead constraints, parallelized backtesting, adaptive risk controls, and cross-dataset generalization across two masked exchanges.
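The no-lookahead rule is the easiest of those constraints to get subtly wrong: a signal computed at tick t may only use data up to t, and the resulting order can only fill at a later tick. A toy illustration using a rolling-mean trend rule (synthetic data, not the contingent's actual strategy):

```cpp
#include <cstdio>
#include <deque>
#include <numeric>
#include <vector>

// Toy tick: timestamp (ns) and traded price. The real framework consumes far
// richer tick data; this only illustrates the no-lookahead execution rule.
struct Tick { long long ts; double price; };

int main() {
    std::vector<Tick> ticks;
    for (int i = 0; i < 200; ++i)                       // synthetic tick stream
        ticks.push_back({i * 1'000'000LL, 100.0 + 0.05 * i + ((i % 7) - 3) * 0.2});

    std::deque<double> window;                          // rolling 20-tick mean
    const std::size_t W = 20;
    int position = 0;                                   // -1 short, 0 flat, +1 long
    int pendingOrder = 0;                               // decided at t, filled at t+1
    double pnl = 0.0, entry = 0.0;

    for (std::size_t t = 0; t < ticks.size(); ++t) {
        // 1) Fill the order decided on the PREVIOUS tick at the CURRENT price.
        if (pendingOrder != 0 && pendingOrder != position) {
            if (position != 0) pnl += position * (ticks[t].price - entry);
            position = pendingOrder;
            entry = ticks[t].price;
        }
        pendingOrder = 0;

        // 2) Update the signal using only data up to and including tick t.
        window.push_back(ticks[t].price);
        if (window.size() > W) window.pop_front();
        if (window.size() < W) continue;
        double mean = std::accumulate(window.begin(), window.end(), 0.0) / W;

        // 3) Decide now, execute next tick: a simple trend-following rule.
        pendingOrder = (ticks[t].price > mean) ? +1 : -1;
    }
    std::printf("final position %+d, realized pnl %.2f\n", position, pnl);
}
```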
Adobe Research - InterIIT Tech Meet 13.0
Nov '24 - Dec '24 | IIT Madras - Computer Vision Contingent
Built a multimodal pipeline for AI-generated image detection, combining visual artifact analysis with language-guided reasoning for interpretable outputs. Created the evaluation and experimentation framework that later evolved into VERITAS.