I'm a software engineer based in India. I work across the full stack – shipping production AI systems for clients, and contributing fixes to the open-source tools the ML and agent ecosystem runs on.
- 🔧 I find correctness bugs, silent failures, and security holes in production codebases and fix them
- 🤖 I build AI-native products – pipelines, agents, verification systems, clinical tooling
- 💼 Open to full-time remote roles in Backend, Applied AI, and Full Stack Engineering
I contribute to the internals of tools I use – compiler stacks, agent runtimes, LLM frameworks. Here's what I've shipped:
karpathy/autoresearch – AI agent framework, 50k ⭐
- PR #17 – Self-healing error recovery: the agent parses its own stack traces and resumes from OOM/syntax failures without stopping. Contributed within hours of the repo going public. Shopify's CEO ran it overnight and got 53% faster rendering from 93 automated commits.
- PR #16 – Eliminated ~4GB of persistent VRAM overhead per run by switching tanh to native bfloat16 – headroom that matters on a single H100 running 100+ experiments overnight.
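The recovery loop in PR #17 can be sketched roughly like this (the names and structure here are illustrative, not the framework's actual API): on failure, capture the traceback and feed it back into the next attempt instead of aborting the run.

```python
import traceback


def run_with_recovery(step, max_retries=3):
    """Hypothetical sketch of a self-healing loop: on failure, capture
    the stack trace and hand it to the next attempt as context."""
    error_context = None
    for _ in range(max_retries):
        try:
            return step(error_context)
        except Exception:
            # The agent "reads" its own stack trace and resumes from it.
            error_context = traceback.format_exc()
    raise RuntimeError(f"step failed after {max_retries} retries")
```

The key design point is that the error is data, not a stop signal: the traceback becomes input to the next attempt.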
googleworkspace/cli – official Google repo, Rust, 20k ⭐ in 2 weeks
- PR #500 – Eliminated a TOCTOU race condition in atomic writes: OAuth tokens were briefly world-readable between creation and permission enforcement. Enforced `0o600` atomically at creation, closing the leak window for ~21,000 users.
- PR #506 – Google Meet integration for `calendar` + `insert` with deterministic `requestId` hashing – eliminates duplicate Meet links on API retries.
- PR #542 – Fixed a silent auth failure: token-directory errors were swallowed, so credentials were never persisted while auth appeared to succeed.
NVIDIA/NemoClaw – NVIDIA agent security stack, 15k ⭐
- PR #187 – Enforced `0o600` on `openclaw.json` during migration – session and routing config was world-readable by default. Propagated into 3 downstream forks within 13 hours of merge.
- PR #186 – `chmod 600` on `.env` at sandbox startup – closed the exposure window before any agent process reads secrets.
- PR #174 – `chmod 600` on the remote `.env` post-SCP during `deploy()` – a different attack surface from #186; both were unpatched.
pytorch/pytorch – C++ & Python compiler stack, 63% of global model training
- PR #169128 – Fixed silent data corruption in Inductor C++ kernels: OpenMP and OpenCV used together silently produce wrong training outputs, with no error thrown. Affects 63% of global model training.
- PR #169786 / #169126 / #169788 – `torch.compile` crashes on nested autograd, Conv+BatchNorm FX graph failures, and silent `index_select` out-of-bounds corruption – across ~1.57B pip installs.
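The `index_select` bug class is worth making concrete: an out-of-range index that doesn't raise simply reads the wrong memory. A framework-free sketch of the loud-failure behavior such a fix restores (plain Python, where a negative index would otherwise silently wrap around):

```python
def checked_gather(values, indices):
    # Validate every index before touching data, so a bad index fails
    # loudly instead of silently returning wrong values.
    n = len(values)
    for i in indices:
        if not 0 <= i < n:
            raise IndexError(f"index {i} out of range for length {n}")
    return [values[i] for i in indices]
```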
langchain-ai/langchain & langgraph – LLM frameworks, used by Replit, Uber, LinkedIn, GitLab
- PR #6509 – Fixed a validation crash blocking Generic type-hint injection in LangGraph – unblocked `ToolRuntime[Context]` patterns in typed agent pipelines.
- PR #34046 / #34053 / #34114 – Fixed silent tool-calling failures across Mistral, Groq, and Ollama: TypeErrors that caused agent workflows to silently stop on provider switch. ~500k developers monthly.
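The silent-failure pattern behind those provider bugs is worth illustrating. A hypothetical normalizer – not LangChain's actual code – that validates the provider payload and raises a descriptive error instead of letting a TypeError vanish inside the agent loop:

```python
def extract_tool_calls(message: dict) -> list:
    # Providers differ on how (and whether) they return tool calls;
    # normalize defensively rather than assuming one shape.
    calls = message.get("tool_calls")
    if calls is None:
        return []
    if not isinstance(calls, list):
        raise TypeError(
            f"expected a list of tool calls, got {type(calls).__name__}: {calls!r}"
        )
    return calls
```

The point is the failure mode: a malformed payload produces an actionable error at the boundary, not a workflow that quietly stops calling tools.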
kubernetes/kubernetes – Go, 5.6M developers, 3M+ production clusters
- PR #135460 – Modernizing the Dynamic Resource Allocation (DRA) API: migrating `DeviceTaint` and `AllocatedDeviceStatus` to declarative API markers across ~3M production clusters.
- 🏆 3rd Place – Splunk Global Hackathon 2025, out of 1,200+ entries – OPA security add-on for real-time threat detection and privilege-escalation auditing
- ⚡ Celer AI – voice-to-text pipeline for clinicians; cut medical report generation from 15 minutes to 40 seconds (95% reduction)
- 🛡️ Trustful – hallucination-detection layer for insurance data extraction using Gemini + Zod validation. A hallucinated policy number in insurance is a liability, not a UX bug.
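Trustful's Zod layer is TypeScript, but the principle ports anywhere: treat model output as untrusted input and validate it against a strict schema before it enters the system. A rough Python analog – the field name and policy-number format are invented for illustration:

```python
import re

# Assumed format for illustration only; real insurers each have their own.
POLICY_RE = re.compile(r"POL-\d{8}")


def validate_extraction(record: dict) -> dict:
    # Reject anything the model "invented" that doesn't match the schema;
    # a hallucinated policy number must fail here, not downstream.
    policy = record.get("policy_number", "")
    if not POLICY_RE.fullmatch(policy):
        raise ValueError(f"policy_number failed validation: {policy!r}")
    return record
```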

