APRIL 10 - JUNE 13, 2026

NandaHack:
Agentic AI
Hackathon

HCLTech · MIT Media Lab

Build a system, tool, or layer that fixes a real bottleneck in how AI agents are deployed, coordinated, or scaled.


What is NandaHack?

A high-signal hackathon for teams building practical infrastructure for enterprise AI agents.

Mission

Move beyond chatbot demos and build the trust, coordination, memory, safety, and integration layers that enterprises actually need.

Format

Two phases over ten weeks. Teams prototype, iterate with mentor feedback, and present at a final live event at MIT Media Lab.

Who should join

Builders and engineers

You like shipping real systems, not just demos. Bring your stack and build the missing agent layers.

Researchers and students

Test new ideas in production-like scenarios and get feedback from practitioners, judges, and mentors.

Product and domain teams

Have a real enterprise pain point? Turn it into a solution with measurable business impact.

Why participate

Mentors and feedback

Get direct input from technical and business leaders across enterprise AI and systems design.

Real-world evaluation

Your work is judged on utility, robustness, and how well a stock agent can actually use it.

Career and visibility

Top teams present at the final MIT event and gain visibility with leaders across the ecosystem.

Phases & Tracks

Build solutions that can be used by real enterprise agents in real operational workflows.

PHASE 1 · Apr 10 - June 13, 2026

Infrastructure Agents

Build agent infrastructure that solves real enterprise problems. On demo day, a judge hands a stock OpenClaw agent your instructions — if it can use what you built, you pass.

Track | What It Builds | Pipeline Impact | Revenue Lever
Client 0 | Internal AI agents, copilots, productivity tools | Creates proven use cases + credibility | Faster deal conversion (proof-led selling)
Enterprise AI (incl. Modernization) | Client-ready solutions (SAP, legacy, agents) | Generates direct qualified opportunities | Large transformation deals ($$$)
Sales AI Enablement (Client 0 x Enterprise) | AI-powered account insights, RFP agents | Improves deal identification + win rates | Pipeline expansion + higher conversion
Responsible AI | Safety, governance, red-teaming solutions | Reduces enterprise adoption friction | Unlocks regulated industry deals (BFSI, healthcare)
Executive AI Coaching | AI copilots for CXOs + coaching model | Opens C-suite access + strategic deals | High-value consulting + transformation programs
PHASE 2 · June 13 - July 11, 2026

The Arena

Top teams from Phase 1 enter head-to-head arena challenges. Agents compete live across six arenas — deploy, coordinate, trade, adapt, defend, and integrate — all under pressure.

Arena | Challenge | Tagline | What It Tests
The Forge | Deploy or die - fastest production-grade agent deployment | Deployment speed & reliability | Infrastructure readiness
The Commons | Multi-agent coordination via shared memory | Collective intelligence that works | Coordination efficiency
The Bazaar | Agent economy - trade services, verify identity, settle payments | Trust no one, verify everything | Economic infrastructure
The Gauntlet | Rapid-fire unknown challenges - pure reasoning under pressure | Adapt or be forgotten | Generalization ability
The Wall | Survive adversarial attacks while completing tasks | Last agent standing | Security & resilience
The Throne | Full integration - deploy, coordinate, trade, and defend simultaneously | Rule them all | Grand champion

Judging

The judging is simple and weighted toward substance.

Impact

40%

Does it solve a real problem that developers actually have?

Technical Depth

40%

Is the implementation solid, thoughtful, and technically credible?

Simplicity

20%

Is it easy to understand, adopt, and use?
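To make the weighting concrete, here is a minimal sketch of how the three criteria combine into a single score. This is an illustration only, not the official scoring code: the 0-10 per-criterion scale and the function name are assumptions; only the 40/40/20 weights come from the rubric above.

```python
# Hypothetical illustration of the stated judging weights (40/40/20).
# Not official scoring code; the 0-10 scale is an assumption.
WEIGHTS = {"impact": 0.40, "technical_depth": 0.40, "simplicity": 0.20}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (assumed 0-10 scale) into a weighted total."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Example: strong impact and depth, average simplicity.
total = weighted_score({"impact": 9, "technical_depth": 8, "simplicity": 6})
print(round(total, 2))
```

A project scoring 9 on impact, 8 on technical depth, and 6 on simplicity would land at 8.0 overall, which is why polishing impact and depth pays off twice as much as polishing simplicity.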

Timeline

Ten weeks, two phases, one final event at MIT

Announcement · Apr 10
Phase 1 · Apr 10 - June 13
Phase 2 · June 13 - July 11
Hackathon and Awards at MIT Media Lab · July 11, 2026

FAQ

Who can apply?

Students, builders, researchers, and professionals are welcome. Cross-functional teams are encouraged.

Do I need a full team before applying?

No. You can apply as an individual or as a team. Team matching support will be available before the build phase.

What should we submit?

A working prototype, clear documentation, and a short demo that shows the problem, approach, and impact.

What makes a strong project?

Projects that solve high-friction enterprise bottlenecks with practical, reliable, and testable agent workflows.