Project Overview
UFL (Unstoppable Future Life) is a gamified life operating system built for developers, fitness enthusiasts, and ambitious solopreneurs who want to track and execute across every domain of their life — not just habits.
The platform goes well beyond a habit tracker. It is a full AI planning and execution layer: users can generate multi-week plans across 9 life domains (career, project, income, fitness, health, learning, business, mindset, networking), track daily todos with session timers, receive real-time WhatsApp alerts, and accumulate XP and streaks that feed into a visible gamification layer — all from a single dashboard.
- Platform: Web (Next.js 15 App Router) — mobile-first PWA via responsive design
- Timeline: 6 months (solo build, actively in production)
- Role: Full-Stack Developer (Solo)
- AI Layer: 15 specialized agents — Gemini (primary LLM) + Groq (fallback LLM)
- Payments: Razorpay (India) + Stripe (International)
- Notifications: WhatsApp Business API (Meta), with 6 pre-approved message templates
- Database: MongoDB via Prisma ORM (Prisma Client JS)
- Deployment: Vercel + MongoDB Atlas
The Problem
Most productivity tools optimise for one vertical. Notion tracks projects. Habitica adds points to habits. Calendly manages time. None of them talk to each other.
The real problem is coordination cost: a developer who is also building a side project, hitting the gym, growing a freelance income, and trying to improve their mindset has to manage 4–6 separate tools — and none of those tools know about the others.
The client (and the developer — this was a self-built product) wanted a single system that:
- Generated actionable, week-by-week plans across every life domain — not advice, but structured task lists with deadlines
- Tracked execution at task level with session timers — not just "completed" / "not completed", but actual time-on-task measurement per session with pause/overtime tracking
- Delivered contextual WhatsApp messages — morning briefings, deadline alerts, midnight summaries — not generic push notifications with the same text every day
- Gamified progress without gimmicks — a streak/XP/level system where numbers reflect real output, not just daily check-ins
- Integrated real developer context — WakaTime coding hours and GitHub commits pulled live into the dashboard so the system actually knows how much you coded today
The existing MVP at the start of this project was a basic todo list with no AI, no session tracking, and a single WhatsApp reminder that fired at a hardcoded time.
Architecture & Tech Stack
Tech Stack
- Frontend: Next.js 15 (App Router) — SSR for fast initial load, React Server Components where possible
- Backend: Next.js API Routes — no separate Express server, all 60+ API endpoints live under /app/api/
- Database: MongoDB (via Prisma ORM) — document model fits the wide variety of plan types without schema rigidity of SQL
- Primary LLM: Google Gemini — main backbone for all 15 agents
- Fallback LLM: Groq — automatic on Gemini rate-limit or timeout, same output schema expected from both
- Auth: NextAuth.js (Google OAuth + credentials) with custom session extension for XP, level, and phone
- Payments: Razorpay (INR subscriptions) + Stripe (international) — a unified /api/checkout and /api/webhooks/ layer handles both
- Notifications: WhatsApp Business API — 6 Meta-approved templates, 9 cron-based triggers
- WakaTime: WakaTime REST API pulled daily via cron into WakaTime model — coding time, language breakdown, project split
- GitHub: GitHub REST API — commit counts synced into dashboard stats
- Deployment: Vercel (frontend + all API routes) + MongoDB Atlas
API Surface

The platform has over 60 distinct API routes organized by domain:

```
/api/
├── agents/        # 15 agent endpoints (career, gym, income, project, business...)
├── blueprints/    # Cross-domain unified task dashboard
├── challenges/    # 90-day challenge create/track
├── cron/          # 9 scheduled jobs (morning briefing, midnight, eod, wakatime, reminders...)
├── daily-goals/   # AI-generated daily schedule from active routine
├── habits/        # Habit CRUD + log history
├── routines/      # Named daily routines with task templates
├── todos/         # Todo CRUD + session tracking + dashboard grouping
├── subscription/  # Stripe + Razorpay checkout, limits, webhooks
├── webhooks/      # Stripe, Razorpay, WhatsApp inbound
└── analytics/     # Shareable progress card, dashboard stats
```

The 15-Agent System
The platform's AI backbone is 15 domain-specialized agents, each accessed via its own API route under /api/agents/. Every agent shares a common context request format but generates domain-specific structured output that maps directly to Prisma models.
Agent Roster
| Agent | Domain | Output Model |
|---|---|---|
| Career Agent | Career roadmap + milestones | CareerPlan + CareerMilestone[] |
| Project Agent | Software project phases + daily dev tasks | ProjectPlan + ProjectPhase[] + ProjectTask[] |
| Income Agent | Freelance/revenue strategy + weekly tasks | IncomePlan + IncomeWeek[] + IncomeTask[] |
| Gym Agent | Workout plan + per-day exercise sets with weights | WorkoutPlan + Workout[] + WorkoutExercise[] |
| Health Agent | Nutrition / sleep / stress tasks by week | HealthPlan + HealthTask[] |
| Learning Agent | Course/skill roadmap with capstone project | LearningPlan + LearningTask[] |
| Business Agent | Startup idea validation + weekly execution | BusinessPlan + BusinessTask[] |
| Mindset Agent | Journaling, affirmations, habit rewiring | MindsetPlan + MindsetTask[] |
| Networking Agent | LinkedIn/Twitter/email outreach tasks | NetworkingPlan + NetworkingTask[] |
| Relationships Agent | Relationship investment activities | RelationshipPlan + RelationshipTask[] |
| Productivity Agent | Focus systems, routines, review cycles | ProductivityPlan + ProductivityTask[] |
| Life Agent | Cross-domain life architecture | LifePlan + LifeTask[] |
| Accountability Agent | Feedback analysis + replan triggers | UserFeedback → ReplanLog |
| Orchestrator Agent | Routes multi-domain requests to sub-agents | Routes to all of the above |
| Limits Agent | Enforces free vs. pro feature gates | Read-only auth check |
LLM Fallback Pattern
```typescript
async function runAgent(prompt: string, schema: ZodSchema) {
  try {
    const result = await callGemini(prompt)
    return schema.parse(JSON.parse(result)) // validate output
  } catch (err: any) {
    // Fall back only on rate-limit (429) or service-unavailable (503)
    if (err?.status === 429 || err?.status === 503) {
      const fallback = await callGroq(prompt) // same prompt, same schema
      return schema.parse(JSON.parse(fallback))
    }
    throw err
  }
}
```

Both LLMs receive the same structured system prompt and must return JSON matching a Zod schema. If Groq's output fails schema validation, the error is surfaced to the user rather than silently persisting malformed plan data.
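A minimal, dependency-free sketch of the kind of shape check the Zod schema enforces at this boundary. The field set here (goal, weeks, weekNumber, tasks) mirrors the nested plan pattern described later in this writeup; the guard itself is illustrative, since the real code uses z.object(...) and z.array(...):

```typescript
// Illustrative stand-in for the Zod schema: rejects structurally valid JSON
// that is missing required fields or has wrong types, before any Prisma write.
interface PlanTask {
  dayNumber: number
  title: string
}

interface PlanWeek {
  weekNumber: number
  focus: string
  tasks: PlanTask[]
}

function isValidPlan(value: unknown): value is { goal: string; weeks: PlanWeek[] } {
  if (typeof value !== "object" || value === null) return false
  const plan = value as Record<string, unknown>
  const weeks = plan.weeks
  if (typeof plan.goal !== "string" || !Array.isArray(weeks)) return false
  return weeks.every(
    (w: any) =>
      Number.isInteger(w?.weekNumber) &&
      typeof w?.focus === "string" &&
      Array.isArray(w?.tasks) &&
      w.tasks.every((t: any) => Number.isInteger(t?.dayNumber) && typeof t?.title === "string")
  )
}
```

The key property is the same one the Zod version gives: a plan with a missing weekNumber or a string where an integer belongs never reaches the database.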
Agent Memory
Each agent session is stored in AgentConversation (message history by domain) and AgentMemory (key-value facts the agent has learned about the user — profession, goals, preferences). This allows follow-up conversations within the same domain to stay contextually coherent without re-explaining from scratch.
Data Model — Why MongoDB + Prisma
The schema has 45+ models across 9 plan domains. Every domain follows the same nested pattern:
```
[Domain]Plan
├── goal, strategy, startDate, endDate, isActive
└── [Domain]Week[]
    ├── weekNumber, focus
    └── [Domain]Task[]
        ├── dayNumber, title, description
        ├── isCompleted, completedAt, deadline
        └── domain-specific fields (e.g. resource, platform, type)
```

This pattern powers the unified Blueprint Dashboard — a BlueprintTask model aggregates tasks across all domain plans into a single cross-domain view without duplicating data: it stores planId, planType, weekNumber, and dayNumber as a lightweight index into the source models.
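The index idea can be sketched as a resolver: a BlueprintTask stores only coordinates into a source plan, never a copy of the task itself. The coordinate fields come from the description above; resolveBlueprintTask and the plan shapes are hypothetical illustrations, not the actual Prisma models:

```typescript
// A BlueprintTask reference points into a source plan by coordinates only.
interface BlueprintTaskRef {
  planId: string
  planType: string // e.g. "career", "gym", "learning"
  weekNumber: number
  dayNumber: number
}

interface DomainTask {
  dayNumber: number
  title: string
  isCompleted: boolean
}

interface DomainPlan {
  id: string
  weeks: { weekNumber: number; tasks: DomainTask[] }[]
}

// Resolve a cross-domain dashboard entry back to its single source of truth.
function resolveBlueprintTask(
  ref: BlueprintTaskRef,
  plansById: Map<string, DomainPlan>
): DomainTask | undefined {
  const plan = plansById.get(ref.planId)
  const week = plan?.weeks.find((w) => w.weekNumber === ref.weekNumber)
  return week?.tasks.find((t) => t.dayNumber === ref.dayNumber)
}
```

Because the dashboard only dereferences, a completion toggled in the source plan is immediately reflected in the blueprint view with no sync step.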
Why MongoDB over PostgreSQL
The plan data structures vary significantly per domain — a WorkoutTask needs sets, reps, weight, and gifUrl. A NetworkingTask needs platform. A LearningTask needs resource. Forcing these into a shared relational table would require either nullable columns for every field or a complex EAV pattern.
MongoDB's document model lets each plan type own its fields cleanly while Prisma provides type-safe queries, migrations, and relation traversal — giving the ergonomics of an ORM without the schema rigidity of SQL for highly heterogeneous data.
Todo + Session Model (Core Engine)
The Todo model is the daily execution layer and is the most complex model in the schema:
```
Todo
├── task, category, status ("upcoming" | "in-progress" | "completed" | "time-up")
├── startTime, deadline, reminderTime, plannedTime
├── earnedXp, delayCount, lastDelayedAt
├── isAIGenerated, habitId, challengeId   // links to parent systems
├── whatsappNotified, whatsappDeadlineNotified
└── TodoSession[]
    ├── order, targetDuration, breakDuration
    ├── startedAt, endedAt, duration (actual)
    ├── pauseTime, extraTime
    └── status (PENDING | RUNNING | PAUSED | COMPLETED)
```

A single todo can have multiple TodoSession records — e.g., a 90-minute deep work block split into three 30-minute sessions with 5-minute breaks. The session engine tracks pause time and overtime separately, giving accurate actual vs. planned time data per task.
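The actual-vs-planned roll-up this enables can be sketched as a simple reduction over a todo's sessions. Field names follow the TodoSession model above; summarizeSessions is an illustrative helper, not the production aggregation:

```typescript
// Per-todo time accounting across sessions: planned vs. actual, with pause
// time and overtime tracked as separate buckets (all values in minutes).
interface SessionTimes {
  targetDuration: number // planned minutes for the session
  duration: number       // actual focused minutes
  pauseTime: number      // minutes spent paused
  extraTime: number      // minutes of overtime past targetDuration
}

function summarizeSessions(sessions: SessionTimes[]) {
  return sessions.reduce(
    (acc, s) => ({
      planned: acc.planned + s.targetDuration,
      actual: acc.actual + s.duration,
      paused: acc.paused + s.pauseTime,
      overtime: acc.overtime + s.extraTime,
    }),
    { planned: 0, actual: 0, paused: 0, overtime: 0 }
  )
}
```

For the 90-minute deep work example above, three 30-minute sessions would sum to planned: 90, with any pauses and overruns reported separately instead of being folded into a single "time spent" number.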
WhatsApp Notification System
WhatsApp was chosen over email because the target users (Indian developers aged 20–30) have near-100% WhatsApp open rates vs. sub-15% email open rates.
9 Cron-Triggered Notification Jobs
| Cron Job | Timing | Purpose |
|---|---|---|
| morning-briefing | 6:30 AM (user's wake-up time) | Daily plan summary: yesterday's missed tasks, top priority for today, challenge day count |
| whatsapp-reminders | Every 30 min | Todo start reminders — fires when startTime is within the next 30 min and the todo is still "upcoming" |
| whatsapp-deadline-reminders | Every 15 min | Deadline alerts — fires when deadline is within a configurable lead time (user setting) and the todo is not completed |
| daily-goals | 5:00 AM | AI generates the day's todo list from the user's active routine template |
| eod-analysis | 9:00 PM | End-of-day AI analysis of completion rate, XP earned, streak status |
| midnight | 11:59 PM | Midnight summary: what was done, what was missed, streak health |
| wakatime-sync | Every 6 hours | Pulls coding hours from the WakaTime API into the WakaTime model |
| test | On-demand | Verifies WhatsApp delivery to the user's registered number |
| test-reminders | On-demand | Validates reminder cron logic without waiting |
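The eligibility check behind whatsapp-reminders can be isolated as a pure predicate. The real job queries Prisma for matching todos on each run; isDueForStartReminder is a hypothetical helper showing only the window logic:

```typescript
// A todo qualifies for a start reminder when it is still "upcoming", has not
// already been notified, and its startTime falls inside the look-ahead window.
interface ReminderTodo {
  status: string
  startTime: Date
  whatsappNotified: boolean
}

function isDueForStartReminder(todo: ReminderTodo, now: Date, windowMinutes = 30): boolean {
  const msUntilStart = todo.startTime.getTime() - now.getTime()
  return (
    todo.status === "upcoming" &&
    !todo.whatsappNotified && // dedup flag checked before any send
    msUntilStart > 0 &&
    msUntilStart <= windowMinutes * 60_000
  )
}
```

Running this every 30 minutes with a 30-minute window means every upcoming todo gets exactly one pre-start reminder, never zero and never two.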
Template Architecture
Meta only allows free-form messages within 24 hours of a user-initiated conversation. For platform-initiated messages (all cron jobs above), pre-approved templates are required. Six templates were submitted and approved:
```typescript
// Morning briefing template call
sendWhatsAppTemplate(user.phone, "morning_briefing_v1", {
  "1": user.name,
  "2": missedYesterday,    // AI-generated summary of missed tasks
  "3": todayTopPriority,   // highest-priority task for today
  "4": challengeDayStatus, // "Day 14 of 90 — Build in Public"
  "5": appUrl,             // deep link to open today's todos
})
```

The content of variables 2, 3, and 4 is generated by the morning-briefing AI agent, not hardcoded — so each user gets a contextually accurate summary, not a generic "Don't forget your tasks!" message.
Deduplication
Each Todo has whatsappNotified and whatsappDeadlineNotified boolean flags. Cron jobs check these before firing — so if a network retry triggers the same cron twice, no user receives a duplicate WhatsApp message.
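The idempotent send path can be sketched as check-send-set: the flag is read before sending and written immediately after (persisted via Prisma in the real job), so a retried cron invocation becomes a no-op. notifyOnce and sendMessage are illustrative stand-ins:

```typescript
// Idempotent notification: a second call for the same todo does nothing.
interface NotifiableTodo {
  id: string
  whatsappNotified: boolean
}

function notifyOnce(todo: NotifiableTodo, sendMessage: (todoId: string) => void): boolean {
  if (todo.whatsappNotified) return false // already delivered, skip
  sendMessage(todo.id)
  todo.whatsappNotified = true // written back to the database in the real job
  return true
}
```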
Gamification — XP, Streaks & Levels
The gamification layer is woven into the User model directly — no separate progress tables:
```
User
├── xp                      Int       @default(0)
├── level                   Int       @default(1)
├── totalStreakDays         Int       @default(0)
├── streakShields           Int       @default(0)
├── streakShieldContinuity  Int       // days with 50%+ completion → earns shields
├── autoShieldEnabled       Boolean   // auto-activate shield on grace day
├── roleTitle               String    @default("Beginner")
├── roleLevel               Int       @default(0)
└── lastGraceActivation     DateTime?
```

XP System
XP is awarded per Todo completion and stored in Todo.earnedXp. The amount varies by:
- Priority weight — high-priority todos award more XP
- On-time vs. late — completing before the deadline gives a bonus multiplier
- Challenge link — todos linked to an active Challenge get a challenge-mode XP boost
Level thresholds follow a non-linear curve (each level requires more XP than the last), stored in SystemConfig so an admin panel can tune them without a code deploy.
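The lookup against that curve can be sketched as follows. The threshold values below are placeholders for illustration only; the real numbers live in SystemConfig and are tuned from the admin panel:

```typescript
// Placeholder thresholds: cumulative XP required to reach each level.
// Gaps widen each step (100, 150, 250, 400, 600), giving the non-linear curve.
const LEVEL_THRESHOLDS = [0, 100, 250, 500, 900, 1500]

function levelForXp(xp: number, thresholds: number[]): number {
  let level = 1
  for (let i = 0; i < thresholds.length; i++) {
    if (xp >= thresholds[i]) level = i + 1 // highest threshold cleared wins
  }
  return level
}
```

Keeping the table in configuration rather than code means rebalancing the economy (a common need once real usage data arrives) requires no deploy.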
Streak Shields
A streak shield is a grace mechanic — if a user misses a day but has a shield in reserve, their streak continues. Shields are earned automatically by maintaining ≥50% completion rate for multiple consecutive days (streakShieldContinuity). This prevents the common "I missed one day, my streak is gone, I quit" failure mode seen in apps like Duolingo.
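The grace mechanic on a missed day reduces to a single branch: consume a shield if one is in reserve and keep the streak, otherwise reset. Field names follow the User model; applyMissedDay itself is an illustrative sketch:

```typescript
// On a missed day: a shield (if any) absorbs the miss; otherwise streak resets.
interface StreakState {
  totalStreakDays: number
  streakShields: number
}

function applyMissedDay(user: StreakState): StreakState {
  if (user.streakShields > 0) {
    // Shield consumed, streak survives
    return { totalStreakDays: user.totalStreakDays, streakShields: user.streakShields - 1 }
  }
  // No safety net: streak resets to zero
  return { totalStreakDays: 0, streakShields: user.streakShields }
}
```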
90-Day Challenges
The Challenge model is a structured commitment layer:
```
Challenge
├── title, focus, description
├── durationDays (default 90), startDate, endDate
├── status ("active" | "completed" | "failed")
├── streakCount, longestStreak, completionRate
└── autoCreateTodos Boolean   // auto-generates todos daily if true
```

A user can only have one active challenge at a time. The challenge banner appears on the dashboard showing current day, completion rate, and streak — creating a daily accountability anchor.
Progress Trees
The ProgressTree model represents a visual metaphor — every 5 completed todos in a category grows a tree. The forest view on the todos page renders these trees with size/health based on the growthLevel and state ("alive" | "ghost" | "retired"). Ghost trees appear when a category has been neglected, giving a visual signal of imbalance without a harsh "failure" notification.
Subscriptions — Razorpay + Stripe
The platform monetizes via a Pro subscription that unlocks the full agent suite. Free users get access to 3 agents (Career, Project, Business) with a 1-prompt limit each per month.
The Subscription model stores both gateway IDs simultaneously:
```
Subscription
├── userId (1:1 with User)
├── status   // "active" | "canceled" | "past_due" | "trialing"
├── planId   // "free" | "pro_monthly" | "pro_yearly"
├── stripeCustomerId, stripeSubscriptionId
├── razorpayCustomerId, razorpaySubscriptionId
├── currentPeriodStart, currentPeriodEnd
└── cancelAtPeriodEnd
```

Why store both gateway IDs on the same record? A user might initially subscribe via Razorpay (Indian card) and later switch to Stripe (after moving internationally). Having both IDs on a single subscription record means the platform never creates duplicate subscription records for the same user — the webhook handlers check which ID is populated and update the existing record.
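The single-record rule can be sketched as an update that keys on the event's gateway and patches the one existing row in place. applyGatewayEvent and the record shape here are illustrative, not the Prisma model or the actual webhook handlers:

```typescript
// Both gateways update the same Subscription record; their IDs can coexist.
interface SubRecord {
  userId: string
  status: string
  stripeSubscriptionId?: string
  razorpaySubscriptionId?: string
}

function applyGatewayEvent(
  sub: SubRecord,
  gateway: "stripe" | "razorpay",
  subscriptionId: string,
  status: string
): SubRecord {
  const next: SubRecord = { ...sub, status }
  if (gateway === "stripe") next.stripeSubscriptionId = subscriptionId
  else next.razorpaySubscriptionId = subscriptionId
  return next // same userId, never a second record
}
```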
Agent Purchases (One-Time Access)
For users who don't want a full subscription but need a specific agent, the AgentPurchase model enables pay-per-agent access:
```
AgentPurchase
├── userId, agentId
├── purchasedAt, expiresAt   // time-boxed — not permanent
└── amountPaid
```

The /api/agents/limits route checks both Subscription.status and AgentPurchase[] to determine what the current user can access — a single source-of-truth gate called on every agent API before any Gemini token is spent.
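The gate's core decision can be sketched as a pure function: an active or trialing subscription unlocks everything, otherwise an unexpired AgentPurchase for the requested agent grants access. This is an illustrative simplification (the free-tier 1-prompt allowance for the three free agents is omitted):

```typescript
// Access check combining Subscription.status with time-boxed agent purchases.
interface PurchaseLite {
  agentId: string
  expiresAt: Date
}

function canAccessAgent(
  subscriptionStatus: string,
  purchases: PurchaseLite[],
  agentId: string,
  now: Date
): boolean {
  if (subscriptionStatus === "active" || subscriptionStatus === "trialing") return true
  // No subscription: look for a still-valid one-time purchase of this agent
  return purchases.some((p) => p.agentId === agentId && p.expiresAt.getTime() > now.getTime())
}
```

Running this check before the agent call, rather than after, is what keeps free users from burning Gemini tokens on gated agents.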
Key Learnings
- MongoDB + Prisma for heterogeneous domain models was the right call — 9 plan types each with their own task fields would have been a maintenance nightmare in PostgreSQL. Prisma's type safety compensated for MongoDB's lack of foreign key enforcement at the DB level.
- WhatsApp template approval takes 2–7 days per template — and Meta rejects templates with placeholders that look like spam. Write templates that read as natural messages with minimal variable slots. Submit a full month before you need them.
- Cron jobs on Vercel have a 10-second function timeout on the free plan — the morning-briefing cron calls Gemini for every opted-in user, which blew the budget at scale. The fix: process users in batches with a queue instead of a single synchronous loop, and run the cron on the Pro plan with a 60-second timeout.
- Zod schema validation on LLM output is non-negotiable — Gemini occasionally returns valid JSON that doesn't match the expected schema (e.g., a missing weekNumber field, or a string where an integer is expected). Without Zod parsing at the API boundary, malformed data silently persisted and broke the frontend. Every agent response is now validated before any Prisma write.
- The delayCount field on Todo gave unexpected insight — tracking how many times a user delayed a specific task revealed which categories of work users were systematically avoidant about. This became an input to the Accountability Agent for more accurate replanning suggestions.
- Streak shields dramatically improved retention — users who triggered the shield mechanic had 40% higher 30-day retention than users who broke a streak with no shield. The psychology of "I have a safety net" reduces the all-or-nothing mindset that kills most habit apps.
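The batching fix from the Vercel-timeout learning above can be sketched as a generic chunked runner: instead of awaiting the LLM for every user in one serial loop, work is split into fixed-size chunks that run in parallel within each chunk. The batch size and handler are illustrative choices, not the production values:

```typescript
// Process items in fixed-size batches; parallel inside a batch, sequential
// across batches, so each cron invocation stays under the function timeout.
async function processInBatches<T>(
  items: T[],
  batchSize: number,
  handle: (item: T) => Promise<void>
): Promise<number> {
  let processed = 0
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize)
    await Promise.all(batch.map(handle)) // e.g. one Gemini briefing per user
    processed += batch.length
  }
  return processed
}
```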