Prometheus Careers — Case Study

AI career coaching that starts with the job, not the resume. A complete inversion of how resume tools are built — and why most of them produce generic output that doesn't land.

[Image: Prometheus Careers coaching interface]

The Problem

Resumes are written for human readers — narrative arc, polished language, the impression of trajectory. ATS systems and hiring managers scanning for signal don't care about narrative arc. They're pattern-matching against a job description, and most resumes are calibrated against the wrong target.

The deeper problem: every AI resume tool built before this one started from the same wrong assumption. You upload your resume, the AI improves it. But "improve" means nothing without a target. Improve for whom? For which role? For which company's keyword matrix? Without a job description anchor, the AI is optimizing in a vacuum — producing polished-sounding generic output that impresses nobody.

The brand that came before this one — resumecoach.co — ran into a different wall: a European product with the same name had accumulated enough negative reviews to poison search results. The namespace collision was irrecoverable. The product needed a clean start with a name that owned its own space.

The Approach

The architectural rewrite made the job description the mandatory entry point. Everything flows from there:

Job-First State Machine

Five phases in strict sequence: JD_TARGETING → BASELINE_COLLECTION → COACHING → FINALIZE → APPLICATION_TRACKING. The JD is collected first and becomes the anchor for everything downstream. Career data is extracted and typed into a CareerProfile structure — not formatted, extracted.
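The five phases lend themselves to a typed transition table. A minimal sketch, assuming a strictly linear flow with no skips: the phase names come from the case study, but the transition table and `advance` guard are illustrative, not the product's actual implementation.

```typescript
// Hypothetical sketch of the job-first state machine.
// Phase names are from the case study; everything else is illustrative.
type Phase =
  | "JD_TARGETING"
  | "BASELINE_COLLECTION"
  | "COACHING"
  | "FINALIZE"
  | "APPLICATION_TRACKING";

// Strict sequence: each phase may only advance to the next one.
const NEXT: Record<Phase, Phase | null> = {
  JD_TARGETING: "BASELINE_COLLECTION",
  BASELINE_COLLECTION: "COACHING",
  COACHING: "FINALIZE",
  FINALIZE: "APPLICATION_TRACKING",
  APPLICATION_TRACKING: null, // terminal phase
};

function advance(current: Phase): Phase {
  const next = NEXT[current];
  if (next === null) throw new Error(`${current} is a terminal phase`);
  return next;
}
```

Encoding the sequence as data rather than scattered conditionals is what makes "strict sequence" enforceable: there is exactly one place where a transition can be defined.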

Resume as Computed Output

The resume is not stored. It's computed: f(career_data, job_description). For a different job, the same career data produces a different resume — different emphasis, different language, different ordering of experience. This makes provenance tracking possible: every resume is attributable to a specific JD.
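A sketch of what "computed, not stored" implies as a pure function. Only `CareerProfile` and the f(career_data, job_description) framing come from the case study; the field names and the keyword-overlap scoring used here to stand in for "emphasis" are assumptions.

```typescript
// Illustrative types — field names are assumptions, not the product's schema.
interface Experience { title: string; keywords: string[]; bullets: string[] }
interface CareerProfile { experiences: Experience[] }
interface JobDescription { id: string; keywords: string[] }
interface Resume { jdId: string; experiences: Experience[] }

// f(career_data, job_description): the same profile against a different JD
// yields a different resume. Emphasis is sketched here as ordering
// experiences by keyword overlap with the JD.
function computeResume(profile: CareerProfile, jd: JobDescription): Resume {
  const score = (e: Experience) =>
    e.keywords.filter((k) => jd.keywords.includes(k)).length;
  const experiences = [...profile.experiences].sort((a, b) => score(b) - score(a));
  // Provenance: every computed resume carries the ID of the JD it targets.
  return { jdId: jd.id, experiences };
}
```

Because the function is pure, provenance falls out for free: a resume is fully determined by (profile, JD), so storing the JD reference is enough to reproduce it.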

Haiku for the Analysis Pipeline

Resume analysis and keyword extraction route to Claude Haiku — fast and cost-efficient for what is fundamentally a classification task. The coaching layer, where nuance matters, routes to more capable models. Model assignment matches task complexity to model capability, keeping costs viable for a free tier.
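The routing rule can be made explicit as a small dispatch function. This is a sketch: the task taxonomy and model identifiers are illustrative placeholders, not the product's actual configuration or real Anthropic model IDs.

```typescript
// Complexity-based model routing — a sketch, not the product's config.
// Task names and model identifiers are illustrative.
type Task = "keyword_extraction" | "resume_analysis" | "coaching";

function modelFor(task: Task): string {
  switch (task) {
    case "keyword_extraction":
    case "resume_analysis":
      // Classification-style work: fast, cheap model is sufficient.
      return "claude-haiku";
    case "coaching":
      // Nuanced, multi-turn guidance: route to a more capable model.
      return "claude-sonnet";
  }
}
```

Keeping the assignment in one function makes the cost model auditable: every new task type forces an explicit decision about which tier it belongs to.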

Free Tier Without Backend Cost

The V1 free tier runs on a chain of free LLM APIs. The architectural consequence: no backend session management required. Frontend state is the source of truth. This keeps the free tier genuinely free, but means frontend state architecture carries more weight than it would in a typical backend-authenticated app.
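With no backend session, the session has to survive page reloads on the client. A minimal sketch of that responsibility, assuming persistence to a key-value store such as `localStorage` in the browser — the state shape, storage key, and function names are all illustrative.

```typescript
// Frontend state as source of truth — illustrative sketch.
// In the browser, `localStorage` would implement this KV interface.
interface SessionState { phase: string; jdText: string }

interface KV {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SESSION_KEY = "prometheus-session"; // hypothetical storage key

function save(store: KV, state: SessionState): void {
  store.setItem(SESSION_KEY, JSON.stringify(state));
}

function load(store: KV): SessionState | null {
  const raw = store.getItem(SESSION_KEY);
  return raw ? (JSON.parse(raw) as SessionState) : null;
}
```

Injecting the store as an interface rather than touching `localStorage` directly keeps the persistence layer testable outside the browser.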

Key Architectural Decision

V1 was resume-first. Users pasted their resume, then optionally added a job description. The AI would "improve" the resume — and the output was almost always generic. Not bad. Just calibrated against nothing in particular, which in a hiring context is the same as bad.

The core insight: AI career data extraction requires a target to determine which experiences are significant. Without a JD, the model has no way to decide that your three years of healthcare data work is more relevant than your five years of retail management. Both are real. Only one matters for this role.

V2 removed the resume-first entry point entirely. JD is mandatory. The V1 conversational "build from scratch" flow — where users answered turn-by-turn Q&A to generate content — was also removed. It had too many failure points across multi-turn sessions and produced lower-quality output than the structured BASELINE_COLLECTION phase that replaced it.

Results

V2: Complete architectural rewrite. The resume is computed, not stored, and the JD is the mandatory entry point.

5: Clean state machine phases, replacing V1's scattered routing logic and its zombie dead-code paths.

Free: The V1 free tier runs on an open LLM API chain, with no backend state management and no per-user cost.

Jaccard: Keyword overlap validation ensures generated resumes actually cover the JD's required competencies.
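The Jaccard check above reduces to a few lines. The metric is named in the case study; the coverage threshold and function names here are assumptions for illustration.

```typescript
// Jaccard similarity between two keyword sets: |A ∩ B| / |A ∪ B|.
function jaccard(a: Set<string>, b: Set<string>): number {
  const intersection = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}

// Hypothetical validation gate: the threshold value is illustrative.
function coversJd(
  jdKeywords: string[],
  resumeKeywords: string[],
  threshold = 0.6
): boolean {
  return jaccard(new Set(jdKeywords), new Set(resumeKeywords)) >= threshold;
}
```

A failed check is a cheap, model-free signal that a generated resume drifted from the JD and should be regenerated rather than shown to the user.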

The product is in active development — not yet exposed to a mass audience, which means the architectural decisions haven't been pressure-tested at scale yet. The Laney College grant opportunity (if awarded) would add student tracking dashboards and application outcome data, turning a self-service tool into an institutional pipeline with measurable cohort outcomes.

Technologies

Claude API
Claude Haiku
Astro
React
TypeScript
Cloudflare Pages
Supabase
TipTap
Pino
Playwright

The product is live and accepting early users.

Visit prometheus.careers →