3 Weeks. One Live SaaS. Real Paying Customers.
Three weeks from idea to production. That's how long it took to build EasyHeadshots.ai, a SaaS that lets anyone upload casual photos and get back a set of professional AI-generated headshots. No photographer. No studio. No $400 session fee.
I'm writing this because when I was planning the build, I couldn't find a single honest post covering the full picture: the AI pipeline complexity, the authentication and storage decisions, the Stripe integration edge cases, and most importantly - what I'd do differently. This is that post.
If you're thinking about building an AI image SaaS, this will save you significant time. If you just want to understand how a solo developer ships a real product in 21 days, keep reading.
The Idea and Why It Made Sense
The AI headshot space had already been validated by Aragon AI and similar tools charging $29-$49 per set. The market existed. The technology had matured. What hadn't matured was the developer tooling around it - specifically the ability to fine-tune a model quickly on a small set of user photos, generate consistent output, and deliver it reliably at scale.
My edge wasn't the idea. It was speed of execution and a clean integration between the fine-tuning pipeline and the product layer.
The core value proposition was simple: upload 10-20 photos, pay once, get 40 professional headshots within 30 minutes. That's it.
Before writing a single line of code, I validated three things:
- Willingness to pay. Posted a simple landing page with a waitlist and a "coming soon" price of $19. Got 60 sign-ups in the first week from a single Reddit post.
- Technical feasibility. Ran a local test fine-tune on my own photos to confirm the output quality was acceptable.
- Build timeline. Mapped out the full scope and confirmed I could ship an MVP in 3 weeks with the right stack.
The stack decision was straightforward given my background: Nuxt.js for the frontend and API layer, Firebase for auth and storage and database, and Stripe for payments. The AI piece was the only genuine unknown.
The 3-Week Build Timeline
EasyHeadshots.ai - from first commit to first paying customer
Week 1 - Foundation
Nuxt project setup, Firebase auth + storage, user dashboard, photo upload flow, Firestore data model
Week 2 - AI Pipeline
Model fine-tuning API integration, job queue, image processing, webhook handling, generation result delivery
Week 3 - Payments and Launch
Stripe checkout, webhook verification, credit system, polish, error handling, production deploy
Week 1: Foundation - Nuxt, Firebase, and the Upload Flow
The first week was entirely about infrastructure. No AI, no payments. Just a solid, working application shell that I could build on top of.
Setting Up Nuxt 3
I chose Nuxt 3 over a plain Vue SPA or Next.js for one specific reason: server routes. Nuxt's server/api directory gives you full Node.js server-side API routes co-located with your frontend, deployed as a single unit. For a solo developer, this eliminates an entire service boundary.
npx nuxi@latest init easyheadshots
cd easyheadshots
npm install firebase firebase-admin stripe @stripe/stripe-js

The project structure I landed on:
├── server/
│ ├── api/
│ │ ├── generate.post.ts # Trigger AI fine-tune job
│ │ ├── jobs/[id].get.ts # Poll job status
│ │ ├── webhooks/
│ │ │ ├── ai.post.ts # AI provider webhook
│ │ │ └── stripe.post.ts # Stripe webhook
│ │ └── upload-url.post.ts # Generate signed upload URLs
│ └── utils/
│ ├── firebase-admin.ts # Server-side Firebase Admin SDK
│ └── stripe.ts # Stripe server client
├── composables/
│ ├── useAuth.ts
│ ├── useFirestore.ts
│ └── useGeneration.ts
└── pages/
├── index.vue
├── dashboard.vue
└── results/[jobId].vue
Firebase: Auth, Storage, and Firestore
I've built with Firebase on enough projects to know exactly what I'm getting. The combination of Authentication, Storage, and Firestore covers everything an image SaaS needs out of the box: user identity, file hosting, and a real-time database for tracking job state.
The critical architectural decision was using Firebase Admin SDK exclusively on the server side. Never expose the Admin SDK to the client. All client interactions happen through the standard Firebase Web SDK, with server routes validating the user's ID token before touching any privileged data.
// server/utils/firebase-admin.ts
import { initializeApp, getApps, cert } from 'firebase-admin/app'
import { getFirestore } from 'firebase-admin/firestore'
import { getStorage } from 'firebase-admin/storage'
import { getAuth } from 'firebase-admin/auth'
function getFirebaseAdmin() {
if (getApps().length > 0) {
return getApps()[0]
}
return initializeApp({
credential: cert({
projectId: process.env.FIREBASE_PROJECT_ID,
clientEmail: process.env.FIREBASE_CLIENT_EMAIL,
privateKey: process.env.FIREBASE_PRIVATE_KEY?.replace(/\\n/g, '\n'),
}),
storageBucket: process.env.FIREBASE_STORAGE_BUCKET,
})
}
export function getAdminFirestore() {
getFirebaseAdmin()
return getFirestore()
}
export function getAdminStorage() {
getFirebaseAdmin()
return getStorage()
}
export function getAdminAuth() {
getFirebaseAdmin()
return getAuth()
}

Every server route starts with the same pattern - verify the token, extract the user ID, proceed:
// server/api/upload-url.post.ts
import { getAdminAuth, getAdminStorage } from '~/server/utils/firebase-admin'
export default defineEventHandler(async (event) => {
const authHeader = getHeader(event, 'authorization')
if (!authHeader?.startsWith('Bearer ')) {
throw createError({ statusCode: 401, message: 'Unauthorized' })
}
const token = authHeader.slice(7)
const decodedToken = await getAdminAuth().verifyIdToken(token)
const userId = decodedToken.uid
const body = await readBody(event)
const { fileName, contentType } = body
if (!fileName || !contentType) {
throw createError({ statusCode: 400, message: 'fileName and contentType are required' })
}
const bucket = getAdminStorage().bucket()
const filePath = `uploads/${userId}/${Date.now()}-${fileName}`
const file = bucket.file(filePath)
const [signedUrl] = await file.getSignedUrl({
action: 'write',
expires: Date.now() + 15 * 60 * 1000, // 15 minutes
contentType,
})
return { signedUrl, filePath }
})

The Firestore Data Model
I kept the data model deliberately flat. Over-normalized Firestore schemas are the single most common mistake I see in production apps - they create read amplification and make security rules a nightmare.
// Firestore collections structure
// /users/{userId}
interface UserDocument {
email: string
displayName: string | null
credits: number // Generation credits remaining
createdAt: Timestamp
updatedAt: Timestamp
}
// /jobs/{jobId}
interface JobDocument {
userId: string
status: 'pending' | 'training' | 'generating' | 'complete' | 'failed'
uploadedPhotos: string[] // Firebase Storage paths
generatedPhotos: string[] // Firebase Storage paths
externalJobId: string | null // ID from AI provider
promptStyle: string
creditsUsed: number
errorMessage: string | null
createdAt: Timestamp
updatedAt: Timestamp
}

Photo Upload Flow
The upload flow uses pre-signed URLs to avoid routing large image files through the server. The client requests a signed URL, uploads directly to Firebase Storage, then sends only the storage path to the server. This keeps the server lean and eliminates unnecessary bandwidth costs.
// composables/useUpload.ts
export function useUpload() {
const { getIdToken } = useAuth()
async function uploadPhoto(file: File): Promise<string> {
const token = await getIdToken()
// Request signed URL from our server
const { signedUrl, filePath } = await $fetch('/api/upload-url', {
method: 'POST',
headers: { Authorization: `Bearer ${token}` },
body: { fileName: file.name, contentType: file.type },
})
// Upload directly to Firebase Storage - no server relay
await fetch(signedUrl, {
method: 'PUT',
body: file,
headers: { 'Content-Type': file.type },
})
return filePath
}
return { uploadPhoto }
}

By the end of week 1, I had: Google authentication working, a dashboard showing the user's jobs, a multi-photo upload flow with progress indicators, and all photos landing in Firebase Storage with the correct user-scoped paths. No AI yet - but the foundation was solid.
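One note on enforcement: the signed upload URLs are minted by the Admin SDK, which bypasses security rules entirely, so rules exist mainly to lock down direct client access. A minimal sketch of what I'd pair with this layout (illustrative only - in practice `storage.rules` and `firestore.rules` are separate files, and your generated-photo paths may need their own rule):

```
// storage.rules - clients may only touch their own uploads/ prefix
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /uploads/{userId}/{fileName} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }
  }
}

// firestore.rules - clients read their own data; all writes go through the server
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId} {
      allow read: if request.auth != null && request.auth.uid == userId;
      allow write: if false;
    }
    match /jobs/{jobId} {
      allow read: if request.auth != null && request.auth.uid == resource.data.userId;
      allow write: if false;
    }
  }
}
```

Locking client writes to `if false` is what makes the flat data model safe: the only mutation path is the server, where the Admin SDK validates tokens first.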
Week 2: The AI Pipeline
This was the week I was most uncertain about. The AI pipeline is the product's entire value proposition - if it's slow, unreliable, or produces bad output, nothing else matters.
Choosing the AI Approach
Building an AI headshot generator requires fine-tuning a generative model on the user's specific face. You can't just prompt a general-purpose model - you need a model that has learned what this particular person looks like.
The modern approach uses techniques like LoRA (Low-Rank Adaptation) fine-tuning on a Stable Diffusion or FLUX base model. The process:
- User uploads 10-20 photos of themselves
- A fine-tuning job runs for 10-20 minutes, teaching the model that person's facial features
- The fine-tuned model is then used to generate headshots across different professional styles and backgrounds
Rather than building and hosting this infrastructure myself (which would have been a 2-3 week project on its own), I integrated with an AI provider that offers a fine-tune API. The specific provider doesn't matter - what matters is the integration pattern, which is identical across Replicate, Astria, or similar platforms.
Job Queue Architecture
AI generation jobs are long-running processes. A fine-tune takes 10-20 minutes; generation takes 1-3 minutes. You cannot handle these synchronously in an API request. The architecture is:
- Client submits a job and receives a jobId immediately
- Server creates a Firestore document with status: 'pending'
- A Firebase Cloud Function triggers on the Firestore write and starts the external fine-tune job
- The AI provider calls back to our webhook when training completes
- The webhook handler updates the job status and triggers the generation phase
- Another webhook call arrives when images are generated
- The client subscribes to the Firestore job document via onSnapshot for real-time status updates
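The statuses above form a small state machine. A pure-TypeScript sketch of a transition guard (the naming is mine, not the production code) that webhook handlers can use to reject out-of-order updates:

```typescript
type JobStatus = 'pending' | 'training' | 'generating' | 'complete' | 'failed'

// Allowed forward transitions; 'failed' is reachable from any active state,
// and the two terminal states allow nothing further
const TRANSITIONS: Record<JobStatus, JobStatus[]> = {
  pending: ['training', 'failed'],
  training: ['generating', 'failed'],
  generating: ['complete', 'failed'],
  complete: [],
  failed: [],
}

export function canTransition(from: JobStatus, to: JobStatus): boolean {
  return TRANSITIONS[from].includes(to)
}
```

Webhooks can arrive late or duplicated, so a guard like this keeps a stale training.completed event from reviving a job that already failed.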
// server/api/generate.post.ts
import { getAdminAuth, getAdminFirestore } from '~/server/utils/firebase-admin'
import { FieldValue } from 'firebase-admin/firestore'
export default defineEventHandler(async (event) => {
const token = getHeader(event, 'authorization')?.slice(7)
if (!token) throw createError({ statusCode: 401, message: 'Unauthorized' })
const { uid: userId } = await getAdminAuth().verifyIdToken(token)
const body = await readBody(event)
const { photoPaths, promptStyle } = body
if (!photoPaths?.length || photoPaths.length < 8) {
throw createError({ statusCode: 400, message: 'Minimum 8 photos required' })
}
const db = getAdminFirestore()
const userRef = db.collection('users').doc(userId)
const jobRef = db.collection('jobs').doc()
// Check credits, create the job, and deduct the credit in one transaction
// so two concurrent requests can't both spend the user's last credit
await db.runTransaction(async (tx) => {
const userSnap = await tx.get(userRef)
const userData = userSnap.data()
if (!userData || userData.credits < 1) {
throw createError({ statusCode: 402, message: 'Insufficient credits' })
}
// Create the job document - Cloud Function picks it up from here
tx.set(jobRef, {
userId,
status: 'pending',
uploadedPhotos: photoPaths,
generatedPhotos: [],
externalJobId: null,
promptStyle: promptStyle ?? 'professional',
creditsUsed: 1,
errorMessage: null,
createdAt: FieldValue.serverTimestamp(),
updatedAt: FieldValue.serverTimestamp(),
})
tx.update(userRef, {
credits: FieldValue.increment(-1),
updatedAt: FieldValue.serverTimestamp(),
})
})
return { jobId: jobRef.id }
})

Handling AI Provider Webhooks
The webhook handler is where most of the complexity lives. You need to verify the request is genuinely from the AI provider (not a spoofed request), then update job state and trigger next steps.
// server/api/webhooks/ai.post.ts
import { getAdminFirestore, getAdminStorage } from '~/server/utils/firebase-admin'
import { FieldValue } from 'firebase-admin/firestore'
import { createHmac, timingSafeEqual } from 'crypto'
function verifyWebhookSignature(body: string, signature: string, secret: string): boolean {
const expected = createHmac('sha256', secret)
.update(body)
.digest('hex')
// timingSafeEqual throws if the buffers differ in length - guard first
const sigBuf = Buffer.from(signature)
const expBuf = Buffer.from(expected)
return sigBuf.length === expBuf.length && timingSafeEqual(sigBuf, expBuf)
}
export default defineEventHandler(async (event) => {
const rawBody = await readRawBody(event) ?? ''
const signature = getHeader(event, 'x-webhook-signature') ?? ''
if (!verifyWebhookSignature(rawBody, signature, process.env.AI_WEBHOOK_SECRET!)) {
throw createError({ statusCode: 401, message: 'Invalid webhook signature' })
}
const payload = JSON.parse(rawBody)
const { event: eventType, jobId: externalJobId, status, outputUrls } = payload
const db = getAdminFirestore()
const jobsQuery = await db.collection('jobs')
.where('externalJobId', '==', externalJobId)
.limit(1)
.get()
if (jobsQuery.empty) {
// Log for debugging but return 200 to prevent retries
console.warn(`No job found for externalJobId: ${externalJobId}`)
return { received: true }
}
const jobDoc = jobsQuery.docs[0]
if (eventType === 'training.completed') {
await jobDoc.ref.update({
status: 'generating',
updatedAt: FieldValue.serverTimestamp(),
})
// Trigger generation via the AI API - omitted for brevity
}
if (eventType === 'generation.completed' && outputUrls?.length) {
// Download generated images and store in Firebase Storage
const storedPaths = await Promise.all(
outputUrls.map((url: string) => downloadAndStore(url, jobDoc.id))
)
await jobDoc.ref.update({
status: 'complete',
generatedPhotos: storedPaths,
updatedAt: FieldValue.serverTimestamp(),
})
}
if (status === 'failed') {
await jobDoc.ref.update({
status: 'failed',
errorMessage: payload.error ?? 'Generation failed',
updatedAt: FieldValue.serverTimestamp(),
})
}
return { received: true }
})

Real-Time Status Updates in Vue
On the client side, I used Firestore's onSnapshot to give users a live progress indicator without polling. The job document updates flow directly to the UI as they happen.
// composables/useGeneration.ts
import { doc, onSnapshot } from 'firebase/firestore'
import type { JobDocument } from '~/types'
export function useGeneration(jobId: string) {
const job = ref<JobDocument | null>(null)
const unsubscribe = ref<(() => void) | null>(null)
function startListening() {
const { $firestore } = useNuxtApp()
const jobRef = doc($firestore, 'jobs', jobId)
unsubscribe.value = onSnapshot(jobRef, (snapshot) => {
if (snapshot.exists()) {
job.value = { id: snapshot.id, ...snapshot.data() } as JobDocument
}
})
}
function stopListening() {
unsubscribe.value?.()
}
onMounted(startListening)
onUnmounted(stopListening)
const statusLabel = computed(() => {
const labels: Record<string, string> = {
pending: 'Queued...',
training: 'Learning your features (10-15 min)',
generating: 'Creating your headshots (2-3 min)',
complete: 'Your headshots are ready',
failed: 'Something went wrong',
}
return labels[job.value?.status ?? 'pending'] ?? 'Processing...'
})
return { job, statusLabel }
}

By the end of week 2, I had a working end-to-end pipeline. Upload photos, kick off a fine-tune job, watch the status update in real time, and see generated headshots appear in the results page. The product worked. Now I needed to charge for it.
Week 3: Stripe, Polish, and Launch
The Credit System
I chose a credit-based model over a subscription for two reasons. First, it simplifies the MVP - no subscription management, no proration, no failed payment recovery flows. Second, it maps naturally to the product: one generation job costs one credit. Buy a pack, use it whenever.
The Firestore document for each user tracks a credits field. When a job is created, credits are decremented atomically before the job starts, preventing race conditions.
// Stripe product catalog
const CREDIT_PACKS = [
{ id: 'starter', credits: 1, price: 1900, label: '1 Generation' },
{ id: 'pro', credits: 3, price: 4900, label: '3 Generations', badge: 'Most Popular' },
{ id: 'team', credits: 10, price: 12900, label: '10 Generations' },
] as const

Stripe Checkout Session
The payment flow is a classic Stripe Checkout redirect. The server creates a session with the credit pack metadata, the user completes payment on Stripe's hosted page, and a webhook fires to add credits.
// server/api/checkout.post.ts
import Stripe from 'stripe'
import { getAdminAuth } from '~/server/utils/firebase-admin'
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!)
const PRICE_IDS: Record<string, { priceId: string; credits: number }> = {
starter: { priceId: process.env.STRIPE_PRICE_STARTER!, credits: 1 },
pro: { priceId: process.env.STRIPE_PRICE_PRO!, credits: 3 },
team: { priceId: process.env.STRIPE_PRICE_TEAM!, credits: 10 },
}
export default defineEventHandler(async (event) => {
const token = getHeader(event, 'authorization')?.slice(7)
if (!token) throw createError({ statusCode: 401, message: 'Unauthorized' })
const { uid: userId, email } = await getAdminAuth().verifyIdToken(token)
const { packId } = await readBody(event)
const pack = PRICE_IDS[packId]
if (!pack) throw createError({ statusCode: 400, message: 'Invalid pack' })
const session = await stripe.checkout.sessions.create({
mode: 'payment',
line_items: [{ price: pack.priceId, quantity: 1 }],
customer_email: email ?? undefined,
metadata: {
userId,
packId,
credits: pack.credits.toString(),
},
success_url: `${process.env.APP_URL}/dashboard?payment=success`,
cancel_url: `${process.env.APP_URL}/pricing`,
})
return { url: session.url }
})

Stripe Webhook: Crediting the User
The webhook handler needs to be idempotent - Stripe may deliver the same event more than once. I use the Stripe event ID to deduplicate by storing processed event IDs in Firestore.
// server/api/webhooks/stripe.post.ts
import Stripe from 'stripe'
import { getAdminFirestore } from '~/server/utils/firebase-admin'
import { FieldValue } from 'firebase-admin/firestore'
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!)
export default defineEventHandler(async (event) => {
const rawBody = await readRawBody(event) ?? ''
const sig = getHeader(event, 'stripe-signature') ?? ''
let stripeEvent: Stripe.Event
try {
stripeEvent = stripe.webhooks.constructEvent(rawBody, sig, process.env.STRIPE_WEBHOOK_SECRET!)
} catch {
throw createError({ statusCode: 400, message: 'Invalid Stripe signature' })
}
if (stripeEvent.type !== 'checkout.session.completed') {
return { received: true }
}
const db = getAdminFirestore()
// Idempotency check
const eventRef = db.collection('processedStripeEvents').doc(stripeEvent.id)
const eventSnap = await eventRef.get()
if (eventSnap.exists) { // Admin SDK: exists is a property, not a method
return { received: true, duplicate: true }
}
const session = stripeEvent.data.object as Stripe.Checkout.Session
const { userId, credits } = session.metadata ?? {}
if (!userId || !credits) {
console.error('Missing metadata on Stripe session:', session.id)
return { received: true }
}
const creditsToAdd = parseInt(credits, 10)
// Atomic batch: add credits + record event
const batch = db.batch()
batch.update(db.collection('users').doc(userId), {
credits: FieldValue.increment(creditsToAdd),
updatedAt: FieldValue.serverTimestamp(),
})
batch.set(eventRef, {
processedAt: FieldValue.serverTimestamp(),
sessionId: session.id,
userId,
credits: creditsToAdd,
})
await batch.commit()
return { received: true }
})

Launch Week Polish
The last few days before launch were not glamorous. I fixed real issues:
- Error states. The job failed state needed a clear message and a refund path. If the AI provider fails, the user's credit is returned automatically via a Cloud Function trigger.
- Download experience. A "Download All" button that packages generated headshots into a ZIP file was requested by my beta testers. Added it with jszip.
- Email notifications. When headshots are ready, users get an email. I used Firebase's Trigger Email extension rather than wiring up a separate email service - saved me 3 hours.
- Loading skeletons. The dashboard felt broken without proper loading states. Added skeleton components everywhere data is async.
- Mobile upload. The original file input didn't support the camera roll on iOS Safari. Fixed with accept="image/*" and testing on a real device.
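For the record, the mobile fix is a one-attribute change on the file input. A simplified template fragment (onFilesSelected is a placeholder handler name, not the real component's):

```
<!-- accept="image/*" lets iOS Safari offer the camera roll and photo library;
     multiple lets users select all 10-20 photos in one pass -->
<input type="file" accept="image/*" multiple @change="onFilesSelected" />
```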
Architecture Decisions in Hindsight
Key Architecture Decisions
Nuxt 3 over Next.js
Colocation of server API routes with the frontend meant one deployment, one codebase, one mental model. For a solo developer this is a significant productivity advantage.
Firebase over Supabase
Real-time job status updates via Firestore onSnapshot were genuinely the right fit here. The UX of watching a progress indicator update live without polling justifies the NoSQL trade-offs for this use case.
Credits over subscriptions
Simplified the MVP significantly. The trade-off is lower lifetime value per customer, but for launch it was the right call. I'll add a subscription tier after validating demand.
Firebase Cloud Functions for job queue
The Firestore trigger approach worked but cold starts added 3-5 seconds of latency to job initiation. A dedicated queue worker (BullMQ on a small VPS) would have been more reliable and faster.
Single AI provider with no fallback
This was a mistake. The provider had a 3-hour outage on day 2 post-launch. Four customers hit failed jobs. I refunded them manually. Should have abstracted the AI layer to allow provider switching from day one.
What I'd Do Differently
Abstract the AI provider behind an interface from day one. I wrote direct calls to the provider's API scattered across the codebase. When I wanted to test a different provider's output quality, swapping them out took surgical work. A simple AIProvider interface with trainModel(), generateImages(), and getJobStatus() methods would have taken an hour to set up and saved many hours later.
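For concreteness, here's roughly what that interface could look like, with an in-memory stub standing in for a real provider. All names are mine, sketched from the description above - not the actual EasyHeadshots code:

```typescript
interface TrainResult { externalJobId: string }
interface ProviderJobStatus { status: 'training' | 'generating' | 'complete' | 'failed' }

// The abstraction boundary: everything provider-specific lives behind this,
// so swapping Replicate for Astria (or adding a fallback) touches one file
interface AIProvider {
  trainModel(photoUrls: string[]): Promise<TrainResult>
  generateImages(externalJobId: string, promptStyle: string): Promise<string[]>
  getJobStatus(externalJobId: string): Promise<ProviderJobStatus>
}

// A stub implementation, useful for local dev and integration tests
class StubProvider implements AIProvider {
  private jobs = new Map<string, ProviderJobStatus>()

  async trainModel(photoUrls: string[]): Promise<TrainResult> {
    if (photoUrls.length === 0) throw new Error('No photos provided')
    const externalJobId = `stub-${this.jobs.size + 1}`
    this.jobs.set(externalJobId, { status: 'training' })
    return { externalJobId }
  }

  async generateImages(externalJobId: string, promptStyle: string): Promise<string[]> {
    this.jobs.set(externalJobId, { status: 'complete' })
    return [`stub://${externalJobId}/${promptStyle}-1.png`]
  }

  async getJobStatus(externalJobId: string): Promise<ProviderJobStatus> {
    return this.jobs.get(externalJobId) ?? { status: 'failed' }
  }
}
```

The server routes and webhook handlers would then depend only on AIProvider, with the concrete implementation chosen by an environment variable.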
Set up error monitoring before launch, not after. I added Sentry on day 3 post-launch after debugging a webhook failure by reading Cloud Function logs. The first 48 hours of any launch produce more errors than any other period. Log everything.
Write admin tooling earlier. On launch day I had no admin panel. When a customer emailed saying their job was stuck in training status, I was manually querying Firestore in the console to fix it. A simple /admin page with job management would have been one day of work that paid off immediately.
Don't test payments with Stripe test mode only. I tested the Stripe integration thoroughly in test mode. What I didn't anticipate was that some users' cards would be declined, and the UX around payment failures needed much more work. The error messages from Stripe's API are verbose and technical - they needed to be translated to user-friendly language.
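A sketch of the kind of translation layer I mean - mapping Stripe decline codes to copy a non-technical user can act on. The codes shown are standard Stripe decline_code values; the wording and function name are illustrative:

```typescript
// Map Stripe decline codes to human-readable copy.
// Anything unmapped falls back to a safe generic message.
const DECLINE_MESSAGES: Record<string, string> = {
  insufficient_funds: 'Your card was declined due to insufficient funds. Try another card.',
  expired_card: 'This card has expired. Please use a different card.',
  incorrect_cvc: "The security code doesn't match. Double-check the 3-digit CVC.",
  generic_decline: 'Your bank declined this payment. Contact your bank or try another card.',
}

export function friendlyDeclineMessage(declineCode: string | null | undefined): string {
  if (declineCode && declineCode in DECLINE_MESSAGES) {
    return DECLINE_MESSAGES[declineCode]
  }
  return 'Your payment could not be completed. Please try a different payment method.'
}
```

The fallback matters more than the mappings: never surface a raw Stripe error string to a user mid-checkout.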
Charge more. My initial price of $19 per generation was too low. Conversion data shows price sensitivity is low at this tier - users who research the alternatives (a $400 photographer session) don't balk at $29. I raised prices three weeks after launch with no measurable drop in conversion.
The Tech Stack Summary
For anyone skimming to find the stack:
- Frontend + API layer: Nuxt 3 (Vue 3, TypeScript)
- Authentication: Firebase Authentication (Google + email/password)
- Database: Cloud Firestore
- File storage: Firebase Storage
- Background jobs: Firebase Cloud Functions (Firestore-triggered)
- AI/ML: Fine-tune API from an AI provider (Replicate or Astria pattern)
- Payments: Stripe Checkout + Webhooks
- Hosting: Vercel (Nuxt SSR)
- Email: Firebase Trigger Email extension + Mailgun
- Monitoring: Sentry (added post-launch, should have been day 1)
- Total monthly infra cost: ~$85
The entire codebase is approximately 3,200 lines of TypeScript and Vue across 34 files. It's a small, focused product. That's a feature.
Closing Thoughts
Building a SaaS in 3 weeks is not a party trick. It requires knowing your tools deeply, making fast but informed decisions, and ruthlessly deferring anything that isn't core to the user's first experience. The features I cut (subscription tiers, referral system, style customization, team accounts) are all on the roadmap. None of them mattered for launch.
What mattered was: can a user upload photos, pay, and receive professional headshots? Yes. Ship it.
The hardest part wasn't the technology. It was the discipline to stop adding features and put it in front of real users. Every day of polish after the product works is a day you're not learning from actual customer behavior.
If you're planning to build an AI-powered SaaS and want to talk through the architecture, AI pipeline decisions, or go-to-market approach, I'm happy to dig in with you.
Working on an AI product or SaaS MVP? Get in touch and let's talk through your build. I work with early-stage founders and solo developers on architecture, implementation, and shipping fast.