PostPulse AI – Technical Specification & Implementation Guide
VEloxi LABs Pvt. Ltd. – Generated: 2025-08-10 [Link]
Prepared for: Development Team
Executive Summary
PostPulse AI is a micro-SaaS product built for social media marketers that combines analytics, AI content generation, and multi-platform scheduling. The system predicts the best posting times, auto-generates captions and hashtags, runs A/B tests, and automates publishing with a one-click workflow. This document provides a developer-friendly, implementable specification covering frontend components, backend services, the database schema, APIs, background jobs, third-party integrations, security, deployment, and an MVP timeline.
Frontend Architecture
Recommended stack: [Link] (React) + Tailwind CSS + Shadcn/UI for components. Deploy on Vercel for CDN/edge delivery.
Routing & Pages:
- / (Landing page) – marketing site + pricing.
- /app/dashboard – main analytics + calendar.
- /app/accounts – list of connected social accounts.
- /app/create – post editor & asset uploader.
- /app/schedule – content calendar and scheduled-posts list.
- /app/analytics – detailed reports & competitor heatmap.
- /app/settings – organization, team, billing, brand voice.
- /admin – internal admin for support and moderation (restricted).
Key Components:
- AuthWrapper (handles session & role-based UI).
- PostEditor (WYSIWYG-style with media preview and a variant-generator panel).
- VariantList (shows AI-generated captions & metric predictions).
- CalendarView (drag-and-drop scheduling for posts).
- AnalyticsCharts (line and bar charts using recharts; avoid hard-coding default colors unless requested).
- ConnectAccountModal (OAuth flow helper + permission checks).
State Management:
- Lightweight global state with Zustand for auth/session and small shared state.
- Fetch data via React Query (TanStack Query) for caching, retries, and background refetching (see the sketch after this section).
Accessibility & Responsiveness:
- Mobile-first design; ensure the editor supports uploads from mobile devices.
- Keyboard-accessible modals and ARIA labels for controls.
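A minimal sketch of this split (Zustand for session state, React Query for server data); the store shape, the /api/v1/posts query, and the bearer-token handling are illustrative assumptions, not final contracts:

// Lightweight global store for session data shared across the app.
import { create } from "zustand";
import { useQuery } from "@tanstack/react-query";

interface AuthState {
  token: string | null;
  setToken: (token: string | null) => void;
}

export const useAuthStore = create<AuthState>((set) => ({
  token: null,
  setToken: (token) => set({ token }),
}));

// React Query handles caching, retries, and background refetching.
export function usePosts(orgId: string) {
  const token = useAuthStore((s) => s.token);
  return useQuery({
    queryKey: ["posts", orgId],
    queryFn: async () => {
      const res = await fetch(`/api/v1/posts?org=${orgId}`, {
        headers: { Authorization: `Bearer ${token}` },
      });
      if (!res.ok) throw new Error("Failed to load posts");
      return res.json();
    },
  });
}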
Data Model (Core Tables)
Use PostgreSQL with JSONB columns for flexible platform metadata. Below are the core tables and primary fields (suggested Postgres types):
users: - id UUID PK - name TEXT - email TEXT UNIQUE - password_hash TEXT - role TEXT (enum: admin, member) - created_at TIMESTAMP - updated_at TIMESTAMP
organizations: - id UUID PK - name TEXT - billing_customer_id TEXT - created_at TIMESTAMP
memberships: - id UUID PK - user_id UUID FK - organization_id UUID FK - role TEXT
social_accounts: - id UUID PK - organization_id UUID FK - platform TEXT (enum: instagram, facebook, linkedin, tiktok, x) - platform_handle TEXT - platform_account_id TEXT - access_token TEXT (encrypted at rest) - refresh_token TEXT - token_expires_at TIMESTAMP - permissions JSONB - meta JSONB
posts: - id UUID PK - organization_id UUID FK - author_id UUID FK - status TEXT (draft, scheduled, publishing, published, failed) - scheduled_at TIMESTAMP WITH TIME ZONE - posted_at TIMESTAMP - platforms JSONB (list of social_accounts + metadata) - attachments JSONB (S3 URLs, thumbnails) - created_at TIMESTAMP
post_variants: - id UUID PK - post_id UUID FK - caption TEXT - hashtags TEXT[] - media_overrides JSONB - ai_metadata JSONB (prompt, model, tokens) - predicted_score FLOAT - created_at TIMESTAMP
schedules: - id UUID PK - post_id UUID FK - scheduled_at TIMESTAMP WITH TIME ZONE - timezone TEXT - recurrence JSONB (optional)
ab_tests: - id UUID PK - post_id UUID FK - variant_ids UUID[] - sample_size INT - start_time TIMESTAMP - end_time TIMESTAMP - winner_variant_id UUID
analytics_events (normalized): - id UUID PK - post_id UUID FK - platform TEXT - impressions INT - engagements INT - likes INT - shares INT - comments INT - fetched_at TIMESTAMP - raw_json JSONB
Indexes: - posts(scheduled_at) for scheduler lookups - social_accounts(platform_account_id) - analytics_events(post_id, platform)
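For reference, a sketch of TypeScript shapes mirroring two of the core tables above; the camelCase field names and the exact JSONB payload shapes are assumptions:

// Shapes shared across services; enums are narrowed to the listed values.
type PostStatus = "draft" | "scheduled" | "publishing" | "published" | "failed";

interface Post {
  id: string;                    // UUID
  organizationId: string;
  authorId: string;
  status: PostStatus;
  scheduledAt: string | null;    // ISO timestamp with time zone
  postedAt: string | null;
  platforms: Array<{ socialAccountId: string; meta?: Record<string, unknown> }>;
  attachments: Array<{ url: string; thumbnailUrl?: string }>;
}

interface PostVariant {
  id: string;
  postId: string;
  caption: string;
  hashtags: string[];
  predictedScore: number;        // 0..1 engagement prediction
}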
Sample API JSON – Caption Generation
Request (POST /api/v1/ai/generate-caption):
{ "media_url": "[Link]", "brand_voice": "friendly_informative", "tone": "excited", "length": "short" }
Response:
[ {"variant_id": "uuid", "caption": "Short punchy hook...", "hashtags": ["#growth"], "predicted_score": 0.82}, {...} ]
AI Workflows & Prompting
OpenAI (or an equivalent provider) is used as the primary generator. Key patterns:
- Prompt templates: use the brand voice profile and example captions to prime the model.
- Safety & moderation: pass generated captions through a content moderation check.
- Caching: cache caption responses for identical media_url + brand_voice inputs for N minutes to limit token usage and costs (see the sketch after the prompt skeleton).
- Rate limiting: use a token-bucket algorithm per organization to prevent runaway costs.
Example prompt skeleton:
You are a creative social media copywriter for {brand_name}. The brand voice is {brand_voice}. Create 3 short caption variants for this media (link: {media_url}). Keep captions under 140 characters and include 2-4 relevant hashtags. Avoid profanity. Provide a short rationale for each caption.
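A minimal sketch of the generation call with prompt templating and response caching, assuming the official openai Node SDK; the model name, the in-memory cache (Redis in production), and the 10-minute TTL are placeholders:

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const cache = new Map<string, { value: string; expiresAt: number }>();
const CACHE_TTL_MS = 10 * 60 * 1000; // "N minutes" from the spec, assumed 10 here

export async function generateCaptions(brandName: string, brandVoice: string, mediaUrl: string): Promise<string> {
  const key = `${mediaUrl}:${brandVoice}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  const prompt =
    `You are a creative social media copywriter for ${brandName}. ` +
    `The brand voice is ${brandVoice}. Create 3 short caption variants for this media ` +
    `(link: ${mediaUrl}). Keep captions under 140 characters and include 2-4 relevant hashtags. ` +
    `Avoid profanity. Provide a short rationale for each caption.`;

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; use whichever model the team standardizes on
    messages: [{ role: "user", content: prompt }],
  });

  // Raw model text; parsing into structured variants and the moderation pass
  // happen downstream in the AI Service.
  const text = completion.choices[0].message.content ?? "";
  cache.set(key, { value: text, expiresAt: Date.now() + CACHE_TTL_MS });
  return text;
}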
Third-party Integrations & Considerations
Platforms to integrate:
- Meta Graph API (Instagram/Facebook Pages): requires app review for posting and read-insights permissions. Use long-lived tokens where available.
- LinkedIn Marketing API: company-page posting; needs application review for write access.
- TikTok for Developers: availability varies by region and app permissions.
- X (Twitter) API: posting and metrics access.
Important: each platform has changing policies and rate limits. Implement an adapter layer per platform to encapsulate and centralize platform-specific logic and error handling (an interface sketch follows). Always verify the latest API docs before a production release.
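A sketch of the adapter layer; the interface and method names are hypothetical and would be implemented once per platform:

interface PublishRequest {
  mediaUrls: string[];
  caption: string;
  hashtags: string[];
}

interface PublishResult {
  platformPostId: string;
  postedAt: Date;
}

interface PlatformAdapter {
  platform: "instagram" | "facebook" | "linkedin" | "tiktok" | "x";
  publish(accountId: string, req: PublishRequest): Promise<PublishResult>;
  fetchInsights(platformPostId: string): Promise<{ impressions: number; engagements: number }>;
  refreshToken(accountId: string): Promise<void>;
}

// A registry keeps the publisher worker platform-agnostic.
const adapters = new Map<string, PlatformAdapter>();

export function getAdapter(platform: string): PlatformAdapter {
  const adapter = adapters.get(platform);
  if (!adapter) throw new Error(`No adapter registered for ${platform}`);
  return adapter;
}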
DevOps, CI/CD & Hosting
Suggested stack:
- Frontend: Vercel ([Link]); use edge functions for auth cookies and personalization.
- Backend: Render / Railway / [Link] for the [Link] service; Dockerized deployments.
- Database: Postgres (managed, e.g., AWS RDS / Supabase / a PlanetScale alternative).
- Cache & queues: managed Redis for BullMQ.
- Storage: AWS S3 for assets; signed URLs for uploads.
CI/CD:
- GitHub Actions pipeline: lint -> unit tests -> build -> deployment (protected branches for main/staging).
- Migrations: Prisma migrations or Flyway for DB schema changes.
Monitoring & Observability:
- Sentry for error tracking.
- PostHog or Amplitude for product analytics and funnels.
- Logs: centralized structured logs (e.g., Grafana Loki, Papertrail).
MVP Roadmap & 60-Day Implementation Plan
Week 0-2 (Discovery & Design): specs, wireframes, API contracts, initial infra setup.
Week 3-6 (Core Backend + Auth + DB): implement auth, organizations, the social account adapter skeleton, and the Postgres schema.
Week 7-9 (Frontend MVP): post editor, account connect flow, basic dashboard, calendar view.
Week 10-12 (AI, Scheduler & Publish Worker): integrate OpenAI for caption generation, implement the scheduler & publisher workers, add Stripe billing.
Week 13-14 (Beta & Hardening): closed beta, bug fixes, monitoring, and documentation.
Target: public beta by the end of Day 60.
Appendix – Sample SQL (Simplified)
CREATE TABLE users (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name TEXT, email TEXT UNIQUE, password_hash TEXT, role TEXT, created_at TIMESTAMP DEFAULT now());
CREATE TABLE organizations (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name TEXT, billing_customer_id TEXT, created_at TIMESTAMP DEFAULT now());
CREATE TABLE social_accounts (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), organization_id UUID REFERENCES organizations(id), platform TEXT, platform_account_id TEXT, access_token TEXT, refresh_token TEXT, token_expires_at TIMESTAMP, meta JSONB, created_at TIMESTAMP DEFAULT now());
CREATE TABLE posts (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), organization_id UUID REFERENCES organizations(id), author_id UUID, status TEXT, scheduled_at TIMESTAMPTZ, posted_at TIMESTAMPTZ, platforms JSONB, attachments JSONB, created_at TIMESTAMP DEFAULT now());
Core Features (MVP Scope)
MVP must-have features:
1. Account onboarding & social platform connection (Instagram, Facebook Pages, LinkedIn, TikTok, X).
2. Historical post ingestion & engagement analysis (at least the last 90 days where the API permits).
3. AI Engagement Predictor (virality score and best-time suggestions per account).
4. AI Caption & Hashtag Generator (3-5 caption variants + ranked hashtags).
5. One-upload multi-format repurposer (auto-resize/format for Reels, Shorts, Stories).
6. Scheduler & Background Publisher (job queue + workers to publish posts at scheduled times).
7. Basic A/B testing flow (post variants, small-scale test, auto-publish the winner).
8. Analytics dashboard with basic metrics (impressions, likes, comments, engagement rate).
9. Billing (Stripe/Razorpay integration) and subscription plans.
User Flows (High-level)
1) Onboarding Flow: the user registers (email/social SSO) -> creates or joins an organization -> connects social accounts (OAuth). The system requests read/post permissions. After approval, the platform imports profile metadata and, where possible, recent posts and engagement data.
2) Analysis & Recommendation Flow: after import, the analytics service groups historical posts -> computes the best times per weekday and format -> stores an account-specific posting calendar. AI generates initial content templates and 'starter captions' based on brand voice settings.
3) Create -> A/B Test -> Schedule -> Publish Flow: the user uploads media + selects platforms -> the post editor suggests captions & hashtags (3 variants) -> the user picks a variant or opts into an A/B test. If A/B is selected, the scheduler posts small-sample variants, measures engagement during the test window, chooses a winner, and publishes it to the rest of the audience.
4) Post-Publish Analytics: workers collect analytics via platform APIs (or webhooks where supported) and update the dashboard.
Backend Architecture
Recommended stack: [Link] (NestJS recommended for its modular architecture) + TypeScript. PostgreSQL with Prisma ORM. Redis for queues & caching. Background workers with BullMQ or [Link].
Service Modules:
1. Auth Service – JWT + refresh tokens, SSO integrations (Google, Apple), role-based access and organization scoping.
2. Accounts Service – manages organizations, members, permissions.
3. Social Integration Service – handles OAuth flows, token refresh, and per-platform API wrappers (Meta, LinkedIn, TikTok, X); isolates platform-specific logic.
4. AI Service – encapsulates OpenAI calls, prompt templates, rate limiting, and response caching.
5. Scheduler Service – enqueues publish jobs, manages A/B tests, applies platform rate limiters.
6. Analytics Service – ingests platform metrics, normalizes data, computes derived metrics (engagement rate, best time).
7. Billing Service – Stripe/Razorpay integrations, invoices, plan-enforcement middleware.
8. Admin / Audit Service – logs, activity tracking, customer support endpoints.
Microservice vs monolith:
- MVP: a single deployable backend with modular sub-folders (easier to iterate); a module-composition sketch follows.
- When scaling: split Social Integration and Scheduler into separate services, since they handle the heaviest loads.
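A sketch of the modular-monolith composition in NestJS; module names and file paths are illustrative:

// One deployable app that composes the service modules listed above.
import { Module } from "@nestjs/common";
import { AuthModule } from "./auth/auth.module";
import { AccountsModule } from "./accounts/accounts.module";
import { SocialIntegrationModule } from "./social/social-integration.module";
import { AiModule } from "./ai/ai.module";
import { SchedulerModule } from "./scheduler/scheduler.module";
import { AnalyticsModule } from "./analytics/analytics.module";
import { BillingModule } from "./billing/billing.module";

@Module({
  imports: [
    AuthModule,
    AccountsModule,
    SocialIntegrationModule,
    AiModule,
    SchedulerModule,
    AnalyticsModule,
    BillingModule,
  ],
})
export class AppModule {}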
API Endpoints (Representative)
Auth & User: POST /api/v1/auth/register â {name, email, password} -> 201 {user, token} POST /api/v1/auth/login â {email,
password} -> 200 {token, refresh_token} POST /api/v1/auth/refresh â {refresh_token} -> 200 {token} Organization &
Accounts: POST /api/v1/orgs â create org GET /api/v1/orgs/{id}/social-accounts â list connected social accounts POST
/api/v1/orgs/{id}/social-accounts/connect â start OAuth for platform Post & Scheduling: POST /api/v1/posts â create
post draft (body: media urls, platforms[]) POST /api/v1/posts/{id}/variants/generate â triggers AI generator -> returns
caption variants POST /api/v1/posts/{id}/schedule â {scheduled_at, timezone, platform_ids, ab_test_config?} GET
/api/v1/posts/{id} â returns post + variants + schedule AI & Analytics: POST /api/v1/ai/generate-caption â {media_url,
brand_voice, tone, length} -> [{caption, hashtags, score}] GET /api/v1/analytics/accounts/{account_id}/insights â
returns best times, top posts, engagement trends Webhooks: POST /webhooks/social/{platform} â for events where platform
Backend supports webhooks (e.g., comments, metrics updates) POST /webhooks/stripe â billing events
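A sketch of one representative endpoint (POST /api/v1/posts/{id}/schedule) as a NestJS controller; the DTO mirrors the payload above and the service wiring is omitted:

import { Body, Controller, Param, Post } from "@nestjs/common";

class SchedulePostDto {
  scheduled_at!: string;          // ISO timestamp
  timezone!: string;              // e.g. an IANA zone name
  platform_ids!: string[];        // connected social_account ids
  ab_test_config?: { variant_ids: string[]; sample_size: number };
}

@Controller("api/v1/posts")
export class PostsController {
  @Post(":id/schedule")
  async schedule(@Param("id") id: string, @Body() dto: SchedulePostDto) {
    // Hand off to the Scheduler Service, which enqueues the publish job.
    return { post_id: id, status: "scheduled", scheduled_at: dto.scheduled_at };
  }
}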
Background Jobs & Worker Processes
Queue system: Redis + BullMQ (or [Link]). Separate queues by priority:
- publish-high (immediate publishes)
- publish-normal (scheduled publishes)
- analytics-fetch (periodic ingestion)
- ai-requests (rate-limited calls to OpenAI)
Worker responsibilities:
1. Publisher Worker: picks up scheduled jobs, calls the Social Integration Service to publish, handles retries with exponential backoff for rate limits, and marks the post as published/failed.
2. A/B Test Worker: runs small-sample publishes, monitors engagement during the test window, calculates the winner, and triggers the final publish job.
3. Analytics Fetch Worker: periodically polls platform metrics APIs or ingests webhook events and populates analytics_events.
4. AI Worker (optional): processes queued AI generation tasks to avoid exceeding model rate limits and to cache responses.
Important: platform token refresh must be handled by a separate job that refreshes tokens before expiration, not at publish time. A queue-setup sketch follows.
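A minimal BullMQ sketch for the queue split and publisher worker; the Redis connection handling and the job payload shape are assumptions:

import IORedis from "ioredis";
import { Queue, Worker } from "bullmq";

const connection = new IORedis(process.env.REDIS_URL ?? "redis://localhost:6379", {
  maxRetriesPerRequest: null, // required by BullMQ workers
});

export const publishNormal = new Queue("publish-normal", { connection });
export const analyticsFetch = new Queue("analytics-fetch", { connection });

// Enqueue a scheduled publish with exponential backoff on failure.
export async function enqueuePublish(postId: string, runAt: Date): Promise<void> {
  await publishNormal.add(
    "publish-post",
    { postId },
    {
      delay: Math.max(0, runAt.getTime() - Date.now()),
      attempts: 5,
      backoff: { type: "exponential", delay: 60_000 },
    }
  );
}

// Publisher worker: delegates the platform call to the Social Integration Service.
new Worker(
  "publish-normal",
  async (job) => {
    const { postId } = job.data as { postId: string };
    console.log(`Publishing post ${postId}`);
  },
  { connection }
);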
Scheduler & Publishing – Pseudocode
Pseudocode for the publish worker:
1. SELECT * FROM posts WHERE status='scheduled' AND scheduled_at <= now() LIMIT 50
2. For each post:
- Verify the social_account token validity; refresh if needed.
- If the post has an A/B test configured, ensure the test has completed or schedule the A/B worker.
- Call the platform API wrapper: publish(media_urls, caption, hashtags).
- On success: set post.status='published', posted_at=now(), and enqueue an analytics fetch job.
- On failure: log the error, enqueue a retry (exponential backoff), and notify an admin on repeated failures.
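A concrete sketch of one pass of this loop, assuming a Prisma Post model that mirrors the posts table; token refresh and the platform adapter call are elided:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function publishDuePosts(): Promise<void> {
  // Step 1: pick up due posts (mirrors the SELECT above).
  const due = await prisma.post.findMany({
    where: { status: "scheduled", scheduledAt: { lte: new Date() } },
    take: 50,
  });

  for (const post of due) {
    try {
      // Token validity check / refresh and the platform adapter publish call
      // are delegated to the Social Integration Service (omitted here).
      await prisma.post.update({
        where: { id: post.id },
        data: { status: "published", postedAt: new Date() },
      });
      // Enqueue an analytics-fetch job for the freshly published post here.
    } catch (err) {
      // Log and enqueue a retry with exponential backoff; mark the post as
      // 'failed' and notify an admin only after retries are exhausted.
      console.error(`Publish failed for post ${post.id}`, err);
    }
  }
}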
Security, Data Protection & Compliance
1. Encryption: encrypt access tokens and sensitive data at rest (KMS-managed keys). Use TLS everywhere. A token-encryption sketch follows.
2. Least privilege: request the minimum permissions for social platform tokens and explain to users why each permission is needed.
3. User data deletion: provide 'delete org' and 'delete account' flows that remove user data per GDPR rules.
4. Audit logs: store admin actions and publish events for compliance and debugging.
5. Rate limiting & abuse protection: implement per-org rate limits for API usage and AI tokens to prevent cost overruns.
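A sketch of token encryption at rest using Node's crypto module (AES-256-GCM); in production the data key would come from a KMS, and the ENCRYPTION_KEY variable shown here is an assumption:

import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// 32-byte key, hex-encoded; assumed to be injected via KMS/secret manager.
const key = Buffer.from(process.env.ENCRYPTION_KEY ?? "", "hex");

export function encryptToken(plainText: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const encrypted = Buffer.concat([cipher.update(plainText, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store iv + auth tag + ciphertext together so the row is self-describing.
  return [iv, tag, encrypted].map((b) => b.toString("base64")).join(".");
}

export function decryptToken(stored: string): string {
  const [iv, tag, data] = stored.split(".").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}

Only the encrypted string is stored in social_accounts.access_token; decryption happens just-in-time inside the Social Integration Service.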
Environment Variables (example)
DATABASE_URL, REDIS_URL, JWT_SECRET, OPENAI_API_KEY, STRIPE_SECRET_KEY, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, S3_BUCKET, SENTRY_DSN, POSTHOG_API_KEY, META_APP_ID, META_APP_SECRET, LINKEDIN_CLIENT_ID, LINKEDIN_CLIENT_SECRET, TIKTOK_CLIENT_ID, TIKTOK_CLIENT_SECRET
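A small sketch of fail-fast validation of these variables at boot; the required subset shown is an assumption:

// Throw at startup rather than failing later on a missing secret.
const REQUIRED_ENV = [
  "DATABASE_URL",
  "REDIS_URL",
  "JWT_SECRET",
  "OPENAI_API_KEY",
  "STRIPE_SECRET_KEY",
  "S3_BUCKET",
] as const;

export function assertEnv(): void {
  const missing = REQUIRED_ENV.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}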
Testing Strategy
1. Unit tests: services, API controllers, and helpers.
2. Integration tests: API endpoints against a test Postgres & Redis instance.
3. End-to-end tests: Cypress tests for critical flows (onboarding, post creation, scheduling).
4. Contract tests: ensure social integration adapters behave consistently, using Pact or similar.
Personas & Product Goals
Primary Personas:
1) Freelance Social Media Marketer – needs fast, reliable post creation, scheduling, and reporting for 3-10 clients.
2) Small Agency Owner – needs multi-account management, white-label reports, and A/B testing for campaigns.
3) E-commerce Brand Manager – needs repurposing of product videos/images and conversion-focused captions.
Primary Goals:
- Reduce time-to-post via AI content generation and automated repurposing.
- Increase organic reach by recommending optimal post timing, formats, and hashtags.
- Improve engagement using A/B testing and reply-suggestion automation.
Delighters / Post-MVP Features
1. Competitor heatmap and trending-hashtag intelligence.
2. Engagement Reply AI (suggested replies to comments & DMs in brand voice).
3. White-label agency reports & PDF exports.
4. Cross-account content calendar and team collaboration (approval flows).
A/B Testing Flow (Implementation Notes)
1. The user selects 'Run A/B test' and chooses variants & sample size.
2. The system schedules small-sample publishes (e.g., post variant A to 10% of followers with limited-visibility options if the platform supports them; otherwise publish to a test audience/time window).
3. The A/B Test Worker monitors engagement metrics during the test window (5-60 minutes depending on the platform) and calculates the winner using a simple weighted score: score = (engagements / impressions) * weight + recency_bonus (see the sketch after this list).
4. The winner is set and a full publish job is enqueued for the remaining audience.
Note: not all platforms support native audience splitting. The system must emulate tests via time-based or small initial publishes and compare performance.
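A sketch of the winner calculation; the weight default and the recency-bonus shape are tunable assumptions:

interface VariantMetrics {
  variantId: string;
  impressions: number;
  engagements: number;
  publishedAt: Date;
}

export function pickWinner(variants: VariantMetrics[], weight = 1.0): string {
  const now = Date.now();
  let best: { id: string; score: number } | null = null;
  for (const v of variants) {
    const engagementRate = v.impressions > 0 ? v.engagements / v.impressions : 0;
    // Small bonus for fresher variants, decaying to zero after one hour,
    // so late engagement spikes are not over-rewarded.
    const recencyBonus = 0.01 * Math.max(0, 1 - (now - v.publishedAt.getTime()) / 3_600_000);
    const score = engagementRate * weight + recencyBonus;
    if (!best || score > best.score) best = { id: v.variantId, score };
  }
  if (!best) throw new Error("No variants to evaluate");
  return best.id;
}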