What Blast is — and what makes it different
Buffer, Hootsuite, and Sprout Social solve a real problem: posting to multiple social platforms without logging in and out of each one. But they share a fundamental limitation that I kept running into — they treat the web as the universe. Text your audience? Not a feature. WhatsApp broadcast? Find another tool. And all of them, without exception, hold your API credentials on their servers, which means you're trusting a third party not just with your scheduling, but with your accounts.
Blast is my answer to both of those constraints. It's a self-hosted multi-platform broadcaster that sends a single composed post to all twelve channels at once: SMS, Email, WhatsApp, X (Twitter), Instagram, Facebook, LinkedIn, TikTok, Threads, Telegram, Reddit, and Pinterest. SMS and WhatsApp open the device's native app with the text pre-filled — a pragmatic workaround that means no carrier relationships or WhatsApp Business API fees. The other ten platforms are called via their official APIs. The whole thing runs on your own machine or a free-tier cloud host, and your API credentials never touch a third-party server. That's the pitch.
What surprised me during development is how much falls out naturally once you commit to self-hosting. CORS restrictions that would normally block browser-to-API calls are solved by the proxy. Token expiry becomes a visible, managed concern rather than a silent failure. Scheduling becomes reliable because the job queue lives on the server, not in a browser tab that might close. The constraints turned out to be generative.
Architecture
The frontend is React 18, structured around a strict MVVM pattern with enforced layer boundaries. The model layer is eight pure JavaScript modules — platforms.js, dispatchers.js, credentials.js, composer.js, history.js, storage.js, endpoints.js, and logger.js — none of which import React or touch the DOM. The viewmodel layer is six React hooks that translate model state into something the view can render. The view layer is twenty components across compose, history, calendar, analytics, connections, settings, and logs views. The composition root, SocialBlast.jsx, wires everything together.
I chose this structure over something like Redux because I wanted the business logic to be independently testable without a React environment. All 260 unit tests run against the model layer using Node's built-in test runner — no React Testing Library, no mocking of hooks. The discipline of never letting model code import from React enforces this naturally. The trade-off is that the composition root carries more wiring than you'd see in a hook-heavy architecture, but the resulting clarity about where business logic lives has been worth it every time I've needed to debug something.
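To make the boundary concrete, here is a sketch of the shape, with illustrative names rather than Blast's actual exports: a pure function in the model, and a thin hook in the viewmodel that adapts it for rendering.

```js
// model/composer.js — pure logic, no React, no DOM. Names and the
// per-platform limit table are illustrative, not Blast's real exports.
export function charCount(text, platform) {
  const limits = { x: 280, threads: 500, linkedin: 3000 }; // hypothetical table
  const limit = limits[platform] ?? Infinity;
  return { used: text.length, remaining: limit - text.length, over: text.length > limit };
}

// viewmodel/useComposer.js — a thin hook that adapts model state for the view.
import { useState, useMemo } from 'react';
import { charCount } from '../model/composer.js';

export function useComposer(platform) {
  const [text, setText] = useState('');
  const count = useMemo(() => charCount(text, platform), [text, platform]);
  return { text, setText, count };
}
```

Because charCount never touches React, it can be tested with plain Node; the hook carries nothing worth testing on its own.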
The entire frontend bundles to a single self-contained HTML file using esbuild via a build script. No CDN calls at runtime, no network dependency to open the app, and E2E tests run against the static file directly rather than spinning up a dev server. It distributes as one file you can open in any browser.
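A build script along these lines produces that single file. This is a minimal sketch, assuming an entry point at src/index.jsx and a template with an injection marker, not the actual script:

```js
// build.js — bundle the app with esbuild and inline it into one
// self-contained HTML file. Paths and the <!--APP--> marker are assumptions.
import { build } from 'esbuild';
import { readFileSync, writeFileSync } from 'node:fs';

const result = await build({
  entryPoints: ['src/index.jsx'],
  bundle: true,        // pull every import into one output, no CDN at runtime
  minify: true,
  format: 'iife',
  jsx: 'automatic',    // modern React JSX transform
  write: false,        // keep the output in memory so we can inline it
});

const js = result.outputFiles[0].text;
const template = readFileSync('src/template.html', 'utf8');
writeFileSync('dist/blast.html', template.replace('<!--APP-->', `<script>${js}</script>`));
```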
The backend proxy is a Node.js HTTP server I wrote with a deliberate constraint: zero npm dependencies. Every module uses only Node 22 built-ins — node:http, node:crypto, node:fs, node:sqlite. I made this decision early and held to it throughout. The practical effect is a proxy you can deploy to Render, Railway, or Fly.io in about five minutes with no npm install surprises, no supply-chain exposure, and no lockfile drift. When something breaks, it's my code that's wrong.
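The skeleton of such a proxy is small. A minimal sketch on Node built-ins, with illustrative routes rather than Blast's real ones:

```js
// proxy.js — the shape of a zero-dependency proxy. Routes are illustrative.
import http from 'node:http';

const server = http.createServer(async (req, res) => {
  // CORS headers: the proxy, not the browser, talks to platform APIs.
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  if (req.method === 'OPTIONS') return res.writeHead(204).end();

  if (req.method === 'GET' && req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ ok: true }));
  }

  res.writeHead(404).end();
});

server.listen(Number(process.env.PORT ?? 8080));
```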
The SQLite job queue uses Node 22's built-in node:sqlite module, which was experimental at the time of writing. That flag gave me pause briefly, then I decided it was exactly the right trade-off for a personal tool: I get a proper relational store for the scheduler with no install, and if the API stabilises differently I'm the one who has to fix it. For a shared commercial product I'd think twice. For something I run myself, it's fine.
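The queue itself needs very little. A sketch of what the table and its two core operations might look like, with table and column names as assumptions:

```js
// scheduler-db.js — sketch of the job queue on node:sqlite. Requires Node 22
// (with --experimental-sqlite on builds where the module is still flagged).
import { DatabaseSync } from 'node:sqlite';

const db = new DatabaseSync('blast.db');
db.exec(`CREATE TABLE IF NOT EXISTS jobs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  run_at INTEGER NOT NULL,          -- epoch ms when the blast should fire
  payload TEXT NOT NULL,            -- JSON: post text, platforms, media URL
  status TEXT NOT NULL DEFAULT 'pending'
)`);

export function enqueue(runAt, payload) {
  db.prepare('INSERT INTO jobs (run_at, payload) VALUES (?, ?)')
    .run(runAt, JSON.stringify(payload));
}

export function due(now = Date.now()) {
  return db.prepare("SELECT * FROM jobs WHERE status = 'pending' AND run_at <= ?")
    .all(now);
}
```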
Features, as they fit together
The core workflow is composing a post and blasting it everywhere. But the feature set really earns its keep in the workflows surrounding that core action. Consider the path from idea to published to monitored: you open the composer, optionally ask the built-in Claude AI writing assistant to generate four platform-aware variants — a punchy version, a formal one, a casual one, a question — or type a free-form prompt and get a rewrite. You pick a variant, adjust it, attach media if needed. If you're not ready to post, you save a draft. Drafts persist in history with a prominent DRAFT badge and load back into the composer with one click. When you are ready, you either blast immediately or schedule it for a specific time via a datetime picker that saves a job to the proxy's SQLite queue.
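For a sense of scale, the assistant behind variant generation can be a single request to the Anthropic Messages API. A sketch, with the model name and prompt wording as assumptions:

```js
// assistant.js — sketch of generating four platform-aware variants.
// Model name, prompt, and the clean-JSON assumption are all illustrative.
async function generateVariants(draft, apiKey) {
  const res = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet-latest',
      max_tokens: 1024,
      messages: [{
        role: 'user',
        content: `Rewrite this post four ways: punchy, formal, casual, and as a question. Return a JSON array of four strings.\n\n${draft}`,
      }],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.content[0].text); // assumes the model returns clean JSON
}
```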
Once something is scheduled, it appears on the correct day in the calendar view — a full month grid showing text previews and platform icons per post. The calendar syncs with the proxy on mount, which means multiple people pointing at the same proxy see each other's scheduled items. When the scheduler fires the job, a Slack notification arrives in your configured channel: a structured Block Kit message with a post preview, per-platform results, post IDs, and a clear signal for any partial failures. After the post goes out, each history card has a Stats button that fetches live engagement numbers — likes, comments, views, shares — from the platforms that returned a post ID. An Inbox button fetches replies and comments from eight platforms, rendered as a read-only thread beneath the card.
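The notification itself is one webhook POST. A sketch of the Block Kit shape, with the field layout as an assumption:

```js
// slack.js — sketch of the scheduler's Slack notification as a Block Kit
// message posted to an incoming webhook. Result fields are illustrative.
async function notifySlack(webhookUrl, post, results) {
  const lines = results.map(r =>
    r.ok ? `:white_check_mark: ${r.platform}: id ${r.postId}`
         : `:x: ${r.platform}: ${r.error}`);
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      blocks: [
        { type: 'section', text: { type: 'mrkdwn', text: `*Blast sent:* ${post.text.slice(0, 80)}` } },
        { type: 'section', text: { type: 'mrkdwn', text: lines.join('\n') } },
      ],
    }),
  });
}
```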
The media upload path is worth describing specifically because it has two modes. When a proxy is configured, the toolbar's image and video buttons open a native OS file picker, upload the file to the proxy's /api/upload endpoint as multipart form data (parsed by a hand-written multipart parser — no busboy, no formidable), and get back a URL that feeds directly into seven platform APIs. Without a proxy, the buttons fall back to prompting for an externally-hosted URL. The 50 MB cap and 24-hour cleanup run on the proxy. This felt like the right division: full functionality for users who've set up the proxy, graceful degradation for those who haven't.
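The core idea of a hand-written multipart parser fits in a dozen lines. This is a minimal sketch assuming one well-formed file part in a fully buffered body, not the real parser, which also needs streaming and error handling:

```js
// multipart.js — sketch of the idea: split the raw body Buffer on the
// boundary, then split the part on the blank line between headers and data.
function parseMultipart(body, contentType) {
  const boundary = '--' + contentType.split('boundary=')[1];
  const delim = Buffer.from('\r\n' + boundary);
  const start = body.indexOf(boundary) + boundary.length + 2; // skip first boundary + CRLF
  const end = body.indexOf(delim, start);                     // next boundary ends the part
  const part = body.subarray(start, end);
  const headerEnd = part.indexOf('\r\n\r\n');
  const headers = part.subarray(0, headerEnd).toString();
  const filename = /filename="([^"]*)"/.exec(headers)?.[1];
  return { filename, data: part.subarray(headerEnd + 4) };
}
```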
Token expiry is handled at app load. A useEffect in the app state hook walks every saved connection and flips any with a past expiresAt to expired status. Expired platforms are excluded from the send count — you can't accidentally include them in a blast. Each platform button shows a coloured dot: red for expired, amber for expiring within seven days. If any selected platform is in that window, a warning banner appears above the Blast button with a direct link to Connections. It's the kind of defensive UX that pays for itself the first time it prevents an embarrassing silent failure.
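In keeping with the MVVM split, the check itself can live in the model as a pure function that the useEffect merely invokes. A sketch, with function and field names as illustrations:

```js
// credentials.js — the expiry sweep as a pure, React-free function.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

export function classifyConnections(connections, now = Date.now()) {
  return connections.map(c => {
    if (!c.expiresAt) return { ...c, status: 'connected' };
    if (c.expiresAt <= now) return { ...c, status: 'expired' };           // red dot
    if (c.expiresAt - now < WEEK_MS) return { ...c, status: 'expiring' }; // amber dot
    return { ...c, status: 'connected' };
  });
}
```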
The analytics dashboard rounds out the feature set. It computes five panels over local history: posting streak, best day of week as a bar chart, peak hour, content mix breakdown (text-only versus with media versus with platform overrides), and a day-of-week activity heatmap. No external analytics service, no tracking — just computation over the array of posts you've already made.
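Each panel is a small fold over the history array. A sketch of the best-day-of-week computation, assuming a sentAt timestamp field on each post:

```js
// analytics sketch — posts per weekday, computed over local history.
export function postsByWeekday(history) {
  const days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];
  const counts = Object.fromEntries(days.map(d => [d, 0]));
  for (const post of history) counts[days[new Date(post.sentAt).getDay()]]++;
  return counts; // feeds straight into the bar chart
}
```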
Testing
The test suite has three layers. At the base, 260 unit tests cover all eight model modules using node --test with no install whatsoever. They import directly from the model's ESM source. This was one of those decisions that felt almost too simple when I made it — Node has had a built-in test runner since version 18, it supports async tests, subtests, and coverage, and it runs fast. Not using it would have meant choosing a dependency for its own sake.
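A model test in that style needs only two imports. A sketch, exercising the illustrative expiry function from earlier:

```js
// test/credentials.test.js — run with `node --test`, nothing to install.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { classifyConnections } from '../model/credentials.js';

test('past expiresAt flips a connection to expired', () => {
  const [c] = classifyConnections([{ platform: 'x', expiresAt: 1 }], 1000);
  assert.equal(c.status, 'expired');
});
```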
The proxy has 29 unit tests covering every route, authentication, CORS, upload, media serving, and Slack notifications. These run the actual server on a random port and make real HTTP requests against it. The 51 Playwright E2E tests run against the pre-compiled HTML bundle across desktop, iPhone, and Android viewport configurations. They cover drafts, calendar, scheduling, media upload, inbox, stats, shared calendar, settings, expiry handling, and analytics. Adding iOS and Android device emulation suites brings the total to around 460 tests.
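The random-port pattern is worth showing because it avoids collisions in CI. A sketch, assuming the proxy exports its server instance:

```js
// test/proxy.test.js — listen on port 0 (any free port), then make a real
// HTTP request against it. The exported `server` is an assumption.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { server } from '../proxy.js';

test('health endpoint responds', async () => {
  await new Promise(resolve => server.listen(0, resolve));
  const { port } = server.address();
  const res = await fetch(`http://127.0.0.1:${port}/health`);
  assert.equal(res.status, 200);
  server.close();
});
```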
In practice, the test pyramid held up well. Most bugs were caught at the model layer — pure functions fail loudly and quickly. The proxy tests caught a handful of auth and CORS edge cases that would have been much harder to reproduce in a browser. The E2E tests caught exactly one thing the others missed: a timing issue in the scheduler's calendar sync on slow network emulation. That's roughly the right ratio.
What wasn't built, and why
Several reasonable features were deliberately excluded. OAuth redirect flows require a hosted callback URL, which would have introduced a dependency on a specific deployment environment. Pasting tokens is less elegant but more honest about what's actually happening, and it works identically whether you're running locally or on a cloud host. Token auto-refresh was similarly scoped out — only Facebook, Instagram, and Threads tokens are refreshable without a refresh token; the others require user action regardless. The expiry warning system handles this transparently without creating the illusion of automation that doesn't generalise.
Reply-from-app was never on the table. Each platform's reply API has its own authentication and rate-limit surface; building write access for twelve platforms would have doubled the scope and introduced failure modes that are genuinely hard to reason about. Read-only inbox is the right scope for a broadcaster. Client-side scheduling — a setTimeout that fires a blast — only works if the browser tab stays open, so the proxy scheduler is the correct architecture; implementing the wrong version would have been worse than not implementing it. An Electron wrapper would have added over 200 MB of binary dependency for an app that already distributes as a single HTML file.
Where it goes next
The most valuable next step is multi-user auth, which I estimate at about two days of work. The SQLite database is already there; it's a matter of adding a users table, a login endpoint using crypto.scrypt, JWT signing, and a role field for admin and editor access. That unlocks approval workflows — a post sits in pending_approval status until an approver flips it, which is the feature that turns Blast from a personal tool into something a small team could use together.
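Both pieces fit the zero-dependency constraint, since node:crypto covers hashing and signing. A sketch of what they might look like, with the token payload shape as an assumption:

```js
// auth sketch — password hashing and JWT signing with node:crypto only.
import { scryptSync, randomBytes, timingSafeEqual, createHmac } from 'node:crypto';

export function hashPassword(password) {
  const salt = randomBytes(16);
  return salt.toString('hex') + ':' + scryptSync(password, salt, 64).toString('hex');
}

export function verifyPassword(password, stored) {
  const [saltHex, hashHex] = stored.split(':');
  const candidate = scryptSync(password, Buffer.from(saltHex, 'hex'), 64);
  return timingSafeEqual(candidate, Buffer.from(hashHex, 'hex')); // constant-time compare
}

export function signJwt(payload, secret) {
  const enc = obj => Buffer.from(JSON.stringify(obj)).toString('base64url');
  const head = enc({ alg: 'HS256', typ: 'JWT' });
  const body = enc(payload); // e.g. { sub: userId, role: 'editor', exp: ... }
  const sig = createHmac('sha256', secret).update(`${head}.${body}`).digest('base64url');
  return `${head}.${body}.${sig}`;
}
```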
Binary media upload for X and LinkedIn would round out the media story. The proxy already signs X OAuth 1.0a requests; the remaining work is implementing the chunked upload protocol. LinkedIn is a three-step asset registration flow. Replacing the 24-hour temporary file system with Cloudflare R2 or S3 presigned URLs would give uploaded media a permanent home. And on the reliability side, free-tier Render instances sleep after fifteen minutes of inactivity — a missed scheduler tick means a missed post. An external cron ping hitting /health every ten minutes would keep the proxy warm at zero cost.
The roadmap item I find most interesting is AI analytics: feeding the history array to Claude and surfacing genuine insights about posting patterns and engagement. The data is already there. It's a small piece of work that could return disproportionate value.
Building with an AI collaborator
Blast was built entirely within a single extended Claude conversation — architecture, implementation, debugging, refactoring, testing, and documentation, all iteratively through dialogue. I want to be specific about what that actually felt like, because the experience was different from what I expected going in.
The discipline that mattered most was reading before writing. Before touching any file, the relevant existing code was read in full. This sounds obvious but it's easy to skip under time pressure, and skipping it produces the kind of subtle inconsistency that compounds across a codebase. Working conversationally made this easier to enforce: the conversation had a record of what had been read and what hadn't.
Feature prioritisation happened through multiple SWOT analyses over the course of the project. What looked like the obvious next feature at the start of a session sometimes looked different after examining what the codebase could support cleanly versus what would require structural changes. The calendar sync, the Slack notifications, and the token expiry handling all moved up the priority list through that process. The Electron wrapper and client-side scheduling both moved off the list entirely.
The constraint of working within a single conversation turned out to be a feature. It kept the context coherent across weeks of work in a way that a scattered set of prompts wouldn't have.
What I built is something I genuinely use. It does the thing I wanted it to do — compose once, send everywhere, own my own credentials — and it's tested well enough that I trust it. The process of building it collaboratively with an AI was less like dictating to a code generator and more like pair programming with someone who has read a lot of documentation and never gets tired. The judgment calls were still mine. The architecture decisions were still mine. But the speed of moving from decision to working implementation, and back to decision, was unlike anything I'd experienced building alone.