HTML / CSS / JS · Node.js · Claude AI · Formspree · Render

This Website —
Built Collaboratively with AI

A personal site covering portfolio, blog, resume, and an AI chat assistant — designed, built, iterated, and deployed entirely through conversation with Claude.

6 HTML pages · 0 npm dependencies · 2 deployed proxies · AI-powered chat assistant

Why build it this way

Every developer eventually needs a personal site. The usual options are a WordPress theme you never quite get right, a Squarespace template that looks like everyone else's, or a weekend project that stalls out at a half-finished React app. I wanted something different: a site that genuinely reflects how I work, built using the same collaborative AI-assisted process I've been applying to larger projects.

So rather than reaching for a framework or a template, I opened a conversation with Claude and described what I wanted — a warm, personal site that covered everything from my resume and certifications to a blog, a project portfolio, and a working contact form. What followed was an iterative back-and-forth that produced not just the site, but a deliberate set of decisions about structure, technology, and scope that I can explain and defend. That process is as much a part of this project as the code itself.

Structure and design

The site is plain HTML, CSS, and JavaScript — no framework, no build step, no bundler. That was a deliberate choice made early in the conversation. A personal site doesn't need React. It needs to be fast, easy to host anywhere, and simple enough to edit in any text editor without spinning up a development environment. A single shared styles.css file carries the design system across all pages: a warm cream-and-bark palette defined with CSS custom properties.
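The palette and shared tokens live in styles.css as CSS custom properties. A hypothetical sketch of that pattern (the actual variable names and color values will differ):

```css
/* Hypothetical design tokens; the real names in styles.css may differ */
:root {
  --cream: #faf6ef;   /* warm page background */
  --bark: #3a2e28;    /* dark brown text and borders */
  --accent: #b5835a;  /* links, buttons, highlights */
}

body {
  background: var(--cream);
  color: var(--bark);
}
```

Because every page links the same stylesheet, changing one variable restyles the whole site.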

The initial version was a single index.html file covering everything. As the scope grew — adding a blog, projects, and individual post pages — we restructured it into six separate files: index.html as the homepage, projects.html for the portfolio, blog.html as the listing page, individual blog-post-*.html files for each article, and dedicated project write-up pages for major work. The nav and footer are consistent across all of them, and internal links connect everything without any routing library.

The design went through several rounds of real feedback. The hero section originally used a placeholder illustration; I replaced it with my initials, then updated the code to load a profile.jpeg when I was ready to add a real photo. The resume section was updated to reflect my actual work history — including adding Applied Research Laboratories and my earliest role at Applied Systems, reordering certifications to show the most recent first, and updating the download button to link to my actual PDF filename. Small details, but the kind that matter when a recruiter is reading the page.

Content and pages

The homepage anchors everything. It opens with a hero section establishing who I am and what I do, moves into an about section with skill chips and experience stats, shows a two-post preview of the blog with a link to the full listing, and presents my full work history as a visual timeline alongside my certifications and education. The contact section closes the page. Every section on the homepage links outward to the relevant standalone page — the blog preview links to blog.html, the work nav link goes to projects.html.

The blog is structured around a real editorial pattern. blog.html shows each post as a card with a tag, date, title, and the opening paragraph — enough to tell a reader whether the post is worth their time. Each card links through to its own standalone page where the full article lives, formatted with section headers, pull quotes, and inline code where appropriate. The two placeholder posts were written with enough substance to show what the blog will look like when I start publishing in earnest: one on the value of boring technology choices, one on building DevOps culture on a small team. Both are topics I actually have things to say about.

The projects section follows the same pattern. projects.html shows a grid of project cards. The first — Blast, my self-hosted social broadcaster — links through to a full case study page with a hero, key stats, and a long-form technical write-up. The remaining three cards are placeholders, ready to be filled as I document more work. The Blast card has a custom dark gradient thumbnail with a lightning bolt icon to distinguish it from the placeholders at a glance.

The contact form and Formspree

A contact form that doesn't actually send anything is worse than no contact form — it creates the impression of responsiveness without the reality. Getting the form working properly required solving the classic problem of sending email from a static site without a backend.

The solution was Formspree, integrated using their Vanilla JS Ajax library rather than a plain HTML form POST. The Ajax approach means the page doesn't reload on submission, field-level validation errors appear inline next to the relevant input, the submit button disables during the request to prevent double-sends, and a success message appears in place of the form after a successful submission. The form captures name, email address, and message, and delivers them directly to my inbox. No backend required, no credentials to manage.
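The site uses Formspree's own vanilla JS library, but the same flow can be sketched with plain fetch against a Formspree endpoint. In this sketch the form id, field names, and the validation helper are placeholders, not the site's actual code:

```javascript
// Minimal sketch of an AJAX Formspree submission. The form id
// "yourFormId" is a placeholder; the real site uses Formspree's library.
const FORM_ENDPOINT = "https://formspree.io/f/yourFormId";

// Pure helper: basic client-side validation before sending (illustrative).
function validateFields({ name, email, message }) {
  const errors = {};
  if (!name || !name.trim()) errors.name = "Name is required";
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email || "")) errors.email = "Valid email required";
  if (!message || !message.trim()) errors.message = "Message is required";
  return errors; // empty object means the form is valid
}

async function submitContactForm(form) {
  const fields = Object.fromEntries(new FormData(form));
  const errors = validateFields(fields);
  if (Object.keys(errors).length) return { ok: false, errors };

  const button = form.querySelector("button[type=submit]");
  button.disabled = true; // prevent double-sends while the request is in flight
  try {
    const res = await fetch(FORM_ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json", Accept: "application/json" },
      body: JSON.stringify(fields),
    });
    return { ok: res.ok }; // on success, swap the form for a thank-you message
  } finally {
    button.disabled = false;
  }
}
```

The `Accept: application/json` header is what tells Formspree to respond with JSON instead of redirecting, which is what keeps the page from reloading.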

The AI chat assistant

The most technically interesting addition to the site is the floating AI chat assistant — a bubble in the bottom-right corner that visitors can open to ask questions about my experience, skills, and projects and get immediate, accurate answers. This is where the site stops being a static brochure and starts being something interactive.

The architecture has two parts. The frontend is a self-contained chat panel built into index.html — a floating button that opens a conversation interface, maintains message history across the session, shows a thinking state while waiting for a response, and degrades gracefully if the proxy isn't reachable. The backend is a small Node.js proxy server deployed to Render, written with zero npm dependencies using only Node 22 built-ins: node:http, node:https, node:crypto, node:fs. It exposes two routes: POST /api/chat for the assistant and GET /health for monitoring.

The proxy holds the Anthropic API key in an environment variable set through Render's dashboard, so it never appears in the browser or in the GitHub repository. Every request from the site goes to the proxy, which signs and forwards it to Claude's API. The system prompt — baked into the proxy, never exposed to the client — contains my complete professional profile: every role, every skill, every certification, every project, my location, my email, my availability. Claude uses this to answer questions accurately and stay on topic, redirecting anything outside its knowledge to the contact form.
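The forwarding step might be assembled like this. The endpoint path and header names are Anthropic's documented values; the model name, max_tokens, and function shape are placeholders, not the proxy's actual code:

```javascript
// Sketch of how a proxy might assemble the upstream call to Claude's
// Messages API with node:https. The API key comes from the server's
// environment and never reaches the browser.
function buildUpstreamRequest(messages, systemPrompt) {
  return {
    hostname: "api.anthropic.com",
    path: "/v1/messages",
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY, // set via Render's dashboard
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // placeholder, whichever model the proxy targets
      max_tokens: 1024,
      system: systemPrompt, // baked in server-side, never sent by the client
      messages,             // the visitor's conversation history
    }),
  };
}
```

Because the system prompt is added here, on the server, a visitor inspecting network traffic sees only their own messages going out and the reply coming back.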

Keeping the API key server-side and the system prompt out of the browser isn't just good security practice — it means the AI's knowledge of me is authoritative and consistent for every visitor, regardless of what they ask.
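The client side of this arrangement only ever sends the visible conversation to the proxy. A sketch under assumed names (the proxy URL and the `reply` response field are placeholders, not the site's actual contract):

```javascript
// Client-side sketch: the browser sends only the conversation history to
// the proxy, never a key or a system prompt. PROXY_URL is a placeholder.
const PROXY_URL = "https://example.onrender.com";

const history = []; // entries: { role: "user" | "assistant", content: string }

// fetchFn is injectable so the flow can be exercised without a network.
async function askAssistant(userText, fetchFn = fetch) {
  history.push({ role: "user", content: userText });
  const res = await fetchFn(`${PROXY_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: history }),
  });
  if (!res.ok) throw new Error(`proxy returned ${res.status}`);
  const { reply } = await res.json(); // assumed response shape
  history.push({ role: "assistant", content: reply });
  return reply;
}
```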

Deploying the proxy involved working through a few real obstacles. A host key mismatch with GitHub required clearing the old entry from known_hosts and re-establishing trust. Setting up SSH access to GitHub from scratch — generating a key pair, adding the public key to GitHub's settings, verifying the connection — was a process I walked through step by step. A trailing slash in the proxy URL caused a double slash in the API endpoint path (//api/chat) that produced 404 errors until it was spotted. These are the kinds of friction points that don't appear in tutorials but show up in real deployments, and working through them is part of what makes a project genuinely complete rather than theoretically complete.
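The trailing-slash bug is easy to reproduce and easy to guard against with a small join helper (illustrative, not from the actual codebase):

```javascript
// Naively concatenating a base URL that ends in "/" with a path that
// starts with "/" yields ".../​/api/chat", which routes as a 404.
// Normalizing both sides before joining avoids the double slash.
function joinUrl(base, path) {
  return base.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}
```

With this helper, both `joinUrl("https://proxy.example.com/", "/api/chat")` and `joinUrl("https://proxy.example.com", "api/chat")` produce the same correct URL.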

What was built with AI, and what that means

Every file on this site — every line of HTML, every CSS rule, every JavaScript function, both proxy servers, the system prompt, and this write-up — was produced through conversation with Claude. That's worth being precise about, because "built with AI" covers a wide range of things. In this case it means I described what I wanted, gave feedback on what I saw, made decisions about trade-offs, provided my actual resume data and credentials setup, and directed the work at every step. Claude wrote the code. I reviewed it, deployed it, and iterated on it based on what I actually experienced.

The process surfaced decisions I might have deferred or skipped working alone. Should the blog be a section on the homepage or its own page? How should the proxy handle the API key? What should the AI assistant actually know, and what should it decline to answer? Each of these was a real conversation, not a click through a template wizard. The resulting site reflects those decisions in its structure — the separation of concerns between pages, the zero-dependency proxy constraint, the choice of Formspree over a custom backend for the contact form.

What I find most interesting about this project isn't the technology — plain HTML and a small Node server are about as unsexy as it gets. It's the demonstration that building something thoughtfully with AI assistance doesn't mean abdicating judgment. It means applying judgment more continuously, at a pace that a solo developer working from scratch couldn't sustain. Every feature on this site was built deliberately, explained clearly, and deployed in working condition. That's what I care about, regardless of how the code got written.
