Most teams I’ve worked with have at least one “clean up the CSS” script buried in a build step or someone’s terminal history. It usually works, but it isn’t very approachable. If you’re not comfortable with CLIs and config files, it’s hard to know what’s safe to remove or how to see the impact.
extract-css started as an attempt to make that process feel more tangible. I wanted a single page where you can paste real HTML and CSS, run an extraction, and see the result in a browser‑style preview—without installing anything and without sending your code to a third‑party service.
That led to a simple full‑stack shape: an edge‑hosted worker that runs the extraction engine and serves the app, and a React front‑end that talks to it through a typed API.
Why the Edge, Not Just a CLI
The first decision was whether this should stay “just a CLI” or become a web tool.
A CLI would have been familiar and fast, but previewing would still be left to the user: run a command, then manually open a browser and wire up a test page. I wanted the preview to be built in, and I didn’t want people to install packages or start a local server just to try a one‑off extraction.
An edge worker fits that need well. It can:
- serve the front‑end assets and shell, and
- expose a small API that runs PurgeCSS (or a similar engine) on the provided HTML and CSS.
Conceptually, the architecture looks like:
Editors → Typed API → Extraction Engine → Preview
That keeps deployment straightforward—one worker, one build—and avoids adding a separate backend stack.
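To make that concrete, here’s a minimal sketch of that kind of worker, assuming a Cloudflare Workers‑style runtime with a static‑asset binding. The route name, Env shape, and module path are illustrative rather than the app’s actual code, and the typed RPC layer is left out:
import { extractCss } from './extract'; // hypothetical module; the procedure is shown in the next section

type Env = { ASSETS: { fetch(request: Request): Promise<Response> } }; // assumed static-asset binding

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    // hypothetical route: one POST endpoint runs the extraction
    if (url.pathname === '/api/extract' && request.method === 'POST') {
      const body = (await request.json()) as { html: string; css: string };
      return Response.json(await extractCss(body));
    }
    // everything else falls through to the built front-end assets
    return env.ASSETS.fetch(request);
  },
};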
A Small, Typed API Over the Engine
On the server side, the public surface area is intentionally tiny. There’s a single procedure that accepts an object with HTML and CSS strings, runs basic validation, strips <script> tags and obvious external references, and then passes the cleaned data into the extraction engine with a configuration that preserves keyframes, font faces, and CSS variables.
In code, the shape looks roughly like this, sketched here directly against PurgeCSS’s JavaScript API:
import { PurgeCSS } from 'purgecss';
type ExtractRequest = { html: string; css: string };
type ExtractResult = { css: string };
async function extractCss(req: ExtractRequest): Promise<ExtractResult> {
  // input validation and HTML sanitization happen before this point
  const [purged] = await new PurgeCSS().purge({
    content: [{ raw: req.html, extension: 'html' }],
    css: [{ raw: req.css }],
    keyframes: true, fontFace: true, variables: true, // keep @keyframes, @font-face, custom properties
  });
  return { css: purged.css };
}
Errors get turned into simple, predictable responses that the client can render. There’s no authentication or database in the core flow; if the tool ever needs per‑user features, the existing call shape can stay the same.
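Sketching that error handling with a hypothetical wrapper name and error field, building on the extractCss procedure above:
type ExtractFailure = { error: string };

// Anything thrown by validation or the engine becomes a plain, renderable message.
async function safeExtract(req: ExtractRequest): Promise<ExtractResult | ExtractFailure> {
  try {
    return await extractCss(req);
  } catch (err) {
    return { error: err instanceof Error ? err.message : 'Extraction failed' };
  }
}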
On the client side, a typed API client and a data‑fetching layer keep the edge call encapsulated. Components ask for “run the extraction” via a mutation that returns a result object, and that’s enough to drive the UI without each piece knowing about PurgeCSS directly.
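A sketch of that client‑side call, assuming TanStack Query with a plain fetch standing in for the app’s actual typed client (the hook name and endpoint path are hypothetical):
import { useMutation } from '@tanstack/react-query';

type ExtractRequest = { html: string; css: string };
type ExtractResult = { css: string };

function useExtractCss() {
  return useMutation({
    // Components only see "run the extraction"; the transport details live here.
    mutationFn: async (input: ExtractRequest): Promise<ExtractResult> => {
      const res = await fetch('/api/extract', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(input),
      });
      if (!res.ok) throw new Error(`Extraction failed: ${res.status}`);
      return res.json();
    },
  });
}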
The Workbench: Two Editors and a Preview
The main experience is two editors and a preview pane, with a bit of state to keep everything predictable.
Instead of scattering state across multiple hooks, a reducer keeps track of two sets of values:
- what’s currently in the HTML and CSS editors, and
- what the preview is currently rendering.
A simplified version of that state shape:
type WorkbenchState = {
  htmlInput: string;
  cssInput: string;
  previewHtml: string;
  previewCss: string;
};
When you run an extraction, the mutation updates the “live” HTML and CSS for the preview in one step, and the preview re‑renders. Your inputs don’t change unless you choose to copy the result back into them. That makes it easy to compare before and after or tweak your snippet and rerun, without losing the original.
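A sketch of the reducer that pairs with that state—the action names are made up, but the shape matches the behavior above: editing touches only the inputs, and a successful run copies them into the preview alongside the extracted CSS:
type WorkbenchAction =
  | { type: 'editHtml'; value: string }
  | { type: 'editCss'; value: string }
  | { type: 'extractionSucceeded'; css: string };

function workbenchReducer(state: WorkbenchState, action: WorkbenchAction): WorkbenchState {
  switch (action.type) {
    case 'editHtml':
      return { ...state, htmlInput: action.value };
    case 'editCss':
      return { ...state, cssInput: action.value };
    case 'extractionSucceeded':
      // promote the current inputs and the extracted CSS to the preview in one step
      return { ...state, previewHtml: state.htmlInput, previewCss: action.css };
    default:
      return state;
  }
}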
The preview sits in a resizable panel. You can drag a handle or use the keyboard to adjust its height, and the preferred size is remembered locally. It’s a small detail, but it makes longer sessions more comfortable.
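“Remembered locally” most likely means something like localStorage; a minimal sketch of that persistence, with a made‑up storage key:
const PREVIEW_HEIGHT_KEY = 'extract-css:preview-height'; // hypothetical key

function loadPreviewHeight(fallback: number): number {
  const stored = localStorage.getItem(PREVIEW_HEIGHT_KEY);
  const parsed = stored === null ? NaN : Number(stored);
  return Number.isFinite(parsed) ? parsed : fallback;
}

function savePreviewHeight(height: number): void {
  localStorage.setItem(PREVIEW_HEIGHT_KEY, String(height));
}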
Handling Real‑World HTML and CSS Safely
A common use case is pasting HTML and CSS straight from a production page. That’s useful, but it also means you have to be careful about what you execute.
The preview treats all content as untrusted. Before anything goes into the iframe, the app strips scripts, removes CSS that tries to pull in external resources, and wraps the result in a tightly scoped document. The iframe is sandboxed and uses an inline document string rather than pointing at a URL.
If something goes wrong while building that document, the UI falls back to an explicit error state instead of trying to render half‑broken markup. The goal is to make it feel like a real browser preview without accidentally running arbitrary code.
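A rough sketch of that flow, with deliberately simple regex‑based stripping standing in for the app’s actual sanitizer, and the resulting document handed to a sandboxed iframe as an inline string:
function buildPreviewDocument(html: string, css: string): string {
  // strip scripts and obvious external references; not a bulletproof sanitizer
  const safeHtml = html.replace(/<script[\s\S]*?<\/script>/gi, '');
  const safeCss = css
    .replace(/@import[^;]*;/gi, '')
    .replace(/url\(\s*['"]?(?:https?:)?\/\/[^)]*\)/gi, 'none');
  return `<!doctype html><html><head><style>${safeCss}</style></head><body>${safeHtml}</body></html>`;
}

// Rendered as something like:
// <iframe sandbox="" srcDoc={buildPreviewDocument(previewHtml, previewCss)} />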
Keeping the App Simple to Use and Maintain
Outside the workbench, there isn’t much extra structure. There’s a small landing page that explains what the tool does and shows a basic before/after example, and a route for the app itself.
Both routes share the same API client and query layer, so loading and error states behave consistently. Locally, everything runs under a standard dev server; in production, it’s a Vite build bundled with the worker for deployment to the edge.
From a maintenance point of view, that’s the main benefit of the design: one place to deploy, one API to think about, and a single page where most of the behavior lives.
What I Learned
Working on extract-css reinforced that you don’t need a lot of surface area to make a tool useful. In this case:
- one well‑defined endpoint was enough to cover the core job,
- a small amount of carefully modeled client‑side state made the workflow feel manageable, and
- treating pasted content as untrusted by default avoided a category of problems that are easy to miss.
The result is modest by design: paste, extract, preview. That’s all it does. The value is giving people a clearer, safer way to answer a simple question—“what CSS is actually used here?”—without asking them to become build‑tool experts first or to trust a black‑box service with their code.