Shovel.js: What If Your Server Was Just a Service Worker?
Somewhere in the history of JavaScript, servers and browsers diverged into completely different worlds. Browsers got the Fetch API, the Cache API, CookieStore, and Service Workers. Servers got req and res objects, proprietary middleware chains, and a buffet of abstractions that vary wildly between Node.js, Bun, and Cloudflare Workers. Shovel.js asks a deceptively simple question: what if your server was just a Service Worker? The answer, it turns out, is a portable meta-framework that lets you write full-stack applications using the same web-standard APIs you already know from the browser, and then deploy that exact same code to any JavaScript runtime.
Built by Brian Kim (the creator of Crank.js), @b9g/shovel is the result of what he describes as a three-month meditation on the Service Worker model. It is young, opinionated, and unabashedly ambitious. If you are the kind of developer who believes that web standards should be the foundation of server-side development, Shovel might be the most interesting framework you have not heard of yet.
The Toolbox
Shovel.js packs a surprising amount into its standards-first approach:
- Standards-First APIs: Uses the Fetch API, Cache API, CookieStore, FileSystem API, URLPattern, and AsyncContext.Variable instead of inventing new abstractions
- True Runtime Portability: The same code runs on Node.js, Bun, and Cloudflare Workers without modification
- Generator-Based Middleware: A yield-based middleware system that makes the request/response lifecycle explicit and eliminates an entire class of bugs
- Built-In Bundler: esbuild-powered compilation with JSX/TypeScript support, asset content hashing, and code splitting
- Curated Global APIs: Configurable backends for caching, file storage, databases, logging, and cookies via self.* globals
- Framework Agnostic: Works with Crank.js, HTMX, Lit, Alpine.js, or plain vanilla JavaScript on the client side
- Multiple Rendering Modes: SSR, SSG, and SPA are all supported through the Service Worker event model
Breaking Ground
Get started with the project scaffolding tool:
npm create shovel my-app
This will prompt you to choose from templates including hello-world, api, static-site, and full-stack. For manual setup, install the core packages directly:
npm install @b9g/shovel @b9g/router
# or
yarn add @b9g/shovel @b9g/router
Laying the Foundation
Hello, Service Worker
At its core, a Shovel application looks like a Service Worker. You register a fetch event listener, and every incoming request flows through it. The router gives you a clean API for matching paths and HTTP methods:
import { Router } from '@b9g/router';

const router = new Router();

const startTime = Date.now();

router.get('/', (request: Request) => {
  return new Response('Hello from Shovel!');
});

router.get('/api/health', (request: Request) => {
  // Uptime derived from a module-level timestamp; process.uptime() is
  // Node-only and would undercut the portability story on other runtimes
  return Response.json({ status: 'ok', uptime: (Date.now() - startTime) / 1000 });
});

self.addEventListener('fetch', (event) => {
  event.respondWith(router.handle(event.request));
});
Notice what is happening here. There is no req object with a .params property. There is no res.send(). You receive a standard Request and return a standard Response. These are the same classes you use in browser JavaScript, in Cloudflare Workers, and in Deno. That is the entire point.
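Because these really are the standard classes, the rest of the platform comes along for free. Query strings, for instance, fall out of the WHATWG URL API rather than a framework-specific request.query; a small sketch using the same router:

router.get('/search', (request: Request) => {
  // Standard URL parsing, identical to what you would write in a browser
  const url = new URL(request.url);
  const q = url.searchParams.get('q') ?? '';
  return Response.json({ query: q });
});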
Dynamic Routes and Context
Routes can capture dynamic segments, and Shovel gives you a context object to access them:
router.get('/users/:id', (request: Request, context) => {
  const userId = context.params.id;
  return Response.json({ userId, name: `User ${userId}` });
});

router
  .get('/posts/:slug', (request: Request, context) => {
    return Response.json({ slug: context.params.slug, title: 'A Great Post' });
  })
  .put('/posts/:slug', async (request: Request, context) => {
    const body = await request.json();
    return Response.json({ updated: true, slug: context.params.slug, ...body });
  })
  .delete('/posts/:slug', (request: Request, context) => {
    return Response.json({ deleted: true, slug: context.params.slug });
  });
The router uses a radix tree, the same approach behind Fastify's router, so lookup time depends on the length of the path rather than the number of registered routes; you are not trading expressiveness for speed. Method chaining on the same path keeps related endpoints together in a way that reads naturally.
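The :param syntax also mirrors URLPattern, one of the standards in Shovel's feature list. Whether or not the radix tree delegates to it internally, the standard API is handy for one-off matching outside the router; a quick illustration:

const pattern = new URLPattern({ pathname: '/users/:id' });
const match = pattern.exec('https://example.com/users/42');
console.log(match?.pathname.groups.id); // "42"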
Running the Dev Server
Start building with the development server, which includes watch mode and hot reload:
npx shovel develop src/server.ts
Your application will be available at http://localhost:7777. When you are ready to ship, build for your target platform:
npx shovel build src/server.ts --platform=node
npx shovel build src/server.ts --platform=bun
npx shovel build src/server.ts --platform=cloudflare
Same source file, three different targets. The build system handles the platform-specific details so you do not have to.
Digging Deeper
The Yield Moment: Generator Middleware
Most middleware systems use a next() callback or an await next() pattern. Shovel takes a different approach entirely: middleware functions are async generators. You yield the request to pass it downstream, and you get the response back from the yield. Everything before the yield is the request phase. Everything after is the response phase.
async function* timing(request: Request) {
  const start = Date.now();
  const response = yield request;
  response.headers.set('X-Response-Time', `${Date.now() - start}ms`);
  return response;
}

async function* logger(request: Request) {
  console.log(`--> ${request.method} ${new URL(request.url).pathname}`);
  const response = yield request;
  console.log(`<-- ${response.status}`);
  return response;
}

async function* errorBoundary(request: Request) {
  try {
    const response = yield request;
    return response;
  } catch (error) {
    console.error('Request failed:', error);
    return Response.json(
      { error: 'Internal Server Error' },
      { status: 500 }
    );
  }
}
This is not just syntactic sugar. The generator pattern solves a real problem with traditional middleware: you cannot forget to call next(), and you cannot call it twice or at the wrong point in the async flow. The yield keyword makes the separation between request processing and response processing visually obvious. If you have ever debugged middleware that accidentally ran response logic before the downstream handler finished, you will appreciate how clean this model is.
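To build intuition for what the framework does with these generators, here is a minimal sketch of a runner, assuming well-behaved middleware that yields at most once; runMiddleware and its exact semantics are illustrative, not Shovel's internals:

type Middleware = (request: Request) => AsyncGenerator<Request, Response, Response>;
type Handler = (request: Request) => Response | Promise<Response>;

async function runMiddleware(
  middleware: Middleware,
  handler: Handler,
  request: Request,
): Promise<Response> {
  const gen = middleware(request);

  // Request phase: run everything before the first yield
  const first = await gen.next();
  if (first.done) return first.value; // middleware responded without yielding

  try {
    const response = await handler(first.value);
    // Response phase: resume the generator with the downstream response
    const result = await gen.next(response);
    return result.done ? result.value : response;
  } catch (error) {
    // Throwing into the generator surfaces the error at `yield request`,
    // which is exactly what lets errorBoundary's try/catch return a 500
    const result = await gen.throw(error);
    if (result.done) return result.value;
    throw error;
  }
}

Run timing around a handler this way and the X-Response-Time header lands on whatever the handler returns; nest runners and the middlewares wrap each other like onion layers.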
Curated Globals: Your Server's Standard Library
Shovel provides a set of configurable global APIs that mirror browser capabilities. Instead of importing database clients, logger instances, and cache layers from different packages, you access them through self.* globals that Shovel provisions based on your configuration:
// Caching responses using the standard Cache API
const cache = await self.caches.open('api-responses');

router.get('/api/expensive', async (request: Request) => {
  const cached = await cache.match(request);
  if (cached) return cached;

  const data = await fetchExpensiveData();
  const response = Response.json(data);
  await cache.put(request, response.clone());
  return response;
});

// File storage using the FileSystem API
const uploads = self.directories.get('uploads');

router.post('/upload', async (request: Request) => {
  const formData = await request.formData();
  const file = formData.get('file') as File;

  const handle = await uploads.getFileHandle(file.name, { create: true });
  const writable = await handle.createWritable();
  await writable.write(file);
  await writable.close();

  return Response.json({ uploaded: file.name });
});

// Cookie management using the CookieStore API
router.get('/session', async (request: Request) => {
  const sessionCookie = await self.cookieStore.get('session_id');
  if (!sessionCookie) {
    await self.cookieStore.set('session_id', crypto.randomUUID());
  }
  return Response.json({ hasSession: !!sessionCookie });
});
The magic is in the configuration layer. Your shovel.json file tells Shovel which backends to use for each API, and you can swap them without changing a line of application code:
{
  "directories": {
    "uploads": {
      "module": "@b9g/filesystem",
      "export": "local",
      "path": "./uploads"
    }
  },
  "caches": {
    "api-responses": {
      "module": "@b9g/cache",
      "export": "memory"
    }
  }
}
In development, you point at local storage and an in-memory cache. In production, you swap to S3 and Redis. The twelve-factor app philosophy baked right in.
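As a hypothetical sketch of that swap, a production shovel.json might look like the following; the module names, exports, and extra keys are placeholders for whichever S3 and Redis backends you wire in, not real packages:

{
  "directories": {
    "uploads": {
      "module": "@yourorg/shovel-s3-directory",
      "export": "s3",
      "bucket": "my-app-uploads"
    }
  },
  "caches": {
    "api-responses": {
      "module": "@yourorg/shovel-redis-cache",
      "export": "redis",
      "url": "redis://cache.internal:6379"
    }
  }
}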
Static Assets with Import Attributes
Shovel uses the import attributes syntax to handle static assets with automatic content hashing:
import favicon from './favicon.ico' with { assetBase: '/' };
import styles from './styles.css' with { assetBase: '/assets' };

router.get('/', (request: Request) => {
  return new Response(
    `<!DOCTYPE html>
<html>
  <head>
    <link rel="icon" href="${favicon}" />
    <link rel="stylesheet" href="${styles}" />
  </head>
  <body>
    <h1>Welcome</h1>
  </body>
</html>`,
    { headers: { 'Content-Type': 'text/html' } }
  );
});
The import paths are resolved at build time and replaced with content-hashed URLs. On Cloudflare, the assets are served through Workers Assets. On Node and Bun, Shovel provides static file middleware. You write the same import statement regardless of where the application runs.
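Since the interpolated bindings are plain strings at runtime, you can log them to see what the build produced; the hash format below is illustrative rather than Shovel's exact scheme:

console.log(favicon); // e.g. "/favicon-d41d8cd9.ico"
console.log(styles);  // e.g. "/assets/styles-5f2a9c1b.css"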
When to Pick Up This Shovel
Shovel.js is still in its 0.x era. The creator is upfront about the fact that bugs are expected and breaking changes will happen. This is not the framework you choose for a project that needs to ship to production next week with zero risk. But it is the framework you choose if you want to bet on web standards, if you are tired of runtime lock-in, and if the idea of writing your server as a Service Worker makes you grin.
The roadmap is ambitious: sessions, authentication, WebSockets, cron jobs, email, and an admin interface inspired by Django are all planned. The goal is a maximally batteries-included full-stack framework built entirely on standards that already exist in your browser.
If you are building with Crank.js, HTMX, Lit, or any client library that does not demand its own meta-framework, @b9g/shovel offers something genuinely novel. It is a server that thinks like a browser, a build tool that stays out of your way, and a bet on the idea that the best server APIs are the ones the web platform already defined.