[Hero image: a digital warehouse with data verification drones and a Maine Coon cat on a server rack]

VerifyFetch: The Fetch That Has Trust Issues (And That's a Good Thing)

The Orange Cat

Downloading a 4GB AI model in the browser is stressful enough. Now imagine the network dropping at 3.8GB, forcing you to start from scratch. Or worse, imagine that download completing successfully -- only for the file to be silently corrupted, or tampered with by a compromised CDN. VerifyFetch is a TypeScript library that wraps the familiar Fetch API with streaming integrity verification, resumable downloads, and fail-fast corruption detection, all while using a constant 2MB of memory regardless of how massive the file is. If you are loading WASM modules, AI models, or any large static assets in the browser, this library turns a nerve-wracking experience into a reliable one.

Why Your Browser Needs a Bouncer

The browser already has a built-in mechanism for integrity checking called Subresource Integrity (SRI). It works great for a 50KB script tag. But native SRI and crypto.subtle.digest() share an ugly secret: they buffer the entire file in memory before verifying it. Download a 4GB model and your browser needs 4GB of RAM just for the hash computation. That is a recipe for crashed tabs and angry users.
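
For contrast, this is what whole-file verification looks like with the platform primitives alone -- a sketch with an illustrative URL, showing exactly where the memory goes:

// The buffering approach: the entire file must be resident before hashing.
const res = await fetch("/model.bin"); // illustrative URL
const buffer = await res.arrayBuffer(); // a 4GB file means 4GB of RAM here
const digest = await crypto.subtle.digest("SHA-256", buffer);
// Base64-encode the 32-byte digest to compare against an SRI string.
const hash = btoa(String.fromCharCode(...new Uint8Array(digest)));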

VerifyFetch solves this by processing files in streaming chunks. It computes SHA-256 hashes incrementally using a Rust-compiled WASM module, never holding more than about 2MB in memory at once. It also layers on features that native SRI simply does not offer: resumable downloads that survive network failures, chunked verification that catches corruption early, multi-CDN failover, and a Service Worker mode that adds verification to every fetch call without touching your application code.

Feature Rundown

  • Streaming verification: Constant 2MB memory footprint regardless of file size, powered by Rust/WASM hashing.
  • Resumable downloads: Verified chunks are persisted to IndexedDB. If the network drops, resume from the last good chunk instead of starting over.
  • Fail-fast chunked verification: Detect corruption at chunk 5 and stop immediately, instead of downloading the entire file only to discard it.
  • Multi-CDN failover: Automatically try backup sources if the primary CDN fails or serves tampered content.
  • Service Worker mode: Add integrity verification to every fetch in your app without changing a single line of application code.
  • Manifest system: Manage file hashes in a structured JSON manifest with CLI tooling for generation and CI enforcement.
  • Zero runtime dependencies: Nothing extra enters your supply chain.

Getting Started

Install from npm with your preferred package manager:

npm install verifyfetch
# or
yarn add verifyfetch

The package ships with TypeScript types included. A Rust/WASM module is bundled for high-performance hashing, with an automatic fallback to crypto.subtle if WebAssembly is not available in the environment.

Your First Verified Download

Trusting a Single File

The simplest use case mirrors the standard fetch API but with an sri option for integrity verification:

import { verifyFetch } from "verifyfetch";

const response = await verifyFetch("/engine.wasm", {
  sri: "sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek=",
});

const module = await WebAssembly.compile(await response.arrayBuffer());

If the downloaded content does not match the expected hash, the promise rejects. You get the same familiar Response object you know from fetch, so integrating into existing code is straightforward.
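
Because a failed check surfaces as a rejected promise, handling tampering or corruption is ordinary try/catch. The recovery logic below is only a sketch:

import { verifyFetch } from "verifyfetch";

try {
  const response = await verifyFetch("/engine.wasm", {
    sri: "sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek=",
  });
  // The body matched the expected hash, so it is safe to use.
  await WebAssembly.compile(await response.arrayBuffer());
} catch (err) {
  // Hash mismatch or network failure: never touch partial or unverified data.
  console.error("Verified download failed:", err);
}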

Streaming Without the Memory Tax

For large files where memory matters, switch to the streaming API:

import { verifyFetchStream } from "verifyfetch";

const { stream } = await verifyFetchStream("/large-model.safetensors", {
  sri: "sha256-abc123...",
});

for await (const chunk of stream) {
  await processChunk(chunk);
}

This processes the file in constant memory. The hash is computed incrementally as chunks arrive, and the final integrity check happens at the end of the stream. For a 4GB file, you use roughly 2MB of memory instead of 4GB.
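
As one concrete way to use those chunks, the verified stream can be written straight to the Origin Private File System so nothing accumulates in memory. This is a sketch: it assumes the stream throws on a failed end-of-stream check (consistent with the fail-fast design), and createWritable support still varies across browsers:

import { verifyFetchStream } from "verifyfetch";

const { stream } = await verifyFetchStream("/large-model.safetensors", {
  sri: "sha256-abc123...",
});

// Persist verified chunks to OPFS instead of holding them in memory.
const root = await navigator.storage.getDirectory();
const handle = await root.getFileHandle("model.safetensors", { create: true });
const writable = await handle.createWritable();

try {
  for await (const chunk of stream) {
    await writable.write(chunk);
  }
  await writable.close(); // reached only if the final hash check passed
} catch (err) {
  await writable.abort(); // discard the partial file on any failure
  throw err;
}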

Checking the WASM Engine

The library ships a Rust-compiled WASM module for faster hashing. You can check whether it loaded successfully:

import { isUsingWasm } from "verifyfetch";

if (!(await isUsingWasm())) {
  console.warn("WASM unavailable - falling back to SubtleCrypto");
}

Without WASM, the library falls back to crypto.subtle, which still works but buffers more data in memory. A console warning fires automatically for files over 50MB when running in fallback mode.

Downloads That Survive Chaos

Resumable Downloads with IndexedDB Persistence

This is where VerifyFetch truly shines. The resumable download API verifies and persists each chunk to IndexedDB as it arrives. If the connection drops, the browser closes, or the user navigates away, the download picks up right where it left off:

import { verifyFetchResumable } from "verifyfetch";

const result = await verifyFetchResumable("/model.safetensors", {
  chunked: manifest.artifacts["/model.safetensors"].chunked,
  persist: true,
  onProgress: ({
    bytesVerified,
    totalBytes,
    chunksVerified,
    totalChunks,
    resumed,
    speed,
    eta,
  }) => {
    const percent = ((bytesVerified / totalBytes) * 100).toFixed(1);
    console.log(
      `${percent}% complete (${chunksVerified}/${totalChunks} chunks)` +
        `${resumed ? " [resumed]" : ""} - ETA: ${eta}s`
    );
  },
  onResume: (state) => {
    console.log(`Resuming from chunk ${state.verifiedChunks}`);
  },
});

Under the hood, the library uses HTTP Range requests to fetch only the remaining chunks. Each chunk is individually verified before being stored, so you never persist corrupted data. Utility functions let you inspect and manage in-progress downloads:

import { canResume, getDownloadProgress, cancelDownload } from "verifyfetch";

if (await canResume("/model.safetensors")) {
  const progress = await getDownloadProgress("/model.safetensors");
  console.log(`Can resume from ${progress.bytesVerified} bytes`);
}

await cancelDownload("/model.safetensors");
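
For context, those Range requests are plain HTTP semantics. A hand-rolled fetch for a single chunk -- offsets and chunk size illustrative, matching the 1MB chunks used elsewhere in this post -- would look like this:

// Fetch only chunk 5 of the file via an HTTP Range request.
const chunkSize = 1048576; // 1MB
const index = 5;
const start = index * chunkSize;
const end = start + chunkSize - 1; // Range end offsets are inclusive

const res = await fetch("/model.safetensors", {
  headers: { Range: `bytes=${start}-${end}` },
});
if (res.status !== 206) {
  throw new Error("Server did not honor the Range request");
}
const chunk = new Uint8Array(await res.arrayBuffer());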

Fail-Fast Corruption Detection

Traditional integrity checks are all-or-nothing: download the entire file, hash it, and only then discover it was corrupted. With chunked verification, corruption is caught the moment a bad chunk arrives:

import { createChunkedVerifier, verifyFetchStream } from "verifyfetch";

const verifier = createChunkedVerifier(
  manifest.artifacts["/model.bin"].chunked
);
const { stream } = await verifyFetchStream("/model.bin", {
  sri: manifest.artifacts["/model.bin"].chunked.root,
});

for await (const chunk of stream) {
  const result = await verifier.verifyNextChunk(chunk);

  if (!result.valid) {
    throw new Error(
      `Chunk ${result.index} is corrupt - aborting download`
    );
  }

  await processChunk(chunk);
}

If chunk 5 out of 4000 is corrupted, you stop at chunk 5 instead of downloading 3995 more useless chunks. That is a meaningful bandwidth savings, especially on metered connections.

The Service Worker Shortcut

If you want verification on every fetch without modifying application code, drop a Service Worker into your project:

// sw.js
import { createVerifyWorker } from "verifyfetch/worker";

createVerifyWorker({
  manifestUrl: "/vf.manifest.json",
  include: ["*.wasm", "*.bin", "*.onnx", "*.safetensors"],
  onFail: "block",
  cacheVerified: true,
});

// app.js - nothing changes here
const response = await fetch("/model.bin"); // verified transparently

The worker intercepts matching requests, verifies them against the manifest, and caches verified responses. Your application code stays completely untouched.
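
The only wiring required is a one-time registration from the page. Because sw.js uses ES module imports, it should be registered as a module worker:

// main.js - register the worker once at startup
if ("serviceWorker" in navigator) {
  await navigator.serviceWorker.register("/sw.js", { type: "module" });
}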

Managing Hashes at Scale

The Manifest and CLI

For projects with multiple verified assets, VerifyFetch provides a manifest format and CLI tools to generate and enforce it:

# Generate SRI hashes for your assets
npx verifyfetch sign model.bin engine.wasm config.json

# Generate chunked hashes for large files
npx verifyfetch sign --chunked --chunk-size 1048576 model.bin

# Verify all assets in CI
npx verifyfetch enforce --manifest ./vf.manifest.json

The v2 manifest format supports both simple whole-file hashes and chunked verification with per-chunk hashes:

{
  "version": 2,
  "base": "/",
  "artifacts": {
    "/model.bin": {
      "sri": "sha256-fullFileHash...",
      "size": 4294967296,
      "chunked": {
        "root": "sha256-rootHash...",
        "chunkSize": 1048576,
        "hashes": ["sha256-chunk0...", "sha256-chunk1..."]
      }
    }
  }
}
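
The earlier resumable and chunked examples reference a manifest object. One minimal way to obtain it at runtime -- assuming the manifest is served alongside your assets -- is a plain fetch:

import { verifyFetch } from "verifyfetch";

// The manifest is the root of trust, so serve it from a trusted origin.
const manifest = await (await fetch("/vf.manifest.json")).json();

const entry = manifest.artifacts["/model.bin"];
const response = await verifyFetch("/model.bin", { sri: entry.sri });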

Running verifyfetch enforce in your CI pipeline catches any case where assets change without updating their hashes, closing the loop on build-time integrity.

Multi-CDN Failover

When a single CDN is not enough, provide multiple sources and let the library handle failover:

import { verifyFetchFromSources } from "verifyfetch";

const response = await verifyFetchFromSources(
  "sha256-abc123...",
  "/model.bin",
  {
    sources: [
      "https://primary-cdn.example.com",
      "https://backup-cdn.example.com",
      "https://fallback.example.com",
    ],
    strategy: "sequential",
  }
);

The strategy option lets you choose among sequential (try each source in order), race (first successful response wins), and fastest (pick the quickest source). If one CDN serves tampered content, the integrity check fails and the library moves on to the next source automatically.
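
For example, a latency-sensitive load might switch the call above to the race strategy (same illustrative hash and URLs), keeping whichever source answers first:

import { verifyFetchFromSources } from "verifyfetch";

const raced = await verifyFetchFromSources("sha256-abc123...", "/model.bin", {
  sources: [
    "https://primary-cdn.example.com",
    "https://backup-cdn.example.com",
  ],
  strategy: "race", // first successful response wins
});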

Wrapping Up

VerifyFetch sits at a unique intersection: it combines streaming integrity verification, resumable downloads, fail-fast chunked checking, and multi-CDN failover for browser-based large file downloads -- all with zero runtime dependencies and a constant memory footprint. Whether you are loading multi-gigabyte AI models for WebLLM, verifying WASM modules before instantiation, or protecting your users from supply chain attacks like the polyfill.io incident, VerifyFetch gives you a drop-in solution that works like the fetch API you already know. Your browser still has trust issues, but now they are the productive kind.