Real Output Proof
Production proof from a real PDF smoke test: Modal Docling processing, digest and manifest outputs, private download checks, and cleanup verification.
FileDigest has been tested against a real 25-page academic PDF through the production web flow. The purpose of this page is to show the kind of evidence a buyer should expect before trusting a document-preparation workflow.
Proof run: April 29, 2026, against the production FileDigest Codex deployment.
Production smoke result
The production run processed a 2,271,469-byte PDF through Modal Docling and reached SUCCEEDED.
Verified output:
- digest.md generated
- manifest.json generated
- digest length: 88,719 characters
- manifest fields: job_id, generated_at, engine, processing_mode, ocr, table_mode, total_files, parsed_files, failed_files, total_tokens, warnings, worker_seconds, cost_estimate, sources
- app-level delete set the database job status to DELETED
- storage cleanup left zero objects under the smoke job prefix
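A buyer replicating this check can validate a downloaded manifest against the field list above. This is a minimal sketch, not FileDigest tooling; the field names come from the proof run, and the file path is an assumption about where you saved the manifest.

```python
import json

# Manifest fields reported in the proof run above.
EXPECTED_FIELDS = {
    "job_id", "generated_at", "engine", "processing_mode", "ocr",
    "table_mode", "total_files", "parsed_files", "failed_files",
    "total_tokens", "warnings", "worker_seconds", "cost_estimate", "sources",
}

def missing_manifest_fields(manifest: dict) -> set:
    """Return the expected fields absent from a parsed manifest."""
    return EXPECTED_FIELDS - manifest.keys()

# Usage (path is hypothetical):
# with open("manifest.json") as f:
#     print(missing_manifest_fields(json.load(f)))
```

An empty result set means every field the proof run documented is present.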
Privacy checks
The same run verified the private download model:
- raw input object URL was denied before and after upload
- unauthenticated artifact download returned 401
- wrong-user artifact download returned 404
- owning-user artifact download redirected to a private signed URL
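The expected outcomes above can be encoded as a small checker that compares observed HTTP status codes against the access model. This is an illustrative sketch: the scenario names are invented for this example, and the 302 for the owner redirect is an assumption (the page says only "redirected"; the actual code could differ).

```python
# Expected status per scenario, from the privacy checks above.
# "owner" -> 302 is an assumption; the run reports only a redirect.
EXPECTED_STATUS = {
    "unauthenticated": 401,  # no session at all
    "wrong_user": 404,       # authenticated, but not the job owner
    "owner": 302,            # redirect to a private signed URL
}

def access_model_violations(observed: dict) -> list:
    """Return scenarios whose observed status differs from the expected one."""
    return [name for name, code in EXPECTED_STATUS.items()
            if observed.get(name) != code]
```

Feed it the status codes you observe when repeating the three download attempts; an empty list means the private download model held.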
What this proves
The test demonstrates that FileDigest can process a real PDF, generate the user-facing Markdown digest, generate a machine-facing manifest, enforce private artifact access, and clean up job storage after deletion.
What this does not prove
This is not a benchmark across every file type or a promise that every damaged document will parse perfectly. It is a concrete production proof point: a real PDF moved through the upload, processing, private download, and cleanup path.
What to test yourself
Start with a small packet, inspect the generated digest.md and manifest.json, then decide whether the output is good enough to reuse in ChatGPT, Claude, RAG, research, consulting, or analyst workflows.
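The inspection step can be sketched as a quick sanity summary of a downloaded digest/manifest pair. This is not part of FileDigest; the function and file paths are hypothetical, and the manifest keys are the ones listed in the proof run above.

```python
import json
from pathlib import Path

def summarize_outputs(digest_path: str, manifest_path: str) -> dict:
    """Summarize a downloaded digest.md/manifest.json pair for a quick look."""
    digest = Path(digest_path).read_text(encoding="utf-8")
    manifest = json.loads(Path(manifest_path).read_text(encoding="utf-8"))
    return {
        "digest_chars": len(digest),              # compare against the digest length you expect
        "parsed_files": manifest.get("parsed_files"),
        "failed_files": manifest.get("failed_files"),
        "warnings": manifest.get("warnings"),
    }

# Usage (paths are hypothetical):
# print(summarize_outputs("digest.md", "manifest.json"))
```

If failed_files is nonzero or warnings is non-empty, read those entries before trusting the digest downstream.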