transcribe-clips

Use when transcribing raw footage clips with Deepgram. Produces timestamped JSON per clip with word-level timing and optional speaker diarization for multi-speaker content.

| Model | Source |
| --- | --- |
| sonnet | pack: video-pipeline |
Full Reference

Transcribes all usable clips from clip-manifest.json using Deepgram’s Nova-2 model. Produces per-clip timestamped JSON with word-level timing, paragraph segmentation, and utterance boundaries. Supports speaker diarization for interviews and multi-speaker content. Skipped clips never halt the pipeline. Outputs to transcripts/ directory.
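The flow above can be sketched against Deepgram's prerecorded `/v1/listen` REST endpoint. This is a minimal sketch, not the skill's actual implementation: the manifest field names (`clips`, `path`, `usable`) and the `audio/mp4` content type are assumptions for illustration.

```python
import json
import os
import urllib.parse
import urllib.request
from pathlib import Path

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def build_query(diarize: bool = False) -> str:
    """Query string for Deepgram's prerecorded endpoint (Nova-2, word timing on)."""
    return urllib.parse.urlencode({
        "model": "nova-2",
        "punctuate": "true",
        "paragraphs": "true",
        "utterances": "true",
        "diarize": str(diarize).lower(),
    })

def transcribe_clip(clip_path: str, api_key: str, diarize: bool = False) -> dict:
    """POST one clip's raw bytes to Deepgram and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{DEEPGRAM_URL}?{build_query(diarize)}",
        data=Path(clip_path).read_bytes(),
        headers={
            "Authorization": f"Token {api_key}",
            "Content-Type": "audio/mp4",  # adjust to the clip container in use
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def transcribe_manifest(manifest_path: str = "clip-manifest.json") -> None:
    """Transcribe every usable clip in the manifest into transcripts/<clip-name>.json."""
    api_key = os.environ["DEEPGRAM_API_KEY"]
    manifest = json.loads(Path(manifest_path).read_text())
    out_dir = Path("transcripts")
    out_dir.mkdir(exist_ok=True)
    for clip in manifest["clips"]:        # "clips"/"path"/"usable" are assumed field names
        if not clip.get("usable", True):  # skipped clips never halt the pipeline
            continue
        result = transcribe_clip(clip["path"], api_key)
        name = Path(clip["path"]).stem
        (out_dir / f"{name}.json").write_text(json.dumps(result, indent=2))
```

For interview footage, passing `diarize=True` adds per-word speaker labels to the response; for single-speaker clips it is left off to keep the output smaller.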


| Item | Value |
| --- | --- |
| API dependency | `DEEPGRAM_API_KEY` env var |
| Model | Deepgram Nova-2 |
| Input | `clip-manifest.json` in current directory |
| Output | `transcripts/<clip-name>.json` per clip |
| Next stage | classify-and-plan-edit |
| Rate limit handling | Back off 60 s, retry 3× |
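The rate-limit policy above (back off 60 s between attempts, retry up to 3×) can be sketched as a small wrapper. The `RateLimited` exception and the injectable `sleep` parameter are illustrative choices, not part of the skill's API.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

class RateLimited(Exception):
    """Raised by a transcription call when Deepgram returns HTTP 429."""

def with_backoff(
    call: Callable[[], T],
    retries: int = 3,
    delay: float = 60.0,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Run `call`; after each rate-limit error, wait `delay` seconds and retry.

    Gives up and re-raises after `retries` total attempts.
    """
    for attempt in range(retries):
        try:
            return call()
        except RateLimited:
            if attempt == retries - 1:
                raise  # exhausted all attempts
            sleep(delay)
    raise AssertionError("unreachable")
```

Injecting `sleep` keeps the wrapper testable without actually waiting 60 seconds per retry.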

| I want to… | File |
| --- | --- |
| Set up API key, configure Deepgram params, run transcription, handle errors | reference/setup.md |
| See the per-clip JSON schema and progress summary format | reference/output.md |

Usage: Read the reference file matching your current task from the index above. Each file is self-contained with code examples and inline gotchas.


┏━ ⚡ transcribe-clips ━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Transcribing [count] clips via Deepgram      ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛