ai-pimp

Active router for ALL AI model and inference requests — classifies by capability (text, image, video, streaming UI) and routes to the correct AI skill. Use when integrating AI models, choosing providers, or building AI features.

| Model | Source |
| --- | --- |
| sonnet | pack: ai |

Full Reference

If the request involves AI models, inference, providers, or AI-powered features in ANY way — text generation, image/video generation, streaming UI, tool use, embeddings, multi-modal, or provider selection — you MUST route through this skill FIRST.

This is not optional. This is not negotiable. You cannot skip this.

The orchestration layer for all AI model and inference expertise. Not documentation — an active router. Every AI request flows through this routing table before any response.

Mandatory Announcement — FIRST OUTPUT before anything else:

┏━ 🤖 ai-pimp ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ [one-line description of what request/routing] ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

No exceptions. Box frame first, then route.

The AI pack is armadillo’s inference ecosystem — 5 skills covering Claude/Anthropic, OpenAI, Google Gemini, Vercel AI SDK streaming UI, and fal.ai image/video/GPU generation. Routes by capability, not just provider preference.

Classify the request. Invoke the matching skill. No response before invocation.

| Request Pattern | Skill |
| --- | --- |
| Claude API, Anthropic SDK, tool use, system prompts, extended thinking | anthropic-api |
| OpenAI, GPT-4o, Chat Completions, Assistants API, fine-tuning | openai-api |
| Gemini, @google/genai, multimodal, video understanding, Vertex AI | google-genai |
| AI chat UI, streaming responses, useChat, useCompletion, RSC AI | vercel-ai-sdk |
| Image generation, video generation, FLUX, Stable Diffusion, GPU inference | fal-ai |
| “Which AI provider should I use?” | Decision matrix below |
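The routing table above can be sketched as a simple keyword classifier. This is an illustrative assumption, not part of the pack: the `ROUTES` patterns and the `routeRequest` helper are hypothetical names, and a real router would classify with far richer signals than substring matching.

```typescript
// Hypothetical sketch of the routing table as a keyword classifier.
// The keyword lists and routeRequest helper are illustrative assumptions.
const ROUTES: Array<{ keywords: string[]; skill: string }> = [
  { keywords: ["claude", "anthropic", "tool use", "extended thinking"], skill: "anthropic-api" },
  { keywords: ["openai", "gpt-4o", "chat completions", "assistants", "fine-tun"], skill: "openai-api" },
  { keywords: ["gemini", "@google/genai", "vertex"], skill: "google-genai" },
  { keywords: ["usechat", "usecompletion", "streaming", "chat ui"], skill: "vercel-ai-sdk" },
  { keywords: ["image generation", "flux", "stable diffusion", "gpu inference"], skill: "fal-ai" },
];

function routeRequest(request: string): string {
  const text = request.toLowerCase();
  for (const route of ROUTES) {
    // First matching pattern wins; real classification would weigh intent, not substrings.
    if (route.keywords.some((k) => text.includes(k))) return route.skill;
  }
  // Nothing matched — fall through to the provider decision matrix.
  return "decision-matrix";
}
```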

Use the decision matrix below when the user asks which provider to pick.

| Use Case | Recommendation |
| --- | --- |
| Text generation, reasoning, analysis | Provider preference or anthropic-api (Claude) |
| Long context, document processing | anthropic-api (200k context) |
| Code generation, function calling | anthropic-api or openai-api |
| Image generation | fal-ai (FLUX, Stable Diffusion) |
| Video generation | fal-ai (Kling, Runway via fal) |
| Multimodal (image + text input) | google-genai (Gemini) or anthropic-api |
| Streaming chat UI in React/Next.js | vercel-ai-sdk + specific provider |
| Multi-provider / provider-agnostic | vercel-ai-sdk (unified interface) |
| Embeddings, vector search | openai-api or google-genai |
| Real-time GPU inference, custom models | fal-ai |
  • If a request spans multiple skills, invoke the PRIMARY skill first (closest to the core question)
  • “Build an AI chatbot” → vercel-ai-sdk first (UI layer), then provider skill
  • “Which model is best?” → Decision matrix, then route to winning provider’s skill
  • Provider-specific API questions → route directly to that provider’s skill
  • Image/video generation is ALWAYS fal-ai — not the text provider skills
  • Streaming UI in any framework → vercel-ai-sdk first
| User Says | Chain |
| --- | --- |
| “Build an AI chat app” | vercel-ai-sdk → anthropic-api / openai-api |
| “Generate images from user prompts” | fal-ai |
| “Analyze uploaded documents” | anthropic-api (vision + long context) |
| “Multi-provider with fallback” | vercel-ai-sdk (provider abstraction) |
| “Real-time voice or video AI” | fal-ai → vercel-ai-sdk |
| “Fine-tune a model” | openai-api (fine-tuning API) |

Before routing, check project context:

  • stack.json → look for "ai" key — if set, route directly to that skill
  • package.json → detect ai, @anthropic-ai/sdk, openai, @google/genai, @fal-ai/client
  • .env.example → which API keys are present signals which providers are configured
| Detected Dep | Route Default |
| --- | --- |
| ai (Vercel AI SDK) | vercel-ai-sdk for UI layer questions |
| @anthropic-ai/sdk | anthropic-api for direct calls |
| openai | openai-api |
| @google/genai | google-genai |
| @fal-ai/client | fal-ai |
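The dependency check above can be sketched as a lookup over a parsed package.json. A minimal sketch, assuming the `DEP_ROUTES` map and `defaultRoutesFor` helper as hypothetical names — the pack itself does not define them:

```typescript
// Illustrative sketch of the package.json dependency check.
// DEP_ROUTES mirrors the table above; the helper name is an assumption.
const DEP_ROUTES: Record<string, string> = {
  "ai": "vercel-ai-sdk",
  "@anthropic-ai/sdk": "anthropic-api",
  "openai": "openai-api",
  "@google/genai": "google-genai",
  "@fal-ai/client": "fal-ai",
};

// pkg is a parsed package.json object; returns the default skills to route to.
function defaultRoutesFor(pkg: { dependencies?: Record<string, string> }): string[] {
  const deps = Object.keys(pkg.dependencies ?? {});
  return deps.filter((d) => d in DEP_ROUTES).map((d) => DEP_ROUTES[d]);
}
```

A project with both `ai` and `openai` installed would yield two candidate routes, which is consistent with the chain pattern above: vercel-ai-sdk for the UI layer, openai-api for direct provider calls.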
  • Never respond about AI providers or models before invoking the target skill
  • No summarizing, planning to invoke, or explaining what you’re about to do
  • If unclear, ask ONE clarifying question, then route
  • The skill’s content has the verified facts — always defer to it
  • Image/video gen is fal-ai territory — never suggest text provider SDKs for generation
  • “Add AI to my app” is vercel-ai-sdk territory — unified SDK, then pick provider