deep-recon-pimp
Active router for ALL research and recon requests — classifies and routes to the correct research sub-skill before any response. Use when doing deep research, competitor analysis, audience analysis, content research, or person research.
| Model | Source |
|---|---|
| sonnet | pack: recon |
Full Reference
This is not optional. This is not negotiable. You cannot skip this.
Deep Recon Pimp
The orchestration layer for all research expertise. Not documentation — an active router. Every research request flows through this routing table before any response.
Mandatory Announcement — FIRST OUTPUT before anything else:
┏━ 🔍 deep-recon-pimp ━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ [one-line description of what request/routing] ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

No exceptions. Box frame first, then route.
Quick Context
The recon pack is armadillo’s intelligence gathering ecosystem — 5 sub-skills covering web research, competitor analysis, audience analysis, content research, and person research. Takes a vague question and produces a structured, cited, verified research brief.
Routing Table
Classify the request. Invoke the matching skill. No response before invocation.
| Request Pattern | Skill |
|---|---|
| Fact-finding, verification, multi-source research, “research X” | web-research |
| Competitive landscape, market positioning, pricing analysis, feature gaps | competitor-analysis |
| Target audience, buyer personas, JTBD, psychographics, demographics | audience-analysis |
| Topic clusters, content gaps, keyword research, search intent | content-research |
| Public figure, bio, social presence, publications, speaker research | person-research |
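The classification step above can be sketched as a simple keyword router. This is a minimal, illustrative sketch in Python: the skill names come from the routing table, but the keyword lists and the `route` helper are assumptions, not the pack's actual classifier.

```python
# Minimal keyword router sketch. Skill names match the routing table;
# the keyword lists and the route() helper are illustrative assumptions.
ROUTES = {
    "competitor-analysis": ["competitor", "pricing", "positioning", "feature gap"],
    "audience-analysis": ["audience", "persona", "jtbd", "psychographic"],
    "content-research": ["content gap", "keyword research", "search intent", "topic cluster"],
    "person-research": ["public figure", "bio", "speaker", "outreach"],
}

def route(request: str) -> str:
    """Return the first matching skill; default to broad web-research."""
    text = request.lower()
    for skill, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return skill
    return "web-research"  # broad fact-finding / verification fallback
```

Specific skills are checked before the fallback so that, e.g., “research my competitors” routes to competitor-analysis instead of falling through to broad web-research.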
Cross-Cutting Rules
All research sub-skills enforce these constraints:
| Constraint | Requirement |
|---|---|
| Minimum sources | 3 for factual claims, 5 for statistics |
| Confidence levels | high / medium / speculative — flagged inline |
| Date-aware queries | Current year in all searches — never trust training-data dates |
| Citation format | Every claim links to source URL |
| Contradiction detection | When sources disagree, present both with evidence |
| Firecrawl fallback | All sub-skills may use firecrawl for deep extraction when WebFetch fails or for site-wide crawls |
Resilience
All sub-skills follow these resilience rules:
| Rule | Detail |
|---|---|
| Exponential backoff | 1s → 2s → 4s → 8s on fetch failures |
| Max retries | 3 per URL before marking as inaccessible |
| Timeout | 30s per fetch (WebFetch or firecrawl) |
| Rate limiting | Minimum 1s between external API calls; back off on 429 |
| Budget tracking | Log fetches consumed vs budget remaining |
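The backoff, retry, and timeout rules combine naturally into a single fetch wrapper. A minimal sketch, assuming any callable fetcher (a stand-in for WebFetch or firecrawl); `fetch_with_backoff` is a hypothetical helper, not part of the pack:

```python
import time

def fetch_with_backoff(fetch, url, max_retries=3, base_delay=1.0, timeout=30.0):
    """Retry a fetch with exponential backoff between attempts.

    `fetch` is any callable(url, timeout=...) that raises on failure.
    Delays double each retry (1s -> 2s -> 4s); after max_retries failed
    retries the URL is marked inaccessible (None is returned).
    """
    delay = base_delay
    for attempt in range(max_retries + 1):  # first try + up to 3 retries
        try:
            return fetch(url, timeout=timeout)
        except Exception:
            if attempt == max_retries:
                return None  # mark URL inaccessible
            time.sleep(delay)
            delay *= 2
```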
Firecrawl Budgets
Per-skill firecrawl page limits (in addition to WebFetch budgets):
| Skill | Firecrawl Pages | Use Case |
|---|---|---|
| web-research | 5 per session | Fallback when WebFetch fails |
| competitor-analysis | 10 per competitor | Site crawl + messaging extraction |
| audience-analysis | 5 per review platform | Review mining (G2, Amazon, Trustpilot) |
| content-research | 10 per audit | Competitor article extraction |
| person-research | 0 | Uses WebSearch + WebFetch only |
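The budget-tracking rule from the Resilience table can be enforced against these per-skill limits with a small counter. The limits mirror the table above; the `FirecrawlBudget` class itself is an illustrative sketch, not a real API:

```python
class FirecrawlBudget:
    """Track firecrawl pages consumed vs budget remaining per skill."""

    LIMITS = {
        "web-research": 5,
        "competitor-analysis": 10,
        "audience-analysis": 5,
        "content-research": 10,
        "person-research": 0,  # WebSearch + WebFetch only
    }

    def __init__(self, skill: str):
        self.remaining = self.LIMITS[skill]

    def consume(self, pages: int = 1) -> bool:
        """Spend pages if within budget; False means fall back to WebFetch."""
        if pages > self.remaining:
            return False
        self.remaining -= pages
        return True
```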
State Detection
Before routing, check project state to inform the research:
- brand.json → if exists, incorporate brand context into research
- stack.json → if exists, tailor technical research to the stack
- .claude/docs/plans/ → check for existing research briefs to build on
- business.json → if exists, use as context for competitor/audience research
| State | Recommendation |
|---|---|
| brand.json exists | Pre-load brand positioning for competitor analysis |
| business.json exists | Use NAP data for local competitor research |
| Existing research brief found | Build on it, don’t start from scratch |
| No prior research | Start with web-research for broad context |
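The file checks above can be sketched as one helper. The paths are the ones named in this section; the `detect_state` helper itself is hypothetical:

```python
from pathlib import Path

def detect_state(project_root: str) -> dict:
    """Check the state files that inform routing recommendations."""
    root = Path(project_root)
    plans = root / ".claude/docs/plans"
    return {
        "brand": (root / "brand.json").exists(),
        "stack": (root / "stack.json").exists(),
        "business": (root / "business.json").exists(),
        # Existing research briefs to build on, if the plans dir exists
        "prior_briefs": sorted(p.name for p in plans.glob("*.md")) if plans.is_dir() else [],
    }
```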
Chaining Patterns
| User Says | Chain |
|---|---|
| “Research my competitors” | competitor-analysis |
| “Who’s my target audience?” | audience-analysis |
| “Full competitive landscape” | competitor-analysis → audience-analysis |
| “Content strategy research” | content-research → competitor-analysis (content audit) |
| “Research this person for outreach” | person-research |
| “I need to understand this market” | web-research → competitor-analysis → audience-analysis |
| “Verify these claims” | web-research (verification mode) |
Priority Order (when multiple skills apply)
- web-research — broad fact-finding, verification
- competitor-analysis — market positioning, competitive landscape
- audience-analysis — who to target, buyer understanding
- content-research — what to create, topic authority
- person-research — individual intelligence gathering
What This Skill Does NOT Route
- SEO audits → seo-pimp handles
- Brand asset work → brand-pimp handles
- Content creation → content-creation pack handles
- Client reports → client-ops pack handles
Verification Gate
Every research output MUST include a verification summary table. No brief ships without this.
| Column | Required |
|---|---|
| Source URL | Full URL |
| Tier | 1 (primary) / 2 (official secondary) / 3 (journalism) / 4 (blog/forum) |
| Credibility Score | 0-8 per source-verification.md rubric |
| Confidence Level | HIGH / MEDIUM / SPECULATIVE |
Minimum thresholds:
- 3 sources for factual claims, 5 for statistics
- If any claim has only 1 source with credibility < 5, flag as SPECULATIVE
- All source URLs must be included in the final output — no citation without link
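The minimum thresholds can be expressed as a small scoring rule. The single-source-with-low-credibility case comes directly from the thresholds above; the exact HIGH/MEDIUM cutoffs in this sketch are assumptions:

```python
def confidence_level(sources: list) -> str:
    """Flag a claim's confidence from its supporting sources.

    Rule from the thresholds above: one source with credibility < 5 is
    SPECULATIVE. The 3-source cutoff for HIGH mirrors the factual-claim
    minimum; treating 2 sources as MEDIUM is an assumption.
    """
    if len(sources) == 1 and sources[0]["credibility_score"] < 5:
        return "SPECULATIVE"
    if len(sources) >= 3:
        return "HIGH"
    return "MEDIUM"
```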
Research Brief Schema
Standard output shape for all research types. Downstream consumers (content-researcher, prospecting, client-reports) rely on this format:
| Field | Type | Required |
|---|---|---|
| title | string | yes |
| type | web / competitor / audience / content / person | yes |
| date | ISO date | yes |
| sources[] | array of {url, title, tier, credibility_score, date_published} | yes |
| findings[] | array of {claim, confidence, supporting_sources[]} | yes |
| confidence_summary | {high: n, medium: n, speculative: n} | yes |
| contradictions[] | array of {claim, sources_disagree[], resolution} | if any |
Downstream consumers:
- content-researcher — uses research brief for content writing pipeline
- prospecting — uses research findings for prospect intelligence
- client-reports — incorporates research data into client deliverables
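A minimal brief matching the schema above might look like this; all values are illustrative placeholders, not real research data:

```python
# Illustrative instance of the research brief schema (placeholder values).
brief = {
    "title": "Competitor pricing landscape",
    "type": "competitor",
    "date": "2025-01-15",
    "sources": [
        {"url": "https://example.com/pricing", "title": "Pricing page",
         "tier": 1, "credibility_score": 7, "date_published": "2025-01-10"},
    ],
    "findings": [
        {"claim": "Entry plan starts at $29/mo", "confidence": "high",
         "supporting_sources": ["https://example.com/pricing"]},
    ],
    "confidence_summary": {"high": 1, "medium": 0, "speculative": 0},
    "contradictions": [],  # required only if sources disagree
}
```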
Hard Rules
- Never respond about research before invoking the target skill
- No summarizing, planning to invoke, or explaining what you’re about to do
- If unclear, ask ONE clarifying question, then route
- The skill’s content has the verified methodology — always defer to it
- “Research this” without specifics → start with web-research
- ALL research outputs must follow the brief template from the sub-skill’s output-format.md
- Privacy: person-research is PUBLIC information only — flag and refuse if request targets private individuals