Research Summarizer
Structured research summarization agent skill for non-dev users. Handles academic papers, web articles, reports, and documentation. Extracts key findings, generates comparative analyses, and produces properly formatted citations. Use when: user wants to summarize a research paper, compare multiple sources, extract citations from documents, or create structured research briefs. Plugin for Claude Code, Codex, Gemini CLI, and OpenClaw.
$ npx promptcreek add research-summarizer

Auto-detects your installed agents and installs the skill to each one.
What This Skill Does
The Research Summarizer skill provides a structured workflow for summarizing research papers, articles, and reports into actionable briefs. It's designed for product managers, analysts, and founders to efficiently extract key information, compare sources, and properly format citations, saving time and improving comprehension.
When to Use
- Summarize a research paper into a structured brief.
- Compare multiple sources side-by-side with synthesis.
- Extract and format all citations from a document.
- Get a research brief on a specific topic.
- Break down a complex whitepaper.
- Quickly understand the key findings in a document.
Installation
$ npx promptcreek add research-summarizer

Auto-detects your installed agents (Claude Code, Cursor, Codex, etc.) and installs the skill to each one.
Research Summarizer
> Read less. Understand more. Cite correctly.
Structured research summarization workflow that turns dense source material into actionable briefs. Built for product managers, analysts, founders, and anyone who reads more than they should have to.
Not a generic "summarize this" — a repeatable framework that extracts what matters, compares across sources, and formats citations properly.
Slash Commands
| Command | What it does |
|---------|-------------|
| /research:summarize | Summarize a single source into a structured brief |
| /research:compare | Compare 2-5 sources side-by-side with synthesis |
| /research:cite | Extract and format all citations from a document |
When This Skill Activates
Recognize these patterns from the user:
- "Summarize this paper / article / report"
- "What are the key findings in this document?"
- "Compare these sources"
- "Extract citations from this PDF"
- "Give me a research brief on [topic]"
- "Break down this whitepaper"
- Any request involving: summarize, research brief, literature review, citation, source comparison
If the user has a document and wants structured understanding → this skill applies.
Workflow
/research:summarize — Single Source Summary
- Identify source type
- Academic paper → use IMRAD structure (Introduction, Methods, Results, and Discussion)
- Web article → use claim-evidence-implication structure
- Technical report → use executive summary structure
- Documentation → use reference summary structure
- Extract structured brief
Title: [exact title]
Author(s): [names]
Date: [publication date]
Source Type: [paper | article | report | documentation]
## Key Thesis
[1-2 sentences: the central argument or finding]
## Key Findings
1. [Finding with supporting evidence]
2. [Finding with supporting evidence]
3. [Finding with supporting evidence]
## Methodology
[How they arrived at these findings — data sources, sample size, approach]
## Limitations
- [What the source doesn't cover or gets wrong]
## Actionable Takeaways
- [What to do with this information]
## Notable Quotes
> "[Direct quote]" (p. X)
- Assess quality
- Source credibility (peer-reviewed, reputable outlet, primary vs secondary)
- Evidence strength (data-backed, anecdotal, theoretical)
- Recency (when published, still relevant?)
- Bias indicators (funding source, author affiliation, methodology gaps)
/research:compare — Multi-Source Comparison
- Collect sources (2-5 documents)
- Summarize each using the single-source workflow above
- Build comparison matrix
| Dimension | Source A | Source B | Source C |
|------------------|-----------------|-----------------|-----------------|
| Central Thesis | ... | ... | ... |
| Methodology | ... | ... | ... |
| Key Finding | ... | ... | ... |
| Sample/Scope | ... | ... | ... |
| Credibility | High/Med/Low | High/Med/Low | High/Med/Low |
- Synthesize
- Where do sources agree? (convergent findings = stronger signal)
- Where do they disagree? (divergent findings = needs investigation)
- What gaps exist across all sources?
- What's the weight of evidence for each position?
- Produce synthesis brief
## Consensus Findings
[What most sources agree on]
## Contested Points
[Where sources disagree, with strongest evidence for each side]
## Gaps
[What none of the sources address]
## Recommendation
[Based on weight of evidence, what should the reader believe/do?]
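The comparison matrix above can be sketched as a small helper that renders per-source summaries into a Markdown table. This is an illustrative sketch, not part of the skill's tooling; the function name, dictionary layout, and sample data are assumptions.

```python
def comparison_matrix(sources: dict[str, dict[str, str]]) -> str:
    """Render per-source summaries as a Markdown comparison matrix.

    Missing dimensions are filled with "..." so the table stays aligned.
    """
    dimensions = ["Central Thesis", "Methodology", "Key Finding",
                  "Sample/Scope", "Credibility"]
    names = list(sources)
    header = "| Dimension | " + " | ".join(names) + " |"
    rule = "|" + "---|" * (len(names) + 1)
    rows = [
        "| " + dim + " | "
        + " | ".join(sources[n].get(dim, "...") for n in names) + " |"
        for dim in dimensions
    ]
    return "\n".join([header, rule] + rows)

print(comparison_matrix({
    "Source A": {"Central Thesis": "X drives Y", "Credibility": "High"},
    "Source B": {"Central Thesis": "Y is confounded", "Credibility": "Med"},
}))
```

Filling unsummarized cells with "..." mirrors the placeholder style of the matrix template above.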
/research:cite — Citation Extraction
- Scan document for all references, footnotes, in-text citations
- Extract and format using the requested style (APA 7 default)
- Classify citations by type:
- Primary sources (original research, data)
- Secondary sources (reviews, meta-analyses, commentary)
- Tertiary sources (textbooks, encyclopedias)
- Output sorted bibliography with classification tags
Supported citation formats:
- APA 7 (default) — social sciences, business
- IEEE — engineering, computer science
- Chicago — humanities, history
- Harvard — general academic
- MLA 9 — arts, humanities
Tooling
scripts/extract_citations.py
CLI utility for extracting and formatting citations from text.
Features:
- Regex-based citation detection (DOI, URL, author-year, numbered references)
- Multiple output formats (APA, IEEE, Chicago, Harvard, MLA)
- JSON export for integration with reference managers
- Deduplication of repeated citations
Usage:
# Extract citations from a file (APA format, default)
python3 scripts/extract_citations.py document.txt

# Specify format
python3 scripts/extract_citations.py document.txt --format ieee

# JSON output
python3 scripts/extract_citations.py document.txt --format apa --output json

# From stdin
cat paper.txt | python3 scripts/extract_citations.py --stdin
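As a rough illustration of the regex-based detection the script advertises, here is a minimal sketch covering two of the listed pattern types (DOI and author-year) plus deduplication. The patterns, function name, and key format are assumptions for illustration, not the script's actual internals.

```python
import re

# Illustrative patterns: a simplified DOI matcher and an (Author, YYYY)
# in-text citation matcher. Real-world DOI matching needs a looser pattern.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[A-Za-z0-9/._-]+")
AUTHOR_YEAR_RE = re.compile(r"\(([A-Z][A-Za-z'-]+(?: et al\.)?),\s*(\d{4})\)")

def find_citations(text: str) -> list[str]:
    """Return unique citation keys found in the text, in detection order."""
    seen: dict[str, None] = {}  # dict preserves insertion order, so it dedups
    for doi in DOI_RE.findall(text):
        seen.setdefault(f"doi:{doi}", None)
    for author, year in AUTHOR_YEAR_RE.findall(text):
        seen.setdefault(f"{author} ({year})", None)
    return list(seen)

sample = "See (Smith, 2021) and doi 10.1000/xyz123; also (Smith, 2021)."
print(find_citations(sample))
```

The repeated `(Smith, 2021)` collapses to one entry, matching the script's deduplication feature.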
scripts/format_summary.py
CLI utility for generating structured research summaries.
Features:
- Multiple summary templates (academic, article, report, executive)
- Configurable output length (brief, standard, detailed)
- Markdown and plain text output
- Key findings extraction with evidence tagging
Usage:
# Generate structured summary template
python3 scripts/format_summary.py --template academic

# Brief executive summary format
python3 scripts/format_summary.py --template executive --length brief

# List all templates
python3 scripts/format_summary.py --list-templates

# JSON output
python3 scripts/format_summary.py --template article --output json
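A minimal sketch of how a template renderer like this might assemble its Markdown skeleton. The section lists and function name here are illustrative assumptions; the skill's actual templates live in references/summary-templates.md.

```python
# Hypothetical template registry: each template is an ordered list of sections.
TEMPLATES = {
    "academic": ["Key Thesis", "Key Findings", "Methodology",
                 "Limitations", "Actionable Takeaways", "Notable Quotes"],
    "executive": ["Key Thesis", "Key Findings", "Actionable Takeaways"],
}

def render_template(name: str) -> str:
    """Return a Markdown skeleton for the named template."""
    if name not in TEMPLATES:
        raise ValueError(f"unknown template: {name}")
    lines = [f"# {name.title()} Summary", ""]
    for section in TEMPLATES[name]:
        lines += [f"## {section}", "", "[...]", ""]
    return "\n".join(lines)

print(render_template("executive"))
```

Keeping sections as data rather than hard-coded strings is what makes a `--list-templates` flag cheap to support.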
Quality Assessment Framework
Rate every source on four dimensions:
| Dimension | High | Medium | Low |
|-----------|------|--------|-----|
| Credibility | Peer-reviewed, established author | Reputable outlet, known author | Blog, unknown author, no review |
| Evidence | Large sample, rigorous method | Moderate data, sound approach | Anecdotal, no data, opinion |
| Recency | Published within 2 years | 2-5 years old | 5+ years, may be outdated |
| Objectivity | No conflicts, balanced view | Minor affiliations disclosed | Funded by interested party, one-sided |
Overall Rating:
- 4 Highs = Strong source — cite with confidence
- 2+ Mediums = Adequate source — cite with caveats
- 2+ Lows = Weak source — verify independently before citing
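The overall-rating rules above can be expressed as a small function. The table does not specify a tie-break when a source scores both 2+ Mediums and 2+ Lows, so this sketch assumes Lows take precedence; that assumption, like the function name, is mine.

```python
def overall_rating(ratings: list[str]) -> str:
    """Collapse four dimension ratings ("high"/"medium"/"low") into a verdict.

    Assumed tie-break: two or more Lows make a source weak even if it
    also has two or more Mediums.
    """
    if ratings.count("low") >= 2:
        return "weak"
    if all(r == "high" for r in ratings):
        return "strong"
    return "adequate"

print(overall_rating(["high", "high", "medium", "medium"]))
```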
Summary Templates
See references/summary-templates.md for:
- Academic paper summary template (IMRAD)
- Web article summary template (claim-evidence-implication)
- Technical report template (executive summary)
- Comparative analysis template (matrix + synthesis)
- Literature review template (thematic organization)
See references/citation-formats.md for:
- APA 7 formatting rules and examples
- IEEE formatting rules and examples
- Chicago, Harvard, MLA quick reference
Proactive Triggers
Flag these without being asked:
- Source has no date → Note it. Undated sources lose credibility points.
- Source contradicts other sources → Highlight the contradiction explicitly. Don't paper over disagreements.
- Source is behind a paywall → Note limited access. Suggest alternatives if known.
- User provides only one source for a comparison → Ask for at least one more. Comparison needs 2+ sources.
- Citations are incomplete → Flag missing fields (year, author, title). Don't invent metadata.
- Source is 5+ years old in a fast-moving field → Warn about potential obsolescence.
Installation
One-liner (any tool)
git clone https://github.com/alirezarezvani/claude-skills.git
cp -r claude-skills/product-team/research-summarizer ~/.claude/skills/
Multi-tool install
./scripts/convert.sh --skill research-summarizer --tool codex|gemini|cursor|windsurf|openclaw
OpenClaw
clawhub install cs-research-summarizer
Related Skills
- product-analytics — Quantitative analysis. Complementary — use research-summarizer for qualitative sources, product-analytics for metrics.
- competitive-teardown — Competitive research. Complementary — use research-summarizer for individual source analysis, competitive-teardown for market landscape.
- content-production — Content writing. Research-summarizer feeds content-production — summarize sources first, then write.
- product-discovery — Discovery frameworks. Complementary — research-summarizer for desk research, product-discovery for user research.
Details
- Version: 1.0.0
- License: MIT
- Source: seeded
- Published: 3/17/2026