Write Query
Write optimized SQL for your dialect with best practices. Use when translating a natural-language data need into SQL, building a multi-CTE query with joins and aggregations, optimizing a query against a large partitioned table, or getting dialect-specific syntax for Snowflake, BigQuery, Postgres, etc.
What This Skill Does
This skill generates optimized SQL queries from natural language descriptions. It identifies the necessary data, filters, aggregations, and joins. It's useful for data analysts, engineers, and anyone who needs to query databases but may not be fluent in SQL.
When to Use
- Generating SQL for data extraction
- Creating reports from database tables
- Translating business questions into SQL
- Building data pipelines
- Querying specific data subsets
Installation
$ npx promptcreek add write-query
Auto-detects your installed agents (Claude Code, Cursor, Codex, etc.) and installs the skill to each one.
View Full Skill Content
/write-query - Write Optimized SQL
> If you see unfamiliar placeholders or need to check which tools are connected, see CONNECTORS.md.
Write a SQL query from a natural language description, optimized for your specific SQL dialect and following best practices.
Usage
/write-query <description of what data you need>
Workflow
1. Understand the Request
Parse the user's description to identify:
- Output columns: What fields should the result include?
- Filters: What conditions limit the data (time ranges, segments, statuses)?
- Aggregations: Are there GROUP BY operations, counts, sums, averages?
- Joins: Does this require combining multiple tables?
- Ordering: How should results be sorted?
- Limits: Is there a top-N or sample requirement?
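As a sketch, a request like "top 10 products by revenue last month" decomposes into exactly these parts. The table and column names here are hypothetical:

```sql
-- Hypothetical schema: orders(order_id, product_id, amount, created_at)
SELECT
    product_id,                           -- output column
    SUM(amount) AS total_revenue          -- aggregation
FROM orders
WHERE created_at >= DATE '2026-02-01'     -- filter: time range
  AND created_at <  DATE '2026-03-01'
GROUP BY product_id                       -- grouping
ORDER BY total_revenue DESC               -- ordering
LIMIT 10;                                 -- top-N limit
```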
2. Determine SQL Dialect
If the user's SQL dialect is not already known, ask which they use:
- PostgreSQL (including Aurora, RDS, Supabase, Neon)
- Snowflake
- BigQuery (Google Cloud)
- Redshift (Amazon)
- Databricks SQL
- MySQL (including Aurora MySQL, PlanetScale)
- SQL Server (Microsoft)
- DuckDB
- SQLite
- Other (ask for specifics)
Remember the dialect for future queries in the same session.
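Dialects diverge even on basics, which is why this question matters. For example, truncating a timestamp to the month (a non-exhaustive sketch; for timestamp columns BigQuery uses TIMESTAMP_TRUNC instead):

```sql
-- PostgreSQL / Redshift / Snowflake
SELECT DATE_TRUNC('month', created_at) FROM events;

-- BigQuery (argument order is reversed; MONTH is a keyword)
SELECT DATE_TRUNC(created_at, MONTH) FROM events;

-- MySQL (no DATE_TRUNC; format to the first of the month instead)
SELECT DATE_FORMAT(created_at, '%Y-%m-01') FROM events;

-- SQL Server 2022+
SELECT DATETRUNC(month, created_at) FROM events;
```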
3. Discover Schema (If Warehouse Connected)
If a data warehouse MCP server is connected:
- Search for relevant tables based on the user's description
- Inspect column names, types, and relationships
- Check for partitioning or clustering keys that affect performance
- Look for pre-built views or materialized views that may simplify the query
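Without a connected warehouse, most dialects expose a similar catalog you can query directly. A sketch using the standard information_schema (the 'orders' table name is an assumption):

```sql
-- Find candidate tables by name
SELECT table_schema, table_name
FROM information_schema.tables
WHERE LOWER(table_name) LIKE '%order%';

-- Inspect the columns of a candidate table
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'orders'
ORDER BY ordinal_position;
```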
4. Write the Query
Follow these best practices:
Structure:
- Use CTEs (WITH clauses) for readability when queries have multiple logical steps
- One CTE per logical transformation or data source
- Name CTEs descriptively (e.g., daily_signups, active_users, revenue_by_product)
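A minimal sketch of the one-CTE-per-step structure, with one CTE per data source and a final SELECT that joins them (schema assumed):

```sql
WITH daily_signups AS (
    -- Step 1: signups per day
    SELECT DATE_TRUNC('day', created_at) AS signup_day,
           COUNT(*) AS signups
    FROM users
    GROUP BY 1
),
daily_orders AS (
    -- Step 2: orders per day
    SELECT DATE_TRUNC('day', created_at) AS order_day,
           COUNT(*) AS orders
    FROM orders
    GROUP BY 1
)
-- Step 3: combine the two daily series
SELECT s.signup_day, s.signups, o.orders
FROM daily_signups AS s
LEFT JOIN daily_orders AS o
  ON o.order_day = s.signup_day
ORDER BY s.signup_day;
```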
Performance:
- Never use SELECT * in production queries -- specify only the needed columns
- Filter early (push WHERE clauses as close to the base tables as possible)
- Use partition filters when available (especially date partitions)
- Prefer EXISTS over IN for subqueries with large result sets
- Use appropriate JOIN types (don't use LEFT JOIN when INNER JOIN is correct)
- Avoid correlated subqueries when a JOIN or window function works
- Be mindful of exploding joins (many-to-many)
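The EXISTS-over-IN advice in sketch form (tables hypothetical; actual plans vary by engine and index layout):

```sql
-- Often slower: the subquery result set may be materialized in full
SELECT u.user_id, u.email
FROM users AS u
WHERE u.user_id IN (
    SELECT user_id FROM events WHERE event_type = 'purchase'
);

-- Usually preferable: the engine can short-circuit per outer row,
-- ideally via an index on events(user_id, event_type)
SELECT u.user_id, u.email
FROM users AS u
WHERE EXISTS (
    SELECT 1
    FROM events AS e
    WHERE e.user_id = u.user_id
      AND e.event_type = 'purchase'
);
```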
Readability:
- Add comments explaining the "why" for non-obvious logic
- Use consistent indentation and formatting
- Alias tables with meaningful short names (not just a, b, c)
- Put each major clause on its own line
Dialect-specific optimizations:
- Apply dialect-specific syntax and functions (see the sql-queries skill for details)
- Use dialect-appropriate date functions, string functions, and window syntax
- Note any dialect-specific performance features (e.g., Snowflake clustering, BigQuery partitioning)
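Partition pruning is where these dialect features pay off most. A BigQuery-flavored sketch (project, dataset, and partition column names are assumptions):

```sql
-- Filter on the partitioning column directly, without wrapping it in a
-- function, so the engine can prune partitions instead of scanning all
SELECT user_id, COUNT(*) AS events
FROM project.dataset.events
WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY user_id;
```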
5. Present the Query
Provide:
- The complete query in a SQL code block with syntax highlighting
- Brief explanation of what each CTE or section does
- Performance notes if relevant (expected cost, partition usage, potential bottlenecks)
- Modification suggestions -- how to adjust for common variations (different time range, different granularity, additional filters)
6. Offer to Execute
If a data warehouse is connected, offer to run the query and analyze the results. If the user wants to run it themselves, the query is ready to copy-paste.
Examples
Simple aggregation:
/write-query Count of orders by status for the last 30 days
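In PostgreSQL, the generated query might look like this (the orders table and its columns are assumed):

```sql
SELECT status,
       COUNT(*) AS order_count
FROM orders
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY status
ORDER BY order_count DESC;
```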
Complex analysis:
/write-query Cohort retention analysis -- group users by their signup month, then show what percentage are still active (had at least one event) at 1, 3, 6, and 12 months after signup
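A PostgreSQL-flavored sketch of the cohort query above, with one CTE per step (users and events tables assumed; extend the CASE pattern for 6 and 12 months):

```sql
WITH cohorts AS (
    -- Assign each user to a signup-month cohort
    SELECT user_id, DATE_TRUNC('month', signup_at) AS cohort_month
    FROM users
),
monthly_activity AS (
    -- One row per user per month with at least one event
    SELECT DISTINCT user_id, DATE_TRUNC('month', event_at) AS activity_month
    FROM events
)
SELECT
    c.cohort_month,
    COUNT(DISTINCT c.user_id) AS cohort_size,
    100.0 * COUNT(DISTINCT CASE
        WHEN a.activity_month = c.cohort_month + INTERVAL '1 month'
        THEN c.user_id END) / COUNT(DISTINCT c.user_id) AS pct_active_m1,
    100.0 * COUNT(DISTINCT CASE
        WHEN a.activity_month = c.cohort_month + INTERVAL '3 months'
        THEN c.user_id END) / COUNT(DISTINCT c.user_id) AS pct_active_m3
FROM cohorts AS c
LEFT JOIN monthly_activity AS a USING (user_id)
GROUP BY c.cohort_month
ORDER BY c.cohort_month;
```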
Performance-critical:
/write-query We have a 500M row events table partitioned by date. Find the top 100 users by event count in the last 7 days with their most recent event type.
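One way this could come out, in sketch form: filter to the last seven partitions first, then use window functions so the table is scanned once (event_date is assumed to be the partition column):

```sql
-- Partition filter first, so only ~7 daily partitions are scanned
WITH recent AS (
    SELECT user_id, event_type, event_ts
    FROM events
    WHERE event_date >= CURRENT_DATE - INTERVAL '7 days'
),
ranked AS (
    SELECT user_id,
           event_type,
           COUNT(*) OVER (PARTITION BY user_id) AS event_count,
           ROW_NUMBER() OVER (
               PARTITION BY user_id ORDER BY event_ts DESC
           ) AS rn
    FROM recent
)
SELECT user_id,
       event_count,
       event_type AS most_recent_event_type
FROM ranked
WHERE rn = 1                    -- keep each user's latest event only
ORDER BY event_count DESC
LIMIT 100;
```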
Tips
- Mention your SQL dialect upfront to get the right syntax immediately
- If you know the table names, include them -- otherwise Claude will help you find them
- Specify if you need the query to be idempotent (safe to re-run) or one-time
- For recurring queries, mention if it should be parameterized for date ranges
Details
- License: MIT
- Source: admin
- Published: 3/18/2026
Related Skills
Senior Data Scientist
World-class senior data scientist skill specialising in statistical modeling, experiment design, causal inference, and predictive analytics. Covers A/B testing (sample sizing, two-proportion z-tests, Bonferroni correction), difference-in-differences, feature engineering pipelines (Scikit-learn, XGBoost), cross-validated model evaluation (AUC-ROC, AUC-PR, SHAP), and MLflow experiment tracking — using Python (NumPy, Pandas, Scikit-learn), R, and SQL. Use when designing or analysing controlled experiments, building and evaluating classification or regression models, performing causal analysis on observational data, engineering features for structured tabular datasets, or translating statistical findings into data-driven business decisions.
Instrument Data To Allotrope
Convert laboratory instrument output files (PDF, CSV, Excel, TXT) to Allotrope Simple Model (ASM) JSON format or flattened 2D CSV. Use this skill when scientists need to standardize instrument data for LIMS systems, data lakes, or downstream analysis. Supports auto-detection of instrument types. Outputs include full ASM JSON, flattened CSV for easy import, and exportable Python code for data engineers. Common triggers include converting instrument files, standardizing lab data, preparing data for upload to LIMS/ELN systems, or generating parser code for production pipelines.
Analyze
Answer data questions -- from quick lookups to full analyses. Use when looking up a single metric, investigating what's driving a trend or drop, comparing segments over time, or preparing a formal data report for stakeholders.