If you want your AI coding assistant to actually "get" your project, great rules are non-negotiable. But writing effective Cursor rules isn't just about dumping a list of do's and don'ts; it's about structure, clarity, and showing real patterns that match how your team works.
This guide cuts through the noise, breaking down proven best practices for crafting Cursor rules that work: what to include, what to avoid, and how to organize everything for maximum clarity and impact. Whether you're managing a monorepo or fine-tuning a startup codebase, these strategies will help you get more from AI, improve code quality, and keep your team moving fast. Let's get into it.
🎯 What Works Well
1. Structure & Organization
- Proper YAML frontmatter (description, globs, alwaysApply fields); see the minimal sketch after this list
- Logical categorization by feature (backend, frontend, testing, etc.)
- Consistent markdown formatting
- Modular rule files
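For instance, a minimal rule file following this structure might look like the sketch below. The file name, description, and glob pattern are illustrative, not prescribed:

```mdc
---
description: Conventions for backend service modules
globs: ["src/modules/**/*.ts"]
alwaysApply: false
---

# Backend Module Conventions

- One service per module directory; export the public API from index.ts only.
- Validate input at module boundaries before touching the database.
```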
2. Content Best Practices
- Start with clear, high-level context
- Be explicit about essential elements (SDK versions, imports, error handling)
- Include concrete examples in markdown format, with comments (see the sketch after this list)
- Explicitly mark deprecated patterns
- Provide clear verification steps
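Putting these practices together, a rule body can state the target version, pair a correct example with an incorrect one, and flag retired patterns explicitly. The sketch below assumes a Medusa v2 project; the deprecated import path and verification command are illustrative:

````mdc
---
description: Import conventions for backend services
globs: ["src/modules/**/*.ts"]
alwaysApply: false
---

# Import Conventions

Target SDK: @medusajs/framework v2.x (state versions explicitly).

```ts
// ✅ Correct: import from the framework entry point
import { MedusaService } from "@medusajs/framework/utils"

// ❌ Incorrect: deep import from a build-output path
import { MedusaService } from "@medusajs/medusa/dist/utils"
```

DEPRECATED: deep imports from `dist/`. Use the `@medusajs/framework/*` entry points instead.

Verify: `npx tsc --noEmit` passes and no import references `dist/`.
````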
3. Effective Rule Types
- Always: Framework/language guidelines
- Auto Attached: File-pattern matching
- Agent Requested: Context-based intelligent application
- Manual: Explicit attachment when needed (see the frontmatter sketches after this list)
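In practice, each type is selected through frontmatter fields. The combinations below reflect how Cursor documents the mapping; verify against the current docs before relying on them:

```yaml
# Always: injected into every context
description: TypeScript style guide
alwaysApply: true
---
# Auto Attached: applied when files matching the globs enter the context
globs: ["**/*.test.ts"]
alwaysApply: false
---
# Agent Requested: the agent decides, based on the description
description: Apply when writing database migrations
alwaysApply: false
---
# Manual: only included when referenced explicitly (e.g. @migration-rules)
alwaysApply: false
```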
4. Technical Patterns
- Emphasize functional programming over OOP
- Strong type safety (TypeScript, strict mode)
- Consistent naming conventions (directories, files, functions)
- Structured error handling (guard clauses, early returns; see the sketch after this list)
- Mandatory testing with explicit patterns
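As a compact TypeScript sketch of the kind of pattern these rules might mandate (the type and function names are illustrative):

```ts
type Order = { id: string; total: number }

// Guard clause with an early return, strict typing, and a functional
// (non-mutating) update instead of nested conditionals and mutation.
export function applyDiscount(order: Order, rate: number): Order {
  if (rate < 0 || rate > 1) {
    throw new RangeError("rate must be between 0 and 1")
  }
  return { ...order, total: order.total * (1 - rate) }
}
```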
⚠️ What to Avoid
1. Structural Mistakes
- Unclear markdown bullets
- Ad-hoc YAML in place of proper frontmatter
- Inconsistent formatting
- Overly complex files (break into modules)
2. Content Anti-Patterns
- Generic or vague rules
- Missing examples (always show correct/incorrect)
- Outdated rules
- Ignoring edge cases
- No verification criteria
3. Technical Pitfalls
- Unmarked deprecated APIs
- Mixed framework versions in examples
- Neglected security best practices
- Ignoring performance considerations
- Treating testing as an afterthought
Framework Organization Pattern
.cursor/rules/
├── workspace.mdc
├── architecture.mdc
├── backend.mdc
├── frontend.mdc
├── testing.mdc
└── README.md
🎨 Content Strategy
Effective Rule Categories
- Framework-specific patterns (Medusa, React)
- Language conventions (TypeScript, naming)
- Architecture patterns (modules, services, APIs)
- Security practices (validation, auth)
- Performance optimization (caching, queries)
- Testing strategies (unit, integration, e2e)
- Development workflow (git, CI/CD)
Example Quality Indicators
✅ Real-world complete examples
✅ Contextual explanations
✅ Edge-case handling
✅ Explicit version guidance
✅ Clear integration patterns
🔄 Maintenance Best Practices
- Regularly update with framework changes
- Test rules with diverse prompts
- Maintain real-world examples
- Remove outdated patterns
- Incorporate new project experiences
Quality Assurance
- Test rules against known-problematic requests
- Validate that deprecated patterns get flagged
- Rewrite ambiguous instructions
- Monitor generated code quality
- Collect team feedback
💡 Key Insights
- Explicit structured rules over vague suggestions
- Concrete examples trump descriptions
- Consistency over perfection
- Contextual clarity is essential
- Continuous maintenance required
📏 Granularity vs. Grouping
Optimal Granularity
- Split rules when:
  - File patterns differ
  - Frameworks or tools vary
  - Developer roles/workflows differ
  - Rule sets are large (>500 lines)
  - Contexts are distinct (testing vs. components vs. API)
- Group rules when:
  - Patterns are closely related
  - File patterns are the same
  - The workflow is the same
  - Rule sets are small (<100 lines)
  - Principles are shared
Recommended Testing Rules Split
.cursor/rules/
├── testing-unit.mdc          # Unit tests
├── testing-integration.mdc   # Integration tests
└── testing-e2e.mdc           # E2E tests
Why Split Testing?
- Different file patterns and tools (reflected in each file's globs; sketched below)
- Distinct developer workflows
- Specific best practices (mocking, real DBs, user interactions)
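The split is wired up through each file's frontmatter, so each rule only fires for its own kind of test. The glob patterns below are illustrative and depend on your project layout:

```yaml
# testing-unit.mdc
description: Unit testing conventions (mocking, isolation)
globs: ["src/**/*.test.ts"]
alwaysApply: false
---
# testing-e2e.mdc
description: E2E testing conventions (real flows, user interactions)
globs: ["e2e/**/*.spec.ts"]
alwaysApply: false
```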
📁 Recommended Medusa Project Structure
Current:
.cursor/rules/
├── medusa-development.mdc
├── remix-storefront.mdc
├── typescript-patterns.mdc
├── testing-patterns.mdc
└── remix-hook-form-migration.mdc
Optimized:
.cursor/rules/
├── medusa-backend.mdc        # API, modules, services
├── remix-frontend.mdc        # Components, routes, forms
├── typescript-patterns.mdc   # Language specifics
├── testing-unit.mdc          # Unit testing
├── testing-e2e.mdc           # E2E testing
└── migration-guides.mdc      # All migrations
🔧 Implementation Tips
- Begin broad, then split as complexity grows
- Use clear naming conventions
- Cross-reference related rules (see the sketch after this list)
- Monitor whether rules actually improve generated code
- Incorporate regular team feedback
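Cursor rules can pull other project files in as context with @-references, which is one way to cross-reference shared material. The rule content, paths, and file names below are hypothetical:

```mdc
---
description: Frontend form conventions
globs: ["app/routes/**/*.tsx"]
alwaysApply: false
---

# Form Handling

Follow the shared validation schema when generating forms:

@app/lib/validation.ts
```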
Applied together, these strategies keep Cursor rules effective, maintainable, and genuinely useful to your team's workflow.