ARTICLE | JUNE 9

Higher productivity: Cursor features every developer must know

By Ernesto Parada

AI-assisted development environments are revolutionizing how we code and have become an indispensable part of modern software development. Cursor stands out as one of the most powerful tools in this space, as it helps developers and teams increase their productivity and maintain code quality while spending more time on creative problem-solving.

Cursor is an AI-powered code editor that can create new features or functions from natural language prompts. Thanks to its project-wide understanding, it offers real-time code completions and context-aware recommendations, and can apply coordinated edits across multiple files when changes are made.

Built on the widely adopted Visual Studio Code platform (VS Code), Cursor combines familiar IDE functionality and interface with advanced AI models such as GPT-4 and Anthropic Claude. In this post, we’ll explore how to leverage some of Cursor’s key features to enhance your development workflow.

The importance of context in prompts

When working with LLMs, the quality and accuracy of the response largely depend on how we formulate the prompt. LLM-based agents are no exception: the clearer, more specific, and better structured the context we provide, the better and more useful the results will be. In short, dedicating time to crafting good prompts and providing rich, well-organized context is key to maximizing the capabilities of LLM agents in our workflow.

Here are some practical tips to improve your prompts:

  • Be concrete and direct: avoid ambiguities and clearly define what you need.
  • Provide examples whenever possible: this helps the model understand the expected format or style.
  • Include specific constraints or conditions: length limits, coding styles, or naming conventions.
  • Use relevant project information: references to files, functions, or rules that help contextualize the task.
  • Break complex problems into smaller tasks: shorter and more manageable prompts typically generate better results.
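
As a hypothetical illustration of these tips, compare a vague request with one that applies them (the file names and requirements below are invented for the example):

```
Vague:
  Add validation to the signup form.

Specific:
  In @src/components/SignupForm.tsx, add client-side validation:
  - email must match a standard email pattern
  - password must be at least 12 characters
  - show errors inline, following the style used in @src/components/LoginForm.tsx
  Keep the existing naming conventions and do not modify other files.
```

The second version names the file, states concrete constraints, points at an existing example of the desired style, and limits the scope of the change.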

Rules: guiding AI’s behavior

One of Cursor’s most powerful and sometimes underused features is Rules. Rules provide system-level guidance to the Agent and Cmd-K AI, letting you steer the agent’s behavior with persistent, reusable context at the prompt level. Instead of repeating instructions each time, you can define rules that are applied automatically based on context.

There are two main types:

  • Project Rules are version-controlled guidelines stored in .cursor/rules that help maintain consistency across your codebase. They can be automatically triggered or manually invoked to encode domain-specific knowledge, automate workflows or templates, and standardize architectural decisions.
  • User Rules apply to all projects and are always included in your model context. They are defined in Cursor Settings > Rules and are ideal for setting global preferences such as response language or tone.

Rules are written in the .mdc format (similar to Markdown) and can be always active, triggered by file patterns, selected by the agent, or invoked manually. You can even generate them from a conversation using /Generate Cursor Rule, which is very practical for saving best practices that emerge during chat interactions.
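
As a sketch, a project rule stored in .cursor/rules might look like the following (the description, glob pattern, and guidelines are hypothetical examples, not a prescribed format for your project):

```
---
description: API route conventions
globs: src/api/**/*.ts
alwaysApply: false
---

- Validate all request bodies before use.
- Return errors as `{ error: string }` with an appropriate HTTP status code.
- Never log credentials or tokens.
```

Because the globs pattern matches files under src/api, the rule is attached automatically whenever the agent works on those files.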

The key is to make them clear, precise, and, when possible, include examples. If you find yourself repeating the same instructions in every prompt, you should probably transform them into a rule.

In this repository, you can find example rules for different technologies.

@Symbols: optimize workflows and avoid errors with @Docs

Another useful feature is the use of @symbols to reference elements within your chats or prompts. When you type @, a suggestion menu appears that allows you to add very precise context without copying and pasting anything.

You can reference:

  • @Files: project files
  • @Folders: complete directories
  • @Code: functions, classes, or any code symbol
  • @Git: change history
  • @Notepads: personal notes
  • @Past Chats: previous conversations
  • @Cursor Rules: rules defined in Cursor
  • @Web: external pages
  • @Docs: documentation and guides

@Docs is one of the biggest advantages. It gives you access to third-party documentation that has already been crawled and indexed, ready to be used as context, as well as custom documents you add yourself. To crawl and index custom docs, use @Docs > Add new doc. From Cursor Settings > Features > Docs, you can view the docs you have added and edit, delete, or add new ones.

This is essential for avoiding typical LLM errors, such as completing code or providing explanations based on outdated versions of a library or API. Additionally, Cursor automatically reindexes this documentation to keep it updated, which is crucial for projects that depend on constantly evolving frameworks or SDKs.

Leveraging MCP for better development

The Model Context Protocol (MCP) transforms how developers interact with Cursor by enabling seamless integration with external tools and data sources. By acting as a plugin system, MCP allows you to extend the agent’s capabilities and access relevant information without having to explain your project’s structure or repeatedly provide context manually.

Here, you can see some practical use cases:

  • Query databases directly from Cursor.
  • Read and use data stored in Notion.
  • Interact with GitHub to create pull requests, manage branches, or search code.
  • Manage memory systems to store and retrieve useful information during work.
  • Automate tasks on platforms like Stripe.

MCP server configuration is done through JSON files (.cursor/mcp.json for project-specific settings or ~/.cursor/mcp.json for global configuration), where commands, arguments, and environment variables are defined to securely manage credentials and tokens.
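
As a minimal sketch, a project-level .cursor/mcp.json could look like the following. The server name and package are illustrative (check the server's own documentation for the exact command), and the token value is a placeholder you would supply from a secure source rather than committing it:

```
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```

Each entry defines how Cursor launches the server process and which environment variables it receives, so credentials stay out of your prompts.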

One example worth mentioning is a browser monitoring and interaction tool built on MCP. It lets the agent read the browser’s error console, capture screenshots, and analyze network requests, among other things. It’s particularly useful for diagnosing and resolving complex errors that require detailed runtime environment information.

Best practices for quality assurance in AI-generated code

When working with AI-powered development, even well-structured prompts and detailed context might not always meet expectations. Large Language Models (LLMs) have limited context windows, and in complex projects, you might encounter challenges like unintended modifications or code duplication.

To enhance code quality and maintain consistency, we recommend implementing some of these tools:

  • Husky: manage Git hooks and run automatic validations on commits, pushes, or merges.
  • jscpd: detect code duplication and maintain clean code.
  • ESLint: maintain a consistent style and detect errors.
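
As a sketch of how these checks might combine, a Husky pre-commit hook (the .husky/pre-commit file) could run linting and duplication detection before every commit. This assumes the tools above are installed as dev dependencies; the jscpd threshold is an illustrative choice:

```
npx eslint .
npx jscpd src --threshold 5
```

If either command fails, the commit is blocked, which catches AI-introduced style drift or duplicated blocks before they reach the repository.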

One of the most effective ways to guide the LLM toward generating robust, coherent code is through well-defined tests. Rather than treating testing as a validation step after the code is ready, the idea is to incorporate it actively into the generation process.

Let’s explore the different testing approaches you can leverage:

  • Unit tests: define the expected behavior of specific functions. When provided as part of the prompt, they help the agent understand boundaries, valid inputs, and expected outputs.
  • Integration tests: evaluate how well system components or modules work together and ensure that these parts communicate and interact successfully.
  • E2E (end-to-end) tests: useful for modeling the user experience or key business flows from start to finish and checking regressions when the generated code impacts critical parts.

Adopting a Test-Driven Development (TDD) mindset when working with AI not only improves output quality but also provides the model with a clear and verifiable framework for what it needs to build. Even if you don’t strictly follow TDD, outlining the expected scenario before requesting code can reduce errors and unwanted behaviors.

If you use tools like Vitest, Jest, or Playwright, you can directly ask the model to complete or adapt tests in those formats; Cursor understands these environments well.

Small details make a difference

While these tips might seem straightforward, their combined impact on your experience and results is significant:

  • Choose the appropriate LLM model for each task, as different models excel at different challenges.
  • Master shortcuts to accelerate your workflow with Cursor’s time-saving commands.
  • Make frequent commits: a good practice that prevents headaches when the agent makes unexpected changes.

Cursor is a powerful tool for developers who want to work faster and write better code. It simplifies common tasks and helps maintain high quality with less stress. However, effective AI-assisted development isn’t about replacing traditional coding practices; it’s about enhancing them.

Our team combines technical expertise with cutting-edge AI tools to deliver superior software solutions. If you are looking to implement AI into your projects, get in touch to discuss how we can help transform your development process.
