April 28, 2026

The Complete Guide to Using AI Coding Assistants for WordPress Development in 2026

WordPress runs roughly 43% of the web. Every AI coding tutorial uses a Next.js demo app.

There’s a reason for that gap, and it isn’t that AI tools can’t write PHP. It’s that PHP, the WordPress hooks system, the plugin API, theme conventions, and the specific way WordPress handles globals confuse generic AI tools in specific, repeatable ways. If you’ve tried prompting Cursor to “add a custom field to the WooCommerce checkout” and watched it invent a hook that doesn’t exist, you know exactly what I mean.

This guide is the result of using Cursor, Claude Code, GitHub Copilot, and Windsurf on real WordPress projects — client work, my own plugins, a couple of theme rebuilds. It covers which tool to use for what, how to actually configure them for WordPress, the prompt patterns that work, and the failure modes you need to catch before they ship.

If you’re a WordPress developer evaluating AI tooling and you’re tired of articles written by people who’ve never opened functions.php, this is for you.


TL;DR — Which AI tool should you use for WordPress?

| Tool | Best for | Pricing | Learning curve | Verdict |
| --- | --- | --- | --- | --- |
| Cursor | Day-to-day plugin and theme development | $20/mo Pro | Low — it’s VS Code | The default choice for most WordPress devs |
| Claude Code | Larger refactors, multi-file changes, automation with WP-CLI | Usage-based + Claude plans | Medium — terminal-first | Best for serious refactoring work |
| GitHub Copilot | Inline autocomplete inside an existing IDE | $10–$19/mo | Very low | Fine as a typing assistant, weak for WordPress-specific reasoning |
| Windsurf | Cursor alternative with agentic features | $15/mo Pro | Low | Worth trying if Cursor’s pricing or limits frustrate you |

If you only read this far: use Cursor for daily work, add Claude Code when you need to do something bigger than one file. Skip Copilot unless you already have it.


Why WordPress is harder for AI than typical web stacks

This is the section most articles skip. Understanding why AI tools struggle with WordPress matters more than knowing which tool is “best,” because the failure modes carry across all of them.

The hooks system confuses AI

WordPress’s add_action / add_filter system is a runtime event bus. Hook names aren’t defined in any single file — they’re scattered across core, plugins, and themes. AI tools handle this in two bad ways:

The first is inventing hook names. Ask Claude or Copilot to “modify the cart total before display” and you might get woocommerce_before_cart_total (real) or woocommerce_modify_cart_total_display (made up). The made-up version is plausible enough that you’ll only catch it when nothing happens at runtime.

The second is mixing actions and filters. Actions don’t return values; filters must. AI tools occasionally generate a callback that returns a value but is registered with add_action, or vice versa. Both are silent failures — no PHP error, just behavior that doesn’t match what you asked for.
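The distinction is mechanical and easy to check once you know to look for it. A minimal sketch (the hook names are real core hooks, the callbacks are illustrative):

```php
<?php
// Correct: a filter callback receives a value and MUST return one.
add_filter( 'the_title', function ( $title ) {
    return $title . ' (updated)'; // the returned value flows back to WordPress
} );

// Correct: an action callback works by side effect; its return value is ignored.
add_action( 'wp_footer', function () {
    echo '<!-- footer hook fired -->';
} );

// The silent bug: register a returning callback with add_action and it runs
// without any PHP error, but the returned value goes nowhere.
```

If a callback registered with `add_filter` falls through without a `return`, the filtered value becomes `null` downstream, which is the same class of silent failure in the other direction.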

The fix isn’t tool-side. It’s prompt-side: name the hook explicitly, or tell the AI to verify that the hook exists in the WordPress documentation before using it.

Globals and the loop create context AI can’t infer

WordPress relies on globals ($post, $wp_query, $wpdb) and an implicit “loop” state that depends on where in the request lifecycle your code runs. A function that works inside a template might fail in an AJAX handler because $post isn’t set there.

AI tools generate code based on the file they can see. If you’re editing a single template file, the AI doesn’t know whether you’re inside or outside the loop, whether wp_reset_postdata() is needed, or whether you should be using get_the_ID() versus $post->ID. They guess, and the guess is often wrong in subtle ways that work in dev and break in production under specific conditions.
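The canonical pattern AI tools mishandle is a secondary query inside a template. A minimal sketch of the correct version:

```php
<?php
// A secondary query inside a template: loop state must be managed explicitly,
// or later template tags will read the wrong global $post.
$recent = new WP_Query( array( 'posts_per_page' => 3 ) );

if ( $recent->have_posts() ) {
    while ( $recent->have_posts() ) {
        $recent->the_post(); // sets the global $post for template tags
        echo '<h3>' . esc_html( get_the_title() ) . '</h3>';
    }
    wp_reset_postdata(); // restore the main query's $post
}
```

Dropping `wp_reset_postdata()` is exactly the kind of omission that works in a simple dev template and corrupts output in production when other code reads `$post` afterward.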

Plugin and theme conventions get skipped

WordPress has security and i18n conventions that aren’t enforced by PHP — they’re enforced by code review and the plugin directory team:

  • All output must be escaped (esc_html, esc_attr, esc_url, wp_kses)
  • All input must be sanitized (sanitize_text_field, wp_unslash)
  • All form submissions need nonces (wp_nonce_field, check_admin_referer)
  • All user-facing strings need translation functions (__(), _e(), esc_html__())
  • Database queries should use $wpdb->prepare(), never string concatenation

AI tools skip these conventions unless you explicitly require them. Generated WordPress code is often functional but not safe, and not submittable to the plugin directory.
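Here’s what all of those conventions look like together in one place. This is a hypothetical handler (the `myplugin_` names, option, and form field are illustrative, not from any real plugin), but every call in it is a real WordPress API:

```php
<?php
// Hypothetical admin-post handler demonstrating the conventions above.
function myplugin_handle_form() {
    // 1. Nonce check: reject forged requests.
    check_admin_referer( 'myplugin_save', 'myplugin_nonce' );

    // 2. Capability check: reject unauthorized users.
    if ( ! current_user_can( 'manage_options' ) ) {
        wp_die( esc_html__( 'Insufficient permissions.', 'myplugin' ) );
    }

    // 3. Sanitize input before storing it.
    $label = sanitize_text_field( wp_unslash( $_POST['label'] ?? '' ) );
    update_option( 'myplugin_label', $label );
}
add_action( 'admin_post_myplugin_save', 'myplugin_handle_form' );

// 4. Escape at output time; wrap user-facing strings for translation.
function myplugin_render_label() {
    printf(
        '<span>%s</span>',
        esc_html( get_option( 'myplugin_label', __( 'Default', 'myplugin' ) ) )
    );
}
```

Note that sanitization happens on the way in and escaping on the way out; they are separate steps, and AI-generated code tends to do at most one of them.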

I’ll come back to specific prompt patterns that fix this.


The 4 AI coding tools worth using for WordPress

Cursor

What it is: A fork of VS Code with AI features built into the editor. Inline edits, multi-file changes, chat panel, and an “agent mode” that can read and modify files autonomously.

Setup for WordPress: Install Cursor. Open your WordPress project (the full WP root, or just wp-content/ if you only work on themes/plugins). The two settings that matter:

  1. Add a .cursorrules file at your project root with WordPress-specific context (example below).
  2. Index your project — Cursor will offer this on first open. Let it run; it’s how the AI gets context across files.

A minimal .cursorrules for WordPress:

This is a WordPress project.
- Always escape output using esc_html, esc_attr, esc_url, or wp_kses.
- Always sanitize input using sanitize_text_field or appropriate WP sanitizer.
- All form submissions must use wp_nonce_field and check_admin_referer.
- All user-facing strings must use translation functions (__, _e, esc_html__).
- Use $wpdb->prepare() for all database queries.
- When using hooks, verify the hook name exists in WordPress core or the relevant plugin.
- Follow WordPress PHP coding standards (snake_case, Yoda conditions are not required).
- Do not use deprecated functions (mysql_*, create_function, etc).

For deeper configuration, see How to set up Cursor for WordPress development.

What it’s good at: Day-to-day work. Adding a function, modifying a template, generating a plugin scaffold, writing a Gutenberg block. The inline edit (Cmd+K) is the killer feature — select a function, describe the change, get a diff.

Where it fails: Large multi-file refactors where context exceeds the model’s window. Anything that requires running code or commands (Cursor’s agent can do this but it’s not as smooth as Claude Code’s terminal-native flow).

Real example: I asked Cursor to “add a settings page under WooCommerce that lets the admin set a minimum order quantity per product category.” It generated the menu registration, the settings page HTML with proper nonces and escaping, the option saving logic, and the filter on woocommerce_quantity_input_args. About 80% correct on first try — it forgot to register the settings using register_setting() so the values weren’t persisting. One follow-up prompt fixed it.
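For reference, the step it skipped is small. A sketch of what the fix looked like (option and group names here are illustrative, not the actual project’s):

```php
<?php
// Without register_setting(), options.php silently refuses to persist
// the field submitted from the settings page.
add_action( 'admin_init', function () {
    register_setting(
        'myplugin_settings_group',  // group name used by settings_fields()
        'myplugin_min_order_qty',   // option name
        array(
            'type'              => 'integer',
            'sanitize_callback' => 'absint', // coerce to a non-negative int
            'default'           => 1,
        )
    );
} );
```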

Claude Code

What it is: Anthropic’s terminal-based coding agent. Runs in your shell, reads your codebase, executes commands, and edits files. Less of an editor, more of a pair programmer that lives in iTerm.

Setup for WordPress: Install Claude Code. In your WordPress project root, create a CLAUDE.md file — this is the equivalent of .cursorrules but Claude Code reads it on every session start. Same content as the Cursor rules above works fine.

The real unlock is connecting WP-CLI. Claude Code can run wp plugin list, wp post list, wp db query, wp option get — meaning it can actually inspect your live WordPress install rather than guessing from code. This changes the kind of tasks you can delegate to it.
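A few illustrative WP-CLI invocations of the kind the agent can run on its own (all real commands; the specific queries are examples, and wp db query assumes the default wp_ table prefix):

```shell
# Inspect the live install instead of guessing from code:
wp plugin list --status=active
wp option get siteurl
wp post list --post_type=post --format=count
wp db query "SELECT option_name FROM wp_options WHERE autoload='yes' LIMIT 5"
```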

For deeper configuration, see How to set up Claude Code for a WordPress project.

What it’s good at: Anything that touches multiple files or requires iteration. Refactoring a plugin from procedural to OOP. Bulk-updating deprecated function calls. Running a security audit by combining file reads with WP-CLI queries. Writing tests where the agent can run PHPUnit and respond to failures.

Where it fails: Quick one-liner edits. The startup time and token cost don’t make sense for “rename this variable.” Use Cursor or your editor for those.

Real example: I had a 12-year-old client plugin using mysql_* functions, no nonces, output not escaped. I told Claude Code: “Audit this plugin for WordPress security and modernization issues. Produce a list of findings, then fix them in order, running the test suite after each fix.” It produced a 14-item list, fixed them sequentially, and ran the (admittedly thin) tests. About 90 minutes of agent time. Equivalent manual work would have been a day and a half.

GitHub Copilot

What it is: Microsoft/GitHub’s autocomplete-style AI, integrated into VS Code, JetBrains, Neovim, and others.

Setup for WordPress: Nothing WordPress-specific. Install the extension, sign in.

What it’s good at: Inline autocomplete while you type. If you write function my_plugin_register_ and pause, it’ll often complete the function name and body sensibly. Useful as a typing assistant.

Where it fails: It doesn’t reason across your codebase the way Cursor or Claude Code do. It doesn’t read a .cursorrules equivalent. It will happily generate WordPress code without escaping, without nonces, without translation functions. For anything beyond autocomplete, it’s the weakest of the four for WordPress work.

I keep it installed because it’s cheap and the autocomplete is genuinely fast, but I don’t rely on it for anything meaningful.

Windsurf

What it is: Cursor’s main competitor. Similar editor-based experience with agentic features. Owned by Codeium, then acquired, then re-spun — the company history is messy but the product is solid.

Setup for WordPress: Same as Cursor. It supports a similar rules file (.windsurfrules).

What it’s good at: Most things Cursor is good at. Pricing has historically been more generous on the free tier, and some developers prefer its UI.

Where it fails: Smaller community, fewer third-party tutorials. If you hit an edge case, you’ll find more StackOverflow help for Cursor.

I’d suggest trying it if Cursor’s pricing or limits become a problem, but I wouldn’t lead with it.


Head-to-head: same task, four tools

I gave each tool the same task: “Build a custom Gutenberg block that displays the 5 most recent posts from a selected category. Include a category selector in the block sidebar.”

Brief, deliberately. The kind of one-liner request a developer would actually give.

Cursor generated a complete block.json, edit.js with InspectorControls and a SelectControl, save.js using server-side rendering, and a PHP render callback. It used wp.data.useSelect to fetch categories. It missed registering the block via register_block_type initially — I had to ask for that. Total time: ~6 minutes including corrections.
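The step it missed is a one-liner on the PHP side. A sketch, assuming block.json metadata lives in a build directory (the path and block folder name are illustrative):

```php
<?php
// Register the block from its block.json metadata on init.
add_action( 'init', function () {
    register_block_type( __DIR__ . '/build/recent-category-posts' );
} );
```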

Claude Code asked one clarifying question first (“should the category selector pull all categories or only ones with posts?”), then generated the same set of files plus a package.json with the right @wordpress/scripts dependencies and ran npm install. It tested the build. Total time: ~12 minutes, but it was actually working at the end.

GitHub Copilot doesn’t really do this kind of task — it’s autocomplete. I gave up after trying to coax it through file by file.

Windsurf produced output very similar to Cursor’s, slightly better organized but missed the same register_block_type step.

Verdict: Cursor and Windsurf are roughly equivalent. Claude Code does more, takes longer, and the result is closer to shippable. Copilot is the wrong tool for this kind of multi-file generative task.

For the full benchmark with screenshots, see Cursor vs Claude Code vs Copilot for WordPress: hands-on comparison.


Prompt patterns that work for WordPress

Generic prompts produce generic code. WordPress-specific prompts produce WordPress-specific code. Here are the patterns I reuse constantly.

The “WP context primer”

Paste this at the start of any session where you’ll be generating WordPress code:

You’re working on a WordPress project. All generated code must follow WordPress coding standards and security practices: escape all output (esc_html, esc_attr, esc_url, wp_kses as appropriate), sanitize all input, use nonces for forms, use translation functions for user-facing strings, and use $wpdb->prepare for any direct database queries. Do not invent hook names — if you’re not sure a hook exists, ask before using it.

Even with .cursorrules configured, restating this at the start of a complex session improves output quality noticeably.

The “hook-aware” refactor

Refactor [function/file] to use the appropriate WordPress hooks instead of [direct calls / output / whatever]. Before writing code, list the hooks you plan to use and explain why each is the right choice (action vs filter, priority, expected arguments). Wait for me to confirm before generating code.

The “wait for me to confirm” line is the important part. It catches invented hooks before they end up in your codebase.

The “security review” prompt

Review this code for WordPress security issues. Specifically check for: missing output escaping, missing input sanitization, missing nonces on forms or AJAX handlers, direct database queries without prepare(), use of deprecated functions, and missing capability checks (current_user_can) on privileged operations. List each issue with severity (critical/high/medium/low) and the fix.

Run this on every piece of AI-generated WordPress code before committing. Yes, even when you’re sure it’s fine.

The “plugin scaffold”

Generate a WordPress plugin scaffold for a plugin called “[name]”. Include the main plugin file with proper header, an autoloader for classes, a uninstall.php that cleans up options, a readme.txt template, and folder structure (includes/, admin/, public/, languages/). Use object-oriented PHP with namespaces. Do not implement business logic yet — just the scaffold.

Splitting “scaffold” from “logic” produces much cleaner starting points than asking for the whole plugin in one shot.

The “explain this codebase” prompt

I just took over this WordPress plugin/theme. Read through the code and produce: (1) a summary of what it does, (2) the main entry points and how they’re triggered, (3) any concerning code patterns (security issues, deprecated functions, performance problems), (4) the dependencies it relies on. Don’t change anything yet.

This is a Claude Code prompt specifically — it works because Claude Code can actually read all the files. It’s saved me hours on legacy client takeovers.

For the full library, see AI prompts that actually work for WordPress developers.


Common failure modes and how to catch them

Specific things AI tools get wrong on WordPress, in roughly the order of how often I see them:

Inventing hook names. Mentioned above. Catch it by grepping the WordPress core or plugin source for the hook name before relying on it. Or paste the AI’s code into Claude Code and ask “verify all hook names used in this code exist.”

Forgetting nonces on forms and AJAX. AI tools generate the form HTML and the handler, but skip the nonce field and the check_admin_referer call. Always grep your generated code for nonce before shipping.

Using deprecated database functions. Less common in 2026 than it was, but Copilot especially still generates mysql_query if it’s in its training data near WordPress code. Ban this in your rules file.

Missing escape functions on output. The most common security issue. AI generates echo $variable; without esc_html(). Your security review prompt should catch this on every PR.

Skipping translation functions. Strings get hardcoded instead of wrapped in __(). Easy to fix in bulk later but easier to prompt for upfront.

Confusing actions and filters. Filter callbacks that don’t return a value. Action callbacks that try to return one. PHP doesn’t error — the bug is silent.

Mixing block editor and classic editor patterns. AI sometimes generates a metabox using the classic editor’s metabox API when you’re working on a block-editor-only site, or vice versa. Specify which editor you’re targeting in your prompt.

Bypassing capability checks. Code that performs a privileged action without checking current_user_can(). AI tools assume the user calling the function is authorized. They often aren’t.
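The nonce and capability failure modes usually show up together in AJAX handlers, so it’s worth internalizing the shape of a correct one. A hypothetical sketch (the `myplugin_` action and field names are illustrative; the API calls are real):

```php
<?php
// Hypothetical AJAX handler: both checks are needed, and AI-generated
// code routinely omits one or both.
add_action( 'wp_ajax_myplugin_delete_item', function () {
    check_ajax_referer( 'myplugin_delete', 'nonce' ); // did this request come from our page?

    if ( ! current_user_can( 'delete_posts' ) ) {     // is this user allowed to do it?
        wp_send_json_error( 'Insufficient permissions', 403 );
    }

    $id = absint( $_POST['item_id'] ?? 0 );
    wp_delete_post( $id, true );
    wp_send_json_success();
} );
```

A nonce proves the request came from your form; a capability check proves the user may perform the action. Neither substitutes for the other.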

For a deeper dive on auditing AI-generated WordPress code, see How to use AI to audit a WordPress theme for security issues.


A real workflow — how I actually use AI on WordPress projects

This is what a typical day looks like. Specific so you can copy what works.

Morning: planning. I open Claude Code in the project I’m working on. First prompt is usually “Read the project, summarize where we left off based on the most recent commits, and list the open todos in CLAUDE.md.” Two minutes, and I have my context back without having to remember.

Mid-morning: feature work. Switch to Cursor for the actual implementation. The pattern I use is: write a one-paragraph description of the feature in a markdown file at the top of my project (PLAN.md), then Cmd+K on each function I want generated, referencing that plan. Cursor’s inline edits are fast enough that I’m not waiting on the AI — I’m reviewing its output as fast as it produces it.

Afternoon: debugging. When something breaks, the prompt I use most is “Here’s the code, here’s the error, here’s what I expected to happen. What’s the most likely cause? Don’t fix it yet — explain first.” The “explain first” matters because AI tools love to fix bugs by adding code, not by understanding them. For a longer treatment, see Debugging WordPress white screen of death with AI assistance.

End of day: review. Before committing anything AI-generated, I run the security review prompt above and skim the diff. Any AI-generated code that I haven’t read line by line doesn’t get committed. This is non-negotiable for me and should be for you.

If you want a worked example of building something end-to-end this way, see Writing a WordPress plugin from scratch with Cursor.


When NOT to use AI for WordPress

Counterintuitive section, but it’s the most useful one if you’re early in your AI workflow.

Quick CSS edits. Opening a chat, describing what you want, reviewing the diff — overhead. Just edit the file.

Anything touching authentication or user roles, without a senior reviewing. AI will generate plausible-looking auth code that has subtle bugs. The cost of getting auth wrong is too high.

Database migrations on production. AI can write the migration. AI cannot understand the consequences of running it on your specific data. Run migrations manually on staging first, every single time, AI or not.

The first time you’re learning a concept. If you’ve never written a Gutenberg block before, don’t ask AI to write your first one. You’ll skip understanding the architecture and you’ll be helpless when something breaks. Build one manually, then use AI for the second.

One-line prompts where you don’t know what you want. “Make this better” produces nothing useful. If you can’t write a one-paragraph description of what you want, you’re not ready to ask the AI yet.


FAQ

Can AI write WordPress plugins? Yes — Cursor and Claude Code can both generate functional WordPress plugins from a description. Quality depends entirely on how specific your prompt is and whether you’ve configured the tool with WordPress-aware rules. Expect to review and correct security issues (nonces, escaping, capability checks) before shipping.

Is Cursor or Claude Code better for WordPress? Cursor is better for day-to-day editing and small features. Claude Code is better for multi-file refactors, codebase audits, and anything that benefits from running commands (WP-CLI, npm, PHPUnit). Most WordPress developers should use both — Cursor as the default editor, Claude Code for bigger jobs.

Does GitHub Copilot work with WordPress? It works in the sense that it’ll autocomplete WordPress code as you type. It doesn’t reason about your codebase or follow WordPress conventions automatically. As of 2026, it’s the weakest of the major AI coding tools for WordPress-specific work, though it’s perfectly fine as a typing assistant alongside another tool.

Can AI tools handle WooCommerce code? Yes, with caveats. WooCommerce has hundreds of hooks and filters, and AI tools confuse or invent them more often than with core WordPress hooks. Always verify hook names against the WooCommerce documentation before relying on AI-generated WooCommerce code. The “explain first, code second” pattern catches most of these errors.

Is it safe to use AI-generated code on a live WordPress site? Only if you’ve reviewed it. AI-generated WordPress code routinely misses escape functions, nonces, and capability checks. Run the security review prompt on every piece of generated code, test on staging, and never commit code you haven’t read line by line. This is true of any code, but the temptation to skip review is higher with AI.

What about AI for Gutenberg block development? Cursor and Claude Code both handle Gutenberg blocks well. The key is to be explicit in your prompt about which APIs to use (@wordpress/scripts, @wordpress/blocks, server-side rendering vs save.js). The head-to-head section above shows the kind of output you can expect.

Will AI replace WordPress developers? No. It changes what WordPress developers spend time on. Less time typing boilerplate, more time on architecture, security review, debugging weird production issues, and client communication. Developers who treat AI as a typing assistant get marginal gains. Developers who learn to delegate larger tasks while reviewing carefully get substantial ones.


Conclusion

The TL;DR hasn’t changed since the top of this article: use Cursor for daily WordPress work, add Claude Code for bigger jobs, skip Copilot unless you already pay for it, try Windsurf if Cursor frustrates you. Configure your rules file. Write specific prompts. Review every line before committing. Never trust AI on auth, migrations, or first-time learning.

The bigger point — the one the rest of the AI-coding internet hasn’t caught up to — is that WordPress is its own thing. Generic AI coding advice doesn’t translate. The hooks system, the security conventions, the WooCommerce ecosystem, the difference between block editor and classic editor: all of it matters, and tools that don’t account for it produce subtly wrong code that ships subtly wrong bugs.

The tools are good enough now that a careful WordPress developer using AI well will out-ship a developer not using it. Be the careful one.