In the previous post, we walked through a complete QA workflow with aqua. You saw how a plan is structured, executed, and reused across environments. But we glossed over a critical question: what makes a QA plan actually good?
A plan that passes isn’t necessarily effective. A plan with 50 assertions isn’t necessarily better than one with 5. And a plan that’s hard to read is a plan that nobody trusts when it fails.
In this post, we’ll look at practical tips and patterns for writing QA plans that catch real bugs, stay maintainable, and give you confidence when they pass.
## Organize Scenarios by User Intent, Not by Implementation
The most common mistake in QA plan design is organizing scenarios around technical boundaries — one for the API, one for the database, one for the UI. This mirrors how developers think about code, but it doesn’t mirror how users experience the product.
Instead, organize scenarios around what a user is trying to accomplish.
Consider a feature where users can invite team members by email. Here’s what an implementation-focused plan might look like:
```json
{
  "scenarios": [
    { "name": "API: POST /invitations" },
    { "name": "API: GET /invitations" },
    { "name": "Browser: Invitation form" },
    { "name": "Browser: Invitation list page" }
  ]
}
```
And here’s the same feature organized by user intent:
```json
{
  "scenarios": [
    { "name": "Send an invitation and verify it appears in the list" },
    { "name": "Resend an expired invitation" },
    { "name": "Revoke a pending invitation" },
    { "name": "Accept an invitation and join the team" }
  ]
}
```
The second version tells you exactly what’s being tested and — more importantly — what’s missing. You can immediately see whether edge cases like “invite an email that’s already a member” are covered. You can’t see that in the first version because the scenarios are organized around endpoints, not behavior.
Each scenario can still mix API calls and browser steps. In fact, the best scenarios often do — using the API for fast data setup and the browser for verifying what the user actually sees.
## Write Assertions That Explain Why They Exist
Every assertion should answer the question: “What bug would this catch?” If you can’t answer that, the assertion probably isn’t pulling its weight.
aqua supports a `description` field on assertions. Use it, not to describe what the assertion checks (the `type` and expected value already do that), but to explain why it matters.
Weak assertions:
```json
{
  "assertions": [
    { "type": "status_code", "expected": 200 },
    { "type": "json_path", "path": "$.id", "condition": "exists" },
    { "type": "json_path", "path": "$.name", "condition": "exists" },
    { "type": "json_path", "path": "$.email", "condition": "exists" },
    { "type": "json_path", "path": "$.created_at", "condition": "exists" },
    { "type": "json_path", "path": "$.role", "condition": "exists" }
  ]
}
```
This checks that fields exist, but it doesn’t verify their values. A response where every field is null would pass. These assertions give you a false sense of security.
Strong assertions:
```json
{
  "assertions": [
    {
      "type": "status_code",
      "expected": 201,
      "description": "Returns 201 Created, not 200 — important for client-side cache behavior"
    },
    {
      "type": "json_path",
      "path": "$.email",
      "expected": "alice@example.com",
      "description": "Email matches the input — catches normalization bugs (lowercase, trim)"
    },
    {
      "type": "json_path",
      "path": "$.role",
      "expected": "member",
      "description": "Default role is 'member', not 'admin' — security-critical"
    }
  ]
}
```
Notice the difference: fewer assertions, but each one checks a specific value and explains what it’s protecting against. The `description` fields serve double duty — they document the intent for future readers, and they appear in execution results so you immediately understand why a failure matters.
A useful rule of thumb: if two assertions would fail for the same bug, keep only the more specific one.
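As a concrete illustration of that rule, consider this pair (a sketch using the assertion shapes shown above):

```json
{
  "assertions": [
    { "type": "json_path", "path": "$.role", "condition": "exists" },
    { "type": "json_path", "path": "$.role", "expected": "member" }
  ]
}
```

Any bug that fails the first assertion (a missing `$.role`) also fails the second, but only the second catches `$.role` coming back as `"admin"`. The existence check adds noise without adding coverage, so drop it and keep the value check.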
## Use `extract` to Build Data-Driven Workflows

The `extract` feature is one of aqua’s most powerful tools for writing realistic tests. Instead of hardcoding IDs or values, extract them from earlier responses and use them in subsequent steps.
```json
{
  "step_key": "create_project",
  "action": "http_request",
  "config": {
    "method": "POST",
    "url": "{{api_base_url}}/api/projects",
    "headers": {
      "Authorization": "Bearer {{auth_token}}",
      "Content-Type": "application/json"
    },
    "body": { "name": "QA Test Project" }
  },
  "assertions": [
    { "type": "status_code", "expected": 201 }
  ],
  "extract": {
    "project_id": "$.project.id",
    "project_slug": "$.project.slug"
  }
}
```
Now `{{project_id}}` and `{{project_slug}}` are available to every subsequent step in every scenario. This makes your plans resilient — they work regardless of what IDs the server generates.
A key detail: extracted variables are global across scenarios. This means you can set up data in an early scenario and reference it in later ones. Use this intentionally to create a natural flow:
- Scenario 1: “Create a project” — extracts `project_id`
- Scenario 2: “Add tasks to the project” — uses `{{project_id}}`, extracts `task_id`
- Scenario 3: “Complete a task and verify project progress” — uses both
This mirrors how a real user would interact with the app — building on previous actions rather than starting from scratch each time.
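In outline, that chain might look like the following. The endpoints and JSON paths here are illustrative; what matters is the pattern of each scenario extracting a value the next one references:

```json
{
  "scenarios": [
    {
      "name": "Create a project",
      "steps": [
        {
          "step_key": "create_project",
          "action": "http_request",
          "config": { "method": "POST", "url": "{{api_base_url}}/api/projects" },
          "extract": { "project_id": "$.project.id" }
        }
      ]
    },
    {
      "name": "Add tasks to the project",
      "steps": [
        {
          "step_key": "create_task",
          "action": "http_request",
          "config": { "method": "POST", "url": "{{api_base_url}}/api/projects/{{project_id}}/tasks" },
          "extract": { "task_id": "$.task.id" }
        }
      ]
    },
    {
      "name": "Complete a task and verify project progress",
      "steps": [
        {
          "step_key": "complete_task",
          "action": "http_request",
          "config": { "method": "POST", "url": "{{api_base_url}}/api/tasks/{{task_id}}/complete" }
        }
      ]
    }
  ]
}
```

The trade-off is coupling: if the first scenario fails, the later ones lose their inputs. That is usually acceptable, because a broken “create” genuinely does block everything downstream for a real user too.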
## Design Dependencies for Clarity, Not Just Ordering

`depends_on` controls which steps run after which. But it also communicates something to anyone reading the plan: this step can’t make sense without the other one completing first.

Use `depends_on` to express logical dependencies, not just temporal ones. If step B happens to run after step A but doesn’t actually need A’s output, don’t add a dependency. Keep the dependency graph minimal so that when a step is skipped due to a failed dependency, the skip is meaningful — it means “this test can’t run because its prerequisite failed,” not “this test was arbitrarily chained to something unrelated.”
```json
{
  "steps": [
    {
      "step_key": "login",
      "action": "http_request",
      "config": { "method": "POST", "url": "{{api_base_url}}/api/auth/login" },
      "extract": { "auth_token": "$.token" }
    },
    {
      "step_key": "create_item",
      "depends_on": ["login"],
      "action": "http_request",
      "config": {
        "method": "POST",
        "url": "{{api_base_url}}/api/items",
        "headers": { "Authorization": "Bearer {{auth_token}}" }
      },
      "extract": { "item_id": "$.item.id" }
    },
    {
      "step_key": "update_item",
      "depends_on": ["create_item"],
      "action": "http_request",
      "config": {
        "method": "PATCH",
        "url": "{{api_base_url}}/api/items/{{item_id}}"
      }
    },
    {
      "step_key": "delete_item",
      "depends_on": ["create_item"],
      "action": "http_request",
      "config": {
        "method": "DELETE",
        "url": "{{api_base_url}}/api/items/{{item_id}}"
      }
    }
  ]
}
```
Here, both `update_item` and `delete_item` depend on `create_item` but not on each other. This is intentional — they test independent operations on the same resource. If `create_item` fails, both are skipped (correctly). But a failure in `update_item` doesn’t affect `delete_item`.
## Use `requires` to Make Plans Portable

The `requires` field on scenarios lets you declare which variables a scenario needs to run. If any are missing, the scenario is skipped — not failed. This is a subtle but important distinction.
```json
{
  "name": "Admin: Manage user roles",
  "requires": ["admin_email", "admin_password"],
  "steps": [...]
}
```
This scenario only runs in environments that have admin credentials configured. In a local environment without admin access, it’s cleanly skipped. In staging with admin credentials available, it runs.
This makes the same plan useful across environments with different capabilities — a local dev setup might not have admin access or a mail server, but staging does. Rather than maintaining separate plans for each environment, write one plan and use `requires` to let scenarios self-select.
A good pattern: group your scenarios from least to most privilege. Basic user flows first, then scenarios that require elevated access. This way, even in constrained environments, you get useful coverage from the early scenarios.
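A privilege-ordered scenario list might look like this. The scenario names and variable names are illustrative; the point is that `requires` grows stricter as you move down the list:

```json
{
  "scenarios": [
    { "name": "Browse the public catalog" },
    { "name": "Create and edit a project", "requires": ["user_email", "user_password"] },
    { "name": "Manage team billing", "requires": ["billing_email", "billing_password"] },
    { "name": "Admin: Manage user roles", "requires": ["admin_email", "admin_password"] }
  ]
}
```

In a minimal environment only the first scenario runs; each additional credential you configure unlocks the next tier, and a run report shows skips clustered at the bottom rather than scattered throughout.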
## Mix API and Browser Steps in a Single Scenario
One of aqua’s strengths is that API and browser steps coexist naturally within a single scenario with dependencies flowing across both. Use this to write more realistic and efficient tests.
A common pattern is API setup + browser verification:
```json
{
  "name": "Notification appears when task is assigned",
  "steps": [
    {
      "step_key": "assign_task_via_api",
      "action": "http_request",
      "config": {
        "method": "POST",
        "url": "{{api_base_url}}/api/tasks/{{task_id}}/assign",
        "headers": {
          "Authorization": "Bearer {{auth_token}}",
          "Content-Type": "application/json"
        },
        "body": { "assignee_id": "{{user_id}}" }
      },
      "assertions": [
        { "type": "status_code", "expected": 200 }
      ]
    },
    {
      "step_key": "check_notification_in_ui",
      "action": "browser",
      "depends_on": ["assign_task_via_api"],
      "config": {
        "steps": [
          { "goto": "{{web_base_url}}/notifications" },
          { "wait_for_selector": "text=You were assigned" },
          { "screenshot": "task_assignment_notification" }
        ]
      },
      "assertions": [
        {
          "type": "element_visible",
          "selector": "[data-testid='notification-item']",
          "description": "Notification appears in the notification center"
        },
        {
          "type": "element_text",
          "selector": "[data-testid='notification-item']",
          "contains": "assigned",
          "description": "Notification text mentions the assignment"
        }
      ]
    }
  ]
}
```
The API call triggers the action quickly and reliably (no need to navigate to the right page, fill out a form, etc.), while the browser step verifies the user-facing result. This is faster than doing everything through the browser, and it tests the actual integration between backend and frontend.
The reverse pattern — browser action + API verification — is equally useful:
```json
{
  "steps": [
    {
      "step_key": "submit_form_in_browser",
      "action": "browser",
      "config": {
        "steps": [
          { "goto": "{{web_base_url}}/settings/profile" },
          { "type": { "selector": "input[name='display_name']", "text": "New Name" } },
          { "click": "button[type='submit']" },
          { "wait_for_selector": "text=Saved" }
        ]
      }
    },
    {
      "step_key": "verify_via_api",
      "action": "http_request",
      "depends_on": ["submit_form_in_browser"],
      "config": {
        "method": "GET",
        "url": "{{api_base_url}}/api/users/me",
        "headers": { "Authorization": "Bearer {{auth_token}}" }
      },
      "assertions": [
        {
          "type": "json_path",
          "path": "$.user.display_name",
          "expected": "New Name",
          "description": "Form submission persisted to the database"
        }
      ]
    }
  ]
}
```
Here, the browser step exercises the real UI (including JavaScript validation, form serialization, etc.), and the API step confirms the data was actually persisted — not just displayed back optimistically.
## Use Conditions for Variant Testing

The `condition` field lets a step run only when a variable matches a specific value. This is useful for plans that adapt to different configurations.
```json
{
  "step_key": "verify_mfa_prompt",
  "action": "browser",
  "condition": {
    "type": "variable_equals",
    "variable": "mfa_enabled",
    "value": "true"
  },
  "config": {
    "steps": [
      { "wait_for_selector": "input[name='totp_code']" },
      { "type": { "selector": "input[name='totp_code']", "text": "{{totp:mfa_secret}}" } },
      { "click": "button[type='submit']" }
    ]
  }
}
```
This step only runs when `mfa_enabled` is `"true"` in the environment. In environments without MFA configured, it’s skipped. Combined with `requires`, conditions let you write a single plan that covers multiple configurations without branching into separate plans.
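Putting the two mechanisms together, a login scenario might look like the following sketch. The credential variable names, login URL, and form selectors are illustrative; `requires` gates the whole scenario on having credentials at all, while `condition` gates only the MFA step:

```json
{
  "name": "Login with optional MFA",
  "requires": ["user_email", "user_password"],
  "steps": [
    {
      "step_key": "open_login",
      "action": "browser",
      "config": {
        "steps": [
          { "goto": "{{web_base_url}}/login" },
          { "type": { "selector": "input[name='email']", "text": "{{user_email}}" } },
          { "type": { "selector": "input[name='password']", "text": "{{user_password}}" } },
          { "click": "button[type='submit']" }
        ]
      }
    },
    {
      "step_key": "verify_mfa_prompt",
      "action": "browser",
      "depends_on": ["open_login"],
      "condition": {
        "type": "variable_equals",
        "variable": "mfa_enabled",
        "value": "true"
      },
      "config": {
        "steps": [
          { "wait_for_selector": "input[name='totp_code']" },
          { "type": { "selector": "input[name='totp_code']", "text": "{{totp:mfa_secret}}" } },
          { "click": "button[type='submit']" }
        ]
      }
    }
  ]
}
```

An environment with no user credentials skips the scenario entirely; one with credentials but no MFA skips only the second step. Both outcomes are reported as skips, not failures.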
## Use Common Scenarios for Repeated Flows
If you find yourself copying the same login steps across multiple plans, extract them into a common scenario. Common scenarios are reusable templates — define them once, then include them in any plan.
The steps are snapshot-copied when a plan version is created, so changes to the common scenario don’t silently alter existing plans. This gives you reusability without surprise.
Good candidates for common scenarios:
- Authentication flows — login, token refresh, MFA handling
- Data setup — creating a user, project, or other prerequisite entity
- Cleanup routines — deleting test data after a run
Keep common scenarios focused on a single responsibility. A “login and create project and invite user” common scenario is doing too much — split it into separate ones that can be composed as needed.
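A focused common scenario can be very small. As a sketch (the exact mechanism for registering and including common scenarios depends on your aqua setup, and the login endpoint and credential variables here are illustrative), a single-responsibility login template might contain no more than:

```json
{
  "name": "Login",
  "requires": ["user_email", "user_password"],
  "steps": [
    {
      "step_key": "login",
      "action": "http_request",
      "config": {
        "method": "POST",
        "url": "{{api_base_url}}/api/auth/login",
        "headers": { "Content-Type": "application/json" },
        "body": { "email": "{{user_email}}", "password": "{{user_password}}" }
      },
      "assertions": [
        { "type": "status_code", "expected": 200 }
      ],
      "extract": { "auth_token": "$.token" }
    }
  ]
}
```

Because it extracts `auth_token` and nothing else, any plan that includes it gets exactly one thing: an authenticated session. Project setup and invitations stay in their own templates, composed only where a plan actually needs them.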
## Practical Checklist
Before finalizing a QA plan, run through this checklist:
- Does each scenario name describe a user goal? Not an endpoint or component.
- Does each assertion have a clear bug it would catch? Remove assertions that only check for existence without verifying values.
- Are extracted values used in later steps? If you extract something but never reference it, remove the extraction.
- Do dependencies reflect real data flow? Not just “step 2 runs after step 1” but “step 2 needs step 1’s output.”
- Would this plan work in a different environment? Use `requires` and template variables to avoid hardcoded URLs and credentials.
- Can someone unfamiliar with the feature read this plan and understand what’s being tested? If not, add descriptions.
## What’s Next
A well-written plan is only as good as the secrets and environment configuration backing it. In the next post, we’ll dive into managing secrets in QA without compromising security — including integration with 1Password, AWS Secrets Manager, and aqua’s automatic masking.