
Project Memory

Project memory is a Markdown document stored per project on the aqua server. It serves as a persistent knowledge base that accumulates insights from QA sessions, making them available to the AI agent and team members in future sessions.

As you run QA sessions, you learn things about the application under test — how authentication works, which selectors are reliable, what constraints exist, and so on. Project memory provides a place to record this knowledge so it is not lost between sessions. It fits into the QA workflow at several points:

  • Reads memory at session start. The agent automatically retrieves the project memory to understand the project context — known selectors, authentication flows, constraints, and past learnings.

  • Applies knowledge during planning. The agent uses the information in project memory when building QA plans, choosing selectors, and configuring steps.

  • Saves new findings at session end. After a QA session, you can ask the agent to update the project memory with what it learned:

    “Save what you learned in this session to the project memory.”

    The agent will merge new insights (discovered selectors, working API patterns, timing considerations, etc.) into the existing memory.

  • Edit via the Web UI. Project memory is viewable and editable through the aqua Web UI. This is useful for manual corrections, adding information the agent doesn’t know, or pre-populating memory before starting a session.

  • Ask the agent to show it. You can ask the agent to retrieve the current project memory at any time:

    “Show me the project memory.”

  • Ask the agent to update specific sections. You can direct the agent to add, change, or remove specific content:

    “Add a note to the project memory that the /api/v2 endpoints require a Bearer token.”

    “Remove the old staging URL from the project memory.”
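For illustration, a request like the Bearer-token note above might produce an entry along these lines in the memory document (the section name and wording are hypothetical — the agent decides where the note belongs):

```markdown
## Known constraints

- The `/api/v2` endpoints require a `Bearer` token in the
  `Authorization` header.
```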

A well-maintained project memory typically includes the following sections:

  • Application architecture. Document the high-level structure of the application — frontend framework, backend services, API patterns, and how components relate to each other.

  • Key file locations. Note the locations of key files and directories that are relevant to testing — route definitions, API handlers, database schemas, and configuration files.

  • Authentication. Record how authentication works in the application — login endpoints, token formats, session management, and any test accounts available for different environments.

  • UI selectors. List reliable CSS selectors for frequently tested UI elements. Note any selectors that are unstable or environment-dependent.

  • Known constraints. Document known limitations, quirks, or edge cases — rate limits, required feature flags, environment-specific behaviors, or timing sensitivities.

  • Testing tips. Capture practical tips for writing tests against this application — common patterns that work well, pitfalls to avoid, and lessons learned from previous QA sessions.
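As a sketch, a project memory organized along these lines might look like the following. All names, paths, endpoints, and selectors here are illustrative placeholders, not values from any real project:

```markdown
# Project Memory: example-app

## Application architecture
- React single-page frontend; REST API served under `/api/v2`.

## Key file locations
- Route definitions: `src/routes/`
- API handlers: `server/handlers/`

## Authentication
- Login via `POST /api/v2/auth/login`; returns a Bearer token.
- Staging test account: `qa-user` (credentials in the team vault).

## UI selectors
- Login button: `[data-testid="login-submit"]` (stable)
- Search box: `#search` (unstable in the mobile layout)

## Known constraints
- Burst requests to the API may be rate-limited.

## Testing tips
- Wait for the loading spinner to disappear before asserting
  on search results.
```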

A few practices keep project memory useful over time:

  • Ask the agent to update after each session. At the end of a session, ask the agent to save what it learned. This keeps the memory current and useful.
  • Keep it concise. Focus on information that helps create and maintain tests. Avoid duplicating documentation that exists elsewhere.
  • Structure with headings. Use Markdown headings to organize content so the AI agent and team members can quickly find relevant information.
  • Remove stale information. As the application evolves, review and update project memory to remove outdated details — either through the Web UI or by asking the agent.