Connect specifications, automation suites, and results to improve test coverage.
The agent is trained to traverse test links in the knowledge graph, such as the relationships between requirements, test specifications, and execution outcomes. Using retrieval-augmented generation (RAG), it can answer questions like "Are all requirements for this feature tested?" or "Which test failed most often in the past three weeks?"
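A coverage query of this kind can be sketched as a traversal over requirement-to-test links. This is a minimal, hypothetical illustration: the dictionary-based graph, the `REQ-`/`TC-` identifiers, and the `coverage_report` helper are assumptions for demonstration, not the product's actual knowledge-graph schema.

```python
# Hypothetical requirement -> linked tests mapping and latest test results.
# The real knowledge graph is richer; this only sketches the traversal idea.
REQUIREMENTS = {
    "REQ-101": ["TC-1", "TC-2"],
    "REQ-102": ["TC-3"],
    "REQ-103": [],  # no linked test: a coverage gap
}
RESULTS = {"TC-1": "PASS", "TC-2": "FAIL", "TC-3": "PASS"}


def coverage_report(requirements, results):
    """Split requirements into covered (with per-test status) and gaps."""
    covered, gaps = {}, []
    for req, tests in requirements.items():
        if not tests:
            gaps.append(req)  # requirement has no linked test at all
        else:
            covered[req] = {t: results.get(t, "NOT RUN") for t in tests}
    return covered, gaps


covered, gaps = coverage_report(REQUIREMENTS, RESULTS)
print("Uncovered requirements:", gaps)
print("Status of covered requirements:", covered)
```

Answering "Are all requirements for this feature tested?" then reduces to checking whether `gaps` is empty.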
M4AI Agents can also suggest new test cases to close coverage gaps, or generate Robot Framework scripts from natural-language input or legacy manual test descriptions.
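To illustrate the shape of that output, the sketch below renders a manual test description as a skeletal Robot Framework test case. The step-to-keyword mapping here is deliberately naive (each step becomes a `Log` placeholder); the agent's actual generation is model-driven, and the `to_robot_case` helper and the sample steps are assumptions for illustration only.

```python
def to_robot_case(name, manual_steps):
    """Render a manual test (a name plus a list of step strings) as a
    minimal Robot Framework test case in plain-text .robot syntax."""
    lines = ["*** Test Cases ***", name]
    for step in manual_steps:
        # Placeholder keyword: in a generated script each step would map to
        # a real keyword (e.g. a SeleniumLibrary or API call).
        lines.append(f"    Log    TODO: {step}")
    return "\n".join(lines)


script = to_robot_case(
    "Login With Valid Credentials",
    ["Open the login page", "Enter valid credentials", "Verify dashboard loads"],
)
print(script)
```

The resulting text is a valid starting point that a tester (or the agent, in a refinement pass) can flesh out with concrete keywords.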
Practical Example: A QA lead preparing a major release asks the Test Navigator whether all new requirements for a module are covered. The agent lists the relevant tests and their status, and identifies gaps in automation, saving hours of manual effort.