AI Agent: Mark Flaky Tests
Our new AI-powered agent automatically detects flaky tests and assigns them a "Flaky" label based on test execution analytics. This helps teams quickly identify unstable tests that require attention and improves visibility into test reliability across projects.
You can customize flaky detection thresholds and conditions in your project’s Analytics Settings for greater control and accuracy.
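The underlying idea is straightforward: a test that sometimes passes and sometimes fails over a recent window of runs is a flakiness candidate. Here is a minimal sketch of that heuristic, assuming a simplified execution-history shape; the type, field names, and default thresholds are illustrative, not Testomat.io's actual API:

```ts
// Hypothetical shape of a single recorded test execution (assumed for this sketch).
type Execution = { testId: string; status: 'passed' | 'failed' };

function isFlaky(
  history: Execution[],
  minRuns = 10,       // illustrative threshold: require enough data to judge
  minFailRate = 0.1,  // illustrative lower bound
  maxFailRate = 0.9,  // illustrative upper bound
): boolean {
  if (history.length < minRuns) return false; // too few runs to call it flaky
  const fails = history.filter(e => e.status === 'failed').length;
  const failRate = fails / history.length;
  // Flaky = fails sometimes but not always: the rate sits between the thresholds.
  return failRate >= minFailRate && failRate <= maxFailRate;
}
```

Tuning `minFailRate` and `maxFailRate` in the Analytics Settings is what separates a genuinely flaky test from one that is simply broken (always failing) or healthy (always passing).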
Use cases:
- Quickly isolate unreliable tests that frequently fail intermittently and impact CI pipelines.
- Improve test quality over time by tracking and addressing flaky tests more systematically.
- Streamline triage processes by flagging flaky tests with a consistent label used across dashboards and reports.
AI Agent: Mark Failed Tests
This AI-driven agent automatically identifies consistently failing tests and assigns them a "Failed" label. It analyzes the last month of test execution data to detect tests that have failed 100% of the time, helping teams focus on the most critical issues first.
You can review and act on these labeled tests directly from analytics or test repository views.
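For reference, the detection rule described above (every recorded run in the last 30 days failed) can be sketched in a few lines; the `Run` type and field names below are assumptions for illustration, not the product's internal code:

```ts
// Hypothetical shape of a recorded run (assumed for this sketch).
type Run = { status: 'passed' | 'failed' | 'skipped'; finishedAt: Date };

function isConsistentlyFailing(runs: Run[], now = new Date()): boolean {
  // Look back 30 days, ignoring skipped runs, which carry no pass/fail signal.
  const windowStart = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);
  const recent = runs.filter(r => r.finishedAt >= windowStart && r.status !== 'skipped');
  // Label only when there is data and every run in the window failed.
  return recent.length > 0 && recent.every(r => r.status === 'failed');
}
```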
Use cases:
- Prioritize test fixes by highlighting tests that fail consistently and block releases.
- Improve test suite stability by quickly spotting long-term failures across large projects.
- Enable smart filtering in dashboards or test plans using the "Failed" label.
Chat with Tests for a Folder
You can now use Chat with Tests to analyze the content of any specific folder within your project. The AI will examine all nested suites and tests under the selected folder, providing targeted insights, summaries, and test suggestions for that specific section of your test repository.
This enables more focused analysis and planning when working with large or modular projects.
New Pre-Configured Prompts for Chat with Tests
We've expanded the Chat with Tests AI feature with a set of pre-configured prompts to speed up your work and get instant insights from your test base. The new options include:
- Summarize this project – Quickly generate a high-level overview of the project, including tested areas and suite structure, and understand the scope of each feature and its corresponding test coverage.
- Suggest test cases for a specific feature – Instantly generate relevant test cases based on a given feature name or description.
- Create a plan with 30 test cases for testing – Let the AI build a structured test plan with a detailed list of test cases tailored to your context.
These enhancements turn Chat with Tests into a smart assistant for planning, onboarding, and test gap analysis.
Run Summary Overview
We’ve introduced a Run Summary feature that provides a concise overview for every completed test run. This summary appears automatically once the run is finished and includes key details such as status breakdown, test duration, and overall test outcomes — helping teams quickly assess run results at a glance.
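As a rough illustration of what such a summary aggregates, here is a sketch of a status breakdown and duration roll-up; the `Result` shape and field names are assumed for the example, not the actual summary schema:

```ts
// Hypothetical shape of one test result within a run (assumed for this sketch).
type Result = { status: 'passed' | 'failed' | 'skipped'; durationMs: number };

function summarizeRun(results: Result[]) {
  const breakdown: Record<string, number> = {};
  let totalDurationMs = 0;
  for (const r of results) {
    // Count results per status and accumulate the total duration.
    breakdown[r.status] = (breakdown[r.status] ?? 0) + 1;
    totalDurationMs += r.durationMs;
  }
  const passed = breakdown['passed'] ?? 0;
  return {
    breakdown,        // e.g. { passed: 42, failed: 3, skipped: 1 }
    totalDurationMs,
    passRate: results.length ? passed / results.length : 0,
  };
}
```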
Expanded Drag-n-Drop for Tests, Suites, and Folders
We’ve enhanced the drag-n-drop functionality to make organizing your test structure more intuitive and efficient. You can now:
- Reorder tests directly within a suite using drag-n-drop in the side view.
- Move tests between suites by dragging them from an opened suite and dropping them into another suite in the project tree on the left panel.
This update streamlines test suite management, especially in larger projects with complex structures.
Fixes and Improvements 🛠️
- Optimized data processing for Requirements
- Improved OAuth handling for Jira
- Improved UI for the Company members page - updated styles
- Fixed test counter update after Test Plan editing - no page refresh needed
- Improved the `IN` operator for TQL - no errors when searching with `IN` on `issue`, `state`, `status`, `created_by`, etc.
- Improved handling of test structure in manual runs after adding/deleting tests and users
- Added `substatus` parameter to the reporter (https://github.com/testomatio/reporter/issues/541)