The new Empirical: AI agents for end-to-end testing
When I was working on the Playwright team at Microsoft, I met hundreds of developers and QA engineers who wanted to adopt automated end-to-end testing, but could not. They had the "will" to do it, but lacked the "skills" to make it work. Playwright significantly reduced the skillset required (e.g. with auto-waiting, which makes every action – like a page click – more reliable), but there's still a gap.
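To make that concrete, here is a minimal Playwright test (the app URL and labels are hypothetical). Every `fill` and `click` auto-waits for its target to be visible, enabled, and stable, so there are no manual sleeps or explicit waits:

```ts
import { test, expect } from '@playwright/test';

test('user can sign in', async ({ page }) => {
  await page.goto('https://example.com/login'); // hypothetical app

  // Playwright auto-waits for each element to be actionable
  // before interacting with it; no sleep() or waitFor calls.
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('correct-horse');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Web-first assertions also retry until they pass or time out.
  await expect(page.getByText('Welcome back')).toBeVisible();
});
```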
The "ROI" of end-to-end testing is always questioned.
- Cost of getting started (setting up the test repo)
- Cost of ongoing test coverage
- Cost of ongoing maintenance for existing tests
Reliable automated QA is crucial for velocity
Building software today comes with higher stakes than ever:
- Time-to-market is a growing pressure on engineering teams. If you are not shipping daily, you are most likely lagging behind your competition and the demands of your customers.
- As software eats the world, it becomes more "mission critical" for our users. They depend on software the way they depend on their cars or home appliances.
- Our apps are more complicated than ever: a simple user-facing action can depend on a web of internal and external services. Code generated with LLMs only adds to this complexity, which makes testing even more crucial.
Bridging the skill gap with codegen
This is where our AI agents come in: describe a user flow in plain language, and an agent drives a browser through that flow and generates a Playwright test your team can review, run, and own, as in the sketch below.
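As a sketch of the output (the storefront, selectors, and test name are hypothetical), a one-line description becomes an ordinary Playwright test that lives in your repo like any hand-written one:

```ts
// Description given to the agent:
//   "Add a pair of running shoes to the cart and check out as a guest."

import { test, expect } from '@playwright/test';

test('guest can buy running shoes', async ({ page }) => {
  await page.goto('https://shop.example.com');
  await page.getByRole('link', { name: 'Running shoes' }).click();
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await page.getByRole('link', { name: 'Cart' }).click();
  await page.getByRole('button', { name: 'Checkout as guest' }).click();

  // The agent also generates assertions for the expected outcome.
  await expect(
    page.getByRole('heading', { name: 'Order confirmed' })
  ).toBeVisible();
});
```

Because the result is plain Playwright code, your team can read, edit, and version it like any other test.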
Prioritizing outcomes
Autonomous AI agents are not fully there today. Just yesterday, we saw a case where our browsing agent was not able to choose the right dropdown on a page. Cases like this require human intervention: reviews and prompting. This is also changing: every new model makes agents better.
We prioritize delivering outcomes for your team over scalability. This means your team is shielded from the non-determinism of AI agents: when an agent gets something wrong, humans step in with reviews and prompting.
Get started
Teams like DPDZero are already using Empirical for their end-to-end testing. If you would like the same for your team, get in touch.