
Episode 14: AI-Powered Testing Practices with Alex Martins
03/06/24 • 55 min
Welcome to episode 14 of Inside The Outer Loop! In this episode, the Curiosity Software team, Rich Jordan and Ben Johnson-Ward, are joined by Alex Martins, VP of Strategy at Katalon, to discuss the implications and challenges of AI-Powered Testing.
This episode goes beyond the hype and marketing euphoria surrounding AI to weigh up the productivity gains coming from GPT-4 and large language models (LLMs) in the software quality space. Guest Alex Martins leads the conversation around the need to put the tester at the centre of AI-powered testing, and only then to start building out AI use cases and safeguards.
While the development community has seen tangible gains from AI deployment, the uplift in AI-powered testing practices is just beginning. So, how will this impact software testing professionals? And how will subject matter expert (SME) knowledge evolve as organizations develop bespoke LLMs?
Ben Johnson-Ward argues that if artificial intelligence is used to create tests, then testers will have to evaluate the outputs of those tests to determine whether they are correct. This approach may reduce productivity, as testers spend time testing the output of AI-generated tests. Testers will be able to fine-tune their AI models and build out a broader toolkit, but what does this look like? As organizations adopt AI in testing, there will also be an impact on repeatability, explainability, and auditability.
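To make that review burden concrete, here is a minimal, hypothetical sketch; the function, test and expected value are invented for illustration rather than taken from the episode. Even a plausible AI-generated test is only useful once a human confirms the asserted expectation matches the real business rule:

# Hypothetical AI-generated pytest case: a tester must still verify it.
def apply_discount(price, percent):
    """Apply a percentage discount, rounded to 2 decimal places."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # The model asserted 90.00 here; before trusting the test, a tester
    # must confirm the spec really says 10% off 100.00 yields 90.00.
    assert apply_discount(100.00, 10) == 90.00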
With this in mind, internal AI committees can establish rules to reduce uncertainty. Rich Jordan follows up on Ben's point, explaining how, from the human perspective, AI may be limited in determining whether an application meets the needs of its users. In this use case, AI becomes the co-pilot, a new tool for experts to enhance collaboration, while testers remain the primary pilots. Repeatability is discussed as a characteristic humans are comfortable with in testing, but can AI offer better alternatives to traditional methods of monitoring code changes and integration flows?
AI-powered practices in software testing and test coverage are still in their early stages, and maturing them requires ongoing collaboration, learning, and sharing of experiences among organizations and industry professionals.
Finally, the possibilities and potential benefits of AI are too significant to ignore, despite the discomfort and challenges it brings in delivering quality software, faster.
Inside the outer loop – brought to you by the Curiosity Software team! Together with industry leaders and experts, we uncover actionable strategies to navigate the outer loop of software delivery, streamline test data management, and elevate software quality. Tune in to transform the way you think about software delivery, quality, and productivity!
Previous Episode

Episode 13: Learning from Software Failures
Welcome to episode 13 of Inside The Outer Loop! In this episode, the Curiosity Software team of Ben Riley, Rich Jordan and Paul Wright discuss their learnings, experiments and experiences with failure.
This episode of Why Didn't You Test That? emphasises the value of experimentation and learning from failure, and why it's key for organizations trying to foster innovation and continuous growth. It highlights the importance of creating a culture of psychological safety where individuals feel comfortable making mistakes, embracing failures as opportunities to learn and improve.
Paul Wright recalls a failure he experienced in a previous role, relating it to a lack of communication and alignment within an organization. The failure underlined the importance of understanding how a new idea or initiative fits into the larger business strategy. Effective communication and alignment between departments can prevent internal competition and ensure that efforts are coordinated towards a common goal.
The podcast also covers the challenge of software design for higher education institutions. Due to resource constraints, these institutions often struggle to engage in early design phases and shift left in the testing process. However, there is a growing recognition of the benefits of early involvement to customize solutions and ensure better alignment with specific needs. This highlights the importance of finding ways to overcome resource limitations and actively participate in software design. Seek out and watch/listen to the complete episode 13 to learn more!
Inside the outer loop – brought to you by the Curiosity Software team! Together with industry leaders and experts, we uncover actionable strategies to navigate the outer loop of software delivery, streamline test data management, and elevate software quality. Tune in to transform the way you think about software delivery, quality, and productivity!
Next Episode

Episode 15: The Future of AI Co-Pilots with Mark Winteringham
Welcome to episode 15 of Inside The Outer Loop! In this episode, Curiosity Software's Rich Jordan and James Walker are joined by Mark Winteringham, author of AI-Assisted Testing. Together they reflect on their experience with AI, the effect it has had on software quality and testing, and the future of AI Co-Pilots.
LLMs such as ChatGPT, Gemini and Llama are impressive, but what do they offer in terms of delivering software quality? What leaps have you taken in using generative AI technology? How will you future-proof your AI-assisted testing efforts? By now, you should be considering these questions at a strategic, organisational level.
Guest Mark Winteringham unpacks a collage of challenges as he reflects on his new book, "AI-Assisted Testing", with our hosts, providing a balanced perspective on the progress, plateaus, and benefits of using artificial intelligence and co-pilots to deliver quality software.
James follows up by exploring the value of AI Co-Pilots in testing and the importance of context in prompt engineering, emphasising the need for experimentation to determine what actually makes a good prompt. Approached with healthy scepticism, prompts can be used as aids that extend quality testing abilities. But to yield better results, rather than prompting the AI with a broad question, the advice is to target specific parts of the system or problem, as in the sketch below.
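As a rough sketch of that advice, compare a broad prompt with a targeted one. The snippet below assumes the OpenAI Python client; the model name, prompts and function under test are illustrative only, not something discussed in the episode:

# Sketch: targeted prompts tend to outperform broad ones for test generation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

broad = "Write tests for my checkout service."  # vague, so results vary wildly
targeted = (
    "Write pytest cases for apply_discount(price, percent): "
    "cover percent=0, percent=100, a negative price, and rounding to 2 dp."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever you use
    messages=[{"role": "user", "content": targeted}],
)
print(response.choices[0].message.content)

The targeted prompt hands the model the exact function signature and the edge cases that matter, which tends to produce tests a reviewer can actually evaluate.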
But what does implementing AI technology into your SDLC actually mean, and how does it work? The possibilities seem endless and large language models keep growing, but has there been a real impact, or is true transformational change still a while away?
Use Curiosity's code at checkout for a discount on Mark Winteringham's book, AI-Assisted Testing!
Get the Book Here: https://bit.ly/ai-testing
Use this code: podcuriosity24
Inside the outer loop – brought to you by the Curiosity Software team! Together with industry leaders and experts, we uncover actionable strategies to navigate the outer loop of software delivery, streamline test data management, and elevate software quality. Tune in to transform the way you think about software delivery, quality, and productivity!