
Episode 15: The Future of AI Co-Pilots with Mark Winteringham
03/19/24 • 43 min
Welcome to episode 15 of Inside The Outer Loop! In this episode, Curiosity Software's Rich Jordan and James Walker are joined by Mark Winteringham, author of AI-Assisted Testing. Together they reflect on their experience with AI, the effect it has had on software quality and testing, and the future of AI Co-Pilots.
LLMs such as ChatGPT, Gemini and Llama are cool, but what do they offer in terms of delivering software quality? What leaps have you taken in using generative AI technology? How will you future-proof your AI-assisted testing efforts? By now, you really should be considering these questions at a strategic, organisational level.
Guest Mark Winteringham unravels a collage of challenges as he reflects on his new book, “AI-Assisted Testing”, with our hosts, providing a balanced perspective on the progress, plateaus, and benefits of using artificial intelligence and co-pilots to deliver quality software.
James follows up by exploring the value of AI Co-Pilots in testing and the importance of context in prompt engineering, emphasising the need for experimentation to determine what actually makes a good prompt. Approached with healthy scepticism, prompts can serve as aids that extend quality testing abilities. But to yield better results, rather than prompting AI with a broad question, the advice is to target specific parts of the system or problem.
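As a rough illustration of that targeting advice (a hypothetical sketch, not an example from the episode; the `apply_discount` function and both prompts are invented for illustration):

```python
# Hypothetical illustration: a broad prompt vs. a targeted prompt for
# generating tests with an LLM co-pilot. Targeting a specific part of the
# system gives the model concrete context: a named function, its
# parameters, and the edge cases that matter.

broad_prompt = "Write some tests for my web application."

targeted_prompt = (
    "Write boundary-value test cases for apply_discount(price, code): "
    "a valid code, an expired code, an empty string, and price=0."
)

def mentions_specifics(prompt: str) -> bool:
    """Rough heuristic: does the prompt name a function signature and concrete inputs?"""
    return "(" in prompt and any(ch.isdigit() for ch in prompt)

print(mentions_specifics(broad_prompt))     # False
print(mentions_specifics(targeted_prompt))  # True
```

The heuristic is deliberately crude; the point is simply that the targeted prompt gives the model something concrete to anchor its output to, which is where experimentation pays off.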
But what does implementing AI technology into your SDLC actually mean, and how does it work? The possibilities seem endless, and large language models keep growing, but has there been a real impact, or is true transformational change still a while away?
Use Curiosity's code at checkout for a discount on Mark Winteringham's book, AI-Assisted Testing!
Get the Book Here: https://bit.ly/ai-testing
Use this code: podcuriosity24
Inside the Outer Loop – brought to you by the Curiosity Software team! Together with industry leaders and experts, we uncover actionable strategies to navigate the outer loop of software delivery, streamline test data management, and elevate software quality. Tune in to transform the way you think about software delivery, quality, and productivity!
Previous Episode

Episode 14: AI-Powered Testing Practices with Alex Martins
Welcome to episode 14 of Inside The Outer Loop! In this episode, the Curiosity Software team, Rich Jordan and Ben Johnson-Ward, are joined by Alex Martins, VP of Strategy at Katalon, to discuss the implications and challenges of AI-powered testing.
This episode goes beyond the hype and marketing euphoria of AI to weigh up the productivity gains coming from GPT-4 and large language models (LLMs) in the software quality space. Guest Alex Martins leads the conversation around the need to put the tester at the centre of AI-powered testing, and only then start building out AI use cases and safeguards.
Where the development community has seen tangible gains in AI deployment, the uplift in AI-powered testing practices is just beginning. So, how will this impact software testing professionals? Also, how will SME knowledge evolve as organizations develop bespoke LLMs?
Ben Johnson-Ward argues that if artificial intelligence is used to create test outputs, then testers will have to evaluate those outputs to determine whether they are correct. This approach may decrease productivity as testers spend time testing the output of AI-generated tests. Testers will be able to fine-tune their AI models and build out a broader toolkit, but what does this look like? As organizations adopt AI in testing, there will also be an impact on the metrics of repeatability, explainability, and auditability.
With this in mind, internal AI committees can establish rules to abate uncertainty. Rich Jordan follows up on Ben's point, explaining how, from the human perspective, AI may be limited in determining whether an application meets the needs of its users. In this use case, AI becomes the co-pilot, a new tool for experts to enhance collaboration, while testers remain the primary pilots. Repeatability is discussed as a characteristic that humans are comfortable with in testing, but can AI offer better alternatives to traditional methods of monitoring code changes and integration flows?
AI-powered practices in software testing and test coverage are still in their early stages. This requires ongoing collaboration, learning, and sharing of experiences among organizations and industry professionals.
Finally, the possibilities and potential benefits of AI are too significant to ignore, despite the discomfort and challenges it brings in delivering quality software, faster.
Next Episode

Episode 16: Tips for public speaking & presenting with Lina Deatherage
Welcome to episode 16 of Inside The Outer Loop! In this episode, the Curiosity team, Rich Jordan, Ben Riley and Lina Deatherage, discuss their latest webinar and share tips for public speaking and preparing for live presentations.
The hosts provide insights, personal experiences, and practical advice, making for an informative and enjoyable listen. In this conversation, the team gets to debating the merits of a hot dog being a sandwich. But beyond such pop culture controversies, there are also insights and tips about public speaking, and a focus on how testers are seeking out more formalised test data education.
Keep calm and carry on is the tip from Curiosity's Ben Riley, who shares his enjoyment of chaotic situations, recalling instances where he faced unforeseen technical issues. His advice: remain calm, of course, but also think on your feet and adapt during any public speaking engagement.
During a recent webinar, Lina shared her experience of presenting a software demo focused on test data strategies. She touches on the importance of preparation, finding a comfortable pace, and being adaptable in case of any unexpected issues or challenges. Lina also emphasized the importance of realizing that the audience is there to learn and support the speaker, rather than finding faults or errors.
During the webinar, the audience expressed interest in the new certified Test Data Fundamentals Course. Ensure data isn't a blocker to improving your quality efforts today by completing Curiosity's Test Data Fundamentals & Key Questions course. In this certified course, you'll explore key questions that you should be able to answer when setting up your test data capability and as part of a good Test Data Strategy.
Check out Ben's and Lina's recent webinar, Perfect Your Test Data Strategy: How to Achieve Software Quality and Compliance at Speed. Watch now to learn how you can deliver complete and compliant data on demand, developing quality software at speed.