
Episode 6: The Impact of Virtualisation in Testing with John Power
08/01/23 • 45 min
Is virtualisation given its rightful place in test design? Deployed in a loosely coupled system, speed and flow increase, whilst technical debt is reduced. Team alignment and version handling are improved, setting the stage for better software delivery. Welcome to this episode of Inside The Outer Loop.
Moving from expertise in original service virtualisation to sandboxing, guest John Power, CEO of Ostia Solutions, joins Curiosity Software's Huw Price and Rich Jordan to share his insight: a proper sandbox is fully simulated, gives developers and testers a good customer experience, and, being standalone and generating synthetic data, is never compromised.
Initially offering a proxy that used request-response algorithms for recording and replaying without mainframes in play, Ostia moved to providing full simulation, exemplified by the UK's Open Banking model. This involved moving the technology from simple record and replay of data to actual data generation.
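The shift John describes, from replaying captured responses to generating data on demand, can be sketched in a few lines. This is a hypothetical illustration, not Ostia's actual API; all names here are invented for the example.

```python
# Minimal sketch: record-and-replay proxy vs. a standalone synthetic sandbox.
# Illustrative only; not Ostia's implementation.

import random


class RecordReplayProxy:
    """Records real backend responses keyed by request, then replays them."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def call(self, request):
        if request not in self.cache:       # record on first sight
            self.cache[request] = self.backend(request)
        return self.cache[request]          # replay thereafter


class SyntheticSandbox:
    """Generates plausible responses from business rules; no backend needed."""

    def __init__(self, seed=42):
        self.rng = random.Random(seed)      # seeded for repeatable test data

    def call(self, request):
        # Example business rule: any balance request yields a valid amount.
        return {"request": request,
                "balance": round(self.rng.uniform(0, 10_000), 2)}


# A replay proxy still needs the real system once per distinct request;
# a synthetic sandbox stands alone and cannot leak production data.
live = RecordReplayProxy(backend=lambda r: {"request": r, "balance": 100.0})
sandbox = SyntheticSandbox()
print(live.call("GET /accounts/1"))
print(sandbox.call("GET /accounts/1"))
```

The point of the sketch is the dependency: the proxy's fidelity is capped by what was recorded, whereas the sandbox's fidelity is set by the rules you encode, which is what makes it usable when the mainframe is out of reach.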
The hosts share experiences of leading a virtualisation team, as well as how best to implement Master Data Management using sandboxes in model-based testing to avoid accidental complexity in the system under test. In adopting such an approach, the starting point is to understand the current confidence level in the interface you're asking service virtualisation to replace.
In practice, simulating what currently exists in a system, through a framework that brings in functional endpoints and business rules, informs the required APIs to the benefit of time, security and quality. The challenge is for organisations to value sandboxes as a means of adjusting system design, rather than as a regulatory or end-of-year afterthought. Beyond creating reusable assets, you'll need to ensure continuous updates to sandbox data and testing models.
The approach also gives oversight of which contracts and test environments are affected alongside sandboxes, though this requires moving away from centralised management of APIs. In working towards a better architectural design of a system, where dependencies are isolated, we can learn from Conway's Law.
Conway's Law suggests a system mimics its organisation's communication structures, so it's best to improve communication across teams first. Sandboxing will then thrive at an organisational level. You'll reduce technical debt, risk and extra effort, while in parallel developing mature teams that enable flow, feedback and experimentation in the system under test.
Inside the outer loop – brought to you by the Curiosity Software team! Together with industry leaders and experts, we uncover actionable strategies to navigate the outer loop of software delivery, streamline test data management, and elevate software quality. Tune in to transform the way you think about software delivery, quality, and productivity!
Previous Episode

Episode 5: Critical Thinking in Test Design with Paul Gerrard
In this episode, Paul Gerrard, award-winning software engineering consultant, author and coach, brings his experience to Inside The Outer Loop! Together with Curiosity Software's hosts Huw Price and Rich Jordan, Paul Gerrard discusses the difficulties of organisational hierarchies and how to identify ‘stakeholders’ as your ‘internal testing customers’.
Co-host Rich shares his experience of overseeing a test data team, unpacking one of his favoured mantras, 'go slower to go faster', as a caution against relying solely on your technology capability: it might be brilliant, but did it actually solve a problem?
In practice this requires analysing the problem as a means of arriving at consensus, not in any way the much-derided 'analysis paralysis'. Consensus should be applauded, as it allows teams to get agile 'with a small a' and look beyond technology capabilities alone. Guest Paul then focuses our attention on the value testers bring to software delivery: their insight and analysis brings us closer to the problem, beyond any supposed solution.
With that in mind, co-host Huw informs us of the primary role of testers and the need to align their purpose as critical thinkers within quality assurance. He caveats this to stress that it's about open rather than dogmatic dialogue. An open dialogue helps plug any blind spots stakeholders may have and leads to demonstrable improvements and time savings across a system and its processes. These benefits of eased communication between stakeholder and tester flow more easily in smaller teams, and that's where risk workshops can help determine business goals and stop projects going down in flames within larger organisations.
These workshops act as a tool for triggering input from broader stakeholders within an organisation's hierarchy. From this can evolve a crossover of perceptions for deciding and prioritising the critical outcomes of any software delivery process. Co-host Rich cautions us, though, that testers need to avoid jumping quickly to answering issues through the language of risk and resiliency, which usually leads to 'functional or performance' testing alone. Instead, he promotes the need to foresee and map the requirements that must be proven to 'accountable' stakeholders beforehand.
So too there's a raft of interpersonal skills, including imagination and critical thinking, which testers can bring to bear; these are priceless in early-stage challenging to plug gaps in requirements. The outcome is a firmer grasp and understanding of how the software should function.
Next Episode

Episode 7: The Model-Based Tester’s Journey with Gunesh Patil
Guest Gunesh Patil shares insights from his journey beyond the misunderstandings and misconceptions in Model-Based Testing alongside Curiosity Software's Rich Jordan! Rich and Gunesh previously worked together on major SI projects, managing transformational change of disparate systems in a medium-sized organisation. They championed this as Data Automation and Virtual Environments, or DAVe Ops, for their own version release and change management.
DAVe Ops helped spotlight how a shared understanding of a system's architecture, or the lack of one, affects good software testing. Circa 2010, Rich illustrates this with an anti-pattern where automation teams stepped in to help test teams run test cases in bulky end-to-ends, in response to automation test cases failing. A failure isn't down to the automation, but rather to a lack of shared understanding of the consumable breakpoints in a system's architecture.
For stakeholders with short-term sights on improved automation, this omits the benefits of the 'how you get there' approach of Model-Based Testing. It deals less with black-box testing and instead observes sustainable metrics such as risk, response times and payloads, i.e. impact analysis. For Gunesh, this visual and flow-based production of reusable components is a driving force in efficiency: the siloed back-and-forth of translating business requirements into test cases is reduced. An operational win for service isolation and test matching.
Sketching a practical middleware/automation test strategy comes only by listening to the expectations of designers and developers, but also of seasoned ancillary actors in the CI/CD pipeline. This ensures constraints and breakpoints are identified, anticipating and avoiding the introduction of accidental complexity in a system under test. The outcome is that costly and time-consuming data overlaps in automation are avoided.
Operationally, test matching, along with getting and allocating data, formalises thinking whilst paying down technical debt. The main takeaway is that collective analysis makes software testing more integrated across teams, giving the opportunity to create a strategy that factors in isolation breakpoints. So don't just do: also pose questions to tackle organisational as well as technical inconsistency and intractability.