Cloud environments can be complex. Containers, microservices and other technical components can stretch across an array of systems and raise questions about ownership, dependencies and business logic. These environments often require complex development that touches core business systems, large-scale integrations and data platforms, and artificial intelligence.
Because cloud technology continually advances, quality engineering often lags and can be out of step with business requirements. So, while line-of-business and IT leaders may be immersed in test and monitoring data — and even consistently managing costs and hitting performance metrics — they ultimately may not hit the success criteria their business stakeholders expect.
For example, a company might find itself migrating to a new cloud-based call center application. After successfully completing performance, security and reliability testing that hits metrics and key performance indicators (KPIs), the application underperforms in the real world and is less effective than the prior system. The reason? Testing didn’t take into account necessary process and workflow alterations. This deficiency can lead to longer response times and increased error rates for agents handling customer queries.
To help improve results, it is often critical to incorporate real-world scenarios and dynamic workflow detection into your QE processes. This requires a different approach to QE, one that treats it as neither an afterthought nor a mere exercise in hitting technical IT and cloud benchmarks. When advanced QE is introduced early and effectively, it typically serves as an investment in the business, one that can pay dividends over time. What’s more, the upfront costs are often eclipsed by the long-term benefits of matching technology to the actual needs of the business.
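As an illustration of what a business-scenario test might look like, consider the call center example above. The sketch below is hypothetical: the names (`AgentSession`, `handle_customer_query`) and the 2-second response target are illustrative assumptions, not a real application's API. The point is that the assertions encode business success criteria, not just technical pass/fail signals.

```python
# Hypothetical sketch: a business-scenario test for a call-center workflow.
# AgentSession and handle_customer_query are illustrative stand-ins, not a real API.

import time

class AgentSession:
    """Stand-in for an agent's session in the new call-center application."""
    def handle_customer_query(self, query: str) -> dict:
        # In a real test this would drive the actual application workflow.
        time.sleep(0.01)  # simulate processing
        return {"resolved": True, "error": None}

def test_query_resolution_meets_business_sla():
    session = AgentSession()
    start = time.monotonic()
    result = session.handle_customer_query("Where is my order?")
    elapsed = time.monotonic() - start
    # Business success criteria, not just technical ones:
    assert result["resolved"], "query must be resolved in one workflow pass"
    assert result["error"] is None
    assert elapsed < 2.0, "agent-facing response must stay under the 2s target"
```

A test like this fails when a workflow change slows agents down, even if conventional performance and reliability metrics still look healthy.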
Whether you’re on the business side or the IT side, here’s what you should know so you can achieve quality engineering that can help drive business outcomes.
It can be vital to recognize upfront that migrating to a more advanced QE model doesn’t reduce the need for testing. It does, however, change the way testing takes place and what gets measured. The goal is to address gaps in visibility and reporting that can directly impact business performance.
A holistic QE model focuses on a central question: Will a software solution or initiative help the business succeed?
Too often, business teams are heavily involved in selecting an application and defining how it will function within the enterprise, but QE tasks wind up solely in the hands of IT, whose teams are less likely to spot business issues during testing.
This leads to a second critically important question for business and IT leaders: Do we understand the business scenarios that can make or break our product in the real world? In other words, what constitutes an acceptable business outcome?
A model should incorporate industry-leading QE practices that can take into account the characteristics and qualities of current digital systems and software. Moreover, the model should consider how these factors relate to the company, the industry and the broader world.
We use five key principles as a framework that can help introduce a continuous feedback loop:
With this framework, a company can connect and integrate engineering practices, intelligent automation, cognitive test generation, predictive analysis, defect detection and development practices. This can help enhance clarity and control over nearly every aspect of testing and quality assurance.
The power of this approach lies in its ability to help connect key elements that can determine whether an application actually serves the business. To do that, business and IT leaders should focus on three key areas: usability testing, system acceptance and product acceptance.
[Table: Quality engineering phases overview: usability testing, system acceptance, product acceptance]
Connecting data, measuring key technical factors and viewing performance in a granular way — with business outcomes in mind (including cybersecurity and regulatory risk) — typically requires both “shift-left” and “shift-right” approaches. You can test things early and often (shift-left) while critiquing the process, assessing results and making adjustments (shift-right).
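As a hypothetical sketch of the shift-right half of this loop, the snippet below assesses production telemetry against the same business thresholds a pre-release (shift-left) test would use. All field names, thresholds and data here are illustrative assumptions.

```python
# Hypothetical shift-right check: compare production telemetry against the
# same business thresholds used in pre-release (shift-left) tests.
# All data, field names and thresholds are illustrative.

PROD_TARGETS = {"error_rate": 0.02, "p95_response_s": 2.0}

def assess(telemetry: dict) -> list:
    """Return the business KPIs that drifted out of bounds in production."""
    breaches = []
    if telemetry["error_rate"] > PROD_TARGETS["error_rate"]:
        breaches.append("error_rate")
    if telemetry["p95_response_s"] > PROD_TARGETS["p95_response_s"]:
        breaches.append("p95_response_s")
    return breaches

# A week of (made-up) call-center telemetry after go-live:
week = {"error_rate": 0.035, "p95_response_s": 1.7}
print(assess(week))  # ['error_rate']
```

Breaches flagged here would feed back into the next shift-left test cycle, closing the loop between early testing and real-world assessment.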
Of course, each organization should create its own QE roadmap, ideally en route to establishing a quality engineering center of excellence (QE CoE) that can provide centralized, standardized governance, tools, and industry-leading practices and processes. Along the way, there are numerous technical issues to address, structural business issues to focus on and cultural issues to manage. Revamping QE processes can lead to new and different tasks, responsibilities and workflows. For example, developers may suddenly find that they rely on a shared services organization to spin up a test environment.
However, when organizations put this model into motion effectively, they typically slide the dial from a narrow focus on cost and technical performance to a broader strategic perspective. With appropriate KPIs, metrics and other benchmarks, business and IT groups can better understand what impedes progress and what drives performance.
With a QE body overseeing the framework, a structured assessment process, and interactive dashboards with better reporting, an organization is better equipped to monitor results in real time through metrics and visualizations. Tech-enabled solutions used in conjunction with our services and within our QE framework can help.
The objective is to establish an end-to-end quality delivery framework that can help support process and business transformation. With visibility into the data and transparency around key performance factors, an enterprise can make decisions that better align with specific objectives. Quality standards, checkpoints and metrics are baked into a centralized system. Continual improvement and innovation are built into the model.
For example, consider a company that isn’t able to obtain a summary of testing and defect data in its enterprise resource planning (ERP) system. This lack of insight makes it almost impossible for the executive team and steering committee to establish a strong governance framework and make informed decisions. It can also lead to real-world performance failures along with heightened financial, regulatory and reputational risks. With a new QE model, test report creation time can be significantly reduced through automation, and groups overseeing financials and other ERP data can have complete QE visibility through real-time dashboards and reporting.
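A minimal sketch of the kind of automated roll-up such a summary might start from is below. The defect records and field names are made up for illustration; a real pipeline would pull them from the ERP test-management tooling.

```python
# Hypothetical sketch: roll raw defect records up into the summary a
# steering committee would review. Records and fields are illustrative.

from collections import Counter

defects = [
    {"module": "finance", "severity": "high", "status": "open"},
    {"module": "finance", "severity": "low",  "status": "closed"},
    {"module": "orders",  "severity": "high", "status": "open"},
]

def summarize(defects):
    """Aggregate defect data by module and surface open high-severity items."""
    by_module = Counter(d["module"] for d in defects)
    open_high = sum(1 for d in defects
                    if d["status"] == "open" and d["severity"] == "high")
    return {"total": len(defects),
            "by_module": dict(by_module),
            "open_high_severity": open_high}

print(summarize(defects))
# {'total': 3, 'by_module': {'finance': 2, 'orders': 1}, 'open_high_severity': 2}
```

Feeding a summary like this into a real-time dashboard is what turns scattered test data into the governance visibility described above.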
Avoiding confusion and having uniform goals can be critical. The key lies in asking appropriate questions, analyzing meaningful factors and directing attention and resources to issues in ways that transcend the limited insights conventional IT metrics deliver. By establishing a highly sophisticated testing framework, an organization can liberate itself from the weight of a tactical and reactive QE framework. It can embrace a model that’s finely tuned to the digital age.