
Top 10 ways to be a test automation hero

Mark Kowal
Mar. 12, 2015

Many quality initiatives are doomed to fail right from the start. The reasons vary: sometimes the initiative was just an afterthought to an ongoing project or, worse, part of an ill-defined long-term strategy. Organizations with contact centers and IP telephony environments need to ensure their customers have a high-quality experience from the beginning, and they should apply the appropriate focus and resources to achieve that goal.

By leveraging a solid test automation strategy, organizations can mitigate risk, reduce their operating costs and improve the quality of the customer experience in these complex environments. What is the best way to accomplish this? Based on Empirix’s twenty-plus years of experience (I’ve been here for 13 of those), here are some things you can do to not only prevent a new deployment from failing, but also ensure that you are recognized as a hero within your company! To become a test automation hero you should:

10. Shake hands and kiss babies

Right from the start, you’ll realize that people don’t understand what test automation is or the benefits that can be derived from it. You will need to socialize the concept within the business. That means getting out and meeting with your internal customers to educate them – keep in mind this may include your senior management team. Also keep in mind that test automation is not a replacement for manual testing; it helps you test more of the environment in less time. In any test automation environment, manual testers remain critical for complex test cases and bug verification.

9. Get the security team on your side

Trust me, test automation and security policies can often be in conflict with each other. Take the time to work closely with the security team early in the process, set up a safe sandbox and establish the appropriate input/output procedures. Getting the security team’s input and approval will be critical to a successful test project.

8. Integrate, integrate, integrate

These are complex environments. There isn’t a single technology from one vendor to master; most modern environments deploy many different channels and methods to provide options for customers. That means the environment will contain voice portals, session border controllers, firewalls, web servers, application servers, and CRM solutions. All of these technologies, and the connections to and from them, require testing, and only through integration will you be able to automate those tests. CLIs, APIs, web services, SSH, JRuby and PowerShell are all useful tools for driving configurations, controlling test tools and triggering real-world conditions.
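
To make that concrete, here is a minimal sketch in Python of the kind of glue code involved – the host names, endpoints, commands and credentials are placeholders I’ve invented for illustration, not real systems or a specific product’s API:

```python
# Glue code sketch: drive a device over SSH, trigger a scenario through a
# (hypothetical) test-tool web service, and run a local PowerShell step.
import subprocess

import paramiko   # third-party SSH library
import requests   # third-party HTTP library


def pull_sbc_config(host: str, user: str, key_file: str) -> str:
    """Fetch the running configuration from a session border controller over SSH."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, key_filename=key_file)
    _, stdout, _ = client.exec_command("show configuration")  # placeholder CLI command
    output = stdout.read().decode()
    client.close()
    return output


def trigger_call_scenario(api_base: str, scenario_id: str) -> dict:
    """Kick off a test scenario through a hypothetical REST endpoint."""
    resp = requests.post(f"{api_base}/scenarios/{scenario_id}/run", timeout=30)
    resp.raise_for_status()
    return resp.json()


def prep_agent_desktops(script_path: str) -> None:
    """Invoke a PowerShell script that prepares agent desktops (Windows hosts only)."""
    subprocess.run(["powershell.exe", "-File", script_path], check=True)
```

The specific libraries matter less than the principle: every tool, device and application in the chain should be drivable from one place so the whole scenario can run unattended.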

7. Data In, Data Out, Data Data

Environment data, configuration data, agent data, customer data, application data. You’re going to have to generate and maintain a number of test accounts. They’ll be purpose-made accounts (inactive cards, premium customers, customers with different products assigned to them), you’ll have several different databases to maintain, and you’ll need access to them at run time. You will need solid procedures for creating test data and the ability to ‘reset’ its status before, during and after a test run. You will also need controllable methods for auditing, storing and automating configuration data so you can review it side by side with test results. No more just ‘copying’ from production and fishing for what you need. You’ll need to purposefully create it, and create it in a way that matches the model that’s in production (a small reset sketch follows below). Which brings us to #6.
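
Here is that sketch – it assumes a simple local SQLite database and invented account profiles, purely for illustration – putting the purpose-made accounts back into a known state before a run:

```python
# Test data reset sketch: recreate purpose-made accounts in a known state.
import sqlite3
from dataclasses import dataclass


@dataclass
class TestAccount:
    account_id: str
    profile: str      # e.g. "inactive_card", "premium_customer"
    product: str


SEED_ACCOUNTS = [
    TestAccount("TST-0001", "inactive_card", "basic_checking"),
    TestAccount("TST-0002", "premium_customer", "platinum_card"),
]


def reset_test_data(db_path: str = "testdata.db") -> None:
    """Drop and recreate the test accounts so every run starts from a known state."""
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS test_accounts")
    conn.execute(
        "CREATE TABLE test_accounts (account_id TEXT PRIMARY KEY, profile TEXT, product TEXT)"
    )
    conn.executemany(
        "INSERT INTO test_accounts VALUES (?, ?, ?)",
        [(a.account_id, a.profile, a.product) for a in SEED_ACCOUNTS],
    )
    conn.commit()
    conn.close()


if __name__ == "__main__":
    reset_test_data()
```

In a real environment the ‘database’ is usually several systems (CRM, billing, the voice portal’s own data store), but the principle is the same: the reset is scripted, repeatable and runnable before, during and after a test.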

6. Models, Models EVERYWHERE

The debate is over: with its ability to find defects earlier in the lifecycle, to cover more of the application and to exercise those less obvious paths and features, model-based test design is the overwhelming test automation champion. Not only are you going to need to model the application, you’re going to have to model user behavior. This means creating user profiles that match your test data, enabling more comprehensive test coverage and the ability to translate the impact on the user experience for the business groups.
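
As a toy illustration of the idea – the call flow below is invented, not taken from any real application – describe the application once as states and transitions, then derive the test paths instead of hand-writing every script:

```python
# Model-based test design sketch: enumerate paths through a small call-flow
# model; each path becomes a candidate test case.
from itertools import count

CALL_FLOW = {
    "welcome":         ["main_menu"],
    "main_menu":       ["balance", "payments", "agent"],
    "balance":         ["main_menu", "hangup"],
    "payments":        ["confirm_payment"],
    "confirm_payment": ["main_menu", "agent"],
    "agent":           ["hangup"],
    "hangup":          [],
}


def derive_paths(model, start="welcome", max_depth=6):
    """Walk the model and collect simple paths; cycles are cut off."""
    paths = []

    def walk(node, path):
        path = path + [node]
        nexts = [n for n in model.get(node, []) if n not in path]  # skip loops
        if not nexts or len(path) >= max_depth:
            paths.append(path)
            return
        for nxt in nexts:
            walk(nxt, path)

    walk(start, [])
    return paths


for i, p in zip(count(1), derive_paths(CALL_FLOW)):
    print(f"Test case {i}: {' -> '.join(p)}")
```

Layer user profiles on top of this (which customers take which paths, and how often) and you get both broader coverage and numbers you can translate into business impact.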

5. Mr. Gorbachev, Tear Down This Wall!

In 1987 President Reagan challenged Mr. Gorbachev to “tear down this wall.” I would argue that the same call to action could be made today in the test environment. Desktop test teams, voice portal test teams and website test teams co-exist in a single organization, yet they hardly ever speak to each other and often fight for shared budget. Guess what’s not being tested in these organizations? Did you guess “an end-to-end transaction from web to voice portal to agent screen pop – under production-level loads”? I figured you did. Why is this important? Oh yeah, because that’s how it all works in the real world. (See #8)

4. That’s so Meta

Testing for quality requires high-quality processes that produce high-quality scripts. If your organization is just starting out, it will take some time to learn all the features of your shiny new tools, so start by automating the simple tasks and the simple parts of the application before moving on to the more complex ones. Keep best practices in mind and identify the run-time parameters and data required to reduce script maintenance. Automate the most stable parts of the application first before you attempt the more dynamic portions.
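
One pattern worth adopting early is keeping run-time parameters in data rather than in the script itself. The sketch below is illustrative only – the parameter names and the placeholder tool call are assumptions, not a real product’s API:

```python
# Data-driven scripting sketch: one script, many cases, driven by a parameter table.
import csv
import io

# In practice this table lives in a CSV file or spreadsheet owned by the test team.
PARAMETERS_CSV = """dialed_number,language,expected_prompt
18005550100,en-US,main_menu
18005550101,es-MX,menu_principal
"""


def run_simple_ivr_check(dialed_number: str, language: str, expected_prompt: str) -> bool:
    """Placeholder for the real tool call that places a call and checks the prompt."""
    print(f"Dialing {dialed_number} ({language}), expecting '{expected_prompt}'")
    return True  # a real implementation would return the tool's verdict


def main() -> None:
    for row in csv.DictReader(io.StringIO(PARAMETERS_CSV)):
        ok = run_simple_ivr_check(**row)
        print("PASS" if ok else "FAIL", row["dialed_number"])


if __name__ == "__main__":
    main()
```

When the application changes, you update a row of data instead of editing dozens of scripts.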

3. Always Be Closing

Test automation is a money saver and a time saver, but it’s not a direct money maker. To justify test automation you should keep an excellent accounting of what you’re saving the organization. Some organizations have told me that, by the time the customer reaches a live representative, the fully loaded cost savings – including telecom, technology, maintenance, capital, etc. – come to $250,000 a second. How much effort would it have taken, and what coverage would we have gotten, if the testing had been done manually? And when you find faults or defects, what would the business impact have been if they had reached production? Idle agents, angry customers, mis-routed calls, hung screen pops, blank web pages, customers sitting in queues with no agents staffed – to name a few. Keep a record of this and keep reminding the decision makers how much of their bacon you’re saving. How do you get that data? (See #2)
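
Even a back-of-the-envelope calculation keeps that running total visible. The rates and counts in the sketch below are made up; you would plug in your own accounting:

```python
# Illustrative savings tracker with invented inputs.
def automation_savings(manual_hours_avoided: float,
                       loaded_hourly_rate: float,
                       defects_caught_pre_prod: int,
                       avg_prod_incident_cost: float) -> float:
    """Estimate what automated testing saved versus manual-only testing."""
    labor_savings = manual_hours_avoided * loaded_hourly_rate
    incident_savings = defects_caught_pre_prod * avg_prod_incident_cost
    return labor_savings + incident_savings


# Example: 400 manual test hours avoided at an assumed $85/hr, plus 6 defects
# caught before production at an assumed $25,000 average incident cost.
print(f"${automation_savings(400, 85, 6, 25_000):,.0f} saved this release")
```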

2. Provide Value to Operations

Ask the operations teams about the biggest problems they encounter with new or upgraded deployments and existing platforms. Ask to see or review the trouble tickets they deal with for the applications you are responsible for. Which issues are the most voluminous? Which ones take the longest to resolve? What’s a common theme? How can you attack it? Is it load related? Are there fringe cases that only a model-based test design would be able to accommodate (pairwise testing, n-wise testing)? Are there regression bugs? Are there documentation bugs? Make the operations team’s life easier. Develop a proactive automated customer experience transaction for production, run it for them, give them a website to log in and review those transactions, and make sure failed transactions raise alarms in the Service Operations Center. This helps them distinguish ‘user error’ from the controlled test case.
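
Here is a minimal sketch of such a probe – the URLs are placeholders standing in for the real self-service front end and the SOC’s alerting endpoint, and a production version would exercise a full transaction rather than a single page:

```python
# Proactive customer-experience probe sketch: run a synthetic check on a
# schedule, record the result, and raise an alarm on failure.
import time

import requests  # third-party HTTP library

PROBE_URL = "https://www.example.com/selfservice"    # placeholder front end
ALERT_WEBHOOK = "https://soc.example.com/alerts"     # placeholder SOC endpoint


def run_probe() -> dict:
    start = time.time()
    try:
        resp = requests.get(PROBE_URL, timeout=10)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    return {"ok": ok, "latency_s": round(time.time() - start, 2), "ts": int(time.time())}


def alert_soc(result: dict) -> None:
    requests.post(ALERT_WEBHOOK, json={"severity": "major", "detail": result}, timeout=10)


if __name__ == "__main__":
    while True:
        result = run_probe()
        print(result)
        if not result["ok"]:
            alert_soc(result)
        time.sleep(300)  # every five minutes
```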

1. To Pass or To Fail, That is the Question

Those who fail to define, define failure. What is a failure? What is a fault? What is an error? Did you even know that in test automation those are three different things? What are management’s expectations for each? What if the system under test is a speech application, and where it once recognized “I want to get my account balance” on the first attempt, it now asks a confirmation question: “I think you said… is that right?” You can script for that dynamically and ‘handle’ it like a human would, but what should the result be? A warning? Did you integrate with the system-under-test logs? With which accents does the re-prompting occur? At what volume? Is it only at that step, or is the speech engine acting flaky all the time? Can you quickly re-wire for a voice quality test from the tool up to the voice portal interface? Perhaps it’s the quality of the speech submitted to the engine.
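
To make the re-prompt example concrete, here is a sketch of handling the confirmation dynamically and classifying the step as pass, warn or fail. The transcript strings, the threshold and the respond() helper are all invented for illustration:

```python
# Pass/warn/fail classification sketch for a speech-recognition step.
def respond(utterance: str) -> None:
    """Placeholder for the tool call that plays the caller's spoken response."""
    print(f"Caller says: {utterance}")


def classify_speech_step(prompt_heard: str, reprompt_rate: float) -> str:
    """Decide the verdict for a single speech step."""
    if "is that right" in prompt_heard.lower():
        # The engine asked for confirmation; answer it like a human would,
        # but flag the step so someone checks whether recognition is degrading.
        respond("yes")
        return "WARN" if reprompt_rate < 0.20 else "FAIL"
    if "account balance" in prompt_heard.lower():
        return "PASS"
    return "FAIL"


print(classify_speech_step("I think you said account balance, is that right?", 0.05))
```

The point is that the verdict is a policy decision you define up front, not something the script improvises at run time.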

One of the most important questions to answer when something fails is “What has changed?” How do you know? Automated configuration and change management auditing isn’t just a good idea for production; very often the lack of configuration and change management in your test environment will limit your productivity and render your results vague, because you were not in control of all the variables supporting the tests. Data dictionaries and in-scope/out-of-scope definitions will be useful when you are tracking those bugs, and they will determine how you improve the quality of your quality process.
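
A simple way to start answering “what has changed?” is to snapshot the configuration as text, fingerprint it, and diff the current capture against the last known-good baseline. The file paths below are placeholders for however your environment exports its configuration:

```python
# Configuration audit sketch: fingerprint and diff config captures.
import difflib
import hashlib
from pathlib import Path


def fingerprint(path: str) -> str:
    """Hash a configuration capture so changes are cheap to detect."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def config_diff(baseline_path: str, current_path: str) -> str:
    """Show exactly which lines changed between two captures."""
    baseline = Path(baseline_path).read_text().splitlines()
    current = Path(current_path).read_text().splitlines()
    return "\n".join(difflib.unified_diff(baseline, current,
                                          fromfile="baseline", tofile="current",
                                          lineterm=""))


if __name__ == "__main__":
    if fingerprint("config_baseline.txt") != fingerprint("config_current.txt"):
        print(config_diff("config_baseline.txt", "config_current.txt"))
    else:
        print("No configuration changes since the baseline capture.")
```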
