Preparing For a Run: Ensuring Financial App Performance Quality During Unpredictable Circumstances

When it comes to testing the performance of their applications, retailers have a somewhat more predictable workload. They already know their busy times – Black Friday and the week before Christmas, for example – which makes capacity planning and software testing more straightforward.

The financial sector, on the other hand, doesn’t have this luxury. High demand for financial services can be triggered by any number of unexpected events, such as large swings in the stock market, natural disasters, or major world events. This means that financial applications and systems must be ready at all times to handle higher-than-normal traffic (nothing new here: in the pre-Internet stock market crash of 1987, many onsite telephone systems were ill-equipped to handle the unexpected volume of calls from distraught clients).

It is also a mistake to treat performance testing as an isolated afterthought to be carried out once a project is complete. It should instead be embedded in every step of the development process and regarded as one of the four pillars of testing (a short component-level sketch follows the list):

Functional Testing – Make sure the functions match the specifications

Regression Testing – Confirm that no new errors have been introduced

Performance Testing – Ensure that performance meets the specified criteria

Negative Testing – Establish robustness by subjecting the software to unusual and unexpected conditions
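
As a rough illustration, here is a minimal sketch of what the first two pillars might look like at the component level, assuming a hypothetical calculate_interest() component and the pytest framework; the module and function names are purely illustrative.

```python
# Minimal sketch of functional and regression tests for a hypothetical
# calculate_interest() component (illustrative names, pytest assumed).
import pytest

from ledger import calculate_interest  # hypothetical component under test


def test_interest_matches_specification():
    # Functional: the spec says 2% annual interest on a 1,000.00 balance
    # over one year is 20.00.
    assert calculate_interest(balance=1000.00, annual_rate=0.02, years=1) == pytest.approx(20.00)


def test_zero_balance_accrues_nothing():
    # Kept in the suite and re-run on every change, so a future modification
    # cannot silently reintroduce an error (the regression pillar).
    assert calculate_interest(balance=0.00, annual_rate=0.02, years=1) == 0.00
```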

In addition to considering all four pillars, it is important that these tests are performed well before system integration. This means all four types of testing should be done at the component level (think of components as the blocks of code that make up an app). The details of the code will still be fresh in the developer’s mind should any problems arise, and the more thoroughly the components are tested, the greater the probability of a swift and successful system integration.

Testing should also be embedded in the development process, rather than having developers hand components over to an isolated testing team. Ideally, a dedicated environment will be created for testing each component, which goes a long way toward ensuring that each facet of an app functions correctly.

Negative testing is the one pillar that doesn’t get as much attention as it should. These tests should include out-of-bounds conditions (where the values exceed specified ranges), improper data values (such as having alphabetic characters in a numeric field), and unexpected asynchronous calls (where the component is called when it does not expect to be called).
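
As a rough sketch, the out-of-bounds and improper-data cases above might translate into parametrised negative tests along the following lines; validate_transfer_amount() and its 0.01–50,000 range are hypothetical and used only for illustration.

```python
# Minimal sketch of negative tests for a hypothetical validate_transfer_amount()
# component that is specified to accept amounts between 0.01 and 50,000.
import pytest

from payments import validate_transfer_amount  # hypothetical component under test


@pytest.mark.parametrize("bad_amount", [
    -1,             # negative amount (out-of-bounds condition)
    0,              # boundary value just outside the specified range
    50_000.01,      # value exceeding the specified range
    "ten dollars",  # alphabetic characters in a numeric field
    None,           # missing value
])
def test_rejects_improper_amounts(bad_amount):
    # The component should fail in a controlled, predictable way rather than
    # crashing or silently accepting bad data.
    with pytest.raises(ValueError):
        validate_transfer_amount(bad_amount)
```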

Mock services are a great tool for negative testing. They can simulate a service that is slow, a service that returns invalid data, and a service that returns unexpected responses. There may be third parties that are part of the process (in the case of a bank, this might also include other departments). Again, a mock service can be used for positive, negative, and performance testing in place of a live third party.
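
For example, a mock standing in for a third-party (or another department’s) service might be built along these lines; AccountBalanceClient and its lookup() interface are hypothetical, and Python’s unittest.mock is used purely for illustration.

```python
# Minimal sketch of mock services that simulate a slow response and invalid data,
# for a hypothetical AccountBalanceClient that normally calls a live service.
import time
from unittest.mock import Mock

from banking import AccountBalanceClient  # hypothetical component under test


def make_slow_service(delay_seconds=5.0):
    # Simulates a downstream service that responds, but far too slowly.
    service = Mock()

    def slow_lookup(account_id):
        time.sleep(delay_seconds)
        return {"account_id": account_id, "balance": "1000.00"}

    service.lookup.side_effect = slow_lookup
    return service


def make_invalid_data_service():
    # Simulates a downstream service that returns malformed data (no balance field).
    service = Mock()
    service.lookup.return_value = {"account_id": "123"}
    return service


def test_client_reports_error_on_malformed_response():
    client = AccountBalanceClient(service=make_invalid_data_service())
    # The component should surface a clean error, not leak a KeyError to the caller.
    assert client.get_balance("123").status == "error"
```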

The goal of negative testing is to ensure that each component in the application handles problem conditions in a robust and orderly manner. For example, the typical user becomes unhappy when a mobile app or web page remains unchanged for more than three seconds. If a mobile gateway does not respond within three seconds, the mobile app should display a message saying that processing is still taking place. A slow mock service standing in for the mobile gateway makes this sort of test possible.
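
A sketch of that test might look like the following, assuming a hypothetical PaymentScreen component that calls the gateway in the background and gives up after a configurable timeout; all names and the exact message are illustrative.

```python
# Minimal sketch of the three-second rule, using a slow mock in place of the
# mobile gateway. PaymentScreen and its timeout behaviour are hypothetical.
import time
from unittest.mock import Mock

from mobile_app import PaymentScreen  # hypothetical component under test


def test_slow_gateway_triggers_still_processing_message():
    gateway = Mock()
    # The mock gateway takes far longer than the three-second budget to answer.
    gateway.submit_payment.side_effect = lambda *args, **kwargs: time.sleep(10)

    screen = PaymentScreen(gateway=gateway, timeout_seconds=3)
    screen.submit(amount="25.00", to_account="987654")

    # Rather than leaving the display unchanged, the component should tell the
    # user that processing is still taking place.
    assert "still" in screen.status_message.lower()
```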

The criticality of performance testing cannot be overstated. Simulating traffic from a large number of devices reveals what the end-user experience will be when the system is under load.

When performance testing is done at the component level, the system architect can specify the expected performance of each component. If component-level testing reveals that a component is falling short of that target, the problem can be dealt with quickly, rather than surfacing during final performance testing, when changes are much more difficult.
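
As a sketch, a component-level performance check against an architect-specified target could look like this; quote_lookup(), the 200 ms 95th-percentile target, and the concurrency level are all illustrative assumptions.

```python
# Minimal sketch of a component-level performance test: 500 calls from 50
# concurrent workers, checked against an assumed 200 ms 95th-percentile target.
import time
from concurrent.futures import ThreadPoolExecutor

from pricing import quote_lookup  # hypothetical component under test


def test_quote_lookup_meets_latency_target():
    def timed_call(_):
        start = time.perf_counter()
        quote_lookup("AAPL")
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = sorted(pool.map(timed_call, range(500)))

    p95 = latencies[int(len(latencies) * 0.95) - 1]
    assert p95 < 0.200, f"95th percentile latency {p95:.3f}s exceeds the 200 ms target"
```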

After the application components are developed and tested, the same four pillars will apply to system integration testing.

Once a new project has been placed into production, testing should continue in the form of periodic maximum-load tests that confirm the availability of the entire system under unexpected demand. Charles Schwab, for example, regularly conducts high-volume tests so that, should a stock market disruption occur, its online systems have already been proven to handle the extraordinary load.

Sometimes when a bank closes, it is because the institution’s systems could not handle an unexpected load, and that inability alone can raise questions about the institution’s viability. In today’s banking world, rumours can cause electronic bank runs, and the speed of withdrawals can quickly overtake any scramble to fix a capacity problem. It is better to test performance thoroughly before a crisis delivers a surprise.

Performing the four pillars of functional, regression, performance, and negative testing at the component level — and periodically at the system level — can go a long way toward building confidence in a financial institution’s technology, improving the quality of its apps and the customer experience, and ensuring readiness to deal with the unknown.
