Testing financial trading systems demanded far more investment than other systems: tedious test steps were executed repeatedly by hand, yet the return on that effort was low. As projects and personnel changed, uncontrollable factors were inevitably introduced; a common situation was a modified field in Interface A's output breaking the results of Interface B. Risk also accumulated with each version release.
Theoretical Knowledge
- How to Measure the Value of Automation? Automation testing ROI = (Manual Execution Time × Number of Runs) / (Development Cost + Maintenance Cost)
- Which Features Should Be Automated? Frequently used features that are unlikely to change. Writing automated test code for this type of interface yields the highest returns.
- Why Choose This Timing to Drive Automation Testing? It is not appropriate near a project launch – distant water doesn't quench immediate thirst, and automation pays back over the long term. It is most suitable once the project is already running in production and within a stable release cycle.
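The ROI formula above can be sketched as a small function; all figures in the example are hypothetical:

```python
def automation_roi(manual_minutes: float, runs: int,
                   dev_cost_minutes: float, maintenance_minutes: float) -> float:
    """ROI = (manual execution time * number of runs) / (development + maintenance cost)."""
    return (manual_minutes * runs) / (dev_cost_minutes + maintenance_minutes)

# Example: a 30-minute manual regression run 100 times per year,
# against 600 minutes of development and 400 minutes of maintenance.
print(automation_roi(30, 100, 600, 400))  # 3.0 -> automation pays off
```

An ROI above 1 means automation has already saved more time than it cost; frequently run, rarely changing suites push the numerator up and keep the maintenance term in the denominator small.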
Framework Selection
Given the task of automation testing without prior practical experience, a typical starting point is to open a search engine and find tools and frameworks that can be used with the current system’s technology stack, review the user manuals, and get started. If you can immediately find a suitable tool, congratulations, perfect start!
Let me preface this by saying I might be wrong. After reviewing the relevant materials, it's not that such frameworks don't exist; rather, they are too complex and consume excessive resources. A beginner needs something small, streamlined, and concise. Consulting colleagues in the testing group led to the suggestion of a self-built Python framework – essentially, wrapping an existing unit testing framework into an automated testing framework.
Referencing the design thinking for this project: https://github.com/wintests/pytestDemo
Why Use a Framework?
Services have multiple different deployment environments – development, testing, and live testing environments. The purpose of a framework is to act as an abstraction layer, separating test cases and data. This allows for configuring different case data according to various environment configurations, and it also supports shared data.
The core goal is to increase the utilization of the automation suite. As scenarios grow more complex, the data in different environments can be completely unrelated; in that case, simply add a label tag to the case data indicating which environments it supports.
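The labeling idea can be sketched like this: each case datum carries the set of environments it supports, and the framework selects cases for the current environment at load time. The environment names and case data below are hypothetical:

```python
# Hypothetical case data: each entry is tagged with the environments it supports.
CASES = [
    {"name": "query_quote",  "envs": {"dev", "test", "live"}, "symbol": "AAPL"},
    {"name": "place_order",  "envs": {"dev", "test"},         "symbol": "AAPL"},
    {"name": "settle_batch", "envs": {"test"},                "symbol": "AAPL"},
]

def cases_for(env: str) -> list:
    """Keep only the cases whose label set includes the current environment."""
    return [c for c in CASES if env in c["envs"]]

print([c["name"] for c in cases_for("dev")])   # ['query_quote', 'place_order']
print([c["name"] for c in cases_for("live")])  # ['query_quote']
```

In a pytest-based framework the same effect is commonly achieved with `@pytest.mark` markers on cases and `-m` filtering on the command line, which keeps environment selection out of the case bodies entirely.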