QF-Test for UI-based Load Tests
This video shows you how to create system load via the UI with the QF-Test demo test suite in order to monitor and measure real end-to-end times, user acceptance times and the resources needed.

Mrs Weiß of Münchener Verein, Munich, Germany: "When the cost-benefit analysis in a project doesn't justify extensive load tests, the existing automated QF-Test tests are started manually or time-controlled as a batch job on 20 virtual machines at MÜNCHENER VEREIN insurance group. This puts load on the application and makes a statement about its performance possible."

Evaluation report for load tests: a comparison of 11 open-source and commercial tools, with a practical example from ALEA GmbH
QF-Test can be connected to highly specialised commercial tools as well as to open source tools.


QF-Test's focus is the web interface, while NeoLoad's focus is on the backend systems and the network level.
Check it out yourself
How often should automated performance tests run?
Ideally, small performance smoke checks should run on every major build or merge.
More extensive load and stability tests are usually scheduled nightly or before releases to ensure comparable results.
What minimum requirements should an automated performance test meet?
It should be reproducible, have a clearly defined load profile, and verify measurable target metrics (e.g., p95 latency, error rate).
In addition, monitoring is needed so that the causes of deviations can be traced.
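Such a metric check can be sketched in a few lines. The sample data and the threshold values below are illustrative assumptions, not QF-Test output; in practice the latencies would come from the recorded run log of the load test.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def check_run(latencies_ms, errors, p95_limit_ms=800, max_error_rate=0.01):
    """True if p95 latency and error rate stay within the target metrics."""
    p95 = percentile(latencies_ms, 95)
    error_rate = errors / len(latencies_ms)
    return p95 <= p95_limit_ms and error_rate <= max_error_rate

# Example: 100 requests between 120 and 219 ms, one of them failed.
latencies = [120 + i for i in range(100)]
print(check_run(latencies, errors=1))  # → True
```

Because both the load profile (the sample set) and the target metrics are fixed, the same run can be re-evaluated and reproduced at any time.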
How should external dependencies (e.g., payment, login, APIs) be handled in automated tests?
They are either intentionally included as part of the end-to-end scenario or replaced in a controlled manner (mock/staging) to achieve stable results.
It is crucial to define a strategy for each dependency and make it transparent in the results.
How can “flaky” performance tests be prevented?
By using fixed test windows, stable test data, identical configurations, and repeated runs with median/percentile comparison.
Additionally, it helps to define thresholds with tolerances and avoid putting parallel load on the infrastructure during tests.
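The idea of repeated runs with a median comparison and a tolerance band can be sketched as follows; the baseline value and the 10 % tolerance are illustrative assumptions.

```python
from statistics import median

def within_tolerance(runs_ms, baseline_ms, tolerance=0.10):
    """Accept if the median of repeated runs deviates from the baseline
    by no more than the given relative tolerance (default 10 %)."""
    med = median(runs_ms)
    return abs(med - baseline_ms) / baseline_ms <= tolerance

# A single flaky outlier run does not fail the comparison:
runs = [510, 495, 2100, 505, 500]  # ms
print(within_tolerance(runs, baseline_ms=500))  # → True
```

Judging the median of several runs instead of a single measurement absorbs one-off outliers, while the tolerance band prevents normal run-to-run variation from being flagged as a regression.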