Selecting your OnPremise OS and Deployment Model (Manager Versions Prior to 9.1.5)

Operating System Environments

qTest OnPremise supports 3 types of OS environments:

  • Windows native
  • Linux native
  • Docker on Linux

NOTE: If your organization has a closed network, we recommend that you do NOT select Docker as your environment. Docker's installation package is a web installer, so it requires internet access to configure and deploy the Docker container on the local environment. If you have a closed network, select a Windows native or Linux native environment instead.

This article is for OnPremise versions prior to Manager 9.1.5. If you are using OnPremise 9.1.5 or later, refer to this article.

Architecture Diagram

Connectivity between components is agnostic to server deployment. qTest components (and prerequisite applications) communicate with each other via TCP and HTTP connections, so they can be installed on different systems as long as they can connect via specific ports. Files that need to be shared among services can be hosted on a distributed file system such as NFS or SMB.
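Before installing components on separate systems, it is worth confirming that each host can reach its peers on the required ports. The sketch below is illustrative only: the hostnames are placeholders, and the ports shown are the common defaults for PostgreSQL, MongoDB, and NFS; substitute the servers and ports from your own qTest configuration.

```shell
#!/usr/bin/env bash
# Sketch: verify that a qTest component host can reach its peers on the
# required ports. Hostnames and ports below are placeholders.

check_port() {
  local host=$1 port=$2
  # Attempt a TCP connection using bash's /dev/tcp, bounded by a 2s timeout.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK          ${host}:${port}"
  else
    echo "UNREACHABLE ${host}:${port}"
  fi
}

# Example peer list (hypothetical hosts, default ports):
check_port backend.example.internal 5432   # PostgreSQL
check_port backend.example.internal 27017  # MongoDB
check_port files.example.internal   2049   # NFS
```

Running this from each application server against its backend peers quickly surfaces any firewall rule or routing gap before installation begins.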


Deployment Models

Select a deployment model based on the number of users (see: Server Sizing Guide) and your performance, availability, and security requirements:


All-in-One

Deploy all qTest components, including their prerequisites, on the same system (host, server, or virtual machine).

  • Benefits: This is the simplest deployment approach, which makes configuration, monitoring, and troubleshooting easy. 


Multiple Servers

Deploy qTest components and their prerequisites grouped across multiple servers.

  • Benefits: Better resource allocation and security control compared to the all-in-one approach. 
  • For example, 2 servers: 
    • 1 for frontend application services (e.g. all qTest applications)
    • 1 for backend services (e.g. all prerequisite applications)


  • For example, 4 servers:
    • 1 for primary application services (e.g. Manager and Sessions)
    • 1 for secondary application services (e.g. Insights, Parameters, Launch, Pulse)
    • 1 for primary database services (e.g. Postgres, Mongo, NFS/SMB)
    • 1 for secondary backend services (e.g. Redis, RabbitMQ)
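In a multi-server split like the ones above, the shared file store (e.g. NFS) lives on a backend server and is mounted on each application server. A minimal sketch, assuming a hypothetical export host and path; replace both with your own NFS server and the directory your qTest services are configured to share:

```conf
# /etc/fstab entry on each qTest application server.
# "files.example.internal:/exports/qtest" is a placeholder export.
files.example.internal:/exports/qtest  /mnt/qtest  nfs  defaults,_netdev  0  0
```

The `_netdev` option defers mounting until the network is up, which avoids boot-time failures when the NFS server is on a separate host.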


High Availability (HA)

Use a load balancer to distribute traffic between multiple instances of the same service. Due to its complexity, we recommend this model only to organizations that have a real business need and adequate technical resources to maintain it.

  • Benefits: Best performance, security, and availability. 
  • For example, 5 servers: 
    • 1 for the front end load balancer
    • 2 for primary applications (e.g. Manager and Sessions only)
    • 1 for secondary applications (e.g. Insights, Parameters, Launch, Pulse)
    • 1 for the database service (e.g. all prerequisites)
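The front-end load balancer in this topology can be any HTTP load balancer. As one possibility, here is a minimal nginx sketch that spreads traffic across two primary application servers; the hostnames and the port 8080 are placeholders, not qTest defaults:

```nginx
# Hypothetical load-balancer config: round-robin across two instances
# of the primary application. Replace hosts/ports with your own.
upstream qtest_primary {
    server manager1.example.internal:8080;
    server manager2.example.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://qtest_primary;
        proxy_set_header Host $host;
    }
}
```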


  • For example, 10 servers:
    • 2 for the front-end load balancers, either with a floating IP shared between them or with each load balancer dedicated to one primary application (see below)
    • 2 for the first set of primary applications (e.g. Manager)
    • 2 for the second set of primary applications (e.g. Sessions)
    • 1 for the BI application (e.g. Insights)
    • 1 for the secondary applications (e.g. Parameters, Launch, Pulse)
    • 1 for the primary database services (e.g. Postgres, Mongo, NFS/SMB)
    • 1 for the secondary backend services (e.g. Redis, RabbitMQ)
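The floating-IP option for the paired load balancers is commonly implemented with VRRP, for example via keepalived. A sketch of the master node's configuration, assuming hypothetical interface and address values:

```conf
# /etc/keepalived/keepalived.conf on the master load balancer.
# The backup node uses state BACKUP and a lower priority.
# Interface name and virtual IP are placeholders.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100
    }
}
```

If the master load balancer fails, the backup node claims the virtual IP, so clients keep using a single address for the qTest front end.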




