Hardware requirements

Nexthink Appliance

The Nexthink Appliance is powered by a Linux-based 64-bit operating system and includes all the packages needed to run the Nexthink server components, the Portal and the Engine. Nexthink recommends installing the Portal and the Engine on separate physical or virtual machines, except for some small setups, where they can share the same Appliance. When installed in virtual machines, hardware requirements may vary depending on the infrastructure load.

Nexthink supports the following virtualization platforms:

  • VMware ESXi version 7.0

  • Hyper-V on Windows Server (only VMs of type Generation 1)

  • Kernel-based Virtual Machines (KVM) hosted on EL7 Linux or Ubuntu 20.x

The Portal and the Engine may nevertheless run on other virtualization platforms. Beware that some versions of popular virtualization platforms impose limits on the number of CPUs and the amount of RAM that you can assign to a virtual machine. In installations with many devices, these maximum values may fall short of the specified requirements. Likewise, in virtualized environments with high loads, the performance of I/O operations may not be sufficient for the Portal or the Engine to write to disk normally. In case of doubt, contact Nexthink Customer Success Services to validate your virtualized setup.

In all cases, your servers must be powered by 64-bit compatible processors (AMD64 or Intel 64 architecture, not Itanium). Most AMD and Intel processors currently available comply with this requirement. Because of the high memory bandwidth demands of the Portal and the Engine, installation on machines with a NUMA (Non-Uniform Memory Access) architecture is not supported.
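If you are unsure whether a Linux host exposes a NUMA topology, the following minimal sketch counts the memory nodes that the kernel reports under /sys (the sysfs path is standard on Linux, but the script itself is only an illustration, not a Nexthink tool):

Code
import glob

# Count the NUMA memory nodes exposed by the Linux kernel under sysfs.
nodes = glob.glob("/sys/devices/system/node/node[0-9]*")

if len(nodes) > 1:
    print(f"{len(nodes)} NUMA nodes detected: not supported for the Portal or the Engine")
else:
    print("Single memory node: no NUMA constraint")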

For installations in the cloud, the required types of Azure or AWS compute nodes are indicated.

General considerations

The following definitions apply to all tables of requirements detailed below, for both virtual and physical appliances.

CPU cores

The number of CPU cores required by the Appliance. The reference model for a CPU core in a physical appliance is a single CPU core of an Intel Xeon E5-2695 v3 @ 2.3 GHz. Fewer CPU cores may be required when using newer or faster CPUs. Likewise, depending on the measured performance of a specific setup and its particular CPU models, Nexthink may ask customers to increase the number of CPU cores in the Appliance to keep system usability up to acceptable levels. Always validate with Nexthink Customer Success Services in case of doubt.

Memory

The amount of RAM required by the Appliance. As for the type of RAM, the minimum requirement for all configurations is DDR3-1600 with a data rate of 1600 MT/s. Dedicate RAM exclusively to the Nexthink Appliance and avoid using shared memory, which may negatively impact performance.

Many aspects of the usage of Nexthink and your infrastructure can affect the performance and requirements of your appliances: the total number of devices and their actual activity; the number of defined metrics, services, categories, scores, and remote actions; the type of hypervisor, the load of other VMs, the vCPU to pCPU ratio and the IOPS, when running on a virtual appliance. All of them impact the performance and the amount of data that can be kept on the Nexthink appliances.

As the number of possible combinations is high, always validate your settings with the help of Nexthink Customer Success Services in case of doubt.

Portal requirements

To help you size the Appliance for hosting the Portal, we define a metric called complexity. The complexity of a setup, along with the number of licensed devices, gives you an idea of the computation power required by the Portal for that particular setup:

complexity = entities * hierarchies * (max_levels + 2)

Where:

  • entities is the total number of entities across all Engines;

  • hierarchies is the total number of hierarchies;

  • max_levels is the number of user-defined hierarchy levels for the hierarchy which has the largest number of levels (excluding both the root level and the entity level).

If you have already defined your hierarchies (in a pre-production environment, for instance), find these numbers in the Portal by logging in as administrator and navigating to ADMINISTRATION → Hierarchies.

In the example above, the total number of entities is 33 and the selected hierarchy has 2 levels (Region and Country). If you have defined more than one hierarchy, select the hierarchy with the highest number of levels and use that number as the value for max_levels in the formula.
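As an illustration, here is a minimal sketch of the computation, assuming a hypothetical setup with the 33 entities and 2 levels from the example and, additionally, 2 hierarchies (the function is illustrative, not part of the product):

Code
def portal_complexity(entities, hierarchies, max_levels):
    # entities:    total number of entities across all Engines
    # hierarchies: total number of hierarchies
    # max_levels:  user-defined levels of the deepest hierarchy
    #              (excluding the root and the entity levels)
    return entities * hierarchies * (max_levels + 2)

# Hypothetical setup: 33 entities, 2 hierarchies, 2 levels in the deepest hierarchy.
print(portal_complexity(entities=33, hierarchies=2, max_levels=2))  # 264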

According to the size and complexity of your setup, the hardware requirements of the Appliance that hosts the Portal are the following:

Max devices | Max complexity | Memory | Disk (500 / 1000 metrics) | Details, 90 days (500 / 1000 metrics) | CPU cores | Network
150 k | 60 000 | 59 GB | 1 TB / 2 TB | 1 TB / 2 TB | 8 | 1 Gbps
100 k | 40 000 | 41 GB | 600 GB / 1.2 TB | 700 GB / 1.4 TB | 6 | 1 Gbps
50 k | 12 000 | 23 GB | 300 GB / 600 GB | 450 GB / 900 GB | 6 | 100 Mbps
20 k | 4 000 | 17 GB | 200 GB / 400 GB | 220 GB / 440 GB | 4 | 100 Mbps
10 k | 2 000 | 13 GB | 100 GB / 200 GB | 120 GB / 240 GB | 4 | 100 Mbps
5 k | 2 000 | 12 GB | 60 GB / 120 GB | 60 GB / 120 GB | 2 | 100 Mbps

  • Ask Nexthink Support for setups with more than 150k devices.

  • The Portal requires at least 10 MB/s of disk throughput.

  • The total number of entities across all Engines is limited to 8 000.

  • The table shows the disk space required for the default maximum of 500 enabled metrics and for the absolute maximum of 1000 enabled metrics (first and second value of each pair). The relation between required disk space and the number of metrics is roughly linear, so doubling the number of metrics doubles the required disk space. Nexthink recommends increasing the default maximum number of enabled metrics gradually.

The quantities in the Details column correspond approximately to the additional disk space required to store 90 days of historical details of count metrics. Add the value in the Disk column to the value in the Details column to get the total disk space required. For more information, see the article about data retention in the Portal.
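For example, a quick computation of the total disk space for a hypothetical 50 k-device setup with the default 500 enabled metrics, taking the values from the table above:

Code
# Values from the Portal requirements table: 50 k devices, 500 enabled metrics.
disk_gb = 300      # Disk column
details_gb = 450   # Details column (90 days of count-metric details)

print(disk_gb + details_gb)          # 750 GB of total disk space
print(2 * disk_gb + 2 * details_gb)  # roughly 1500 GB with 1000 enabled metrics (linear relation)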

Once you have an Appliance that meets the requirements of the Portal, configure the Portal to allocate your hardware resources and make the most out of them.

Virtualized Portals

The same hardware requirements stated above apply to Portals in virtualized environments. The recommended types of virtual machines for installing the Portal on the supported cloud platforms are the following:

Max devices | Max complexity | Azure | AWS
150 k | 60 000 | Standard_E8s_V3 | r5d.2xlarge
100 k | 40 000 | Standard_DS13_V2 | r5d.2xlarge
50 k | 12 000 | Standard_DS4_v2 | r5d.2xlarge
20 k | 4 000 | Standard_E4s_v3 | r5.xlarge
10 k | 2 000 | Standard_DS3_V2 | r5.xlarge
5 k | 2 000 | Standard_DS3_V2 | r5.large

For more information about the requirements of cloud platforms, see the installation instructions on Azure and AWS.

Engine requirements

The following table holds the hardware requirements of the Engine for a few representative configurations:

Max events | Max devices | Max entities | Memory | Disk | CPU cores | Network | Disk throughput
200 M | 10 k | 500 | 26 GB | 250 GB | 14 | 1 Gbps | SSD
100 M | 10 k | 100 | 18 GB | 200 GB | 10 | 1 Gbps | 25 MB/s
50 M | 3 k | 100 | 8 GB | 100 GB | 5 | 100 Mbps | 10 MB/s

  • Contact Customer Success Services if you need to exceed the limit of 200 million events.

  • SSD drives are required for setups with more than 100 million events not only because of their high throughput but also because of their faster random accesses, which are critical for the correct functioning of the Engine. These are the reference specifications that your disk setup should match:

    • Block size for a single operation: 4 KB

    • IOPS: 20 000 (random access)

    • Bandwidth: 80 MiB/s

    • Latency: 0.1 ms

  • If you install the Collector on Windows servers, take into account when sizing the Engine that a single server is roughly equivalent to 20 normal devices.

  • The indicated number of cores includes 20 simultaneous Finder users. If more than 20 users access the Nexthink Engine simultaneously, 1 additional core is required for every 5 users, up to a maximum of 24 cores (see the sizing sketch after this list).

  • Tests under controlled conditions have demonstrated that the Engine is capable of dealing with up to 100 normalized Finder users when run on a 24-core appliance (20 users for the first 8 cores + 16 cores * 5 users per core).

  • A normalized user is characterized by querying the Engine once every 25 seconds with a query that takes 10% of a core dedicated to the Engine. If Finder users deviate too much from this behavior, the number of supported users may vary drastically. Note as well that any other kind of query to the Engine (such as queries to the Web API) reduces the number of supported users.

  • The maximum number of supported mobile devices for all Engine configurations is 5 000.

  • For Engines with Nexthink Act and more than 8000 devices, allocate an extra half GB of RAM (already accounted for in the table above).
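The rule of one additional core per 5 extra Finder users can be turned into a small sizing sketch (the helper below is illustrative, not part of the product):

Code
import math

def engine_cpu_cores(base_cores, finder_users):
    # base_cores:   cores from the sizing table (they already cover 20 users)
    # finder_users: expected simultaneous normalized Finder users
    extra_users = max(0, finder_users - 20)
    extra_cores = math.ceil(extra_users / 5)   # 1 additional core per 5 extra users
    return min(base_cores + extra_cores, 24)   # capped at the 24-core maximum

# Example: 100 M events configuration (10 cores) with 45 simultaneous Finder users.
print(engine_cpu_cores(base_cores=10, finder_users=45))  # 15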

Because memory consumption in the Engine highly depends on the usage of Nexthink, the most difficult hardware requirement of the Engine to adjust is the RAM. Once you have made a rough estimation of your hardware needs for the Engine, fine-tune your RAM requirement by computing it with the following formula:

Code
RAM [bytes] = #Events * 60 +
              #Devices * 500,000 +
              #Entities * #Services * 80,000 +
              3 GB (system and proxy) +
              0.5 GB (for Nexthink Act if #Devices > 8000)

Remember the maximum values per Engine for each parameter that you can enter in the formula:

Code
#Events = 200 million
#Devices = 10,000
#Entities = 500
#Services = 100
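A minimal sketch of the same estimation in Python (the function name and the example figures are illustrative only):

Code
GB = 1024 ** 3

def engine_ram_bytes(events, devices, entities, services):
    # Apply the sizing formula above.
    ram = (events * 60
           + devices * 500_000
           + entities * services * 80_000
           + 3 * GB)                 # system and proxy
    if devices > 8_000:
        ram += GB // 2               # extra 0.5 GB for Nexthink Act
    return ram

# Example: 100 M events, 10 000 devices, 100 entities, 100 services.
print(engine_ram_bytes(100_000_000, 10_000, 100, 100) / GB)  # roughly 14.5 GB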

Virtualized Engines

The same hardware requirements apply to Engines in virtualized environments. The following table completes the representative configurations in the table above with the recommendations for virtual and cloud platforms:

Max events | Max devices | MHz allocation to resource pool | Azure | AWS
200 M | 10 k | from 1 to 2 GHz per vCPU | Standard_F16s_v2 | c5.4xlarge with SSD gp2
100 M | 10 k | from 1 to 2 GHz per vCPU | Standard_F16s_v2 | c5.4xlarge with SSD gp2
50 M | 3 k | from 1 to 2 GHz per vCPU | Standard_F8s_v2 | c5.2xlarge with HDD sc1

For vCPUs, the number of cores per socket must be one. Regarding the allocated MHz, it is fine to go with the lower value of the range if proper monitoring of the infrastructure is in place. In case of performance issues, Support will ask you for monitoring information (CPU ready ratio, co-stop, RAM usage, etc.) and will most probably ask you to increase your settings.

Regarding memory requirements, because the Engine is an in-memory database, much depends on how the hypervisor handles memory overcommitment for the appliance OS and the Engine process. If the hypervisor can find and consolidate memory pages with identical content across Engine VMs on the same host, overcommitting may be acceptable. Again, in case of performance issues, Support will ask you for monitoring information about memory usage and overcommitment and may ask you to increase your settings.

Because the Portal mainly identifies the Engine appliances through the MAC address of their network cards for licensing purposes, it is important that the MAC addresses of your virtual appliances do not change over time. Use static assignment of MAC addresses on all your virtual appliances to avoid licensing issues, especially when rebooting the machines.

For more information about the requirements of cloud platforms, see the installation instructions on Azure and AWS.

Running Nexthink on a single appliance

For very small installations, the Portal and the Engine can run on the same physical or virtual Appliance.

Max devices: 1 000
Max complexity: 2 000
Events: 20 M
Memory: 19 GB
Disk capacity: 120 GB
Disk write speed: 10 MB/s
CPU cores: 6
Network: 100 Mbps

Nexthink traffic redirection service

The Collector traffic redirection service (nxredirect) is a tool included in the Engine appliance that resends activity information (UDP traffic) received from the Collectors to one or more additional Engines. Optionally, the redirection service is able to anonymize sensitive Collector data on the fly.

The hardware requirements of nxredirect depend on whether the service runs alongside the Engine or on an appliance where the Engine has been stopped:

Nxredirect is run alongside the Engine

The maximum number of supported devices is 10 000 without anonymization, or 5 000 if anonymization is switched on. The hardware requirements of the Engine apply (see table above). No additional hardware is needed.

Nxredirect is run independently (i.e. the Engine has been stopped)

The maximum number of supported devices depends heavily on anonymization being switched on or off, ranging from 5 000 up to 350 000 devices.

Assuming an average traffic per device as indicated in the product overview of the Collector, the hardware requirements of nxredirect are as follows:

Configuration | Max devices | Anonymization | CPU cores | Memory | Disk
Nxredirect + Engine | 5 000 | On | Engine reqs | Engine reqs | N/A
Nxredirect + Engine | 10 000 | Off | Engine reqs | Engine reqs | N/A
Nxredirect alone | 5 000 | On | 2 | 5 GB | N/A
Nxredirect alone | 350 000 | Off | 2 | 5 GB | N/A

External backups

The disk space requirements given for the Appliance already take into account the amount of space needed to keep up to ten internal backups of either the Portal or the Engine.

If you activate external backups, Nexthink recommends reserving the following quantities of external storage, depending on the size of your setup. The figures indicate the file size of each individual backup.

Nexthink Portal

The backup size for the Portal depends on the number of devices, the complexity, the amount of history and the number of widgets and reports. We recommend regularly monitoring the used capacity and adapting it based on actual needs.

Max devices | External backup size
150 k | 50 GB
100 k | 30 GB
50 k | 15 GB
20 k | 10 GB
10 k | 5 GB
5 k | 3 GB

Nexthink Engine

The disk requirements for the backup of the Engine are more predictable than those of the Portal and only depend on the number of events stored in the Engine.

Max events | External backup size | Network throughput
200 M | 16 GB | 30 MB/s
100 M | 8 GB | 25 MB/s
50 M | 4 GB | 15 MB/s

Data Enricher

The following content applies exclusively to the Nexthink Cloud offering.

For the Cloud offering of Nexthink, the Windows Server that runs the Data Enricher requires the following hardware:

Number of items | CPU cores | Memory | Min network bandwidth
1 k - 20 k users, 1 k - 10 k destinations | 2 | 4 GB | 25 Mbps
20 k - 100 k users, 10 k - 20 k destinations | 4 | 4 GB | 25 Mbps
100 k - 500 k users, 20 k - 100 k destinations | 4 | 8 GB | 25 Mbps

Mobile Bridge

To collect information from mobile devices synchronized via ActiveSync with Microsoft Exchange, the Mobile Bridge uses a Remote PowerShell connection to your Exchange Client Access server.

Install the Mobile Bridge on a dedicated Windows Server 2008 R2 or later. The hardware requirements for the Mobile Bridge are the same as those recommended by Microsoft for the operating system itself. The Mobile Bridge is compatible with Exchange 2010 SP2 or 2013.

Nexthink Collector

Disk: 35 MB
Network card: Any, wireless or wired
Average network bandwidth: 100-150 bps without Web & Cloud, 150-250 bps with Web & Cloud

Nexthink Finder

Starting from Nexthink V6.3, the Finder supports high DPI screens. When setting DPI scaling in Windows, the Finder adapts its size properly.

Memory: 4 GB system memory, at least 2 GB available
Disk capacity: 50 MB
CPU: 2 cores, 2 GHz
Network: 100 Mbps recommended

Certified Hardware List

Nexthink V6 appliances include a Linux-based operating system that is derived from the freely distributed sources of a major North American Enterprise Linux vendor. This vendor maintains a list of supported hardware that has been tested and is certified to work with its Linux distribution. To help you choose the hardware for your appliances (the Portal and one or more Engines), verify that it is in the following list:

Certified Hardware List (Red Hat link)

