Data we collect and store

Data categories

Nexthink distinguishes between two types of data: objects and events.

Objects

Inventory objects

Inventory objects represent physical or virtual items related to the digital environment. Objects contain elements that, once captured, rarely change.

Object    Properties
binary    name, size, version, …
user      name, username, department, …
device    name, CPU, OS, …

Configuration objects

Configuration objects refer to all objects that Nexthink users configure, such as alerts, applications, campaigns and remote actions.

Object          Properties
monitors        name, threshold, priority, …
campaigns       name, status, trigger_method
remote_actions  name, …

Events

The main characteristic of an event is that it is time-linked: events represent occurrences of something that happened at a given time within your IT environment, for example, execution.events or web.errors.

Event data serves various purposes, with the primary distinction being between operational use and trend observation.

Operational data

Use operational data to detect, diagnose and resolve specific problems. These include live events captured from employee devices and alerts triggered by monitors.

Examples:

  • execution.events

  • execution.crashes

  • device_performance.events

  • remote_action.executions

  • alert.alerts

Operational data is granular and extensive. Nexthink stores this data for up to 30 days. Access operational data through various Nexthink modules and use the drill-down capabilities to view it in Investigations. Alternatively, access Investigations directly and use the Visual editor or write an NQL query to retrieve operational data.

Trend data

Trends allow you to analyze changes in metrics over a long period to observe patterns and support strategic decisions. Trends are less granular and are stored for up to 13 months. They comprise operational event data aggregated into 1-day or 7-day samples and reduced to relevant metrics and properties.

Various Nexthink modules store trend data by default. Configure module content and observe trends after the system has collected related operational data over a sufficiently long period.

Examples:

  • In the Software metering module, view data for a period of up to 90 days. You can also query this data in Investigations: software_metering.events.

  • In the Remote Actions module, view data for a period of up to 13 months and query this data in Investigations: remote_action.executions_summary.

  • In the Applications module, view application-specific data for a period of up to 90 days.

  • In the Digital Experience module, view DEX scores for a period of up to 13 months (See also Precomputed metrics on this page).

Create your own custom trends to capture long-term data that is relevant to you, query it in Investigations and create dashboards to get valuable insights. Go to the Custom trends management documentation page for more information.

Event collection types

There are two types of event collections: punctual and sampled.

Punctual events

Punctual events reflect occurrences at the exact time they happen, such as crashes, boots or logins.

Event                    Description               Associations          Properties         Metrics
execution.crash          A crash of a binary       user, device, binary  time, binary_path  cardinality
session.login            A user login on a device  user, device          time, session_uid  time_until_desktop_ready, time_until_desktop_visible
device_performance.boot  A device booting          device                time, type         boot_duration

Sampled events

Sampled events refer to a data collection method essential for monitoring dynamic metrics associated with continuous and long-term activities. This is particularly important for metrics such as CPU utilization, memory usage, and process traffic, which constantly fluctuate and require regular sampling and aggregation to represent data accurately.

The Collector sampling process occurs frequently, every 20-30 seconds, resulting in high-resolution data. This data is then structured into aggregated time slices of either 5 or 15 minutes, depending on the specific requirements of the data collection. These time slices make the data easier to analyze.
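The slicing step can be sketched in Python. This is a simplified illustration, not the actual Collector implementation; the function name and slice boundaries are assumptions based on the behavior described above:

```python
from datetime import datetime

def slice_start(ts: datetime, slice_minutes: int = 15) -> datetime:
    """Map a raw sample timestamp to the start of its aggregation time slice."""
    minute = (ts.minute // slice_minutes) * slice_minutes
    return ts.replace(minute=minute, second=0, microsecond=0)

# Samples taken roughly every 30 seconds between 08:00 and 08:15
# all land in the same 15-minute slice starting at 08:00.
s1 = slice_start(datetime(2024, 6, 1, 8, 0, 0))
s2 = slice_start(datetime(2024, 6, 1, 8, 12, 30))
print(s1 == s2)  # both map to the 08:00 slice
```

All samples that fall within one slice are then merged into a single aggregated event, as described in the aggregation section below.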

Event                      Description                                               Associations          Properties               Metrics
session.events             Sample indicating when a device is reporting to Nexthink  user, device          protocol, session_ID, …  RTT, latency, interaction_time, …
execution.events           Executions of a process with resources consumed           user, device, binary  –                        CPU_time, outgoing_traffic, memory_used, …
device_performance.events  Resources consumed by a device                            device                –                        CPU_usage, read_operation_per_second, used_memory, …

Aggregation of sampled events

During aggregation, the system merges similar events and combines their metrics using different functions depending on the data type (sum, average, percentile, etc.). Nexthink picks the most meaningful aggregation function for each metric so that the aggregated value remains representative.

To illustrate, outgoing_traffic is summed, while connection_establishment_time is averaged.

Example 1 - Multiple processes

Consider chrome.exe running on the same device with the same user, but as three separate processes.

time           binary.name  outgoing_traffic  connection_establishment_time.avg
08:00 - 08:12  chrome.exe   15 MB             6ms
08:05 - 08:12  chrome.exe   5 MB              10ms
08:10 - 08:14  chrome.exe   10 MB             20ms

Data would be aggregated and stored as a 15-minute sampled event starting at 08:00 and finishing at 08:15.

start_time  end_time  binary.name  outgoing_traffic     connection_establishment_time.avg
08:00       08:15     chrome.exe   30 MB (15 + 5 + 10)  12ms ((6 + 10 + 20) / 3)

Query with NQL in the following way:

execution.events during past 15min
| where binary.name == "chrome.exe"
| list start_time, end_time, outgoing_traffic, connection_establishment_time.avg
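The merge in this example can be reproduced with a short Python sketch. The record layout and field names below are illustrative, not the actual Nexthink schema:

```python
# Hypothetical records mirroring the three chrome.exe process rows above.
rows = [
    {"outgoing_traffic_mb": 15, "connection_establishment_time_ms": 6},
    {"outgoing_traffic_mb": 5,  "connection_establishment_time_ms": 10},
    {"outgoing_traffic_mb": 10, "connection_establishment_time_ms": 20},
]

# Traffic is summed; connection establishment time is averaged.
total_traffic = sum(r["outgoing_traffic_mb"] for r in rows)
avg_conn_time = sum(r["connection_establishment_time_ms"] for r in rows) / len(rows)

print(total_traffic)  # 30 (MB)
print(avg_conn_time)  # 12.0 (ms)
```

Note how each metric keeps its own aggregation function: summing connection times or averaging traffic would produce a misleading data point.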

Example 2 - Device CPU

To store the cpu_usage of a particular device, Nexthink Collector takes samples of the CPU load every 30 seconds.

time      cpu_usage
08:00:00  80%
08:00:30  55%
08:01:00  75%
…
08:04:00  90%
08:04:30  95%

For a device running from 08:00 to 08:05, ten samples are generated and sent to the Nexthink instance, which aggregates them into a single value.

start_time  end_time  cpu_usage.avg
08:00       08:05     82% ((80 + 55 + 75 + ... + 90 + 95) / 10)

Query with NQL in the following way:

device_performance.events during past 5min
| list start_time, end_time, cpu_usage.avg 
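The averaging step itself is a plain arithmetic mean. The sketch below applies it to the five samples shown in the table above; the documented 82% averages all ten samples, five of which are not listed:

```python
# The five 30-second CPU samples shown in the table above
# (the remaining five of the ten samples are not listed).
samples = [80, 55, 75, 90, 95]

def aggregate_cpu(samples):
    """Average raw CPU samples into a single cpu_usage.avg value."""
    return sum(samples) / len(samples)

print(aggregate_cpu(samples))  # 79.0 for the five samples shown
```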

Aggregation allows the system to store data over extended periods and retrieve it quickly, without compromising its ability to generate insights.

Precomputed metrics

The precomputed metrics currently used in DEX score computation are based on data from the past 7 days. The DEX score is computed daily and saved in the database as a punctual event corresponding to the computation time, even though it factors in data from the entire 7-day period.

To retrieve the DEX score value computed today, based on data from the past 7 days, use: dex.scores during past 24h.

Example

Your company introduced automated remediation of SharePoint issues using workflows on June 1, 2024. To retrieve the application DEX score that includes only events occurring after the remediation started, query the score on or after June 8, 2024, for example on July 8, 2024:

users
| include dex.application_scores on Jul 8, 2024
| where application.name in ["Sharepoint"] and node.type == application
| compute sharepoint_score_per_user = node.value.avg()
| summarize sharepoint_score = sharepoint_score_per_user.avg()
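The June 8 cut-off follows directly from the 7-day lookback window, as this small date-arithmetic sketch shows:

```python
from datetime import date, timedelta

remediation_start = date(2024, 6, 1)  # workflows introduced
dex_window = timedelta(days=7)        # DEX scores factor in the past 7 days

# First day on which a daily DEX score covers only post-remediation data.
first_clean_score_date = remediation_start + dex_window
print(first_clean_score_date)  # 2024-06-08
```

Any score computed before that date still blends pre- and post-remediation events within its 7-day window.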
