Splunk add-on installation and usage

Introduction

This document provides comprehensive information on the definition and installation of event types and data models into a Splunk instance. The Nexthink Splunk Add-on for the Nexthink Event Connector allows users to map and make the most of the Nexthink data injected into a Splunk instance.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us via the Nexthink support portal.

This document is intended for readers with a detailed understanding of Nexthink technology and Splunk technology, as well as some understanding of concepts such as databases and data models.

The installation instructions provided here should be executed by a Splunk certified professional.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

Overview

The Nexthink Splunk Add-on allows a Splunk Enterprise administrator to use the proper event types and data models to map the data provided by the Nexthink Event Connector.

The Nexthink Splunk Add-on includes the following new features:

  • Three new event types for the Nexthink events that correspond to CIM data models.

  • Thirteen new data models matching the different types of events provided by Nexthink.

Requirements

Prerequisites for the Nexthink Splunk Add-on installation:

  • Splunk Enterprise 6.5 or later

  • Splunk Common Information Model (CIM) 4.8 or later installed on the Splunk instance, in order to use the event types included in the add-on.

Installation

Option 1: from store

  • Click on App: Search & Reporting > Find More Apps in the main menu of the Splunk instance.

  • Type “Nexthink Add-on for Splunk” in the top search box.

  • Click Install.

Option 2: file install

You can download the add-on file and install it on the Splunk instance by following these steps. First, obtain the add-on file that will be uploaded to the Splunk instance.

To upload the add-on file:

  • Log in to the Splunk instance where you want to install the add-on.

  • From the home page of the Splunk instance, go to Apps > Manage Apps.

  • Click on Install app from file.

  • Choose the add-on file and click on Upload.

Event types

Name | Search String | Tag(s)
nxt_connection | sourcetype=_json source=Nexthink (event_type=established_connections OR event_type=failed_connections) | communicate, network
nxt_execution | sourcetype=_json source=Nexthink (event_type=execution) | cpu, performance
nxt_web_request | sourcetype=_json source=Nexthink (event_type=established_web_requests OR event_type=failed_web_requests) | web
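
With the add-on installed, these event types can be used directly in searches. As a minimal sketch (add index=nexthink, or whichever index receives the connector data, if it is not searched by default), the following counts established versus failed connections:

Code
eventtype=nxt_connection 
| stats count by event_type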

Data models

NXT Connection

Applying Nexthink Event to connection

Data Set | Constraints
Connection | source=Nexthink (event_type=failed_connections OR event_type=established_connections)
Failed Connections | event_type=failed_connections
Established Connections | event_type=established_connections

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
cardinality | cardinality | Number | -
destination_ip_address | dest_ip | String | All_Traffic -> Network_Traffic
device_ip_address | src_ip | String | All_Traffic -> Network_Traffic
duration | duration | Number | All_Traffic -> Network_Traffic
id | id | Number | -
incoming_bitrate | incoming_bitrate | Number | -
incoming_traffic | bytes_in | Number | All_Traffic -> Network_Traffic
network_interface_iana_codestring | network_interface_iana_codestring | String | -
network_interface_index | network_interface_index | Number | -
network_interface_type | network_interface_type | String | -
network_response_time | response_time | Number | All_Traffic -> Network_Traffic
outgoing_bitrate | outgoing_bitrate | Number | -
outgoing_traffic | bytes_out | Number | All_Traffic -> Network_Traffic
status | status | String | -
type | transport | String | All_Traffic -> Network_Traffic
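
Assuming the add-on field aliases listed above are applied at search time (and adjusting the index name to your environment), a sketch of a search that aggregates connection traffic per source and destination could look like this:

Code
index=nexthink eventtype=nxt_connection 
| stats sum(bytes_in) as total_bytes_in sum(bytes_out) as total_bytes_out by src_ip dest_ip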

NXT Device Activity

Applying Nexthink Event to device_activity

Data Set | Constraints
Device Activity | source=Nexthink (event_type=device_boot)
Device Boot | event_type=device_boot

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
duration | duration | Number | -
id | id | Number | -
type | type | String | -

NXT Device Error

Applying Nexthink Event to device_error

Data Set | Constraints
Device Error | source=Nexthink (event_type=system_crash OR event_type=smart_disk OR event_type=hard_reset)
System Crash | event_type=system_crash
SMART Disk Failure | event_type=smart_disk
Hard Reset | event_type=hard_reset

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
error_code | error_code | Number | -
error_label | error_label | String | -
id | id | Number | -
type | type | String | -

NXT Device Warning

Applying Nexthink Event to device_warning

Data Set | Constraints
Device Warning | source=Nexthink (event_type=high_io_usage OR event_type=high_memory_usage OR event_type=high_cpu_usage OR event_type=high_number_of_page_faults)
High IO Usage | event_type=high_io_usage
High Memory Usage | event_type=high_memory_usage
High CPU Usage | event_type=high_cpu_usage
High Number of Page Faults | event_type=high_number_of_page_faults

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
duration | duration | Number | -
id | id | Number | -
info | info | String | -
type | type | String | -
value | value | Number | -
warning_duration | warning_duration | Number | -
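
The constraints above can also be used directly in ad-hoc searches. For example, a sketch that charts high CPU usage warnings per hour (the index name is an assumption; adjust it to where the connector data is indexed):

Code
index=nexthink source=Nexthink event_type=high_cpu_usage 
| timechart span=1h count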

NXT Execution

Applying Nexthink Event to execution

Data Set | Constraints
Execution | source=Nexthink (event_type=execution)

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
average_memory_usage | average_memory_usage | Number | -
binary_path | binary_path | String | -
id | id | Number | -
incoming_tcp_traffic | incoming_tcp_traffic | Number | -
incoming_udp_traffic | incoming_udp_traffic | Number | -
outgoing_tcp_traffic | outgoing_tcp_traffic | Number | -
outgoing_udp_traffic | outgoing_udp_traffic | Number | -
privilege_level | privilege_level | String | -
status | status | String | -
total_cpu_time | cpu_time | Number | All_Performance[CPU] -> Performance

NXT Execution Error

Applying Nexthink Event to execution_error

Data Set | Constraints
Execution Error | source=Nexthink (event_type=execution_crash OR event_type=execution_freeze)
Execution Crash | event_type=execution_crash
Execution Freeze | event_type=execution_freeze

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
id | id | Number | -
info | info | String | -
type | type | String | -

NXT Execution Warning

Applying Nexthink Event to execution_warning

Data Set | Constraints
Execution Warning | source=Nexthink (event_type=high_application_cpu OR event_type=high_application_memory)
High Application CPU | event_type=high_application_cpu
High Application Memory | event_type=high_application_memory

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
duration | duration | Number | -
id | id | Number | -
info | info | String | -
type | type | String | -
value | value | Number | -
warning_duration | warning_duration | Number | -

NXT Installation

Applying Nexthink Event to installation

Data Set | Constraints
Installation | source=Nexthink (event_type=installation OR event_type=uninstallation)
Package Installation | event_type=installation
Package Uninstallation | event_type=uninstallation

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
id | id | Number | -
type | type | String | -

NXT Network Scan

Applying Nexthink Event to network_scan

Data Set | Constraints
Network Scan | source=Nexthink (event_type=network_scan)

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
cardinality | cardinality | Number | -
device_ip_address | device_ip_address | String | -
duration | duration | Number | -
id | id | Number | -
network | network | String | -
status | status | String | -
type | type | String | -

NXT Port Scan

Applying Nexthink Event to port_scan

Data Set | Constraints
Port Scan | source=Nexthink (event_type=port_scan)

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
cardinality | cardinality | Number | -
destination_ip_address | destination_ip_address | String | -
device_ip_address | device_ip_address | String | -
duration | duration | Number | -
first_scanned_port | first_scanned_port | String | -
id | id | Number | -
last_scanned_port | last_scanned_port | String | -
status | status | String | -
type | type | String | -

NXT Printout

Applying Nexthink Event to printout

Data Set | Constraints
Printout | source=Nexthink (event_type=printout)

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
color_print | color_print | Boolean | -
document_type | document_type | String | -
duplex | duplex | Boolean | -
id | id | Number | -
number_of_printed_pages | number_of_printed_pages | Number | -
page_size | page_size | String | -
print_quality | print_quality | String | -
size | size | Number | -
status | status | String | -

NXT User Activity

Applying Nexthink Event to user_activity

Data Set | Constraints
User Activity | source=Nexthink (event_type=user_logon)
User Logon | event_type=user_logon

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
duration | duration | Number | -
id | id | Number | -
real_duration | real_duration | Number | -
type | type | String | -

NXT Web Request

Applying Nexthink Event to web_request

Data Set | Constraints
Web Request | source=Nexthink (event_type=established_web_requests OR event_type=failed_web_requests)
Established Web Requests | event_type=established_web_requests
Failed Web Requests | event_type=failed_web_requests

Nexthink Field | Add-on Field | Splunk Datatype | CIM Compliant (data set -> data model)
cardinality | cardinality | Number | -
connections_duration | connections_duration | Number | -
http_status | status | Number | Web -> Web
id | id | Number | -
incoming_traffic | bytes_in | Number | Web -> Web
network_response_time | response_time | Number | Web -> Web
outgoing_traffic | bytes_out | Number | Web -> Web
protocol | protocol | String | -
protocol_version | protocol_version | String | -
service_related | service_related | Boolean | -
web_request_duration | duration | Number | Web -> Web
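
As with the other models, the mapped fields can be queried directly. A sketch that reports the average response time and request count per HTTP status (field names as mapped above; the index name is an assumption):

Code
index=nexthink eventtype=nxt_web_request 
| stats avg(response_time) as avg_response_time count by status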

Use Cases

Traffic Aggregation dashboard in Splunk

What if you want to know, for example: how much incoming traffic belongs to Google Chrome on 24 January 2018 between 11:00 and 13:00?

It’s easy to retrieve that information in Nexthink, either by creating a new investigation in Finder or by composing an NXQL query like the one in the code snippet below:

Code
(select (name)  
    (from application 
        (with connection 
            (where connection  
                (ne status (enum "no host")) 
                (ne status (enum "no service")) 
                (ne status (enum "rejected")) 
            ) 
            (where application 
                (eq name (pattern "Google Chrome")) 
            ) 
            (compute incoming_traffic) 
            (between 2018-01-24@11:00:00 2018-01-24@13:00:00) 
        ) 
    ) 
) 

The result of such a query is 1480649564 bytes (approximately 1.38 GB), as in the illustration below.

How can you achieve that in Splunk? To do so, you run a search like the one described line by line below; the complete search is assembled after the walkthrough. Remember that the data must first be sent to Splunk by the corresponding event configured in the connector.

What are you doing exactly with this search? Let’s go through it line by line:

  • Line 1 (filtering): index=nexthink event_type=detailed_connections earliest=01/24/2018:11:00:0 latest=01/24/2018:13:00:0 app_name="Google Chrome". The first line filters the information you want to retrieve: the index where you are searching (index=nexthink), the type of event (event_type=detailed_connections), the earliest and latest times for the events to be retrieved, and the name of the application under consideration (app_name="Google Chrome").

  • Line 2 (range of incoming traffic per device and per connection): | stats range(bytes_in) as range_bytes_in by src id. In the second line you compute statistics after grouping the events by device (mapped as src) and by connection (mapped as id); this grouping is done with the by clause. Once the events are grouped, the specific statistic is applied: the range, that is, the difference between the maximum and minimum values of the incoming traffic (mapped as bytes_in), per connection and per device. The as clause simply renames the new field to range_bytes_in.

  • Line 3 (summation of incoming traffic for all devices and connections): | stats sum(range_bytes_in) as "Total Incoming Traffic (B)". The third line computes the summation of the aggregated traffic of every connection and every device. Since no by clause is specified this time, the result is a single value, which is renamed to "Total Incoming Traffic (B)".

  • Line 4 (creation of the new field to be shown): | eval "Total Incoming Traffic (GB)"=round('Total Incoming Traffic (B)'/(1024*1024*1024),2)." GB". Lastly, you evaluate the 'Total Incoming Traffic (B)' field to create a new one based on it, converting bytes to gigabytes, rounding the result to 2 decimal places and appending " GB". With this search you get the results displayed as in the illustration below: Total Incoming Traffic (B): 1459712893 and Total Incoming Traffic (GB): 1.36.
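
Assembled from the four lines above, the complete search reads:

Code
index=nexthink event_type=detailed_connections earliest=01/24/2018:11:00:0 latest=01/24/2018:13:00:0 app_name="Google Chrome" 
| stats range(bytes_in) as range_bytes_in by src id 
| stats sum(range_bytes_in) as "Total Incoming Traffic (B)" 
| eval "Total Incoming Traffic (GB)"=round('Total Incoming Traffic (B)'/(1024*1024*1024),2)." GB"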

As you may have noticed, there are some minor differences between the value you get in Splunk and the one provided by Nexthink. This is completely normal, as Splunk stores the information for the desired events at a given frequency, making it possible to compute more precise aggregates.

But what if you want to extract the traffic consumed per device? You can simply add the by src clause to the third line and compute the summation of all connections per device:

| stats sum(range_bytes_in) as "Total Incoming Traffic (B)" by src
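
With that change, the complete per-device search becomes:

Code
index=nexthink event_type=detailed_connections earliest=01/24/2018:11:00:0 latest=01/24/2018:13:00:0 app_name="Google Chrome" 
| stats range(bytes_in) as range_bytes_in by src id 
| stats sum(range_bytes_in) as "Total Incoming Traffic (B)" by src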

If you wanted to retrieve the incoming traffic every two hours for the last 24 hours, you could run another search built as follows (a sketch of the full search is given after the list):

  • Change the time selector to Last 24 hours, as you are not including the temporal filters in this search.

  • The bin command allows you to put the continuous time values of the internal Splunk field _time into discrete buckets of a given span, so that all the items in a particular bucket share the same value.

  • The eval command, along with the strftime function, modifies how the date is going to be displayed.

  • Then, you compute the statistics as in the previous example but including the new Date field in both stats commands.

  • Use the table command to display only the fields of interest.

  • By selecting the Line Chart visualization, you can obtain the result as illustrated below.
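
A sketch of the full search, assuming the same index, event type and field names as in the previous example (adjust them to your deployment), could look like this:

Code
index=nexthink event_type=detailed_connections app_name="Google Chrome" 
| bin _time span=2h 
| eval Date=strftime(_time, "%d/%m/%Y %H:%M") 
| stats range(bytes_in) as range_bytes_in by src id Date 
| stats sum(range_bytes_in) as "Total Incoming Traffic (B)" by Date 
| table Date "Total Incoming Traffic (B)"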

CSC 0 dashboard in Splunk

Splunk allows you to present the information in a more visual manner by modifying the visualization. For instance, how do you display today’s average value of the Compliance Scores and the weekly trend?

  • First, you need to send the information to Splunk. The event configuration is shown in the code snippet below.

Code
## This one is also used for the Scores dashboards (CSC0 and CSC3) 

[CSC1_INVENTORY] 
mode = listing 
query = (select (<device_fields>) 
            (from device 
                (where device 
                    (ne device_type (enum server)) 
                ) 
            )         
      ) 
mapping = {"device_fields": {"name": "src", 
 "id": "src_id",  
 "device_type": "device_type", 
 "os_version_and_architecture": "os_version",  
 "platform": "platform",  
 "#\"score:Device compliance/Device compliance\"": "score_dev_comp_device_compliance", 
 "#\"score:Device compliance/Network\"": "score_dev_comp_network", 
 "#\"score:Device compliance/Protection\"": "score_dev_comp_protection", 
 "#\"score:Device compliance/Software\"": "score_dev_comp_software",        
 "#\"score:Device compliance/System\"": "score_dev_comp_system"}} 
frequency = 1440 
delay = 5 
platforms = windows, mac_os 
  • Once done, you run a simple search in Splunk, see the illustration below.

  • As described in the previous example, the first line filters the events, so you get only the Splunk events of interest from last week, defined with the earliest and latest parameters.

  • Then, you use the timechart command to aggregate events over a given span (span=24h).

  • The fixedrange parameter lets you skip periods with no data.

  • Compute the desired statistics using avg.

  • Lastly, you apply the desired visualization. In the corresponding tab, select Single Value as the visualization type and in Format, select the parameters according to the table below:

Parameter | Value
General > Show Trend Indicator | Yes
General > Show Trend in | Percent
General > Compared to | 24 hours before
General > Show Sparkline | Yes
Color > Use Color | Yes
Color > Color by | Value
Color > Ranges | Select the appropriate ones
Color > Color Mode | First radio button

The value will be displayed as in the illustration below.
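
Putting the steps above together, a sketch of the search could be the following (the event type and score field names follow the connector configuration shown earlier; the earliest and latest values are indicative and may differ in your deployment):

Code
index=nexthink event_type=csc1_inventory earliest=-7d latest=now 
| timechart span=24h fixedrange=false avg(score_dev_comp_device_compliance) as "Average Device Compliance"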

The same approach can be followed to obtain the rest of the Scores in the dashboard.

CSC 1 dashboard in Splunk

To compute both numbers appearing in the dashboard, you simply need to follow the previous approach. The Splunk searches differ only in the filtering part and in the statistic computed by the timechart command, which uses count instead of avg; see the illustration below.
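
For instance, a count-based variant of the previous search, restricted to desktops (event type and field names as in the connector configuration; adjust the filters to the number you want to compute), might look like this:

Code
index=nexthink event_type=csc1_inventory device_type=desktop earliest=-1d latest=now 
| timechart span=24h fixedrange=false count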

To show the inventory, run the search as in the illustration below.

With the eval command you set the value of the Properties and Scores columns to "Click for details". You do that because, in the dashboard itself, you can configure how the drilldown behaves when clicking on each column. Therefore, in the editing mode of the dashboard you select Source, thus displaying the XML code. By playing with the different options, in particular those related to the drilldown, you can define different behaviors. For instance, the code snippet below defines the behavior of the Inventory of Desktops by OS panel; a sketch of the underlying inventory search is given after the explanation of the snippet.

Code
<panel> 
  <title>Inventory of Desktops by OS</title> 
  <table> 
    <search ref="Inventory of Desktops"></search> 
    <option name="count">10</option> 
    <option name="drilldown">cell</option> 
    <drilldown> 
      <condition field="Number of Desktops"> 
        <set token="selected_desktop_os">$click.value$</set> 
        <link target="_blank">search?q=index=nexthink 
              event_type=csc1_inventory
              device_type=desktop 
              earliest=-1d 
              latest=now 
              os_version=$selected_desktop_os|s$ |            
              table src src_id | sort src
        </link> 
      </condition> 
      <condition field="Properties"> 
        <set token="selected_desktop_os">$click.value$</set> 
        <link target="_blank">search?q=index=nexthink 
              event_type=csc1_inventory
              device_type=desktop 
              earliest=-1d 
              latest=now 
              os_version=$selected_desktop_os|s$ |
              fields - _* score_dev_comp* punct linecount index eventtype 
              source sourcetype
              splunk_server splunk_server_group engine 
              event_category event_mode event_type host  
              | table src src_id * | sort src
        </link> 
      </condition> 
      <condition field="Scores"> 
        <set token="selected_desktop_os">$click.value$</set> 
        <link target="_blank">search?q=index=nexthink event_type=csc1_inventory            device_type=desktop earliest=-1d latest=now os_version=$selected_desktop_os|s$ |            table src src_id score_dev_comp* | sort src</link> 
      </condition> 
    </drilldown> 
  </table> 
</panel> 
  • Specify that the drilldown is going to be based on the cell value with the drilldown option.

  • For each column, define a condition (<condition field="column name">) that sets a token from the value of the clicked cell, using the reserved word $click.value$, together with the specific search, using that token, that will be triggered.

  • This drilldown approach is followed, with certain variations, in most of the other dashboards.
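
For reference, a sketch of the inventory search described earlier, with the clickable columns added via eval (column and field names follow the connector mapping and the panel above; your search may differ):

Code
index=nexthink event_type=csc1_inventory device_type=desktop earliest=-1d latest=now 
| stats count as "Number of Desktops" by os_version 
| eval Properties="Click for details", Scores="Click for details" 
| table os_version "Number of Desktops" Properties Scores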

As shown in the examples above, there are many different searches and approaches you can follow in Splunk to get the desired data. Please refer to the Splunk documentation to learn more about specific commands and approaches.

Support

The Nexthink Splunk Add-on is provided “as is” to its customers, without any warranty of any kind. Limited support for the application is provided via the Nexthink support portal.
