Configuring ticket escalation with Spark

Ticket Escalation enables Spark to hand over unresolved employee conversations to the service desk by creating a ticket in the organization’s information technology service management (ITSM) system.

Spark resolves many employee issues through self-service. When an issue requires human intervention, Spark escalates the conversation using a predefined API Call that creates an ITSM ticket and preserves the conversation context.

Administrators configure Ticket Escalation to control how Spark connects to the ITSM system and what information is sent when a ticket is created.

When does Ticket Escalation occur

Spark escalates a conversation when it cannot resolve the issue autonomously, for example:

  • Technical issues that require service desk support

  • Questions that Spark cannot resolve

  • Requests that are not yet automated

Ticket Escalation ensures these conversations are routed through existing ITSM processes and remain traceable.

How Ticket Escalation works

  1. Spark identifies that the issue requires escalation.

  2. The employee confirms that Spark should create a ticket.

  3. Spark calls the predefined API Call and passes the conversation context.

  4. The API Call creates a ticket in the ITSM system.

  5. The ITSM system returns a ticket reference.

  6. Then, Spark:

    • Shares the ticket reference with the employee.

    • Marks the conversation as Escalated in the Spark cockpit, together with the ticket reference.

  7. If ticket creation fails, Spark sends fallback notifications to the employee.

This process ensures continuity and allows support teams to track and follow up on the issue without losing context from the original conversation.
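The escalation sequence above can be sketched as follows. This is an illustrative sketch only, not Spark's actual implementation: create_ticket and notify are hypothetical stand-ins for the configured API Call and Spark's messaging, and the field names are examples.

```python
# Illustrative sketch of the Ticket Escalation sequence (steps 3-7 above).
# create_ticket(), notify(), and the field names are hypothetical stand-ins.

def escalate(conversation, create_ticket, notify, fallback_url):
    """Create an ITSM ticket for a conversation, or fall back on failure."""
    # Step 3: pass the conversation context to the predefined API Call.
    payload = {
        "category": conversation["category"],
        "subcategory": conversation["subcategory"],
        "workNotes": conversation["notes"],
    }
    try:
        # Steps 4-5: the API Call creates the ticket and returns a reference.
        result = create_ticket(payload)
    except Exception:
        # Step 7: ticket creation failed, send the fallback notification.
        notify(f"We could not create a ticket. Please use {fallback_url}")
        return None
    # Step 6: share the reference and mark the conversation as Escalated.
    notify(f"Your ticket {result['ticketNumber']} has been created.")
    conversation["status"] = "Escalated"
    conversation["ticketId"] = result["ticketId"]
    return result["ticketNumber"]
```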

Prerequisites

To configure Ticket Escalation, you must have:

  • Administrator access rights in Nexthink to configure connector credentials and API Calls.

  • Credentials created in the ITSM system, ready for Nexthink integration.

    • Ensure that the credentials have the permissions required to read and create incidents in your ITSM system, for example, the itil role in ServiceNow.

  • A two-tier categorization CSV file based on your ITSM taxonomy: Spark uses it to dynamically assign categories when creating a ticket. Refer to the Upload category and sub_category step on this page.

  • A self-service portal URL that Spark uses to notify employees when it cannot escalate a request.

Integration flow

To configure Ticket Escalation, set up connector credentials and configure an API Call that uses these credentials to create a ticket in the ITSM system. Then, in the Spark Ticket Escalation settings, select this API Call and define which data Spark sends to the ITSM system during escalation.

The following guide describes each step in detail.

1

Configure connector credentials for the ITSM system

Connector credentials define how Nexthink authenticates with the ITSM system.

Create connector credentials to your ITSM system in the Nexthink web interface. Refer to the Connector credentials documentation for more information.

2

Configure the API Call for incident creation

The API Call defines how Nexthink will create a ticket in the ITSM system. From the Nexthink web interface:

  1. Go to Administration > API Calls.

  2. Click New API Call.

  3. Configure the API Call by defining settings across the following tabs:

    • General tab: Define the API Call name, description, and connector credentials (configured in step 1).

    • Request tab: Define the request method, resource endpoint, parameters used to receive values from Spark at execution time, and the payload with dynamic parameters.

      • Under Parameters, define parameters that Spark populates dynamically during ticket escalation. The following example shows parameters used to create a ticket in ServiceNow.

        • Category (category): High-level classification of the issue identified during the conversation.

        • Subcategory (subcategory): More specific issue classification derived from the conversation context.

        • Configuration item (config_item): The affected device or configuration item associated with the issue.

        • Caller (callerId): Identifies the employee for whom the ticket is created.

        • Work notes (workNotes): Diagnostic information and additional context collected by Spark during the conversation.

        Spark assigns values to these parameters at runtime based on the conversation and any diagnostic actions performed.

      • Under HTTP call, define the payload sent to the ITSM system when Spark escalates a conversation. During ticket escalation, Spark injects parameter values into the payload by replacing placeholders with runtime values. Use {{parameter}} for dynamic values. The following example shows parameters used to create an incident in ServiceNow.

        • Method: POST

        • Resource: api/now/v1/table/incident

        • Payload:

    • Output tab: Define the values extracted from the ITSM response and returned to Spark. Use JSONata to define how to extract the ticket details from the payload. The output must contain the following identifiers exactly as specified:

      • Ticket Unique ID: ticketId

      • Ticket Display Number: ticketNumber

  4. After configuring parameters and payload, use the Test results panel to run the API Call with real data, inspect responses, and identify errors. Verify the extracted output fields in a Preview column. The test panel helps validate the configuration and reduce trial and error.
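As an illustration, a payload for the ServiceNow incident endpoint shown above might look like the following. This is a sketch assuming the parameters defined earlier; the exact field names depend on your ITSM schema, and {{parameter}} placeholders are replaced with runtime values during escalation.

```json
{
  "caller_id": "{{callerId}}",
  "category": "{{category}}",
  "subcategory": "{{subcategory}}",
  "cmdb_ci": "{{config_item}}",
  "work_notes": "{{workNotes}}",
  "short_description": "Issue escalated by Spark"
}
```

On the Output tab, a JSONata expression along these lines could map a ServiceNow Table API response to the required identifiers, assuming the standard response shape where the incident's sys_id and number are nested under result:

```
{
  "ticketId": result.sys_id,
  "ticketNumber": result.number
}
```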

3

Upload category and sub_category

From the Nexthink web interface:

  1. Go to Spark > Manage Settings > Issue categorization.

  2. Click Upload CSV file.

  3. Choose the CSV file from your hard drive, with up to 1000 rows per file, to import it into the system.

The file must include two columns, category and sub_category, with each row representing a unique issue type. Spark uses these values to understand the type of issue being discussed and to classify tickets consistently during escalation.
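Before uploading, you may want to check the file against the constraints above. The following sketch is an optional, hypothetical pre-upload check, not part of Nexthink; it assumes the documented limits of two named columns and 1000 rows.

```python
import csv
import io

def validate_categorization_csv(text, max_rows=1000):
    """Check an issue-categorization CSV before uploading it to Spark.

    Expects exactly two columns, 'category' and 'sub_category',
    and at most max_rows data rows (the documented upload limit).
    """
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != ["category", "sub_category"]:
        raise ValueError(f"unexpected columns: {reader.fieldnames}")
    rows = list(reader)
    if len(rows) > max_rows:
        raise ValueError(f"too many rows: {len(rows)} > {max_rows}")
    # Each row should represent a unique (category, sub_category) pair.
    pairs = {(r["category"], r["sub_category"]) for r in rows}
    if len(pairs) != len(rows):
        raise ValueError("duplicate category/sub_category pairs found")
    return rows
```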

Example:

category      sub_category
Hardware      Battery
Hardware      Keyboard
Hardware      Display
Software      OS
Software      Application
Network       WiFi
Network       VPN
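Saved as a CSV file, the example above would look like this (header row first, one issue type per row):

```csv
category,sub_category
Hardware,Battery
Hardware,Keyboard
Hardware,Display
Software,OS
Software,Application
Network,WiFi
Network,VPN
```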

This list defines the valid issue classifications that Spark can select when:

  • Interpreting employee conversations

  • Populating the required categorization parameters during ticket escalation

When Spark is configured to use category and sub_category for decision-making, it determines the issue type based on the values uploaded in this list.

You can upload a custom-defined category and subcategory list aligned with your ITSM taxonomy, including lists exported from different ITSM systems.

4

Configure Spark settings for ticket escalation

From the Nexthink web interface:

  1. Go to Spark > Manage Settings.

  2. Select the Support escalation tab.

  3. Select the previously configured API Call from the list.

    • Define the Parameters field used for ticket creation. These parameters specify which information Spark includes when escalating a ticket. The following example shows parameters commonly used to create a ticket in ServiceNow.

      • Caller (caller_id): Identifies the employee for whom the ticket is created.

      • Short description (short_description): A concise summary of the reported issue.

      • Description (description): A detailed description of the issue, typically including the Spark conversation context.

      • Work notes (work_notes): Internal notes used to include diagnostic or troubleshooting information.

      • Assignment group (assignment_group): The support team responsible for handling the incident.

      • Category (category): High-level classification of the issue, such as Software or Hardware.

      • Subcategory (subcategory): A more specific classification within the selected category.

      • Configuration item (cmdb_ci): The affected device or configuration item.

    • Define the Fallback URL used to notify employees when Spark cannot escalate an incident or request. This fallback behavior determines what Spark sends to employees when escalation fails.
