Ticket Escalation with Spark


Ticket Escalation enables Spark to hand over unresolved employee conversations to the service desk by creating a ticket in the organization’s information technology service management (ITSM) system.

Spark resolves many employee issues through self-service. When an issue requires human intervention, Spark escalates the conversation using a predefined API Call that creates an ITSM ticket and preserves the conversation context.


Administrators configure Ticket Escalation to control how Spark connects to the ITSM system and what information is sent when a ticket is created.

When Ticket Escalation occurs

Spark escalates a conversation when it cannot resolve the issue autonomously, for example:

  • Technical issues that require service desk support

  • Questions that Spark cannot resolve

  • Requests that are not yet automated

Ticket Escalation ensures these conversations are routed through existing ITSM processes and remain traceable.

How Ticket Escalation works

  1. Spark identifies that the issue requires escalation.

  2. The employee confirms that Spark should create a ticket.

  3. Spark calls the predefined API Call and passes the conversation context.

  4. The API Call creates a ticket in the ITSM system.

  5. The ITSM system returns a ticket reference.

  6. Spark then:

    • Shares the ticket reference with the employee.

    • Marks the conversation as Escalated in the Spark cockpit, together with the ticket reference.

  7. If ticket creation fails, Spark notifies the employee through the configured fallback URL.

This process ensures continuity and allows support teams to track and follow up on the issue without losing context from the original conversation.
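The steps above can be sketched in a few lines of Python. This is a purely illustrative model of the flow, not the Spark implementation; the function and field names (`escalate`, `employee_confirmed`, `ticket_ref`) are hypothetical.

```python
# Illustrative sketch of the escalation flow; names are hypothetical,
# not the actual Spark API.

def escalate(conversation, create_ticket, fallback_url):
    """Create an ITSM ticket for an unresolved conversation."""
    if not conversation["employee_confirmed"]:
        return None  # step 2: the employee must confirm first

    try:
        # Steps 3-5: call the predefined API Call with the conversation
        # context; the ITSM system returns a ticket reference.
        ticket_ref = create_ticket(context=conversation["context"])
    except Exception:
        # Step 7: ticket creation failed, so fall back to notifying
        # the employee with the configured fallback URL.
        return {"status": "fallback", "url": fallback_url}

    # Step 6: share the reference and mark the conversation as Escalated.
    conversation["status"] = "Escalated"
    conversation["ticket_ref"] = ticket_ref
    return {"status": "escalated", "ticket_ref": ticket_ref}
```

The key design point the sketch captures is that the ticket reference flows back into the conversation record, so the cockpit and the ITSM system stay linked.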

Prerequisites

Configuring Ticket Escalation involves two configuration steps, each with its own prerequisites.

Connector credentials and API Call

Connector credentials and API Calls define how tickets are created in the ITSM system.

To configure connector credentials and API Calls, you must have:

  • Administrator access in Nexthink

  • Credentials created in the ITSM system and ready for Nexthink integration

Spark Ticket Escalation settings

Spark Ticket Escalation settings define which API Call Spark uses during escalation and which data Spark sends to the ITSM system.

To configure Spark settings, you must have:

  • Access to Spark > Manage Settings

  • A previously configured API Call

Integration flow

The integration flow covers the Ticket Escalation process using API Calls to create ITSM tickets.

1

Configure the API Call credentials for the ITSM system

Connector credentials define how Nexthink authenticates with the ITSM system.

Create connector credentials to your ITSM system in the Nexthink web interface. Refer to the Connector credentials documentation for more information.

2

Configure the API Call for incident creation

The API Call defines how Nexthink creates a ticket in the ITSM system. From the Nexthink web interface:

  1. Go to Administration > API Calls.

  2. Click New API Call.

  3. Configure the API Call by defining settings across the following tabs:

    • General tab: Define the API Call name, description, and connector credentials (configured in step 1).

    • Request tab: Define the request method, resource endpoint, parameters used to receive values from Spark at execution time, and the payload with dynamic parameters.

      • Under Parameters, define parameters that Spark populates dynamically during ticket escalation. The following example shows parameters used to create a ticket in ServiceNow.

        • Category (category): High-level classification of the issue identified during the conversation.

        • Subcategory (subcategory): More specific issue classification derived from the conversation context.

        • Configuration item (config_item): The affected device or configuration item associated with the issue.

        • Caller (callerId): Identifies the employee for whom the ticket is created.

        • Work notes (workNotes): Diagnostic information and additional context collected by Spark during the conversation.

    Spark assigns values to these parameters at runtime based on the conversation and any diagnostic actions performed.

      • Under HTTP call, define the payload sent to the ITSM system when Spark escalates a conversation. During ticket escalation, Spark injects parameter values into the payload by replacing placeholders with runtime values; use {{parameter}} for dynamic values. The following example shows the settings used to create an incident in ServiceNow.

        • Method: POST

        • Resource: api/now/v1/table/incident

        • Payload:

    • Output tab: Define the values extracted from the ITSM response and returned to Spark.

  4. After configuring parameters and payload, use the Test results panel to run the API Call with real data, inspect responses, and identify errors. The test panel helps validate the configuration and reduce trial and error.
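As a minimal sketch of the payload for the ServiceNow example above, the body might map the parameters defined under Parameters to incident fields using {{parameter}} placeholders. The exact field names depend on your ITSM configuration; this is an illustration, not the definitive payload.

```json
{
  "category": "{{category}}",
  "subcategory": "{{subcategory}}",
  "cmdb_ci": "{{config_item}}",
  "caller_id": "{{callerId}}",
  "work_notes": "{{workNotes}}"
}
```

At execution time, Spark replaces each placeholder with the value it assigned to the corresponding parameter during the conversation.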

3

Configure Spark settings for ticket escalation

From the Nexthink web interface:

  1. Go to Spark > Manage Settings.

  2. Select the Support escalation tab.

  3. Select the previously configured API Call from the list.

    • Define the Parameters field used for ticket creation. These parameters specify which information Spark includes when escalating a ticket. The following example shows parameters commonly used to create a ticket in ServiceNow.

      • Caller (caller_id): Identifies the employee for whom the ticket is created.

      • Short description (short_description): A concise summary of the reported issue.

      • Description (description): A detailed description of the issue, typically including the Spark conversation context.

      • Work notes (work_notes): Internal notes used to include diagnostic or troubleshooting information.

      • Assignment group (assignment_group): The support team responsible for handling the incident.

      • Category (category): High-level classification of the issue, such as Software or Hardware.

      • Subcategory (subcategory): A more specific classification within the selected category.

      • Configuration item (cmdb_ci): The affected device or configuration item.

    • Define the Fallback URL that Spark uses to notify employees when an incident or request cannot be escalated.
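The {{parameter}} placeholder mechanism used in the payload can be illustrated with a short sketch. The `render_payload` function is hypothetical; it mimics, rather than reproduces, how Spark injects runtime values before sending the request.

```python
import re

def render_payload(template, values):
    """Replace each {{parameter}} placeholder with its runtime value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)
```

For example, rendering `'{"category": "{{category}}"}'` with `{"category": "Hardware"}` produces a payload whose category field is filled in with the value Spark determined during the conversation.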

Issue categorization for Ticket Escalation

Spark uses categories and subcategories to understand the type of issue being discussed and to classify tickets consistently during escalation. To support different organizational and ITSM requirements, administrators define the category and subcategory values that Spark can use.

Accessing Issue categorization

From the Nexthink web interface:

  1. Go to Spark > Manage Settings > Issue categorization.

  2. Click Upload CSV file.

  3. Select the CSV file to import. A file can contain up to 1000 rows.

The file must include two columns, Category and Subcategory, with each row representing a unique issue type, for example:

Category    Subcategory
Hardware    Battery
Hardware    Keyboard
Hardware    Display
Software    OS
Software    Application
Network     WiFi
Network     VPN

This list defines the valid issue classifications that Spark can select when:

  • Interpreting employee conversations

  • Populating the required categorization parameters during ticket escalation

When Spark is configured to use Category and Subcategory for decision-making, it determines the issue type based on the values uploaded in this list.


You can upload a category and subcategory list that is custom-defined and aligned with your ITSM taxonomy, including lists imported from different ITSM systems.
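Before uploading, it can help to check the file locally against the format described above. The following is a small standalone helper, not part of Nexthink, assuming the two-column header and the 1000-row limit:

```python
import csv
import io

MAX_ROWS = 1000  # upload limit for a categorization file

def validate_categorization(csv_text):
    """Check a Category/Subcategory CSV before uploading it to Spark."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames != ["Category", "Subcategory"]:
        raise ValueError("File must have exactly two columns: Category, Subcategory")
    rows = [(r["Category"], r["Subcategory"]) for r in reader]
    if len(rows) > MAX_ROWS:
        raise ValueError(f"File exceeds {MAX_ROWS} rows")
    if len(set(rows)) != len(rows):
        raise ValueError("Each Category/Subcategory pair must be unique")
    return rows
```

Running it on the example table above would return the seven (Category, Subcategory) pairs; a file with a duplicate pair or a malformed header raises a ValueError before you attempt the upload.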
