Getting started with Spark

Nexthink Spark is an AI agent that interprets and resolves level-1 IT requests and questions across real-time communication channels, currently available only for MS Teams.

By accelerating issue resolution, Spark reduces IT support workload and enhances the employee experience.

Before you begin

Configuring Spark prerequisites requires administrator rights.

Before deploying and using Nexthink Spark:

1. Set up a communication channel for Spark-Teams interactions

Ensure that you meet the prerequisites to set up a Teams communication channel:

  • Use the Collector Option to gather the UPN for each user in clear text. Refer to the Configuring Collector level anonymization documentation for more information.

  • Configure a Microsoft Entra ID inbound connector for your Microsoft tenant

Set up a communication channel in Nexthink to enable Spark interaction with MS Teams.

  • After setting up the communication channel, install the version-specific application package (.zip) for Spark, which Nexthink provides directly for this technical preview.

  • Use the welcome message to inform employees about the chatbot's scope and remind them to exercise judgment when reading AI-generated replies.

2. Configure connector credentials for ServiceNow integration

Set up connector credentials for ServiceNow in Nexthink—other ITSM tool integrations are planned for future releases.

  • Provide Nexthink with the list of required ticket fields and your self-service portal URL.

  • Nexthink completes the initial setup for enabling Spark for ticket/incident creation—a customer-facing UI is planned for future releases.

3. Import knowledge base articles from ServiceNow

Manually upload knowledge base articles from your ITSM tool, ServiceNow, as CSV files into Nexthink to feed Spark's knowledge base.

Step 1 - Identify data to export within ServiceNow
  1. Open the list of knowledge articles in ServiceNow.

  2. Select knowledge base articles by applying these filters:

    • Published articles only.

    • Articles available to your service-desk users. Spark indexes every uploaded article regardless of ITSM permissions.

    • Exclude translated versions of the same article.

    • Include every article you want to import; uploading a new file replaces the previous import.

Manual uploads are not incremental; plan to export all the data required for Spark's knowledge base in a single file. When you upload a new file, its content replaces the previous content.
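Because uploads are not incremental, you may need to combine several filtered ServiceNow exports into a single CSV before uploading. The following is a minimal Python sketch of such a merge; the function name and file names are illustrative, and it assumes all exports share the same column header:

```python
import csv

def merge_csv_exports(paths, out_path):
    """Concatenate several ServiceNow CSV exports that share the same columns."""
    fieldnames = None
    rows = []
    for path in paths:
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            if fieldnames is None:
                fieldnames = reader.fieldnames  # take the header from the first file
            rows.extend(reader)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# Example (hypothetical file names):
# merge_csv_exports(["kb_it.csv", "kb_hr.csv"], "kb_merged.csv")
```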

Step 2 - Export data from ServiceNow

Export data from ServiceNow by performing a standard CSV export:

  1. Navigate to your knowledge articles list in ServiceNow.

  2. Click Apply filter to configure desired filters.

  3. Click the cogwheel icon > Personalize list.

  4. Add these required columns:

    • Number

    • Short description

    • Knowledge base

    • Category

    • Updated

    • Article body

  5. Click the action menu in any column header and select Export > CSV to download the file.

Ensure the file is UTF-8 encoded. If your ServiceNow instance is not configured to perform UTF-8 exports by default, convert the file to UTF-8 afterward.
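If the export is not already UTF-8, re-encoding it is a one-step operation in Python. This sketch assumes the source encoding is Windows-1252 (a common default for non-UTF-8 exports; check your instance's actual encoding before using it):

```python
def convert_to_utf8(src_path, dst_path, src_encoding="cp1252"):
    """Re-encode a CSV export to UTF-8.

    src_encoding is an assumption: many non-UTF-8 ServiceNow exports use
    Windows-1252, but verify the encoding of your own instance.
    """
    with open(src_path, "r", encoding=src_encoding) as src:
        data = src.read()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(data)
```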

The table below lists the fields that you are expected to add when using a personalized list.

Column name        Description

number             Identifier of the knowledge base article (user readable)
short_description  Title of the knowledge base article
kb_knowledge_base  Name of the knowledge base the article belongs to
kb_category        Category of the knowledge base article
sys_updated_on     Last update date
text               Article content, in one of the supported formats: plain text, Markdown, or HTML
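Before uploading, you can check that the exported file contains every required column. A small Python sketch (the function name is illustrative; the column set comes from the table above):

```python
import csv

# Required export columns, per the field table above.
REQUIRED_COLUMNS = {
    "number", "short_description", "kb_knowledge_base",
    "kb_category", "sys_updated_on", "text",
}

def missing_columns(csv_path):
    """Return the set of required columns absent from the CSV header row."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f), [])
    return REQUIRED_COLUMNS - set(header)
```

An empty result means the header is complete; otherwise the returned set names the columns to add in Personalize list before re-exporting.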

Step 3 - Upload articles into Nexthink
  1. Go to Administration > Spark > Knowledge base.

  2. Click Upload CSV file.

  3. Choose or drag multiple CSV files from your hard drive to import them into the system.

  4. After you click Import, the system takes the following steps:

    • If you have already uploaded a file of the same type, the old content is discarded and replaced by the new content.

    • File processing occurs asynchronously and may take up to 15 minutes for large knowledge bases.

4. Configure permissions

Edit your roles to grant admins the permissions related to Spark functionalities:

  • Data model visibility:

    • Agent conversations enables users to view conversation information from Spark using Nexthink Query Language (NQL)

  • Spark:

    • View agent overview dashboards enables users to see overview dashboards to monitor the adoption and value of Spark

    • Manage all agent actions enables users to manage the agent actions that are available to Spark

    • View all agent conversations enables users to see the list of Spark conversations and their details including the conversation content

    • Review agent conversations enables users to give feedback about Spark conversations that are used to improve Spark (not currently used)

    • Manage agent knowledge sources enables users to upload knowledge articles that Spark can have access to (not currently used)

5. Enable diagnosis and remediation actions for Spark

Validate and enable actions for Spark in Nexthink.

Enable built-in agent actions

From the main navigation menu:

  1. Go to Spark > Manage actions and review the Agent actions designed to work with Spark.

  2. Enable the desired Agent actions for Spark use.

Activate custom remote actions

From the main navigation menu:

  1. Go to Remote Actions > Manage remote actions.

  2. Create or edit a remote action and make sure the Spark trigger is enabled.

6. Communicate Spark deployment

Select the employee group for Spark deployment and prepare communications.

  • Use the controls in the MS Teams admin console to select the employees with Spark access.

  • Inform employees about the scope of the Spark agent and remind them to exercise judgment when reading AI-generated replies.


How does Spark work?

Spark receives employee requests across configured channels, runs a diagnosis, and attempts issue resolution.

The diagram above maps the Spark workflow sequence:

  1. The employee reports an issue in real time via enterprise chat or other supported front-end channel integrations (currently available only for MS Teams).

  2. Spark interprets the employee request in natural language, using LLMs hosted in the AWS Bedrock service within the Infinity platform. Depending on the request, Spark gathers and evaluates contextual Nexthink data, such as device health, diagnostics, and user metadata.

  3. Spark responds to the employee with answers to their questions or potential solutions to their issues. Spark can either:

    • Provide self-help guidance or detailed information, including links to related knowledge base articles.

    • Request employee authorization for automated resolutions of device issues.

  4. If unresolved, Spark escalates the support request to the service desk with full context. Spark only escalates requests in the following cases:

    • After exhausting relevant automatic actions and user troubleshooting.

    • After receiving an explicit escalation request from the employee.

    • When the issue requires administrative access that the employee does not have.

    • When technical limitations prevent Spark from providing an effective solution.

Spark may propose and initiate resolution measures, but all device remediation actions require user approval.

What data does Spark use?

Spark relies on a combination of static and dynamic data sources:

  • Knowledge base articles: Manually imported knowledge base articles.

  • Contextual Nexthink data: Device health, diagnostics, remediations, and user metadata from Nexthink Infinity.

Planned data enhancements for future releases:

  • Conversation feedback: In the Cockpit, supervisors provide conversation feedback to help improve response quality in similar future interactions.

Spark also relies on specific NQL data model tables to query Spark-user interaction data.

Personal data handling is covered under the Nexthink Data Processing Agreement (DPA). Spark processing is user-specific and restricted to the customer region.

Spark never provides data from other organizations.
