Getting started with Spark
This Technical Preview is made available to customers free of charge for their evaluation and feedback; in general availability the functionalities of the preview may be subject to additional cost and/or licensing. As such, the Technical Preview, the documentation, and any updates are provided for limited evaluation only and on an ‘as-is’ and ‘as-available’ basis without warranty of any kind.
Nexthink Spark is an AI agent that interprets and resolves level-1 IT requests and questions across real-time communication channels. Nexthink Spark is currently only available for MS Teams.
By accelerating issue resolution, Spark reduces IT support workloads and enhances the employee experience.

Before you begin
Before deploying and using Nexthink Spark, ensure you have:
Configured Spark prerequisites which require administrator permissions. Refer to the Granting permissions for Spark section.
Configured Collector to gather the UPN for each user in clear text. Refer to the Configuring Collector level anonymization documentation.
Configured the Microsoft Entra ID inbound connector for your Microsoft tenant.
A Self-service portal URL.
Version 1.2.0 of the Nexthink Teams application package, shared with you by email upon inclusion in the technical preview.
Nexthink recommends configuring ServiceNow connector credentials to enable incident creation directly from Spark. This integration enhances the ability of Spark to escalate unresolved issues with full context.
Setting up Nexthink Spark
Set up a communication channel for Spark-Teams interactions
Ensure that you meet the prerequisites to set up a Teams communication channel:
Use the Collector option to gather the UPN for each user in clear text. Refer to the Configuring Collector level anonymization documentation for more information.
Configure a Microsoft Entra ID inbound connector for your Microsoft tenant.
Set up a Communication channel in Nexthink to enable Spark interaction with MS Teams.
Use the welcome message to inform employees about the chatbot's scope and remind them to exercise judgment when reading AI-generated replies.
After setting up the communication channel, install the version-specific application package (.zip) for Spark, which Nexthink has provided directly for this technical preview.
If you have previously deployed the Nexthink Teams app (either the original version or the new Spark version) using the Microsoft Teams Admin console, you cannot upload the Spark Teams application into your Teams client directly: Microsoft Teams blocks the application for users who are not authorized in the Teams Admin console. If you have never deployed the Nexthink Teams app through your Teams Admin console, you can upload the Spark Teams application into your Teams client directly for testing.
Grant permissions
Edit your roles to add permissions related to Spark functionalities for administrators:
Data model visibility:
Agent conversations enables users to view conversation information from Spark using Nexthink Query Language (NQL).
Spark:
View agent overview dashboards enables users to see overview dashboards to monitor the adoption and value of Spark.
Manage all agent actions enables users to manage the agent actions that are available to Spark.
View all agent conversations enables users to see a list of Spark conversations and their details including the conversation content.
Review agent conversations enables users to give feedback about Spark conversations that are used to improve Spark (not currently used).
Manage agent knowledge sources enables users to upload knowledge base articles that Spark can access (not currently used).
Import knowledge base articles from ServiceNow
Manually upload knowledge base articles from the ITSM tool, ServiceNow, as CSV files into Nexthink to feed the Spark knowledge base.
Step 1 - Identify data to export within ServiceNow
Open the list of knowledge base articles in ServiceNow.
Select knowledge base articles by applying these filters:
Published articles only.
Articles available to your service-desk users. Spark indexes every uploaded article regardless of ITSM permissions, so include only content you are comfortable exposing to all Spark users.
Exclude translated versions of the same article.
Include every article you want to import. Manual uploads are not incremental: uploading a new file replaces the content of the previous import, so plan to export all the data required for the Spark knowledge base each time.
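Because each upload replaces the previous import, you may need to combine several ServiceNow exports into a single file before uploading. The sketch below is one way to do this in Python, assuming the column names produced by the export described in Step 2; the function name and the choice of de-duplicating on the article number are illustrative, not part of the product.

```python
import csv

# Column names as produced by the ServiceNow export described in Step 2.
COLUMNS = ["number", "short_description", "kb_knowledge_base",
           "kb_category", "sys_updated_on", "text"]

def merge_exports(paths, out_path):
    """Combine several CSV exports into one upload file,
    keeping the first occurrence of each article number."""
    seen = set()
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.DictWriter(out, fieldnames=COLUMNS, extrasaction="ignore")
        writer.writeheader()
        for path in paths:
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    if row["number"] not in seen:
                        seen.add(row["number"])
                        writer.writerow(row)
```

You can then upload the single merged file in Step 3 instead of individual exports.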
Step 2 - Export data from ServiceNow
Export data from ServiceNow by performing a standard CSV export:
Navigate to your knowledge base articles list in ServiceNow.
Click Apply filter to configure the desired filters.
Click the cogwheel icon > Personalize list.
Add these required columns:
Number
Short description
Knowledge base
Category
Updated
Article body
Click the action menu in any column header and select Export > CSV to download the file.
Ensure the file is UTF-8 encoded. If your ServiceNow instance does not export UTF-8 by default, convert the file to UTF-8 afterward.
The table below lists the fields that you are expected to add when using a personalized list.
number: Identifier of the knowledge base article. Example: KB0012345
short_description: Title of the knowledge base article. Example: How to troubleshoot network issues
kb_knowledge_base: Name of the knowledge base the article belongs to. Example: IT Knowledge
kb_category: Category of the knowledge base article. Example: Network
sys_updated_on: Last update date. Example: 11-06-2025
text: Article content, in one of the supported formats: plain text, Markdown or HTML. Example: <p>test kb</p>
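Before uploading, you can sanity-check that an export contains all of the required columns listed above. A minimal sketch in Python (the function name is illustrative):

```python
import csv

# Required column names from the export format described in Step 2.
REQUIRED = {"number", "short_description", "kb_knowledge_base",
            "kb_category", "sys_updated_on", "text"}

def missing_columns(path):
    """Return the required columns absent from the CSV header, if any."""
    with open(path, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f), [])
    return REQUIRED - set(header)
```

An empty result means the header is complete; otherwise, re-personalize the list in ServiceNow and export again.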
Step 3 - Upload articles into Nexthink
Go to Administration > Autopilot connector > Knowledge base.
Click Upload CSV file.
Choose the CSV file, up to 100 MB, from your hard drive to import it into the system.
After you click Import:
If you have already manually uploaded a file, the old content is discarded and replaced by the new content.
File processing occurs asynchronously and may take up to 15 minutes for large knowledge bases.
Enable diagnosis and remediation actions for Spark
Validate and enable actions for Spark in Nexthink.
Enable built-in agent actions
From the main navigation menu:
Go to Spark > Manage actions and review the Agent actions designed to work with Spark.
Enable the desired Agent actions for Spark use.
Refer to the Managing agent actions documentation for more information.
Activate custom remote actions
From the main navigation menu:
Go to Remote Actions > Manage remote actions.
Create or edit a remote action and ensure that you enable the Spark trigger.
Refer to the Spark actions documentation for more information.
Nexthink advises against setting up remote actions for Spark that involve Nexthink campaigns.
Configure connector credentials for ServiceNow integration
You can set up connector credentials for ServiceNow in Nexthink to allow Spark to raise tickets when it cannot solve an incident. Nexthink is planning other ITSM tool integrations for future releases.
Ensure that the credential used has the required permissions in ServiceNow to read and create incidents, e.g., the itil role. Provide Nexthink with the credential configuration URL, the list of required ticket fields and your self-service portal URL.
Nexthink completes the initial setup to enable Spark for ticket and incident creation. A customer-facing configuration UI is planned for future releases.
Refer to the Connector credentials documentation for more information.
Communicate Spark deployment
Select the employee group for Spark deployment and prepare communications.
Use the controls in the MS Teams admin console to select the employees with Spark access.
Inform employees about the scope of the Spark agent and remind them to exercise judgment when reading AI-generated replies.
How does Spark work?
Spark receives employee requests across configured channels, runs a diagnosis and attempts to resolve the issue.

(*) Conversation logs for reinforcement learning are planned for future releases and are currently unavailable.
At the moment, Microsoft Teams is the only available channel. Third-party tool integrations are planned for future releases.
The diagram above visually maps the Spark workflow sequence:
The employee reports, in real time, an issue via enterprise chats or other supported front-end channel integrations—currently only available for MS Teams.
Spark interprets the employee request in natural language—using LLMs hosted in the AWS Bedrock service within the Infinity platform. Depending on the employee request, Spark gathers and evaluates:
Nexthink datasets for the specific user/device diagnosis—limited to the user's own device.
Available actions—built-in agent actions and custom remote actions—for diagnostics or remediation.
Spark responds by sharing answers to employee questions or potential solutions to their issues. Spark can either:
Provide self-help guidance or detailed information, including links to related knowledge base articles.
Request employee authorization for automated resolutions of device issues.
If unresolved, Spark escalates the support request to the service desk with full context. Spark only escalates requests in the following cases:
After exhausting relevant automatic actions and user troubleshooting
Receiving an explicit escalation request from the employee
Running into issues that require administrative access that the employee does not have
Encountering technical limitations that prevent Spark from providing an effective solution
Spark may suggest and initiate resolution measures, but all device remediation actions require user approval.
What data does Spark use?
Spark relies on a combination of static and dynamic data sources:
Knowledge base articles: Manually imported knowledge base articles.
Contextual Nexthink data: Device health, diagnostics, remediations and user metadata from Nexthink Infinity.
Planned data enhancements for future releases:
Conversation feedback: In the Spark Cockpit, supervisors provide conversation feedback to help improve response quality in similar, future interactions.
Additionally, Spark relies on specific NQL data model tables to query Spark-user interaction data.
Personal data handling is covered under the Nexthink Data Processing Agreement (DPA). Spark processing is user-specific and restricted to the customer region.
Spark never provides data from other organizations.
Granting permissions for Spark
Refer to the Roles documentation for a detailed description of Permissions, View domain options and Data privacy granularity settings.
To enable proper permissions for Spark as an administrator:
Select Administration > Roles from the main navigation panel.
Create a New Role or edit an existing role by hovering over it.
In the Data model visibility section, set Agent conversations to visible.
In the Permissions section, scroll down to the Spark section to enable appropriate permissions for the role.
View domain impact on Spark permissions
The table below shows what users with full and limited View domain access can do, assuming the necessary permissions are enabled.
[Table: full vs. limited View domain access for the Manage all agent actions, Review agent conversations, View agent overview dashboards and View all agent conversations permissions]