Spark - AI Model Card
Model details
Description
Nexthink Spark is an intelligent, agentic AI designed to transform IT support. Embedded in third-party enterprise chats, such as MS Teams, and in other chatbot solutions, Spark is a conversational interface that helps employees with their IT issues and questions, leveraging GenAI models operated by Nexthink within the Nexthink AWS environment in your region.
Spark leverages various knowledge sources (knowledge base articles, past ticket insights, the Nexthink automation catalog) and contextual data about the target device (Nexthink user and device data from the Nexthink Infinity platform) to understand user questions, assist in troubleshooting issues, and offer fix suggestions, either as manual steps for the employee or as automations that the employee can accept to run. Spark is also able to escalate issues that it cannot address to the service desk team, raising a ticket on behalf of the employee that includes the incident context. Nexthink Spark is a convenience feature, and its use is optional.
Spark uses data your organization has instructed Nexthink to collect for device health monitoring to respond to user queries submitted in natural language. Spark can also execute actions to retrieve and link to third-party data, which is transient data not stored in Nexthink Infinity.
Refer to the FAQ documentation for more information about Nexthink, and to the Getting started with Spark documentation for more information about Nexthink Spark.
Inputs and outputs
Inputs
Natural language queries submitted by users to Nexthink Spark, through the channels in which Spark is integrated (Microsoft Teams, third-party integrations via APIs, etc.)
Data that the system retrieves while processing the request. This may include Nexthink data about the user and their devices via NQL queries, relevant knowledge articles, past ticket resolutions, feedback on previous conversations, and Nexthink configuration objects (e.g., remote actions, workflows, agent actions)
Execution results of diagnostics and remediation automation (remote actions, agent actions, workflows, etc.) run during the course of the conversation
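As a rough illustration, the inputs gathered for a single conversation turn can be pictured as one bundle. The sketch below uses hypothetical Python names; it does not reflect Nexthink's actual schema.

from dataclasses import dataclass, field

@dataclass
class TurnInputs:
    # Hypothetical shape of one turn's inputs, for illustration only
    user_message: str                                             # employee's natural language query
    nql_results: list[dict] = field(default_factory=list)        # Nexthink user/device data
    knowledge_articles: list[str] = field(default_factory=list)  # retrieved KB articles
    past_ticket_insights: list[str] = field(default_factory=list)
    action_results: list[dict] = field(default_factory=list)     # diagnostics/remediation output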
Outputs
Responses generated by AI models including messages for the end user, automated remediation execution suggestions, instructions for manual actions. Responses may refer to internal knowledge base articles or be generated using the foundational knowledge of the AI model.
Escalated tickets with enriched context for IT service desk agents.
Reasoning details for supervisor review and to enable reinforcement learning.
Intended use
Primary intended users
Employees who interact with Spark through familiar enterprise channels.
Service desk agents benefit from enhanced, context-rich tickets when escalated by the Spark agent.
Service desk supervisors gain increased visibility into Spark-employee conversations, as well as resolution-performance metrics to help measure and improve the value delivered by Spark.
Out-of-scope use cases
When a user types a question outside the scope of IT and workplace support, Nexthink Spark stops the conversation and informs the user that it is an unsupported topic. This ensures that irrelevant topics are identified, restricting AI responses to relevant IT or workplace-related queries.
Additionally, Spark only provides information about an employee's own user and devices, and rejects inquiries about other users' devices or data.
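To illustrate both restrictions, a minimal scope-guard sketch follows. The function and topic labels are hypothetical; the card does not describe how Spark implements these checks.

def check_scope(topic: str, requested_user_id: str, authenticated_user_id: str):
    # Restrict responses to IT and workplace support topics
    if topic not in {"it_support", "workplace_support"}:
        return "This topic is outside the scope of IT and workplace support."
    # Only answer about the authenticated employee's own user and devices
    if requested_user_id != authenticated_user_id:
        return "Spark can only provide information about your own user and devices."
    return None  # in scope: continue processing the request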
Model data
Data flow
Spark processes employee support requests through a defined sequence that combines third-party integrations, Nexthink Infinity, and AWS Bedrock. Spark leverages connected ITSM tools and knowledge repositories to complement the model with knowledge relevant to the user query. Spark also accesses Nexthink data about the target device.
An employee starts a conversation and sends a message, such as "Help, my laptop is slow", through a front-end channel where Spark is integrated. At the start of the conversation, the user and the device they use are securely identified using secure access tokens.
Based on the employee's message and the current conversation history, the Spark AI agent may take one or more of the following actions (a simplified code sketch follows this list):
Spark's LLM may select one or more relevant tools at its disposal to gather more information:
Get device and user data within the Nexthink Infinity platform through NQL queries, limited to the current user and their devices, to gather contextual information
Search for relevant knowledge articles among the set of articles made available to the Spark agent
Search for relevant actions (agent actions, remote actions, workflows, etc.) made available to the Spark agent
Based on the gathered information, the Spark LLM model may then:
Generate messages to the employee (questions, steps to follow, progress update messages, etc.)
Offer to execute actions flagged by the administrator as requiring employee consent (e.g., remediations with an impact). In this case, the impact of the action is shown to the user based on the action configuration. After the user confirms with an explicit yes, the action is executed.
Automatically execute actions flagged as not requiring employee consent (e.g., diagnostics actions without visible impact)
Decide to escalate a ticket to the support team after a confirmation by the employee. The system then generates a contextual ticket description including the employee's issue and a summary of the attempted remediations.
Decide to end the conversation when it considers that the employee's initial issue has been resolved
The above process is repeated until an input from the employee is needed.
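The sketch below condenses this loop into a few lines of Python. Every name and structure in it (llm.decide, the step kinds, the return values) is an illustrative assumption, not Nexthink's implementation.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    requires_consent: bool  # flagged per action by the administrator
    impact: str             # shown to the employee before asking for consent

def handle_turn(llm, tools: dict, history: list, employee_message: str):
    # Hypothetical turn loop: gather information with tools, then act
    history.append({"role": "user", "content": employee_message})
    while True:
        step = llm.decide(history, tools)      # pick a tool or a response
        if step.kind == "tool_call":           # NQL query, article search, action search
            history.append(tools[step.name](**step.args))
        elif step.kind == "run_action":
            if step.action.requires_consent:
                return {"ask_consent": step.action}  # wait for an explicit "yes"
            history.append({"role": "tool", "content": f"ran {step.action.name}"})
        elif step.kind == "escalate":
            # Ticket body: the employee's issue plus attempted remediations
            return {"create_ticket": step.summary}
        else:
            # A message to the employee (question, steps, progress) or the end
            return {"reply": step.text}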
The Spark agent may log its internal reasoning and processes for supervisor review, but it does not share this information with employees.
Supervisors review interactions for quality and provide feedback. This feedback loop may drive reinforcement learning signals that are made available to the Spark agent only when processing conversations from that organization's employees, continuously improving Spark's understanding and performance.

Evaluation data
Nexthink employs a set of performance metrics, including precision, recall, and proprietary in-house metrics, tailored to specific components of the system. Test datasets are used to validate model updates, ensuring the AI system’s accuracy and reliability align with company standards. Additionally, continuous monitoring of model performance allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time. This feedback mechanism allows Nexthink to continually refine the AI model's output accuracy and effectiveness. The system tracks metrics such as resolution success rate and escalation frequency to measure Spark's effectiveness.
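As a minimal illustration of two of these metrics, precision and recall on a labeled test set can be computed as follows; the proprietary in-house metrics are not reproduced here.

def precision_recall(expected: list[bool], predicted: list[bool]) -> tuple[float, float]:
    # expected[i] is the ground-truth label, predicted[i] the model output
    tp = sum(e and p for e, p in zip(expected, predicted))      # true positives
    fp = sum(p and not e for e, p in zip(expected, predicted))  # false positives
    fn = sum(e and not p for e, p in zip(expected, predicted))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 2 true positives, 1 false positive, 1 false negative
assert precision_recall([True, True, False, True, False],
                        [True, True, True, False, False]) == (2/3, 2/3)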
Training data
Spark primarily uses off-the-shelf large language models provided through AWS Bedrock. These models are not trained by Nexthink across customers. Customer-specific context (knowledge base articles, historical ticket resolution data, supervisor feedback, etc.) is made available to Spark through tools, enriching the conversation context with customer-specific data.
Data is never shared across customers, and Spark does not build a global training set from customer interactions.
Preprocessing data
During NQL query generation, the system automatically removes any Personal Data fields from the query. These cleaned queries may be annotated and used to continuously improve the model.
ServiceNow KB and tickets are indexed for efficient retrieval.
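As an illustration of the first step, a scrubbing pass could drop a configured set of personal-data fields from a generated query before annotation and storage. The field names and query representation below are hypothetical.

# Hypothetical personal-data field list; not Nexthink's actual configuration
PERSONAL_DATA_FIELDS = {"user.name", "user.email", "device.serial_number"}

def scrub_fields(selected_fields: list[str]) -> list[str]:
    # Keep only fields that carry no personal data
    return [f for f in selected_fields if f not in PERSONAL_DATA_FIELDS]

assert scrub_fields(["device.cpu_usage", "user.email"]) == ["device.cpu_usage"]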
Implementation information
Hardware
Models run within AWS infrastructure in the customer’s geographic region using AWS Bedrock.
Software
Spark service operates within Nexthink Infinity. It uses AWS Bedrock LLM APIs for agentic reasoning workflows.
Spark leverages API-based integration with third-party ITSM solutions, or manual upload of files, which are indexed for efficient retrieval by the AI agent.
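The card does not specify the retrieval mechanism; the sketch below shows a generic embedding-based index of the kind commonly used for efficient retrieval, purely as an assumption.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec: list[float], index: dict[str, list[float]], k: int = 3) -> list[str]:
    # index maps an article id to an embedding computed at ingestion time
    return sorted(index, key=lambda doc_id: cosine(query_vec, index[doc_id]),
                  reverse=True)[:k]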
Nexthink Spark uses off-the-shelf and fine-tuned LLMs, including Meta CodeLlama and Anthropic Claude.
Security
Nexthink employs HTTPS and AES-256 encryption to secure data both in transit and at rest. Nexthink's use of standard encryption methods aligns with industry best practices to prevent unauthorized access and protect data processed by AI features. Visit Nexthink Security Portal to learn more about Nexthink's commitment to information security.
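For illustration only, the snippet below shows AES-256 authenticated encryption with the widely used Python cryptography package, the kind of standard primitive this paragraph refers to; it is not Nexthink's implementation.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 32-byte AES-256 key
nonce = os.urandom(12)                     # must be unique per encryption
ciphertext = AESGCM(key).encrypt(nonce, b"device telemetry record", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"device telemetry record"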
Data processing and retention (by Nexthink) remain in the customer's region, with no cross-region transfers.
Caveats and recommendations
Risk management
Hallucination and bias propagation
Nexthink mitigates model hallucinations and biases through continuous performance monitoring and regular model updates. The sources used to create a response are provided to users who want to verify its accuracy.
Handling of personal data
Handling of personal data is covered under the DPA, with processing restricted to the customer's region and scoped to the requesting user. Spark never provides data from other organizations. Nexthink retains data for up to six months.
Inaccuracy of outputs
As described under Evaluation data, Nexthink validates model updates against test datasets and continuously monitors performance metrics such as precision, recall, and proprietary in-house measures, allowing it to proactively address potential issues and improve response quality over time. In addition, supervisors assess Spark conversation logs and provide feedback, which is applied in reinforcement learning cycles, helping to identify inaccuracies and improve model performance. Nevertheless, even though AI improves every day, it still makes mistakes. AI features exist to enhance the end-user experience, but users should still carefully review the outputs and verify their accuracy. Customers are required to communicate the risk of inaccuracies to the employees to whom they make Spark available, in a way suitable to the front end that hosts the Spark agent's conversation.
Unauthorized access or misuse
Admins control access to Nexthink Spark features for IT users through Role-Based Access Control (RBAC). This RBAC mechanism allows granular user permissions, enabling specific users or groups to access or use AI features while others are restricted, maintaining control over feature availability within the organization. Access to Spark by employees is controlled by admins via the access control mechanisms of the employee-facing interface that Spark is integrated with (such as the Teams Admin Console or similar features in other solutions).
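A minimal sketch of such a permission check follows; the role and permission names are hypothetical and do not reflect Nexthink's RBAC model.

# Hypothetical role-to-permission mapping, for illustration only
ROLE_PERMISSIONS = {
    "it_admin": {"spark.configure", "spark.view_conversations"},
    "supervisor": {"spark.view_conversations", "spark.give_feedback"},
    "service_desk_agent": {"spark.view_tickets"},
}

def has_permission(user_roles: set[str], permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

assert has_permission({"supervisor"}, "spark.give_feedback")
assert not has_permission({"service_desk_agent"}, "spark.configure")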
Dependence on ITSM data quality and knowledge articles
Regular synchronization of knowledge base articles and incident history is recommended to keep Spark's knowledge accurate. Regularly reviewing conversations to identify areas of improvement in the knowledge used is also essential.
Over-reliance on automation
Spark always requires user approval before taking issue-resolution actions, unless an admin has explicitly enabled an action to execute without user consent. Tickets are escalated only when Spark has exhausted other options and has identified a clear intent from the employee.
Ethical considerations
Nexthink follows both national and international AI guidelines and best practices, emphasizing responsible and ethical AI development. In compliance with the EU AI Act, Nexthink has developed a comprehensive AI compliance framework. Each AI component is reviewed by a dedicated AI Compliance Team comprising Legal, Privacy, and Security experts, among others.
Transparency is fundamental: employees should be informed when they are interacting with AI. Supervisor oversight is required to review Spark outputs, ensuring quality, accountability, and fairness. Nexthink enforces strict controls and continuous monitoring to prevent over-reliance on automation and ensure ethical use of Spark.
AI Limitations
While Nexthink Spark can be highly beneficial in accelerating support and issue resolution, it is important to recognize its current limitations. AI systems are still evolving, and as such, they may occasionally produce errors, inconsistencies, or outputs that deviate from expected results.
To mitigate these risks, Customers should:
Cross-Check AI Outputs: Validate AI-generated results against reliable sources or internal benchmarks.
Implement Human Oversight: Use AI as a supporting tool rather than a decision-making authority, ensuring that critical outputs are reviewed by qualified individuals.
Provide Feedback: Reinforce correct outcomes and improve accuracy by providing feedback on conversations. When inaccuracies are identified, share them with the AI provider (if applicable) to contribute to model improvement.