Spark - AI Model Card
This Technical Preview is made available to customers free of charge for their evaluation and feedback; in general availability the functionalities of the preview may be subject to additional cost and/or licensing. As such, the Technical Preview, the documentation, and any updates are provided for limited evaluation only and on an ‘as-is’ and ‘as-available’ basis without warranty of any kind.
This page describes the design, data usage, limitations, and compliance safeguards of Nexthink Spark AI.
Model details
Description
Nexthink Spark is an intelligent, agentic AI designed to transform IT support. Embedded in third-party enterprise chats—such as MS Teams and other chatbot solutions—Spark is a conversational interface that helps employees with IT issues and questions, leveraging GenAI models operated by Nexthink within the Nexthink AWS environment in your region.
Spark leverages various knowledge sources (knowledge base articles, past ticket insights, the Nexthink automation catalog) and contextual data about the target device (Nexthink user and device data from the Nexthink Infinity platform) to understand user questions, assist in troubleshooting issues, and offer fix suggestions either as manual steps for the employee or as automations that the employee can accept to run. Spark can also escalate issues it cannot address to the service desk team by raising a ticket with the incident context on behalf of the employee. Nexthink Spark is a convenience feature, and its use is optional.
Spark uses data your organization has instructed Nexthink to collect for device health monitoring to respond to user queries submitted in natural language. Spark can also execute actions to retrieve and link to third-party data, which is transient data not stored in Nexthink Infinity.
Refer to the FAQ documentation for more information about Nexthink, and to the Getting started with Spark documentation for more information about Nexthink Spark.
Inputs and outputs
Inputs
Natural language queries submitted by users to Nexthink Spark, through the channels in which Spark is integrated—Microsoft Teams, third-party integrations via APIs and others.
System data that the system retrieves while processing the request. This may include Nexthink data about the user and their devices via NQL queries, relevant knowledge articles, past ticket resolutions, feedback on previous conversations, and Nexthink configuration objects such as remote actions, workflows, and agent actions.
Execution results of diagnostics and remediation automations—remote actions, agent actions, workflows, and others—run during the conversation.
Outputs
Responses generated by AI models including messages for the end user, automated remediation execution suggestions and instructions for manual actions. Responses may refer to internal knowledge base articles or be generated using the foundational knowledge of the AI model.
Escalated tickets with enriched context for IT service desk agents.
Reasoning details for supervisor review and to enable reinforcement learning.
Intended use
Primary intended users
Employees who interact with Spark through familiar enterprise channels.
Service desk agents who can benefit from enhanced, context-rich tickets when escalated by the Spark agent.
Service desk supervisors, who can gain increased visibility into Spark-employee conversations, as well as resolution-performance metrics to help measure and improve the value delivered by Spark.
Out-of-scope use cases
When a user types a question outside the scope of IT and workplace support, Nexthink Spark stops the conversation and informs the user that it is an unsupported topic. This ensures that irrelevant topics are identified, restricting AI responses to relevant IT or workplace-related queries.
Additionally, Spark only provides information about the requesting employee's own user account and devices, and rejects inquiries about other users' devices or data.
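The two guardrails above can be illustrated with a minimal sketch. This is not Nexthink's implementation—the function, field names, and the simple topic check are hypothetical; in practice the topic would come from LLM-based classification rather than a lookup:

```python
# Hypothetical sketch of Spark's scope guard. All names are illustrative;
# real topic detection would be done by an LLM classifier, not a set lookup.
IN_SCOPE_TOPICS = {"it", "workplace"}  # only IT/workplace support is allowed

def guard_request(topic: str, target_user: str, requesting_user: str) -> tuple:
    """Return (allowed, message) for an incoming employee query."""
    if topic.lower() not in IN_SCOPE_TOPICS:
        # Out-of-scope topic: stop the conversation and inform the user.
        return (False, "Sorry, I can only help with IT and workplace support topics.")
    if target_user != requesting_user:
        # Queries about other users' devices or data are rejected.
        return (False, "I can only provide information about your own user and devices.")
    return (True, "ok")
```

Both checks must pass before any data retrieval or tool use begins.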
Model data
Data flow
Spark processes employee support requests through a defined sequence that combines third-party integrations, Nexthink Infinity, and AWS Bedrock. Spark leverages connected ITSM tools and knowledge repositories to complement the model with knowledge relevant to the user query. Spark also accesses Nexthink data about the target device.
An employee starts a conversation and sends a message such as "Help, my laptop is slow" through a front-end channel into which Spark is integrated. At the start of a conversation, the user and the device they use are identified using secure access tokens.
The Spark AI agent may perform one or more of the following actions based on the employee's message and the current conversation history:
Spark's LLM may select one or more relevant tools at its disposal to gather more information by:
Getting device and user data within the Nexthink Infinity platform through NQL queries, limited to the current user and their devices, to gather contextual information.
Searching for relevant knowledge articles among the set of articles made available to the Spark agent.
Searching for relevant actions—agent actions, remote actions, workflows, and others—made available to the Spark agent.
Based on the gathered information, the Spark LLM model may then:
Generate messages to the employee—questions, steps to follow, progress update messages, and others.
Offer to execute actions that the administrator has flagged as requiring employee consent (e.g., remediations with a visible impact). In this case, the impact of the action is shown to the user based on the action configuration. The action is executed only after the user confirms with an explicit yes answer.
Automatically execute actions flagged as not requiring employee consent, e.g., diagnostic actions without visible impact.
Decide to escalate a ticket to the support team after a confirmation by the employee. The system then generates a contextual ticket description including the employee's issue and a summary of the attempted remediations.
Decide to end the conversation when it considers that the employee's initial issue has been resolved.
This process repeats until the conversation ends or further input from the employee is needed.
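The per-turn loop described above can be sketched as follows. This is a simplified, hypothetical model—the `Step` type, the `plan` callable, and the control flow are illustrative assumptions, since the actual Spark agent is not public:

```python
# Illustrative sketch of Spark's per-turn agentic loop. The Step/plan
# interfaces are hypothetical stand-ins for the real LLM-driven planner.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    kind: str                       # "tool" | "action" | "message"
    text: str = ""
    requires_consent: bool = False  # admin-configured, for kind == "action"
    run: Optional[Callable[[], str]] = None

def handle_turn(plan, message: str, history: list) -> list:
    """Run tools and actions until Spark must reply or await employee input."""
    history.append(("employee", message))
    while True:
        step = plan(history)                    # LLM selects the next step
        if step.kind == "tool":                 # e.g. NQL query, KB search
            history.append(("tool", step.run()))
        elif step.kind == "action":
            if step.requires_consent:           # pause for an explicit "yes"
                history.append(("spark", f"May I run this action? {step.text}"))
                return history
            history.append(("result", step.run()))
        else:                                   # message, escalation, or end
            history.append(("spark", step.text))
            return history
```

Tool steps accumulate context silently, consent-gated actions suspend the loop until the employee answers, and a message step ends the turn.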
The Spark agent may log its internal reasoning and processes for supervisor review, but it does not share this information with employees.
Supervisors review interactions for quality and provide feedback. This feedback loop may drive reinforcement learning that is made available only to the customer's own Spark agent when processing conversations from their employees, continuously improving Spark's understanding and performance.

Evaluation data
Nexthink employs a set of performance metrics, including precision, recall and proprietary in-house metrics, tailored to specific components of the system. Test datasets are used to validate model updates, ensuring that the AI system’s accuracy and reliability align with company standards. Additionally, continuous monitoring of model performance allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time. This feedback mechanism allows Nexthink to continually refine the AI model's output accuracy and effectiveness. The system tracks metrics such as resolution success rate and escalation frequency to measure the effectiveness of Spark.
Training data
Spark primarily uses off-the-shelf large language models provided through AWS Bedrock. Nexthink does not train these models across customers. Customer-specific context (knowledge base articles, historical ticket resolution data, supervisor feedback, etc.) is made available to Spark through tools, enriching the conversation context with customer-specific data.
Data is never shared across customers, and Spark does not build a global training set from customer interactions.
Preprocessing data
During NQL query generation, the system automatically removes any Personal Data fields from the query. These cleaned queries may be annotated and used to continually improve the model.
ServiceNow knowledge base articles and tickets are indexed for efficient retrieval.
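The Personal Data scrubbing step can be pictured with a minimal sketch. The field classification and NQL handling are internal to Nexthink, so the field set and function below are illustrative assumptions only:

```python
# Hypothetical sketch of scrubbing Personal Data fields from a generated
# query before it is stored or annotated. The field list is illustrative.
PERSONAL_DATA_FIELDS = {"user.name", "user.email", "user.full_name"}

def scrub_fields(selected_fields: list) -> list:
    """Drop any field classified as Personal Data, keeping the rest."""
    return [f for f in selected_fields if f not in PERSONAL_DATA_FIELDS]
```

Only the cleaned field list would then be retained for annotation and model-improvement purposes.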
Implementation information
Hardware
Models run within AWS infrastructure in the customer’s geographic region using AWS Bedrock.
Software
Spark service operates within Nexthink Infinity. It uses AWS Bedrock LLM APIs for agentic reasoning workflows.
Spark integrates with third-party ITSM solutions via APIs, or accepts manually uploaded files; in either case, the content is indexed for efficient retrieval by the AI agent.
Nexthink Spark uses off-the-shelf and fine-tuned LLMs, including Meta Code Llama and Claude by Anthropic.
Security
Nexthink employs HTTPS and AES-256 encryption to secure data both in transit and at rest. Nexthink uses standard encryption methods that align with industry best practices to prevent unauthorized access and protect data processed by AI features. Visit the Nexthink Security Portal to learn more about how Nexthink is committed to information security.
Data processing and retention—by Nexthink—remain in the customer's region, with no cross-region transfers.
Caveats and recommendations
Risk management
Hallucination and bias propagation
Model hallucinations and biases are mitigated by Nexthink through continuous performance monitoring and regular model updates. Sources used to create a response are provided to users who want to check the accuracy of the responses.
Handling of personal data
Personal data handling is covered under the Data Processing Agreement (DPA), with processing restricted to the customer's region and scoped to the requesting user. Spark never provides data from other organizations. Nexthink retains data for up to six months.
Inaccuracy of outputs
As described under Evaluation data, Nexthink validates model updates against test datasets, tracks precision, recall and proprietary in-house metrics, and continuously monitors model performance to proactively address potential issues. In addition, supervisors assess Spark conversation logs and provide feedback, which is applied in reinforcement learning cycles, helping to identify inaccuracies and improve model performance. Nevertheless, even as AI improves, it still makes mistakes. AI features exist to enhance the end-user experience, but users should still carefully review the outputs provided and verify their accuracy. Customers are required to communicate the risk of inaccuracies to the employees to whom they make Spark available, in a way suitable to the front-end that hosts the Spark agent's conversation.
Unauthorized access or misuse
Admins control access to Nexthink Spark features for IT users through Role-Based Access Control (RBAC). This RBAC mechanism allows granular user permissions, enabling specific users or groups to access or utilize AI features while others are restricted, hence maintaining control over feature availability within the organization. Access to Spark by employees is controlled by admins via the access control mechanisms made available by the employee-facing interface that they integrate Spark with—such as the Teams Admin Console or similar features available in other solutions.
Dependence on ITSM data quality and knowledge articles
Regular synchronization of knowledge base articles and incident history is recommended to keep Spark's knowledge accurate. Regularly reviewing conversations to identify areas of improvement in the knowledge used is also essential.
Over-reliance on automation
Spark always requires user approval before taking issue-resolution actions unless an admin has explicitly enabled actions to execute without user consent. Tickets are escalated only when Spark exhausts other options and after having identified clear intent from the employee.
Ethical considerations
Nexthink follows both national and international AI guidelines and best practices, emphasizing responsible and ethical AI development. In compliance with the EU AI Act, Nexthink has developed a comprehensive AI compliance framework. Each AI component is reviewed by a dedicated AI Compliance Team comprising Legal, Privacy, and Security experts, among others.
Transparency is fundamental: employees should be informed when they are interacting with AI. Supervisor oversight is required to review Spark outputs, ensuring quality, accountability and fairness. Nexthink enforces strict controls and continuous monitoring to prevent over-reliance on automation and ensure ethical use of Spark.
AI Limitations
While Nexthink Spark can be highly beneficial in accelerating support and issue resolution, it is important to recognize its current limitations. AI systems are still evolving, and as such, they may occasionally produce errors, inconsistencies, or outputs that deviate from expected results.
To mitigate these risks, Customers should:
Cross-check AI outputs: Validate AI-generated results against reliable sources or internal benchmarks.
Implement human oversight: Use AI as a supporting tool rather than a decision-making authority, ensuring that critical outputs are reviewed by qualified individuals.
Give feedback: Provide feedback on conversations to reinforce correct outcomes and improve accuracy. When inaccuracies are identified, share them with the AI provider (if applicable) to contribute to model improvement.