Nexthink Assist - AI Model Card
Model details
Description
Nexthink Assist is the unified entry point for managing Digital Employee Experience (DEX) across the Nexthink Infinity platform. Embedded in Search, Assist uses AI models operated by Nexthink within the Nexthink AWS environment in your region to help IT teams detect, diagnose and resolve DEX issues faster.
Assist interprets chat commands written in natural language and executes them on the user's behalf. These queries may involve investigations, content retrieval (such as live dashboards and library packs), campaign creation, or questions about Nexthink Infinity. Nexthink Assist is a convenience feature; its use is optional.
Refer to the FAQ documentation for more information about Nexthink and to the Using Nexthink Assist documentation for more information about Nexthink Assist.
Inputs and outputs
Inputs: Natural language queries submitted by users through Nexthink Assist’s conversational interface, along with any data the system retrieves when processing the request. This may include Nexthink data via NQL queries, product documentation, and configuration objects (e.g., remote actions, library packs, live dashboards, etc.).
Outputs: Responses generated by AI models including NQL queries, charts, graphs, insights, campaign drafts, and documentation content and links. The system displays the ✦ sparkles icon to indicate AI-generated content or insights.
Intended use
Primary intended users
IT operations teams, digital workplace specialists, and Nexthink Infinity users managing DEX.
Out-of-scope use cases
When a user types a question outside the scope of Digital Employee Experience (DEX), Nexthink Assist will stop the conversation and inform the user that it is an out-of-scope topic. This ensures that irrelevant topics are identified, limiting AI responses to relevant DEX-related queries.
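A minimal sketch of how such an out-of-scope gate could behave, using a keyword heuristic as a stand-in for the actual model-based topic classification (all names and keywords here are illustrative, not Nexthink's implementation):

```python
import re

# Illustrative DEX-related vocabulary; a real classifier would be model-based.
DEX_KEYWORDS = {"device", "application", "crash", "latency", "campaign",
                "dashboard", "nql", "salesforce"}

def in_scope(question: str) -> bool:
    # Tokenize on letters only so trailing punctuation does not hide matches.
    words = set(re.findall(r"[a-z]+", question.lower()))
    return bool(words & DEX_KEYWORDS)

def answer(question: str) -> str:
    if not in_scope(question):
        # Stop the conversation and tell the user the topic is out of scope.
        return "This topic is outside the scope of Digital Employee Experience."
    return "..."  # continue into the normal Assist flow
```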
Model data
Data flow
Assist executes natural language commands in three steps:
When a user enters a question such as "Who has been experiencing the highest number of issues with Salesforce Lightning?", Nexthink Assist interprets the intent with an AWS-hosted AI model running securely in the AWS region where your Nexthink deployment is hosted. It then selects the most appropriate tool, or combination of tools, to gather the required information. Depending on the nature of the query, Assist may run an NQL query, consult Nexthink documentation, create a campaign, or search for configuration objects. Assist processes everything entirely inside the Nexthink AWS environment.
Assist then runs the tool(s) that it has selected to gather the relevant data. If it chooses to:
Run an NQL query, Assist leverages an internally hosted AI model within Nexthink's secure AWS infrastructure in the employee's geographical region to generate the NQL query, execute it within the customer's environment, and retrieve the results. During this process, the system automatically removes any PII fields in the query. These cleaned queries may be annotated and used to continuously improve the model.
Retrieve answers from Nexthink documentation, Assist securely sends the user’s question—together with the relevant documentation—to an AWS-hosted AI model. The model identifies the key facts and returns them so Assist can craft a clear, actionable reply.
Create a Nexthink Engage campaign, Assist uses an AWS-hosted AI model to generate a campaign and offers to save it for the user. This entire process takes place securely within Nexthink's infrastructure, in the customer's region.
Finally, the results retrieved by the selected tools, along with the original user question, are passed to an AWS-hosted AI model to generate a clear, concise response. To promote transparency and encourage further exploration, Assist includes links to data sources, such as NQL investigations or campaign configurations, within its reply.
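The three-step flow described above can be sketched as follows; every function and tool name is a hypothetical stand-in for Nexthink's internal components, and a keyword heuristic replaces the actual AWS-hosted intent model:

```python
# Step 1: interpret intent and pick a tool (heuristic stands in for the model).
def interpret_intent(question: str) -> str:
    q = question.lower()
    if "campaign" in q:
        return "create_campaign"
    if "how do i" in q:
        return "search_docs"
    return "run_nql"

# Step 2: the selected tool gathers the relevant data (stubs shown here).
def run_nql(question: str) -> str:
    return f"NQL results for: {question}"

def search_docs(question: str) -> str:
    return f"Documentation excerpts for: {question}"

def create_campaign(question: str) -> str:
    return f"Draft campaign for: {question}"

TOOLS = {"run_nql": run_nql, "search_docs": search_docs,
         "create_campaign": create_campaign}

def handle_question(question: str) -> str:
    tool = TOOLS[interpret_intent(question)]
    gathered = tool(question)
    # Step 3: results plus the original question go back to the model for a
    # concise reply; a source link is appended for transparency.
    return f"{gathered}\n(Source: {tool.__name__})"
```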

Evaluation data
Nexthink employs a set of performance metrics, including precision, recall, and proprietary in-house metrics, tailored to specific components of the system. Test datasets are used to validate model updates, ensuring the AI system's accuracy and reliability align with company standards. Additionally, continuous monitoring of model performance allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time. Within Nexthink Assist itself, users can rate the results of AI-provided queries, helping to identify inaccuracies and improve model performance. This feedback mechanism allows Nexthink to continually refine the accuracy of the AI models' output.
Human oversight remains essential: users are encouraged to review and verify AI-generated results, applying their expertise and judgment before acting on the information provided.
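As an illustration of the standard metrics mentioned above, a minimal precision/recall computation over predicted versus relevant result sets might look like this (the data and thresholds are invented for the example, not Nexthink's actual evaluation pipeline):

```python
def precision_recall(predicted: set, relevant: set) -> tuple:
    """Precision: fraction of predicted items that are relevant.
    Recall: fraction of relevant items that were predicted."""
    true_pos = len(predicted & relevant)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall
```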
Training data
Nexthink Assist mostly uses off-the-shelf AI models, except for one capability that uses a model fine-tuned from Meta CodeLlama with synthetic and anonymized data. Nexthink does not use Customer Data to train its AI models.
Preprocessing data
During NQL query generation, the system automatically removes any PII fields in the query. These cleaned queries may be annotated and used to continuously improve the model.
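A simplified sketch of what such PII scrubbing could look like, assuming field-name-based redaction of literal values; the field names and query syntax here are illustrative, not actual NQL or Nexthink's real preprocessing:

```python
import re

# Hypothetical list of fields treated as PII (illustrative names only).
PII_FIELDS = {"user.name", "user.email", "device.owner"}

def scrub_query(nql: str) -> str:
    """Replace literal values compared against known PII fields with a
    placeholder, so the stored query carries no personal data."""
    for field in PII_FIELDS:
        nql = re.sub(rf'{re.escape(field)}\s*==\s*"[^"]*"',
                     f'{field} == "<redacted>"', nql)
    return nql
```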
Implementation information
Hardware
Models run within AWS infrastructure in the customer’s geographic region using AWS Bedrock.
Software
Nexthink Assist uses off-the-shelf and fine-tuned LLMs, including Meta CodeLlama and Anthropic Claude, served through AWS Bedrock.
Security
Nexthink employs HTTPS and AES-256 encryption to secure data both in transit and at rest. Nexthink's use of standard encryption methods aligns with industry best practices to prevent unauthorized access and protect data processed by AI features. Visit Nexthink Security Portal to learn more about Nexthink's commitment to information security.
Caveats and recommendations
Risk management
AI tool availability
Nexthink Assist is a non-critical feature within Nexthink’s platform. In cases where Nexthink Assist is temporarily unavailable, users can fall back on manual workflows available in the user interface. This resilience plan supports uninterrupted access to core functionalities, allowing users to continue their work seamlessly, even in the absence of Nexthink Assist support.
Hallucination and bias propagation
Model hallucinations and biases are mitigated by Nexthink through continuous performance monitoring and regular model updates. Sources used to create a response are provided to users who want to check the accuracy of those responses.
Inaccuracy of outputs
The performance metrics, test datasets, continuous monitoring, and user-rating feedback described under Evaluation data also serve to limit inaccurate outputs. Nevertheless, even though AI improves every day, it still makes mistakes. AI features exist to enhance the DEX experience of end users, but users should still carefully review the outputs provided and verify their accuracy.
Unauthorized access or misuse
Admins control access to Nexthink Assist features through Role-Based Access Control (RBAC). This RBAC mechanism allows granular user permissions, enabling specific users or groups to access or utilize AI features while others are restricted, hence maintaining control over feature availability within the organization.
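A minimal illustration of how an RBAC check gates a feature; the role and permission names below are hypothetical, not Nexthink's actual configuration model:

```python
# Illustrative role-to-permission mapping (hypothetical names).
ROLE_PERMISSIONS = {
    "admin": {"assist.use", "assist.configure"},
    "analyst": {"assist.use"},
    "viewer": set(),
}

def can_use_assist(role: str) -> bool:
    # Unknown roles get no permissions, so access fails closed.
    return "assist.use" in ROLE_PERMISSIONS.get(role, set())
```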
Ethical considerations
Nexthink follows both national and international AI guidelines and best practices, emphasizing responsible and ethical AI development. In compliance with the EU AI Act, Nexthink has developed a comprehensive AI compliance framework. Each AI component is reviewed by a dedicated AI Compliance Team comprising Legal, Privacy and Security experts, among others.
When an AI functionality involves the processing of personal data, Nexthink ensures it undergoes a thorough privacy assessment, as required by applicable data protection laws and regulations. For Nexthink Assist, while the processing of personal data is not a core component of its functionality, Nexthink's privacy team has conducted a Data Protection Impact Assessment (DPIA). This assessment aligns with our internal policies, established standards, and recognized best practices in privacy management.
AI Limitations
While Nexthink Assist can be highly beneficial in automating tasks, generating insights, and improving efficiency, it is important to recognize its current limitations. AI systems are still evolving, and as such, they may occasionally produce errors, inconsistencies, or outputs that deviate from expected results.
To mitigate these risks, Customers may:
Cross-Check AI Outputs: Validate AI-generated results against reliable sources or internal benchmarks.
Implement Human Oversight: Use AI as a supporting tool rather than a decision-making authority, ensuring that critical outputs are reviewed by qualified individuals.
Provide Feedback: When inaccuracies are identified, share them with the AI provider (if applicable) to contribute to model improvement.