Nexthink Assist - AI Model Card

Model details

Description

Nexthink Assist is the unified entry point for managing Digital Employee Experience (DEX) across the Nexthink Infinity platform. Embedded in Search, Assist uses AI models operated by Nexthink within the Nexthink AWS environment in your region to help IT teams detect, diagnose and resolve DEX issues faster.

Assist interprets chat commands written in natural language and carries them out on the user's behalf. These queries may involve investigations, content retrieval (such as live dashboards and library packs), campaign creation, or questions about Nexthink Infinity. Nexthink Assist is a convenience feature, and its use is optional.

Refer to the FAQ documentation for more information about Nexthink and to the Using Nexthink Assist documentation for more information about Nexthink Assist.

Inputs and outputs

  • Inputs: Natural language queries submitted by users through Nexthink Assist’s conversational interface, along with any data the system retrieves when processing the request. This may include Nexthink data via NQL queries, product documentation, and configuration objects (e.g., remote actions, library packs, live dashboards, etc.).

  • Outputs: Responses generated by AI models including NQL queries, charts, graphs, insights, campaign drafts, and documentation content and links. The system displays the ✦ sparkles icon to indicate AI-generated content or insights.

Intended use

Primary intended users

IT operations teams, digital workplace specialists, and Nexthink Infinity users managing DEX.

Out-of-scope use cases

When a user types a question outside the scope of Digital Employee Experience (DEX), Nexthink Assist will stop the conversation and inform the user that it is an out-of-scope topic. This ensures that irrelevant topics are identified, limiting AI responses to relevant DEX-related queries.

Model data

Data flow

Assist executes natural language commands in three steps:

  1. When a user enters a question such as, Who has been experiencing the highest number of issues with Salesforce Lightning?, Nexthink Assist interprets the intent with an AWS-hosted AI model running securely in the AWS region where your Nexthink deployment is hosted. It then selects the most appropriate tool, or combination of tools, to gather the required information. Depending on the nature of the query, Assist may perform actions such as running an NQL query, consulting Nexthink documentation, creating a campaign, or searching for configuration objects. Assist processes everything entirely inside the Nexthink AWS environment.

  2. Assist then runs the tool(s) that it has selected to gather the relevant data. If it chooses to:

    • Run an NQL query, Assist uses an internally hosted AI model within Nexthink’s secure AWS infrastructure in the employee’s geographic region to generate the NQL query, execute it within the customer's environment, and retrieve the results. During this process, the system automatically removes any PII fields from the query; the cleaned queries may then be annotated and used to continuously improve the model.

    • Retrieve answers from Nexthink documentation, Assist securely sends the user’s question—together with the relevant documentation—to an AWS-hosted AI model. The model identifies the key facts and returns them so Assist can craft a clear, actionable reply.

    • Create a Nexthink Engage campaign, Assist uses an AWS-hosted AI model to generate the campaign and offers to save it for the user. This entire process takes place securely within Nexthink’s infrastructure, in the customer’s region.

  3. Finally, the results retrieved by the selected tools, along with the original user question, are passed to an AWS‑hosted AI model to generate a clear, concise response. To promote transparency and encourage further exploration, Assist includes links to data sources—such as NQL investigations or campaign configurations—within its reply.

Execution of certain tools—such as campaign creation—is subject to user permissions. For example, Assist will only attempt to create a Nexthink campaign if the user is authorized to do so.
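The three-step flow above can be sketched as a simple dispatch loop. This is an illustrative approximation only: the tool names, routing heuristic, and helper functions here are assumptions for the sketch, not Nexthink's actual implementation, which routes requests to AI models hosted in the customer's AWS region.

```python
# Illustrative sketch of the three-step Assist flow described above.
# Tool names and helpers are hypothetical, not Nexthink's real code.

def interpret_intent(question: str) -> str:
    """Step 1: select a tool based on the user's question (toy heuristic)."""
    q = question.lower()
    if "campaign" in q:
        return "create_campaign"
    if "how do i" in q or "documentation" in q:
        return "search_docs"
    return "run_nql_query"

def run_tool(tool: str, question: str) -> dict:
    """Step 2: execute the selected tool and gather data (stubbed)."""
    return {"tool": tool, "data": f"results for: {question}"}

def generate_response(question: str, tool_result: dict) -> str:
    """Step 3: combine the original question with the tool output,
    citing the data source for transparency."""
    return f"{tool_result['data']} (source: {tool_result['tool']})"

question = "Who has the highest number of issues with Salesforce Lightning?"
tool = interpret_intent(question)
answer = generate_response(question, run_tool(tool, question))
```

In the real system, each step is backed by an AI model rather than a keyword heuristic, but the shape of the pipeline (interpret, gather, respond) is the same.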

Evaluation data

Nexthink employs a set of performance metrics, including precision, recall, and proprietary in-house metrics, tailored to specific components of the system. Test datasets are used to validate model updates, ensuring the AI system’s accuracy and reliability align with company standards. Additionally, continuous monitoring of model performance allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time. Also, in Nexthink Assist, users can rate the results of AI-provided queries, helping to identify inaccuracies and improve model performance. This feedback mechanism allows Nexthink to continually refine the AI model’s output accuracy.
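Precision and recall, two of the metrics named above, can be stated concretely: of the answers the model returned, how many were correct (precision), and of all the correct answers available, how many did it return (recall)? A minimal illustration:

```python
# Precision and recall as used to validate model updates.
# TP = correct answers returned, FP = incorrect answers returned,
# FN = correct answers the model failed to return.

def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: 8 correct answers returned, 2 incorrect, 2 correct ones missed.
p, r = precision_recall(true_positives=8, false_positives=2,
                        false_negatives=2)
# p == 0.8, r == 0.8
```

The proprietary in-house metrics mentioned above are not public, so they are not sketched here.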

Human oversight remains essential: users are encouraged to review and verify AI-generated results, applying their expertise and judgment before acting on the information provided.

Training data

Nexthink Assist mostly uses off-the-shelf AI models, except for one capability that uses a model fine-tuned from Meta CodeLlama (fine-tuned with synthetic and anonymized data). Nexthink does not use Customer Data to train its AI models.

Preprocessing data

During NQL query generation, the system automatically removes any PII fields in the query. These cleaned queries may be annotated and used to continuously improve the model.
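A hypothetical sketch of what such PII scrubbing might look like. The field names and query shape below are illustrative assumptions, not Nexthink's actual field list or NQL grammar:

```python
# Hypothetical sketch of removing PII field values from a generated query
# before it is stored for annotation. The PII field list is an assumption.

import re

PII_FIELDS = {"user.name", "user.email", "device.owner"}  # assumed fields

def scrub_pii(nql_query: str) -> str:
    """Replace quoted values compared against known PII fields
    with a placeholder, leaving the query structure intact."""
    for field in PII_FIELDS:
        nql_query = re.sub(
            rf"({re.escape(field)}\s*==\s*)\"[^\"]*\"",
            r'\1"<redacted>"',
            nql_query,
        )
    return nql_query

query = 'devices | where user.email == "jane@example.com"'
clean = scrub_pii(query)
# clean == 'devices | where user.email == "<redacted>"'
```

The cleaned query retains its analytical structure, so it remains useful for annotation and model improvement while carrying no personal data.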

Implementation information

Hardware

Models run within AWS infrastructure in the customer’s geographic region using AWS Bedrock.

Software

Nexthink Assist uses off-the-shelf and fine-tuned LLMs, including Meta CodeLlama and Anthropic Claude, accessed through AWS Bedrock.

Security

Nexthink employs HTTPS and AES-256 encryption to secure data both in transit and at rest. Nexthink's use of standard encryption methods aligns with industry best practices to prevent unauthorized access and protect data processed by AI features. Visit Nexthink Security Portal to learn more about Nexthink's commitment to information security.

Caveats and recommendations

Risk management

Risk: AI tool availability

Mitigation: Nexthink Assist is a non-critical feature within Nexthink’s platform. In cases where Nexthink Assist is temporarily unavailable, users can fall back on manual workflows available in the user interface. This resilience plan supports uninterrupted access to core functionalities, allowing users to continue their work seamlessly, even in the absence of Nexthink Assist support.

Risk: Hallucination and bias propagation

Mitigation: Model hallucinations and biases are mitigated by Nexthink through continuous performance monitoring and regular model updates. The sources used to create a response are provided to users who want to verify the accuracy of the responses.

Risk: Inaccuracy of outputs

Mitigation: Nexthink also employs a set of performance metrics, including precision, recall, and proprietary in-house metrics, tailored to specific components of the system. Test datasets are used to validate model updates, ensuring the AI system’s accuracy and reliability align with company standards. Additionally, continuous monitoring of model performance allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time. Also, in Nexthink Assist, users can rate the results of AI-provided queries, helping to identify inaccuracies and improve model performance. This feedback mechanism allows Nexthink to continually refine the AI model’s output accuracy. Nevertheless, even as AI improves, it still makes mistakes. AI features exist to enhance the DEX experience of end users, but users should still carefully review the outputs provided and verify their accuracy.

Risk: Unauthorized access or misuse

Mitigation: Admins control access to Nexthink Assist features through Role-Based Access Control (RBAC). This RBAC mechanism allows granular user permissions, enabling specific users or groups to access or utilize AI features while others are restricted, hence maintaining control over feature availability within the organization.
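An RBAC gate of this kind can be sketched in a few lines. The role and feature names below are assumptions for illustration, not Nexthink's actual permission schema:

```python
# Minimal RBAC sketch: Assist-style features gated per role.
# Role and feature names are hypothetical, not Nexthink's schema.

ROLE_PERMISSIONS = {
    "admin": {"assist.query", "assist.create_campaign"},
    "analyst": {"assist.query"},
}

def can_use(role: str, feature: str) -> bool:
    """Return True if the given role is granted the Assist feature."""
    return feature in ROLE_PERMISSIONS.get(role, set())

# Assist only attempts campaign creation when the user is authorized:
admin_ok = can_use("admin", "assist.create_campaign")      # True
analyst_ok = can_use("analyst", "assist.create_campaign")  # False
```

This mirrors the behavior described earlier in the data flow: Assist checks the user's permissions before executing a tool such as campaign creation.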

Ethical considerations

Nexthink follows both national and international AI guidelines and best practices, emphasizing responsible and ethical AI development. In compliance with the EU AI Act, Nexthink has developed a comprehensive AI compliance framework. Each AI component is reviewed by a dedicated AI Compliance Team comprising Legal, Privacy and Security experts, among others.

When an AI functionality involves the processing of personal data, Nexthink ensures it undergoes a thorough privacy assessment, as required by applicable data protection laws and regulations. For Nexthink Assist, while the processing of personal data is not a core component of its functionality, Nexthink's privacy team has conducted a Data Protection Impact Assessment (DPIA). This assessment aligns with our internal policies, established standards, and recognized best practices in privacy management.

AI Limitations

While Nexthink Assist can be highly beneficial in automating tasks, generating insights, and improving efficiency, it is important to recognize its current limitations. AI systems are still evolving, and as such, they may occasionally produce errors, inconsistencies, or outputs that deviate from expected results.

To mitigate these risks, Customers may:

  • Cross-Check AI Outputs: Validate AI-generated results against reliable sources or internal benchmarks.

  • Implement Human Oversight: Use AI as a supporting tool rather than a decision-making authority, ensuring that critical outputs are reviewed by qualified individuals.

  • Provide Feedback: When inaccuracies are identified, share them with the AI provider (if applicable) to contribute to model improvement.

FAQ

How does Nexthink Assist leverage Artificial Intelligence?

Before the introduction of Nexthink Assist, Nexthink users conducted investigations by executing the same commands with a Visual or NQL editor rather than natural language. They also had to leave the Nexthink web interface to find answers to their questions and design custom campaigns from scratch.

Nexthink Assist provides users with more convenience and a better experience. For example, if they want to query data, they can enter their query using natural language in a chat box.

If the user wants to create a Nexthink campaign or retrieve information from Nexthink documentation, the user's question is sent to the AWS-hosted AI model running securely within Nexthink’s Amazon Web Services (AWS) environment, in the same geographic region as the Nexthink instance. The AWS-hosted AI model proposes the campaign content or browses the documentation to return content.

Refer to the FAQ in the Using Nexthink Assist documentation to learn more about how Assist executes the natural language commands.

Does Assist make any automated decisions?

While Nexthink Assist generates query suggestions based on user input, users retain the power to make final decisions, enabling oversight and preventing unintended automated actions. This design ensures that users maintain control over decision-making, with AI functioning as a support tool rather than an autonomous system.

How can I identify when content within Nexthink is generated by AI?

When using Assist, any AI-generated content will be indicated with an AI watermark. This watermark appears as a small, four-sided star-shaped icon placed near the generated content, ensuring transparency when you're interacting with AI-driven material. Whenever you see this watermark, it shows that the related content was created using AI within the Nexthink solution.

Does AWS Bedrock process Personal Data and Personal Identifying Information (PII)?

Nexthink is fully committed to protecting its Customers' data. When using AI functionalities hosted on AWS, all processing takes place within the AWS region aligned with the Customer's Nexthink deployment. At no point is Customer Data or Personal Data shared with or hosted by the AI tool providers themselves (e.g., Anthropic or Meta). Neither Nexthink nor AWS uses Customer Data or Personal Data for model training purposes. Additionally, any Customer Data processed through AI functionalities is subject to the same retention periods and protective measures as all other Customer Data within the Nexthink solution.

Can AWS Bedrock see the responses to user queries?

AWS Bedrock has no access to or any visibility into the responses returned to a user.

Can AWS Bedrock leverage user data to train its models?

No, AWS Bedrock cannot and does not use data submitted by customers via its APIs to train or improve its models.

Where does AWS Bedrock process its data?

All data processing stays entirely within the Nexthink AWS environment.

How does Nexthink ensure user training?

No specific training is required to use Nexthink Assist, which is self-explanatory. That said, Nexthink provides both documentation and video courses about Nexthink Assist through its Documentation and Learn portals.

How does Nexthink inform Assist users about the changes to this AI functionality?

Nexthink users are informed about major changes to Nexthink Assist, or changes that may affect user experience, through 'What’s New' notifications, email communications and Documentation updates. Additionally, customers are promptly informed via email when the introduction of a new sub-processor used by AI features is planned.
