Global AI Hub

The Global AI Hub provides clear, verifiable information about how Nexthink designs, deploys, and governs AI features. It brings together our policies, controls, and artifacts so that stakeholders can assess our practices with confidence.

Responsible AI and governance

Nexthink develops and operates AI features under documented governance within our established Security and Privacy Program, including policy requirements, risk assessment, testing, and change control. The Global AI Hub provides the core artifacts that describe these controls in practice, such as the Artificial Intelligence (AI) Policy, feature-level model cards, and data-handling notes (collection, minimization, access, and retention). We design AI capabilities with clear admin configuration, role-based access, and logging, and we document intended use, limitations, and evaluation methods. Where appropriate, features support user review and approval mechanisms; these are described alongside each capability.

Model Cards

We publish model cards for all material AI features, i.e., customer-visible capabilities where AI meaningfully influences outputs or triggers product actions. Each model card is versioned and updated when functionality, providers, or data use materially change (with references in the Change Log). Model cards typically include:

  • Purpose & scope — intended use, supported tasks, and clear non-goals/boundaries.

  • Inputs & outputs — what the feature consumes and produces (e.g., prompts, context), determinism, and fallback behavior.

  • Data handling — collection and minimization, processing locations/third-party involvement, access controls, retention, and deletion.

  • Evaluation & limitations — test approach, quality metrics, known error modes/constraints, and conditions where performance degrades.

  • Risk management — privacy/security considerations (incl. misuse scenarios), bias/safety checks, and mitigations.

  • Operational controls — configuration options, role-based access, logging/monitoring, and (where applicable) user review/approval mechanisms.

  • Versioning — model/provider dependencies, release date, and links to relevant policy or technical notes.

Contact

If you have any questions concerning AI governance, data use, or compliance in Nexthink products, please contact [email protected].

AI-powered Infinity Platform capabilities

Nexthink Assist

Nexthink Assist is the unified entry point for managing Digital Employee Experience (DEX) across the Nexthink Infinity platform. Embedded in Search, Assist uses AI models operated by Nexthink within the Nexthink AWS environment in your region to help IT teams detect, diagnose, and resolve DEX issues faster.

Assist interprets natural-language chat commands and executes the corresponding actions. These may involve investigations, content retrieval (such as live dashboards and library packs), campaign creation, or questions about Nexthink Infinity. Nexthink Assist is a convenience feature, and its use is optional.
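The command-routing idea described above can be illustrated with a minimal sketch. The intent names and keyword rules below are purely hypothetical assumptions for illustration; they are not Nexthink's actual implementation, which uses AI models rather than keyword matching.

```javascript
// Hypothetical sketch of routing a chat command to one of the action
// categories mentioned above. Intent names and keywords are illustrative
// assumptions, not Nexthink's actual logic.
const INTENTS = [
  { name: 'investigation',     keywords: ['devices', 'show', 'list', 'query'] },
  { name: 'content_retrieval', keywords: ['dashboard', 'library pack', 'open'] },
  { name: 'campaign_creation', keywords: ['campaign', 'survey'] },
];

// Return the first intent whose keyword appears in the command,
// falling back to a generic product-question intent.
function routeCommand(command) {
  const text = command.toLowerCase();
  for (const intent of INTENTS) {
    if (intent.keywords.some((keyword) => text.includes(keyword))) {
      return intent.name;
    }
  }
  return 'product_question';
}
```

A real assistant would use a language model to classify and parameterize the request; the sketch only shows the routing structure.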

Refer to the Using Nexthink Assist documentation for more information about Nexthink Assist.

Nexthink Insights

Nexthink Insights is a set of AI-powered capabilities within the Nexthink Infinity platform designed to enhance Digital Employee Experience (DEX) troubleshooting, prioritization, and root cause analysis. Leveraging GenAI models operated by Nexthink within the Nexthink AWS environment in your region, Insights provides contextual descriptions, recommendations, and impact analysis to help IT teams detect, understand, and resolve issues faster.

Insights capabilities are embedded in multiple parts of Infinity, including Experience Central, VDI Experience, Network View, Device View, and Alert Impact Analysis. These AI functionalities interpret technical and contextual data to produce clear, actionable insights—such as identifying likely root causes of poor performance, summarizing employee sentiment trends, or prioritizing alerts based on potential business impact.

Refer to the links below for more information about Nexthink Insights.

AI Localization

AI Localization is a capability within Nexthink Adopt designed to make guide content accessible across multiple languages. Leveraging AI translation models hosted on AWS Bedrock within the Nexthink Infinity infrastructure, this feature automatically translates guide content while preserving its structure, ensuring employees can receive contextual support in their preferred language.

The capability is embedded in the guide creation and management process within Adopt. These AI models process stored guide content and target language configurations to produce localized versions of guides, ensuring consistency, scalability, and accessibility across global organizations.
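The structure-preserving behavior described above can be sketched as follows. The guide shape, field names, and `translate` stub are assumptions for illustration only; the actual feature runs AI translation models on AWS Bedrock within the Nexthink infrastructure.

```javascript
// Illustrative sketch: walk a guide's step tree and translate only the
// text fields, leaving keys, ordering, and non-text attributes untouched.
// The field names below are hypothetical, not Adopt's actual schema.
function localizeGuide(guide, translate) {
  const TEXT_FIELDS = new Set(['title', 'body', 'buttonLabel']);
  function walk(node) {
    if (Array.isArray(node)) return node.map(walk);
    if (node !== null && typeof node === 'object') {
      const out = {};
      for (const [key, value] of Object.entries(node)) {
        out[key] = TEXT_FIELDS.has(key) && typeof value === 'string'
          ? translate(value)      // only human-readable text is translated
          : walk(value);          // ids, targets, flags pass through unchanged
      }
      return out;
    }
    return node;
  }
  return walk(guide);
}
```

In practice `translate` would call the translation model with the configured target language; the sketch shows only why the guide's structure survives localization.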

Refer to the Creating automatic translations with AI documentation for more information about Localizing Content with AI.

Workflow Function Generator

The Workflow Function Generator lets you describe, in plain language, the function you want in an automated workflow. It produces clean, syntactically correct JavaScript that follows Nexthink Workflows conventions. This AI-based feature speeds up development, promotes consistency, and provides input/output mapping along with an explanation of how the code works.
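To illustrate, here is a hypothetical example of the kind of function such a generator might return for a request like "keep only devices whose CPU usage is above a threshold". The input/output shape is an illustrative assumption, not the actual Nexthink Workflows contract.

```javascript
// Hypothetical generated function (illustration only).
//
// Input:  { devices: [{ name, cpuUsage }], threshold: number }
// Output: { matching: [{ name, cpuUsage }], count: number }
function run(input) {
  // Keep devices whose CPU usage exceeds the configured threshold.
  const matching = input.devices.filter(
    (device) => device.cpuUsage > input.threshold,
  );
  return { matching, count: matching.length };
}
```

The documented input/output mapping is what lets a workflow wire the function's result into subsequent steps.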

Refer to the Function Thinklet documentation for more information about generating JavaScript with Workflow Function Generator.

Frequently Asked Questions

How can I identify when content within Nexthink is generated by AI?

When using Assist, any AI-generated content is indicated with an AI watermark: a small, four-sided star-shaped icon placed near the generated content. Whenever you see this watermark, the related content was created using AI within the Nexthink solution.

For other AI-powered features, the system displays the ✦ sparkles icon to indicate AI-generated Insights.

Do the LLMs process Personal Data or any type of sensitive information?

The large language models (LLMs) used by Nexthink are not intended to process personal data or sensitive information.

Are there third parties involved?

For the features listed on this page, Nexthink uses the large language models (LLMs) provided by Amazon Bedrock. OpenAI is not involved.

Where is data processed by the LLMs?

The large language models (LLMs) provided by Amazon Bedrock are used within the same AWS account and geographical area (continent) where Nexthink's customer data is typically stored and processed.

Can LLMs leverage my organization data to train their models?

No, the large language models (LLMs) used across the Nexthink Infinity platform cannot and do not use Customer Data via their APIs to train or improve their models.

Are large language models (LLMs) responding to end-user prompts, or are they powered by predefined prompts from Nexthink?

The large language models (LLMs) are triggered using predefined logic and prompts built into Nexthink’s backend services running in AWS. End-users cannot view or modify these prompts.

In the case of the Generating JavaScript with AI for Function thinklets feature, users can specify what the function should do. However, even in this case, a backend system prompt ensures that the LLM returns only the relevant JavaScript code and nothing else.

Additionally, the LLM does not have access to any Nexthink data. This means that even if a user attempts to query sensitive information, the LLM will not be able to access or infer anything from the platform.
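The output constraint described above, where the backend keeps only the JavaScript the model returns, can be sketched as a simple guardrail. The regex and fallback behavior here are assumptions for illustration, not Nexthink's actual backend logic.

```javascript
// Illustrative output guardrail: even if the model wraps its answer in
// prose, keep only the fenced JavaScript block and discard everything
// else. Returning null signals a non-conforming response to reject.
function extractJavaScript(modelResponse) {
  const match = modelResponse.match(/```(?:javascript|js)?\s*\n([\s\S]*?)```/);
  return match ? match[1].trim() : null;
}
```

A backend applying this kind of check, on top of a system prompt that instructs the model to return only code, never surfaces free-form model text to the user.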

How can I learn more about the security of Nexthink AI capabilities?

To learn more about the security of Nexthink AI capabilities, follow the links below for detailed information:
