Other GenAI Features

Overview

In addition to Nexthink Assist and Nexthink Insights, Nexthink leverages GenAI in other parts of Infinity to enhance content development. The following feature is based on GenAI:

The LLMs used are provided by Amazon Bedrock within the same AWS account and geographical area (continent) where customer data is typically stored and processed. OpenAI is not used for the features listed on this page.

Frequently asked questions

Do the LLMs process Personal Data or any type of sensitive information?

Large language models (LLMs) are not intended to process personal data or sensitive information.

Are there third parties involved?

For the feature listed on this page, Nexthink uses the large language models (LLMs) provided by Amazon Bedrock. OpenAI is not involved.

Where is data processed by the LLMs?

The large language models (LLMs) provided by Amazon Bedrock are used within the same AWS account and geographical area (continent) where Nexthink's customer data is typically stored and processed.

Can LLMs leverage my organization data to train their models?

No, the large language models (LLMs) used across the Nexthink Infinity platform cannot and do not use Customer Data via their APIs to train or improve their models.

Are large language models (LLMs) responding to end-user prompts, or are they powered by predefined prompts from Nexthink?

The large language models (LLMs) are triggered using predefined logic and prompts built into Nexthink’s backend services running in AWS. End-users cannot view or modify these prompts.

Additionally, the LLMs do not have access to any Nexthink data. Even if a user attempts to query sensitive information, the LLMs cannot access or infer anything from the platform.
