Other GenAI Features

Overview

In addition to Nexthink Assist and Nexthink Insights Nexthink leverages GenAI in other parts of Infinity to enhance content development. The following feature is based on GenAI:

The LLMs used are provided by Amazon Bedrock within the same AWS account and geographical area (continent) where customer data is typically stored and processed. OpenAI is not used for the features listed on this page.

Frequently asked questions

chevron-rightDo the LLMs process Personal Data or any type of sensitive information?hashtag

Large language models (LLMs) are not meant to process personal data or sensitive information.

chevron-rightAre there third parties involved?hashtag

For the feature listed on this page, Nexthink uses the large language models (LLMs) provided by Amazon Bedrock. OpenAI is not involved.

chevron-rightWhere is data processed by the LLMs?hashtag

The large language models (LLMs) provided by Amazon Bedrock are used within the same AWS account and geographical area (continent) where Nexthink's customer data is typically stored and processed.

chevron-rightCan LLMs leverage my organization data to train their models?hashtag

No, the large language models (LLMs) used across Nexthink Infinity Platform cannot and do not use Customer Data via their APIs to train or improve their models.

chevron-rightAre large language models (LLMs) responding to end-user prompts, or are they powered by predefined prompts from Nexthink?hashtag

The large language models (LLMs) are triggered using predefined logic and prompts built into Nexthink’s backend services running in AWS. End-users cannot view or modify these prompts.

Additionally, the LLM does not have access to any Nexthink data. This means that even if a user attempts to query sensitive information, the LLM will not be able to access or infer anything from the platform.

Last updated

Was this helpful?