In addition to Nexthink Assist, Nexthink leverages GenAI in other parts of Infinity to provide insights and offer contextual descriptions and recommendations that enhance troubleshooting, prioritization, and root cause analysis. The following features are based on GenAI:
For all the above, the LLMs used are provided by Amazon Bedrock within the same AWS account and geographical area (continent) where customer data is typically stored and processed. OpenAI is not used for the features listed on this page.
Frequently asked questions
Do the LLMs process Personal Data or any type of sensitive information?
Large language models (LLMs) are not intended to process personal data or sensitive information.
Are there third parties involved?
For all the features listed on this page, Nexthink uses the large language models (LLMs) provided by Amazon Bedrock. OpenAI is not involved.
Where is data processed by the LLMs?
The large language models (LLMs) provided by Amazon Bedrock are used within the same AWS account and geographical area (continent) where Nexthink's customer data is typically stored and processed.
Can LLMs leverage my organization data to train their models?
No, the large language models (LLMs) used across the Nexthink Infinity platform cannot and do not use customer data submitted via their APIs to train or improve their models.
Do the LLMs work with end-user prompts or with Nexthink-defined prompts on the back end?
For all the features listed on this page, the large language models (LLMs) are invoked using logic and prompts from backend services within the Nexthink platform in AWS. End-users cannot modify these prompts.