In addition to Nexthink Assist, Nexthink leverages GenAI in other parts of Infinity to provide insights, contextual descriptions, and recommendations that enhance troubleshooting, prioritization, and root cause analysis. The following features are based on GenAI:
For all of the above, the LLMs are provided by Amazon Bedrock and operate within the same AWS account and geographical area (continent) where customer data is typically stored and processed. OpenAI is not used for any of the features listed on this page.
Frequently asked questions
Do the LLMs process Personal Data or any type of sensitive information?
Large language models (LLMs) are not intended to process personal data or sensitive information.
Are there third parties involved?
For all the features listed on this page, Nexthink uses the large language models (LLMs) provided by Amazon Bedrock. OpenAI is not involved.
Where is data processed by the LLMs?
The large language models (LLMs) provided by Amazon Bedrock are used within the same AWS account and geographical area (continent) where Nexthink's customer data is typically stored and processed.
Can LLMs leverage my organization's data to train their models?
No, the large language models (LLMs) used across the Nexthink Infinity Platform cannot and do not use Customer Data via their APIs to train or improve their models.
Do the large language models (LLMs) respond to end-user prompts, or are they driven by predefined prompts from Nexthink?
For most features described on this page, the large language models (LLMs) are triggered using predefined logic and prompts built into Nexthink’s backend services running in AWS. End-users cannot view or modify these prompts.
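To make the pattern concrete, below is a minimal sketch of how a backend service could invoke an Amazon Bedrock model with a fixed, server-side prompt. This is an illustration only, not Nexthink's actual implementation; the region, model ID, prompt wording, and function name are hypothetical placeholders.

```python
# Illustrative sketch only; not Nexthink's actual code. The region,
# model ID, prompt text, and function name are hypothetical placeholders.
import boto3

# The client is created in the same AWS region where customer data is
# stored and processed, so prompts stay within that geographical area.
bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

# The prompt template lives in the backend; end-users cannot view or edit it.
PREDEFINED_PROMPT = (
    "Summarize the following device diagnostics and suggest a likely root "
    "cause in two sentences:\n\n{diagnostics}"
)

def describe_issue(diagnostics: str) -> str:
    """Call the model with the predefined prompt and return its text reply."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
        messages=[
            {
                "role": "user",
                "content": [
                    {"text": PREDEFINED_PROMPT.format(diagnostics=diagnostics)}
                ],
            }
        ],
    )
    return response["output"]["message"]["content"][0]["text"]
```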
In the case of the Generating JavaScript with AI for Function thinklets feature, users can specify what the function should do. However, even in this case, a backend system prompt ensures that the LLM returns only the relevant JavaScript code and nothing else.
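Under the same caveats as above (the system prompt wording and model ID are hypothetical, not Nexthink's actual implementation), the guard pattern could be sketched as follows: the user's free-text request is wrapped by a server-side system prompt that restricts the model to returning JavaScript only.

```python
# Illustrative sketch of the guard pattern only; the system prompt wording
# and model ID are hypothetical, not Nexthink's actual implementation.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

# Server-side system prompt; end-users can describe the function they want,
# but they cannot see or change this constraint.
SYSTEM_PROMPT = (
    "You generate JavaScript for Function thinklets. Return only valid "
    "JavaScript that implements the requested function, with no "
    "explanations and no other text."
)

def generate_thinklet_js(user_request: str) -> str:
    """Wrap the user's free-text request with the fixed system prompt."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
        system=[{"text": SYSTEM_PROMPT}],
        messages=[{"role": "user", "content": [{"text": user_request}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```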
Additionally, the LLM does not have access to any Nexthink data. This means that even if a user attempts to query sensitive information, the LLM will not be able to access or infer anything from the platform.