Artificial Intelligence (AI) Policy

1. Purpose

Nexthink acknowledges the potential of Artificial Intelligence (AI) technologies to drive innovation and enhance operational efficiency. This policy provides guidelines and rules for using AI within our products and business processes, emphasizing compliance with legal requirements and ethical standards.

2. Scope

This policy applies to all individuals associated with Nexthink, including employees, executives, contractors, consultants, and agents who have access to Nexthink's systems and data (“Nexthink Stakeholders”).

3. Principles for Nexthink Stakeholders’ Use of AI

3.1 Legal and Regulatory Compliance

At Nexthink, we are firmly committed to complying with laws and regulations relevant to the use/deployment of AI. All Nexthink Stakeholders must adhere to these legal and regulatory obligations. This includes, but is not limited to, current and upcoming AI regulations (such as the EU AI Act), privacy laws and security standards, the protection of intellectual property rights, and adherence to anti-discrimination laws.

3.2 Data Protection and Privacy

Nexthink and its Stakeholders are committed to complying with applicable privacy laws, such as the General Data Protection Regulation (GDPR), the Swiss Data Protection Act, or the California Consumer Privacy Act (CCPA).

Nexthink Stakeholders must comply with Nexthink's policies and guidelines regarding data protection and privacy when using/deploying AI tools. This includes adhering to the principles of data minimization and lawful data processing, as well as implementing essential security measures to protect personal information.

3.3 Intellectual Property Rights

Nexthink is committed to respecting intellectual property rights in all aspects of its business. Nexthink Stakeholders have a responsibility to respect the intellectual property rights of third parties when using/deploying AI within Nexthink's products and business activities. This includes refraining from any actions that could infringe copyrights, trademarks, patents, or other intellectual property rights of third parties.

In addition, Nexthink Stakeholders must strive to use AI in a manner that adheres to the principles of fair use and complies with applicable license agreements.

3.4 Transparency

Nexthink is committed to being transparent about the usage/deployment of AI technologies within the organization. We make efforts to inform users when they interact with AI-generated content and to provide documentation that helps stakeholders and customers understand our AI systems' capabilities and limitations.

3.5 Confidentiality and No Training on Customer Data

Nexthink Stakeholders must comply with Nexthink's Confidentiality Policy in all activities related to the usage/deployment of AI systems or applications.

3.6 Security

Nexthink implements security measures to protect information processed in connection with the usage and deployment of AI. We will take reasonable steps to protect data and AI systems within Nexthink’s custody or control from unauthorized access, misuse, and potential threats, while recognizing that no system can be completely free of risk. This includes adopting industry-standard security protocols, conducting regular security assessments, and updating our practices as new risks emerge. We will also work to ensure that our security measures are proportionate to the sensitivity of the data being processed and the potential impact of any security breach.

3.7 Nexthink Policies & Enforcement

In addition to any principles and directives set forth in this policy, Nexthink Stakeholders must consistently adhere to all relevant Nexthink policies, standards, and guidelines when using/deploying AI systems or applications. Any violation of this obligation will be subject to internal investigation. In serious cases, intentional violations may result in disciplinary consequences.

4. Governance and Human Oversight

4.1 AI Governance Committee

Nexthink maintains an AI Governance Committee (the “AI Committee”) composed of subject matter experts and managers from relevant departments, including but not limited to Legal, Privacy, IT, Product, Security, Engineering, and Solution Consultants. The main responsibilities of the AI Committee are:

  • supervising the implementation of this policy and providing strategic direction related to:

      ◦ the ethical and responsible use of AI across Nexthink, and

      ◦ compliance with relevant laws, regulations, and applicable policies or guidelines;

  • overseeing and reviewing the deployment/use of AI and the ongoing management of AI initiatives;

  • assessing and mitigating the risks inherent in new AI initiatives; and

  • fostering AI literacy among Nexthink Stakeholders.

4.2 Human Oversight

To integrate AI responsibly, human oversight is emphasized through thorough testing, bias mitigation (as applicable), and continuous monitoring by engineering and product teams. These practices help validate AI functionalities, maintain accuracy and reliability standards, and ensure fairness and inclusiveness. Additionally, Customers and Partners are encouraged to verify AI-generated results, as AI technology, although improving, may produce errors. User feedback is key to identifying and fixing issues and helps improve AI capabilities.

5. Risk Assessment

Before deploying/using AI, Nexthink Stakeholders shall conduct risk assessments to identify and mitigate potential risks in collaboration with the relevant teams, including Legal, Privacy and Security. These assessments cover a range of considerations, including ethical, legal, and operational risks.

6. Training and Awareness

Nexthink is committed to fostering a culture of AI literacy and regulatory compliance within our organization. To achieve this, regular training sessions will be provided to Nexthink Stakeholders on the ethical use of AI, compliance requirements, and best practices. These sessions aim to ensure that Nexthink employees understand their responsibilities regarding AI systems and data protection, while also raising awareness of this policy.

7. Reporting and Compliance

Nexthink Stakeholders must promptly report any concerns, security incidents, or suspected breaches related to AI usage to Legal, Privacy, and Security. In the event of an incident involving our AI systems, the relevant incident response plan will be promptly activated to address the issue. Affected users and relevant authorities will be notified as required by law. Where relevant, a thorough investigation will be conducted to understand the cause of the incident and corrective measures will be implemented to prevent future occurrences.

8. Policy Review

This policy will be reviewed annually or as needed to comply with laws and company standards. Nexthink values feedback to improve this AI Policy and practices. Any changes will be communicated to all stakeholders.

9. Contact Us

If you have any questions or feedback about this policy, please reach out to our Legal team.
