Workflow Function Generator - AI Model Card
Model details
Description
The Workflow Function Generator lets you describe, in plain language, the function you want in an automated workflow. It produces clean, syntactically correct JavaScript that follows Nexthink Workflows conventions. This AI-based feature speeds up development, improves consistency, and provides input/output mapping along with an explanation of how the code works.
For more information about Nexthink, refer to the FAQ documentation and the Function thinklet documentation, which explains how to generate JavaScript with the Workflow Function Generator.
Prompt inputs and outputs
Inputs: Prompt requests written in natural language, provided by the user, describing the desired JavaScript function for a Workflow thinklet. These prompts may include input parameters, expected outputs, and required transformations, following the supported syntax rules for accessing inputs, assigning outputs, and adding execution logs.
Outputs: AI-generated JavaScript code that follows the Function thinklet structure and syntax, including parameter handling, transformation logic, and output assignments. The generated code can be reviewed, modified through updated prompts, and copied for use in Workflow thinklets.
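To make the input/output contract concrete, here is a hypothetical illustration of the kind of JavaScript the feature might produce. The accessor names (`inputs`, `outputs`, `logger`) and the `run` signature are assumptions for illustration only; the actual Function thinklet syntax is defined in the Nexthink Workflows documentation.

```javascript
// Hypothetical example. Prompt (natural language): "Take a comma-separated
// list of device names and return the count plus a normalized,
// de-duplicated list."
// NOTE: inputs/outputs/logger are illustrative names, not Nexthink's
// actual Function thinklet API.

function run(inputs, outputs, logger) {
  // Input parameter declared in the thinklet: deviceList (string)
  const raw = inputs.deviceList ?? "";

  // Transformation logic: split, trim, lowercase, de-duplicate
  const devices = [...new Set(
    raw.split(",")
       .map((name) => name.trim().toLowerCase())
       .filter((name) => name.length > 0)
  )];

  // Execution log entry
  logger.info(`Normalized ${devices.length} unique device names`);

  // Output assignments consumed by downstream thinklets
  outputs.deviceCount = devices.length;
  outputs.normalizedDevices = devices.join(",");
}
```

A prompt refinement (step 5 of the data flow below) would typically adjust details such as the separator, casing rules, or output names, and the code would be regenerated accordingly.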
Intended use
Primary intended users
IT Operations Teams, Nexthink administrators, and other Nexthink Infinity users who manage workflows and build workflow automations.
Out-of-scope use cases
When a user enters a prompt outside the scope of the Workflow Function Generator, the system stops processing the request and informs the user that the topic is out of scope.
Model data
Data flow
1. User enters prompt
Describes required inputs, desired outputs, and logic/transformations in natural language; may include existing code for improvement.
2. System sends request
Packages the prompt and sends it to the AWS Bedrock-hosted AI model in the customer’s AWS region.
3. System processes response
Validates the model’s response and prepares JavaScript code, explanations, and output examples.
4. User reviews output
Reviews the generated code, explanation, and mappings.
5. User refines prompt (optional)
Updates the prompt or provides improvement instructions; system regenerates updated results.
6. User finalizes output
Accepts the generated JavaScript code for the Function Thinklet.
Below is an image detailing the data flow for steps 1–3. After step 3, the user can review the inputs, refine the prompt, and—if needed—re-run the same workflow. When satisfied, the user can finalize; this confirmation step ensures every output is always reviewed and approved by a human.
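Step 2 above can be sketched in code. The payload shape below follows the Anthropic Messages format used on AWS Bedrock, but the system prompt text, token limit, and validation rules are assumptions for illustration, not Nexthink's actual implementation.

```javascript
// Sketch of step 2 ("System sends request"): packaging the user's prompt
// before the Bedrock-hosted model is invoked. Field names follow the
// Anthropic Messages request format on Bedrock; everything else here
// (system prompt wording, max_tokens value) is an illustrative assumption.

function buildModelRequest(userPrompt, existingCode = null) {
  if (!userPrompt || userPrompt.trim().length === 0) {
    throw new Error("Prompt must not be empty");
  }

  // Step 5 of the flow: an existing function can be included for improvement
  const content = existingCode
    ? `${userPrompt}\n\nExisting code to improve:\n${existingCode}`
    : userPrompt;

  return {
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 2048,
    system: "Generate JavaScript that follows Function thinklet conventions.",
    messages: [{ role: "user", content }],
  };
}
```

The returned object would then be sent to the Bedrock endpoint in the customer's AWS region; the response is validated in step 3 before any code reaches the user.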

Evaluation data
Nexthink employs a set of performance metrics, including precision, recall, and proprietary in-house metrics, tailored to specific components of the system. Test datasets are used to validate model updates, confirming that the AI system’s accuracy and reliability align with company standards. Additionally, continuous monitoring of model performance allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time. The list below presents the evaluation criteria for AI-generated JavaScript in Function Thinklets:
Align code with requirements: Aim for alignment between the generated JavaScript, the declared inputs, outputs, and the user’s intent.
Maintain high code quality: Write ES6+ compliant, readable, and functionally correct code that follows Function Thinklet rules.
Support flexibility: Allow users to refine or edit the code after generation without breaking functionality.
Close the feedback loop: Improve subsequent outputs by modifying the prompt and regenerating the function.
Apply performance metrics: Measure precision, recall, and proprietary in-house metrics tailored to the system. Validate model updates against test datasets to maintain accuracy and reliability.
Monitor continuously: Track model performance to proactively address issues, reduce errors, and improve response quality over time.
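As a reminder of what the precision and recall criteria measure, here is a minimal sketch of how they are computed from labelled test cases. Nexthink's proprietary metrics and evaluation datasets are not public, so this is illustrative only.

```javascript
// Precision: share of predicted items that are correct.
// Recall: share of expected items that were predicted.
// Both are computed here over simple string labels for illustration.

function precisionRecall(predicted, expected) {
  const expectedSet = new Set(expected);
  const truePositives = predicted.filter((p) => expectedSet.has(p)).length;
  return {
    precision: predicted.length ? truePositives / predicted.length : 0,
    recall: expected.length ? truePositives / expected.length : 0,
  };
}
```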
Training data
The Workflow Function Generator uses an AWS Bedrock-hosted foundation model configured via prompts. The model is not fine-tuned or trained by Nexthink, and Nexthink does not use customer data to train its AI models.
Preprocessing data
Before sending a request to the AI model, the system performs several preprocessing steps to verify that the prompt is accurate, complete, and safe. The system sends only the functional requirements to the model, never customer data.
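The preprocessing steps are not publicly specified; the sketch below assumes simple checks (non-empty prompt, length cap, a crude screen for secrets) purely to illustrate the kind of validation a system might perform before a prompt reaches the model.

```javascript
// Illustrative preprocessing sketch; thresholds and checks are assumptions,
// not Nexthink's actual pipeline.

function preprocessPrompt(prompt, maxLength = 4000) {
  const trimmed = (prompt ?? "").trim();
  if (trimmed.length === 0) {
    return { ok: false, reason: "empty prompt" };
  }
  if (trimmed.length > maxLength) {
    return { ok: false, reason: "prompt too long" };
  }
  // Crude screen for obvious credentials that should never leave the tenant
  if (/(password|api[_-]?key)\s*[:=]/i.test(trimmed)) {
    return { ok: false, reason: "possible secret detected" };
  }
  return { ok: true, prompt: trimmed };
}
```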
Implementation information
Hardware
The model runs within AWS infrastructure in the customer’s geographic region using AWS Bedrock.
Software
Nexthink generates JavaScript for Function Thinklets within its AWS environment in the customer’s geographic region, using AWS Bedrock–hosted LLMs including Sonnet 3.7, Haiku 3.5, and Nova.
Security
Nexthink employs HTTPS and AES-256 encryption to secure data both in transit and at rest. Nexthink's use of standard encryption methods aligns with industry best practices to prevent unauthorized access and protect data processed by AI features. Visit Nexthink Security Portal to learn more about Nexthink's commitment to information security.
Caveats and recommendations
Risk management
AI tool availability
If the JavaScript code generator AI is temporarily unavailable, users can fall back on manually creating or editing JavaScript code directly within the Workflow thinklet interface. This resilience plan ensures uninterrupted access to core workflow capabilities.
Hallucination and bias propagation
Model hallucinations and biases are mitigated by Nexthink through continuous performance monitoring and regular model updates.
Inaccuracy of outputs
To mitigate inaccurate outputs, Nexthink applies the performance metrics and monitoring described under Evaluation data: precision, recall, and proprietary in-house metrics validate model updates against test datasets, and continuous monitoring allows Nexthink to proactively address potential issues, reducing errors and improving response quality over time.
Unauthorized access or misuse
Admins can control access to the Workflow Function Generator through Role-Based Access Control (RBAC). This RBAC mechanism allows granular user permissions, enabling only specific users or groups to manage workflows while restricting others, thereby maintaining control over feature availability within the organization.
All AI processing is performed within Nexthink’s secure AWS environment, ensuring that data remains protected in transit and at rest through HTTPS and AES-256 encryption. The feature operates exclusively on inputs provided by authenticated users within the Nexthink platform, preventing exposure of data beyond the user’s authorized scope. Additionally, system activity is logged and auditable, enabling organizations to monitor usage patterns, detect suspicious behavior, and take corrective action if misuse is suspected. This approach safeguards both the integrity of generated code and the security of the underlying environment.
Ethical considerations
Nexthink follows both national and international AI guidelines and best practices, emphasizing responsible and ethical AI development. In compliance with the EU AI Act, Nexthink has developed a comprehensive AI compliance framework. Each AI component is reviewed by Nexthink’s Privacy Team and a dedicated AI Compliance Team.
When an AI functionality involves the processing of personal data, Nexthink ensures it undergoes a thorough privacy assessment, as required by applicable data protection laws and regulations.
AI Limitations
The AI-generated code is provided "as-is." While the AI is designed to be helpful, it can make mistakes, and Nexthink cannot guarantee the code will be error-free, secure, or free from malicious elements such as malware. It is the Customer's responsibility to thoroughly review, test, and verify the code for accuracy, security, and safety before using it in any live environment.
To mitigate these risks, Customers may:
Cross-Check AI Outputs: Review AI-generated JavaScript to ensure it aligns with the intended workflow logic, input or output requirements, and Function Thinklet rules.
Implement Human Oversight: Use AI as a supporting tool rather than a decision-making authority, ensuring that critical outputs are reviewed by qualified individuals.
Provide Feedback: When inaccuracies are identified, share them with the AI provider (if applicable) to contribute to model improvement.
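The "Cross-Check AI Outputs" recommendation above can be made concrete with a small test harness: run the generated function against known inputs and compare against expected outputs before deploying it to a live workflow. The `run(inputs, outputs, logger)` signature used here is an illustrative assumption, matching nothing more than a generic thinklet-style function.

```javascript
// Minimal review harness sketch: exercise a generated function against
// labelled test cases and collect any mismatches or thrown errors.
// An empty return array means every case passed.

function reviewGeneratedFunction(run, cases) {
  const failures = [];
  for (const { inputs, expected } of cases) {
    const outputs = {};
    try {
      run(inputs, outputs, { info: () => {} });
    } catch (err) {
      failures.push({ inputs, error: String(err) });
      continue;
    }
    for (const [key, value] of Object.entries(expected)) {
      if (outputs[key] !== value) {
        failures.push({ inputs, key, got: outputs[key], want: value });
      }
    }
  }
  return failures;
}
```

Running a harness like this as part of human review catches the most common failure modes (wrong output names, unhandled edge cases, runtime errors) before the code reaches a live workflow.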