Evaluating Spark resolutions

Spark evaluation is a structured validation cycle that measures how Spark performs using existing ITSM tickets and employee device context. IT support teams can run Spark on behalf of employees to test issue resolution, review AI reasoning, and assess resolution quality directly in Nexthink.


The approach and steps outlined below represent best practices designed to ensure a consistent, measurable, and successful training cycle. While the process can be adapted to meet specific organizational requirements, any adjustments should be agreed upon in collaboration with your Nexthink representative to maintain Spark training quality and comparability of results.

Prerequisites

Before you start evaluating Spark, ensure you have the following:

  • An active Nexthink license

  • An ITSM system integrated with Nexthink

  • The following permissions enabled for Nexthink Infinity users performing the tests:

    • Chat with agent through Infinity

    • View all agent conversations

  • An initial setup of Spark completed. Refer to the Getting started with Spark documentation for more information.

Roles

Spark evaluation requires two roles:

  • Spark champion – Owns and coordinates the entire training cycle, ensures alignment with objectives, enables testers, and oversees result analysis and follow-up improvements.

  • Spark testers – Execute the evaluation by testing Spark against real ITSM tickets and documenting outcomes in a structured way.

Spark evaluation process

The Spark evaluation runs in four structured phases: preparation, initial evaluation using ITSM tickets, knowledge and configuration improvements, and a second evaluation round.

This phased approach allows teams to first establish a performance baseline, then improve knowledge and configuration, and finally measure progress through a second validation cycle.

Preparation for Spark evaluation

1. Selecting Spark testers


Select testers who are familiar with employee interactions and can accurately represent employee language and problem descriptions. This ensures realistic Spark conversation simulations and meaningful evaluation results.

The Spark champion may perform the evaluation or designate additional IT support team members as Spark testers. Choose testers who regularly work with ITSM tickets and communicate directly with employees.

Once the testers are selected, the Spark champion or a Nexthink Admin grants them access to Chat with Spark. Refer to the Getting started with Spark documentation for more information.

2. Training selected testers on Spark

The Spark champion enables selected testers by completing the enablement checklist.

This step ensures both tool access and methodological alignment. All testers should follow the same evaluation logic and documentation standards to ensure comparable results.

Initial evaluation using ITSM tickets

1. Executing testing using ITSM tickets

During the testing phase, Spark testers use real and recently raised ITSM tickets to evaluate how Spark handles authentic employee issues. They:

  1. Select a recently raised ticket from the ITSM platform.

  2. Identify the employee who reported the issue.

  3. Open Chat with Spark in Nexthink.

  4. Select Other employee and search for the employee associated with the ticket.

  5. Start the conversation as if they were that employee and describe the issue exactly as reported in the ticket.

  6. Follow Spark’s troubleshooting guidance and confirm whether the issue is resolved.

  7. Document evaluation results, e.g., in MS Excel.
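Documenting results in a consistent, machine-readable form makes the later analysis and comparison steps easier. As an illustration only (the record and field names below are assumptions, not an official Nexthink template), testers could log each test as a row in a shared CSV file that opens directly in MS Excel:

```python
import csv
from dataclasses import dataclass, asdict, fields

# Illustrative result record -- field names are assumptions, not a Nexthink template.
@dataclass
class SparkTestResult:
    ticket_id: str      # ITSM ticket used for the simulation
    issue_summary: str  # issue as described in the ticket
    resolved: bool      # did following Spark's guidance resolve the issue?
    tester: str
    notes: str = ""     # e.g., missing knowledge articles or actions

def write_results(path: str, results: list[SparkTestResult]) -> None:
    """Write evaluation results to a CSV file that MS Excel can open."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(SparkTestResult)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in results)

write_results("spark_round1.csv", [
    SparkTestResult("INC0012345", "VPN disconnects hourly", True, "a.tester"),
    SparkTestResult("INC0012399", "Outlook search fails", False, "a.tester",
                    "Spark lacked a relevant knowledge article"),
])
```

A shared file like this gives all testers the same documentation standard, which is what makes results comparable between evaluation rounds.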


If the issue requires employee interaction, Spark testers inform the employee before performing any action that affects their device.

Refer to the Using Chat with Spark and Communication channels documentation for more information.

2. Analyzing documented test results

After completing testing in Chat with Spark:

  • Review the documented test results.

  • If additional context is required, navigate to Spark > All Conversations and review the full conversation, including Spark’s responses and reasoning.

Reviewing the Reasoning tab ensures transparency in Spark’s decision-making process and helps identify gaps in knowledge, missing actions, or configuration improvements that may be required.

Consistent and complete documentation is critical. It enables objective comparison between evaluation rounds and provides structured input for improvement. Refer to the Record evaluation results documentation for more information.
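As a sketch of what an objective comparison can build on, a single baseline metric can be computed from the documented results. This is illustrative only; the `resolved` field name is an assumption carried over from whatever template the team uses:

```python
# Hedged sketch: a baseline metric over documented test results.
# The "resolved" field name is an assumption, not a Nexthink schema.
def resolution_rate(results: list[dict]) -> float:
    """Fraction of tested tickets where Spark's guidance resolved the issue."""
    if not results:
        return 0.0
    return sum(1 for r in results if r["resolved"]) / len(results)

# Hypothetical round-1 log: 2 of 3 tickets resolved.
round1 = [
    {"ticket_id": "INC001", "resolved": True},
    {"ticket_id": "INC002", "resolved": False},
    {"ticket_id": "INC003", "resolved": True},
]
baseline = resolution_rate(round1)
```

The same function can be run per issue category to surface the patterns of incorrect or incomplete responses that feed the improvement phase.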

Spark knowledge and configuration improvements

After Spark testers complete the first evaluation round, the Spark champion reviews the collected results and identifies patterns, focusing particularly on incorrect, incomplete, or suboptimal responses.

Based on the findings, the Spark champion:

  • Updates or refines existing knowledge content

  • Imports missing knowledge articles

  • Adjusts available actions where required

Improvements should be targeted and measurable so their impact can be validated during the second evaluation round.

Second evaluation round

Spark testers repeat the same structured method used in the first evaluation round. They:

  • Use new ITSM tickets to simulate employee interactions

  • Log all results using the same template

Finally, the Spark champion compares the results of the second round with the initial baseline to assess measurable improvements in Spark resolution quality, reasoning accuracy, and overall performance.
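The round-over-round comparison itself can be reduced to a simple delta between resolution rates. This is an illustrative sketch, not a Nexthink feature, and the example rates are invented:

```python
# Illustrative comparison of the second round against the initial baseline.
def improvement_pp(baseline_rate: float, round2_rate: float) -> float:
    """Change in resolution rate between rounds, in percentage points."""
    return round((round2_rate - baseline_rate) * 100, 1)

# Hypothetical numbers: 60% of tickets resolved in round 1, 75% in round 2.
delta = improvement_pp(0.60, 0.75)  # +15.0 percentage points
```

A positive delta after targeted knowledge and configuration changes is the measurable improvement the second round is meant to validate.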

This structured validation cycle ensures that Spark’s performance is systematically assessed and optimized before broader rollout, reducing risk and increasing confidence in production deployment.
