KPIs for gen AI: How to Measure Your Gen AI Application

Model quality

Tracking model quality KPIs from development through the model’s operational lifespan is not merely about upholding quality. It’s a proactive pathway to continuous enhancement and breakthrough innovation. By prioritizing model quality, you gain invaluable insight into how the model performs in the wild, as opposed to just on training data, while mitigating hallucinations.

It’s also important to evaluate the advantages and disadvantages of models at multiple levels, based on context, references, and sampling. Whatever evaluation framework you choose, you should test with a wide range of diverse data.

Monitoring key indicators before and after launch essentially takes the model’s pulse to ensure it’s delivering on its promise. As feedback is generated, it can be fed back into the model to further enhance future outputs. This cycle can be repeated again and again, creating a virtuous cycle of model improvements. At the same time, errors that go unnoticed can be reinforcing over time. Getting things right early and often is, therefore, paramount.

Here are just some of the metrics we recommend for tracking model quality:

  • Quality index: An analysis of multiple metrics aggregated into a single value to represent overall model performance (e.g., BLEU, ROUGE, SuperGLUE, BIG-bench, CIDEr, METEOR)
  • Error rate: The percentage of model responses that are incorrect or invalid. Human evaluation helps define and generate this metric.
  • Latency: The time delay between when a query is submitted to the model and when it returns the response. Latency depends on the model’s parallel processing capabilities, its architecture, and the deployment infrastructure and its availability.
  • Accuracy range: The baseline accuracy thresholds the model is expected to meet. For this metric, it is often helpful to establish a red team to analyze and challenge your model.
  • Safety score: The number of harmful categories and topics that may be considered sensitive for the business.
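A quality index like the one above can be sketched as a weighted aggregation of per-metric scores. The metric names, weights, and normalization below are illustrative assumptions, not a standard formula:

```python
# Minimal sketch of a "quality index": several evaluation metrics,
# each already normalized to the range [0, 1], combined into a single
# weighted score. Which metrics and weights to use is task-dependent.

def quality_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate normalized per-metric scores into one weighted value."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Example (hypothetical) values: weight ROUGE-L more heavily for a
# summarization-style task.
scores = {"bleu": 0.42, "rouge_l": 0.55, "meteor": 0.48}
weights = {"bleu": 1.0, "rouge_l": 2.0, "meteor": 1.0}

print(round(quality_index(scores, weights), 3))  # 0.5
```

Because the index collapses several signals into one number, it is best tracked alongside the individual metrics so regressions in any single dimension remain visible.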

System quality

To harness the full potential of your model, an end-to-end AI system is necessary to develop, tune, and deploy models at scale. This system should seamlessly integrate key components, including: data acquisition and pre-processing, context and prompt generation, model flow orchestration (whether parallel or sequential), an automated evaluation framework, and efficient management of both models and data.

Moreover, post-processing of results is crucial to ensure the output maximizes business value. The effectiveness of this orchestrated system hinges on successful integration with both upstream and downstream business processes, facilitating informed decision making.
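The orchestrated flow described above can be sketched as a chain of simple stages. The stage names and the placeholder model call below are assumptions for illustration, not an actual platform API:

```python
# Minimal sketch of sequential model flow orchestration: pre-processing,
# prompt generation, a model call, and post-processing chained as plain
# functions. A real system would swap call_model for a managed endpoint.

from typing import Callable

def preprocess(raw: str) -> str:
    # Data acquisition/cleaning stage (here: trim and normalize case).
    return raw.strip().lower()

def build_prompt(text: str) -> str:
    # Context and prompt generation stage.
    return f"Answer concisely: {text}"

def call_model(prompt: str) -> str:
    # Placeholder for a real model invocation (e.g., a hosted endpoint).
    return f"[model output for: {prompt}]"

def postprocess(output: str) -> str:
    # Post-processing stage: strip the placeholder wrapper.
    return output.replace("[model output for: ", "").rstrip("]")

def run_pipeline(raw: str, stages: list[Callable[[str], str]]) -> str:
    for stage in stages:
        raw = stage(raw)
    return raw

result = run_pipeline("  What is a KPI?  ",
                      [preprocess, build_prompt, call_model, postprocess])
print(result)
```

Keeping each stage a separate, testable unit is what makes an automated evaluation framework practical: any stage can be measured, swapped, or run in parallel without rewriting the whole flow.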

Many organizations, having spent a decade or more digitally transforming, now have a technology stack that resembles Frankenstein’s monster — a mishmash of technologies, systems, and frameworks that have been cobbled together over time by different teams and departments. Unfortunately, understanding how everything works mostly exists within employees’ heads and nowhere else, making knowledge loss a frequent and real risk.

Fostering a knowledge-sharing ecosystem and striving for standardized technologies and processes helps ensure interoperability, quality control, and scalability. Therefore, having a strong, unified platform like Google Cloud’s Vertex AI, for example, can bring more order and control. Without a strong system design, even the most sophisticated AI initiatives are likely to end up being reduced to mere experiments that deliver little to no business value.

Equally important is maintaining a high-quality data environment. The success of a gen AI project is deeply intertwined with the integrity of its data, as models inherit the flaws of the data used to train them. Without proper data governance, models can easily be trained on low-quality, biased, or irrelevant data, increasing the chances of hallucination or problematic outputs. To mitigate the possibility of models perpetuating harmful biases, businesses should invest in labeling, organizing, and monitoring their data.

Here are some metrics to consider for tracking system quality:

  • Data relevance: The degree to which all of the data is necessary for the current model and project. Be warned: extraneous data can introduce biases and inefficiencies that lead to harmful outputs.
  • Data and AI asset reusability: The percentage of your data and AI assets that are discoverable and reusable.
  • Throughput: The volume of information a gen AI system can handle in a specific period of time. Calculating this metric involves understanding the processing speed of the model, efficiency at scale, parallelization, and optimized resource utilization.
  • System latency: The time the system takes to return an answer. This includes any ingress- or egress-based networking delays, data latency, model latency, and so on.
  • Integration and backward compatibility: The upstream and downstream system APIs available to integrate directly with gen AI models. Also consider whether the next model version will impact systems built on top of existing models (beyond prompt engineering alone).
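Throughput and system latency, as defined above, can be derived from per-request timestamps. The (start, end) log format below is an illustrative assumption; in practice these numbers come from your monitoring stack:

```python
# Minimal sketch of computing throughput and latency statistics from
# per-request timestamps recorded by a (hypothetical) gen AI system.

from statistics import median

# (start_time_s, end_time_s) for each request, seconds since a shared epoch
requests = [(0.0, 0.8), (0.5, 1.9), (1.0, 1.6), (2.0, 4.5), (3.0, 3.7)]

latencies = [end - start for start, end in requests]   # per-request delay
window = max(end for _, end in requests) - min(start for start, _ in requests)

throughput = len(requests) / window   # requests handled per second
p50 = median(latencies)               # typical (median) latency
worst = max(latencies)                # tail latency

print(f"throughput={throughput:.2f} req/s, median={p50:.2f}s, worst={worst:.2f}s")
```

Tracking the tail (worst case) separately from the median matters here, because system latency includes networking and data delays that often spike even when the model itself is fast.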

Business impact

Businesses gain diverse value from gen AI deployments, whether through creative automation and increases in code quality or reduced costs for hiring, training, and onboarding. With their versatile applications, large language and gen AI models can be deployed across various departments of an organization, including marketing, logistics, design, programming, and even legal. Each team, with its unique functions and objectives, can leverage these models to identify and capitalize on opportunities for optimization.

However, adoption doesn’t happen overnight — ingraining new AI-powered behaviors requires patience and persistence. That’s why tracking usage metrics is crucial for understanding how real humans are interacting with the model over time. Monitoring adoption rates within an organization provides insight into whether gen AI is becoming truly embedded in workflows.

If gen AI capabilities are customer-facing, usage metrics can reveal if people find them valuable and how often they are utilized. By isolating these metrics, organizations can gain clear insights on the user experience, which could otherwise get lost in the model optimization cycle. It shifts the focus from nitty-gritty technical details to a bird’s-eye view of the model’s accessibility, reliability, and usability.

Below are the metrics we recommend for tracking business impact:

  • Adoption rate: The number of active users over the lifetime of a campaign or project divided by the total intended audience.
  • Frequency of use: The number of times queries are sent per user on a daily, weekly, or monthly basis.
  • Session length: The average duration of continuous interactions.
  • Queries per session: The number of queries users submit per session.
  • Query length: The average number of words or characters per query.
  • Abandonment rate: The percentage of sessions ended before users find answers.
  • User satisfaction: Surveys assessing user experience or other customer satisfaction metrics, such as Net Promoter Score (NPS).
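Several of these usage metrics can be computed directly from a session log. The record fields and numbers below are illustrative assumptions, not a real schema:

```python
# Minimal sketch of usage KPIs from a session log: adoption rate,
# queries per session, and abandonment rate. "found_answer" stands in
# for whatever success signal your product actually records.

sessions = [
    {"user": "a", "queries": 3, "found_answer": True},
    {"user": "a", "queries": 1, "found_answer": False},
    {"user": "b", "queries": 5, "found_answer": True},
    {"user": "c", "queries": 2, "found_answer": True},
]
intended_audience = 10  # total users the rollout targets (assumed)

active_users = {s["user"] for s in sessions}
adoption_rate = len(active_users) / intended_audience                       # 0.3
queries_per_session = sum(s["queries"] for s in sessions) / len(sessions)   # 2.75
abandonment_rate = sum(not s["found_answer"] for s in sessions) / len(sessions)  # 0.25

print(adoption_rate, queries_per_session, abandonment_rate)
```

Note that adoption rate counts distinct users rather than sessions, so repeat visits by the same user (like user "a" above) don’t inflate it.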

While usage metrics let you zoom in on gen AI adoption by your customers and organization, business value metrics can also provide you with the evidence that these AI investments are making a positive impact on your bottom line. The innovative possibilities of generative AI models mean that you can identify new areas for optimization in departments previously untouched by AI technology. These expansive capabilities are revolutionizing and streamlining departmental functions, unlocking new levels of efficiency and innovation.

Some examples of business value improvement metrics include:

  • Customer service
    • Reduction in average handling time and cost per interaction
    • Lift in customer satisfaction (NPS)
    • Agent productivity via gen AI assist tools
  • Marketing
    • Time saved (e.g., hours) from streamlined processes: brief writing, editing, collaboration, etc.
    • Higher return on ad spend (ROAS) due to increased personalization
    • Augmented creativity and idea generation
  • Healthcare
    • Increased time with patients by reducing administrative burdens
    • Better patient outcomes from clear, consistent care plans
    • Improved efficiency, reduced wait times, and higher care capacity
  • Retail
    • Lift in revenue per visit
    • Increases in sales through AI-driven product suggestions
    • Improvements in customer satisfaction/experience
  • Product development
    • Percentage of content influenced by generative AI tools
    • Employee hours saved from automating processes
    • Accelerated time-to-value from product launches

Source: https://cloud.google.com/transform/kpis-for-gen-ai-why-measuring-your-new-ai-is-essential-to-its-success
