Forecast Value Add (FVA)

Author: Drew Shea, Created: 2025-12-18

Product Overview

Forecast Value Add (FVA) is an advanced analytics solution within the OneStream platform that enables organizations to objectively measure and visualize the accuracy of their forecasts.

FVA provides a comprehensive suite of accuracy metrics—including Mean Absolute Error (MAE), MAE Percentage, Bias Error, Bias Error Percentage, and Score Percentage—allowing users to assess model performance at a granular level. Interactive visualizations, such as plots comparing actuals against error percentages, make it easy to identify strengths and weaknesses in forecasting processes.
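As a rough illustration of what these metrics measure, the sketch below computes Mean Absolute Error, MAE Percentage, Bias Error, and Bias Error Percentage from paired actuals and forecasts using standard textbook definitions (normalizing the percentage metrics by the mean absolute actual). These definitions are assumptions; OneStream's exact formulas, and the Score Percentage metric in particular, are product-specific and not reproduced here.

```python
def forecast_error_metrics(actuals, forecasts):
    """Compute common forecast accuracy metrics (textbook definitions,
    assumed here; not necessarily OneStream's exact formulas)."""
    n = len(actuals)
    errors = [f - a for a, f in zip(actuals, forecasts)]
    mean_abs_actual = sum(abs(a) for a in actuals) / n

    mae = sum(abs(e) for e in errors) / n        # Mean Absolute Error
    bias = sum(errors) / n                       # Bias Error (signed mean error)
    return {
        "mae": mae,
        "mae_pct": mae / mean_abs_actual,        # MAE as a share of actuals
        "bias": bias,
        "bias_pct": bias / mean_abs_actual,      # signed bias as a share of actuals
    }

# Example: a forecast that over-predicts on average shows positive bias.
metrics = forecast_error_metrics(
    actuals=[100, 200, 300],
    forecasts=[110, 190, 330],
)
```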

With FVA, users gain actionable insights into forecast reliability and can drive continuous improvement in planning and decision-making.

Product Value Proposition

FVA empowers finance and analytics teams to make data-driven decisions by providing clear, side-by-side comparisons of SensibleAI Forecast predictions against customer benchmark forecasts and other traditional forecasting methodologies employed by the business. This process highlights accuracy improvements, identifies areas for forecast refinement, and supports continuous optimization of forecasting and planning processes. The dashboards facilitate transparent performance reviews and help organizations quantify the tangible benefits of adopting AI-driven forecasting.
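The side-by-side comparison described above can be sketched as a single number: in the forecasting literature, forecast value add is commonly expressed as the reduction in error relative to a benchmark forecast. The example below uses MAE as the error measure; this is an illustrative convention, not necessarily the exact calculation the FVA dashboards perform.

```python
def forecast_value_add(actuals, model_forecasts, benchmark_forecasts):
    """Value added by a model over a benchmark, measured as the drop in
    Mean Absolute Error (a common convention; assumed here for illustration).
    Positive result -> the model is more accurate than the benchmark."""
    def mae(forecasts):
        return sum(abs(f - a) for a, f in zip(actuals, forecasts)) / len(actuals)

    return mae(benchmark_forecasts) - mae(model_forecasts)

# Example: the model misses by 5 on average, the benchmark by 20,
# so the model adds 15 units of accuracy.
fva = forecast_value_add(
    actuals=[100, 200],
    model_forecasts=[105, 195],
    benchmark_forecasts=[120, 180],
)
```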

User Personas

  • SensibleAI Forecast Power Users: Use FVA to find opportunities for accuracy improvements that can be fed back into the SensibleAI Forecast modeling process (new features, new events, different models, etc.).
  • Finance & FP&A Teams: Use FVA dashboards to validate and improve forecasting processes, ensuring more reliable business planning.

Learn More

There are many places to learn more about SensibleAI Forecast:

  • Stay in this section to learn more about the core capabilities of Forecast Value Add.
  • Visit AI Powered Forecasting and Forecast Evaluation to learn more about the best practices around evaluating model accuracy.
