📄️ Error Metrics Explained
Error metrics in time series forecasting are measures used to evaluate the accuracy of predictions made by forecasting models. They quantify the difference between the actual and predicted values, providing insight into the performance and effectiveness of the forecasting models.
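For intuition, here is a minimal sketch of two common error metrics, MAE and MAPE, computed in plain Python. The function names and data are illustrative, not taken from any specific article in this section.

```python
def mae(actual, predicted):
    """Mean Absolute Error: the average magnitude of the errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean Absolute Percentage Error: average error relative to actuals.

    Note: undefined when any actual value is zero.
    """
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual) * 100

# Illustrative actuals vs. forecasted values
actual = [100, 120, 90]
predicted = [110, 115, 95]
print(round(mae(actual, predicted), 2))   # average absolute gap, in the units of the series
print(round(mape(actual, predicted), 2))  # the same gap expressed as a percentage
```

MAE reports error in the units of the series, while MAPE normalizes by the actuals, which makes it easier to compare across series of different scales.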
📄️ Evaluation vs. Summarization Accuracy
When evaluating forecast accuracy, it is important to identify both the dimensional and frequency granularity at which accuracy should be evaluated and the granularity at which it should be summarized.
📄️ Ensuring 'Apples-to-Apples' Forecast Comparison
This article highlights the importance of aligning the comparison periods and dimensionality of your SensibleAI Forecast results with those of your benchmark forecast to ensure an 'apples-to-apples' accuracy comparison.
📄️ Dimensional Aggregation - Deployed Model Forecast
All SensibleAI Forecast use cases share the need to evaluate accuracy at the dimensionality at which the business currently forecasts.
📄️ Frequency Aggregation for Forecast Accuracy Comparison
This article outlines how an implementor should weigh producing a weekly project for a use case where monthly forecasts are desired. It also covers the processing challenges involved and offers tips for ensuring standardization.
📄️ Introduction to FVA Analysis - Win Margin TS (Filter) View
The Forecast Value Add (FVA) solution is critical both during project experimentation, to guide continuous improvement, and afterward, to identify how an optimal project compares to the stakeholder's benchmark forecasts.