📄️ Cross Validation: Establishing Splits
This document explains how to establish the Train, Validation, Test, and Holdout sets, and the important considerations to take into account to ensure an optimal Rapid Project Experimentation (RPE) setup for SensibleAI Forecast (FOR).
📄️ Cross Validation: Manual Model Setup
To provide additional insight alongside the Cross Validation: Establishing Splits article, the content below explains how to create a manual model for setting up the cross-validation splits.
📄️ Model Series: Univariate vs. Multivariate Modeling
With approximately 25 different algorithms in SensibleAI Forecast, it can be a challenge to keep track of what each of them does. Although every model optimizes on different features, events, seasonality, and growth trends, they all fall into two major categories: Univariate and Multivariate.
📄️ Forecasting Series: Scenario Forecasting
One of the major tenets of SensibleAI Forecast is the transparency it provides into forecasted values. When we analyze a utilized model, for example, we can see the individual features as well as the extent to which each impacted the final value of our target variable.
📄️ Merging and Overlaying Forecasts
During the course of RPE, Proofs of Value (POVs), or full-scale engagements, the results returned in a forecast table aren't necessarily representative of what we want our final deliverable to be.
📄️ Experiment Faster with Prediction Simulator
The Prediction Simulator is a key routine within SensibleAI Studio (SAIS). For power users, running full jobs can consume a lot of time throughout RPE.