(XPF) Xperiflow - Architecture FAQ
The OneStream AI Services offering provides an enterprise-grade, specialized compute infrastructure exclusively designed for AI workloads. This offering is seamlessly integrated into your existing OneStream environment, ensuring the same level of security and data protection you expect.
Leveraging a dedicated compute stack guarantees that your AI workloads can access the requisite computational resources. It also ensures that your core OneStream business operations, which have their own dedicated computing and storage resources, remain unaffected by AI tasks.
The AI Services offering features an elastic compute stack underpinned by a proprietary orchestration framework and bot infrastructure. This architecture is engineered to scale dynamically, meeting the computational demands of complex AI workflows, such as massively parallel model training involving hundreds of models running concurrently.
FAQs
General
What Apps/Solutions sit on top of OneStream Xperiflow?
Through the remainder of 2025, OneStream AI Services comprises the following solutions:
- (General Availability) SensibleAI Forecast (FOR) - Highly accurate and explainable AI-powered time-series forecasts at scale.
- (General Availability) SensibleAI Studio (STU) - A library of AI algorithms that plug and play with other business processes around OneStream.
- (General Availability) AI Data Manipulator (DMA) - Streamline and secure data exploration and transformation.
- (General Availability) AI System Diagnostics - Scan custom code in a OneStream application for specific conditions to resolve performance bottlenecks and accelerate implementation times.
- (General Availability) Xperiflow Cloud Tools (XCT) - The AI Services engine upgrade utility.
- (General Availability) Xperiflow Administration Tools (XAT) - The AI Services user access control solution, which lets administrators manage rate limiting and access restrictions for AI Services capabilities, ensuring secure, tailored access for user groups within your OneStream applications.
- (Limited Availability) AI Account Recs - Augment Account Reconciliations with anomaly detection capabilities.
- AI Services Utility Solutions - A suite of micro solutions supporting various AI processes such as specialized data cleansing, data movement, user access controls, and more.
Security & Compliance
What security standards and certifications do you comply with?
OneStream is certified to ISO 27001 and undergoes SSAE18 SOC 1 and 2 Type 2 audits twice yearly.
How do you handle data privacy and security?
We follow a least-privilege access framework to ensure our support engineers do not have access to any of your infrastructure unless you grant it within a support ticket. Once access is granted, a strict PIM (Privileged Identity Management) procedure established through Azure governs how support cases are handled.
What practices are followed to ensure OneStream remains current with evolving standards?
OneStream maintains an AI Council comprised of SMEs from across the business, including Legal, Risk, Compliance, Development, and Security. This is in addition to other complementary working groups, such as our Operations and Risk Committee led by our Chief Risk Officer and our Privacy Committee attended by privacy professionals from across the business, all of which have AI and horizon scanning as a key agenda item. We also hold dedicated on-site sessions on AI Governance to ensure OneStream complies with, and stays ahead of, new regulations being proposed worldwide.
Data Management & Governance
Is my data secure?
Yes. All authentication to your data routes through AI Services' dedicated web service, which is managed in a private VNet. This web service is only accessible from the private compute layers of OneStream, which requires you to log in to OneStream first. We use Azure AD (AAD) tokens to access all resources in the AI Services infrastructure stack.
Is OneStream training its own models on your data?
No. OneStream does not use your data to train our own models. The models are custom-built for your data and only used for your data.
Who has access to your AI Services data and models?
Only you and those to whom you grant access to your OneStream application. These additional parties may include:
- OneStream Support team members, upon your request, to solve or troubleshoot an issue
- OneStream/Partner consultants working on an implementation
Architecture & Scalability
Does AI Services affect my core OneStream storage and compute?
No. AI Services includes dedicated storage and compute infrastructure specifically designed for managing models, data transformations, and other AI-related metadata. This is intentionally segregated from your standard OneStream storage and compute to prevent resource conflicts with, or cannibalization of, core OneStream processes.
Why do I need a separate storage and compute layer for AI Services?
Dedicated compute is required so that training and running the machine learning models our solutions leverage does not affect OneStream's core workloads. Furthermore, these solutions have their own specific data, so their storage is kept separate from core OneStream as well.
Can AI Services be deployed in a non-Azure environment?
Currently, the OneStream AI Services offering is available exclusively to OneStream Azure SaaS customers.
Where is AI Services deployed in relation to my core OneStream deployment?
AI Services is deployed alongside the core OneStream deployment. The deployment is in a separate resource group but is still a single-tenant deployment. Your AI Services resources are only used for your environment and are not shared across customers.
How do you ensure scalability and performance as data volumes grow?
AI Services offers different tiers so that storage and compute resources are sized appropriately for your data volume. Compute resources scale dynamically based on what is currently running in the environment.
What does the AI Services software stack look like?
OneStream began investing in building its proprietary AI stack in early 2018. The AI Services software is written in Python, the standard and widely adopted language for building AI systems and capabilities. A proprietary task orchestration framework (comparable to Celery, Dagster, etc.) is optimized for massively concurrent Python processes. Data storage systems include Azure Storage Accounts and Azure SQL databases. Lastly, OneStream has built a C# SDK to interop with AI Services programmatically, given that core OneStream sits on the .NET stack.
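To make the orchestration idea concrete, here is a minimal sketch of fan-out task execution in the style of such a framework, using only the Python standard library. The actual Xperiflow framework is proprietary; `train_model` and `run_all` are hypothetical names used purely for illustration.

```python
# Illustrative only: fan out many independent training tasks and gather
# results as they complete. The real framework is proprietary; this uses
# stdlib concurrent.futures as a stand-in.
from concurrent.futures import ThreadPoolExecutor, as_completed

def train_model(model_id: int) -> dict:
    """Stand-in for a single model-training task."""
    return {"model_id": model_id, "status": "trained"}

def run_all(model_ids) -> list:
    """Submit one task per model and collect results as they finish."""
    results = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(train_model, m): m for m in model_ids}
        for fut in as_completed(futures):
            results.append(fut.result())
    return results
```

A production orchestration framework adds retries, dependency graphs, and scheduling on elastic compute, but the fan-out/gather pattern above is the core primitive behind running hundreds of model builds concurrently.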
How can I bring data into Sensible ML?
You can bring data into Sensible ML in various ways, including data held within OneStream databases, flat-file uploads, or data pulled directly from your source systems using Smart Integration Connector.
What versions of core OneStream are AI Services solutions compatible with?
The following bullets give a breakdown of the compatibility between OneStream Platform and AI Services:
- OneStream v8.0.0-v8.4.0
  - Sensible ML v3.0.0-v3.6.2
  - AI Library v1.0.0
  - Features included:
    - Scenario Modeling
    - Forecast Overlays
    - More prebuilt data connectors
- OneStream v9.0.0+
  - Sensible ML v4.0.0+
  - AI Library v2.0.0+
  - Features included:
    - Hierarchical Forecasting
    - Improved explainability
    - Deployable Dashboards
Although we recommend OneStream v9.0.0+ to get the most out of your AI Services experience, this shouldn't be a showstopper to beginning your OneStream AI Services journey, as there are still great AI Services capabilities available on OneStream v8.0.0-v8.4.0.
Access Controls & Authentication
Can I set security and access controls around certain models and projects?
Yes, the Xperiflow Administration Tools solution allows you to set access controls around certain Sensible ML projects (models, use cases, etc.) to limit who can see them and how they can interact with them.
What methods do you use for access control and authentication?
The AI Services web service is only accessible through a private OneStream VNet. To access the web service and data in AI Services, a user must authenticate into OneStream. Access controls are managed in conjunction with OneStream access controls for application-level security, while AI Services access controls handle access to projects and compute resources. Furthermore, infrastructure-level access controls restrict resource access to specific networks and require Azure AD (AAD) tokens.
Auditing & Monitoring
What kind of logging and monitoring do you provide for your AI models?
All historical AI Services data, model builds, and model configurations are stored for reference and viewable within Sensible ML. This provides an audit trail of what has been run in the environment. Users have control over when they delete projects/data in Sensible ML. From a monitoring standpoint, the health scoring system in Sensible ML provides metrics that tell an end user when models have started to degrade and when it is time to rebuild them to maintain acceptable levels of accuracy for your use case.
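The idea behind model-health monitoring, namely watching forecast error drift over a rolling window and flagging a rebuild when it crosses a threshold, can be sketched as follows. The window size and MAPE threshold here are hypothetical; Sensible ML's actual health scoring is internal to the product.

```python
# Illustrative sketch of model-health monitoring via a rolling error metric.
# Window and threshold are hypothetical example values.
from collections import deque

class HealthMonitor:
    def __init__(self, window: int = 12, mape_threshold: float = 0.15):
        self.errors = deque(maxlen=window)      # keep only the last N periods
        self.mape_threshold = mape_threshold

    def record(self, actual: float, forecast: float) -> None:
        """Record the absolute percentage error for one forecast period."""
        if actual != 0:
            self.errors.append(abs(actual - forecast) / abs(actual))

    def needs_rebuild(self) -> bool:
        """Flag the model when mean error over the window exceeds the threshold."""
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.mape_threshold
```

A real system would track multiple metrics (bias, coverage, data drift) rather than a single MAPE figure, but the degrade-then-rebuild loop is the same.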
AI Ethics and Transparency
How do you ensure the AI models are interpretable for business users?
When OneStream adds a new AI model to its suite of capabilities, a core criterion is that it provides sufficient transparency into how the AI model generated its prediction. Almost all models within Sensible ML provide interpretable insights such as:
- Feature Impact: Explains which drivers were most impactful in generating the model result.
- Prediction Explanations: Explains which drivers pulled the forecast higher or lower in a given period than the typical average prediction generated by the model.
- Feature Effect: A form of correlation analysis that showcases how a given driver value used by a model affected the model prediction.
Models that do not have certain interpretable capabilities are clearly marked for the user to see prior to deciding to use that particular model.
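Outputs like Feature Impact resemble standard model-agnostic techniques such as permutation importance: shuffle one driver at a time and measure how much model error worsens. The sketch below illustrates that generic technique only; it is not Sensible ML's internal implementation, and `feature_impact` is a hypothetical name.

```python
# Illustrative permutation-importance sketch: the more error rises when a
# driver's values are shuffled, the more the model depends on that driver.
import random

def mse(model, X, y):
    """Mean squared error of `model` (a callable row -> prediction)."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def feature_impact(model, X, y, n_features, seed=0):
    rng = random.Random(seed)
    base = mse(model, X, y)
    impacts = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the link between driver j and the target
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        impacts.append(mse(model, X_perm, y) - base)  # error increase
    return impacts
```

For a model that depends only on the first driver, shuffling that driver inflates the error sharply while shuffling an unused driver leaves it unchanged, which is exactly the contrast a Feature Impact chart visualizes.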
Portability
Can I bring my own models to Sensible ML?
We do not allow you to bring your own models into Sensible ML because we have specifically curated and rigorously tested a set of models known to deliver reasonable training times, accuracy, and high availability (low downtime, fault tolerance), protecting Sensible ML's value proposition for your organization. We are considering allowing this at a future date through our Sensible AI Library.
Can I export Sensible ML models to be used outside of the solution?
At this time, Sensible ML does not allow you to export the models outside of the solution or OneStream. However, you can export a variety of tabular ancillary data that is generated by the models including:
- The model forecast values
- The model Prediction Explanations
- The model Feature Impact scores
- The model Feature Effect scores
Support & Service Level Agreements (SLAs)
Are there SLAs in place? What do they cover, and how are they enforced?
AI Services SLAs are the same as existing OneStream SLAs.