Author: Drew Shea, Created: 2024-06-14

Overview

The OneStream SensibleAI offering provides an enterprise-grade, specialized compute infrastructure exclusively designed for AI workloads. This offering is seamlessly integrated into your existing OneStream environment, ensuring the same level of security and data protection you expect.

Leveraging a dedicated compute stack guarantees that your AI workloads can access the requisite computational resources. It also ensures that your core OneStream business operations, which have their own dedicated computing and storage resources, remain unaffected by AI tasks.

The SensibleAI offering features an elastic compute stack underpinned by a proprietary orchestration framework and bot infrastructure. This architecture is engineered to scale dynamically, meeting the computational demands of complex AI workflows, such as massively parallel model training involving hundreds of models running concurrently.

Info: Xperiflow is OneStream’s proprietary quantitative AI engine. It powers solutions such as SensibleAI Forecast and SensibleAI Studio within OneStream.


FAQs

General

What Apps/Solutions sit on top of OneStream Xperiflow?

Through the remainder of 2025, OneStream SensibleAI comprises the following solutions:

  • (General Availability) SensibleAI Forecast (FOR) - Highly accurate and explainable AI-powered timeseries forecasts at scale.
  • (General Availability) SensibleAI Studio (STU) - A library of AI algorithms that can plug and play with other business processes around OneStream.
  • (General Availability) AI Data Tools (DTL) - Streamline and secure data exploration and transformation.
  • (General Availability) AI System Diagnostics - Scan custom code in a OneStream application for specific conditions to identify performance bottlenecks and accelerate implementation times.
  • (General Availability) AI Narrative Analysis - AI-generated, insight-driven narrative automation that transforms financial commentary into clear, structured, executive-ready analysis directly within OneStream.
  • (General Availability) Xperiflow Cloud Tools (XCT) - The Xperiflow engine upgrade utility.
  • (General Availability) Xperiflow Administration Tools (XAT) - The SensibleAI user access control solution. It enables administrators to efficiently manage rate limiting and access restrictions for SensibleAI capabilities, ensuring secure, tailored access for user groups within your OneStream applications.
  • (Limited Availability) AI Account Recs - Augment Account Reconciliations with Anomaly Detection capabilities.
  • AI Utility Solutions - A suite of micro solutions to support various AI processes such as specialized data cleansing, data movement, user access controls, and more.

Architecture Overview

[Architecture diagram: OneStream SensibleAI Tier-enabled environment (Untitled Diagram-1750709613849.drawio)]

The above diagram details the architecture for a OneStream SensibleAI Tier-enabled environment. Let’s break down each section in detail:

  • Customer OneStream Environment - This is the overall OneStream environment/tenant. It encapsulates all compute, storage, vNet, network, and security-related cloud services. All Xperiflow compute and storage exist alongside your standard OneStream environment.

  • Core OneStream Resources - OneStream AI Service deployments do not affect your core OneStream deployment.

  • Xperiflow Resources - We have specifically designed the Xperiflow infrastructure to be both separate from your core OneStream resources and a scalable, efficient backbone for machine learning workloads. The Xperiflow resources are securely accessible through a reverse proxy that authenticates and authorizes every incoming request. All security is managed using Azure’s AAD tokens. The Xperiflow stack has several specialized resources that can be broken up into two layers – Compute and Storage.

  • Compute - The Xperiflow compute stack consists of three layers.

    • The first layer is a web service accessible only by configured OneStream users. The web service is protected with OneStream Identity Server (OIS), which acts as the gateway into the Xperiflow compute stack. This web service serves as the middleware between the two systems, transferring information and data from Xperiflow to the SensibleAI solutions. This layer also receives SensibleAI jobs from these solutions to execute in the lower layers.
    • The second layer of the Xperiflow stack is the orchestrator servers that run jobs. These servers host the proprietary orchestration system that organizes and dispatches thousands of tasks to the bot servers, to be run in parallel and/or sequentially. The orchestrator layer automates this distribution of work.
    • The third layer of the Xperiflow stack is the bot servers. These compute-heavy servers are specifically designed for floating-point math and excel at training ML models. They do all the heavy lifting in the system: they receive instructions from the orchestrator layer and return results to the system. This layer is also dynamic, scaling as requests come into the system.
  • Storage - The Xperiflow Storage layer has three components.

    • The first and most used components are the Azure SQL Servers and databases. These servers store large amounts of relational data, such as source data and output results from the system.

      • The Xperiflow stack always has a handful of default databases:

        • Framework database
        • AIS Data Sources database
        • Shared Store database
        • Ephemeral Store database
        • Routine Store database
    • We also dynamically create databases as users create projects in SensibleAI Forecast. This technology gives us the ability to provide the SLAs required and allows us to execute minute-to-minute rollbacks of our databases if needed.

    • The second storage component is Azure Storage Accounts. We leverage this technology to store snapshots of information the system needs. We store a small number of large files and a large number of small files as the system executes jobs and configurations.

    • The third storage component is our caches. We have several caches embedded throughout the system that help optimize every layer of computing clusters in the Xperiflow stack. These caches relieve pressure on expensive resources and speed up system performance.
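
The compute flow described above (the web service receiving jobs, an orchestrator fanning tasks out, and bot servers doing the heavy lifting) can be illustrated with a minimal fan-out sketch. All names here are hypothetical; this is not the actual Xperiflow API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical job: train one model per (series, algorithm) pair.
def train_model(task):
    series_id, algorithm = task
    # Stand-in for the floating-point-heavy training a bot server performs.
    return {"series": series_id, "algorithm": algorithm, "status": "trained"}

def orchestrate(tasks, max_bots=4):
    """Fan tasks out to a pool of workers (the 'bot servers') and gather results."""
    with ThreadPoolExecutor(max_workers=max_bots) as pool:
        return list(pool.map(train_model, tasks))

# Massively parallel model training reduces to many small independent tasks.
jobs = [(series, algo) for series in ("revenue", "units") for algo in ("arima", "xgboost")]
results = orchestrate(jobs)
```

In the real system the worker pool is a dynamically scaled fleet of bot servers rather than local threads, but the dispatch-and-gather pattern is the same.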

Security & Compliance

What security standards and certifications do you comply with?

OneStream is certified to ISO 27001 and undergoes SSAE18 SOC 1 and 2 Type 2 audits twice yearly.

How do you handle data privacy and security?

We follow a least-privileged access framework to ensure our support engineers do not have access to any of your infrastructure unless you grant access within a support ticket. Once access is granted, we follow a strict PIM (Privileged Identity Management) procedure established through Azure for support cases.

What practices are followed to ensure OneStream remains current with evolving standards?

OneStream maintains an AI Council that is comprised of SMEs from across the business, including Legal, Risk, Compliance, Development, and Security. This is in addition to other complementary working groups such as our Operations and Risk Committee led by our Chief Risk Officer and our Privacy Committee attended by privacy professionals from across the business, all of which have AI and horizon scanning as a key agenda item. We also have dedicated on-site sessions related to AI Governance to ensure OneStream complies and stays ahead of the pack with new regulations being proposed worldwide.

Data Management & Governance

Is my data secure?

Yes. All access to your data routes through Xperiflow’s dedicated web service, which is protected by an extensive authentication and authorization system. This web service is only accessible to preconfigured users. The environments are also single-tenant, meaning customer data is scoped entirely to its own environment. We leverage Azure’s AAD tokens to access all resources in the Xperiflow infrastructure stack.

Is OneStream training their own models off your data?

No. OneStream does not use your data to train our own models. The models are custom-built for your data and only used for your data.

Who has access to your SensibleAI data and models?

Only you and those to whom you grant access to your OneStream application. These additional parties may include:

  • OneStream Support team members, upon your request, to solve or troubleshoot an issue
  • OneStream/Partner consultants working on an implementation

We also have a role-based access control system called Xperiflow Administration Tools (XAT), where you can configure user-specific permissions around many areas of SensibleAI.

Architecture & Scalability

Does Xperiflow affect my core OneStream storage and compute?

No. Xperiflow includes dedicated storage and computing infrastructure specifically designed to manage models, data transformations, and other AI-related metadata. This is intentionally segregated from your standard OneStream storage and compute to prevent any resource conflicts or cannibalization of your important core OneStream processes.

Why do I need a separate storage and compute layer for Xperiflow?

Separate compute is required so that training and running all the machine learning models our solutions leverage does not affect OneStream's core workloads. Furthermore, these solutions have their own specific data, so we keep their storage separate from core OneStream as well.

Can Xperiflow be deployed in a non-Azure environment?

The OneStream SensibleAI offering, powered by Xperiflow, is available exclusively to OneStream Azure SaaS customers.

Where is Xperiflow deployed in relation to my core OneStream deployment?

Xperiflow is deployed alongside the core OneStream deployment. The deployment is in a separate resource group but is still a single-tenant deployment. Your Xperiflow resources are only used for your environment and are not shared across customers.

How do you ensure scalability and performance as data volumes grow?

There are different tiers of SensibleAI that provide appropriate storage/compute resource sizing based on the volume of data. The compute resources scale dynamically based on what is currently running in the environment.

What does the SensibleAI software stack look like?

OneStream began investing in building its proprietary AI stack in early 2018. The Xperiflow software is written in Python, which is the standard and widely adopted language for writing AI systems and capabilities. A proprietary task orchestration framework (comparable to Celery, Dagster, etc.) is optimized for massively concurrent Python processes. Data storage systems include Azure Storage Accounts and Azure SQL databases. Lastly, OneStream has built a C# SDK to interop with Xperiflow programmatically, given that core OneStream sits on the .NET stack.
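
The proprietary orchestration framework is compared above to tools like Celery and Dagster. As a rough illustration of what DAG-style task orchestration means, here is a minimal sketch using Python's standard library; the pipeline and task names are illustrative assumptions, not the real Xperiflow API:

```python
from graphlib import TopologicalSorter

# A hypothetical AI pipeline expressed as a DAG: each task maps to the set
# of upstream tasks it depends on.
pipeline = {
    "ingest": set(),
    "clean": {"ingest"},
    "feature_engineering": {"clean"},
    "train": {"feature_engineering"},
    "score": {"train"},
}

def run_pipeline(dag):
    """Resolve dependency order, then execute tasks in sequence."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        # In a real orchestrator, each task would be dispatched to the
        # compute layer (potentially many instances in parallel).
        pass
    return order

execution_order = run_pipeline(pipeline)
```

Frameworks in this category add parallel dispatch, retries, and state tracking on top of the dependency resolution shown here.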

How can I bring data into SensibleAI Forecast?

You can bring data into SensibleAI Forecast in various ways, including source data held within OneStream databases, flat file uploads, or data sourced directly from your source systems using Smart Integration Connector.

What versions of core OneStream are SensibleAI solutions compatible with?

The following bullets give a breakdown of the compatibility between OneStream Platform and SensibleAI solutions:

  • (v8.0.0-v8.5.0) OneStream

    • (+v3.0.0-v3.6.2) SensibleAI Forecast

      • Features Included:

        • Scenario Modeling
        • Forecast Overlays
        • More prebuilt data connectors
  • (+v9.0.0) OneStream

    • (+v4.0.0) SensibleAI Forecast

      • Features Included:

        • Hierarchical Forecasting
        • Improved explainability
        • Deployable Dashboards
    • (+v2.0.0) SensibleAI Studio

    • AI System Diagnostics

    • AI Account Reconciliations

    • AI Narrative Analysis

Although we recommend being on the OneStream v9.0.0+ platform to get the most out of your SensibleAI experience, this shouldn’t be a showstopper to beginning your OneStream SensibleAI journey, given there are still great AI capabilities on OneStream v8.0.0 - v8.5.0.

Access Controls & Authentication

Can I set security and access controls around certain models and projects?

Yes, Xperiflow Administration Tools (XAT) allows you to set access controls around certain SensibleAI Forecast projects (models, use cases, etc.) to limit who can see them and how they can interact with them.

What methods do you use for access control and authentication?

The Xperiflow web service is only accessible by users who have been configured in OneStream Identity Server. To access the web service and data in Xperiflow, a user must authenticate into OneStream. Access controls are managed in conjunction with OneStream access controls for application-level security and Xperiflow access controls, which handle access to projects and compute resources. Furthermore, we have infrastructure-level access control, locking down access to resources from certain networks and with Azure AAD tokens.
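
As a rough illustration of the layered checks described above (infrastructure token, application-level access, then project-level access), here is a hedged sketch; the function and field names are assumptions, not OneStream's actual implementation:

```python
# Layered authorization: deny at the first failed check, in order from the
# infrastructure layer down to the project layer. Names are illustrative.
def authorize(user, project, token_valid):
    """Apply the three access-control layers in order."""
    if not token_valid:                          # infrastructure layer (AAD token)
        return "denied: invalid token"
    if not user.get("onestream_access"):         # OneStream application layer
        return "denied: no application access"
    if project not in user.get("projects", ()):  # Xperiflow project layer
        return "denied: no project access"
    return "granted"

analyst = {"onestream_access": True, "projects": {"demand_forecast"}}
print(authorize(analyst, "demand_forecast", token_valid=True))
```

The ordering matters: a request that fails authentication never reaches the finer-grained project checks.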

Auditing & Monitoring

What kind of logging and monitoring do you provide for your AI models?

All historical SensibleAI data, model builds, and model configurations are stored for reference and are viewable within SensibleAI Forecast. This provides an audit trail of what has been run in the environment. Users have control over when they delete projects/data in SensibleAI Forecast. From a monitoring standpoint, the health scoring system in SensibleAI Forecast provides metrics for an end user to know when the models have started to degrade and when it is time to rebuild them to maintain acceptable levels of accuracy for your use case.
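
The health-scoring idea described above, flagging a rebuild when accuracy degrades, can be sketched as a simple threshold check. The error metric (MAPE) and the 25% tolerance are illustrative assumptions, not the solution's documented scoring rules:

```python
# Flag a rebuild when recent error (e.g. MAPE) exceeds the baseline error
# measured at build time by more than a relative tolerance.
def needs_rebuild(baseline_mape, recent_mape, tolerance=0.25):
    """Return True when recent error degrades past the allowed tolerance."""
    return recent_mape > baseline_mape * (1 + tolerance)

print(needs_rebuild(0.08, 0.09))  # within tolerance, no rebuild needed
print(needs_rebuild(0.08, 0.12))  # well past tolerance, time to rebuild
```

A production health score would typically blend several such signals (error drift, data drift, forecast bias) rather than a single metric.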

AI Ethics and Transparency

How do you ensure the AI models are interpretable for business users?

When OneStream adds a new AI model to its suite of capabilities, a core criterion is that it provides sufficient transparency into how the AI model generated its prediction. Almost all models within SensibleAI Forecast provide interpretable insights such as:

  • Feature Impact: Explains what drivers were most impactful for generating the model result.
  • Prediction Explanations: Explains which drivers pulled the forecast higher or lower in a given period than the typical average prediction generated by the model.
  • Feature Effect: A form of correlation analysis that showcases how a given driver value used by a model affected the model prediction.

Models that do not have certain interpretable capabilities are clearly marked for the user to see prior to deciding to use that particular model.
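
Feature Impact rankings like the one described above are commonly computed via permutation importance: shuffle one driver's values and measure how much the model's error worsens. The sketch below shows that general technique; it is not necessarily SensibleAI Forecast's exact method, and all names are illustrative:

```python
import random

# Permutation-style "Feature Impact": score each driver by how much shuffling
# it degrades model error relative to the unshuffled baseline.
def feature_impact(model, X, y, error_fn):
    base = error_fn(model(X), y)
    impacts = {}
    for col in X[0]:
        shuffled = [dict(row) for row in X]       # copy rows before mutating
        values = [row[col] for row in shuffled]
        random.shuffle(values)                     # break the column's signal
        for row, value in zip(shuffled, values):
            row[col] = value
        impacts[col] = error_fn(model(shuffled), y) - base
    return impacts

random.seed(0)  # deterministic shuffles for the demo
X = [{"price": float(i), "noise": random.random()} for i in range(20)]
y = [2.0 * i for i in range(20)]
model = lambda rows: [2.0 * r["price"] for r in rows]  # uses price, ignores noise
mae = lambda preds, actual: sum(abs(p - a) for p, a in zip(preds, actual)) / len(preds)

impacts = feature_impact(model, X, y, mae)
# Shuffling "noise" changes nothing; shuffling "price" hurts accuracy.
```

An irrelevant driver scores near zero, while a driver the model relies on scores high, which is what makes the ranking interpretable for business users.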

Portability

Can I bring my own models to SensibleAI Forecast?

We do not allow you to bring your own models into SensibleAI Forecast because we have specifically curated and rigorously tested a set of models known to ensure reasonable training times, accuracy, and high availability (low downtime, fault tolerance), which delivers and protects SensibleAI Forecast's value propositions for your organization. We are considering allowing this at a future date through SensibleAI Studio.

Can I export SensibleAI Forecast models to be used outside of the solution?

SensibleAI Forecast allows you to export a variety of tabular ancillary data that is generated by the models, including:

  • The model forecast values
  • The model Prediction Explanations
  • The model Feature Impact scores
  • The model Feature Effect scores
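
As an illustration of consuming such an export outside the solution, the sketch below serializes hypothetical exported rows to CSV. The column names are assumptions for illustration, not the solution's actual export schema:

```python
import csv
import io

# Hypothetical shape of exported forecast rows (illustrative columns only).
rows = [
    {"period": "2025-01", "forecast": 105.2, "prediction_explanation": "price +3.1"},
    {"period": "2025-02", "forecast": 98.7, "prediction_explanation": "seasonality -2.4"},
]

def to_csv(records):
    """Serialize exported model outputs to CSV for downstream tools."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

exported = to_csv(rows)
```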

Support & Service Level Agreements (SLAs)

Are there SLAs in place? What do they cover, and how are they enforced?

SensibleAI SLAs are the same as existing OneStream SLAs.

Generative AI Features

What generative AI features/solutions are available in SensibleAI Studio and Forecast?

SensibleAI Forecast:

  • Generative AI Forecast Explanations of insights and interpretability metrics.

SensibleAI Studio:

  • AI Data Tools - Natural Language to SQL
  • AI System Diagnostics - Generative AI Code Scanning to help improve code performance and debugging speed within OneStream, speeding up implementation time.
  • AI Narrative Analysis - AI-generated, insight-driven narrative automation that transforms financial commentary into clear, structured, executive-ready analysis directly within OneStream.

Which LLMs are being used to power SensibleAI solutions in OneStream?

OneStream leverages Azure OpenAI for its Generative AI capabilities. This includes, but is not limited to, GPT-4o, 4o-mini, o3, o4-mini, 4.1, 4.1-mini, and 4.1-nano. As part of the partnership between Microsoft Azure and OpenAI, Azure hosts the weights of OpenAI’s Generative AI models.

Where are these models hosted?

The models are deployed and secured behind Azure infrastructure in each Azure region.

Do you use customer data to train the AI models?

No. Your data is neither retained by nor used to train the Azure OpenAI models, and it is not sent to any third-party service. Because Azure hosts the OpenAI model weights, the model request/prompt never leaves the Azure ecosystem.

Does OneStream have a private agreement with LLM providers to ensure that no data is used to train the models?

OneStream does not have any private agreements with LLM providers. We use Azure OpenAI, which is governed by Microsoft's standard Azure OpenAI service terms.

Generative AI Response Accuracy and Feedback

What is the accuracy rate of the AI-generated outputs?

For solutions where the LLM prompt is generated internally at OneStream, QA Engineers have audited these prompts and ensured each response is accurate and useful for the user.

For solutions where a user provides the prompt directly, QA Engineers perform tests by asking contextually relevant questions. The accuracy of the responses is verified using QA Engineers' domain expertise.

Is there a mechanism for users to provide feedback when the AI output is incorrect?

Currently, no agentic systems are incorporated in SensibleAI Studio or SensibleAI Forecast, so an in-product user feedback mechanism is not applicable.

The generative AI integrations are vetted via an industry-standard Software Development Lifecycle process, including thorough Quality Assurance testing and verification.

What processes are in place to continuously improve and update the AI models?

OneStream’s Quality Assurance team can iteratively validate the model's accuracy and response across product releases using evaluation data sets.

Can I opt out of the Generative AI Capabilities?

Yes, an Admin user can configure settings in Xperiflow Administration Tools to turn off the Generative AI capabilities.
