Author: Drew Shea, Created: 2026-02-18

Routine Runs

Summary: A Routine Run is a single execution of a routine method. It captures everything about that execution - the inputs provided, the progress made, and the artifacts produced.


Overview

Every time you execute a routine method, you create a Routine Run. Think of it as pressing "play" on a routine - the Run represents that specific execution from start to finish.

A Routine Run captures:

| What      | Description                                           |
|-----------|-------------------------------------------------------|
| Inputs    | The parameters you provided                           |
| Execution | When it ran, how long it took, what resources it used |
| Outputs   | The artifacts (results) produced                      |
| Status    | Whether it succeeded, failed, or was cancelled        |

This creates a complete record that you can review, compare, and learn from.
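As a mental model, the pieces above can be sketched as a plain data record. The field names here are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RoutineRun:
    """Illustrative model of what a Routine Run records (hypothetical field names)."""
    method_name: str                      # which routine method was executed
    inputs: dict[str, Any]                # the parameters you provided
    status: str = "Created"               # Created/Queued/Running/Completed/Failed/Cancelled
    started_at: Optional[str] = None      # when execution began
    finished_at: Optional[str] = None     # when execution ended
    artifacts: list[str] = field(default_factory=list)  # names of produced outputs
```

Reviewing or comparing runs then amounts to comparing these records: same method, different inputs, different outputs.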


Understanding the Hierarchy

Routine Runs fit into a simple hierarchy:

Routine
/-- Routine Instance
    /-- Routine Run        <- You are here
        /-- Artifacts

Each Routine Run has exactly one parent Routine Instance and zero or more child Artifacts.

| Level            | Purpose                          | Example                                     |
|------------------|----------------------------------|---------------------------------------------|
| Routine          | The blueprint/template           | "KMeans Clustering Routine"                 |
| Routine Instance | A configured copy of the routine | "Customer Segmentation Analysis"            |
| Routine Run      | A single execution               | "December 15th clustering with k=5"         |
| Artifacts        | The outputs from that run        | Cluster assignments, centroid data, metrics |

Key Insight: You can execute methods on the same Routine Instance multiple times, each creating a new Run with its own inputs and outputs.


The Run Lifecycle

Every Routine Run progresses through a predictable lifecycle:


Status Descriptions

| Status    | Description                                      | What Happens Next                       |
|-----------|--------------------------------------------------|-----------------------------------------|
| Created   | Run configuration is defined but not yet started | Waiting to be started                   |
| Queued    | Run is waiting for available compute resources   | Will start when resources are available |
| Running   | Actively executing the routine method            | Progress updates visible                |
| Completed | Successfully finished                            | Artifacts are available                 |
| Failed    | Encountered an error                             | Error details logged for review         |
| Cancelled | Stopped by user or system                        | Partial results may be available        |
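The statuses above form a small state machine. A sketch of the legal transitions, inferred from the table (the exact transition set is an assumption, not a documented API):

```python
# Allowed status transitions, inferred from the status table (illustrative only).
TRANSITIONS = {
    "Created":   {"Queued", "Cancelled"},
    "Queued":    {"Running", "Cancelled"},
    "Running":   {"Completed", "Failed", "Cancelled"},
    "Completed": set(),   # terminal
    "Failed":    set(),   # terminal
    "Cancelled": set(),   # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a run may move from `current` to `target` status."""
    return target in TRANSITIONS.get(current, set())
```

Note that Completed, Failed, and Cancelled are terminal: once a run reaches one of them, its record is frozen.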


What Makes Up a Run

A Routine Run consists of several key components that work together:

1. Input Parameters

The values you provide to customize this specific execution. These are defined by the routine's method signature and validated before execution begins.

2. Method Target

Each run executes a specific method on the routine. Different methods produce different types of outputs:

| Method Type                          | Purpose                                            | Example                          |
|--------------------------------------|----------------------------------------------------|----------------------------------|
| Constructor (stateful routines only) | Initialize the routine instance with configuration | Setting up model hyperparameters |
| Standard Method                      | Perform the main computational work                | Fit clusters, classify records   |

3. Execution Context

The run tracks important execution details:

  • Who started the run (user identity)
  • When it started, finished, and was last updated
  • How much memory it's allocated
  • Where its artifacts will be stored

4. Artifacts

The results produced by a run. These are automatically saved and organized for retrieval.


Execution Modes

Routine Runs can execute in different modes depending on your needs:

Background Execution

The default mode for most runs. Processing happens asynchronously, freeing you to do other work while the routine executes.


Best for: Long-running computations, large datasets, production workloads

In-Memory Execution

Processing happens immediately and results are returned directly. Faster for small operations but blocks until completion.


Best for: Quick operations

In-Memory execution must be explicitly allowed by a routine method. Check the routine's documentation to confirm whether a given method supports it.
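With background execution, a client typically polls the run's status until it reaches a terminal state. A minimal polling sketch; the `get_status` callable is a stand-in for whatever status lookup your client provides, not a platform API:

```python
import time

TERMINAL_STATUSES = {"Completed", "Failed", "Cancelled"}

def wait_for_run(get_status, poll_seconds=1.0, timeout=300.0):
    """Poll a background run until it reaches a terminal status.

    `get_status` is any callable returning the run's current status
    string; how that status is fetched depends on your client.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("run did not reach a terminal status in time")
```

In-memory execution needs no such loop: the call blocks and returns results directly.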


Invocation Methods

How a run receives its input parameters:

Workflow Invocation

Input parameters are collected through a guided user interface. The system walks you through each required input step-by-step.

Advantages:

  • Step-by-step guidance
  • Built-in validation
  • Discoverable options
  • Reduced errors

Direct Invocation

Parameters are provided directly, typically through an API call.

Advantages:

  • Faster for experienced users
  • Better for automation

Run Configuration Options

When creating a run, you can configure several options that affect how it executes and what outputs are produced:

Artifact Storage

| Option          | Description                        | When to Use                               |
|-----------------|------------------------------------|-------------------------------------------|
| Store Artifacts | Save outputs to the MetaFileSystem | Production runs, results you need to keep |

Optional Outputs

| Option             | Description                         |
|--------------------|-------------------------------------|
| Include Statistics | Write statistics about the artifact |
| Include Previews   | Write a subset of the artifact      |

Resource Allocation

Memory allocation controls how much computational memory the run can use:

| Scenario       | Memory Allocation          |
|----------------|----------------------------|
| Small datasets | Default allocation         |
| Large datasets | Memory override (increase) |


Tracking Progress

While a run is executing, you can monitor its progress:

Progress Indicators

Status Messages

Routines can report what they're currently doing, giving you insight into the execution progress:

10%  - Loading input dataset...
25%  - Preprocessing and scaling features...
50%  - Fitting KMeans model (k=5)...
75%  - Calculating cluster metrics...
90%  - Saving artifacts...
100% - Complete!
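A routine that surfaces updates like these only needs to emit a percentage and a message. A tiny illustrative helper (not a platform API; the formatting just mirrors the example above):

```python
def report_progress(percent, message, sink=print):
    """Format and emit a progress update in the style shown above."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    line = f"{percent}% - {message}"
    sink(line)   # send to whatever consumes progress updates
    return line
```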

Organizing Runs

Labels

Attach descriptive labels to runs for easy filtering and discovery:

Run: Customer Segmentation Analysis
Labels: ["production", "clustering", "customer-analytics", "q4-2024"]

Common labeling strategies:

  • Environment: production, staging, development
  • Routine Type: clustering, classification, regression
  • Team: finance, marketing, operations
  • Time period: 2024-q4, december, weekly
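Filtering by labels then becomes a set-containment check. A sketch, assuming each run is represented as a dict with a `labels` list (an illustrative shape, not the platform's API):

```python
def filter_runs(runs, required_labels):
    """Return the runs that carry every label in `required_labels`."""
    wanted = set(required_labels)
    return [run for run in runs if wanted <= set(run.get("labels", []))]
```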

Attributes

Store additional metadata as key-value pairs:

Run Attributes:
{
  "business_unit": "Retail",
  "cost_center": "12345",
  "requested_by": "analytics-team",
  "priority": "high"
}

Run Summary

To summarize what can be provided when creating a Run:

| Option                           | Description                                       |
|----------------------------------|---------------------------------------------------|
| Method Name                      | The name of the method to execute                 |
| Run Name (optional)              | The name of the run                               |
| Description (optional)           | A description for the run                         |
| Input Parameters (if applicable) | The input parameters for the run                  |
| Include Statistics (optional)    | Whether the artifact(s) should include statistics |
| Include Previews (optional)      | Whether the artifact(s) should include previews   |
| Invocation Method Type           | Workflow vs. Direct                               |
| Memory Override (optional)       | Memory override for the run                       |
| Store Artifacts (optional)       | Whether artifacts should be saved                 |
| Execution Type                   | Background vs. In-Memory                          |
| Labels (optional)                | Labels for the run                                |
| Attributes (optional)            | Arbitrary JSON attributes for the run             |
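Collected into one structure, a run request might look like the following. The field names simply mirror the options above for illustration; they are not a documented wire format:

```python
# Hypothetical run request; only Method Name is unconditionally required.
run_request = {
    "method_name": "fit_clusters",          # assumed method name, for illustration
    "run_name": "Customer Segmentation - K=5 - Q4 2024 Data",
    "description": "Quarterly segmentation refresh",
    "input_parameters": {"k": 5},
    "include_statistics": True,
    "include_previews": True,
    "invocation_method_type": "Direct",
    "store_artifacts": True,
    "execution_type": "Background",
    "labels": ["production", "clustering"],
    "attributes": {"business_unit": "Retail"},
}

def missing_required(request):
    """List required fields absent from the request (per the table above)."""
    return [key for key in ("method_name",) if key not in request]
```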


Run File Storage

Every run has a dedicated storage location within the MetaFileSystem:

routine://
/-- instances_/
    /-- {routine_instance_id}/
        /-- internalvars_/
        /-- shared_/
        /-- runs_/
            /-- {routine_run_id}/           ← Your run's home
                /-- shared_/
                /-- artifacts_/             ← Where outputs live
                    /-- cluster_report/
                        /-- metadata_/
                        /-- data_/
                        /-- previews_/
                    /-- clustered_data/
                        /-- data_/
                        /-- metadata_/
                        /-- previews_/      ← If enabled
                        /-- statistics_/    ← If enabled

Note: This structure is managed automatically.


Best Practices

1. Name Runs Meaningfully

Good names help you find and understand runs later:

✅ Good: "Customer Segmentation - K=5 - Q4 2024 Data"
✅ Good: "Churn Classification - Random Forest - 100 estimators"
❌ Poor: "Run 1" or "test"

2. Use Labels Consistently

Establish labeling conventions for your organization:

Standard Labels:
-- Environment: production | staging | development
-- Routine Type: clustering | classification | regression | anomaly-detection
-- Frequency: daily | weekly | monthly | adhoc
-- Team: finance | operations | analytics

3. Review Failed Runs

When a run fails, check:

  • Error messages and logs
  • Input parameter validity
  • Data availability and quality
  • Resource allocation adequacy

4. Clean Up Test Runs

Remove test runs that are no longer needed to keep your instance organized and storage efficient.


Troubleshooting

Run Stuck in Queued

Symptom: Run stays in "Queued" status for a long time

Possible Causes:

  • All compute resources are in use
  • System is processing higher-priority runs
  • Resource allocation exceeds available capacity

Resolution: Wait for resources, or check with your administrator about system capacity

Run Failed Immediately

Symptom: Run fails shortly after starting (or upon creation)

Possible Causes:

  • Invalid input parameters
  • Missing required data
  • Configuration error

Resolution: Review the error message and validate your inputs

Artifacts Not Available

Symptom: Run completed but artifacts are missing

Possible Causes:

  • "Store Artifacts" was disabled
  • Run may not generate artifacts (e.g., Constructors)
  • Run executed in-memory without persistence

Resolution: Check run configuration; re-run with artifact storage enabled if needed


Summary

Routine Runs are the execution heartbeat of Xperiflow Routines:

| Aspect             | What It Provides                          |
|--------------------|-------------------------------------------|
| Complete Record    | Full audit trail of every execution       |
| Flexible Execution | Background or in-memory processing        |
| Progress Tracking  | Real-time status and progress updates     |
| Organized Outputs  | Structured artifact storage and retrieval |
| Reproducibility    | Captured inputs enable re-execution       |

By understanding runs, you can effectively execute routines, track their progress, organize your work, and access results - all while maintaining a complete history of your data science activities.
