(PFC) AI Account Recs Data Rule Tuning Guide
This guide provides practical guidance for configuring and tuning the Data Rules that power the AI Account Reconciliations Anomaly Detectors. Each Anomaly Detector ships with a set of default Data Rules designed to cover the most common use cases, but organizations often need to adjust these rules to better reflect their specific workflows, risk tolerance, and reconciliation patterns. The sections below cover common tuning scenarios — such as reducing over-flagging, catching more anomalies, isolating recent trends, and handling document type preferences — followed by a detailed breakdown of tuning options for each individual Anomaly Detector.
Common Scenarios:
Flagging Too Much
Summary:
When too many anomalies are flagged, first review the detector counts, then tighten P-Values or Z-Scores before adjusting Rolling Z-Scores. Gradually lower thresholds until anomaly rates align with the organizational needs. If the Data Rules do not include P-Values, Z-Scores, or Rolling Z-Scores, review the tuning guide for that specific anomaly detector or modify the rule to target specific accounts or entities.
Overview:
If the rules are triggering too many anomalies, they should be adjusted to be more restrictive. The counts of anomalies by anomaly detector should be reviewed to identify which detectors may need modification. If any of the following detectors are flagging too many reconciliations, the statistical thresholds should be tightened (e.g., lowering from a 5% threshold to a 1% threshold): Aging Changes, Number of I-Items, Number of Reconciliation Detail Items, Reconciliation Balance, I-Doc Sparsity, and Reconciliation Detail Text Field Length.
When tuning thresholds, it is recommended to first adjust P-Values or Z-Scores before modifying the Rolling Z-Score. If a Data Rule continues to flag too frequently, the Rolling Z-Score should then be tightened to a lower threshold. If there continues to be an excessive number of anomalies, the statistical thresholds should be lowered further until a suitable flagging rate is achieved.
For the other Anomaly Detectors that calculate more specific statistics and metrics outside of P-Values, Z-Scores, and Rolling Z-Scores, the Data Rules should be reviewed to identify where they can be made more stringent. Additionally, specific reconciliations can be targeted for inclusion in the anomaly detection, while irrelevant reconciliations can be excluded.
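As a rough illustration of why lowering a statistical threshold reduces the flag count, the sketch below approximates a P-Value with a simple two-sided z-test against each reconciliation's history. The field names, sample data, and test are illustrative assumptions, not the detector's internal calculation.

import math
from statistics import mean, stdev

def p_value(history, current):
    # Two-sided p-value of the current value against the historical mean and stdev.
    sd = stdev(history)
    if sd == 0:
        return 1.0
    z = (current - mean(history)) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

recs = {
    "1000-US": ([100, 102, 98, 101, 99, 100], 103.2),   # moderate deviation
    "1000-UK": ([100, 102, 98, 101, 99, 100], 140),     # extreme deviation
    "2000-US": ([500, 505, 498, 502, 499, 501], 503),   # in line with history
}
for threshold in (0.05, 0.01):   # tightening the rule from 5% to 1%
    flagged = [k for k, (hist, cur) in recs.items() if p_value(hist, cur) < threshold]
    print(f"threshold {threshold:.0%}: {len(flagged)} flagged -> {flagged}")

In this small sample, the 5% threshold flags two reconciliations while the 1% threshold flags only the extreme one, which is the effect described above.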
Flagging Too Little
Summary:
When too few anomalies are flagged, first review detector counts, then loosen P-Values or Z-Scores before adjusting Rolling Z-Scores. Gradually raise thresholds until anomaly rates align with organizational needs. If the Data Rules do not include P-Values, Z-Scores, or Rolling Z-Scores, review the tuning guide for that specific anomaly detector and loosen the rules if possible and deemed necessary, otherwise leave them as they are.
Overview:
If the rules are triggering too few anomalies, the rules should be adjusted to be less restrictive. The counts of anomalies by anomaly detector should be reviewed to identify which detectors may need modification. If any of the following detectors are flagging too few reconciliations, the statistical thresholds should be loosened (e.g., raising from a 5% threshold to a 10% threshold): Aging Changes, Number of I-Items, Number of Reconciliation Detail Items, Reconciliation Balance, I-Doc Sparsity, and Reconciliation Detail Text Field Length.
When tuning thresholds, it is recommended to first adjust P-Values or Z-Scores before modifying the Rolling Z-Score. If a Data Rule continues to flag too infrequently, the Rolling Z-Score should then be loosened to a higher threshold. If there continues to be too few anomalies, the statistical thresholds should be gradually increased further until a suitable flagging rate is achieved.
Anomaly Detectors that use metrics calculated for specific use cases may be tuned by reviewing the Data Rules and identifying areas where they can be loosened (e.g., reducing the percentage of prior periods with an R-Doc attached). However, in many scenarios with more specific Anomaly Detectors that do not rely on P-Values, Z-Scores, or Rolling Z-Scores, a lack of anomalies detected might indicate that there are no anomalies.
Identifying Recent Trends
Summary:
When attempting to capture historical variation and recent trends, the most effective strategy is to combine the P-Value and Rolling Z-Score to capture both long-term deviations and short-term shifts. Other pairings are possible but less comprehensive, while relying only on Rolling Z-Scores risks false positives by overemphasizing recent fluctuations at the expense of historical context.
Overview:
If recent trends in the data should be identified, it is recommended to combine the P-Value and Rolling Z-Score in a single Data Rule and require both conditions are met before a reconciliation is flagged. This approach balances the long-term view of the P-Value with the short-term sensitivity of the Rolling Z-Score. Alternatively, Data Rules can be written using a combination of Z-Score, Tukey Fence Flag, or IQR Flag with the Rolling Z-Score, but this is less robust, since the P-Value offers the most comprehensive measure to compare the current value to the historical distribution. Data Rules can also be written to rely solely on Rolling Z-Scores to focus only on the most recent periods. However, this increases the risk of false positives because a deviation from the last three periods may still fall within the broader historical pattern. In summary, the following options should be considered:
- P-Value and Rolling Z-Score provide balanced detection that captures both recent shifts and the most comprehensive historical view.
- Z-Score, Tukey Fence Flag, or IQR Flag combined with Rolling Z-Score is suitable when P-Value is not desired, while still incorporating both short-term and historical shifts in anomaly flagging.
- P-Value, Z-Score, Tukey Fence Flag, or IQR Flag can be applied to detect anomalies based solely on the historical distribution.
- Rolling Z-Score alone focuses exclusively on short-term shifts, with less consideration of longer-term historical patterns.
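For the first option in the list above, a minimal sketch of the combined check appears below, assuming the P-Value is approximated with a z-test against the full history and the Rolling Z-Score uses the three most recent periods; both formulas and the sample data are assumptions for illustration only.

import math
from statistics import mean, stdev

history = [100, 101, 99, 100, 102, 98, 100, 99, 101, 100]   # oldest -> newest
current = 115

z = (current - mean(history)) / stdev(history)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

last3 = history[-3:]                                  # three most recent periods
rolling_z = (current - mean(last3)) / stdev(last3)

# Balanced rule: require both a historical and a recent deviation (increase side).
flag = p_value < 0.05 and rolling_z > 1.960
print(f"p={p_value:.3f}, rolling_z={rolling_z:.2f}, flagged={flag}")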
Exclude Specific Accounts
Summary:
Sometimes, specific reconciliations or a group of reconciliations should be excluded from a Data Rule to avoid unnecessary over-flagging. To do this, use the SAccount and SEntity fields to remove accounts and entities that should not be considered in the Data Rule.
Overview:
Organizations might want to exclude specific reconciliations or groups of reconciliations from specific Data Rules. To exclude individual reconciliations, the SAccount and SEntity fields can be used in combination to remove them from being flagged by a Data Rule. Reconciliations can also be excluded by using only the SAccount field (to remove multiple reconciliations that share the same account name across Entities) or only the SEntity field (to remove all reconciliations within that particular Entity).
Excluding reconciliations is appropriate when accounts inherently exhibit behaviors that change frequently. For example, it may make sense to exclude cash accounts from the Data Rule for the Reconciliation Balance Anomaly Detector, since cash balances can fluctuate widely. In addition, a tailored data rule can be created for individual accounts or groups of accounts with either more stringent or looser thresholds. In the cash account example, a rule with stricter requirements might be created to better reflect expected variability and only flag extreme anomalies.
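A minimal sketch of account and entity exclusion appears below; the SAccount and SEntity field names come from this section, while the data shape and exclusion lists are hypothetical.

excluded_accounts = {"Cash - Operating"}              # exclude across all Entities
excluded_pairs = {("Petty Cash", "Entity-UK")}        # exclude one account/entity pair

recs = [
    {"SAccount": "Cash - Operating", "SEntity": "Entity-US"},
    {"SAccount": "Petty Cash",       "SEntity": "Entity-UK"},
    {"SAccount": "Accrued Expenses", "SEntity": "Entity-US"},
]

# Only reconciliations that survive the exclusions are evaluated by the Data Rule.
in_scope = [
    r for r in recs
    if r["SAccount"] not in excluded_accounts
    and (r["SAccount"], r["SEntity"]) not in excluded_pairs
]
print([r["SAccount"] for r in in_scope])   # ['Accrued Expenses']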
Creating Critical and Warning Rules
Summary:
Organizations can add a combination of Data Rules at the warning and critical levels if they believe having both will aid their workflow process. When creating Data Rules with multiple severity levels, bounds need to be established to avoid simultaneously triggering the warning and critical rules.
Overview:
Currently, none of the default Data Rules include a combination of warning and critical rules for an Anomaly Detector. However, the capability exists to create Data Rules at the warning and critical level. When making rules at various severity levels, care should be taken to ensure that a warning rule and a critical rule do not flag at the same time. To accomplish this, bounds need to be provided within the warning rule so that the warning and critical rules do not overlap.
One example of this is creating separate Reconciliation Balance increase and decrease warning and critical rules that follow the default Data Rules while adding a check for an extreme change in the balance. The new warning and critical rules are shown as follows:
Warning:
- Increase: PValue < 0.01 and ZScore > 2.576 and RollingZScore > 2.576 and BalanceAbsDiff < 10,000,000
- Decrease: PValue < 0.01 and ZScore < -2.576 and RollingZScore < -2.576 and BalanceAbsDiff < 10,000,000
Critical:
- Increase: PValue < 0.01 and ZScore > 2.576 and RollingZScore > 2.576 and BalanceAbsDiff >= 10,000,000
- Decrease: PValue < 0.01 and ZScore < -2.576 and RollingZScore < -2.576 and BalanceAbsDiff >= 10,000,000
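The sketch below shows how the BalanceAbsDiff bound keeps the two severities mutually exclusive for the increase direction (the decrease direction is analogous); it is an illustrative evaluation using the thresholds above, not product code.

def severity(p_value, z_score, rolling_z, balance_abs_diff):
    # Shared statistical conditions from the increase rules above.
    base_increase = p_value < 0.01 and z_score > 2.576 and rolling_z > 2.576
    if base_increase and balance_abs_diff >= 10_000_000:
        return "Critical"
    if base_increase and balance_abs_diff < 10_000_000:
        return "Warning"
    return None   # no anomaly

print(severity(0.001, 3.1, 3.0, 4_500_000))    # Warning
print(severity(0.001, 3.1, 3.0, 25_000_000))   # Critical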
Flagging Anomalies by Direction of Change
Summary:
Anomaly Detector Data Rules, when applicable, can be separated based on whether the detected anomaly is an increase or decrease by using metrics such as Z-Score, Rolling Z-Score, or period-over-period differences. These metrics can be applied individually or in combination, with thresholds set at various statistical levels or used solely to indicate direction, providing flexibility in how anomalies are flagged.
Overview:
For some Data Rules, it makes sense to create two rules that identify whether the anomaly detected was an increase or decrease from the prior periods. The Anomaly Detectors whose default rules target both increases and decreases are as follows:
- Reconciliation Balance
- Number of I-Items
- Number of Reconciliation Detail Items
- Aging Changes
To create a rule that separates whether the anomaly was an increase or a decrease, a statistic that captures the direction of the change must be used. Statistics that capture this include Z-Score, Rolling Z-Score, and any difference metric from the previous period.
The Z-Score captures the current period's direction compared to the reconciliation's entire history. The Rolling Z-Score compares the direction of the change to the three most recent periods. Finally, the difference metric compares the current period to the immediately preceding period to identify the direction. These three metrics can be used on their own or in combination to detect the direction of the anomaly.
Additionally, for the Z-Score and Rolling Z-Score statistics, the thresholds can be set at different statistical thresholds or used just to signal direction (i.e., >/< 0). Here are the combinations of metrics that can be used to signal direction:
One Metric:
- Z-Score (5% statistical threshold):
  - Increase: ZScore > 1.960
  - Decrease: ZScore < -1.960
- Rolling Z-Score (direction only):
  - Increase: RollingZScore > 0
  - Decrease: RollingZScore < 0
- Difference (from previous period):
  - Increase: DifferencePrevPeriod > 0
  - Decrease: DifferencePrevPeriod < 0
Two Metrics:
- Z-Score and Rolling Z-Score (5% statistical threshold):
  - Increase: ZScore > 1.960 and RollingZScore > 1.960
  - Decrease: ZScore < -1.960 and RollingZScore < -1.960
- Z-Score and Difference (direction only):
  - Increase: ZScore > 0 and DifferencePrevPeriod > 0
  - Decrease: ZScore < 0 and DifferencePrevPeriod < 0
- Rolling Z-Score and Difference (5% statistical threshold):
  - Increase: RollingZScore > 1.960 and DifferencePrevPeriod > 0
  - Decrease: RollingZScore < -1.960 and DifferencePrevPeriod < 0
All Metrics:
- 5% statistical threshold:
  - Increase: ZScore > 1.960 and RollingZScore > 1.960 and DifferencePrevPeriod > 0
  - Decrease: ZScore < -1.960 and RollingZScore < -1.960 and DifferencePrevPeriod < 0
- Direction only:
  - Increase: ZScore > 0 and RollingZScore > 0 and DifferencePrevPeriod > 0
  - Decrease: ZScore < 0 and RollingZScore < 0 and DifferencePrevPeriod < 0
Document Type Indifference
Summary:
Some organizations treat all supporting documentation types as the same and do not want anomalies triggered for a missing R-Doc when a T-Doc is already attached to the reconciliation. In these cases, Data Rules can be tuned to prevent unnecessary warnings by checking if any document type was attached to the reconciliation (i.e., I-Doc, R-Doc, S-Doc, or T-Doc).
Overview:
Some organizations are indifferent to the type of supporting documentation that Preparers upload to support their reconciliations (i.e., I-Doc, T-Doc, R-Doc, or S-Doc). These organizations do not want an R-Doc Sparsity Warning anomaly to flag when the reconciliation has a T-Doc attached instead. The default set of Data Rules will trigger a warning in this scenario. However, the rules can be easily tuned to avoid creating an anomaly when there is some form of supporting documentation attached by adding each document type to each document Data Rule. The following Anomaly Detectors use a True/False field to indicate if a document exists in the current period and can be configured by checking that the following fields equal False:
- R-Doc Sparsity: HasRDoc = False
- S-Doc Sparsity: HasSDoc = False
- T-Doc Sparsity: HasTDoc = False
Meanwhile, the I-Doc Sparsity Anomaly Detector checks that each I-Item has an I-Doc attached to it and can be set as follows to ensure every I-Item has an I-Doc attached:
IItemsPctWithIDoc = 100
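A minimal sketch of the tuned check appears below, assuming each reconciliation exposes the HasRDoc, HasSDoc, HasTDoc, and IItemsPctWithIDoc fields named above; the data shape is hypothetical.

def r_doc_sparsity_flag(rec):
    # Tuned R-Doc rule: only flag when no form of supporting documentation exists.
    return (rec["HasRDoc"] is False
            and rec["HasSDoc"] is False
            and rec["HasTDoc"] is False
            and rec["IItemsPctWithIDoc"] < 100)

rec = {"HasRDoc": False, "HasSDoc": False, "HasTDoc": True, "IItemsPctWithIDoc": 40}
print(r_doc_sparsity_flag(rec))   # False: a T-Doc is attached, so no anomaly is raised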
Anomaly Detectors Overview:
Aging Changes
Summary:
The Aging Changes Anomaly Detector flags reconciliations when the average aging days increase or decrease significantly compared to prior period detail items (I-Items, T-Items, S-Items, and X-Items). The default rules use P-Values and Z-Scores to detect changes, but they can be tuned by adjusting thresholds or by switching to Rolling Z-Scores to capture more recent changes. Additional rules can target percentage changes, aging items only, or unusually high maximum aging days to better align with organizational needs.
Overview:
The default Aging Changes Anomaly Detector Data Rules include two rules that identify whether the anomaly is an increase or a decrease compared to the historical data for I-Items, T-Items, S-Items, and X-Items. If these Data Rules are not working as expected, much of the tuning can be accomplished by slightly modifying them. The default Data Rules are as follows:
- AgingChangesIncreaseWarning: PValue < 0.05 and AvgAgingDaysDeltaPrevPeriod > 0 and ZScore > 0
- AgingChangesDecreaseWarning: PValue < 0.05 and AvgAgingDaysDeltaPrevPeriod < 0 and ZScore < 0 and NumItems > 0
It should be noted that the Aging Changes Decrease Warning has NumItems > 0. This ensures that reconciliations that no longer have detail items attached do not get flagged for an aging change decrease. Otherwise, the increase and decrease rules are the reverse versions of each other. If reconciliations are being flagged but the most recent periods do not seem to deviate much from the current period, the ZScore metric can be switched with the RollingZScore metric and given a significant value as follows:
- AgingChangesIncreaseWarning: PValue < 0.05 and AvgAgingDaysDeltaPrevPeriod > 0 and RollingZScore > 1.645 (Rolling Z-Score 10% Threshold)
- AgingChangesDecreaseWarning: PValue < 0.05 and AvgAgingDaysDeltaPrevPeriod < 0 and RollingZScore < -1.645 and NumItems > 0 (Rolling Z-Score 10% Threshold)
- AgingChangesIncreaseWarning: PValue < 0.05 and AvgAgingDaysDeltaPrevPeriod > 0 and RollingZScore > 1.960 (Rolling Z-Score 5% Threshold)
- AgingChangesDecreaseWarning: PValue < 0.05 and AvgAgingDaysDeltaPrevPeriod < 0 and RollingZScore < -1.960 and NumItems > 0 (Rolling Z-Score 5% Threshold)
Another modification that can be made is decreasing the P-Value if too many reconciliations are getting flagged compared to the historical context, or increasing the P-Value if too few reconciliations are getting flagged. This can be accomplished as follows:
- AgingChangesIncreaseWarning: PValue < 0.10 and AvgAgingDaysDeltaPrevPeriod > 0 and RollingZScore > 1.645 (Too few flags: P-Value 10% Threshold)
- AgingChangesDecreaseWarning: PValue < 0.10 and AvgAgingDaysDeltaPrevPeriod < 0 and RollingZScore < -1.645 and NumItems > 0 (Too few flags: P-Value 10% Threshold)
- AgingChangesIncreaseWarning: PValue < 0.01 and AvgAgingDaysDeltaPrevPeriod > 0 and RollingZScore > 1.645 (Too many flags: P-Value 1% Threshold)
- AgingChangesDecreaseWarning: PValue < 0.01 and AvgAgingDaysDeltaPrevPeriod < 0 and RollingZScore < -1.645 and NumItems > 0 (Too many flags: P-Value 1% Threshold)
If the organization wants to flag reconciliations based on more recent changes than the statistics above, the AvgAgingDaysPctChg field can be utilized to identify an increase or decrease in the percentage change that they consider alarming. Rules can also be created using AgingItemsOnlyAvgDeltaPrevPeriod or AvgAgingDaysDeltaPrevPeriodAbs to spot the difference between the current period and prior periods for aging items only, and to flag cases where the change in average aging days exceeds acceptable limits. Finally, an informational rule can be created to flag reconciliations that contain an item with an exceptionally high number of aging days. This can be done by using the MaxAgingDays metric and determining what the ideal maximum number of aging days is for the organization before it gets flagged.
Aged Item Commentary
Summary:
The Aged Item Commentary Anomaly Detector flags reconciliations when aged items lack required commentary on detail items (I-Items, T-Items, S-Items, and X-Items) by comparing current-period commentary rates to those of prior periods. The default rules focus on older aging buckets (61+ days), but organizations can expand them to include earlier buckets or apply stricter rules to flag any missing commentary. Data Rules can also be configured with percentage thresholds to ensure that aged detail items with comments consistently meet a set threshold.
Overview:
The Aged Item Commentary Anomaly Detector buckets the detail items on a reconciliation based on the detail items' aging and checks if the Items have comments. It uses the default aging periods (0, 31, 61, and 91) to bucket the detail items on a reconciliation based on their aging and creates metrics for 31 (i.e., aging between 31-61), 61 (i.e., aging between 61-91), and 91 (i.e., aging 91+). The default rules check the third index and above (61 and 91 in this example) to determine if the Reconciliation has items aged within that bucket and if the percentage of items in that bucket without comments has increased compared to the prior period. If it has, then the reconciliation is flagged. Some organizations might want to expand the Data Rules to include detail items aging from 31 to 61 days so they are also flagged. To do this, the following rule would be created:
- AgedItemCommentaryWarning31: HasAgedItems31 = True and PctNoComment31Diff > 0
Custom Data Rules can also be created for this Anomaly Detector to be more stringent. Some organizations might want to flag a reconciliation any time a detail item is aging and does not have a comment attached. The rule above and the default rules take into account how the reconciliation was prepared in the prior periods and only flag if there was a drop in commentary compared to the prior period. However, more stringent Data Rules can be created, such as the following:
- AgedItemCommentaryWarning61: HasAgedItems61 = True and Aged61NoComment = True
- AgedItemCommentaryWarning91: HasAgedItems91 = True and Aged91NoComment = True
Finally, Data Rules can be created that check whether the Reconciliation has aging Items and whether a set percentage of the Items within that Aging Periods bucket do not have a comment. The Data Rules can be set to evaluate any percentage between 0 and 100; below is an example implementation using an 80% cutoff:
- AgedItemCommentaryWarning61: HasAgedItems61 = True and PctNoComment61Current < 80
- AgedItemCommentaryWarning91: HasAgedItems91 = True and PctNoComment91Current < 80
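A rough sketch of the bucketing logic described above follows, using the default aging breaks (31, 61, 91); the item data shape and field names are assumptions for illustration.

def pct_no_comment(items, low, high=None):
    # Share of detail items in an aging bucket that have no comment.
    bucket = [i for i in items
              if i["aging_days"] >= low and (high is None or i["aging_days"] < high)]
    if not bucket:
        return None   # no aged items fall within this bucket
    return 100 * sum(1 for i in bucket if not i["comment"]) / len(bucket)

items = [
    {"aging_days": 45, "comment": "Pending invoice"},
    {"aging_days": 70, "comment": ""},
    {"aging_days": 95, "comment": ""},
    {"aging_days": 10, "comment": ""},
]
print(pct_no_comment(items, 31, 61))   # 0.0   -> the 31-60 day bucket is fully commented
print(pct_no_comment(items, 61, 91))   # 100.0
print(pct_no_comment(items, 91))       # 100.0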
Check Balance I-Items
Summary:
The Check Balance I-Items Anomaly Detector flags reconciliations that contain both balance check items and manual items by default, but it can be customized to require more manual items, evaluate the percentage of items, or compare changes in item percentages across periods.
Overview:
The Check Balance I-Items Anomaly Detector has a default Data Rule that checks whether the current period has both a B-Item (Balance Check) and I-Items (Manual Items) and flags an anomaly if it does. The majority of implementations will not need to tweak or modify the default Data Rule. However, there are a few custom changes that can be made to the Data Rules. First, the Data Rules can be adjusted to require a larger number of Manual Items, such as the following:
- Increase I-Items: IItemCount > 1 and BItemCount > 0
Additionally, the Anomaly Detector calculates the percentage of B-Items and I-Items relative to the total number of detail items to flag cases where the organization finds an unacceptable percentage of I-Items when the reconciliation has B-Items. Finally, the Anomaly Detector can compare the current period's percentage of B-Items to the prior period's percentage of B-Items to see if the percentage has changed, using BItemPctDiff. Similarly, the difference in the percentage of I-Items in the current and prior periods is captured in IItemPctDiff.
I-Doc Sparsity
Summary:
The I-Doc Sparsity Anomaly Detector flags reconciliations when the percentage of I-Items with I-Docs decreases compared to historical patterns. The thresholds in the default rules can be adjusted, or new custom Data Rules can be created to consider more recent periods, check whether any other type of documentation is attached, or require that a set percentage of I-Items have I-Docs attached.
Overview:
The I-Doc Sparsity Anomaly Detector identifies decreases in the percentage of I-Items with I-Docs attached based on historical patterns. Unlike other document-related Anomaly Detectors, which only identify if at least one document exists in a period, the I-Doc Sparsity detector reviews how they are being used. Due to this, it follows some of the other Anomaly Detectors and collects statistics for P-Value, Z-Score, Rolling Z-Score, IQR Flag, and Tukey Fence Flag. The default Data Rule is as follows:
IItemCount > 0 and PValue < 0.05
This Data Rule does two things. First, it ensures that reconciliations do not get flagged when I-Items do not exist (i.e., it’s impossible for I-Docs to be attached). Second, it checks for decreases in the current period compared to prior periods using the P-Value statistic. In most scenarios, the default rules should meet the needs of the organization. However, there might be a few potential modifications that can be made. If the Data Rule is flagging too many or too few reconciliations, the default thresholds can be modified:
- IItemCount > 0 and PValue < 0.01 (More stringent - flagging at 1% threshold)
- IItemCount > 0 and PValue < 0.1 (Less stringent - flagging at 10% threshold)
If the rule is still not performing as expected, additional modifications can be made. If the current period does not appear to decrease compared to the most recent prior periods, the Rolling Z-Score statistic can be used to compare the current period to the prior three periods. As a one-sided test (i.e., only looking for decreases), the Rolling Z-Score has the following critical values: -1.282, -1.645, and -2.326, which correspond to 10%, 5%, and 1% thresholds, respectively. The following rule flags the reconciliation at the 5% threshold as shown below:
IItemCount > 0 and PValue < 0.05 and RollingZScore < -1.645
Additional informational rules can be created to flag reconciliations when they are not being performed according to the organization's expectations.
- IQRFlag = True and IItemCount > 0 - Flags if the current period has at least one I-Item, but the current period falls in the bottom 25% of the percentage of I-Items with I-Docs attached compared to historical values.
- IItemsPctWithIDoc < 80 and IItemCount > 0 - Flags the reconciliation if it has I-Items, but the percentage of I-Items with I-Docs is below 80%.
Finally, the I-Doc Sparsity Anomaly Detector can be modified if the organization is indifferent to the types of documents attached to a reconciliation. The tuning outlined below is used when the organization does not require I-Docs to be attached to every I-Item if the reconciliation already has an R-Doc, S-Doc, or T-Doc attached. The default and Rolling Z-Score Data Rules appear below with the additional check for other document types:
- IItemCount > 0 and PValue < 0.05 and HasRDoc = False and HasSDoc = False and HasTDoc = False
- IItemCount > 0 and PValue < 0.05 and RollingZScore < -1.645 and HasRDoc = False and HasSDoc = False and HasTDoc = False
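The sketch below illustrates the underlying coverage metric, IItemsPctWithIDoc, and the 80% informational rule from above; the item-level data shape is an assumption.

i_items = [
    {"name": "Manual accrual", "has_idoc": True},
    {"name": "Reclass entry",  "has_idoc": False},
    {"name": "True-up",        "has_idoc": True},
]
i_item_count = len(i_items)
iitems_pct_with_idoc = 100 * sum(i["has_idoc"] for i in i_items) / i_item_count

# Informational rule: flag when I-Items exist but I-Doc coverage is below 80%.
flag = i_item_count > 0 and iitems_pct_with_idoc < 80
print(round(iitems_pct_with_idoc, 1), flag)   # 66.7 True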
Manual AutoRec Ratio
Summary:
The Manual AutoRec Ratio Anomaly Detector identifies reconciliations that are usually auto-reconciled but fail to auto-reconcile in the current period. Default rules check whether reconciliations that typically Auto Prepare or Auto Approve now require manual handling, with options to tune the lookback percentage, the number of historical periods, or the number of consecutive automated periods. Additional customizations allow targeting specific AutoRec Rules or AutoRec Rule fields.
Overview:
The Manual AutoRec Ratio Anomaly Detector flags reconciliations that typically Auto-Reconcile but in the current period do not. The AutoRec Rules can be set to Auto Approve a reconciliation or Auto Prepare. For this reason, Data Rules are typically separated by whether they Auto Approve or Auto Prepare a reconciliation, unless written for a specific AutoRec Rule. Thus, there are two default Data Rules to identify Reconciliations that are normally automated but in the current period are not:
- Auto Prepare: PrepareOnly = True and AutoPrepared = False and LookbackPctAuto >= 80 and NumberOfLookbackPeriods >= 5
- Auto Approve: PrepareOnly = False and AutoPrepared = False and LookbackPctAuto >= 80 and NumberOfLookbackPeriods >= 5
The most common tuning will update the LookbackPctAuto value to be either more stringent or looser (any value between 0 and 100) and increase or decrease the number of historical periods (NumberOfLookbackPeriods). Note: it is not recommended to set LookbackPctAuto below 50% because there are scenarios, such as the Zero Balance and No Activity AutoRec rule, that are widely applied to reconciliations yet never Auto Prepare or Auto Approve the majority of them. Other rules can be set using the LookbackConsecutiveAuto metric to identify reconciliations that normally Auto Prepare or Auto Approve but do not. An example of this Data Rule is as follows:
- Auto Prepare: PrepareOnly = True and AutoPrepared = False and LookbackConsecutiveAuto >= 5
- Auto Approve: PrepareOnly = False and AutoPrepared = False and LookbackConsecutiveAuto >= 5
Additionally, the names of AutoRec Rules are included in the Manual AutoRec Ratio Anomaly Detector, which allows users to create a rule that applies only to a specific AutoRec Rule. An example of this is the following rule:
AutoRecRuleName = “Pulled Items and No Activity” and AutoPrepared = False and LookbackPctAuto >= 80 and NumberOfLookbackPeriods >= 5
The newly created rule above will only run against reconciliations that have the “Pulled Items and No Activity” AutoRec Rule assigned to them. Finally, the following True/False fields from the AutoRec Rules are included in the Anomaly Detector: ZeroBalance, Activity, BItems, IItems, SItems, PulledItems, and XItems. These fields can be used to write specific Data Rules that target AutoRec Rules that have these fields attached to them. For instance, the rule below will only apply to reconciliations that have an AutoRec Rule assigned that requires the reconciliation to have X-Items:
- Auto Prepare: PrepareOnly = True and XItems = True and AutoPrepared = False and LookbackPctAuto >= 80 and NumberOfLookbackPeriods >= 5
- Auto Approve: PrepareOnly = False and XItems = True and AutoPrepared = False and LookbackPctAuto >= 80 and NumberOfLookbackPeriods >= 5
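For context on the lookback metrics referenced above, here is a hypothetical calculation that assumes a simple list of per-period automation flags; the actual solution derives these values internally.

lookback = [True, True, True, False, True, True]   # oldest -> newest prior periods

number_of_lookback_periods = len(lookback)
lookback_pct_auto = 100 * sum(lookback) / number_of_lookback_periods

# Consecutive automated periods, counted backward from the most recent prior period.
lookback_consecutive_auto = 0
for automated in reversed(lookback):
    if not automated:
        break
    lookback_consecutive_auto += 1

auto_prepared_now = False   # the current period did not auto-reconcile
flag = (not auto_prepared_now
        and lookback_pct_auto >= 80
        and number_of_lookback_periods >= 5)
print(round(lookback_pct_auto, 1), lookback_consecutive_auto, flag)   # 83.3 2 True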
Number of Approvers
Summary:
The Number of Approvers Anomaly Detector flags changes in the number of Approvers assigned between periods and, by default, flags any decrease. It can be customized to detect all changes or only increases or decreases of any magnitude, based on organizational needs.
Overview:
For a reconciliation, the number of Approvers must be between one and four. By default, the Number of Approvers Anomaly Detector flags any decrease in the number of assigned Approvers by comparing the current period to the prior period. If only larger decreases in the number of Approvers should be detected, the following Data Rule can be written:
- Large Decrease: ApproverDiff <= -2 (e.g., 4 to 2 or 4 to 1)
- Largest Decrease: ApproverDiff <= -3 (e.g., 4 to 1)
Some organizations may also want to flag any change in the number of Approvers. This can be done using the following rules:
- Any Change: ApproverDiff <= -1 or ApproverDiff >= 1 (e.g., 4 to 3 or 3 to 4)
- Increase: ApproverDiff >= 1 (e.g., 1 to 2)
- Decrease: ApproverDiff <= -1 (e.g., 2 to 1)
Number of I-Items
Summary:
The Number of I-Items Anomaly Detector flags changes in the count of manual adjustments by comparing current-period values to prior periods using P-Values, Rolling Z-Scores, and related statistics. Default rules detect both increases and decreases, but thresholds can be tuned to make them more or less stringent or simplified to focus only on historical values and direction. Organizations can also toggle off one direction if they only care about either increases or decreases in I-Items.
Overview:
The Number of I-Items Anomaly Detector reviews the count of manual adjustments (I-Items) in the current period and compares it to prior periods. Similar to some other Anomaly Detectors, it outputs the following statistics that can be used in the Data Rules: P-Value, Z-Score, Rolling Z-Score, IQR Flag, and Tukey Fence Flag. By default, the rules are broken out to detect both increases and decreases based on the most recent periods:
- Increase: PValue < 0.05 and IItemCountDiff > 0 and RollingZScore > 1.960
- Decrease: PValue < 0.05 and IItemCountDiff < 0 and RollingZScore < -1.960
The default rule checks the following. First, the P-Value identifies if the current period’s value differs from the historical periods. Next, the current period and the immediate preceding period are compared to establish direction and ensure that the number of I-Items on the reconciliation has increased or decreased. Finally, the Rolling Z-Score ensures that the current period's number of manual adjustments differs from the most recent prior periods and identifies whether the current period has increased or decreased compared to the prior periods. After reviewing the anomalies created by this detector, if the Data Rules are too strict or too loose, they can be modified as follows:
- Too Strict:
  - Increase: PValue < 0.10 and IItemCountDiff > 0 and RollingZScore > 1.645 (Flagging at 10% threshold)
  - Decrease: PValue < 0.10 and IItemCountDiff < 0 and RollingZScore < -1.645 (Flagging at 10% threshold)
- Too Loose:
  - Increase: PValue < 0.01 and IItemCountDiff > 0 and RollingZScore > 2.576 (Flagging at 1% threshold)
  - Decrease: PValue < 0.01 and IItemCountDiff < 0 and RollingZScore < -2.576 (Flagging at 1% threshold)
If the Data Rules are still not flagging the number of I-Items correctly, the rules can be modified so that the P-Value is set at the 1% or 10% level to make the rules more or less stringent, while the Rolling Z-Score value remains consistent at the 5% threshold:
- Too Strict:
  - Increase: PValue < 0.10 and IItemCountDiff > 0 and RollingZScore > 1.960 (P-Value: 10%, Rolling Z-Score: 5%)
  - Decrease: PValue < 0.10 and IItemCountDiff < 0 and RollingZScore < -1.960 (P-Value: 10%, Rolling Z-Score: 5%)
- Too Loose:
  - Increase: PValue < 0.01 and IItemCountDiff > 0 and RollingZScore > 1.960 (P-Value: 1%, Rolling Z-Score: 5%)
  - Decrease: PValue < 0.01 and IItemCountDiff < 0 and RollingZScore < -1.960 (P-Value: 1%, Rolling Z-Score: 5%)
After tuning the Data Rules as described above, if the Data Rules are still not producing the desired results, they can be modified to only take into account historical values. The Rolling Z-Score will still be used to identify the direction of recent changes:
- Increase: PValue < 0.05 and IItemCountDiff > 0 and RollingZScore > 0
- Decrease: PValue < 0.05 and IItemCountDiff < 0 and RollingZScore < 0
Breaking out the number of I-Items by increases and decreases lets Preparers and Approvers easily identify whether the reconciliation has more or fewer manual adjustments than it typically has attached to it. It also allows organizations to easily toggle off one of the detectors because some organizations might only care about either increases or decreases in the number of I-Items. In these scenarios, the Data Rule for the direction that the organization does not care about can be toggled off.
Number of Reconciliation Detail Items
Summary:
The Number of Reconciliation Detail Items Anomaly Detector flags increases or decreases in the number of reconciliation detail items (I-Items, T-Items, S-Items, and X-Items) by comparing current-period counts to prior periods using P-Values and Rolling Z-Scores (which check for historical and recent changes). Thresholds can be tuned to be more or less stringent or simplified to only detect historical changes and indicate the direction of the change. Additionally, the Repetitive Detail Items Flag identifies items that reappear from prior periods but were not pulled forward into the current period correctly.
Overview:
The Number of Reconciliation Detail Items Anomaly Detector is similar to the Number of I-Items Anomaly Detector, except it considers all types of detail items by comparing the number of reconciliation detail items attached in the current period with those in prior periods. Due to this, it outputs the following statistics: P-Value, Z-Score, Rolling Z-Score, IQR Flag, and Tukey Fence Flag. By default, the rules are broken out to detect both increases and decreases based on the most recent periods, and they appear as follows:
- Increase: PValue < 0.05 and ItemCountDiff > 0 and RollingZScore > 1.960
- Decrease: PValue < 0.05 and ItemCountDiff < 0 and RollingZScore < -1.960
The default rule checks the following. First, the P-Value identifies if the current period’s value differs from the historical periods. Next, the current period and the immediate preceding period are compared to establish direction and ensure that the number of detail items on the reconciliation has increased or decreased. Finally, the Rolling Z-Score ensures that the current period's number of detail items differs from the most recent prior periods and identifies whether the current period has increased or decreased compared to the prior periods. After reviewing the anomalies created by this detector, if the Data Rules are too strict or too loose, they can be modified as follows:
- Too Strict:
  - Increase: PValue < 0.10 and ItemCountDiff > 0 and RollingZScore > 1.645 (Flagging at 10% threshold)
  - Decrease: PValue < 0.10 and ItemCountDiff < 0 and RollingZScore < -1.645 (Flagging at 10% threshold)
- Too Loose:
  - Increase: PValue < 0.01 and ItemCountDiff > 0 and RollingZScore > 2.576 (Flagging at 1% threshold)
  - Decrease: PValue < 0.01 and ItemCountDiff < 0 and RollingZScore < -2.576 (Flagging at 1% threshold)
If the Data Rules still are not producing the desired result, the rules can be modified so that the P-Value is set at the 1% or 10% level to make the rules more or less stringent, while the Rolling Z-Score value remains consistent at the 5% threshold:
- Too Strict:
  - Increase: PValue < 0.10 and ItemCountDiff > 0 and RollingZScore > 1.960 (P-Value: 10%, Rolling Z-Score: 5%)
  - Decrease: PValue < 0.10 and ItemCountDiff < 0 and RollingZScore < -1.960 (P-Value: 10%, Rolling Z-Score: 5%)
- Too Loose:
  - Increase: PValue < 0.01 and ItemCountDiff > 0 and RollingZScore > 1.960 (P-Value: 1%, Rolling Z-Score: 5%)
  - Decrease: PValue < 0.01 and ItemCountDiff < 0 and RollingZScore < -1.960 (P-Value: 1%, Rolling Z-Score: 5%)
After making these changes, if the Data Rules still do not produce meaningful results, it might indicate that the current period does not differ enough from the prior three periods. In these scenarios, the Rolling Z-Score can be used purely to signify the direction of the change:
- Increase: PValue < 0.05 and ItemCountDiff > 0 and RollingZScore > 0
- Decrease: PValue < 0.05 and ItemCountDiff < 0 and RollingZScore < 0
Some organizations do not care if the number of detail items increased compared to prior periods because they are trying to encourage Preparers to add more detail items. In these scenarios, the Data Rules can be tuned by toggling off the Number of Reconciliation Detail Items Increase Warning rule.
Also collected by the Number of Reconciliation Detail Items Anomaly Detector is the Repetitive Detail Items Flag. This metric captures whether the detail item Name and Amount match a detail item in the prior period, but the detail item in the current period is not Aging. The goal of this metric is to spot detail items that should have been pulled forward from the prior month but were not. A Data Rule using this metric can be implemented as follows:
RepetitiveDetailItemsFlag = True
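A minimal sketch of the repetitive-item check described above follows; the comparison keys (Name and Amount) come from this section, while the data shape is assumed.

prior_items = {("Unmatched wire", 1250.00), ("Bank fee", 35.00)}   # prior-period items
current_items = [
    {"name": "Unmatched wire", "amount": 1250.00, "is_aging": False},
    {"name": "Bank fee",       "amount": 35.00,   "is_aging": True},
]

# Flag when a current item matches a prior item on Name and Amount but is not aging,
# suggesting it was re-keyed rather than pulled forward.
repetitive_detail_items_flag = any(
    (i["name"], i["amount"]) in prior_items and not i["is_aging"]
    for i in current_items
)
print(repetitive_detail_items_flag)   # True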
R-Doc Sparsity
Summary:
The R-Doc Sparsity Anomaly Detector default Data Rule flags reconciliations missing an R-Doc in the current period when they were consistently attached in prior periods. The Data Rules can be tuned to adjust the number of prior periods (3, 6, 9, 12, or 18), the percentage of prior periods with an R-Doc, and expand the rule to check if any other documentation types are attached in the current period. Additionally, the detector can be updated to compare the ratio of R-Docs in the current period against historical averages to capture changes in volume.
Overview:
The default Data Rule for the R-Doc Sparsity Anomaly Detector identifies reconciliations that do not have an R-Doc attached in the current period but did have at least one attached in each of the prior three periods. There are a few scenarios in which the Data Rules might need to be tweaked and modified. First, if the customer wants to adjust the number of periods to look back and calculate the R-Doc usage percentage, they can use the following metrics for these periods:
- 3 Periods: Prior3RDocUsagePct (Default)
- 6 Periods: Prior6RDocUsagePct
- 9 Periods: Prior9RDocUsagePct
- 12 Periods: Prior12RDocUsagePct
- 18 Periods: Prior18RDocUsagePct
Additionally, the customer can change the percentage of periods that utilized an R-Doc to any number between 0 and 100. The default rule states that HasRDoc = False and Prior3RDocUsagePct = 100.
Another area that might require tuning is allowing any type of supporting documentation to be used in the reconciliation. Some customers do not distinguish between R-Docs, T-Docs, S-Docs, and I-Docs. They only care that some type of supporting documentation is attached. In these scenarios, the default Data Rule or any custom-made rule can be updated to include these other types of documentation and utilize the I-Doc Sparsity, S-Doc Sparsity, and T-Doc Sparsity Anomaly Detectors along with R-Doc Sparsity. The rule looks something like this:
HasRDoc = False and Prior3RDocUsagePct = 100 and HasSDoc = False and HasTDoc = False and IItemsPctWithIDoc < 100
Third, some customers might care about the number of R-Docs attached to a reconciliation and compare that to the historical periods. In this case, the RDocRatioCurrentVsPrior3 period statistic can be utilized to flag the desired ratio. This metric compares the number of R-Docs in the current period to the mean number of R-Docs in the prior periods. If the same number of R-Docs exists in the current period compared to the prior three-period mean, this value will equal 1, while if the number of R-Docs in the current period is less than the mean, it will be less than 1.
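As a quick illustration of the ratio described above, the sketch below divides the current-period R-Doc count by the mean of the prior three periods; the data values are hypothetical.

from statistics import mean

prior_rdoc_counts = [2, 3, 2]   # R-Doc counts for the prior three periods
current_rdoc_count = 1

rdoc_ratio_current_vs_prior3 = current_rdoc_count / mean(prior_rdoc_counts)
print(round(rdoc_ratio_current_vs_prior3, 2))   # 0.43 -> fewer R-Docs than usual

# Example custom rule: flag when the current period has less than half the usual volume.
print(rdoc_ratio_current_vs_prior3 < 0.5)       # True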
Reconciliation Balance
Summary:
The Reconciliation Balance Anomaly Detector flags unusual account balances using P-Values, Z-Scores, and Rolling Z-Scores to detect both historical and recent deviations. Thresholds can be tuned to be more or less strict, and Data Rules can be customized to meet the specific needs of the organization. Additional informational rules, including IQR, Tukey Fence, and checking specific balance amounts, provide further context for identifying extreme or unusual fluctuations.
Overview:
The Reconciliation Balance Anomaly Detector Data Rules flag reconciliations that have abnormal balances in the current period. Balances can fluctuate widely, with some accounts being more prone to extreme fluctuations (e.g., certain cash accounts). Due to this, the P-Value, Z-Score, and Rolling Z-Score are all used in combination to flag anomalies. Additionally, the default Rec Balance Data Rules break out changes into increases and decreases to easily identify the direction of the Rec Balance warning when an anomaly is triggered, and appear as follows:
- Increase: PValue < 0.01 and ZScore > 2.576 and RollingZScore > 2.576
- Decrease: PValue < 0.01 and ZScore < -2.576 and RollingZScore < -2.576
The combination of these three statistics does the following. First, the P-Value determines if the value differs from the historical values and the Z-Score ensures that the current value is far away from the mean balance. Second, the Rolling Z-Score ensures that the current period’s balance also differs from the most recent values. Finally, the combination of Z-Score and Rolling Z-Score sets the direction of the anomaly based on whether both values are positive or negative. Depending on whether the data rules are flagging too many reconciliations or too few, the Data Rules can be made more or less strict as follows:
- More Strict:
  - Increase: PValue < 0.005 and ZScore > 2.807 and RollingZScore > 2.807 (0.5% critical threshold)
  - Decrease: PValue < 0.005 and ZScore < -2.807 and RollingZScore < -2.807 (0.5% critical threshold)
- Less Strict:
  - Increase: PValue < 0.05 and ZScore > 1.960 and RollingZScore > 1.960 (5% critical threshold)
  - Decrease: PValue < 0.05 and ZScore < -1.960 and RollingZScore < -1.960 (5% critical threshold)
If the Data Rules still are not producing the desired result, the rules can be modified so that the P-Value is set at the 0.5% or 5% threshold level to make the rules more or less stringent, while the Z-Score and Rolling Z-Score remain consistent at the 1% threshold:
- More Strict:
  - Increase: PValue < 0.005 and ZScore > 2.576 and RollingZScore > 2.576 (P-Value 0.5%, Z-Score/Rolling Z-Score 1%)
  - Decrease: PValue < 0.005 and ZScore < -2.576 and RollingZScore < -2.576 (P-Value 0.5%, Z-Score/Rolling Z-Score 1%)
- Less Strict:
  - Increase: PValue < 0.05 and ZScore > 2.576 and RollingZScore > 2.576 (P-Value 5%, Z-Score/Rolling Z-Score 1%)
  - Decrease: PValue < 0.05 and ZScore < -2.576 and RollingZScore < -2.576 (P-Value 5%, Z-Score/Rolling Z-Score 1%)
Most likely, the Data Rules outlined above will cover typical scenarios. However, if an organization is amortizing a reconciliation and the balance increases by a set amount each period, the Balance warning flag may be raised incorrectly. In this case, the Balance Difference Standard Deviation statistic can be used to avoid false positives. The modified default Data Rules would look as follows:
- Increase: PValue < 0.01 and ZScore > 2.576 and RollingZScore > 2.576 and BalanceDiffStd != 0
- Decrease: PValue < 0.01 and ZScore < -2.576 and RollingZScore < -2.576 and BalanceDiffStd != 0
Additional informational rules can be created, including the following:
- IQRFlag = True - Flags if the Balance is in the top 25% or bottom 25% of the distribution (i.e., flags 50% of the data).
- TukeyFenceFlag = True - A stricter version of the IQR Flag that extends the IQR range. Useful for identifying extreme values that may warrant further review (see the sketch after this list).
- Balance > 100,000,000 or Balance < -100,000,000 - Flags extreme balances to raise awareness.
- BalanceAbsDiff > 100,000,000 - Flags if the balance increased or decreased by 100,000,000 from the previous period.
- BalanceDiff < -100,000,000 - Flags only if the balance decreased by 100,000,000 from the previous period.
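A sketch of the first two informational flags follows, assuming the conventional quartile and 1.5x IQR fence definitions; the detector's exact quantile method and multiplier may differ.

from statistics import quantiles

history = [100, 105, 98, 102, 110, 95, 101, 99, 104, 103]   # prior-period balances
current = 130

q1, _, q3 = quantiles(history, n=4)   # first and third quartiles of the history
iqr = q3 - q1

iqr_flag = current < q1 or current > q3                   # outside the middle 50%
tukey_fence_flag = (current < q1 - 1.5 * iqr
                    or current > q3 + 1.5 * iqr)          # beyond the extended fence
print(iqr_flag, tukey_fence_flag)   # True True for this unusually high balance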
Reconciliation Detail Text Field Length
Summary:
The Reconciliation Detail Text Field Length Anomaly Detector identifies decreases in commentary by collecting the character count for ItemName, ItemComment, Reference1, and Reference2 on detail items (I-Items, T-Items, S-Items, and X-Items) and generating statistics for the total and average character counts. The default rule focuses on identifying decreases in the average character length, capturing scenarios where commentary has declined. Organizations can tune the default rule to be more or less stringent and create additional custom rules.
Overview:
The Reconciliation Detail Text Field Length Anomaly Detector identifies decreases in the amount of commentary Preparers used in a reconciliation’s detail items by reviewing the length of text in the ItemName, ItemComment, Reference1, and Reference2 fields in OneStream Financial Close. The Anomaly Detector calculates statistics for both total character length and average character length, but the default Data Rule uses the average character length. It sets 2.5% as a critical threshold and is defined as follows:
AvgCharCountPValue < 0.025 and AvgCharCountPctChgPrevPeriod < 0
This rule checks two things. First, it checks that the current average text length decreased compared to prior periods, and second, it checks that the current period's average character count is less than the prior period's. If this rule is over-flagging or under-flagging, the first tuning attempt should adjust the P-Value threshold to be more or less stringent:
- AvgCharCountPValue < 0.01 and AvgCharCountPctChgPrevPeriod < 0 (More stringent - flagging at 1% threshold)
- AvgCharCountPValue < 0.05 and AvgCharCountPctChgPrevPeriod < 0 (Less stringent - flagging at 5% threshold)
If the rules are still not producing the desired result, the Anomaly Detector can be updated to include the AvgCharCountRollingZScore, which has critical values of -1.282, -1.645, and -2.326 for the 10%, 5%, and 1% thresholds. Adding this statistic to the Data Rule will flag only reconciliations experiencing both a recent and historical decline in average text field length. Here is an example implementation at the 5% critical level:
AvgCharCountPValue < 0.05 and AvgCharCountPctChgPrevPeriod < 0 and AvgCharCountRollingZScore < -1.645
The following statistics can be used within an informational rule to provide additional context around the decrease in the average character counts for reconciliations:
- AvgCharCountIQRFlag = True and AvgCharCountPctChgPrevPeriod < 0 - Flags if the current period falls within the lower 25% of the historical values and is a decrease from the prior periods.
- AvgCharCountRollingAvgPctChg < 0 - The current period average character count is less than the average character count for the prior 3 periods.
- AvgCharCount < 50 - Flags if the current period average character count is less than 50, or below a threshold set by the organization. This reminds the Preparer to add the necessary commentary.
The above custom Data Rules all pertain to the average character length. However, as mentioned earlier, the Reconciliation Detail Text Field Length Anomaly Detector also collects statistics for the total character length. Unless there is a specific use case, it is recommended to use average character length statistics because total character length is directly affected by the number of detail items attached. Equivalent Data Rules, as displayed above, can be created for the total character length. For example, the following modifications adjust the total character length thresholds to be more or less restrictive depending on the organization’s needs:
- TotalCharCountPValue < 0.01 and TotalCharCountPctChgPrevPeriod < 0 (More stringent - flagging at 1% threshold)
- TotalCharCountPValue < 0.05 and TotalCharCountPctChgPrevPeriod < 0 (Less stringent - flagging at 5% threshold)
Repetitive Rejections
Summary:
The default Repetitive Rejections Data Rule flags reconciliations that are rejected in the current period, have a reject rate over 50%, and include at least four historical periods. Custom Data Rules can be tuned to adjust the required number of lookback periods and the rejection rate, or to create new rules that provide information about the reconciliation for Preparers, such as prior rejection rates, consecutive rejections, and the total number of rejections.
Overview:
The default Data Rule for the Repetitive Rejections Anomaly Detector requires at least four historical periods to establish a baseline, that the reconciliation in the current period has been rejected at least once, and that the prior periods have a rejection rate of at least 50%. There are a few areas that organizations might look to customize and tune the Data Rules for this anomaly detector. First, some organizations might want to increase the number of lookback periods, which can be done using the following rule:
- Increase to require 6 historical periods: RejectedInCurrentPeriod = True and LookbackPctOfPeriodsWithARejection >= 50 and NumberOfLookbackPeriods >= 6
Another modification that can be made to the Data Rule is to increase the percentage of periods that have a rejection, which looks similar to the following:
- Increase Rejection Percentage: RejectedInCurrentPeriod = True and LookbackPctOfPeriodsWithARejection >= 65 and NumberOfLookbackPeriods >= 4
One of the main purposes of this Anomaly Detector is to inform both Preparers and Approvers that a reconciliation has been rejected multiple times, with the goal of decreasing the number of rejections in the current and future periods. Due to this, some organizations might want to create a variety of Data Rules at the “Info” level, which can inform both the Preparer and the Approver that this reconciliation either has a systemic issue or is not being prepared to the organization's standards consistently. A simple rule that can be created to accomplish this is the same as above but without requiring the reconciliation to be rejected in the current period:
LookbackPctOfPeriodsWithARejection >= 50 and NumberOfLookbackPeriods >= 4
There are additional metrics that can be used to provide information on whether a reconciliation has repeatedly had rejections. These include LookbackConsecutivePeriodsWithARejection, LookbackCountOfPeriodsWithARejection, and LookbackTotalNumberOfRejections. These Data Rules can look like:
- LookbackConsecutivePeriodsWithARejection >= 4 (the past four periods had at least one rejection)
- LookbackCountOfPeriodsWithARejection >= 4 (there are four or more periods with at least one rejection)
- LookbackTotalNumberOfRejections >= 10 and NumberOfLookbackPeriods >= 4 (there are at least 10 rejections and a minimum of four historical periods)
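The hypothetical sketch below derives the lookback rejection metrics named above from a simple list of per-period rejection counts; the data shape is an assumption.

rejections_by_period = [1, 0, 2, 1]   # rejection counts, oldest -> newest prior periods

number_of_lookback_periods = len(rejections_by_period)
periods_with_a_rejection = sum(1 for r in rejections_by_period if r > 0)
lookback_pct_of_periods_with_a_rejection = 100 * periods_with_a_rejection / number_of_lookback_periods
lookback_total_number_of_rejections = sum(rejections_by_period)

# Info-level rule: a persistent rejection pattern even without a current-period rejection.
flag = (lookback_pct_of_periods_with_a_rejection >= 50
        and number_of_lookback_periods >= 4)
print(lookback_pct_of_periods_with_a_rejection, lookback_total_number_of_rejections, flag)
# 75.0 4 True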
Risk Level Fluctuations
Summary:
The Risk Level Fluctuations Anomaly Detector default Data Rule identifies changes in assigned risk levels between periods, with options to flag all changes or only large swings. Data Rules can also be customized to set thresholds, classify anomalies by severity, or detect directional changes across multiple prior periods.
Overview:
It is unlikely that the Risk Level Fluctuations Anomaly Detector default Data Rule will require tuning or custom Data Rules because, by default, it detects any change in the Risk Level from the prior period (e.g., going from High to Medium). However, a new Data Rule can be written that only flags large swings in the assigned Risk Level (e.g., going from High to Low or Low to High). To accomplish this, the Data Rule would use the following condition:
RiskChangeFromPriorPeriodAbs = 2
This rule ensures that only large swings in Risk Level from the prior period are flagged. In some scenarios, it may be useful to flag large swings in Risk Level as a “Critical” anomaly and smaller swings (e.g., High to Medium or Medium to Low) as a “Warning.” The warning-level rule would use the following condition:
RiskChangeFromPriorPeriodAbs = 1
Another customization is setting the direction of the change. Increases and decreases in the Risk Level can be specifically identified using the Data Rules along with the magnitude of the change. This allows rules that identify whether the change moved from lower to higher risk (e.g., Medium to High) or higher to lower risk (e.g., Medium to Low). The rules would look like the following:
- Any Increase: RiskLevelPriorPeriod >= 1 (e.g., Low to Medium or Low to High)
- Any Decrease: RiskLevelPriorPeriod <= -1 (e.g., High to Medium or High to Low)
- Large Increase: RiskLevelPriorPeriod = 2 (e.g., Low to High)
- Large Decrease: RiskLevelPriorPeriod = -2 (e.g., High to Low)
- Small Increase: RiskLevelPriorPeriod = 1 (e.g., Low to Medium or Medium to High)
- Small Decrease: RiskLevelPriorPeriod = -1 (e.g., High to Medium or Medium to Low)
Finally, the Anomaly Detector also supports detecting changes in the Risk Level across two or three prior periods. It is unlikely that this capability will be utilized; however, it does exist. To detect any change over those windows, users can apply RiskChangeFrom2PriorPeriodsAbs and RiskChangeFrom3PriorPeriodsAbs using similar logic as discussed above for RiskChangeFromPriorPeriodAbs. They can specify the direction of the prior change using RiskLevel2PriorPeriods and RiskLevel3PriorPeriods with similar logic to RiskLevelPriorPeriod.
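For reference, the sketch below shows one way the risk-level change could be expressed numerically (Low=1, Medium=2, High=3); the encoding is an assumption, and the solution's internal mapping may differ.

RISK = {"Low": 1, "Medium": 2, "High": 3}

def risk_change(prior, current):
    # Positive = moved to a higher risk level, negative = moved lower.
    return RISK[current] - RISK[prior]

change = risk_change("High", "Low")                # -2
risk_change_from_prior_period_abs = abs(change)

print(risk_change_from_prior_period_abs == 2)      # True  -> large swing (e.g., Critical rule)
print(risk_change_from_prior_period_abs == 1)      # False -> not a small swing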
S-Doc Sparsity
Summary:
The S-Doc Sparsity Anomaly Detector default Data Rule flags reconciliations missing an S-Doc in the current period when they were consistently attached in prior periods. The Data Rules can be tuned to adjust the number of prior periods (3, 6, 9, 12, or 18), the percentage of prior periods with an S-Doc, and expand the rule to check if any other documentation types are attached in the current period. Additionally, the detector can be updated to compare the ratio of S-Docs in the current period against historical averages to capture changes in volume.
Overview:
The default Data Rule for the S-Doc Sparsity Anomaly Detector identifies reconciliations that do not have an S-Doc attached in the current period but did have at least one attached in each of the prior three periods. There are a few scenarios in which the Data Rules might need to be tweaked and modified. First, if the customer wants to adjust the number of periods to look back and calculate the S-Doc usage percentage, they can use the following metrics for these periods:
- 3 Periods: Prior3SDocUsagePct (Default)
- 6 Periods: Prior6SDocUsagePct
- 9 Periods: Prior9SDocUsagePct
- 12 Periods: Prior12SDocUsagePct
- 18 Periods: Prior18SDocUsagePct
Additionally, the customer can change the percentage of periods that utilized an S-Doc to any number between 0 and 100. The default rule states that HasSDoc = False and Prior3SDocUsagePct = 100.
Another area that might require tuning is allowing any type of supporting documentation to be used in the reconciliation. Some customers do not distinguish between S-Docs, T-Docs, R-Docs, and I-Docs. They only care that some type of supporting documentation is attached. In these scenarios, the default Data Rule or any custom-made rule can be updated to include these other types of documentation and utilize the I-Doc Sparsity, R-Doc Sparsity, and T-Doc Sparsity Anomaly Detectors along with S-Doc Sparsity. The rule looks something like this:
HasSDoc = False and Prior3SDocUsagePct = 100 and HasRDoc = False and HasTDoc = False and IItemsPctWithIDoc < 100
Third, some customers might care about the number of S-Docs attached to a reconciliation and compare that to the historical periods. In this case, the SDocRatioCurrentVsPrior3 period statistic can be utilized to flag the desired ratio. This metric compares the number of S-Docs in the current period to the mean number of S-Docs in the prior periods. If the same number of S-Docs exists in the current period compared to the prior three-period mean, this value will equal 1, while if the number of S-Docs in the current period is less than the mean, it will be less than 1.
Sign Difference
Summary:
If an organization utilizes the Proper Sign field in the Account Reconciliations solution, it might make sense to update the Data Rules to leverage this information. One rule can be created for cases where all reconciliations have a Proper Sign assigned, using BalanceSignDifferentFromAssignedProperSign, or multiple rules can be created to utilize the Proper Sign when it is assigned and the mode of the Balance sign when it is unassigned.
Overview:
By default, the Sign Difference Anomaly Detector Data Rule compares the sign of the current period’s Balance to the mode sign of the Balance across prior periods. However, if an organization utilizes Proper Sign, it might make sense to tune the Data Rule to utilize this information. There are two approaches that can be used. The first approach is the more basic implementation:
- Utilize the Boolean BalanceSignDifferentFromAssignedProperSign to flag when the assigned Proper Sign does not align with the current sign of the Balance. Note that if a reconciliation does not have a Proper Sign assigned, this Data Rule will not flag the reconciliation.
Approach two is similar but utilizes a combination of statistics to appropriately flag reconciliations when Proper Sign is assigned and unassigned:
- Use IsProperSignAssigned = True and BalanceSignDifferentFromAssignedProperSign = True to flag when the Proper Sign does not align with the current sign of the Balance for reconciliations that have a Proper Sign assigned.
- Use IsProperSignAssigned = False and BalanceSignDifferentFromMode = True to flag when the current sign of the Balance does not align with the mode sign of the Balance for reconciliations that do not have a Proper Sign assigned.
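A minimal sketch of the default mode-sign comparison appears below; the balance history and data shape are hypothetical.

from statistics import mode

history = [1520.10, 980.00, 2204.55, -15.25, 1100.00]   # prior-period balances
current = -320.40

def sign(x):
    return 1 if x > 0 else (-1 if x < 0 else 0)

mode_sign = mode(sign(b) for b in history)               # most common historical sign

balance_sign_different_from_mode = sign(current) != mode_sign
print(balance_sign_different_from_mode)   # True: current is negative, history is mostly positive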
T-Doc Sparsity
Summary:
The T-Doc Sparsity Anomaly Detector default Data Rule flags reconciliations missing a T-Doc in the current period when they were consistently attached in prior periods. The Data Rules can be tuned to adjust the number of prior periods (3, 6, 9, 12, or 18), the percentage of prior periods with a T-Doc, and expand the rule to check if any other documentation types are attached in the current period. Additionally, the detector can be updated to compare the ratio of T-Docs in the current period against historical averages to capture changes in volume.
Overview:
The default Data Rule for the T-Doc Sparsity Anomaly Detector identifies reconciliations that do not have a T-Doc attached in the current period but did have at least one attached in each of the prior three periods. There are a few scenarios in which the Data Rules might need to be tweaked and modified. First, if the customer wants to adjust the number of periods to look back and calculate the T-Doc usage percentage, they can use the following metrics for these periods:
- 3 Periods: Prior3TDocUsagePct (Default)
- 6 Periods: Prior6TDocUsagePct
- 9 Periods: Prior9TDocUsagePct
- 12 Periods: Prior12TDocUsagePct
- 18 Periods: Prior18TDocUsagePct
Additionally, the customer can change the percentage of periods that utilized a T-Doc to any number between 0 and 100. The default rule states that HasTDoc = False and Prior3TDocUsagePct = 100.
Another area that might require tuning is allowing any type of supporting documentation to be used in the reconciliation. Some customers do not distinguish between T-Docs, S-Docs, R-Docs, and I-Docs. They only care that some type of supporting documentation is attached. In these scenarios, the default Data Rule or any custom-made rule can be updated to include these other types of documentation and utilize the I-Doc Sparsity, R-Doc Sparsity, and S-Doc Sparsity Anomaly Detectors along with T-Doc Sparsity. The rule looks something like this:
HasTDoc = False and Prior3TDocUsagePct = 100 and HasRDoc = False and HasSDoc = False and IItemsPctWithIDoc < 100
Third, some customers might care about the number of T-Docs attached to a reconciliation and compare that to the historical periods. In this case, the TDocRatioCurrentVsPrior3 period statistic can be utilized to flag the desired ratio. This metric compares the number of T-Docs in the current period to the mean number of T-Docs in the prior periods. If the same number of T-Docs exists in the current period compared to the prior three-period mean, this value will equal 1, while if the number of T-Docs in the current period is less than the mean, it will be less than 1.