
Documented and Retained Causal and Action Intelligence

Performance measures reflect the organization’s successes and shortfalls over extended periods of time. Well-maintained metrics include a periodic performance analysis summary capturing underlying drivers and associated follow-on actions. These summaries, however, are typically overwritten with the next analysis rather than being preserved, robbing leaders of critical lessons-learned information that could support future performance improvements and more rapid decision-making.

Electrons are cheap… and so is memory

Given the relatively low cost of data retention, the primary reason performance analysis summaries are not retained is simply the reporting system’s configuration: the organization’s reporting system, often a Microsoft Excel spreadsheet, is not set up to store these write-ups. If this is the case, implementing one of the following methods can capture this data and make it easily available for future recall (a minimal storage sketch follows the list):

  • Upgrade or replace the performance metrics system with one that captures individual performance analysis summaries and follow-on actions
  • Modify the data collection system to contain fields specific to the performance analysis summary and associated follow-on actions
  • Document the performance analysis summary within the condition reporting system citing associated follow-on actions, if any
  • Document the follow-on actions within the condition reporting system identifying the relationship with the specific metric, including its report date
  • Capture the performance analysis summary and follow-on actions within the lessons learned system
  • Create and retain each performance metric individually within a filing system that enables document content searching
  • Maintain a separate performance analysis summary and follow-on actions file for each metric within a filing system that enables document content searching
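
As an illustration of the second option above, the sketch below stores each period’s analysis summary and follow-on actions as new records rather than overwriting the prior write-up. It assumes SQLite accessed from Python purely for illustration; the table and field names and the sample data are hypothetical, not a prescribed design.

```python
# Minimal sketch: retention fields added to the data collection system.
# Table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect("performance_metrics.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS analysis_summary (
    metric_id   TEXT NOT NULL,   -- identifier of the performance metric
    report_date TEXT NOT NULL,   -- reporting period the summary covers
    summary     TEXT NOT NULL,   -- periodic performance analysis write-up
    PRIMARY KEY (metric_id, report_date)
);
CREATE TABLE IF NOT EXISTS follow_on_action (
    action_id   INTEGER PRIMARY KEY,
    metric_id   TEXT NOT NULL,
    report_date TEXT NOT NULL,
    action_text TEXT NOT NULL,   -- corrective or improvement action
    FOREIGN KEY (metric_id, report_date)
        REFERENCES analysis_summary (metric_id, report_date)
);
""")

# Each period's write-up is inserted as a new row rather than overwriting the
# prior one, preserving the lessons-learned history for future recall.
conn.execute(
    "INSERT INTO analysis_summary VALUES (?, ?, ?)",
    ("backlog-age", "2024-03-31", "Backlog age rose 12% due to parts delays."),
)
conn.commit()
```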

Final Thought…

Retaining lessons learned information is not enough. For this information to be of value to the organization, protocols must be in place to drive its use at the appropriate time. Consequently, the organization’s performance assessment and improvement (root/apparent/direct cause analysis, self-assessments, etcetera), strategic analysis, and decision-making processes should direct the use of performance metric summary and follow-on action information.




About the Author

Nathan Ives is a StrategyDriven Principal and Host of the StrategyDriven Podcast. For over twenty years, he has served as a trusted advisor to executives and managers at dozens of Fortune 500 and smaller companies in the areas of management effectiveness, organizational development, and process improvement. To read Nathan’s complete biography, click here.

Time Matters

Performance measures record specified outcomes achieved either at a specified time or within a defined interval and so, by their very nature, are time dependent. Consequently, a performance measure alters the behaviors of those being monitored not only in relation to what is being monitored but also to when the outcome is measured. Additionally, ambiguity and misinterpretation of a performance metric’s time-based characteristics diminish decision-making, performance, and morale.

Performance Measure Timing Impacts

Performance measures depict observable outcomes at a specified time or within a defined timeframe. Because individuals tend to act in a manner that is positively reflected by measures documenting their performance, this time dependence tends to place intended and unintended productivity pressure on those monitored.

Intended Impacts

In addition to driving a specific desired behavior based on what is monitored, the metric’s time-based dependency:

  • Reinforces the need to meet established productivity quotas
  • Encourages identification of productivity improvement opportunities

Unintended Impacts

However, this time-based dependency can promote undesirable behaviors including:

  • Implementation of unauthorized shortcuts that diminish quality in order to enhance measured counts
  • Use of excessive overtime to increase production by the measurement date
  • Manipulation of processes, such as dumping stock in customer warehouses, to achieve a better measured outcome

All performance metrics exert desired and undesired influence on behaviors. Therefore, it is important for executives and managers to anticipate these influences and implement counterbalancing metrics to ensure that only desired results are achieved. (See StrategyDriven articles, Organizational Performance Measures Best Practice – Diverse Indicators, Documenting Performance Measure Drivers.)

Time-based Characteristics Impacts

Ambiguity and misinterpretation of a performance metric’s data diminish decision-making, performance, and morale. Because the time-based characteristics of performance measures are less visible, the likelihood of errors resulting from misunderstanding these parameters increases.

Ambiguous Time-based Parameters

Ambiguous time-based metric parameters can adversely affect reflected performance. This is especially true when the specified measurement time is itself expressed as an interval. For example, work management processes frequently refer to ‘T-weeks,’ where the T-week number reflects the number of weeks before the execution, or T-0, week. Measurements associated with T-week activities must specify whether the activity is to be assessed before, during, or at the conclusion of the T-week, lest there be a seven-day ambiguity in when the action is to occur.
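
As a minimal illustration of removing that ambiguity, the sketch below defines a T-week window explicitly; the function name, the Monday week-start convention, and the ‘assessed at the conclusion of the T-week’ choice are assumptions for illustration, not a prescribed standard.

```python
# Illustrative only: one way to remove the seven-day ambiguity by stating the
# T-week measurement point explicitly.
from datetime import date, timedelta

def t_week_window(t0_week_start: date, t_week: int) -> tuple[date, date]:
    """Return the (start, end) dates of the given T-week.

    t0_week_start: the Monday of the execution (T-0) week.
    t_week: number of weeks before execution (e.g., 3 for T-3).
    """
    start = t0_week_start - timedelta(weeks=t_week)
    end = start + timedelta(days=6)
    return start, end

# Example: if a T-3 activity is assessed "at the conclusion of the T-week",
# the metric's measurement date is the window's end date.
start, end = t_week_window(date(2024, 6, 3), 3)
measurement_date = end  # explicit: assessed at the conclusion of T-3
```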

Misinterpreted Time-based Parameters

Misinterpreted time-based metric parameters routinely occur because of a lack of ‘field-level’ definition specificity and/or a failure to communicate the ‘field-level’ definition. For example, the time required to correct a deficiency can be anchored on several different start and stop times, each impacting the metric’s reflected outcome. Does the correction interval start at the time of deficiency occurrence, time of deficiency identification, time of work start, or some other time? Does the correction interval stop at the time of work completion, time of paperwork closure, time of work tracking system status update, time of work financial closure, or some other time?
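
A minimal sketch of such a ‘field-level’ definition follows; the anchor choices (identification to work completion) and the sample timestamps are assumptions used only to show how publishing the definition removes the ambiguity.

```python
# Illustrative field-level definition for a "time to correct" metric.
from datetime import datetime

deficiency = {
    "occurred":         datetime(2024, 5, 1, 8, 0),
    "identified":       datetime(2024, 5, 2, 14, 30),
    "work_started":     datetime(2024, 5, 6, 7, 0),
    "work_completed":   datetime(2024, 5, 7, 16, 0),
    "paperwork_closed": datetime(2024, 5, 9, 10, 0),
}

# Field-level definition: the interval starts at identification and stops at
# work completion.  Publishing this choice alongside the metric prevents
# different workgroups from anchoring on occurrence or paperwork closure.
correction_interval = deficiency["work_completed"] - deficiency["identified"]
print(correction_interval)  # 5 days, 1:30:00
```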

Ambiguity and misinterpretation of a performance measure’s time-based characteristics can lead to diminished performance at all levels of the organization. Without definition specificity and communication of those time-based characteristics, the same metric may be implemented differently by various workgroups within an organization, and those whose performance is measured may become frustrated and confused because their perceived performance and the metric’s reflected performance differ. Additionally, when decision-makers lack clarity on what is being measured, they misinterpret the resulting information, diminishing the quality of their decisions.

Final Thought…

A performance measure’s time-based characteristics profoundly affect its reflected outcomes. Consequently, it is advisable to develop and conduct trials with inexpensive test metrics prior to the implementation of more costly, permanent metrics, as the need to adjust the less visible time-based characteristics often arises. (See StrategyDriven article, Organizational Performance Measures Best Practice – Ad Hoc Reports First, Metrics Second.)


About the Author

Nathan Ives is a StrategyDriven Principal and Host of the StrategyDriven Podcast. For over twenty years, he has served as a trusted advisor to executives and managers at dozens of Fortune 500 and smaller companies in the areas of management effectiveness, organizational development, and process improvement. To read Nathan’s complete biography, click here.


Enterprise Performance Measurement

We can work with you to assess and improve your performance measurement system, yielding metrics and reports that are operationally relevant, organizationally consistent, and economically implemented. The resulting system helps improve managerial decision-making, organizational alignment, and individual accountability. Learn more about how we can support your implementation and upgrade efforts or contact us for a personal consultation.

Eliminate Low-Value Metrics

Over time, organizational performance measurement systems can grow to include hundreds if not thousands of individual metrics. While each metric contributes some value in establishing the overall picture of performance, not all metrics offer equal value in doing so. Some contribute so little that they are more costly and distracting than their value warrants. Consequently, executives and managers should consider eliminating these measures from the overall system.

Modern enterprise resource planning systems enable companies to rapidly gather more data than ever before. Too often, this data is organized into performance metrics for no other reason than that it is available. While these measures provide some insight, they usually do not significantly contribute to the shaping of decisions and behaviors and, more importantly, they distract company leaders who can process only so much data at any one time. Consequently, a judicious review of the performance metric system should be performed periodically (often annually), and those metrics not offering sufficient value should be eliminated so as to optimize return on investment and help prevent the data overload that leads to management distraction and diminished decision quality.

Determining Which Metrics Should Be Eliminated

Determining which performance measures should be eliminated from the system takes on critical importance as keeping or eliminating a metric could have a significant impact on future decision-making and employee behaviors. The review of the performance measurement system should establish the materiality of each metric and identify elimination candidates. These reviews should consider the following criteria:

  1. Sensitivity Assessment – test to identify the corresponding change in organizational results associated with an increase or decrease in a metric’s indicated performance. No or low change in organizational performance associated with even large swings in a metric’s monitored performance variable(s) suggests the metric should be considered for elimination unless it meets one or more of the following criteria (a minimal calculation sketch follows this list):
  2. External Risks – metrics monitoring external environmental conditions which could threaten and/or present opportunities for the organization should be kept
  3. Internal Risks – measures monitoring internal operating factors representing material business risk should be kept
  4. Regulatory Requirements, Commitments/Obligations, and Contractual Agreements – indicators required by law, license, commitment/obligation, or contractual agreement should be maintained
  5. Industry Best Practice Performance Measures – metrics monitoring performance variables commonly used within the company’s industry, particularly those that provide ongoing benchmarking with other organizations, should be retained
  6. Performance Improvement Measures – temporary measures to monitor performance improvement decision outcomes and/or operating conditions of concern should be kept
  7. Safety Performance Measures – indicators monitoring overall personnel and equipment safety, including the organization’s safety culture, should be maintained
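
The sensitivity assessment in item 1 can be approximated numerically. The sketch below is a minimal illustration using invented data and an assumed correlation threshold: it regresses an organizational result against a candidate metric’s historical values, and weak correlation despite large metric swings flags the metric as an elimination candidate, subject to the retention criteria in items 2 through 7.

```python
# Minimal sensitivity check for a candidate metric (illustrative data only).
from statistics import correlation, linear_regression

metric_values = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # candidate metric by period
org_results   = [10.1, 10.0, 10.2, 10.1, 10.0, 10.1]  # e.g., operating margin (%)

slope, _intercept = linear_regression(metric_values, org_results)
r = correlation(metric_values, org_results)

# Threshold is a judgment call, shown only for illustration.
if abs(r) < 0.3:
    print(f"Low sensitivity (slope={slope:.4f}, r={r:.2f}); consider elimination")
```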

Additional Considerations When Performing the Sensitivity Assessment

When performing the sensitivity assessment, care should be taken not to eliminate a metric based on characteristics that may be viewed as suggesting a lack of material importance, including:

  • the metric itself not having an associated action threshold because no action is warranted
  • the measure itself not being materially consequential from an operational, financial, reputational, or regulatory perspective

Additionally, only probable swings in indicated performance should be considered during the sensitivity assessment, as those representing nearly impossible extremes (natural disasters, war, government/market collapse, etcetera) may result in the inappropriate retention of some low-value metrics.

Final Thoughts…

While metrics may be eliminated from the organizational performance measurement system, their specification documentation, including the reason for elimination, should be retained. Additionally, if reasonably inexpensive, it is often desirable to maintain the metric’s associated data so that the measure can be reconstructed if needed. These actions ensure that if circumstances change, leaders will have ready access to the metric’s information while retaining the reduced distraction and cost benefits associated with the metric’s elimination.




About the Author

Nathan Ives is a StrategyDriven Principal and Host of the StrategyDriven Podcast. For over twenty years, he has served as a trusted advisor to executives and managers at dozens of Fortune 500 and smaller companies in the areas of management effectiveness, organizational development, and process improvement. To read Nathan’s complete biography, click here.

Performance Metrics Inventory Database

Over time, leaders can grow their performance measurement systems to include an almost countless number of interrelated metrics. Ensuring these numerous metrics remain well aligned, their output quality and relationship integrity preserved, and their meaning well understood while they continue to be of value to executives, managers, and employees necessitates a method of inventorying the measures themselves and their underlying construction characteristics. In our experience, the optimal method for maintaining such an inventory is a centralized metrics inventory database.

Information Captured by the Performance Metric Inventory Database

In order for the performance metrics inventory database to support these goals, it must contain the definition, construction, relationship, ownership, and revision data associated with each measure and be broadly accessible in at least a read-only format. Metric elements that should be captured within the database include (a minimal relational sketch covering a subset of these fields follows the list):

  • Metric Title – name of the metric
  • Purpose Statement – written discussion of why the metric is being employed, including performance drivers (see StrategyDriven article, Organizational Performance Measures Best Practice – Documenting Performance Measure Drivers) and the performance that observers are likely to see
  • Word Definition – written description of the metric including what it is monitoring
  • Mathematical Definition – calculation defining the metric, including a written description for each variable. Variables that are themselves constructs of several underlying components should be further defined in mathematical terms so that each defined term is associated with a single data source
  • Data Sources – explicit application and application field from which each mathematical term within the performance measure’s definition draws its data (see StrategyDriven article, Organizational Performance Measures Best Practice – Get Data Directly from the Source)
  • Metric Type – type of graph employed (bar, line, pie, etcetera)
  • Graphed Lines, Bars, Pie Segments – items, by name, whose performance will be reflected on the metric
  • Graphed Line, Bar, Pie Segment Colors – color coding associated with each graphed line, bar, and/or pie segment
  • X-Axis Label – x-axis descriptive label applied to the metric chart
  • X-Axis Unit of Measure – quantity used as a standard of measurement for the x-axis, typically time
  • X-Axis Scaling – start and end points of the x-axis
  • Y-Axis Label – y-axis descriptive label applied to the metric chart
  • Y-Axis Unit of Measure – quantity used as a standard of measurement for the y-axis
  • Y-Axis Scaling – start and end points of the y-axis, commonly using a zero reference frame
  • Secondary Y-Axis Label – secondary y-axis descriptive label applied to the metric chart
  • Secondary Y-Axis Unit of Measure – quantity used as a standard of measurement for the secondary y-axis
  • Secondary Y-Axis Scaling – start and end points of the secondary y-axis, commonly using a zero reference frame
  • Z-Axis Label – z-axis descriptive label applied to the metric chart
  • Z-Axis Unit of Measure – quantity used as a standard of measurement for the z-axis
  • Z-Axis Scaling – start and end points of the z-axis, commonly using a zero reference frame
  • Frequency of Measure – this may not match the x-axis displayed time measure. The frequency should typically be equal to or more frequent than the x-axis displayed interval
  • Direction of Goodness – direction in which a positive performance trend is indicated
  • Performance Thresholds – written description identifying excellent (green), average (white), below average (yellow), and unacceptable (red) performance levels
  • Performance Thresholds Numeric Values – measured value above or below which each performance threshold is achieved respectively
  • Primary Action Threshold – written description of the general actions that should be taken and the desired outcomes to be achieved (see StrategyDriven article, Organizational Performance Measures Best Practice – Predefined Action Thresholds)
  • Primary Action Threshold Numeric Value – measured value at which action should be taken
  • Secondary Action Threshold – written description of the general actions that should be taken and the desired outcomes to be achieved (see StrategyDriven article, Organizational Performance Measures Best Practice – Multiple Action Thresholds)
  • Secondary Action Threshold Numeric Value – measured value at which action should be taken
  • Relationships to Higher-Tier Performance Measures – there can be more than one senior performance measure to which the metric contributes
  • Relationships to Subordinate Performance Measures – there can be more than one performance measure feeding into the metric, particularly if it is an index metric
  • Relationships to Peer Performance Measures – there can be more than one instance where a performance measure is shared/common among workgroups across the organization (see StrategyDriven article, Organizational Performance Measures Best Practice – Horizontally Shared)
  • Core Performance Measure – special Yes/No metric designation field (see StrategyDriven article, Organizational Performance Measures Best Practice – Core Performance Measures)
  • Reports the Metric is Included In – listing of the reports containing the individual performance metric (see StrategyDriven article, Organizational Performance Measures Best Practice – Diverse Metric Groupings)
  • Accountable Person / Metric Owner – individual who is accountable for the performance/outcomes reflected by the metric. Should include the individual’s name, position title, and contact (phone, email, and location) information (see StrategyDriven article, Organizational Performance Measures Best Practice – Map Performance Measure Ownership)
  • Accountable Person / Metric Owner Organization – the company business unit, division, department, and/or workgroup to which the metric is assigned
  • Responsible Person – individual(s) who maintains and updates the performance measure. Should include the individual’s name, position title, and contact (phone, email, and location) information
  • Informed Person(s) 1 – Persons whose performance contributions are reflected in part or whole by the metric’s indicated outcomes (by position titles or groups)
  • Informed Person(s) 2 – Executives, managers, and supervisors who through lines of authority or functional collaboration need to be made aware of the metric’s reflected performance (by position titles or groups)
  • Informed Person(s) 3 – Public locations where the metric is to be posted (see StrategyDriven article, Organizational Performance Measures Best Practice – Broad Communication)
  • Informed Person(s) 4 – Accountable individuals and contributors who are notified of changes to the performance measure (by position titles or groups)
  • Consulted Person(s) – individuals consulted when changes to the metric are proposed (by position titles or groups)
  • Concurrence Person – name, position, and a date/time stamp of the person whose approval is required prior to changing the associated performance metric (see StrategyDriven article, Organizational Performance Measures Best Practice – System Approval by the CEO). This data is collected and stored for each metric revision (by position titles or groups)
  • Metric Revision Number – automated count each time a metric’s update is approved
  • Revision Reason Text Field – documentation of the background reasons for changing/updating the metric
  • Metric Revision Date – date the new, revised, or updated metric is authorized to be placed in service
  • Metric Creation Date – special field noting Revision 0 of the performance measure
  • Metric Termination Date – special field noting the metric’s removal from service
  • Comments / Notes Field – elaborating information
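
Because several of the elements above can occur more than once per metric (revisions, relationships, informed persons, reports), the inventory lends itself to a relational design, as discussed in the construction section below. The following is a minimal sketch, assuming SQLite accessed from Python purely for illustration; table and column names are hypothetical and cover only a representative subset of the fields listed above.

```python
# Minimal sketch of the inventory as a relational store (illustrative only).
# A production design would normalize further (axes, thresholds, RACI roles)
# and add the remaining fields.
import sqlite3

conn = sqlite3.connect("metric_inventory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS metric (
    metric_id             INTEGER PRIMARY KEY,
    title                 TEXT NOT NULL,
    purpose_statement     TEXT,
    word_definition       TEXT,
    math_definition       TEXT,
    metric_type           TEXT,               -- bar, line, pie, etcetera
    direction_of_goodness TEXT,               -- e.g., 'increasing' or 'decreasing'
    core_measure          INTEGER DEFAULT 0,  -- Yes/No designation
    owner_name            TEXT,               -- accountable person / metric owner
    owner_organization    TEXT,
    creation_date         TEXT,
    termination_date      TEXT
);
-- One row per revision preserves the change history and concurrence record.
CREATE TABLE IF NOT EXISTS metric_revision (
    metric_id       INTEGER REFERENCES metric (metric_id),
    revision_number INTEGER NOT NULL,
    revision_date   TEXT,
    revision_reason TEXT,
    concurred_by    TEXT,
    PRIMARY KEY (metric_id, revision_number)
);
-- Many-to-many relationships to higher-tier, subordinate, and peer measures.
CREATE TABLE IF NOT EXISTS metric_relationship (
    metric_id         INTEGER REFERENCES metric (metric_id),
    related_metric_id INTEGER REFERENCES metric (metric_id),
    relationship_type TEXT CHECK (relationship_type IN
                                  ('higher-tier', 'subordinate', 'peer'))
);
""")
conn.commit()
```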


Performance Metric Inventory Database Construction

The performance metric inventory database should be constructed as a relational database, given that several of the individual metric elements to be captured can occur multiple times per measure. Additionally, the user interface should be aligned with the performance measure development form used during the construction of the overall system. In this author’s experience, customizable interface applications such as Adobe Flex are useful when employing complex ERP systems as the metric repository, whereas Microsoft InfoPath easily accommodates Microsoft Access or SQL databases.

Performance Metric Inventory Database Governance

Performance Metric Inventory Databases should be governed to control both access and changes to measurement characteristic data. While broad access to database information is desirable, access should be limited based on legal, business, and ethical grounds to protect individual privacy and the organization’s intellectual property. Additionally, some information is appropriate for only company executives and senior-level managers. Maintaining data integrity is often served by limiting change access and, therefore, read-only access is recommended for those not directly involved with database information maintenance and approval. Electronic security protocols can help ensure access and data changes are authorized in accordance with established procedures.




About the Author

Nathan Ives is a StrategyDriven Principal and Host of the StrategyDriven Podcast. For over twenty years, he has served as a trusted advisor to executives and managers at dozens of Fortune 500 and smaller companies in the areas of management effectiveness, organizational development, and process improvement. To read Nathan’s complete biography, click here.

Post System Implementation Challenges

A performance measurement system’s complexity and organizational impact can bring with it many people, process, and technology challenges following implementation. For several months after a system go-live or significant upgrade, the organization adjusts its processes, procedures, and behaviors so as to achieve the best possible reflected performance. This evolution is not without its costs or problems.

Several common challenges arise during the go-live phase of a new or significantly upgraded organizational performance metrics and reports system. Our decades of experience indicate that each can be anticipated and successfully addressed through upfront planning:

  • Dissatisfaction with the System – Users often communicate significant dissatisfaction with the implementation of any new application. In addition to end user involvement with the system’s design and traditional change management communications and training, system-driven performance improvement goals should be established at the project’s start, comparing pre- and post-go-live efficiency, data access, etcetera
  • Dates and Times Matter – Metrics and reports are largely driven by the date-time stamps associated with measured data as well as by when data is pulled from supporting applications. Both need to be clearly documented and repetitively communicated throughout the measurement system’s design and implementation phases (see the sketch following this list)
  • Need for Initial Support – Inevitably, the organization’s performance as reflected by the new system will differ from that of the legacy system because of differences in the data used to derive the metrics and reports. Questions regarding the accuracy of the new metrics and reports will arise frequently, often weekly. A development staff should be in place to quickly assess and respond to these questions in order to protect the reporting system’s legitimacy and maintain end user confidence
  • Initial Data Cleanup – Automated metrics and reports commonly reveal a significant number of underlying data errors that were largely unrecognized prior to the reporting system’s implementation. A development staff should be in place to quickly identify and correct data errors in order to protect the reporting system’s legitimacy and maintain end user confidence
  • Pilot New Metrics and Reports – When altering any aspect of a performance metric or report (underlying system, data source, definition, etc.), the resulting output is often different than expected. While some differences simply require data cleanup to correct, others drive end users to demand changes in the metric/report definition or design. From experience, the number of change requests can be significant. Thus, it is often more cost effective to pilot new metrics and reports prior to implementing a fully automated reporting system
  • End Users Always Want More – Following implementation, end users frequently request additional performance metrics and reports be made available. Promptly assessing and delivering these, as appropriate, significantly helps organizational adoption of the new reporting system
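
Regarding the ‘Dates and Times Matter’ item above, one minimal sketch, with hypothetical field names, is to record both the measurement period and the extraction time on every metric snapshot so that discrepancies between the legacy and new systems can be traced to their source.

```python
# Illustrative only: every metric snapshot carries both the "as of" period and
# the time the data was pulled from the supporting application.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MetricSnapshot:
    metric_id: str
    period_end: datetime    # the date-time the measured data is "as of"
    extracted_at: datetime  # when the data was pulled from the source system
    value: float
    source_system: str      # which application supplied the underlying data

snapshot = MetricSnapshot(
    metric_id="open-work-orders",
    period_end=datetime(2024, 6, 30, 23, 59, tzinfo=timezone.utc),
    extracted_at=datetime.now(timezone.utc),
    value=412.0,
    source_system="ERP",
)
```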



About the Author

Nathan Ives is a StrategyDriven Principal and Host of the StrategyDriven Podcast. For over twenty years, he has served as a trusted advisor to executives and managers at dozens of Fortune 500 and smaller companies in the areas of management effectiveness, organizational development, and process improvement. To read Nathan’s complete biography, click here.
