Management Observation Program Best Practice 17 – Paired Observations

Managers translate leadership's vision into the day-to-day actions of the workforce. They do this through their decisions, published standards, and operational procedures, and they reinforce desired behaviors through organizational performance measures and management observations. But how do executives ensure their manager and supervisor direct reports understand, properly translate, and reinforce that vision with the workforce? One method is the conduct of paired observations.

Paired observations are management observations performed by a manager or supervisor with his/her superior in attendance. During these observations, the superior observes the manager or supervisor, noting:

  • What activity, document, and/or technology the observer chooses to evaluate
  • How the observer evaluates the activity, document, or technology (subject)
  • What subject characteristics the observer deems to be excellent, above average, average, and needs improvement (See StrategyDriven Management Observation Program Best Practice article, Criteria Scoring System)
  • What the observer deems important enough to document as substantiating comments
  • How and what feedback is provided to the observed subject or subject owner

The superior documents his/her assessment of the observer's performance, providing coaching as necessary to better align the individual's understanding with leadership's vision.
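The items a superior notes during a paired observation can be captured in a simple record. The sketch below is illustrative only; the field names and example values are assumptions, not part of any StrategyDriven form.

```python
from dataclasses import dataclass, field

# Illustrative paired-observation record; field names are assumptions.
@dataclass
class PairedObservationAssessment:
    observer: str                 # manager/supervisor being observed
    superior: str                 # executive performing the paired observation
    subject: str                  # activity, document, or technology evaluated
    subject_selection_notes: str  # why the observer chose this subject
    evaluation_method_notes: str  # how the observer evaluated the subject
    criteria_scores: dict = field(default_factory=dict)   # e.g. {"Procedure Use": "Average"}
    substantiating_comments: list = field(default_factory=list)
    feedback_notes: str = ""      # how and what feedback was given to the subject owner
    coaching_provided: str = ""   # superior's coaching to realign with leadership's vision

assessment = PairedObservationAssessment(
    observer="Line Supervisor",
    superior="Operations Manager",
    subject="Pump alignment procedure walk-down",
    subject_selection_notes="High-risk activity on this week's topical schedule",
    evaluation_method_notes="Observed the full evolution against the standard form",
)
assessment.criteria_scores["Procedure Adherence"] = "Above Average"
```

The record mirrors the five bullets above plus the superior's coaching notes, so one structure serves both halves of the paired observation.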

Final Thoughts…

Paired observations further communicate the importance of the management observation program and help ensure the program itself continues to provide quality reinforcement of expectations and input to the organization's performance improvement efforts.

Like other observations, a standard observation form should be used during paired observations. In these instances, the focus of the observation and the established criteria should align with the five items listed above.




About the Author

Nathan Ives is a StrategyDriven Principal and Host of the StrategyDriven Podcast. For over twenty years, he has served as a trusted advisor to executives and managers at dozens of Fortune 500 and smaller companies in the areas of management effectiveness, organizational development, and process improvement. To read Nathan's complete biography, click here.

Management Observation Program Best Practice 16 – Don’t Tolerate Substandard Observations

People, processes, and technologies are not perfect; and even if they were, an observer would still have something to write about.

Management observations are performed to reinforce leadership’s expectations and identify opportunities for meaningful organizational improvement. Consequently, observations that fail to identify either those exceptional or deficient performance attributes rob the organization of the opportunity to improve safety, reliability, and efficiency.

A critical component of an effective observation program is the review and grading of the observations themselves. Such reviews seek to determine whether a quality observation that will meaningfully contribute to organizational improvement was performed. Key characteristics of high quality observations include:

  • Criteria scoring across a spectrum from excellent to needs improvement (See StrategyDriven Management Observation Program Best Practice article, Criteria Scoring System)
  • Substantiating comments captured for exceptional and deficient performance such that replication (excellent or above average performance) or corrective action (needs improvement) can be taken
  • Observations performed across a diverse set of activities/individuals, documents, and technologies rather than focused on a singular subject by a particular observer or workgroup
  • Observations performed for activities, documents, and technologies important to the organization's operational safety, reliability/continuity, efficiency, and compliance (See StrategyDriven Management Observation Best Practice article, Selecting an Activity to be Observed)
  • Observation feedback provided to the individual observed or responsible for the documents and/or technologies observed (See StrategyDriven Management Observation Program Best Practice articles, Immediate Feedback and Documented and Signed Observations)
  • Condition reports written for those deficiencies meeting that program’s reporting criteria (See StrategyDriven Business Performance Assessment Program Best Practice article, Capture Improvement Opportunities with the Corrective Action Program)

Should an observation not meet the organization's minimum quality standards, it should be either improved or rejected. Improvement is appropriate when the missing details do not require a material recollection on the part of the observer and the correction is made within a short period of time following the initial observation. More significant deficiencies should result in the observation's rejection; the rejected observation is not counted against the organization's established observation schedule and/or quota and will need to be reperformed.
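The accept/improve/reject disposition described above can be sketched as a small decision function. This is an illustration only; the 48-hour freshness window and the notion of a "minor" deficiency are assumptions, since the article leaves "a short period of time" to the organization to define.

```python
from datetime import timedelta

# Assumed freshness window for improving (rather than rejecting) an observation.
IMPROVEMENT_WINDOW = timedelta(hours=48)

def disposition(meets_standards: bool, deficiencies_minor: bool,
                age_since_observation: timedelta) -> str:
    """Return 'accept', 'improve', or 'reject' for a reviewed observation.

    'improve' applies only when the missing details require no material
    recollection by the observer and the observation is still recent;
    otherwise the observation is rejected, does not count toward the
    schedule or quota, and must be reperformed.
    """
    if meets_standards:
        return "accept"
    if deficiencies_minor and age_since_observation <= IMPROVEMENT_WINDOW:
        return "improve"
    return "reject"

print(disposition(False, True, timedelta(hours=12)))  # minor gap, recent -> improve
```

A stale observation with the same minor gap would be rejected, which is the behavior that keeps reviewers from reconstructing details from memory.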

Final Thought…

Leaders who maintain consistent, rigorously applied, high standards for the performance of management observations communicate the program's importance to observers and the workforce. They further reinforce the organization's commitment to continuous improvement rather than a dedication to 'checking the quota box.' The result of this reinforcement is not only ever improving performance but also an organizational accountability that helps ensure performance meets and/or exceeds expectations even when individuals are not being watched.





Management Observation Program Best Practice 15 – Selecting an Activity to be Observed

Not all activities impact, or potentially impact, the organization equally. Consequently, they should not be treated equally when being selected for observation. So which activities should be prioritized for observation?

The management observation program topical schedule or quota system defines the management standards (topical areas) to be reviewed and reinforced. While the activity needs to embody these standards and a reasonable number of subordinate performance criteria, observers should seek to choose an observation opportunity based on the following priorities:

  • Operational safety (personnel and equipment safety)
  • Business continuity and/or asset reliability
  • Organizational efficiency
  • Regulatory compliance

Priority should be given to significant and high-risk activities over more routine activities, even if the routine activities support the topical areas to be observed, because significant evolutions are usually more indicative of employee behaviors and organizational issues. Note that it remains important to observe less significant, low-risk activities from time to time in order to convey the importance of adhering to management's standards regardless of an activity's perceived importance. (Such observations should typically be directed by the management observation program manager.)
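The selection logic above, impact area first, then risk level, can be sketched as a simple ranking. The candidate tuples and the scoring scheme are assumptions for illustration; only the priority order comes from the list above.

```python
# Priority order mirrors the article's list; the ranking scheme is an assumption.
PRIORITY = [
    "operational safety",
    "business continuity",
    "organizational efficiency",
    "regulatory compliance",
]

def rank(candidates):
    """Order candidates: highest-priority impact area first, high-risk before routine.

    Each candidate is a (name, impact_area, is_high_risk) tuple.
    """
    return sorted(candidates, key=lambda c: (PRIORITY.index(c[1]), not c[2]))

activities = [
    ("monthly filing", "regulatory compliance", False),
    ("crane lift", "operational safety", True),
    ("server failover drill", "business continuity", True),
]
print([name for name, _, _ in rank(activities)])
```

A real selection would also periodically promote a low-risk activity, per the note above, rather than always taking the top of the ranking.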

Also consider observing activities on backshifts and weekends. Doing so further reinforces the importance of adhering to management’s standards at all times.

Lastly, observers should remain alert for new observation opportunities. Emergent activities often provide useful insight into organizational performance.





Management Observation Program Best Practice 14 – Criteria Scoring System

Management observation cards are intended to be easy and straightforward to complete in the field. Consequently, the card's structure should require a minimal amount of data collection, reducing the administrative burden (and physical awkwardness) of completing the form while still ensuring quality performance data collection. Such a structure increases the number and frequency of observations, which in turn yields additional management engagement points and performance data. Key to simplifying management observation cards is a predefined criteria scoring system whereby the observer need only select a specific score for each criterion, accompanied by substantiating comments for performance outliers (high and low).

Common Criteria Scores

The number and definition of criteria scores is often best when aligned with the performance ratings of the organization’s personnel performance management program. This alignment facilitates the transfer of data from the management observation program to the personnel performance management program. (See StrategyDriven Management Observation Program Best Practice article, Feeding the Performance Management System). That said, a common scoring system might include:

  • Excellent (E) – World-class or near perfect performance
  • Above Average (AA) – Performance exceeding the organization’s established standard and/or meets an industry leading practice but does not necessarily reflect excellent performance
  • Average (A) – Performance meeting the organization’s defined standards typically reflective of regulatory requirements and industry standards and guidelines
  • Needs Improvement (NI) – Performance not meeting the organization’s established performance standards
  • Not Applicable (NA) – Those instances where a specific criterion was not observed or was not applicable to the activity, document, or software program reviewed as part of the management observation

Substantiating Facts

When scoring a specific criterion as excellent, above average, or needs improvement, the observer should provide substantiating comments highlighting the gap relative to the established performance standard. Such comments:

  • Recognize Outstanding Performance – Reinforces the desired behavior with the individual observed and provides the opportunity to communicate and reinforce the behavior throughout the organization (See StrategyDriven Management Observation Program Best Practice articles, Immediate Feedback and Metrics and Results Communication)
  • Enable Performance Improvement – Provides the details necessary to correct the conditions not meeting management expectations and supports aggregate analysis so that long-term improvements to people, processes, and technologies can be made (See StrategyDriven Management Observation Program Best Practice article, Cross Organizational Trending, and Business Performance Assessment Best Practice article, Capture Improvement Opportunities within the Corrective Action Program)
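The scoring and commenting rules above amount to a simple completeness check on each card entry: scores of Excellent, Above Average, or Needs Improvement require a substantiating comment, while Average and Not Applicable do not. The score codes mirror the list above; the validation function itself is a sketch, not an actual StrategyDriven tool.

```python
# Score codes from the article's criteria scoring system.
SCORES = {"E", "AA", "A", "NI", "NA"}
# Outlier scores that require a substantiating comment.
NEEDS_COMMENT = {"E", "AA", "NI"}

def validate_entry(score: str, comment: str) -> bool:
    """Return True when a single criterion entry on the card is complete."""
    if score not in SCORES:
        return False
    if score in NEEDS_COMMENT and not comment.strip():
        return False
    return True

print(validate_entry("AA", "Used three-way communication throughout"))  # True
print(validate_entry("NI", ""))  # False: outlier score with no comment
```

Enforcing this check at data entry keeps the administrative burden low (most Average scores need nothing further) while guaranteeing outliers arrive with the detail needed for recognition or corrective action.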

Typical Scoring Distribution

Most observed performance will meet management’s expectations. Common rules of thumb for the percent distribution of observation criteria scoring by category include:

  • Excellent: 5 – 10 percent
  • Above Average: 15 – 20 percent
  • Average: 50 – 75 percent
  • Needs Improvement: 10 – 20 percent

The management observation program manager should periodically perform a criteria scoring analysis, on both a programmatic and an individual observer basis, to assess the overall scoring breakdown. While corrective action may not be needed simply because aggregate scoring is abnormally high or low, further review is warranted should such a trend continue for an extended period of time. These reviews maintain program integrity and credibility, ensure managers and supervisors understand how to apply the standards, and deter observation performance and scoring complacency. Overwhelmingly average ratings, for example, suggest observers may simply be filling out forms to satisfy the observation quota with minimal documentation effort while avoiding the follow-on action required for above and below average performance.
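The periodic scoring analysis described above can be sketched as a band check against the rule-of-thumb distribution. The percentage bands come from the list above; the flagging logic and data layout are assumptions for illustration.

```python
# Rule-of-thumb share bands from the article, by score code.
BANDS = {"E": (0.05, 0.10), "AA": (0.15, 0.20), "A": (0.50, 0.75), "NI": (0.10, 0.20)}

def out_of_band(counts: dict) -> list:
    """Return score categories whose share falls outside the rule-of-thumb band.

    counts maps score code -> number of criteria receiving that score
    over the review period (NA scores excluded).
    """
    total = sum(counts.values())
    flagged = []
    for score, (lo, hi) in BANDS.items():
        share = counts.get(score, 0) / total
        if not lo <= share <= hi:
            flagged.append(score)
    return flagged

# An observer scoring nearly everything 'Average' is flagged in every category.
print(out_of_band({"E": 1, "AA": 2, "A": 90, "NI": 7}))
```

As the article notes, a flag here is a prompt for further review of the observer's work, not automatic corrective action.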





Management Observation Program Best Practice 13 – Metrics and Results Communication

Management observation programs serve to reinforce leadership expectations throughout the workforce. This reinforcement includes not only standards associated with the performance of day-to-day operational activities but also manager and supervisor performance of the observations themselves. One effective way to reinforce the importance of the program itself, the desired performance of management observations by managers and supervisors, and the behaviors expected of workers is through a publicly published set of management observation program metrics.

Management Observation Program Performance Metric Sets

Metrics provide constant, indirect reinforcement of leadership's expectations when published in the highly trafficked areas of those to whom they apply. For the management observation program, reinforcement is best achieved through the routine publication of programmatic and findings metrics. These metrics commonly include:

Programmatic Performance Metrics (overall and by department, workgroup, etcetera)

  • Management Observations Performed
  • Management Observations Performed by Managers & Supervisors
  • Average Management Observation Time
  • Average Management Observation Grade
  • Management Observations of Training
  • Management Observation Identified Improvement Opportunities
  • Management Observations Identifying Improvement Opportunities
  • Management Observation Open Condition Reports or Corrective Actions

Findings Performance Metrics

Findings performance metrics are a logical collection of categorical performance metrics based on the observation card criteria (See StrategyDriven Management Observation Program Best Practice article, Use of Standard Observation Forms) graded during each observation. This collection of metrics is derived as follows:

  1. Observation card criteria are grouped by people, process, and technology categories (done for each observation card performed)
  2. The number (specific criteria count) and/or percent (specific criteria count divided by the total of all criteria counts) of criteria within each category receiving a particular grade/score is calculated for a defined period (typically one month)
  3. Subordinate metrics for each department, workgroup, etcetera are created (Note that the aggregate ‘number’ from the subordinate metrics should equal that of the parent metric defined in Step 2.)
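Steps 1 and 2 above reduce to grouping each card's graded criteria by category and counting scores per category for the period. The sketch below illustrates that aggregation; the category and score labels match the article, but the data layout is an assumption.

```python
from collections import Counter

def findings_metric(graded_criteria):
    """Aggregate one period's graded card criteria into a findings metric.

    graded_criteria: iterable of (category, score) tuples, one per criterion
    graded on any observation card during the period (typically one month).
    Returns {category: Counter({score: count})}.
    """
    metric = {}
    for category, score in graded_criteria:
        metric.setdefault(category, Counter())[score] += 1
    return metric

month = [
    ("People", "Average"), ("People", "Needs Improvement"),
    ("Process", "Average"), ("People", "Average"), ("Technology", "Excellent"),
]
m = findings_metric(month)
print(m["People"]["Average"])  # 2
```

Step 3's subordinate metrics follow by running the same aggregation on each department's or workgroup's subset of tuples; their counts then sum to the parent metric, as the note in Step 3 requires.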

Worker Behaviors (People) Management Observation Program Findings Metric Example

  • Category: Worker Behaviors (People)
  • Criteria: Procedure Use, Procedure Adherence, Self-Checking, Peer Checking, Independent Verification, etcetera (See the StrategyDriven Human Performance Management Forum)
  • Scoring: Excellent, Above Average, Average, Needs Improvement
  • Periodicity: Monthly
  • Worker Behaviors Number/Counts Metric: Stacked bar chart showing the monthly sum of the counts of the worker behaviors criteria receiving an excellent, above average, average, and needs improvement score


Figure 1: Worker Behaviors (People) Management Observation Program Findings Metric

Management Observation Program Metrics System Structure

Management observation program performance measures should be well-constructed and horizontally shared. (See the StrategyDriven Organizational Performance Measures Forum) To achieve these goals, each metric within the system should use consistent units of measure and scaling so as to enable comparison between workgroups. Such comparison fosters a healthy, camaraderie-based competition between workgroups that serves to further elevate performance.

Management Observation Program Metrics Communication

Each workgroup's specific management observation program performance metric set should be posted in highly trafficked locations so as to serve as a frequent reminder of the workgroup's compliance with leadership-defined standards.

Results briefings should also be provided to workgroup leaders with the expectation that the information will be disseminated to subordinate personnel. These communications provide the additional underlying performance details needed to improve performance.

Final Thoughts…

Mature management observation programs leverage software applications to capture observation data and automatically generate the associated performance metrics. These applications greatly reduce or eliminate the administrative overhead associated with maintaining the program.

As with other workgroup-level performance metrics, accountability for the measured results should be assigned to the associated workgroup leader and included in his/her performance goals. This alignment between management observation program outcomes and individual manager goals further reinforces the program and promotes its effective implementation.



