
Sales Strategies
The Execution Confidence Index: Visualizing Initiative Success!
By Jonathan Narducci, Narducci Enterprises

Execution needs preparation.  Business leaders use questions to get a handle on which execution components are working well and identify obstacles to the effectiveness of others.

Strategy execution is the subject of a lot of talk these days.  That's because, according to conventional wisdom, businesses are realizing it's the main reason strategies, initiatives, projects, and programs - collectively called "initiatives" from here on - fail at all levels of the company.  They believe that inventing the moves to capture markets is actually easy compared to making them work!

Execution is hard because of its complexity: culture, communication, planning, process, change, management controls, and other factors must mix together to create all the necessary activities.  This complexity makes execution "hard to see" before activity starts, and what is hard to see is also hard to model.

Simplifying execution's complexity with a framework of components and their relationships - a model - would help initiative leaders easily visualize its potential success.  That's the subject of this paper: simplifying execution into a model that works - making it "easy to see" - and defining a measure that captures the leader's confidence that execution will work as intended - the "Execution Confidence Index" (ECI).

It's important to note something about initiatives that makes excellent execution so imperative: they are created with the sole purpose of adding value, or benefits, to the proposition - the company's, its customers', or both.  Think about it: if clear, well-thought-out initiatives are executed poorly, not only might value not be added, some could be subtracted.

Also note that anything done to improve execution could wind up being an initiative as well.  This is important because it stresses the need to identify what's working well and what's not early in the definition and implementation of initiatives.  More importantly, it points out that defining initiatives and making sure they work is an iterative process; they are co-developed.

Guiding Principles

Modeling execution requires knowing what it is.  Simply put, execution is managing and carrying out the activities that ensure the successful completion of an initiative.  Any model must consider that successful execution relies on a system of activities subordinate to the alignment of the following three components:  goals of the initiative (i.e. what's going to be accomplished in light of overall business goals), resources, and operations.

As noted, business leaders prepare for execution by asking questions that reveal which components are working well and which face obstacles.  The model relies on questions as well, because any improvements, customizations, and additions to the company's execution abilities should be knowledge-based.

A robust, knowledge-based model means execution assessments require "across the board" participation from everyone who will be part of the system, and even some who will not.  Successful assessments need different, fresh perspectives from those performing related, collaborative activities and those with relevant experience.

Execution activities result in adding value to propositions and creating other advantages, such as improving operations and resource management.  A model therefore helps make sure the initiative is the "right" one - not poorly defined - and addresses the critical issues facing the company's participation in its chosen markets.  As the great philosopher Waylon Jennings (country singer) said, "There ain't no right way to do the wrong thing."

The "Five Step" Model

Step 1

The rationale behind developing an initiative is to have it help achieve one or more business objectives.  Therefore, clearly describe the initiative in terms of its relationship to these objectives and identify the top expected results the initiative should produce.

Step 2

Execution and initiative development go hand in hand, so one question must be answered: can the initiative be executed, and do present execution abilities support it?  Determining whether an initiative is the well-defined, right "solution" it's thought to be - and whether it will be executed well - requires developing questions that scrutinize its viability.

These questions, or some of them, must do the following:
  • Be comprehensive.  They must cover the three major components - goals, operations and resources - and collectively point to the execution activities that need to be successful.

  • Provoke deeper responses.  Most questions can be answered quickly with "first reactions."  But in order to develop a confidence level that accurately reflects reality, any assessment question must lead to "follow ups" that begin with, for example, "why" or "how."

  • Tie into activities that seem unrelated to the initiative.  For example, an initiative that adds skills to the company may seem unrelated to a question about customer profitability.  But inquiring about the connection may lead to, for example, the need to better communicate - execution activity - to customers about how they can take advantage of these new skills.

  • Focus on what's expected of the initiative, such as lower costs, more profits, new markets, or organizational structure.

The fact that lots of questions need to be asked raises its own question: about what?  Even though questions must cover the three components (goals, resources, and operations), initiatives innately have six well-known, simple categories that need to be explored (the relationship between the three and the six is explained in the next step).  They are, with example questions:
  • Why this initiative?  "Will the results of the initiative add to our or our customers' revenues?" and "Does the initiative eliminate a competitor's strength?"
  • What does it consist of?  "Will it result in a new service?" and "Does it result in addressing a customer's specific need, like quality or training?"
  • How will it be accomplished?  "How much change does the organization or the customer need to make?" and "Can the initiative be effectively planned?"
  • Who is involved?  "Is the target market well defined?" and "Who are the customer champions?"
  • Where will it be implemented?  "Are there any geographical distinctions that need to be considered?" and "Can the results be deployed to all customer facilities?"
  • When does it need to be or can it be done?  "Can the initiative show results in the timeframe anticipated?" and "Can customers use the results in the timeframe they anticipated?"

Step 3

The goal of this step is to develop a set of execution-ability visuals that are as accurate as possible.  Besides developing good questions, the company increases this accuracy by getting the right people, at all the right levels, to participate in the ratings process.

By assigning a "confidence level" to each question asked, each of the six categories will have its own confidence rating plus contribute to an overall rating.  These ratings can be seen graphically, numerically, and in a table of category tallies, all showing, easily and quickly, the relationship between categories.
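The tallies described above can be sketched in a few lines of code.  The question texts, the 0.0-1.0 confidence scale, and the use of a simple average per category are illustrative assumptions, not something the article prescribes:

```python
from statistics import mean

# Hypothetical ratings: each question carries one of the six categories
# and a confidence level between 0.0 and 1.0 (all values illustrative).
ratings = [
    ("Will the results add to revenues?",                  "Why",   0.8),
    ("Does it eliminate a competitor's strength?",         "Why",   0.7),
    ("Will it result in a new service?",                   "What",  0.9),
    ("Can the initiative be effectively planned?",         "How",   0.4),
    ("Is the target market well defined?",                 "Who",   0.6),
    ("Can results be deployed to all customer facilities?","Where", 0.5),
    ("Can it show results in the anticipated timeframe?",  "When",  0.7),
]

def category_tallies(ratings):
    """Average the confidence levels within each of the six categories."""
    by_cat = {}
    for _question, category, confidence in ratings:
        by_cat.setdefault(category, []).append(confidence)
    return {cat: mean(vals) for cat, vals in by_cat.items()}

tallies = category_tallies(ratings)
overall = mean(conf for _q, _c, conf in ratings)
for cat, score in tallies.items():
    print(f"{cat:>6}: {score:.2f}")
print(f"Overall: {overall:.2f}")
```

A table of these per-category averages, next to the overall average, is exactly the "category tallies" view the step calls for; a bar chart of the same numbers gives the graphical view.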

But execution is defined as three aligned components:  goals, operations, and resources.  The model depicts their relationship in the form of a cube, as seen to the right.  The cube's volume represents the overall confidence level (the "Execution Confidence Index," ECI) that the relationship between the three components will produce the desired results.

To get the cube's volume, assign each question to one of the relationship components.  For example, "Why" questions tend to be assigned to "Goals," "How" questions to "Operations," and "Who" questions to "Resources."  Regardless of these tendencies, each question must be assigned, individually and not arbitrarily, to a component.  The volume is then calculated from the confidence-level ratings assigned to each question.
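One plausible reading of the cube is that each component's edge length is the average confidence of its assigned questions, and the ECI is the cube's volume - the product of the three edges.  The article does not fix the exact formula, so the mapping, numbers, and product rule below are assumptions made for illustration:

```python
from statistics import mean

# Hypothetical assignments: each question is mapped, individually, to one
# of the three components; confidence values (0.0-1.0) are illustrative.
assignments = [
    ("Why this initiative?",            "Goals",      0.9),
    ("What does it consist of?",        "Goals",      0.8),
    ("How will it be accomplished?",    "Operations", 0.3),
    ("Can it be effectively planned?",  "Operations", 0.4),
    ("Who is involved?",                "Resources",  0.8),
    ("When can it be done?",            "Resources",  0.7),
]

def execution_confidence_index(assignments):
    """Edge length per component = mean confidence of its questions;
    the ECI is taken as the cube's volume (product of the three edges)."""
    by_component = {}
    for _question, component, confidence in assignments:
        by_component.setdefault(component, []).append(confidence)
    edges = {comp: mean(vals) for comp, vals in by_component.items()}
    volume = 1.0
    for length in edges.values():
        volume *= length
    return edges, volume

edges, eci = execution_confidence_index(assignments)
print(edges)  # Goals 0.85, Operations 0.35, Resources 0.75
print(f"ECI: {eci:.2f}")
```

With these made-up numbers the Operations edge is short, so the cube is visibly out of balance and the ECI is dragged well below any single component's score - which is the "quick look" the model is after.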

Why does this work?  Say there's a high confidence in understanding the initiative's goals and required resources but very little confidence in the company's operations.  The cube would look like the one to the right, out of balance, quickly "showing" the company that the initiative needs some work to get it ready for planning and implementation.

Since the graph visuals and the cube are developed together, the initiative's business leaders can quickly "see" their confidence in its success and, more importantly, they can "see" what execution factors negatively impact their confidence.

Step 4

Regardless of the confidence level, the next step is to actually answer each question (as opposed to rating it with a confidence level).  For example, if the rating question is "Who is the target market?" participants would identify each market and some of its characteristics.  A question with a low confidence rating could yield an answer consisting of many overlapping markets, each with conflicting characteristics.  A question with a high confidence rating could yield an easy, consistent answer identifying a well-defined market understood by all participants.

Answering the questions could result in "Don't Knows," thus creating tasks that need to be accomplished in order to raise the confidence level to where the initiative's leaders feel comfortable starting the next step.

Step 5

Using the answers to the questions, develop an execution plan.  At this point the "Whys, Whats, Hows, Whos, Wheres, and Whens" are confidently known, with a minimal number of "Don't Knows" remaining in each category.


Going through the model's steps is relatively quick compared to hitting unknown roadblocks during the initiative's implementation (commonly known as "wheel spinning").  Plus, using the model could uncover new execution ideas, test their viability, and put them into action sooner than if these ideas were discovered as a result of an execution crisis.

By developing "visuals" for several initiatives, a company can easily start the initiative prioritization process based on its ability to execute each one successfully.

By asking execution-initiative questions, the process will surface the company's assumptions and hidden objections (by involving "across the board" participation) about its abilities and about the relative importance of the initiative to the company's overall objectives.

Participation is the best way to align the different groups and people actively taking part in an initiative's execution.  Involving participants in the assessment process gets early buy-in and across-the-board knowledge about other, related activities.

By assessing several initiatives, the process could point to common execution strengths and weaknesses, offering an opportunity to share what works well across areas of the company and to shore up the weaknesses that tend to stop the "generic" initiative from working.


Business leaders ask initiative-preparation questions anyway.  Putting many of these questions into an assessment framework as early as possible, and using it to develop a clear picture of a strategy's, initiative's, project's, or program's probable success, can only help.  This change - finding glitches sooner rather than later instead of starting initiatives too early - will go a long way toward creating a new "conventional wisdom."

Jonathan Narducci uses his 30+ years of experience in business, management, and quality systems to lead international companies in their search to locate and implement the ideas that help craft the business performance needed for the business results expected. For more information visit
