Performance, Learning, Leadership, & Knowledge

Performance Management is a long-term process that focuses on continuous performance improvement, or "change" for short. Its goal is to create a climate of shared understanding about what is to be achieved, and then to develop people so as to increase the chance that it will indeed be achieved.

  • Drink from the New Ideal
Developing people, or having them change, can often be quite difficult. People tend to have a closed mind when it comes to change. But if we "drink" from the new ideal, we can build from it. Yet, like the song, if we take only small sips, we end up returning to our old way of doing things by building walls. To keep the new way (change), we need to immerse ourselves in the waterfall of change rather than just sip from it. While much of the literature looks only at the "Yin" side of performance management, such as improving processes and creating procedures, this commentary focuses on the opposite, "Yang" side: the people. To do that, it covers four aspects of managing the human side of performance management: learning, reframing, flowing, and viscosity.
  • Performance Management
Performance Management is a long-term process that focuses on continuous performance improvement, or "change" for short. Its goal is to create a climate of shared understanding about what is to be achieved, and then to develop people so as to increase the chance that it will indeed be achieved. A lot of people associate performance management with the annual performance review. While it can and often does include performance reviews, it goes far beyond them in that it treats performance improvement as a daily activity rather than a yearly event. In addition, some people equate performance management with Taylor's Scientific Management. However, Taylor's method focuses mainly on time-and-motion studies of processes, while performance management focuses on people, or to be more exact, on developing people so they can perform.
  • Learning Versus Rejection of New Ideas
In a great audio file called Human Nature (MP3), Malcolm Gladwell, author of The Tipping Point and Blink, explores why we can't trust people's opinions; he believes we don't have the language to express our feelings. He gives several examples, including Herman Miller's Aeron chair, which became the best-selling chair in the history of office chairs, and it succeeded in spite of research that suggested it would fail. When the Aeron chair was first introduced, people thought it was one of the most comfortable chairs ever built, yet they also thought it was quite ugly. In fact, when people first looked at the completed chair, they commented that they might consider such a chair once the designer finished it! However, after people used it for a while, they became quite accustomed to its look. So much so, in fact, that they actually began to think of it as, well... good-looking. Gladwell draws two main lessons: We often say that people don't like change, yet if given a chance to learn about a new innovation or process, people will see the beauty in the idea. First impressions threaten to derail innovation simply because people have no initial conception to judge it against; thus new ideas must be presented in ways that allow users to learn about them, such as through prototypes, models, or trial and error.
  • Reframe for the Holistic
All leadership comes down to changing people's behavior. But why does it seem so hard at times? Science has come up with a few surprising answers to this perplexing problem. An article in Fast Company tells the story of coronary-artery bypass patients who have surgery to relieve pain rather than to cure them. In fact, the only real cure is for them to start taking better care of themselves: quitting smoking, eating less, and exercising. Yet, in study after study, very few do! When these patients are examined two years after their surgery, 90% have not made any significant change to their lifestyle. Here are people facing a life-or-death situation, yet they fail to make the right choice; thus they face more pain, more surgeries, and possibly even death. How can we expect leaders to change people when people will not even change themselves when faced with a major personal crisis? John Kotter, a Harvard Business School professor, says that you need more than careful analysis, measurement tools, and management to help someone make a behavioral change; what you actually need to address directly are people's feelings. Going back to the heart patients: a researcher, Dr. Ornish, showed that a holistic program, centered on a vegetarian diet, can actually reverse heart disease without surgery or drugs. This holistic program addresses patients' feelings by having them attend twice-weekly support group sessions led by a psychologist. It includes instruction in aerobic exercise, meditation, relaxation, and yoga, and lasts about a year. A study showed that after three years, 77% of the patients had stuck with their lifestyle changes and avoided surgery. This is a far cry from the 10% who succeed when given only cognitive instruction. The holistic method works better because the change is reframed: rather than trying to motivate patients with the fear of death, it motivates them with the joy of living. Facing death is much too frightening for most people to think about, so patients often go into denial, whereas making daily life more enjoyable is a powerful motivator.
  • Revise System
Once a training deficiency has been noted, the ISD process is repeated to correct the deficiency. This does not mean that the entire training program is rebuilt, just the portions that had deficiencies or will be affected by the changes.
The Four Levels of Training Evaluation
Perhaps the best-known training evaluation framework is Kirkpatrick's Four-Level Evaluation Model (1994): reaction, learning, performance, and impact. The chart below shows how the evaluation process fits together:
  • Level One - Reaction
As the word implies, evaluation at this level measures how the learners react to the training. It is often measured with attitude questionnaires passed out after most training classes. This level measures one thing: the learner's perception (reaction) of the course. Learners are keenly aware of what they need to know to accomplish a task. If the training program fails to satisfy their needs, a determination should be made as to whether the fault lies in the program's design or its delivery. This level is not indicative of the training's performance potential, as it does not measure what new skills the learners have acquired or what they have learned that will transfer back to the working environment. This has caused some evaluators to downplay its value. However, the interest, attention, and motivation of the participants are critical to the success of any training program; people learn better when they react positively to the learning environment. When a learning package is first presented, whether it be e-learning, classroom training, CBT, etc., the learner has to decide whether he or she will pay attention to it. If the goal or task is judged important and doable, the learner is normally motivated to engage in it (Markus & Ruvulo, 1990). However, if the task is presented as having low relevance or a low probability of success, a negative effect is generated and motivation for task engagement is low. This differs somewhat from Kirkpatrick, who writes, "Reaction may best be considered as how well the trainees liked a particular training program" (1996). The less relevant the learning package is to a learner, the more effort has to be put into its design and presentation. That is, if it is not relevant to the learner, the package has to "hook" the learner through slick design, humor, games, etc.
This is not to say that design, humor, or games are unimportant. However, their use in a learning package should be to promote the "learning process," not to promote the "learning package" itself. If a learning package is built on sound design, it should support the learners in bridging a performance gap; hence, they should be motivated to learn! If not, something went dreadfully wrong during the planning and building processes. So if you find yourself having to hook the learners through slick design, you probably need to reevaluate the purpose of the learning program.
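The attitude-questionnaire measurement described above reduces to simple arithmetic. A minimal sketch, assuming responses on a 1-5 Likert scale; the function name and scale are illustrative additions, not part of Kirkpatrick's model:

```python
def reaction_score(responses: list) -> float:
    """Average the Level One questionnaire ratings (assumed 1-5 scale)."""
    return sum(responses) / len(responses)

# Four learners rate the course 4, 5, 3, and 4:
print(reaction_score([4, 5, 3, 4]))  # 4.0
```

A score like this flags relevance and delivery problems early, but as the text notes, it says nothing about what was actually learned.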
  • Level Two - Learning
This is the extent to which participants change attitudes, improve knowledge, and increase skill as a result of attending the program. It addresses the question: did the participants learn anything? The learning evaluation requires post-testing to ascertain what skills were learned during the training. In addition, post-testing is only valid when combined with pre-testing, so that you can distinguish what learners already knew before training from what they actually learned during it. Measuring the learning that takes place in a training program is important in order to validate the learning objectives. Evaluating that learning typically focuses on such questions as: What knowledge was acquired? What skills were developed or enhanced? What attitudes were changed? Learner assessments are created to allow a judgment to be made about the learner's capability for performance. There are two parts to this process: gathering the information or evidence (testing the learner) and judging the information (what does the data represent?). This assessment should not be confused with evaluation: assessment is about the progress and achievements of the individual learners, while evaluation is about the learning program as a whole (Tovey, 1997, p. 88). Evaluation in this process comes through the learner assessment that was built in the design phase. Note that the assessment instrument normally benefits the designer more than the learner. Why? For the designer, building the assessment helps define what the learning must produce. For the learner, assessments are statistical instruments that normally correlate poorly with the realities of performance on the job, and they rate learners low on the "assumed" correlatives of the job requirements (Gilbert, 1998).
Thus, the next level is the preferred method of ensuring that the learning transfers to the job, but sadly, it is quite rarely performed.
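The pre-test/post-test logic above can be made concrete. A minimal sketch, assuming both tests are scored on a common scale; the normalized-gain formula and names are illustrative assumptions, not something Kirkpatrick prescribes:

```python
def learning_gain(pre_score: float, post_score: float,
                  max_score: float = 100.0) -> float:
    """Normalized gain: what fraction of the possible improvement was achieved.

    Comparing post-test to pre-test separates what was learned during
    training from what the learner already knew coming in.
    """
    room_to_improve = max_score - pre_score
    if room_to_improve <= 0:
        return 0.0  # learner was already at ceiling; nothing to measure
    return (post_score - pre_score) / room_to_improve

# A learner scoring 40 before training and 70 after has closed half
# of the 60-point gap to a perfect score:
print(learning_gain(40, 70))  # 0.5
```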
  • Level Three - Performance (Behavior)
In Kirkpatrick's original four levels of evaluation, he names this level "behavior." However, behavior is the action that is performed, while the final result of the behavior is the performance. Gilbert said that performance has two aspects: behavior being the means and its consequence being the end (1998). If we were worried only about the behavioral aspect, this could be evaluated in the training environment. However, the consequence of the behavior (performance) is what we are really after: can the learner now perform in the working environment? This evaluation involves testing the students' capabilities to perform learned skills while on the job, rather than in the classroom. Level Three evaluations can be performed formally (testing) or informally (observation). They determine whether the correct performance is now occurring by answering the question, "Do people use their newly acquired learning on the job?" It is important to measure performance because the primary purpose of training is to improve results by having students learn new skills and knowledge and then actually apply them to the job. New skills and knowledge are no good to an organization unless the participants actually use them in their work activities. Since Level Three measurements must take place after the learners have returned to their jobs, they will typically involve someone closely involved with the learner, such as a supervisor. Although it takes more effort to collect this data than to collect data during training, its value is important to the training department and the organization, as it provides insight into the transfer of learning from the classroom to the work environment and into the barriers encountered when attempting to implement the new techniques learned in the program.
  • Level Four - Results
This measures the final results that occur, that is, the training program's effectiveness: "What impact has the training achieved?" These impacts can include such items as money, efficiency, morale, teamwork, etc. While it is often difficult to isolate the results of a training program, it is usually possible to link training contributions to organizational improvements. Collecting, organizing, and analyzing Level Four information can be more difficult, time-consuming, and costly than the other three levels, but the results are often quite worthwhile when viewed in the full context of their value to the organization. As we move from Level One to Level Four, the evaluation process becomes more difficult and time-consuming; however, it provides information of increasingly significant value. Level One is perhaps the most frequently used measurement because it is the easiest to take, yet it provides the least valuable data. Measuring results that affect the organization is considerably more difficult, so it is conducted less frequently, yet it yields the most valuable information. Each evaluation level should be used to provide a cross-section of data for measuring the training program. The first three levels of Kirkpatrick's evaluation (Reaction, Learning, and Performance) are largely "soft" measurements; however, the decision-makers who approve such training programs prefer results (returns or impacts). That does not mean the first three are useless; indeed, their use is in tracking problems within the learning package: Reaction tells you how relevant the training is to the work the learners perform (it measures how well the training-requirement analysis worked). Learning tells you the degree to which the training package transferred KSAs from the training material to the learners (it measures how well the design and development processes worked). Performance tells you the degree to which the learning can actually be applied to the learner's job (it measures how well the performance analysis worked). Impact tells you the "return" the organization receives from the training. Decision-makers prefer this harder "result," although not necessarily in dollars and cents. For example, a recent study of financial and information technology executives found that they consider both hard and soft "returns" when it comes to customer-centric technologies, but give more weight to non-financial (soft) metrics, such as customer satisfaction and loyalty (Hayes, 2003).
  • Level Four – Results
Note the difference between "information" and "returns." The first three levels give you "information" for improving the learning package, while the fourth level gives you "impacts." A hard result is generally given in dollars and cents, while soft results are more informational in nature; but instead of evaluating how well the training worked, this level evaluates the impact the training has upon the organization. There are exceptions. For example, if the organizational vision is to provide learning opportunities (perhaps to increase retention), then a Level Two or Level Three evaluation could be used to provide a soft return. This final measurement of the training program might be better served with a "balanced scorecard" (Kaplan & Norton, 2001), which looks at the impact or return from four perspectives:
Financial: A measurement, such as an ROI, that shows a monetary return, or the impact itself, such as how the output is affected. Financial measures can be either soft or hard results.
Customer: Improving an area in which the organization differentiates itself from competitors to attract, retain, and deepen relationships with its targeted customers.
Internal: Achieving excellence by improving such processes as supply-chain management, the production process, or support processes.
Innovation and Learning: Ensuring the learning package supports a climate for organizational change, innovation, and the growth of individuals.
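The "hard," dollars-and-cents result mentioned above is usually expressed as a simple ROI ratio. A sketch under that assumption; the figures are invented, and a real impact study must first isolate the training's contribution from other factors before plugging in benefits:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return training ROI as a percentage: (net benefits / costs) * 100."""
    return (benefits - costs) / costs * 100.0

# A program costing $50,000 that yields $120,000 in measurable benefits:
print(training_roi(120_000, 50_000))  # 140.0
```

As the text notes, a balanced scorecard would pair a number like this with customer, internal-process, and learning measures rather than report it alone.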
  • Flow Rather than Script
Dutch composer Simeon ten Holt wrote Canto Ostinato (mp3 music file) for various instruments and flexible duration. Ten Holt uses repetition and minimalist permutations to create an original, evolving work with ever-shifting moments. These parts are woven together into an overlapping and flowing whole. One popular arrangement is to have three or four pianos facing each other in the center of a room, and unlike most conventional performances, the audience is allowed to move and surround the performers as they play. Canto Ostinato has 106 figures or parts, and the players themselves decide, on stage, how many times they repeat each section. Ten Holt has said, "That what happens on-stage is like you're looking at an object from different angles, and the object is changed by the input people put into it. If you look at an object from above or below, it's still the same object, but the colors are different, the shapes are different, and that's what happens on-stage. So the players' input is very important." Canto Ostinato has become a metaphor for a new approach to design (Thackara, 2005). The composer, score, musicians, stage, and audience all interact in subtle yet complex ways. Neither the musicians nor the audience knows exactly what will happen next, as the arrangement is quite flexible. But they are not flying blindly: there are principles and, to a point, a score. The situation itself, in a sense, becomes designed. "Flows are not just one element of social organization; they are the expression of the processes dominating our economic, social and symbolic life." - Manuel Castells in The Rise of the Network Society.
Notice how this relates to performance management. There is a space of flows in each performance situation: techniques, technologies, information, sounds, symbols, people, and the performance itself. Yet, like the Canto Ostinato, while there is a score (goal or blueprint), life unfolds anew; thus, each situation basically designs itself. Frederick Taylor's Scientific Management tried to freeze the performance and then totally script it. While Taylor's method focused primarily on the process itself, flow is more about helping people to develop so that they can perform. The manager becomes more like ten Holt in that he or she gives the players a score to follow and conducts them, while at the same time giving them the freedom to let the situation design itself, rather than just becoming a carbon copy.
  • Job Performance Needs
While the first analysis looked at business needs, this analysis looks at job performance needs, and the two can differ slightly. The first need, business, often has a slightly more visionary or future look to it, while the job performance need normally looks at what is needed now. Thus, business needs tend to be more developmental (future-oriented) in nature, while job performance needs normally relate to the present. This is perhaps the most important need to look at, as it links the performer with the organization. When analyzing job performance, you want to look at the entire spectrum that surrounds the job: processes, environment, actual performance versus needed performance, etc. It often helps to divide the analysis into three groups: people, data, and things. To ensure you have captured the job performance needs, the analysis must look at the Process Level and measure the Performance (Behavior) of the job holders. Some tools that should help are: Performance Gap Analysis; Information, Jobs, and Tasks; Task Analysis Templates (RTF file); and Various Approaches to Needs Analysis. After assessing the business and job performance needs, you should have a pretty good idea of what needs to be fixed and of future requirements. Analyze your findings and then start making plans for any needed performance interventions, such as training and development, job aids, coaching and mentoring programs, process improvement, etc.
Training Needs
While Phillips named this part of the analysis Training Needs, a better term might have been Performance Intervention Needs. That is, you need to think beyond training and determine what type of performance intervention will actually bridge the performance gap. As you assess the performance for any needed interventions, look at the Job/Performer requirements, that is, what the performer needs to know in order for the performance intervention to be successful. In addition, look at how you are going to evaluate any learning requirements (Level Two). It is one thing to determine the learning needs (skill, knowledge, and self-system [attitude, metacognition, etc.]), but it is quite another to ensure that the learning actually takes place.
Individual Needs
The Individual Needs Analysis is the identification of the target population. While this is closely related to the Training Needs above, in that both look at the Job/Performer Level, individual needs go a little deeper: this analysis ensures that the performance intervention actually conforms to individual requirements. For example, in the Training Needs analysis, it might be determined that the job holders need to learn a new process. In this needs analysis, the target population is examined more closely to determine the actual content, context, and delivery method of the performance intervention. In the Training Needs analysis you look at learners as a whole, while in this needs analysis you look at them as individually as possible to determine Job/Performer levels. In addition, you want to determine how well this analysis was carried out by using a Level One evaluation: Reaction. Throughout the training industry this evaluation is also known as "smiley sheets": how well did the learners like the performance intervention? This is entirely the wrong thing to measure, as it does not matter whether the learners like it or not. What matters most is, "does it actually help them to improve their performance?" Thus, the evaluation needs to go beyond smiley sheets and actually measure the learners' self-system.
Putting It All Together
The whole idea behind this concept is to look at the system, identify a need, build an evaluation (measurement instrument) that identifies the required objective, identify the "why," select the intervention, and then build content and context that will bridge the gap between the need and the objective. Once you have a program in place, performing an evaluation should be a snap, because you already have the measurement tools in place.
  • Performance Improvement Is a Tool to Bridge the Performance Gap
At this point, do not worry about how you are going to bridge the gap (creating content and context). Rather, the goal is to discover the present level of performance and the performance that is actually required. In addition, find out the "why": what is causing the gap? The Japanese have an interesting performance improvement practice: they ask "why" five times when confronted with a problem. By the time the fifth why is answered, they believe they have found the ultimate (root) cause of the problem. So when looking at a performance gap, look deep: "What exactly is causing the gap?" Once you understand the problem (gap), you need to see how it fits in with the various levels of the organization. The chart below shows the basic outline of a performance analysis and how Phillips' Four Needs and Kirkpatrick's Four Levels of Evaluation link in with Rummler and Brache's Three-Level Performance Framework. Backwards planning, or more specifically backwards analysis, should be used: look at the broader goals of the organization and then work your way down to individual needs.
Business Needs
Investigate the problem or performance initiative and see how it supports the mission statement, the leader's vision, and/or organizational goals. Fixing a problem or making a process better is just as good as an ROI, if not better. Organizations that focus strictly on ROI are normally focused on cost-cutting, and you can only cut costs so far before you start stripping out the core parts of a business. A much better approach is to improve a performance or process that supports a key organizational goal, vision, or mission. When senior executives were asked about their most important training initiatives, 77% cited "aligning learning strategies with business goals"; 75% cited "ensuring learning content meets workforce requirements"; and 72% cited "boosting productivity and agility" (Training Magazine, Oct 2004). Thus, senior leadership is not looking at training to be a profit center (that is what other business units are for); rather, they are looking at performance improvement initiatives to help "grow" the organization so that it can reach its goals and perform its mission. The goal is to make an impact or get some sort of result. So once you have identified the gap between present performance and the organization's goals and vision, create a Level Four (impact) evaluation that measures it: what criteria must be met in order to show that the gap has actually been bridged? To ensure you have accurately captured the business needs, the analysis must look at the needs of the Organizational Level and measure the Results or Impact.
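The "five whys" drill-down described above is mechanical enough to sketch in code. The question/answer chain below is invented for illustration; in practice each answer comes from investigating the previous one, not from a prepared list:

```python
def five_whys(problem: str, answers: list) -> str:
    """Walk down a chain of causes; the last answer is taken as the root cause."""
    cause = problem
    for depth, answer in enumerate(answers[:5], start=1):
        print(f"Why #{depth}: why does '{cause}' happen? -> {answer}")
        cause = answer
    return cause

root = five_whys(
    "orders ship late",
    ["packing is backlogged",
     "staff were retrained mid-shift",
     "procedures changed weekly",
     "there is no change-control process",
     "process ownership was never assigned"],
)
print(root)  # process ownership was never assigned
```

The point of the exercise is exactly what the text says: the first answer is rarely the gap's real cause, so keep digging before choosing an intervention.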
  • Analysis
"The study we do in order to figure out what to do." - Allison Rossett & Kendra Sheldon (2001)
A performance analysis is generally called for when you want to improve a part of the organization (look for needs) or to fix a problem that someone has brought forward. Both are generally handled in the same manner. There are four performance improvement needs: Business, Job Performance, Training, and Individual (Phillips, 2002). When performing an analysis, it is best to take a long-term approach to ensure that the performance improvement initiative ties in with the organization's vision, mission, and values. This connects each need with a metric to ensure that it actually does what it is supposed to do, which is best accomplished by linking the performance analysis needs with Kirkpatrick's Four Levels of Evaluation (Phillips, 2002):
• Business Needs are linked to Results or Impact (Level 4)
• Job Performance Needs are linked to Behavior (Level 3)
• Training Needs are linked to Learning (Level 2)
• Individual Needs are linked to Reaction (Level 1)
By linking the analysis and evaluation models together, a total system concept is formed. Analysis is performed to determine what is needed, so it begins with a gap analysis: the "needs" of the organization minus the present performance level equals the gap.
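The needs-to-levels linkage and the gap formula above can be expressed directly as a lookup table plus one line of arithmetic. A minimal sketch; the dictionary keys and the numeric performance scale are illustrative assumptions:

```python
# Phillips' four needs mapped to Kirkpatrick's evaluation levels,
# as described in the text.
NEED_TO_EVALUATION = {
    "business": (4, "Results / Impact"),
    "job performance": (3, "Behavior"),
    "training": (2, "Learning"),
    "individual": (1, "Reaction"),
}

def performance_gap(required_level: float, present_level: float) -> float:
    """Gap analysis: the organization's need minus present performance."""
    return required_level - present_level

level, measure = NEED_TO_EVALUATION["business"]
print(level, measure)           # 4 Results / Impact
print(performance_gap(90, 65))  # 25
```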
  • Design Phase
This phase ensures the systematic development of the training program. It is driven by the products of the analysis phase and ends in a model or blueprint of the training process for future development. This model should contain five key outputs:
• Entry behaviors
• Learning steps (performance steps)
• Learning objectives
• Performance test
• Structure and sequence (program outline)
The entry behaviors describe what a learner must know before entering the training program. Just as a college requires certain standards to be met in order to enroll, a learning process should require a base level of knowledge, skills, and attitudes (KSAs). The learning objectives state what tasks the learners will be able to perform after the training, the learning steps tell how to perform the tasks, and the performance test tells how well the tasks must be met. Finally, the learning objectives are sequenced in an orderly fashion to provide the best opportunity for learning to occur. Mr. Spock from Star Trek had a great training technique called the "Vulcan mind meld": Spock placed his fingertips on another person's head, which transferred knowledge, vivid images, and memories from that person's brain to his, or vice versa. Unfortunately, we do not have the mind meld capability... at least for now. So for the time being, a systematic method is used to help transfer the required KSAs. This method is known as ISD. Just as Spock could extract only the information he wanted, the goal in ISD is to make the transfer as effective and efficient as possible, tailored to the learners' needs. Training, at its simplest, is the transfer of KSAs; ISD is simply our equivalent of Spock's Vulcan mind meld.
"There are no better terms available to describe the difference between the approach of the natural and the social sciences than to call the former 'objective' and the latter 'subjective'... While for the natural scientist the contrast between objective facts and subjective opinions is a simple one, the distinction cannot as readily be applied to the object of the social sciences. The reason for this is that the object, the 'facts' of the social sciences are also opinions — not opinions of the student of the social phenomena, of course, but opinions of those whose actions produce the object of the social scientist." - The Counter-Revolution of Science by Friedrich August von Hayek
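The five design-phase outputs listed above form a natural record structure. A minimal sketch; the class and field names, and every example value, are illustrative assumptions rather than standard ISD vocabulary:

```python
from dataclasses import dataclass

@dataclass
class DesignBlueprint:
    """The five key outputs of the ISD design phase, as one record."""
    entry_behaviors: list      # what learners must know before starting
    learning_steps: list       # how each task is performed
    learning_objectives: list  # what learners can do after training
    performance_test: str      # how well the tasks must be met
    program_outline: list      # structure and sequence of the objectives

blueprint = DesignBlueprint(
    entry_behaviors=["basic spreadsheet skills"],
    learning_steps=["open template", "enter data", "validate totals"],
    learning_objectives=["produce a validated monthly report"],
    performance_test="report passes the audit checklist with no errors",
    program_outline=["orientation", "guided practice", "assessed exercise"],
)
print(len(blueprint.learning_steps))  # 3
```

Collecting the outputs in one object mirrors the text's point that the design phase ends in a single blueprint handed to development.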
  • Knowing to Learn This type of learning means greater relatedness between people in the organization and the result of greater relatedness is enhanced social well-being and higher organizational performance based on the shared learning. This type of performance-based learning generates a proactive approach for an emergent discovery process that shares current experience to formulate future strategy based on the accomplishments of people and organizations.  Real and active learning that matters is composed of a dynamic pattern of coordinated activities and actions generated from experience and emerges to create new or expanded patterns of individual and organizational knowledge. When learning occurs in a trust-based environment between people, the very means to build robust strategic capability is generated to take the knowledge learned and apply it – implementing both strategic direction and strategic learning at the same time in a greatly accelerated manner.THREE LEARNING RESULTS•    Personal Development•    Relationship Development•    Leadership DevelopmentThis development means uncovering people’s full potential thereby activating previously unseen possibilities that enable the organization to expand its ability to create the desired future. When this kind of applied learning is employed throughout the enterprise as a strategic process the very conditions for reflection, strategic thinking, analysis, and creating new understanding by creating expanded mental models of what is possible generate new knowledge formerly non-existent.Through shared participation, enhanced levels of motivation and performance occur, that in turn expand the larger strategic capability and capacity of the organization. Learning in this manner is holistic and involves both intra- and inter-personal skills, combining the ability to understand and know our selves as well as others.  
Deep learning takes place through the process of active feedback loops that both discover and uncover the knowledge resident within the social space of the people and the organization. This knowledge generates the actions necessary for expanding awareness and the larger context given by the environment, both internal and external. The key is expanding the learning cycle of people and the organization at the first-order level through active reflection on the value of the knowledge available within the social network. Learning in this manner is at the center of the knowledge necessary to create and sustain a performance-based collaborative culture that expands and conserves the very social systems that generate the knowledge in the first place.

At the heart of individual and organizational performance is the knowledge and experience embodied and embedded in social networks, the very fabric of who we are, both individually and collectively. Deep learning also generates the knowledge capital necessary to bind it all together through the relationships of people to one another within their social networks. The following graph provides a simpler view of the learning network flow creating the knowledge that, when applied, creates expanded awareness and accelerated performance.
  • Knowledge

"Knowledge is the perception of the agreement or disagreement of two ideas." -- John Locke (1689), An Essay Concerning Human Understanding, Book IV: Of Knowledge and Probability

Locke gave us the first hint of what knowledge is all about. Since that time, others have tried to refine it. Davenport and Prusak (1998, p. 5) define knowledge as "a fluid mix of framed experience, contextual information, values and expert insight that provides a framework for evaluating and incorporating new experiences and information." Notice that there are two parts to this definition. First, there is content: "a fluid mix of framed experience, contextual information, values and expert insight." This includes a number of things that we have within us, such as experiences, beliefs, values, feelings, motivation, and information. The second part defines the function or purpose of knowledge: "that provides a framework for evaluating and incorporating new experiences and information." Notice how this relates back to Locke's definition -- we have within us a framework (one idea) that we use for evaluating new experiences (the second idea).

"Knowledge is information that changes something or somebody -- either by becoming grounds for actions, or by making an individual (or an institution) capable of different or more effective action." -- Peter F. Drucker, The New Realities

Achterbergh & Vriens (2002) further write that the function has two main parts. First, it serves as a background for the assessment of signals, which in turn allows the performance of actions. As to the first part, they write, "To determine whether a signal is informative, an observer has to 'attach meaning to it,' e.g., to perceive and interpret it. Once perceived and interpreted, the observer may evaluate whether the signal is informative and whether action is required."
And secondly, "The role of knowledge in generating appropriate actions is that it serves as a background for articulating possible courses of action (articulation), for judging whether courses of action will yield the intended result and for using this judgment in selecting among them (selection), for deciding how actions should be implemented and for actually implementing actions (implementation)."

Two important concepts in understanding how knowledge moves are velocity and viscosity, discussed below.
  • Types of Knowledge

Explicit knowledge: Can be articulated in formal language, including grammatical statements (words and numbers), mathematical expressions, specifications, manuals, etc. Explicit knowledge can be readily transmitted to others. It can also easily be processed by a computer, transmitted electronically, or stored in databases.

Tacit knowledge: Personal knowledge embedded in individual experience, involving intangible factors such as personal beliefs, perspective, and the value system. Tacit knowledge is hard (but not impossible) to articulate in formal language. It contains subjective insights, intuitions, and hunches. Before tacit knowledge can be communicated, it must be converted into words, models, or numbers that can be understood. In addition, there are two dimensions to tacit knowledge:

Technical Dimension (procedural): This encompasses the kind of informal skills often captured in the term "know-how." For example, a craftsperson develops a wealth of expertise after years of experience, but often has difficulty articulating the technical or scientific principles of his or her craft. Highly subjective and personal insights, intuitions, hunches, and inspirations derived from bodily experience fall into this dimension.

Cognitive Dimension: This consists of beliefs, perceptions, ideals, values, emotions, and mental models so ingrained in us that we take them for granted. Though they cannot be articulated very easily, this dimension of tacit knowledge shapes the way we perceive the world around us.

Nonaka & Takeuchi (pp. 63-69) further discuss the four modes of knowledge creation or conversion that are derived from the two kinds of knowledge:

Socialization: from tacit to tacit -- Sharing experiences to create tacit knowledge, such as shared mental models and technical skills. This also includes observation, imitation, and practice.
However, "experience" is the key, which is why the mere "transfer of information" often makes little sense to the receiver.

Externalization: from tacit to explicit -- The quintessential process of articulating tacit knowledge into explicit concepts through metaphors, analogies, concepts, hypotheses, or models. Note that when we conceptualize an image, we express its essence mostly in language.

Combination: from explicit to explicit -- A process of systemizing concepts into a knowledge system. Individuals exchange and combine knowledge through media such as documents, meetings, and conversations. Information is reconfigured by such means as sorting, combining, and categorizing. Formal education and many training programs work this way.

Internalization: from explicit to tacit -- Embodying explicit knowledge into tacit knowledge; closely related to "learning by doing." Normally, the knowledge is verbalized or diagrammed into documents or oral stories, which help individuals internalize what they have experienced.

Artifacts derived from knowledge creation are facts, concepts, processes, procedures, and principles. These, in turn, are used to help create knowledge in others.
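The four conversion modes pair a source knowledge type with a target knowledge type, so they can be sketched as a simple lookup table. This is a minimal illustration of Nonaka & Takeuchi's SECI model; the function and variable names are assumptions made for the example, not part of the model itself.

```python
# Nonaka & Takeuchi's four modes of knowledge conversion, keyed by
# (source type, target type). The dict structure is illustrative.
SECI_MODES = {
    ("tacit", "tacit"): "Socialization",       # shared experience, observation
    ("tacit", "explicit"): "Externalization",  # metaphors, analogies, models
    ("explicit", "explicit"): "Combination",   # documents, meetings, sorting
    ("explicit", "tacit"): "Internalization",  # learning by doing
}

def conversion_mode(source: str, target: str) -> str:
    """Return the SECI mode for converting knowledge between the two types."""
    return SECI_MODES[(source, target)]

print(conversion_mode("tacit", "explicit"))  # Externalization
```

Reading the table row by row recovers the prose above: only the tacit-to-explicit step (externalization) creates concepts that can then be combined and, eventually, re-internalized.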
  • Velocity and Viscosity

Velocity: the speed with which knowledge moves through an organization.

Viscosity: the richness or thickness of the knowledge transferred.

Knowledge requires viscosity, which depends on rich sources and context, while an information transfer normally relies on velocity for quick delivery. This is why knowledge can be so hard to come by at times -- it takes time because knowledge, unlike information, is full of details and context; and at times it can be quite subtle, so one has to dig for it. Performance management is quite similar: deeply rooted belief systems cannot be removed with a quick memo; they require details and context to give the change substance. For a good story of this concept, consider Davenport and Prusak's account of Mobil Oil's tale of velocity and viscosity:

Mobil Oil's engineers developed some sophisticated ways of determining how much steam is required to drill under certain conditions. When they applied their techniques, they found they could reduce the amount of steam they generated themselves and bought from outside sources. Since they knew the precise amount needed, the potential savings were huge. So they embedded the technique in an intelligent system, with the main focus being the knowledge's velocity, in order to quickly get the information out to the field. This was done by sending a memo to all the drilling operations detailing the calculations and benefits. They assumed other sites would quickly adopt the innovation. Nothing happened. The effective viscosity level was zero. Simply improving a process will not be enough to win over everyone. So Mobil came up with other techniques for transferring the knowledge, such as videos and case studies, and soon raised the adoption rate to 30 percent. Adding the additional context added viscosity. It now looks as if adoption will grow to 50 percent. Will it ever reach 100 percent? Who knows?
The resistance to abandoning processes that have been successful for years is a universal phenomenon, not one limited to Mobil.

Velocity strips knowledge and information down to its bare essentials by removing portions of its context and richness; this does allow it to move faster. But the stripping effect often takes the message down a level for its intended receivers -- from knowledge to information, or from information to data. Thus the intended knowledge exchange can fail.

Now look at how 3M does it -- they hold regular meetings and fairs for exchanging knowledge. One of their most famous products, Scotch tape, was invented by Dick Drew, a sandpaper salesman. In almost any other company his idea would have been tossed out, as tape and research were not his specialty, yet the culture of 3M allows the viscosity of information to spread. And the way they ensure it spreads is through meetings and knowledge fairs -- it is not left to a chance happening on their intranet or through a memo.

Knowledge requires rich sources and context. For example, one of the best knowledge-enablers for a beginning learner is apprenticeship, as it allows a rich exchange of concepts and ideas between the learner and teacher. As one slowly moves away from this form of learning, the knowledge exchange starts to decrease while the rate of information flow increases. That is, in a class the information goes out to a number of learners at once, so the speed of information exchange increases; the flip side is that one-on-one interactions decrease, so the rate of knowledge transfer decreases. This is the first dimension of the data/information/knowledge continuum -- viscosity/velocity. The second dimension of the data/information/knowledge continuum is the number of paths or streams of information flow.
A single email (monophonic) sent to a direct report is one stream, whereas a list server, such as trdev, normally has multiple posts (context) in each thread. The more posts, the more viewpoints one can gather or harvest. Once a learner moves past the beginner stage, these multiple viewpoints become invaluable, as they add context to the knowledge base one has gained. For example, a PhD student studies one small section of his or her field, but does so from multiple contexts.
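The trade-off described in this section can be sketched in a few lines: stripping context from a message makes it travel faster (velocity) but thinner (viscosity). This is purely illustrative; the function names and the inverse-size relation are assumptions made for the example, not a formal model from the text.

```python
def velocity(message: list) -> float:
    """Transfer speed: here simply modeled as faster when the message is leaner."""
    return 1.0 / len(message)

def viscosity(message: list) -> int:
    """Richness: here simply modeled as the amount of context the message keeps."""
    return len(message)

memo = ["calculation"]                         # the bare technique, as in Mobil's memo
rich = ["calculation", "video", "case study"]  # the technique plus added context

assert velocity(memo) > velocity(rich)    # the memo travels faster...
assert viscosity(rich) > viscosity(memo)  # ...but the richer package carries more
```

The point of the sketch is that neither quantity can be maximized without sacrificing the other, which is why Mobil's high-velocity memo failed and its slower, richer videos and case studies succeeded.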
  • Performance, Learning, Leadership, & Knowledge

    1. Performance, Learning, Leadership, & Knowledge
       Roy Burchfield
       1/1/2007
    2. Drink from the new ideal
       When you lose small mind,
       You free your life.
       Life is a waterfall,
       We drink from the river,
       Then we turn around and put up our walls.
       When you free your eyes.
       Internal prize. -- Aerials by System of a Down
    3. Performance Management
    4. Learning Versus Rejection of New Ideas
       Our preferences are quite unstable, especially when we are first introduced to something new. This is because we need time to "learn" about the new object or idea.
       We also tend to make up stories by picking up subtle clues when we are introduced to something new, because we do not have the "language" to talk about something new, radical, or daring.
    5. Reframe for the Holistic
    6. The Four Levels of Training Evaluation
    7. Level One - Reaction
       Reaction informs you of how relevant the training is to the work the learners perform.
       Learning informs you of the degree to which the training package transferred KSAs from the training material to the learners.
       Performance informs you of the degree to which the learning can actually be applied to the learner's job.
       Impact informs you of the "return" the organization receives from the training. Decision-makers prefer this harder "result," although not necessarily in dollars and cents. For example, a recent study of financial and information technology executives found that they consider both hard and soft "returns" when it comes to customer-centric technologies, but give more weight to non-financial metrics (soft), such as customer satisfaction and loyalty (Hayes, 2003).
    8. Level Two - Learning
       What knowledge was acquired?
       What skills were developed or enhanced?
       What attitudes were changed?
    9. Level Three - Performance (behavior)
    10. Level Four - Results
    11. Level Four - Results
        Financial: A measurement, such as an ROI, that shows a monetary return, or the impact itself, such as how the output is affected. Financial can be either soft or hard results.
        Customer: Improving an area in which the organization differentiates itself from competitors to attract, retain, and deepen relationships with its targeted customers.
        Internal: Achieving excellence by improving such processes as supply-chain management, the production process, or support processes.
        Innovation and Learning: Ensuring the learning package supports a climate for organizational change, innovation, and the growth of individuals.
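The four training evaluation levels covered in these slides form an ordered scale, from Reaction up to Results. The sketch below models that ordering; the class and member names are illustrative assumptions, not taken from the slides.

```python
# Kirkpatrick's four levels of training evaluation as an ordered enum.
from enum import IntEnum

class EvaluationLevel(IntEnum):
    REACTION = 1     # how relevant learners find the training
    LEARNING = 2     # how well KSAs transferred from the material
    PERFORMANCE = 3  # whether the learning is applied on the job
    RESULTS = 4      # the "return" the organization receives

# Evaluate lower levels first; each level builds on the one below it.
for level in sorted(EvaluationLevel):
    print(f"Level {level.value}: {level.name.title()}")
```

Modeling the levels as an `IntEnum` makes the ordering explicit: a comparison such as `EvaluationLevel.RESULTS > EvaluationLevel.REACTION` captures the idea that impact measures sit above reaction measures.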
    12. Flow Rather than Script
    13. Job Performance Needs
        Performance Gaps
        Analysis Information
        Jobs and Tasks
        Tasks
        Analysis Templates (RTF file)
        Various Approaches to Needs Analysis
    14. Performance Improvement is a tool to bridge the performance gap
    15. Analysis
    16. Design Phase
    17. Know
    18. Knowledge
        First, there is content: "a fluid mix of framed experience, contextual information, values and expert insight." This includes a number of things that we have within us, such as experiences, beliefs, values, how we feel, motivation, and information.
        The second part defines the function or purpose of knowledge: "that provides a framework for evaluating and incorporating new experiences and information." Notice how this relates back to Locke's definition -- we have within us a framework (one idea) that we use for evaluating new experiences (the second idea).
    19. Types of Knowledge
    20. Velocity and Viscosity
