Solving problems and building feedback loops…

People access, use and rely on KM systems to solve problems. We can break problems into three distinct and fairly easily considered buckets.

The three are simply new, variation and known problems. While there may be exotic problems that don't yet fit neatly into the new bucket, those three work very well for classifying problems. Once you determine which bucket you are facing, you then reach into your DLM© system. Notice that variations of existing problems and known problems start at the bottom of the system with known solutions. The new problem starts at the top with the SME. The SME may tell you your new problem is actually a known issue being worked on by another group, so while the problem is new to you, it is known by the organization or group.
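To make the routing concrete, here is a minimal sketch in Python of the three buckets and where each one enters the knowledge system. The enum values and the `route_problem` function are hypothetical names for illustration only; the internals of the DLM© system itself are not specified here.

```python
from enum import Enum, auto

class ProblemType(Enum):
    NEW = auto()        # never seen by us; may still be known to the organization
    VARIATION = auto()  # a twist on a problem we have solved before
    KNOWN = auto()      # a problem with a documented solution

def route_problem(problem_type: ProblemType) -> str:
    """Route a classified problem into the knowledge system.

    Known problems and variations start at the bottom of the system,
    with the library of known solutions. New problems start at the top,
    with the SME, who may reveal the problem is already known elsewhere
    in the organization.
    """
    if problem_type is ProblemType.NEW:
        return "start with the SME (top of the system)"
    return "start with known solutions (bottom of the system)"

# Example usage
print(route_problem(ProblemType.VARIATION))  # start with known solutions (bottom of the system)
```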

This is a very simple diagram showing the complexity and reality of problems. There are also scales by which we measure the "difficulty" of the problem, and of course our good friend time raises its hand as well, saying some problems must be solved in X time period. Time creates a funnel, forcing a reduction in the time you have to consider options. That's why the new problem starts with the SME at the top of the knowledge system: you aren't sorting through a large number of options and actions.

Going back to our OODA Loop base, we know that if time is driving our orientation, our actions have less available flexibility (wiggle room, my personal favorite). That said, the wrong action taken because of time pressure can also cause massive impact. So while time is the issue, we still have to evaluate the overall impact of the problem first.

Time-critical problems aren't always new problems, either. Sometimes they are variations, and sometimes they are known problems. One of the feedback loops we will add to the system is value over time. Value over time is simply a way for us to denote how quickly a specific solution worked. A great example of this is a machine with an oil leak. You notice the smoke and smell the odor of burning oil, which causes you to fix the engine. You fix the engine, ending the leak, but you don't wipe up the already leaked oil, so the original symptom (the smell) remains. The fix worked, but the cleanup of the fix wasn't as effective as it could have been (when done with the leak, wipe up the excess oil)!
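As a rough illustration of the value-over-time feedback loop, the sketch below records how quickly a solution took effect and whether any residual symptoms remained (the leaked oil that never got wiped up). The field names are my assumptions, not part of the DLM© specification.

```python
from dataclasses import dataclass, field

@dataclass
class ValueOverTime:
    """Feedback record: how quickly did a specific solution work?"""
    solution: str
    minutes_to_apply: float           # time from starting the fix to the fix being in place
    minutes_to_full_effect: float     # time until the original symptom was actually gone
    residual_symptoms: list[str] = field(default_factory=list)

    def fully_effective(self) -> bool:
        # A fix that leaves residual symptoms (the un-wiped oil) is not fully effective.
        return not self.residual_symptoms

# The oil leak example: the leak is fixed quickly, but the smell lingers.
oil_leak_fix = ValueOverTime(
    solution="fix the engine oil leak",
    minutes_to_apply=60,
    minutes_to_full_effect=600,   # until the already-spilled oil finally stopped smoking
    residual_symptoms=["burning oil smell from oil already leaked"],
)
print(oil_leak_fix.fully_effective())  # False
```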

Difficulty is the other addition that may cause additional time delays and other considerations when approaching the solution. Take that engine above whose leak we just fixed. This time the engine itself is not in an open space but is crammed into a corner or boxed into a space. It is much harder to get to; we know the fix and the solution, but getting to the application point is more difficult. Our known fix says this takes one hour, but because of the position of our specific engine, we should make that two hours. Taking the difficulty and potential time constraints into account, we now add the new option of removing the engine completely and repairing it off-line rather than in the actual system.

Time and difficulty change problems. Using the DLM© system and John Boyd's OODA Loops, it's critical that we build feedback loops for difficulty and for time, and of course one for the two combined as well.
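A minimal sketch of those feedback loops, assuming the simplest possible model: a known fix has a baseline time, difficulty scales it (the boxed-in engine turning one hour into two), and the combined record feeds back into the estimate for the next occurrence. The multiplier values and class names are illustrative assumptions, not anything the book defines.

```python
from dataclasses import dataclass

# Illustrative difficulty multipliers; a real system would learn these from feedback.
DIFFICULTY_MULTIPLIER = {"open access": 1.0, "confined space": 2.0, "requires removal": 3.0}

@dataclass
class FixFeedback:
    """One combined time-and-difficulty feedback record for a known fix."""
    fix_name: str
    estimated_hours: float
    actual_hours: float
    difficulty: str

def estimate_hours(baseline_hours: float, difficulty: str) -> float:
    """Known-fix baseline adjusted by difficulty (the 1 hour -> 2 hours example)."""
    return baseline_hours * DIFFICULTY_MULTIPLIER.get(difficulty, 1.0)

def revise_baseline(baseline_hours: float, history: list[FixFeedback]) -> float:
    """Feed actual results back into the baseline: average what really happened."""
    if not history:
        return baseline_hours
    return sum(f.actual_hours / DIFFICULTY_MULTIPLIER[f.difficulty] for f in history) / len(history)

print(estimate_hours(1.0, "confined space"))  # 2.0
```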


I am not a graphic artist!

Process meet Data……

Getting past the limits of a known good source is critical. Understanding the importance of time in this process while also creating a repository of "we've tried this before" is the goal. Building an inclusive knowledge creation structure is important, including the various processes of brainstorming and parking lots. The components of that structure are listed below, with a rough sketch after the list.

· Creative idea creation support system (Brainstorming)

· Non-linear creative solution discussion (Parking lot)

· Timely information provision system

· Structured source and IP feedback loops

· Structured and managed intellectual property system

· Informal IP capture system

· Interactive IP system (Social)

· Information lifecycle system
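As a sketch only, the components above could be captured as a simple structure so that each one can later carry its own feedback loops. The names below mirror the list; nothing else about the implementation is implied.

```python
from enum import Enum

class KnowledgeComponent(Enum):
    """Components of the inclusive knowledge creation structure (mirrors the list above)."""
    BRAINSTORMING = "creative idea creation support system"
    PARKING_LOT = "non-linear creative solution discussion"
    TIMELY_PROVISION = "timely information provision system"
    SOURCE_IP_FEEDBACK = "structured source and IP feedback loops"
    MANAGED_IP = "structured and managed intellectual property system"
    INFORMAL_IP_CAPTURE = "informal IP capture system"
    SOCIAL_IP = "interactive IP system (social)"
    LIFECYCLE = "information lifecycle system"

for component in KnowledgeComponent:
    print(f"{component.name}: {component.value}")
```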

The new addition to the process is the inclusion of formalized feedback loops. Now we include information such as relevance of source, timeliness of source, and a new concept within our IP feedback: relevance to our specific problem set.

Our initial concept is the reality of timely versus known good. A known good source can be a timely source; in that case the feedback loop for a known good source would capture timeliness (the information arrived but it was too late, a little late, just on time, or well ahead of time). The other checkbox/feedback loop would focus more on the solution: the solution worked as is, fixed our problem or enabled what we were building; we modified the IP slightly and are resubmitting it with changes to reflect our solution; or, finally, we had to modify the data significantly in order to solve our problem. Not all environments work exactly as projected or designed. The feedback for known good sources would focus on improving the variations within each solution.
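Here is a small sketch of what that known-good feedback record could look like, assuming one timeliness rating and one solution-modification rating per use of the source. The rating scales come from the paragraph above; everything else is a hypothetical shape.

```python
from enum import Enum
from dataclasses import dataclass

class Timeliness(Enum):
    TOO_LATE = 0
    A_LITTLE_LATE = 1
    JUST_ON_TIME = 2
    WELL_AHEAD = 3

class SolutionFit(Enum):
    WORKED_AS_IS = "worked as is / enabled what we were building"
    MODIFIED_SLIGHTLY = "modified slightly; resubmitting IP with changes"
    MODIFIED_SIGNIFICANTLY = "modified significantly to solve our problem"

@dataclass
class KnownGoodFeedback:
    """One feedback loop entry for a known good source."""
    source: str
    timeliness: Timeliness
    fit: SolutionFit
    notes: str = ""

entry = KnownGoodFeedback(
    source="internal knowledge base article",
    timeliness=Timeliness.A_LITTLE_LATE,
    fit=SolutionFit.MODIFIED_SLIGHTLY,
    notes="resubmitted the article with our environment-specific changes",
)
print(entry.fit.value)
```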

Timely sources would be those leveraged if the known good source wasn't available, or as a backup system for the known good data. Simply put, it's launching a search with the intent of finding the answer. You may search sources you know to be good (solving a Windows problem, start your search at TechNet; solving an iPad problem, start at Apple; and so on). While these are known good sources, the keywords and search terms are not always what you expect. Natural language search or sequential search doesn't always result in a known response. Based on the nature of the data collection, you may take these sources into the brainstorming session (initial feedback loop). If time isn't the driver, you may utilize the information in a test or trial-and-error system. Finally, the data may move from the unmanaged/unknown source into your managed IP system (feedback loop). This can be done in a variety of ways, including knowledge articles in your service management system or as included documentation for the working solution.
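That flow can be sketched as a simple router: data from an unknown or timely-only source goes to brainstorming when time is the driver, to trial-and-error when it is not, and into the managed IP system once it has proven itself. The function and destination names are assumptions for illustration.

```python
def route_unmanaged_data(time_is_driver: bool, proven_to_work: bool) -> str:
    """Decide where data from an unmanaged/unknown source goes next.

    Mirrors the flow described above: brainstorming as the initial feedback
    loop under time pressure, trial-and-error when time allows, and promotion
    into the managed IP system (knowledge article or solution documentation)
    once the data has actually solved the problem.
    """
    if proven_to_work:
        return "promote into managed IP system (knowledge article / solution docs)"
    if time_is_driver:
        return "take into brainstorming session (initial feedback loop)"
    return "run through test / trial-and-error system"

print(route_unmanaged_data(time_is_driver=False, proven_to_work=False))
```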


Information process widget

Expanding the concept of a known good source


A fatal flaw with source validation is the creation of expert systems. There is a need for an expert system such as DLM©, covered later in this book. But like all systems, there have to be checks and balances. One of the checks is the reality of valid sources. A valid source is known good for a situation and has to be evaluated; a known not-good source continues to be relevant for other issues, just not for any one particular problem. The separation of the data from the source is critical. For example, always-right data that is hard to get to can actually be a known bad source. If I have to expend significant energy to get the data, it isn't a good source.

Implementation of the system then takes into account the reality of information hoarding. The design will account for variances within sources, both for applicability to the problem and for the broader responsiveness of the system.

Our first step in building a system like this is making sure we have easy access to the information. This is accomplished in a number of ways but has to be applied to all information the system can contain.

The second step is ensuring that the data can be provided rapidly. Data that solves a problem but arrives 10 days late or 10 minutes late isn't relevant.

Our last step is the known or unknown source. It's important that we not create an exclusionary system in this last step. Known good sources are simply pre-validated sources in relation to the specific problem we have. A great example of this would be a peer reviewed journal. It is a known good source, but if we are authoring the article for the journal we don't always have the advantage of previous Intellectual Capital to base our article on.

Our system has to include evaluations not only of whether we can get to the data quickly but also of whether we can consume it quickly. Is that data from a known good source, or do we need to verify it before we implement? Finally, the last piece in our overall process has to be the simple question: does the information acquired solve the problem? It is our base because, frankly, if information solves the problem, the rest of the points decrease in importance, except in the case where time is the driver. This brings us to the IP acquisition model that rides under the OODA Loop. The goal, again, is to get to good decisions; using the model, we need to evaluate the aspect of decision over time, a spectrum of decisions that starts with poor and slow and moves all the way to good and fast.
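A minimal scoring sketch of those evaluation points follows: access speed, consumption speed, source status and, at the base, whether the information solved the problem, with time able to override everything else. The triage logic is an arbitrary illustration, not a scale the DLM© model defines.

```python
from dataclasses import dataclass

@dataclass
class AcquisitionCheck:
    quick_to_reach: bool       # can we get to the data quickly?
    quick_to_consume: bool     # can we consume it quickly?
    known_good_source: bool    # pre-validated, or does it still need verification?
    solves_problem: bool       # the base question
    time_is_driver: bool       # when true, speed cannot be traded away

def evaluate(check: AcquisitionCheck) -> str:
    """Very rough triage of one piece of acquired information."""
    if check.solves_problem and not check.time_is_driver:
        # If it solves the problem and time is not the driver, the other points matter less.
        return "use it; log verification later if the source was not known good"
    if check.time_is_driver and not (check.quick_to_reach and check.quick_to_consume):
        return "too slow for this problem; fall back to the SME"
    if check.solves_problem:
        return "use it"
    return "keep searching"

print(evaluate(AcquisitionCheck(True, True, False, True, time_is_driver=False)))
```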


following a knowledge loop

Determining Source Validity in an inter-generational knowledge transfer system

John Boyd created the OODA loop concept. In building the concept he created a straight line of observation and orientation leading to decisions and actions. A series of feedback loops keeps the system moving. The goal? Simply to create a model for good decisions. The speed of the decisions varies and frankly is controlled by the orientation. If (and John Boyd was a US Air Force officer at the time he created the OODA loops) someone is shooting at you, the expedience of the orientation changes to reflect that. That is, make the decisions faster!

Taking that initial OODA Loop process, for our new system we will add a source validation process focused on improving the quality of inbound data. Source validation is a process within orientation: how we see, view and capture the observation. Based on that concept, we will create this new loop from orientation to observation, changing observation (collection) to reflect the source validity (orientation), and then when moving forward (Decide) we will create maximum return for our activity (Action).

Source validity takes a number of forms. A known good source has to be consistently reliable, which means effectively that the feedback loop used within the system to capture sources has to be applied within the source. The outcome of the internalized source validation system has to mesh with our validation system. As a source validates the validity of its information, we need to capture that "validation" and carry it into the system.
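A toy rendering of that loop, with source validation sitting inside orientation: observations carry a source, orientation weights them by that source's recorded validity, and the decision picks the observation with the best weighted confidence. Every name here is an assumption; the only point is where the validation step sits in the loop.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    content: str
    source: str
    confidence: float  # 0.0 - 1.0, as reported by the collector

# Hypothetical record of how valid each source has proven to be (our feedback loop).
SOURCE_VALIDITY = {"internal KB": 0.9, "random forum post": 0.4}

def orient(observations: list[Observation]) -> list[tuple[Observation, float]]:
    """Orientation: weight each observation by the validity of its source."""
    return [(o, o.confidence * SOURCE_VALIDITY.get(o.source, 0.5)) for o in observations]

def decide(weighted: list[tuple[Observation, float]]) -> Observation:
    """Decide: act on the observation with the highest weighted confidence."""
    return max(weighted, key=lambda pair: pair[1])[0]

obs = [
    Observation("apply the vendor patch", "internal KB", 0.8),
    Observation("disable the service entirely", "random forum post", 0.9),
]
print(decide(orient(obs)).content)  # apply the vendor patch
```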

Expert systems exist in the wild. Every company has one. They may effectively be little more than sneaker-net systems, but they exist. The first problem to solve is how we capture the knowledge in the heads of both younger and older workers. It is a changing of the traditional value of workers while recognizing the new value of new ideas.

Our system is based on an orientation of realized value and potential value.

Realized value in our system is what we know: the existing experts or existing expert systems. Potential value represents the new idea, the new concept or the radical modification of existing information. This gives us our feedback loop to provide us with the optimal outcome on a consistent basis. The goal of Boyd's OODA Loops is consistent quality decisions and therefore consistent quality actions. Capturing the two different flavors of information requires an adaptive knowledge system: one that is aware of tradition (this is the way things have always been done) with an allowance for change over time or the introduction of radical new ideas.
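One way to hold those two flavors side by side is sketched below: every contribution is tagged as realized value (what the existing experts already know) or potential value (the new idea or radical modification), and the feedback loop can later promote potential value into realized value once it proves out. This shape is my illustration, not a structure the book prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    idea: str
    contributor: str
    realized: bool = False   # False means potential value: new, not yet proven

@dataclass
class AdaptiveKnowledgeStore:
    """Holds both tradition (realized value) and new ideas (potential value)."""
    items: list[Contribution] = field(default_factory=list)

    def add(self, contribution: Contribution) -> None:
        self.items.append(contribution)

    def promote(self, idea: str) -> None:
        """Feedback loop: a new idea that worked becomes realized value."""
        for item in self.items:
            if item.idea == idea:
                item.realized = True

store = AdaptiveKnowledgeStore()
store.add(Contribution("existing restart runbook", "senior engineer", realized=True))
store.add(Contribution("script the restart end to end", "new hire"))
store.promote("script the restart end to end")
print([c.idea for c in store.items if c.realized])
```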

· Do the sources we use have validation systems?

· Do we have internal validation systems for information?

· Do our SMEs manage the validation system?

· If I capture IP from our system and use it in a modified form how do I resubmit the new IP?

· If IP doesn’t work for me, how do I notify the system? (These last two operations are sketched below.)
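The last two questions imply two concrete operations the system has to expose. A rough sketch of both follows; the method names and payloads are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class IPRecord:
    ip_id: str
    body: str
    versions: list[str] = field(default_factory=list)        # resubmitted variants
    failure_reports: list[str] = field(default_factory=list)

class IPFeedbackSystem:
    """Sketch of the two feedback operations implied by the questions above."""

    def __init__(self) -> None:
        self.records: dict[str, IPRecord] = {}

    def resubmit_modified(self, ip_id: str, modified_body: str) -> None:
        """I used the IP in a modified form; submit the new version back."""
        record = self.records.setdefault(ip_id, IPRecord(ip_id, modified_body))
        record.versions.append(modified_body)

    def report_failure(self, ip_id: str, reason: str) -> None:
        """The IP didn't work for me; notify the system so an SME can review."""
        record = self.records.setdefault(ip_id, IPRecord(ip_id, ""))
        record.failure_reports.append(reason)

system = IPFeedbackSystem()
system.resubmit_modified("KB-42", "original steps plus a reboot at the end")
system.report_failure("KB-42", "step 3 assumes a version we no longer run")
print(len(system.records["KB-42"].failure_reports))  # 1
```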

More to come


IGKT dreamer

Getting the horse to water, helping him or her drink, and making sure how we got the horse to water is documented…

Not everyone wakes up to the soothing sounds of an adaptive alarm clock, one that increases in volume or increases the light in the room as the alarm goes off. Not everyone uses a clock radio; some people still prefer traditional alarms. Finally, there are people who just get up. Perhaps they have a blog to write, or they want to reflect on the day that is coming.

The same is true of technology. Frankly, not everyone uses technology. Technology exists around us in two distinct forms: the passive form of broadcast, such as television, and the more active form, such as gathering specific information (email, web) from a device. The passive form of technology is this broadcast reality. The active form is the one we are seeking to build as a component of an IGKT. The goal is not to force people who only consume passive technology to modify their behavior and suddenly become active technology users. The goal is to create a system that allows for consistent managed interactions with both types. Like the old adage: I like both types of music, Country and Western.

Note: The use of active and passive is to denote the type of interaction with technology; it is not a statement or a value judgment on the interactions.

The value of information is also not the answer to this problem. All information has value; we may not understand its value today, but it has value. The more we capture, the better things will be as we finally shed the last vestiges of the limits of the information age. Inclusive knowledge systems don't make value judgments on sources, only on applicability to the problem we are trying to solve.

The next graphic lays out the concept of information acquisition, where we have the concept at the top of "get to done" (I got what I needed). The parallel is the children's game Go Fish: when you draw and get what you wanted, your turn continues. However, information doesn't work like that. Sometimes you get close to done but have to modify the information. Sometimes you get some of what you need. Sometimes you realize the source in question isn't reliable, so you have to weigh the risk/reward of using the source. And finally, you have the "no help on the problem" bucket.

In an inclusive IP system this scale also applies to people, but we don't use the unreliable designation for those sources; we move that to unverified rather than unreliable. The concept is the same for either: information provided by that source needs to be validated regardless.
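Here is a sketch of that scale as it might be encoded, using the same rungs the graphic describes and the "unverified" label for people. The numeric ordering is my assumption.

```python
from enum import IntEnum

class AcquisitionOutcome(IntEnum):
    """How close did this source get us to done? Higher is better."""
    NO_HELP = 0               # no help on this problem
    UNVERIFIED = 1            # source (or person) not yet validated; weigh risk/reward
    SOME_OF_WHAT_I_NEED = 2
    CLOSE_BUT_MODIFIED = 3    # got close to done, had to modify the information
    GOT_TO_DONE = 4           # I got what I needed

def needs_validation(outcome: AcquisitionOutcome) -> bool:
    # Anything from an unverified source needs validating before it enters the system.
    return outcome == AcquisitionOutcome.UNVERIFIED

print(AcquisitionOutcome.GOT_TO_DONE > AcquisitionOutcome.SOME_OF_WHAT_I_NEED)  # True
```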

We need to get the information into the system first, before we start worrying about the validity of sources. The scale is more for driving towards getting to done. Ultimately, an IGKT is a process improvement system. We are taking information that previously wasn't even in the system and adding it back into the system to improve the orientation. The OODA Loop system designed by John Boyd drives constant feedback loops both to modify orientation and to improve decisions. Getting the right information into the system is critical.

So what do we do about people who don't use technology today? The reality of this system going forward is that, no matter what, it will require technology to implement the system that provides the answers. It has to be easy to use, simple, and easy to input data into.


OODA Loop dreamer

The validity of my source…

The value of sources

#Ilackasourceism

It was during a meeting near Seattle, Washington that a friend posted my hashtag above on Wikipedia. It lacked a valid source, so Wikipedia, as expected, removed it. They, the leaders of Wikipedia, removed a phrase for lack of a source: the ultimate compliment for #Ilackasourceism.

Now, the phrase was meant more as a way of pointing out that there are many types of information and many sources for that information.

Social Sources:

· My source: What are the sources you use most? A collection of people, and sources other than people such as Google, Bing or the Encyclopedia?

· Your source: I ask you for help, you reach out to your sources. They are hopefully different than the ones I’ve already used.

· Shared sources: sources we have in common – so they may get pinged twice.

Professional Sources

· Peer reviewed journal or web site

· Internal Knowledge Management System

· External to your organization paid validated information source (Information broker)

· Validated source such as Microsoft, Apple, Google or any published organizational knowledge base

The reality of sources is this: we have both knowledge sources I am aware of and trusted knowledge sources. There are people who, when I ask them questions, I trust their answer more than even professional sources. They have knowledge of my specific situation and apply that knowledge to the question, so it is more likely to match my problem. That said, 90% of the time I normally start with a professional source for information.

Source, when you consider the three actions of ingest, analyze and consume, actually changes the order. Given a trusted source of information, you may skip ingest and analyze, going straight to consume. Given a new or unknown source, you may ingest and analyze the data multiple times before consuming. Taking this back to John Boyd's OODA Loops, the validity of the source impacts the orientation of the observation. Trusted sources move us quickly to good decisions and actions. Untrusted sources require either a leap of faith or a decrease in the speed of the overall decision system.
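A sketch of how source trust changes the ingest, analyze, consume order, assuming a single trust flag and a repeat count for re-analysis. The step names come from the paragraph above; the rest is illustrative.

```python
def processing_steps(source_trusted: bool, analysis_passes: int = 2) -> list[str]:
    """Return the processing order for information from a given source.

    Trusted sources can skip straight to consume; new or untrusted sources
    may be ingested and analyzed several times before we consume them,
    which slows the overall decision loop.
    """
    if source_trusted:
        return ["consume"]
    steps: list[str] = []
    for _ in range(analysis_passes):
        steps.extend(["ingest", "analyze"])
    steps.append("consume")
    return steps

print(processing_steps(source_trusted=True))    # ['consume']
print(processing_steps(source_trusted=False))   # ['ingest', 'analyze', 'ingest', 'analyze', 'consume']
```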


Edison Scale

Finding, using and, well, reusing a known good source. And the battle to get that into a digital information system. After all, we are nearly in the information age, right…

In the world of information, a known good source is an interesting situation, interesting for two considerably different reasons. There are many people who are personally known good sources. In other words, you know that if you go to that person you will get the right information, every time, or worst case you will get directions as to how to gather the right information. The second is an electronic source of information you can search.

An inclusive system for inter-generational knowledge transfer starts with that electronic system. Why? You cannot guarantee that the "expert" will respond in the same way to every person, and frankly, worst case, they are a choke point. A system with a choke point is not as efficient as a system built without choke points.

Taking into account the social and professional sources you have at your disposal, you build a personal knowledge network. That network contributes to your ability to quickly answer questions or solve problems. The larger and more effective your network, the better you are able to take the observations gained, modify them by orientation, and build the right decision model.

Of course, reality sets in. Systems that extend often weaken. The larger the network, the longer the response and the further from known good sources you get. The sensitivity of time when solving problems is both a reality and a risk.

· How fast do I need the information?

· What sources have given known good information on this topic in the past?

· What factors limit the response rate of my network?

The first and the last seem pretty similar, but they are extremely different. The middle of this system is the problem area. If the middle is a human being, that information is always at risk. Does the person you are reaching out to continue to study the problem once they have solved it once? As a former teacher, I can tell you there are many types of test takers. The first type likes to make patterns when taking a test; they don't guess, they randomly pick answers. Another type answers once, turns it in and walks away. The Ah-Ha test taker reads the questions; for the ones they know, the light goes on and they answer those quickly. They then come back and work on the ones they didn't have an Ah-Ha answer for. They are the hybrid between the set-and-forget and the ponder-and-wonder test takers. Our last type is the test retaker. They answer everything, go back and check. Some check over and over and over. Known good sources aren't the first type, but they can be the second type.

If your known good system is an inclusive knowledge capital system designed around the concepts of screen, time and source, you have a leg up. Now there is a person (SME) whose job it is to answer those questions and then look for every variation of that question, to find every other possible answer, taking the concept of a known good source to a new level.

Social, in that there is an interactive action that occurs. Professional, in that the SME understands the concept and the topic. Inclusive, because all answers to the question are evaluated. Inclusive, because you get the experts that have done this 100 times before. Fresh and exciting, because you get the brand new people looking at the problem from a different angle.

From left, the inclusive system; to the right, the tacit knowledge network. The left system always updates and evaluates information, solutions and ways to look at things. The middle section relies on the expertise of those who have been there or those who've studied the problem. Experts aren't always people who have looked at something for 10,000 hours. It's important to evaluate the validity of a source. If the source is stuck (the way things are), you end up with the top solution. You don't ever get the new solution to the problem. That creates the known good source error.