Solving problems and building feedback loops…

People access, use and rely on KM systems to solve problems. We can break problems into three distinct, easily understood buckets.

The three are simply new problems, variations, and known problems. There may be exotic problem types that don't yet fit the new bucket cleanly, but these three classify problems very effectively. Once you determine which bucket you are facing, you reach into your DLM© system. Notice that variations of existing problems and known problems start at the bottom of the system, with known solutions. A new problem starts at the top, with the SME. The SME may tell you that your new problem is actually a known issue being worked on by another group; so while the problem is new to you, it is known by the organization or group.

This is a very simple diagram showing the complexity and reality of problems. There are also scales by which we measure the “difficulty” of the problem, and of course our good friend time raises its hand as well, saying some problems must be solved within X time period. Time creates a funnel, reducing the time you have to consider options. That's why the new problem starts with the SME at the top of the knowledge system: you aren't sorting through a large number of options and actions.

Going back to our OODA Loop base, we know that if time drives our orientation, our actions have less available flexibility (wiggle room, my personal favorite). That said, the wrong action taken because of time pressure can also cause massive impact. So while time is the issue, we still have to evaluate the overall impact of the problem first.

Time-critical problems aren't always new problems, either. Sometimes they are variations, or actually known problems. One of the feedback loops we will add to the system is value over time. Value over time is simply a way to denote how quickly a specific solution worked. A great example is a machine with an oil leak. You notice the smoke and smell the odor of burning oil, which causes you to fix the engine. You fix the engine, ending the leak, but don't wipe up the already-leaked oil, so the original problem (the smell) remains. The fix worked, but the cleanup wasn't as effective as it could have been (when done with the leak, wipe up the excess oil)!

Difficulty is the other addition that may cause time delays and other considerations when approaching the solution. Take the engine whose leak we just solved above. This time the engine is not in an open space but is crammed into a corner or boxed into a space. It is much harder to get to; we know the fix and the solution, but applying them is more difficult. Our known fix says this takes one hour, but because of the position of our specific engine we should make that two hours. Taking the difficulty and potential time constraints together, we now add the new option of removing the engine completely and repairing it off-line rather than in the actual system.

Time and difficulty change problems. Using the DLM© system and John Boyd's OODA Loops, it's critical that we build feedback loops for difficulty and for time, and of course one for the two combined.
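
As a rough illustration, here is a minimal sketch of a combined time-and-difficulty feedback record (Python; every name is hypothetical rather than part of DLM© itself):

from dataclasses import dataclass

@dataclass
class SolutionFeedback:
    """One feedback record for applying a known fix; all names are illustrative."""
    solution_id: str
    estimated_hours: float      # what the known fix says it should take
    actual_hours: float         # what it actually took in this environment
    difficulty_notes: str = ""  # e.g. "engine boxed into a corner"

    @property
    def difficulty_factor(self) -> float:
        # Ratio of actual to estimated time; > 1.0 means harder than the book says.
        return self.actual_hours / self.estimated_hours

def combined_adjustment(history: list) -> float:
    """Average difficulty factor across past applications of a fix, used to
    adjust the time estimate the next time the same problem appears."""
    if not history:
        return 1.0
    return sum(f.difficulty_factor for f in history) / len(history)

# The boxed-in engine from the example: the book says 1 hour, it took 2.
history = [SolutionFeedback("engine-oil-leak", estimated_hours=1.0, actual_hours=2.0,
                            difficulty_notes="engine boxed into a corner")]
print(f"Adjusted estimate: {1.0 * combined_adjustment(history):.1f} hours")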


I am not a graphic artist!

Process, meet Data…

Getting past the limits of known good sources is critical. The goal is to understand the importance of time in this process while also creating a repository of “we've tried this before.” Building an inclusive knowledge creation structure is important, including the various processes of brainstorming and parking lots:

· Creative idea creation support system (Brainstorming)

· Non-linear creative solution discussion (Parking lot)

· Timely information provision system

· Structured source and IP feedback loops

· Structured and managed intellectual property system

· Informal IP capture system

· Interactive IP system (Social)

· Information lifecycle system

The new addition to the process is the inclusion of formalized feedback loops. Now we include information such as relevance of source, timeliness of source, and a new concept within our IP feedback: relevance to our specific problem set.

Our initial concept is the reality of timely versus known good. A known good source can also be a timely source; in that case the feedback loop for a known good source would capture timeliness (the information arrived too late, a little late, just on time, or well ahead of time). The other checkbox/feedback loop would focus on the solution itself: the solution worked as is and fixed our problem or enabled what we were building; we modified the IP slightly and are resubmitting it with changes to reflect our solution; or we had to modify the data significantly in order to solve our problem. Not all environments work exactly as projected or designed. The feedback for known good sources would focus on improving the variations within each solution.
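
As a rough data sketch of those two feedback loops (Python; the enum values simply mirror the categories above, and all identifiers are hypothetical):

from enum import Enum

class Timeliness(Enum):
    TOO_LATE = "information arrived too late"
    A_LITTLE_LATE = "a little late"
    ON_TIME = "just on time"
    AHEAD = "well ahead of time"

class SolutionOutcome(Enum):
    WORKED_AS_IS = "worked as is, fixed our problem"
    MODIFIED_SLIGHTLY = "modified slightly, resubmitting with changes"
    MODIFIED_SIGNIFICANTLY = "modified significantly to solve our problem"

def record_feedback(source_id: str, timeliness: Timeliness,
                    outcome: SolutionOutcome) -> dict:
    """Capture both feedback loops for a known good source."""
    return {"source": source_id, "timeliness": timeliness.name,
            "outcome": outcome.name}

print(record_feedback("technet-kb-1234", Timeliness.ON_TIME,
                      SolutionOutcome.MODIFIED_SLIGHTLY))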

Timely sources are those leveraged when the known good source isn't available, or as a backup for the known good data. Simply put, it's launching a search with the intent of finding the answer. You may search sources you know to be good (solving a Windows problem, start your search at TechNet; solving an iPad problem, start at Apple; and so on). While these are known good sources, the keywords and search terms are not always what you expect; natural language search or sequential search doesn't always return a known response. Based on the nature of the data collected, you may take these sources into the brainstorming session (the initial feedback loop). If time isn't the driver, you may use the information in a test or trial-and-error system. Finally, the data may move from the unmanaged/unknown source into your managed IP system (another feedback loop). This can be done in a variety of ways, including knowledge articles in your service management system or documentation included with the working solution.


Information process widget

Expanding the concept of a known good source


A fatal flaw with source validation is the creation of expert systems. There is a need for an expert system such as DLM©, covered later in this book, but like all systems there have to be checks and balances. One of the checks is the reality of valid sources. A valid source is known good for a situation and has to be evaluated; a known not-good source continues to be relevant for other issues, just not for one particular problem. The separation of the data from the source is critical. For example, data that is always right but hard to get to can actually be a known bad source: if I have to expend significant energy to get the data, it isn't a good source.

Implementation of the system then takes into account the reality of information hoarding. The design will account for variances within sources, both in applicability to the problem and in the broader responsiveness of the system.

Our first step in building a system like this is making sure we have easy access to the information. This is accomplished in a number of ways but has to be applied to all information the system can contain.

The second step is that the data can be provided rapidly. Data that solves a problem but arrives 10 days late, or even 10 minutes late, isn't relevant.

Our last step is the known or unknown source. It's important that we not create an exclusionary system in this last step. Known good sources are simply pre-validated sources in relation to the specific problem we have. A great example is a peer-reviewed journal: it is a known good source, but if we are authoring the article for the journal, we don't always have the advantage of previous Intellectual Capital to base our article on.

Our system has to include evaluations not only of whether we can get to the data quickly but also whether we can consume it quickly. Is the data from a known good source, or do we need to verify it before we implement? Finally, the last piece in our overall process has to be the simple question: does the information acquired solve the problem? It is our base, because frankly, if information solves the problem, the remaining points matter less, except where time is the driver. This brings us to the IP acquisition model that rides under the OODA Loop. The goal, again, is to get to good decisions; using the model, we need to evaluate the aspect of decision over time: a spectrum of decisions that starts with poor and slow and moves all the way to good and fast.
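
One way to picture that spectrum is a toy scoring function (Python; the weighting between quality and speed is my assumption, not part of the model):

def decision_score(quality: float, elapsed: float, deadline: float) -> float:
    """Place a decision on the poor/slow-to-good/fast spectrum.

    quality: 0.0 (poor) to 1.0 (good); elapsed and deadline share a time unit.
    A good decision made well inside the deadline scores highest; a poor
    decision made late scores lowest."""
    time_factor = max(0.0, 1.0 - elapsed / deadline)  # 1.0 = instant, 0.0 = at the deadline
    return quality * (0.5 + 0.5 * time_factor)        # quality dominates, speed refines

print(decision_score(quality=0.9, elapsed=2, deadline=8))  # good and fast
print(decision_score(quality=0.9, elapsed=8, deadline=8))  # good but slow
print(decision_score(quality=0.3, elapsed=8, deadline=8))  # poor and slow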


Following a knowledge loop

Determining Source Validity in an inter-generational knowledge transfer system

John Boyd created the OODA Loop concept. In building the concept he created a straight line from observation through orientation, leading to decisions and actions, with a series of feedback loops that keep the system moving. The goal? Simply to create a model for good decisions. The speed of the decisions varies and, frankly, is controlled by the orientation. If someone is shooting at you (John Boyd was a US Air Force officer when he created the OODA Loop), the urgency of the orientation changes to reflect that, i.e., make the decisions faster!

Taking that initial OODA Loop process, for our new system we will add a source validation process focused on improving the quality of inbound data. Source validation is a process within orientation: how we see, view and capture the observation. Based on that concept, we will create a new loop from orientation back to observation, changing observation (collection) to reflect source validity (orientation), and then, when moving forward (Decide), we will create maximum return for our activity (Action).

Source validity takes a number of forms. A known good source has to be consistently reliable, which effectively means the feedback loop used within the system to capture sources has to be applied within the source as well. The outcome of the source's internal validation system has to mesh with our validation system: as a source validates its own information, we need to capture that “validation” and carry it into the system.
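
A minimal sketch of carrying a source's validation into our system (Python; the field names and the pre-validation rule are assumptions for illustration):

from dataclasses import dataclass

@dataclass
class ExternalValidation:
    """Validation metadata a source publishes about its own content,
    e.g. peer review status."""
    method: str
    verified: bool

def carry_validation(article_id: str, external: ExternalValidation,
                     registry: dict) -> None:
    """Mesh the source's internal validation with ours: only information the
    source itself marks as verified enters our system pre-validated."""
    registry[article_id] = ("pre-validated" if external.verified
                            else "needs internal review")

registry = {}
carry_validation("kb-42", ExternalValidation("peer review", True), registry)
print(registry)  # {'kb-42': 'pre-validated'}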

Expert systems exist in the wild. Every company has one; it may effectively be little more than a sneakernet, but it exists. The first problem to solve is how to capture the knowledge in the heads of both younger and older workers. It means changing the traditional valuation of workers while recognizing the new value of new ideas.

Our system is based on an orientation of realized value and potential value.

Realized value in our system is what we know: the existing experts or existing expert systems. Potential value represents the new idea, the new concept or the radical modification of existing information. This gives us our feedback loop to provide the optimal outcome on a consistent basis. The goal of Boyd's OODA Loop is consistently good decisions and therefore consistently good actions. Capturing the two different flavors of information requires an adaptive knowledge system: one that is aware of tradition (the way things have always been done) while allowing for change over time or the introduction of radical new ideas. Some questions to ask:

· Do the sources we use have validation systems?

· Do we have internal validation systems for information?

· Do our SME’s manage the validation system?

· If I capture IP from our system and use it in a modified form, how do I resubmit the new IP? (See the sketch after this list.)

· If IP doesn’t work for me, how do I notify the system?
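
The last two questions lend themselves to a concrete interface. Here is a minimal, hypothetical sketch (Python; none of this is a real DLM© API) of resubmitting modified IP and flagging IP that didn't work:

from dataclasses import dataclass

@dataclass
class IPRecord:
    ip_id: str
    version: int
    body: str
    status: str = "validated"  # or "pending-review", "flagged: ..."

class IPSystem:
    """Minimal in-memory stand-in for a managed IP repository."""
    def __init__(self) -> None:
        self._store = {}

    def add(self, record: IPRecord) -> None:
        self._store[record.ip_id] = record

    def resubmit(self, ip_id: str, modified_body: str) -> IPRecord:
        """Resubmit captured IP in modified form as a new, reviewable version."""
        old = self._store[ip_id]
        new = IPRecord(ip_id, old.version + 1, modified_body, "pending-review")
        self._store[ip_id] = new
        return new

    def flag_not_working(self, ip_id: str, reason: str) -> None:
        """Notify the system that a piece of IP didn't work for this user."""
        self._store[ip_id].status = f"flagged: {reason}"

system = IPSystem()
system.add(IPRecord("fix-101", 1, "original repair steps"))
print(system.resubmit("fix-101", "steps adjusted for boxed-in engines").status)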

More to come


IGKT dreamer

Getting the horse to water, helping him or her drink, and making sure how we got the horse to water is documented…

Not everyone wakes up to the soothing sounds of an adaptive alarm clock, one that increases in volume or raises the light in the room as the alarm goes off. Not everyone uses a clock radio; some people still prefer traditional alarms. Finally, there are people who just get up. Perhaps they have a blog to write, or they want to reflect on the day that is coming.

The same is true of technology. Frankly, not everyone uses technology. Technology exists around us in two distinct forms: the passive form of broadcast, such as television, and the more active form, such as gathering specific information (email, web) from a device. The passive form of technology is this broadcast reality. The active form is the one we are seeking to build as a component of an IGKT. The goal is not to force people who only consume passive technology to modify their behavior and suddenly become active technology users. The goal is to create a system that allows for consistent, managed interactions with both types. Like the old adage: I like both types of music, “Country and Western.”

Note: The use of active and passive denotes the type of interaction with technology; it is not a statement or value judgment about the interactions.

The value of information is also not the answer to this problem. All information has value; we may not understand its value today, but it has value. The more we capture, the better off we will be as we finally shed the last vestiges of the limits of the information age. Inclusive knowledge systems don't place value judgments on sources, only on applicability to the problem we are trying to solve.

The next graphic lays out the concept of information acquisition, with the concept of “get to done” (I got what I needed) at the top. The parallel is the children's card game Go Fish: when you draw what you wanted, your turn continues. Information doesn't always work like that. Sometimes you get close to done but have to modify the information. Sometimes you get some of what you need. Sometimes you realize the source in question isn't reliable, so you have to weigh the risk/reward of using it. And finally there is the “no help on this problem” bucket.

In an inclusive IP system this scale also applies to people, but for people we don't use the unreliable designation; we move that to unverified rather than unreliable. The concept is the same for either: information provided by that source needs to be validated regardless.
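
Here is a minimal sketch of that acquisition scale (Python; the names and ordering are my reading of the graphic, not anything defined by the system):

from enum import IntEnum

class AcquisitionResult(IntEnum):
    """The 'go fish' scale from the graphic, best to worst."""
    DONE = 4                # got exactly what I needed
    CLOSE_TO_DONE = 3       # got it, but had to modify the information
    PARTIAL = 2             # got some of what I need
    UNVERIFIED_SOURCE = 1   # usable, but the source must be validated first
    NO_HELP = 0             # nothing relevant to the problem

def needs_validation(result: AcquisitionResult) -> bool:
    """People and systems share the scale; 'unverified' (not 'unreliable')
    simply means the information must be validated before use."""
    return result == AcquisitionResult.UNVERIFIED_SOURCE

print(needs_validation(AcquisitionResult.UNVERIFIED_SOURCE))  # True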

We need to get the information into the system first, before we start worrying about the validity of sources. The scale is more about driving toward getting to done. Ultimately, an IGKT is a process improvement system: we are taking information that previously wasn't even in the system and adding it to improve the orientation. The OODA Loop system designed by John Boyd drives constant feedback loops both to modify orientation and to improve decisions. Getting the right information into the system is critical.

So what do we do about people who don't use technology today? The reality is that, no matter what, this system will require technology to provide the answers. It has to be easy to use, simple, and easy to input data into.


OODA Loop dreamer

The validity of my source…

The value of sources

#Ilackasourceism

It was during a meeting near Seattle, Washington that a friend posted my hashtag above on Wikipedia. It lacked a valid source, so Wikipedia, as expected, removed it. They, the leaders of Wikipedia, removed a phrase for lack of a source: the ultimate compliment for #Ilackasourceism.

Now, the phrase was meant more as a way of pointing out that there are many types of information, and many sources for that information.

Social Sources:

· My source: What are the sources you use most? A collection of people, and sources other than people, such as Google, Bing or an encyclopedia?

· Your source: I ask you for help, you reach out to your sources. They are hopefully different than the ones I've already used.

· Shared sources: sources we have in common – so they may get pinged twice.

Professional Sources

· Peer reviewed journal or web site

· Internal Knowledge Management System

· External to your organization paid validated information source (Information broker)

· Validated source such as Microsoft, Apple, Google or any published organizational knowledge base

The reality of sources is this: we have both knowledge sources I am aware of and trusted knowledge sources. There are people whose answers I trust more than even professional sources; they know my specific situation and apply their knowledge to the question, so their answer is more likely to match my problem. That said, about 90% of the time I start with a professional source for information.

Source, when considering the three states of ingest, analyze and consume, actually changes their order. Given a trusted source of information, you may skip ingest and analyze and go straight to consume. Given a new or unknown source, you may ingest and analyze the data multiple times before consuming it. Taking this back to John Boyd's OODA Loop: the validity of the source impacts the orientation of the observation. Trusted sources move us quickly to good decisions and actions. Untrusted sources require either a leap of faith or a decrease in the speed of the overall decision system.
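
A small sketch of that trust-dependent ordering (Python; the placeholder steps and the three-pass limit are assumptions, the point is only the flow):

def ingest(data: str) -> str:
    return data.strip()             # placeholder ingest step

def analyze(data: str) -> str:
    return data                     # placeholder analysis pass

def consume(data: str) -> str:
    return f"consumed: {data}"

def process(data: str, source_trusted: bool, max_passes: int = 3) -> str:
    """A trusted source skips straight to consume; a new or unknown source
    may be ingested and analyzed several times before consuming."""
    if source_trusted:
        return consume(data)
    for _ in range(max_passes):
        data = analyze(ingest(data))
    return consume(data)

print(process("  answer from a trusted SME  ", source_trusted=True))
print(process("  answer from an unknown forum  ", source_trusted=False))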


Edison Scale

Finding, using and, well, reusing a known good source, and the battle to get that into a digital information system. After all, we are nearly in the information age, right…

In the world of information, a known good source is an interesting situation, interesting for two considerably different reasons. There are many people who are personally known good sources. In other words, you know that if you go to that person, you will get the right information every time, or worst case, directions on how to gather the right information. The second is an electronic source of information you can search.

An inclusive system for inter-generational knowledge transfer starts with that electronic system. Why? You cannot guarantee that the “expert” will respond the same way to every person, and frankly, worst case, they are a choke point. A system with a choke point is not as efficient as a system built without choke points.

Taking into account the social and professional sources at your disposal, you build a personal knowledge network. That network contributes to your ability to quickly answer questions or solve problems. The larger and more effective your network, the better you are able to take the observations gained, modify them through orientation, and build the right decision model.

Of course, reality sets in. Systems that extend often weaken: the larger the network, the longer the response, and the further from known good sources you get. The sensitivity of time when solving problems is both a reality and a risk.

· How fast do I need the information?

· What sources have given known good information on this topic in the past?

· What factors limit the response rate of my network?

The first and the last seem pretty similar, but they are extremely different. The middle of this system is the problem area: if the middle is a human being, that information is always at risk. Does the person you are reaching out to continue to study the problem once they have solved it once? As a former teacher, I can tell you there are many types of test takers. The first type likes to make patterns when taking a test; they don't guess, they randomly pick answers. Another type answers once, turns it in and walks away. The Ah-Ha test taker reads the questions; when they know one, the light goes on and they answer it quickly, then come back and work on the ones without an Ah-Ha answer. They are the hybrid between the set-and-forget and the ponder-and-wonder test takers. Our last type is the test retaker: they answer everything, go back and check, and some check over and over and over. Known good sources aren't the first type, but they can be the second type.

If your known good system is an inclusive knowledge capital system designed around the concepts of Screen, Time and Source, you have a leg up. Now there is a person (SME) whose job is to answer those questions and then look for every variation of that question, to find every other possible answer, taking the concept of known good source to a new level.

Social, in that an interactive action occurs. Professional, in that the SME understands the concept and the topic. Inclusive, because all answers to the question are evaluated: you get the experts who have done this 100 times before. Fresh and exciting, because you get the brand new people looking at the problem from a different angle.

From left, the inclusive system, to right, the tacit knowledge network. The left system constantly updates and evaluates information, solutions and ways of looking at things. The middle section relies on the expertise of those who have been there or who have studied the problem. Experts aren't always people who have looked at something for 10,000 hours. It's important to evaluate the validity of a source: if the source is stuck (“the way things are”), you end up with the top solution and never get the new solution to the problem. That creates the known good source error.

The act and art of sharing: it's just do it!

The concept of an apprentice is still in use today, but the reality of the apprentice model is that it is radically different now. We have pushed the old model into the newer, more agile mentor model. But a mentor cannot build and drive the information system. Mentors are designed to improve one person's skills based on their knowledge and their ability to convey that information. It is, mostly, a one-on-one relationship, much like the apprenticeship of old.

Considerations in building an effective IGKT system are actually quite easily defined.


An easy-to-use interface has a number of components: the first being what is easy, and the second being on what device. It is fairly common today to use the traditional web interface as a presentation interface. Designing a web system that is inclusive and spans the generations for knowledge capture requires some creative thinking.

Hey – it’s time for your first brainstorming session! How do we build a starting point where everyone in the company will feel comfortable posting, adding, sharing and consuming information?

So here are some topics to discuss for your first brainstorming session:

· Cluttered is always an option – depends on your organization

· Fast? Mobile enabled? Mobile empowered?

· Many tabs? One Tab with many links?

· What do we have (good) today?

· What do we have (needs improvement) today?

· What are new employees using?

· What are established employees using?

Once you've got that first brainstorming session under your belt, you will have a good idea of two very distinct things: loosely what this site should include, and roughly what it should look like. Oh yeah, and when your first Parking Lot meeting is!

The concept of a one-stop shop truly varies by organization. What does it mean in your company? Today most organizations have distinct sites for the following:

· Employee Services

· Knowledge Capital

· Mentoring

· Training

Oh, yeah, that is going to be a problem, right? Time for your second brainstorming session (based on this initial pattern you can do the math: 7 core concepts, 7 brainstorming sessions and 7 parking lot sessions; according to adult learning theory, only 16 more to go to make the process de rigueur). For this brainstorming session we need to do something a little different: not change the brainstorming rules, but add more roles. We will need someone from the IP team, someone from the employee services group, someone from training and, finally, someone working in the mentoring program (which may be part of the training team). As they walk into this session, ask them to consider the following:

· There are no right answers.

· Enter the room seeking answers, not holding them.

· Open mind is more than open to persuasion.

The first bullet is to get rid of the “way things are” syndrome. The second asks people to leave the “way things are” specific to their group in their cube or office. The last is about truly accepting a brainstorming session. The hardest thing to do in a brainstorming session is to prove the merit of every idea. Parking lot discussions are for the art of the possible; brainstorming sessions are all about the universe of potential. If the team starts with the assumption that no one in the room has the only answer, the meeting goes a lot smoother.

Hey – parking lots are places where ideas flourish, not go to die. Start every brainstorming session by reminding people that a parking lot idea isn't a punishment; it is an opportunity for the idea to be reviewed in the right context!

Creating an ongoing list of existing systems and what should be in our new system will be useful. The place to start is finding the top three most-used sites in the company.


Hey I share – how about you?

Wait, there have been inter-generational knowledge transfer systems for years, right?

The concept of Screen, Time and Source, modified by ingest, analyze and consume, results in a capture-and-reuse infrastructure for the end points. The reality of the back end is something to consider very carefully. Where at the end point our design is deeply concerned with the screen and the consumption capabilities of the end user, or SCRaaS (screen as a service), for the back end we are just as concerned with the source, its validity and the ingestion.

In considering a system like this, you have two distinct technology presentations to consider. The first system like this that I was involved in began as a series of communities: people producing information, sharing information and distributing shared information. It was less effective than it could have been because there wasn't a true sharing culture (knowledge hoarders) and there wasn't effective search, which created Dumpster Divers[1]. A dumpster diver is someone who uses the KM system as a mass retrieval system: they search for terms, download everything they find, and then search the “dumpster” they've created to find what they need. That is not good behavior, as they end up with lots of out-of-date information.


We merged communities with search in our second attempt. It got pretty close, but the search technology we had failed in the end: it could not go beyond natural language search to adaptive search. Adaptive search understands that when I say “dogleg left” it's a golf term, and doesn't present a bunch of pictures of left legs of various dogs. Search engines today actually create a window presenting initial findings and a “Did you mean this…” line at the very top. Better, but still not adaptive enough for a true system of value; hence the need for SMEs. The SMEs would be a mix of automation and human thinking: the automation being trending, the human thinking being the adaptive search terms posted on the home page of the system. From that second attempt at a KM system I came up with the Knowledge Scale shown: asked and many options returned; asked and a few options returned; asked and my question answered. The scale shows the value of a system that adapts, via an SME and automation, to the information available and the question being asked.
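
As a toy rendering of the Knowledge Scale (Python; the result-count thresholds are illustrative assumptions, not part of the scale):

def knowledge_scale(result_count: int) -> str:
    """The Knowledge Scale: the fewer, better-targeted the results,
    the more the system has adapted to the question."""
    if result_count == 1:
        return "asked and my question answered"
    if result_count <= 5:          # threshold is illustrative
        return "asked and a few options returned"
    return "asked and many options returned"

for n in (120, 4, 1):
    print(n, "->", knowledge_scale(n))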

My third attempt at building a system like this went a different way. We created static SME information for users to consume. We created communities of interest around the concepts and topics, focused on solving problems. We mixed in training and built a considerable training infrastructure unique to the problem we were solving. This last system encompassed everything but adaptive search, and we got around that by creating the community of experts.

All three systems were ahead of their time. None of them had adaptive search, but they had many of the parts an inter-generational system has to have. First off, inter-generational knowledge transfer is not a new concept. In the last century we moved away from an IGKT system known as the apprentice system. Why? It was a focused knowledge transfer system that worked one on one, or one on a small group. The move to universities and away from the apprentice system was about creating greater uniformity within the professions. If you went to a doctor, you were going to a professional adhering to set and known standards, not to someone who spent two years learning at the knee of their uncle and then started a medical practice. The rise of the professions beyond what they had been drove the birth of university training and a reduction in the inter-generational knowledge transfer system known as apprenticeship. But what we are talking about now is beyond an apprentice system. We are talking about the creation of a knowledge system that allows for the ingestion, analysis and consumption of data in a manner that benefits the user, the system owner and the subject matter experts.


[1] “Dumpster Diving”: a KM term coined by Bob Forgrave of the ICE team.

Moving past Screen, Time and Source to data modifiers…

With Screen, Time and Source as the three technology drivers, we then move to ingest, analysis and consumption: the three user states of a knowledge system. The critical goal here is to create a system that is inclusive regardless of the state of the user. This includes the status of the user as a person (which we ignore) and the current emotional state of the user (which we also ignore); the goal is a system that takes all input.

Ingest has to be both user and system based. By that I mean the system or provider of information needs an ingest model that takes into account both the type of data it generates and the raw material that data comes from. The user may capture information via a variety of devices for later consumption. Package tracking, for example, is better today than it was 20 years ago, but it still has a flaw that technology points out: if you are shipping something, passing along the tracking number should be automated, and not just within the shipping system. You should be able to take a picture of a bar code provided by the shipper and have your shipping information uploaded to whomever you are shipping to, automatically. That is a knowledge ingestion system: automate things that today are manual.
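
A minimal sketch of that ingestion idea (Python; decode_barcode stands in for a real barcode library such as pyzbar, and every name here is hypothetical):

from dataclasses import dataclass

@dataclass
class Shipment:
    tracking_number: str
    recipient_email: str

def decode_barcode(image_bytes: bytes) -> str:
    """Stand-in for a real barcode decoder; returns the tracking number
    encoded in the photographed label."""
    return image_bytes.decode("ascii")  # placeholder decode

def notify_recipient(shipment: Shipment) -> None:
    print(f"Sent {shipment.tracking_number} to {shipment.recipient_email}")

def ingest_shipping_label(image_bytes: bytes, recipient_email: str) -> Shipment:
    """The knowledge-ingestion step: photograph the label once and the
    tracking information flows to the recipient automatically."""
    shipment = Shipment(decode_barcode(image_bytes), recipient_email)
    notify_recipient(shipment)          # no manual copy-and-paste step
    return shipment

ingest_shipping_label(b"1Z999AA10123456784", "receiver@example.com")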

We then need to apply our Screen, Time and Source to the ingestion process. If you are shipping a box at a shipping store, being able to ingest, consume and share that information should be real time; you don't want to ship a box and remember a day later that you need to send the tracking information to the person getting the box. The same is true for receipts and other business transactions. If you are loaning your organization money (expenses out), you want to send your expense report (money back) in as quickly as possible. The analysis of a shipping system is done by the shipper providing the status of the package; your job is to provide the number to the person receiving the package. Expenses are processed by your organization; you, again, are merely providing information.

At the point of ingest there are situations where you need to process information. For example, if your ingest job is to measure a lot for potential construction, then you are creating data. There are many automated laser measurement tools, so capturing the information is easily automated; the human, or analysis, component is understanding what the building process requires and whether the lot is truly feasible. Say you are considering building a gas station. You have done the pre-work (is it needed, and geographically where should it be, plus or minus a 5-mile span). You have four lots to consider. You take measurements using a laser system, and all four lots are large enough for your gas station. But one is on a very busy road, on the corner of an intersection with a stop light; the other three are on smaller side streets in the middle of the block. The human analysis comes in choosing which of the four lots to pursue (my guess would be the corner lot: easier access for people because of the stop light).

So ingest includes both pure ingestion and ingestion plus analysis. The analysis may be automated (capture an image and translate it to the language of the user) or it may include human input (measure a lot, evaluate the lot against the other lots being considered). Finally, there is the reality of consumption, where we have to provide information in the format required by the consuming system or user.

State  | Modification                  | Output
Screen | Ingest, Analysis, Consumption | Easy, screen-independent capture and consumption for the user.
Time   | Ingest, Analysis, Consumption | Information is routed to the core or central system automatically, based on the criteria of the system and the user.
Source | Ingest, Analysis, Consumption | User as a source and system as a source are evaluated in an automated fashion to assure the proper information is routed, ingested, analyzed and consumed.

Our states are modified by the modality of the information and the system's goal for the information. The goal here is to automate as much ingestion and analysis as possible. Consumption also invites a level of automation, but some consumption (human ingestion) requires the traditional reading, listening or watching of the information.
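
To tie the table together, a sketch of state-and-modification routing (Python; which cells are automatable is an illustrative assumption drawn from the table above):

from enum import Enum, auto

class State(Enum):
    SCREEN = auto()
    TIME = auto()
    SOURCE = auto()

class Modification(Enum):
    INGEST = auto()
    ANALYSIS = auto()
    CONSUMPTION = auto()

# Which state/modification pairs can be fully automated; human consumption
# (reading, listening, watching) stays manual. The mapping is illustrative.
AUTOMATABLE = {
    State.SCREEN: {Modification.INGEST, Modification.ANALYSIS},
    State.TIME: {Modification.INGEST, Modification.ANALYSIS},
    State.SOURCE: {Modification.INGEST, Modification.ANALYSIS},
}

def handle(state: State, mod: Modification) -> str:
    if mod in AUTOMATABLE[state]:
        return f"{state.name}/{mod.name}: automated"
    return f"{state.name}/{mod.name}: manual (human reads, listens or watches)"

for state in State:
    for mod in Modification:
        print(handle(state, mod))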


Knowledge Capital dreamer