The act and art of sharing: just do it!

The concept of an apprentice is still one in use today, but the reality of the apprentice model is now radically different. We have pushed the old model into the newer, more agile mentor model. But a mentor cannot build and drive the information system. Mentors are meant to improve one person's skills based on their knowledge and their ability to convey that information. It is, mostly, a one-on-one relationship, much like the apprenticeship of old.

Considerations in building an effective IGKT system are actually quite easy to define.


An easy-to-use interface has a number of components to it: first, what counts as easy, and second, on what device. It is fairly common today to use the traditional web interface as the presentation layer. Designing a web system for knowledge capture that is inclusive and spans the generations requires some creative thinking.

Hey – it’s time for your first brainstorming session! How do we build a starting point where everyone in the company will feel comfortable posting, adding, sharing and consuming information?

So here are some topics to discuss for your first brainstorming session:

· Cluttered is always an option – depends on your organization

· Fast? Mobile enabled? Mobile empowered?

· Many tabs? One Tab with many links?

· What do we have (good) today?

· What do we have (needs improvement) today?

· What are new employees using?

· What are established employees using?

Once you've got that first brainstorming session under your belt you will have a great idea of two very distinct things: loosely what this site should include and roughly what it should look like. Oh yeah, and when your first Parking Lot meeting is!

The concept of a one-stop shop truly varies by organization. What does it mean in your company? Today most organizations have distinct sites for the following:

· Employee Services

· Knowledge Capital

· Mentoring

· Training

Oh, yeah, that is going to be a problem, right? Time for your second brainstorming session (you can probably see the pattern by now; do the math: 7 core concepts, 7 brainstorming sessions and 7 parking lot sessions. According to adult learning theory, only 16 more to go to make that process de rigueur). This brainstorming session needs to be a little different. We don't change the brainstorming rules, but we need more roles: someone from the IP team, someone from the employee services group, someone from training and finally someone working in the mentoring program (which may be part of the training team). As they walk into this session you will need to consider the following:

· There are no right answers.

· Enter the room seeking answers, not holding them.

· An open mind is more than being open to persuasion.

The first bullet is there to get rid of the "way things are" syndrome. The second asks people to leave the "way things are" specific to their group back in their cube or office. The last one is about truly accepting a brainstorming session. The hardest thing to do in a brainstorming session is to grant merit to every idea. Parking lot discussions are for the art of the possible; brainstorming sessions are all about the universe of potential. So if the team starts with the assumption that no one in the room has the only answer, the meeting goes a lot smoother.

Hey – parking lots are places where ideas flourish, not go to die. Start every brainstorming session off by reminding people that a parking lot idea isn't a punishment; it is an opportunity for the idea to be reviewed in the right context!

Creating an ongoing list of existing systems, and of what should be in our new system, will be useful. The place to start is finding the three most-used sites in the company.


Hey I share – how about you?

Wait, there have been inter-generational knowledge transfer systems for years, right?

The concept of Screen, Time and Source, modified by ingest, analysis and consumption, results in a capture-and-reuse infrastructure for the endpoints. The reality of the back end is something to consider very carefully. Where at the endpoint our design is deeply concerned with the screen and the consumption capabilities of the end user, or SCRaaS (Screen as a Service), for the back end we are just as concerned with the source, its validity and the ingestion.

In considering a system like this you have two distinct technology presentations to consider. The first system like this I was involved in began as a series of communities: people producing information, sharing information and distributing shared information. It was less effective than it could have been because there wasn't a true sharing culture (knowledge hoarders) and there wasn't an effective search, which created Dumpster Divers[1]. A dumpster diver is someone who uses the KM system as a mass-retrieval system: they search for terms, then download everything they find, and then search the "dumpster" they created to find what they need. That is not good behavior, as they end up with lots of out-of-date information.


We merged communities with search in our second attempt. It got pretty close, but the search technology we had failed in the end. It could not go beyond natural language search to adaptive search. Adaptive search understands that when I say "dogleg left" it's a golf term, and doesn't present a bunch of pictures of left legs of various dogs. Search engines today present a window of initial findings with a "Did you mean this..." line at the very top. Better, but still not adaptive enough for a true system of value. Hence the need for SMEs. The SMEs would be a mix of automation and human thinking: the automation being trending, the human thinking being the adaptive search terms posted on the home page of the system. From that second attempt at a KM system I came up with the Knowledge Scale shown: asked and many options returned; asked and a few options returned; asked and my question answered. The scale shows the value of a system that adapts, via SMEs and automation, to the information available and the problem being asked.
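As a small sketch of what that SME-plus-automation layer might look like (the terms, domains and the mapping itself are all invented for illustration), a human-curated context table can disambiguate a query before it reaches a plain keyword engine:

```python
# A minimal sketch of SME-assisted "adaptive" search: a curated context map,
# the kind a human SME would maintain on the system home page, tags a query
# with its domain before it is handed to an ordinary keyword engine.

SME_CONTEXT = {
    "dogleg left": "golf",   # a course-layout term, not canine anatomy
    "strike": "bowling",
    "pipeline": "software",
}

def adapt_query(query: str) -> str:
    """Prefix the query with its SME-assigned domain, if one is known."""
    domain = SME_CONTEXT.get(query.lower())
    return f"{domain}: {query}" if domain else query

print(adapt_query("Dogleg left"))    # golf: Dogleg left
print(adapt_query("unknown term"))   # unknown term
```

The trending side of the automation would feed new terms into the table; the SME reviews them, which is exactly the mix of machine and human thinking described above.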

My third attempt at building a system like this went a different way. We created the SME static information for users to consume. We created communities of interest around the concepts and topics, focused on solving problems. We mixed in training and built a considerable training infrastructure that was unique to the problem we were solving. This last system encompassed everything but adaptive search, and we got around that by creating the community of experts.

All three systems were ahead of their time. None of them had adaptive search, but they had many of the parts an inter-generational system has to have. First off, inter-generational knowledge transfer is not a new concept. In the last century we moved away from an IGKT system known as the apprentice system. Why? It was a focused knowledge transfer system that worked one on one, or one on a small group. The reality was the move to universities, and away from the apprentice system, to create greater uniformity of professions. If you went to a doctor you were going to a professional adhering to set and known standards, not to someone who spent two years learning at the knee of their uncle and then started a medical practice. The rise of professions beyond what had existed before was a cause of the birth of university training and a reduction in the inter-generational knowledge transfer system known as the apprentice system. But what we are talking about now is beyond an apprentice system. We are talking about the creation of a knowledge system that allows for the ingestion, analysis and consumption of data in a manner that benefits the user, the system owner and the subject matter experts.

[1] "Dumpster Diving": a KM term coined by Bob Forgrave of the ICE team.

Moving past Screen, Time and Source to data modifiers…

With Screen, Time and Source as the three technology drivers, we then move to the question of ingest, analysis and consumption: the three user states of a knowledge system. The critical goal here is to create a system that is inclusive regardless of the state of the user. This includes the status of the user as a person (which we ignore) and the current emotional state of the user (which we also ignore); the goal is a system that takes all input.

Ingest has to be both user and system based. By that I mean the system or provider of information needs an ingest model that takes into account both the type of data they generate and the raw material that data comes from. The user may capture information via a variety of devices for later consumption. Take package tracking, for example: a system that is better today than it was 20 years ago, but one with a flaw that technology points out. If you are shipping something, the tracking number should be automated, and not just within the shipping system. You should be able to take a picture of a bar code provided by the shipper and have your shipping information uploaded to whomever you are shipping to, automatically. That is a knowledge ingestion system: automate things that today are manual.
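Once the photographed bar code has been decoded to text, the ingestion step is just parsing and routing. A minimal sketch, assuming an invented "CARRIER|TRACKING|DESTINATION" payload format (a real carrier's barcode would need that carrier's actual symbology):

```python
# Sketch of barcode-driven shipping ingestion. The payload format below is
# a made-up example; the point is that ingest turns a raw capture into a
# structured record that can be routed to the recipient automatically.

def ingest_scan(payload: str) -> dict:
    """Parse a decoded barcode payload into a routable tracking record."""
    carrier, tracking, dest = payload.split("|")
    return {"carrier": carrier, "tracking": tracking, "notify": dest}

record = ingest_scan("FASTSHIP|1Z999|recipient@example.com")
print(record["tracking"])  # 1Z999
```

A real system would follow the parse with an automated notification to the `notify` address; the manual step (remembering to email the tracking number) disappears.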

We then need to apply our Screen, Time and Source to the ingestion process. If you are shipping a box at a shipping store, being able to ingest, consume and share that information should be real time. You don't want to ship a box and remember a day later that you need to send that information to the person getting the box. The same is true for receipts and other business transactions. If you are loaning your organization money (expense out), you want to send your expense report (money back) in as quickly as possible. The analysis of a shipping system is done by the shipper providing the status of the package; your job is to provide the tracking number to the person receiving the package. Expenses are processed by your organization; you again are merely providing information.

At the point of ingest there are situations where you need to process information. For example, if your ingest job is to measure a lot for potential construction, then you are creating data. There are many automated laser measurement tools, so capturing the information is critical. The human, or analysis, component is understanding what the process of building requires and whether the lot is truly feasible. For example, you are considering building a gas station. You have done the pre-work (is it needed, and geographically where should it be, plus or minus a 5-mile span). You have four lots to consider. You take measurements using a laser system and all four lots are large enough for your gas station. But one is on a very busy road, on the corner of an intersection with a stop light. The other three are on smaller side streets in the middle of the block. The human analysis comes in choosing which of the four lots to pursue (my guess would be the corner lot: easier access for people because of the stop light).

So ingest includes both pure ingestion and ingestion plus analysis. The analysis may be automated (capture an image and translate it to the language of the user) or it may include human input (measure a lot, evaluate the lot against the other lots being considered). Finally, there is the reality of consumption, where we have to provide information in the format required by the consuming system or user.
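The gas-station example above can be sketched in code: the laser measurements are the pure ingest, an automated feasibility gate filters them, and the human analysis is expressed as an access score. All numbers and field names here are illustrative assumptions, not real survey data.

```python
# Sketch of "ingestion plus analysis": automated measurement (sq_ft) gates
# feasibility, and a human-supplied access_score supplies the analysis layer.

lots = [
    {"name": "corner lot",  "sq_ft": 12000, "access_score": 9},  # stop light
    {"name": "mid-block A", "sq_ft": 12500, "access_score": 4},
    {"name": "mid-block B", "sq_ft": 11000, "access_score": 5},
    {"name": "mid-block C", "sq_ft": 13000, "access_score": 3},
]

MIN_SQ_FT = 10500  # automated feasibility gate (ingest side)

def best_lot(candidates):
    feasible = [lot for lot in candidates if lot["sq_ft"] >= MIN_SQ_FT]
    # human analysis, expressed as an access score, breaks the tie
    return max(feasible, key=lambda lot: lot["access_score"])

print(best_lot(lots)["name"])  # corner lot
```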





(These three images show the Ingest, Analysis and Consumption states at three levels of the system.)

· Ingest, Analysis, Consumption at the endpoint: easy, screen-independent capture and consumption for the user.

· Ingest, Analysis, Consumption in routing: information is routed to the core or central system automatically based on the criteria of the system and the user.

· Ingest, Analysis, Consumption in validation: user as a source and system as a source are evaluated in an automated fashion to assure proper information is routed, ingested, analyzed and consumed.

Our states are modified by the modality of the information and the system goal for the information. The goal here is to automate as much ingestion and analysis as possible. Consumption also invites a level of automation, but some consumption (human ingestion) requires the traditional reading, listening or watching of the information.
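A minimal sketch of that modality-driven routing, with the modality names and rules assumed purely for illustration:

```python
# Sketch of ingest -> analysis -> consumption routing by modality.
# Machine-friendly modalities go to automated analysis; anything flagged
# critical is pushed to a human immediately; the rest is queued for the
# traditional read/watch/listen consumption.

def route(item: dict) -> str:
    """Decide how an ingested item should be consumed."""
    if item["modality"] in ("sensor", "image"):
        return "automated"            # machine analysis, no human needed
    if item.get("critical"):
        return "human-immediate"      # push to a person right away
    return "human-queued"             # consumed later by a human

print(route({"modality": "sensor"}))                   # automated
print(route({"modality": "video", "critical": True}))  # human-immediate
print(route({"modality": "document"}))                 # human-queued
```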


Knowledge Capital Dreamer

Building an Inter-Generational Knowledge Transfer system…(what technologies and tools are needed)…

The first requirement is search. The Syncverse[1] introduced the concept of Verses: places in your personal storage world where you could control who had access, who could put data in the location and, ultimately, what data was there. The reality of compute power today is that search is so much better. But now, at the edge of the information age, we need to get at other types of data and be able to search them quickly.

Where The Syncverse is a story of getting the data to a single place for mobile consumption, the new paradigm of search extends the concept even further. In The Syncverse the concept of data storage is discussed: what should be on your device, what should be in the cloud, what should (as detailed yesterday) be available real time versus what can be delayed in delivery. The concept of just-in-time data comes out of that discussion: have data on my device that is relevant to what I need to do, based on a standard. The standard for most people would be their calendar. Why? For the most part, if you need specialized information it will be based on meetings in your calendar. No one faults people for saying "hang on a second, I don't have that information readily available but I can get it" when a new concept comes into a meeting.
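The calendar-as-standard idea can be sketched directly: pre-stage on the device only the material tied to today's meetings, and leave everything else in the cloud. The meeting and file structure below is invented for illustration.

```python
# Sketch of just-in-time data: the calendar drives what gets staged locally.

calendar = [
    {"meeting": "Q3 review",   "topics": ["budget", "headcount"]},
    {"meeting": "Design sync", "topics": ["search"]},
]

library = {
    "budget":    "budget.xlsx",
    "headcount": "hc-plan.docx",
    "search":    "search-notes.md",
    "archive":   "old-stuff.zip",   # not needed today, stays in the cloud
}

def stage_for_today(cal, lib):
    """Return the files worth syncing to the device for today's meetings."""
    needed = {topic for entry in cal for topic in entry["topics"]}
    return sorted(lib[t] for t in needed if t in lib)

print(stage_for_today(calendar, library))
```

Anything not staged is still reachable ("I can get it"); it just isn't occupying device storage ahead of need.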

So the ability of a search engine to search voice, video, music, text, spreadsheets, web pages and all other emerging content types is critical. A related concept is the Screen as a Service, where we prioritize screen real estate based on the information, user preference and the available screen. For example, information placed on your smart watch would be different than on your larger cellular phone screen, and different again on the even larger tablet screen. So intelligent information presentation is a critical goal in initially creating the concept of bound search.
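A sketch of that Screen-as-a-Service prioritization: the same result rendered at different detail levels depending on available screen real estate. The pixel breakpoints are illustrative assumptions only.

```python
# Sketch of Screen-as-a-Service: pick a detail level for the available screen.

def present(result: dict, screen_width_px: int) -> str:
    if screen_width_px < 400:      # smart watch / very small phone
        return result["headline"]
    if screen_width_px < 1000:     # phone / small tablet
        return f'{result["headline"]} - {result["summary"]}'
    return f'{result["headline"]} - {result["summary"]}\n{result["body"]}'

r = {
    "headline": "River at 3.1m",
    "summary": "Normal range",
    "body": "Full gauge history...",
}
print(present(r, 350))   # just the headline on a watch
print(present(r, 1400))  # headline, summary and full body on a tablet
```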


The three drivers, Screen, Time and Source, are linear. They build on each other in presenting valid information to a user, prompting you when necessary (you are about to get data that will require a larger screen than you are currently using; the data will be returned in 22 seconds; the source we are using is a known good source). All three result in a presentation of information in the time required to make a good decision (to borrow from Bill Gates). Basing this on John Boyd's OODA Loop decision framework, we use Screen, Time and Source as the components of the orientation and feedback loops.


The first component is to move observation closer to the desired result. The second is to create a source validation loop within the process. For example, suppose that based on Screen and Time we select a source, but that source provides bad information more than 20% of the time; we need to use a feedback loop and change the weighting of that source.

Source weighting gives us a strong KM system. The formula for source weighting is really quite simple. Using traditional modeling, we take the following components:

1. How fast can the requested data be returned?

2. Is there flexibility in the formatting of the data?

3. What is the validity of the source (failure rate)?

Speed of return + flexibility of data presentation are the initial parts. Of course, the last one trumps the first two every time. Invalid sources produce bad orientation, worse observations and, ultimately, both bad decisions and actions.

Validity, of course, dominates: Source weight = Validity × (Speed of return + Flexibility of presentation).
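A sketch of that formula in code, reading validity as a multiplicative gate so that it "trumps" speed and flexibility (the 0-to-1 scoring ranges, the gating form, and the 20% feedback rule from the OODA discussion above are all assumptions for illustration):

```python
# Sketch of source weighting with an OODA-style validity feedback loop.

def source_weight(validity: float, speed: float, flexibility: float) -> float:
    """Each component scored in [0, 1]; validity gates the other two."""
    return validity * (speed + flexibility)

def apply_feedback(validity: float, failure_rate: float) -> float:
    """Downgrade a source that provides bad information more than 20% of the time."""
    return validity * 0.5 if failure_rate > 0.20 else validity

fast_but_flaky = source_weight(apply_feedback(0.9, 0.25), 1.0, 1.0)
slow_but_solid = source_weight(apply_feedback(0.9, 0.05), 0.5, 0.5)
print(fast_but_flaky, slow_but_solid)  # both 0.9: the flaky source loses its speed edge
```

The point of the example: once the feedback loop halves the flaky source's validity, its speed and flexibility advantage evaporates, matching "the last one trumps the first two."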

[1] The Syncverse, by Scott Andersen




Inter-Generational Knowledge Transfer Wanderer

Building the capture, processing and delivery system for knowledge transfer!

Building an Inter-generational knowledge transfer system.

The reality of the almost-information age is the reality of information. As I've mentioned previously, 110 zettabytes of information are produced just by devices operating in the Internet of Things (IoT). From sensors and video surveillance systems to connected doorbells and connected cars, data is being produced. A lot of the data produced by IoT devices is never consumed or stored. Some of it is automatically stored (video surveillance and other video feeds).

It is best in such a design process to figure out what the true requirements are first. We need a data organization system. In the world of data analytics you have two distinct concepts: the data lake and the data stream. A lake represents ongoing storage. A stream comes either from a source (a sensor or other device) or from a lake and, as the name denotes, is a smaller amount of data constantly arriving. Think of it as a television station and a television: the station has tons of content but only streams part of it to the television set, and the TV can receive streams and choose between the streams it needs to consider.

Within the lake and stream, you then break data into four distinct categories (for action/reaction) that let you decide what needs to be done to the data before presentation.

(This image shows the four delivery-time buckets.) The four buckets show the critical nature of the information; the example being how quickly I need the data. The other side of this information process is the amount of analysis needed before data presentation. This second process applies to the first in how the information is moved from source to delivery.

(This image shows delivery times combined with processing requirements.) We take the four buckets of data delivery times and add the processing required for the overall impact of the information. This gives us a system with the overall processes in place.
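The bucket-plus-processing idea can be sketched as a small classifier. The bucket names and time thresholds are illustrative assumptions; the processing tags echo the table that follows.

```python
# Sketch of the lake/stream bucket model: bin each item by how quickly it is
# needed, then pair that with the processing required before presentation.

def delivery_bucket(seconds_until_needed: int) -> str:
    if seconds_until_needed <= 1:
        return "immediate"
    if seconds_until_needed <= 60:
        return "near-real-time"
    if seconds_until_needed <= 3600:
        return "timely"
    return "archival"  # stays in the lake until requested

def plan(item: dict) -> tuple:
    """Pair the delivery bucket with the required pre-presentation processing."""
    return (delivery_bucket(item["needed_in_s"]), item["processing"])

flood_alert = {"needed_in_s": 1, "processing": "flexible formatting"}
old_footage = {"needed_in_s": 86400, "processing": "trim non-requested timeline"}
print(plan(flood_alert))  # ('immediate', 'flexible formatting')
print(plan(old_footage))  # ('archival', 'trim non-requested timeline')
```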

Examples of this process in action are in the table below:

Data type | Time requirement | Modification requirement | Delivery type
Stored video | | Remove non-requested timeline information |
Sensor (water level of river) | | Remove extra data (if water level is normal don't report) | Various screens, portable devices and others
Human being | | Critical to be able to receive direct human input and modify that input so that it can be shared with other humans | Screen, various
Critical warning | | Flexible formatting of all critical information | All devices at the same time

Effectively, what happens is that information begins to move from the 110 zettabytes possible to more consumable amounts. Given that people use various devices, it is critical that the system be device aware in its presentation. That drives into my long-stated direction of the Screen as a Service: be aware of the output device.

So what we get here is a delivery system for information. What we need to build next is the system that captures the information effectively. The goal is to create a human capture system that encompasses the ways humans produce information: email, documents, presentations, video, audio and any other format, including art, music and so on. The system has to adapt to the type of information produced and the person producing the information. It then adapts again to the person consuming the information. The faster the information is required, the faster the adaptation has to occur.
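One way to picture that format-adaptive capture layer is a dispatch table: each way a human produces information gets its own normalizer into a common record. The handlers here are stand-ins; real ones would do transcription, OCR and so on.

```python
# Sketch of a format-aware capture layer: normalize every production format
# (email, audio, video, ...) into one record shape the system can reuse.

def normalize(kind: str, raw: str) -> dict:
    handlers = {
        "email": lambda r: {"text": r, "source": "email"},
        "audio": lambda r: {"text": f"[transcript of] {r}", "source": "audio"},
        "video": lambda r: {"text": f"[transcript of] {r}", "source": "video"},
    }
    # unknown formats are still captured, never rejected (inclusive by default)
    handler = handlers.get(kind, lambda r: {"text": r, "source": "unknown"})
    return handler(raw)

print(normalize("audio", "standup recording")["text"])
```

The fallback handler matters: an inclusive system captures unfamiliar formats rather than turning them away, and a better handler can be added later.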

Placing this system into the wild requires planning. While the requirements we are gathering focus on the capture of knowledge, we need to be able to place them into a simple decision matrix. As stated before, John Boyd's phenomenal OODA Loop process fits perfectly. It adds the additional feedback processes needed to improve information flow and capture along the way.


Inter-Generational Knowledge Transfer

Look upon my works ye mighty (Ozymandias) and know that I built it. I know how to rebuild it and if you want that information you have to come to the temple of knowledge and pay…

There are no bad ideas. It is something I learned as an elementary school teacher. You cannot effectively encourage creativity and problem solving if you act as though there are bad ideas. Or worse, go beyond acting and actually believe there are bad ideas.

Bad ideas are not possible. There are buckets to consider: wrong time, requires resources we don't have right now, and so on. That is why you build out the IGKT system parking lot. The better you get at brainstorming and parking lot meetings, the more buy-in your organization will get.

Creating an inclusive knowledge environment takes more, however, than just having brainstorming sessions and a good parking lot infrastructure. There are two components to consider. The first is the overall management goal of inter-generational knowledge transfer. Some organizations drive toward an expert system. Expert systems are wonderful, but hard to convert into an inclusive information-gathering system. Expert systems are built around the knowledge in an expert's head; experts are encouraged to provide answers to specific problems. This traditional model is often the reality of consulting: experts brought in to solve problems based on knowledge they have. That knowledge is required all the time by the consulting customer, yet it leaves with the consultant. Consulting companies often then create internal knowledge stores of information so they can recreate and solve problems quickly. However, this is where the expert culture often breaks down. I chased IP from smart people for 5 years in a past life. No matter how you approach the problem, it exists. Hence this blog and my sharing of ideas openly. I got tired of the reality of consulting knowledge hoarding.

Inclusive systems don't function with knowledge hoarders. They cause the system to break down by sending around locked PDF files with their nuggets. They seldom submit concepts to the overall knowledge management system and they don't participate in open brainstorming sessions. They are the exhausted high priests of the expert culture; we go to them when we can't solve the problem any other way.

Back to the point: does your organization drive toward an expert culture? The second component is the reality of the technology. As we move toward the information age (we are not there yet and have a while to go), there is a growing amount of data. There is the reality of CPS (Cyber-Physical Systems), which is the commercial IoT, or IIoT, expanded to include the broader realities of integration and management as components of the things hosted on the internet. Those many sensors, devices and reporting systems generate 110 ZB (zettabytes) of information every year. The rise of data analytics comes out of the generated data that organizations need to quickly grab, shake and, if no longer viable, discard.

So the things that impact your inclusive system are the technology you need and the reality of organizational culture. This doesn't even consider the reality of language barriers, which translation programs are knocking down very quickly. What once was the great limiter of sharing (translation) isn't as relevant now as it was, and it continues to grow less and less relevant. Cultural differences are a critical component of an inclusive inter-generational knowledge transfer system. Culture, like the other factors, should never be part of measuring inbound knowledge. It should, however, be considered a critical part of building a repository using the DLM© model. If someone says "in my culture we like to submit information orally," then your inter-generational knowledge transfer system needs a podcast element. That way the cultural preference for submitting spoken information can be supported.

My culture (personal) likes to submit information via blogs. Creating an easily consumed blogging system that supports external blogs as well as considers both the personal and professional aspects of blogs would be of value.

Many cultures. Many types of people. All of them building a new world by sharing information. The organizational cultural blockers are the people that see things as, well, the way they are. I am going to throw that idea into my blog parking lot. I will be discussing it later as part of this series, but for now I am going to close this up for today.

(My law of information: every time I have to stop and reach out for the expert is time I will never get back!)


Inventor DLM© and The Edison Scale©

Building an inclusive Inter-Generational Knowledge Transfer System…

Why does an inter-generational knowledge transfer system require inclusion? First off, because knowledge isn't in the heads of the few. If it is, your organization has an issue. The reality of information is that it needs to move freely about the organization.

(This image shows traditional inter-generational knowledge transfer: one way, to similar people.)

Hierarchies block free information flow. Exclusive behavior blocks information flow. The reality of needing to build an inter-generational knowledge transfer capability is that for many generations now we've built the opposite. One of the interesting studies released a few years ago concerns the concept of an expert: 10,000 hours in your profession is required for you to be an expert in it. But once upon a time, doctors with 10,000 hours worked on cadavers and then delivered babies. There was a high rate of both mother and child mortality. It was a non-professional who realized you can't go from a cadaver to a living pregnant woman without stopping and washing your hands. That was the brainchild of a hospital administrator in Vienna, not a professional, 10,000-hour doctor. The result was a reduction, radical in fact, in infant and mother mortality.

Ideas can come from anywhere and anyone. One of my personal complaints about our education system is that we don't encourage the creation of new ideas. But it's a reflection of society, so I can't blame educators for the reality of their world. There is probably more IP on the laptops of people in a meeting than in the KC system. There are probably more ideas and dreams in the heads of the workers in that meeting than on the hard drives, and so on.

Traditional KM systems are hierarchical. The reason for this is the concept of information validation. People go to KM systems to verify information and to get an answer. They don't go because it's Tuesday and on Tuesdays they search for names of Roman emperors. The DLM© was built with this in mind. I grew up in a traditional KM system, so when I designed the system (process) that would become DLM© I took into account the need to continue what has always been. As we evolve that thinking, the next step is the broader, inclusive step: frequent brainstorming sessions (why don't more companies have a monthly Monday-morning brainstorming session?). Small-group and large-group brainstorming can provide massive value, and if done regularly you will have the opportunity to solve the big problems facing your organization.

Monthly brainstorming sessions and monthly parking lot sessions. Each month deals with an organizational problem, with all ideas thrown out. Ones that are relevant to other issues are put in the parking lot; monthly parking lot issues are addressed in the parking lot meeting. It becomes a more inclusive environment for ideas.

Leaving only the reality of experts to deal with. Ideas, as I have said before, are flowers. You water them, you put them into good soil and you help them grow. Some flowers like sunshine, and lots of it. Some flowers like some sun and some shade. Some flowers really only like shade. Ideas, like flowers, need to be cared for. They need to feel like the neighbor's dog isn't going to dig them up and chew them into oblivion, or that they are in an environment that is all sun when they prefer shade.

Inclusion means that ideas are given a chance. Again: monthly brainstorming sessions will increase the amount of information produced by the organization. The worst thing that can happen out of the monthly brainstorming sessions is that you can't cover everything in a single parking lot session; so you move to twice-monthly parking lot meetings. Inclusion also creates an open environment in the organization, with "the way things are" being an excuse, not a mission and vision for the organization.

It is easy to say inclusion. The word in and of itself isn't hard to throw out. It is hard, effectively, because inclusion requires change. It requires everyone in the room while brainstorming to actually listen to the ideas of others. Listen is a hard word for people to accept. First off, the person speaking may not have invested the 10,000 hours in the profession to be an expert. Or worse, the person speaking may be different than you are. So the ideas are rejected. In building a DLM© based on The Edison Scale©, the value of any one person and their ideas is 1. No idea has greater weight than 1. No person has greater weight than 1.

(Using the Edison Scale© and building an inclusive knowledge transfer system creates a broader IGKT system that is inclusive of experience, culture and the origin of ideas!)

Inclusion means that the parking lot isn't used to throw ideas away; it is used to encourage later discussions. Edison kept trying to create the light bulb. He found many ways that wouldn't work, ways that upon consideration weren't good as conduits for electrical power to produce light. But over the years Edison took ideas and kept building on them. Ultimately he created a lot more than the light bulb. You do that by storing information.

I wonder how many light bulbs your company or organization has missed because you have an old-fashioned system that doesn't consider inclusion critical and ideas the most valuable thing your organization has.

When is your parking lot meeting?


Inventor of DLM© and Edison Scale©