A fatal flaw with source validation is the creation of expert systems. There is a need for an expert system such as DLM©, covered later in this book, but like all systems it requires checks and balances. One of those checks is the reality of valid sources. A valid source is known good for a situation and has to be evaluated; a known-not-good source remains relevant for other issues, just not for the particular problem at hand. Separating the data from the source is critical. For example, a source whose data is always right but hard to get to can actually be a known bad source: if I have to expend significant energy to get the data, it isn't a good source.
Implementation of the system then takes into account the reality of information hoarding. The design will account for variances within sources, both in their applicability to the problem and in the broader responsiveness of the system.
Our first step in building a system like this is making sure we have easy access to the information. This can be accomplished in a number of ways, but it has to be applied to all information the system can contain.
The second step is that the data can be provided rapidly. Data that solves a problem but arrives 10 days late, or even 10 minutes late, isn't relevant.
Our last step is the known or unknown source. It's important that we not create an exclusionary system in this last step. Known good sources are simply pre-validated sources in relation to the specific problem we have. A great example of this would be a peer-reviewed journal. It is a known good source, but if we are authoring the article for that journal we don't always have the advantage of previous Intellectual Capital on which to base our article.
Our system has to include evaluations not only of whether we can get to the data quickly but also of whether we can consume it quickly. Is the data from a known good source, or do we need to verify it before we implement? Finally, the last piece in our overall process has to be the simple question: does the information acquired solve the problem? It is our base, because frankly if the information solves the problem, the weight of the other points decreases, except in the case where time is the driver. This brings us to the IP acquisition model that rides under the OODA Loop. The goal, again, is to get to good decisions, and using the model we need to evaluate decisions over time: a spectrum that starts with poor, slow decisions and moves all the way to good, fast decisions.
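The evaluation steps above can be sketched as a simple decision routine. This is only an illustrative sketch, not part of DLM© itself; all of the names, fields, and return values here are assumptions introduced for the example.

```python
# Illustrative sketch of the source-evaluation process described above.
# All names and return values are hypothetical, not taken from DLM©.

from dataclasses import dataclass


@dataclass
class SourceEvaluation:
    easy_access: bool      # step 1: can we get to the data easily?
    arrived_in_time: bool  # step 2: was the data provided rapidly enough?
    known_good: bool       # step 3: pre-validated for this specific problem?
    solves_problem: bool   # the base question: does it solve the problem?


def evaluate(ev: SourceEvaluation, time_is_driver: bool = False) -> str:
    # If the information solves the problem, the other points carry
    # less weight, except when time is the driver.
    if ev.solves_problem and not time_is_driver:
        return "use"
    if not ev.arrived_in_time:
        return "discard"      # late data isn't relevant
    if not ev.easy_access:
        return "poor source"  # too much energy spent acquiring the data
    if not ev.known_good:
        return "verify first" # unknown source: validate before implementing
    return "use" if ev.solves_problem else "discard"


print(evaluate(SourceEvaluation(True, True, True, True)))  # → use
```

Note the ordering: timeliness is checked before source quality, because under this model data that arrives too late is irrelevant no matter how good its source is.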
The process follows a knowledge loop.