The migration tool
It seems like a perfect storm, and perhaps in the long view it is: a migration tool that includes the business intelligence required for transitional services to be effective.
What would such a tool look like?
1. Understanding the information flow structure of the organization, allowing for automated data retention in the specific required location. This would mean automated cloud placement and correct creation of an index and search placement, and then updating the on-premise backup of the data as changes occur.
In reality this tool would have three core pieces:
Search engine placement (best bet)
Search engine index (this would need to include the metadata to make the overall search better)
Data migration following business rules from on-premise to the cloud.
Two-way migration (back to on-premise) for data.
Business rules that allow the management of what IP is moved and where.
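The pieces above can be sketched as a toy model. This is a minimal sketch, not any real product; the `Document` and `MigrationTool` names, the metadata fields, and the sample rule are all hypothetical, and the "index" and "stores" are just dictionaries standing in for a real search index and real cloud/on-premise storage.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    body: str
    metadata: dict = field(default_factory=dict)  # feeds the search index

class MigrationTool:
    """Toy model of the core pieces: index with metadata, business rules,
    and two-way (cloud <-> on-premise) migration."""
    def __init__(self, rules):
        self.rules = rules        # callable: Document -> "cloud" | "on-premise"
        self.index = {}           # stand-in for a search engine index
        self.cloud = {}
        self.on_premise = {}

    def ingest(self, doc: Document) -> str:
        # 1. Index the document, metadata included, so search improves.
        self.index[doc.doc_id] = {"body": doc.body, **doc.metadata}
        # 2. Apply the business rules to decide where the IP lives.
        target = self.rules(doc)
        store = self.cloud if target == "cloud" else self.on_premise
        store[doc.doc_id] = doc
        return target

    def move_back(self, doc_id: str) -> None:
        # Two-way migration: pull a document from the cloud back on-premise.
        if doc_id in self.cloud:
            self.on_premise[doc_id] = self.cloud.pop(doc_id)

# A hypothetical rule: sensitive IP stays on-premise, everything else goes up.
rules = lambda d: "on-premise" if d.metadata.get("sensitive") else "cloud"
tool = MigrationTool(rules)
tool.ingest(Document("d1", "price list", {"sensitive": False}))
tool.ingest(Document("d2", "trade secret", {"sensitive": True}))
tool.move_back("d1")   # business decides d1 should come back on-premise
```

The point of the sketch is the shape, not the code: indexing, rules, and movement are separate concerns, which is why the search integration is the hard part to build.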
Such a tool does not exist today of course, but there are several that have pieces of this. The search integration would be the difficult component to build going forward.
Podcasts of this topic are also here http://docandersen.podbean.com
What are the three pillars of Transitional Services?
1. Building a system to determine what information lives where.
2. Using a tool or building a tool that will create the two way replication pattern from the cloud and back.
3. Providing the right level of security.
Today let’s talk about the tools that are out there. First off, there are point-in-time solutions that move data from one place to the other and back again. But there are no control systems that make this an automated and effective solution. What we need is a tool that will effectively allow us to create business rules that support the creation of IP, and then follow a decision matrix as to where that information should be.
I’ve thought about this for the past week and have some ideas for creating a new data mining tool. It’s really what this is all about – a tool that leverages a search engine (Bing) and produces a net result. This result is then provided as a best bet to the person searching. The data is then moved into a consumption area for the users to access.
This really builds three areas for a user to interact with:
So within the concept of “I need information” the user heads to their search area. Once they “found what I needed” they move that data into consumption. If during consumption they realize that the data as provided doesn’t cover what they fully need, they move that data into the “update” section.
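That search → consumption → update loop could be modeled as a tiny state flow. The area names and event strings below are lifted straight from the paragraph above; everything else is a hypothetical sketch of how a tool might track which area a user is in.

```python
# The three user areas and the events that move a user between them.
TRANSITIONS = {
    "search": {"found what I needed": "consumption"},
    "consumption": {"data incomplete": "update"},
    "update": {"revision published": "search"},   # assumed: updates feed back into search
}

def next_area(area: str, event: str) -> str:
    """Return the area the user moves to; stay put on an unknown event."""
    return TRANSITIONS.get(area, {}).get(event, area)

state = "search"
state = next_area(state, "found what I needed")   # user moves to consumption
state = next_area(state, "data incomplete")       # user moves to update
```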
More coming once I figure out how such a tool would look/work.
I will be posting a podcast on my initial thoughts later today/tomorrow: http://docandersen.podbean.com. Let me know your thoughts (comment here or comment there).
1. These services would be a change from what is done today. The goal here would be an evaluative services engagement where the actual process and data are evaluated first, then a migration.
2. There are no “cloud out” and “cloud in” tools today that are effective. There are a number of tools that are close (protogroup is the closest) but the logic and intelligence for these migrations hasn’t been built into a tool yet.
3. This is a mix of figuring out what the business needs, what can be secured and what is an acceptable risk for the business. As such the rules for these migrations may end up being extremely flexible going forward.
Watch for the podcast!
As if data would come to us like the line from the song “should I stay or should I go?”
Should I stay? Am I data that meets some of the criteria listed:
1. Too sensitive to be readily available
2. Too sensitive to allow for the risk of exposure
Should I go? Am I data that meets some of the following criteria:
1. By being readily available to all users of the company wherever they are, I would create a competitive advantage.
2. I can speed up the sales, delivery or other aspects of customer service by being easily available.
I could build these criteria out more, but you get the basic concept. The issue is how do we develop a process that will allow us to actually create an engine to move this data?
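One way such an engine might start: encode the criteria above as a predicate, with the "stay" criteria acting as a veto over the "go" criteria. A minimal sketch, assuming hypothetical field names and an arbitrary sensitivity threshold:

```python
def should_go(item: dict) -> bool:
    """'Should I stay or should I go?' applied to a piece of data.

    Stay criteria veto everything: too sensitive to be readily available,
    or too sensitive to allow the risk of exposure.
    Go criteria: broad availability creates competitive advantage, or
    speeds up sales, delivery, or other aspects of customer service.
    """
    if item.get("sensitivity", 0) >= 7 or item.get("exposure_risk") == "high":
        return False   # should stay
    return item.get("competitive_advantage", False) or item.get("speeds_service", False)

should_go({"sensitivity": 2, "competitive_advantage": True})   # goes to the cloud
should_go({"sensitivity": 9, "competitive_advantage": True})   # sensitivity wins; stays
```

Building the criteria out further would mean adding predicates, but the stay-veto shape stays the same.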
Transitional Services is the vehicle I believe will get us there. In the past, services was a human function. With transitional services it may be a computer or a human initiating the process. Transitional services focuses on ways to help the organization better manage the information they have (explicit) while enabling the information they want to capture (tacit) to move more easily into a formal system.
However, while reading that last paragraph it is easy to assume that Transitional Services is simply a way to hide KM; it is more than that.
Beyond sync there needs to be an effective way to move data from the business-owned cloud to the broader “SaaS cloud”. This would involve automated migration processes or simply using a human being (laptop) to move the data.
The how of the data migration is the key to Transitional Services – more coming!
Over the years various organizations have considered the concepts of architecture and how it interacts with the organization. It seems to me now that we are in a period of transitional services, where in addition to actually helping people build solutions with software, we can now influence the data measured and captured by those solutions.
The time of data management is near.
We’ve dreamed about it – those of us in the consulting and IT worlds. A search engine that finds everything. But returns only those things that actually apply to the question asked.
Transitional services is beyond traditional services in the sense that we are now helping customers consider the what (applications and solutions), the why (business value and business process alignment) and finally the how (the how of data – it almost should be the Tao of data). How gives us access into how the information is to be used in this new solution. With how we are able to leverage and build on traditional services and solutions to provide the right solution to the business.
Like anything, there is a hierarchy in which information flows. Formal data is usually published in a formal data site. It is not always the authority (how do you do that? Normally the response is not “go to x site and read y document”; it’s “go see Fred – he knows”).
Explicit knowledge has a publication system that allows you to evaluate the value of the data (consider the source; are they an expert?). Whereas much of the data even today remains in the tacit system (this blog would be an example of tacit information) or in the tacit delivery system (often called an expert system).
All of this leads me of course back to a central concept that has bothered me for years. How do you build a knowledge network within an organization? How do you ensure that information moves up and down the IC stack easily?
I’ve talked about salmon streams and bears, ways that information moves within an organization. But the reality is, there isn’t a perfect system today to get that napkin into IC.
That becomes the essential quest for architects today. Getting information off the napkins and out of the heads of people and into a system that others can use.
For years in my current job I focused on IC development, capture and reuse. As a company we focused on building a services portal that would bring creation, development and reuse into a single experience. The reality for us was that we are/were a product company, so the IC/IP of services was relevant but not the be-all end-all.
Which brings me to my quest around the concept of how information flows within an organization.
To me it seems logical that information should be like a salmon. Ideas that are new swim upstream against the current but are enabled/facilitated as they move further along. Not all ideas are the next great idea. You have to have a weeding system (bears) that removes the ideas (salmon) that won’t cut it. You have to be careful, however, that the ratio of weeding system to successful ideas is kept sound and that the ratio doesn’t unfairly favor one or the other (bears or salmon).
The goal of a system built to capture the “next great idea” is to facilitate the napkin-to-production process. There are other types of IP/IC that need to be presented in a structured or managed fashion. The considerations around IP/IC that represents the final authority on a topic I will address in a later blog.
The concepts of IP/IC management are very similar to those of communication and for that matter meeting management. You seek to get the salmon from the stream to the table in the least impactful manner and of course you want to avoid ticking off the bears.
What used to be KM (knowledge management) has become information architecture. That is a good transition overall. Information and how we acquire it, store it and move it within an organization will be the new frontier of computing for the next 5 or so years.
As we move from on-premise to cloud-based computing, the concept of transitional services will actually sit in the bailiwick of information architecture. How do you move information from your business to the cloud? What type of business information is relevant in the cloud? Where do you put that information, and how do you manage the three pillars of information?
I’ve been calling this transitional services, ensuring that the right information is available for the right people in time for them to make a solid or good business decision. It seems to me that transitional services may in fact change both the way we interact with information and with our very computers.
It is the future.
Interesting – I read a posting for an Information Architect on Monster – they want a GUI specialist – someone missed the boat.
To me an Information Architect is someone who specializes in the flow of information within an organization.
1. Authoritative data is stored in a location that everyone knows.
2. There is an underlying expert system that supports the authority site.
3. There is a lifecycle of the information – that allows for change, and for refresh in a time controlled manner.
4. This is easily accessed by search.
5. Where it exists, the search returns an authority.
6. Where it does not exist – the next most likely authority is presented.
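Points 5 and 6 together describe a fallback behavior that is easy to sketch. A minimal illustration, assuming a hypothetical `find_authority` function and made-up site records with `topic`, `is_authority`, and `relevance` fields:

```python
def find_authority(query: str, sites: list) -> dict:
    """Return the authority site for a query; if no authority exists,
    present the next most likely candidate by relevance score."""
    matches = [s for s in sites if query.lower() in s["topic"].lower()]
    authorities = [s for s in matches if s.get("is_authority")]
    candidates = authorities or matches          # fall back when no authority
    return max(candidates, key=lambda s: s["relevance"]) if candidates else None

sites = [
    {"topic": "expense policy", "is_authority": True, "relevance": 0.9},
    {"topic": "expense policy tips", "is_authority": False, "relevance": 0.7},
]
find_authority("expense policy", sites)   # returns the authority site
find_authority("policy tips", sites)      # no authority matches; next most likely
```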
Of course all of this would encompass a KM system – but would go beyond KM. You need to include digital media – but you also need to include the knowledge or IC creation process. It’s an iterative process whereby information is created and moves within the stack of the organization. The life of IC is relevant to the value of the data and the problems that the IC solves.
So a GUI specialist…ummm…
A person who builds an IC management system that includes an information publication process. (I like to call this getting that great napkin idea into a formal document, easily.)
A person who understands digital media and the creation of net new content and net new authoritative content based on new media.
Finally someone who understands how to build effective metadata collection as that remains the best way to build an effective search engine.
But then again in our world, sometimes it is more about how things look than how they actually work – 😉
New podcasts posted on http://docandersen.podbean.com – one more about the society of dead architects and another one about Fred and Ed.
For those interested – http://www.architect-center.com for more on the society of dead architects.
Really nothing to ramble on about today – kind of solved my business architect question with help from the netizens. Really not as concerned about the definition of information architect – although if you check Monster.com for job listings (information architect) it is interesting that many companies list IAs as GUI specialists. Why would information be GUI bound? Seems to be the old way of thinking in my eyes.
So today is short and sweet.