Expanding the cloud transformation equation…

20 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

I realize that I’ve spent a lot of whitespace recently on the concept of transitions. The reason is that I’ve talked to a number of vendors over the past 10-12 months. All of them have a transitional framework that maps IT, business and mission requirements to cloud solutions. They have wonderful paths that take you from where you are to the nirvana of cloud computing.

They go from your beginning, whatever your process is, to the cloud. I am always curious why a specific CSP would have a transition framework. The risk of course is that, like today, everyone has one and they are all different. Wouldn't it behoove companies to push the newer groups like IASA and The Open Group (in particular TOGAF as their framework) to build the transition frameworks? Then they simply consume the framework once rather than with each CSP they connect with. Or perhaps a group of SIs that are completely neutral could contribute to a unified LLC that produces a framework for government and a framework for commercial transitions.

You will be connecting to a lot of CSPs. No matter what, the specialization and capabilities of each CSP make the reality of the transition different and frankly create the risk. A unified framework published by an organization that doesn't have skin in the CSP game would be of value.

The old saw here is the plane in flight that you are building. That works only if you know you have everything you need to finish and the time required to finish when you take off. The plane being built in flight is really a methodology. You have everything needed to deliver results. Frameworks are broader and encompass many more variables. The company that takes off in an airplane (unfinished) armed only with a framework is going to be in trouble. With any luck you took off with the landing gear already attached.

Cloud is coming. There is something beyond cloud computing as well that will require a cloud presence to take advantage of. But the limits require a framework that understands what can and will fail, and a team of both the company and its partners working with CSPs to establish a common and unified transition plan.

I believe the answer lies in the Cloud Broker model. I've published this thinking in a number of different places, including this blog, several times. Transition in the end is not about understanding the organization's mission, goals and processes. It is about supporting a framework of capabilities that reduces the risk of transition.

More risk than the organization can handle costs the transition team their jobs. More risk than the organization can handle kills the project before it starts. It is a balancing act: the right level of risk with the right level of benefit. I wrote an equation yesterday that attempts to balance the need for transition against the reality of transition.

Does the risk of the transition equal the gain provided by the new solution?

  • The math for this is fairly complex. The first consideration is of course whether you have to transition (if you are not competitive in your market because you don't have a capability, you have to add that capability). That's a forced transition, and in that case speed is the most critical aspect. You find forced capabilities most often in the red ocean of competition. (Interestingly, that is why all the CSPs have transitional frameworks: the red ocean that is IaaS.)
  • A second consideration is time.
  • A third consideration is the overall cost.
  • A fourth consideration is the return on that initial cost, or ROI.

Risk is harder to quantify in dollar amounts; normally we can only tell what the impact in dollars is when the risk occurs or when we have to kick in mitigation or contingency plans.

(Overall cost + cost of risks + time required + any pressure, good or bad, around the solution) - (time to return + total value returned). If that is near zero, then the transition is a good one. If it isn't near zero but the pressure (red ocean) is there, you can still move forward; just be careful.
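A minimal sketch of that balance in code, with made-up numbers purely for illustration (the unit of account, the normalization of time and pressure into dollars, and the example figures are all my assumptions, not something from the post):

```python
def transition_balance(total_cost, risk_cost, time_cost, pressure,
                       time_to_return, value_returned):
    """Rough transition score following the formula above.

    Everything is expressed in the same unit (dollars here, an assumption;
    the post leaves the units open). A result near zero suggests the gain
    roughly offsets the cost, risk and pressure of the transition.
    """
    outlay = total_cost + risk_cost + time_cost + pressure
    gain = time_to_return + value_returned
    return outlay - gain

# Hypothetical example: a $500k migration carrying $150k of priced-in risk,
# $100k of schedule cost and $50k of competitive pressure, against an
# expected $750k of total returned value.
score = transition_balance(500_000, 150_000, 100_000, 50_000, 0, 750_000)
print(score)  # 50000 -- close to zero, so the transition looks reasonable
```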

It is after all a transition.

.doc

Scott Andersen

IASA Fellow.





The transition equation. Or the value of not landing on rocks when you transition to cloud computing solutions…

19 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

Does the risk of the transition equal the gain provided by the new solution?

It is perhaps the simplest rule of transitions you could have. Does the end justify the means? It is the opposite of the traditional saying that if you achieve your goal, then the means are justified.

With transitions that is not always the case. The reality of cloud computing today is that you have to be careful. Consider the old analogy of the person at the top of a ladder, 20 feet in the air, prepared to jump into a pool of water below. However, neither before they climb the ladder nor while standing on the platform are they told how deep the pool is.

Would you jump?

That's the cloud market today. We are not clear on how deep the pool is. We know there are factors that limit the amount of water we can leap into. Bandwidth, security and O&M planning are all things that will impact not only the depth of the pool itself but also what happens when you drop your solution into the pool.

In the end that is the test for the transition: drop a solution in the cloud and see if it works. In an infinite pool the ripple is small, but nothing stops it, so it goes on forever.

Sure, we can create all sorts of limits and concepts that would force the CSP to enable our transitional needs. But the risk isn't the first one in the pool. The risk is the one millionth in the pool. It isn't the first rock we care about in the end; it's the first rock that breaks the surface.

Each rock thrown in (or in this case each transition that occurs) displaces some water. The first few rocks into our pool will line the pool bottom. The depth of the pool is limited by a number of things so we have to be careful.

  • Not the total available bandwidth of the Internet but of that Cloud Service Provider
  • Not the total compute capacity of all clouds, just the one we are talking to

Over time we fill a pool with rocks. Now that isn't a bad thing, just something to consider. The water in the pool has to go somewhere. As we add rocks we increase the pressure to expand the pool. If the pool doesn't expand, we end up with rocks that break the surface and create navigation issues for the pool. Worse, jumping off that platform and landing on rocks isn't conducive to a successful transition.

Depth and breadth become our initial baseline. The next transitional baseline to consider has to be security. If the security of the organization is too far from what we are expecting, then the transition leaves us vulnerable for a time (as the security team learns the new IA model).

Finally we loop back to the reality of bandwidth. I guess for now it will always loop back to the reality that is bandwidth. It is a double-edged sword in the case of cloud computing. Traditional networks are built around tootsie pop security (hard outer shell, soft chewy center). As you move the core out into the cloud, you also need to move the security layer further into the organization and further out into the cloud. It's a reversal of the way networks have traditionally been built.

Take for example Netflix, the awesome purveyor of video on the web. They are a large (the largest, in fact) cloud-only company. But they are only securing access to their core and then pushing it out to you. First off, they are already in the cloud, so they don't have to worry about transitions. Second, their security focuses on the user IDs and credit cards they maintain as their hard inner core. If the content they share gets compromised, they can kill all the connections to that content and serve the same content from other servers.

Netflix, however, has to fear the reality of bandwidth. Yet their business won't fail if bandwidth declines, unless that results in a large number of account cancellations. It's why they focus their development efforts on building "design to fail" solutions and solutions that consume less bandwidth.
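A minimal sketch of that "design to fail" idea, not Netflix's actual implementation; the replica list, the fetch callback and the failure handling are all hypothetical:

```python
import random

# Hypothetical pool of replica servers holding copies of the same content.
REPLICAS = ["cdn-a.example.com", "cdn-b.example.com", "cdn-c.example.com"]

def fetch_content(content_id, fetch):
    """Try replicas in random order; if one fails (or has been cut off
    because it was compromised), serve the same content from another server."""
    for host in random.sample(REPLICAS, len(REPLICAS)):
        try:
            return fetch(host, content_id)
        except ConnectionError:
            continue  # design to fail: assume any single replica can disappear
    raise RuntimeError(f"all replicas unavailable for {content_id}")
```

The point isn't the code itself; it's that the content tier is treated as disposable while the hard inner core (accounts and payment data) stays protected.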

In the end, transition planning has to come from organizations that work with multiple clouds; organizations that understand not only the nuances of your business (or mission) but also the limitations of the various cloud service providers. No matter what the CSPs tell you, they each have fatal flaws. You need a partner that sees the flaws, knows the flaws and, in the end, can make sure there is enough water in the pool so that when you jump you don't land on rocks.

.doc

Scott Andersen

IASA Fellow





The risks of transitioning to the cloud – it's about the data…

18 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

I had a great hallway conversation at work about the concept of the Data of the Internet of Things (DiOT). The person pointed out that the data produced is going to continue to grow and frankly at a much faster rate than I personally would have projected.

  • It's about sensors.
  • It's about bandwidth.

Today there are millions of sensors walking about, connected to a network 100% of the time (or draining your battery trying to connect). From cellular to Wi-Fi, the personal productivity devices are creating and consuming data rapidly. I have a friend who has 11 different email accounts synced with his iPhone. He runs two different businesses from his phone, and the accounts handle different aspects of each.

There are many more data points than email. In fact, between home video sources, home weather stations and home connections you have tons of data-producing sensors. Add to that the number of people in the house and the number of cellular phones and you begin to see the scale of the problem.

In the end it is a cautionary tale for organizations considering cloud transformation today. Will the bandwidth expand quickly enough that you can move your organization to the DiOT? It's not a thing just because I wrote it in a blog: the data created by the Internet of Things is going to consume bandwidth, and the security required for some of the sensor data is going to consume bandwidth and processing (to operate the security).

The thing is, bandwidth is limited. In fact, without a switch to IPv6, for the most part new devices won't be able to connect. The pool of remaining IPv4 addresses is tight; the IPv4 space holds only 2^32 (about 4.3 billion) addresses in total, and it is finite.

So we have a problem. Transformations are starting. People are considering moving organizations to the cloud. Organizations are building sensors and sharing more and more data with anyone interested. Every day the number of free Wi-Fi hotspots increases. The number of cellular phones increases. The number of remote sensor systems you can buy increases.

Plus the number of companies running cloud applications continues to grow. More and more people are "cutting the cord" and moving away from cable and telephone providers. By the way, today one of the ways we manage bandwidth is that cable providers keep a chunk of bandwidth separate from the cellular and Wi-Fi worlds. Telephone (landline) and satellite providers do the same thing. When that bandwidth also shifts completely, it's going to reduce performance for everyone.

I have a really good friend who always tells me it's not about the data, it's about what you do with the data that is the real game changer. In effect, creating a tiered data system may be the transitional component for an organization considering leaping to the cloud.

Data tiering is simply accepting that information has various levels of criticality: some information is real-time, some is near real-time, and finally there is information that is nice to be able to access but isn't driving anything.

We've had this capability in networks for years: quality of service. If we can map data tiers onto the quality of service solutions that already exist, we can begin to alleviate some of the transitional risk around bandwidth. Not all the risk, mind you, but some of it.
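As a minimal sketch of what that tiering might look like in practice (the tier names, the toy classifier and the record fields are illustrative assumptions; the DSCP values are the standard EF, AF31 and best-effort code points used by network quality of service):

```python
from enum import Enum

class Tier(Enum):
    REAL_TIME = "real-time"            # must arrive now (e.g. alarms, control signals)
    NEAR_REAL_TIME = "near real-time"  # minutes of delay are tolerable
    REFERENCE = "nice-to-have"         # bulk or archival data, not driving anything

# Map each tier to a DSCP code point: EF (46) expedited forwarding,
# AF31 (26) assured forwarding, 0 best effort.
DSCP_BY_TIER = {
    Tier.REAL_TIME: 46,
    Tier.NEAR_REAL_TIME: 26,
    Tier.REFERENCE: 0,
}

def classify(record):
    """Toy classifier: tag a sensor record with a tier and its DSCP marking."""
    if record.get("alarm"):
        tier = Tier.REAL_TIME
    elif record.get("telemetry"):
        tier = Tier.NEAR_REAL_TIME
    else:
        tier = Tier.REFERENCE
    return tier, DSCP_BY_TIER[tier]

print(classify({"telemetry": True}))  # (Tier.NEAR_REAL_TIME, 26)
```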

In the end, as organizations begin their transitions, we have to consider three simple things (that will be expanded upon in a later blog):

  • Where is my data today
  • How much data do I need to share
  • What are the tiers of the data I want to share

Every organizational transition plan should include a consideration not of where the data is today but where the data will be in 5 years and even 10 years. Preparing for the Internet burp now will reduce the end game pain of your transition.

.doc

Scott Andersen

IASA Fellow





One toggle switch away from a brave new world…

17 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

About a year ago now I argued that motion was the new touch and that touch was slowly going to be less relevant than it is now. Motion is still struggling to launch as is often the case with new technology.

The improvements Microsoft made in Windows 8.1 to support touch computing are really good. Nowhere near where they need to be overall, but better than they have been in the past.

Motion, however, remains the future. The Leap controller and the new Haptix controllers give you the ability to interact with the computer without the keyboard and without the mouse.

The question I have about motion, and that includes the Section 508 (the US law on accessibility) versions that support eye movement and voice only, is how much further do we have to go? You can now effectively control your computer with your voice (Dragon). You can control your computer with touch (most of the new tablets). You can control the computer with pen (Windows tablets and Samsung tablets), and of course you have motion and motion + vision.

Most are fairly straightforward and easy to set up. There are minor configuration changes and updates that occur that you have to account for. Eye tracking systems require a clean line of sight, and the sensors really only work about 2-5 feet from the subject, so there are some physical room setups to consider. Motion requires airspace over the Leap (the controller reads upward) or space in front of the Haptix controller.

Voice of course requires an environment that encourages vocal interaction. Can you imagine a building full of people using voice commands to talk to their computers? In the end you would need noise-cancelling headphones for everyone.

Additionally, today you have to be careful what task you are doing. Something that requires considerable input (multi-line forms) is less well supported by these technologies than creating or editing a document is. I suspect that is a machine learning capability that will be added to the solutions over time, but for now it's a little harder than it should be to be effective.

In the end I wonder what comes next. An integrated input package that lets you use motion when you have space, pen when you don’t, voice when you can’t move at all and eye control when you are stiff and sore from working in the yard or exercising too much?
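As a toy sketch of what that toggle might look like in code (the mode names, the context flags and the selection rules are all hypothetical, chosen only to illustrate the idea):

```python
from enum import Enum, auto

class InputMode(Enum):
    MOTION = auto()
    PEN = auto()
    VOICE = auto()
    EYE = auto()

def pick_mode(has_airspace, can_speak_freely, can_move_hands):
    """Toy 'toggle switch': pick an input mode from the current context.
    The rules are illustrative, not any real product's heuristics."""
    if has_airspace and can_move_hands:
        return InputMode.MOTION   # room to gesture over or in front of a sensor
    if can_move_hands:
        return InputMode.PEN      # cramped space, but hands still available
    if can_speak_freely:
        return InputMode.VOICE    # quiet or private enough to talk to the machine
    return InputMode.EYE          # stiff, sore or silent: fall back to gaze

print(pick_mode(has_airspace=False, can_speak_freely=False, can_move_hands=True))
# InputMode.PEN
```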

Just a simple toggle switch to change the world.

.doc

Scott Andersen

IASA Fellow.





In the end we moved the wrong spear…

16 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

A conversation rule. This blog is a conversation. As things become more congealed and worth cleaning up, I do so on LinkedIn, IASA, CloudTweaks or SafeGov. But this blog is a conversation. During a conversation you don't stop and worry about grammar or spelling.

Based on that I acknowledge that I am terrible at grammar and spelling early in the morning.

Like I said this is a conversation.

That in the end is the value of a blog: you and the writer having a conversation. It's why I always carry the comments and emails from people forward into the blog, because the interaction is part of why I write.

The other part of writing is simply because it is there inside me and I need to get it out or my brain starts creating overly complex solutions to problems instead. Yesterday I launched a missive into the void about the concept of why cloud transformations fail.

You can’t always change people.

Human beings can and do frequently change. But the solutions around them sometimes change less often than the people do. Part of that is the consistency of the solution. Part of that is that it's really hard to get everyone to change. The reality of personnel turnover is that if you keep only one person from a team, and that is the only person who knows how to operate the old system, that is actually a good time to improve things. The more people you retain in a system, the harder the retraining for a new process becomes.

The resistance to change is everywhere.

You can see the resistance to change in meetings. The positive of including telephone, video and web meeting functionality is that it makes it easier to share information quickly. There is a negative impact: people bringing their laptops to meetings and having them open in front of them. A laptop open in front of someone who isn't working on the specific document is, well, multi-tasking. I once had a group of people scheduled for an important brainstorming meeting. None of them (all in the same building as I was) appeared in person. Their response when asked why they weren't there in person: "we like to multi-task."

So while technology slowly modifies behavior, in the end it doesn't change the actual behaviors. That requires a B.F. Skinner box, and you aren't really allowed to do that anymore. Tongue in cheek on that last comment, of course. The problem with a one-way conversation is you can't see the wry smile on my face sometimes.

The reality is that transformation projects are hard. In the end they have nothing to do with technology. They are tied deeply to the other end of the pool: people and process. Not that this is a negative rant against change; rather, the considerations for transformation are significant.

It boils down to a value conversation. It isn't how smart you are or how quickly you can build a solution. It's how deeply you understand the process that is being transformed, and how carefully you examine the requirements of the solution to build the right technical environment.

In the Monty Python movie Jabberwocky, the efficiency expert moves one spear in the weapons factory. The result is not the faster production of weapons; it is the complete destruction of the factory. Let's be careful in transformation projects that we don't start off or end by moving the wrong spear.

.doc

Scott Andersen

IASA Fellow.





It is nearly impossible to transform people…

15 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

For many years I have chased the great transformation ghost, looking at how applications are built and deployed within large organizations and helping them attempt to rationalize their reality. While cloud presents a new face for transformation, it remains just one tool in the tool belt of change.

What are the transformational limiters that organizations run into?

  • Large amounts of data
  • Massively complex custom applications
  • Massively complex processes that leverage applications
  • Specific data security requirements

Today, the way cloud works, massive amounts of data don't fit. You have to set up a partnership with a CSP at a level most won't even consider when talking about 1 petabyte or more of data. 1 petabyte for most CSPs represents more than a million dollars a year in storage fees.
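As a rough sanity check on that figure, using an illustrative list price of $0.09 per GB-month (an assumption on my part, not a quoted rate from any provider):

```python
# Back-of-the-envelope storage cost for 1 PB at a hypothetical $0.09/GB-month.
petabyte_gb = 1_000_000            # treating 1 PB as a round million GB
monthly_cost = petabyte_gb * 0.09  # ~$90,000 per month
print(monthly_cost * 12)           # ~$1,080,000 per year, before egress/request fees
```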

Massively complex custom applications require some consideration. I used to do application transformation for organizations. They (the organizations) were wrapped around 10-15 years of process baked into an application. In some cases they didn't even know everything that had been connected to the application in question; they just knew that the process was A-Z and the application handled the inputs for B-X.

I also had an opportunity to work with companies that had very simple applications but incredibly complex processes around the input. In those cases the application covered Q-X as far as inputs went, and the rest was a manual process. The word onerous came to mind; the answer to that, of course, was "that is the way we have always done this."

Specific data security requirements can also make for a difficult transition. First off, because people tend to assume security requirements are the way they are. In fact, as we transition to more advanced cloud solutions, security may be better in the cloud than on premises. The problem then becomes the moment data leaves your security perimeter and enters the CSP's security perimeter. The reality of data in motion, and of securing data in motion, is the next problem for transformation.

Massively complex applications and processes as well as unique data security do not fit well with the mobile paradigm that is cloud 2.0.

So how can you transform your application portfolio to embrace the new paradigm of cloud?

You can't.

You can do application portfolio analysis and build an enterprise application portfolio using some of the many fine tools out there. In the end you have a static list. The reality of cloud transformation isn't the applications.

Its everything else.

In the end the applications will run in the environment you build for them. They will run with the security you provide for them.

It is the operations, management and potential that will transform your enterprise. I attended a class once from Dana Bredemeyer. Dana is a really smart guy. He talked about the concept of enterprise architecture as an umbrella, where the lower sections of the umbrella (what is at the end of the umbrella's spines) represent the projects that connect to the central enterprise architecture.

It's as much a capabilities map as it is anything else. When you abstract your enterprise to capabilities, you begin to see why transformation fails. My favorite example of this is how many ways you can create a PDF in your organization. If it is more than one, then you have a capability problem.

The real transformation for IT in the next ten years isn't cloud. It's people and process. The technology continues to improve. Eventually the data and the security will be better and better. In fact, it may already be the case that cloud security is better than on-premises security, simply from the reality of practice. There will be outliers that have exceptional security, but on the whole the cloud providers are probably better overall. The problem is the reality of process and people.

You see, it's easy to transform technology. It is nearly impossible to transform people.

.doc

Scott Andersen

IASA Fellow.





The data we have, the data we need and oh by the way our ship is sinking….

14 10 2014

http://docandersen.podbean.com
http://docandersen.wordpress.com
http://scottoandersen.wordpress.com
My Amazon author page!!!!
http://www.safegov.org

I got a great email about the iOD or Internet of Data concept yesterday. It was from a friend who works with large data sets for an auto company. He mentioned first off that he liked the concept, but that it was more the ocean of data than the Internet of data. While the Internet is critical for the overall movement of the data, there is another concept that he felt was just as critical to the iOD concept.

The depth of data was what he called it. By depth he meant that, in effect, there is data you can consume now, data that can be consumed somewhat quickly, and finally the fire-hose effect.

Now, consumption was interesting to me. You can, regardless of screen or device, consume the data right now: fast, quick and effective data that can be easily leveraged. That fits of course with the broad training reality (10,000 hours to mastery), in that if you understand the context and setting of the data you can quickly consume it. In theory everyone driving can rapidly interpret the data presented by a stop light (go, proceed with caution, stop).

The next point was data that requires something else. This is data that may require an application or piece of hardware to interpret. It can be consumed, but probably not easily in its raw form. GPS data, which is all around us all the time, is easily consumed by a cell phone but not by a human being alone. The other example is a room filled with something; pick anything that you walk into and see. You are more likely to find most of the details in the room if you have a picture to examine after walking in. You understand the orientation and the context of the room, so the information can be slotted into specific bins and buckets. In the snapshot (or first walking into the room) you can see a percentage of the objects. With the picture you can see what you missed, and with the context you can put the rest into place.

Lastly, the fire-hose. How I regret the fire-hose. In a previous life we used to tell people that getting up to speed in the organization was like drinking from a fire-hose for the first year. I regret that now because, in effect, the fire-hose is a bad data analogy. The context of information cannot be overly complex. If it is overly complex, then the learner will struggle. Change one variable and everyone struggles. Looking back, the intelligent view of the fire-hose model is to divert the flow of water into a container and then evaluate the container as it fills, rather than the actual flow.

The ability to gather, analyze and use data is critical. The three models my friend presented are interesting, and I think there are many more models to consider in the consumption of data. It is also critical to consider the modality of the data: on-line versus off-line consumption makes a difference. Within the concept of on-line are the speed modalities and the reference modalities. You can't overwhelm someone with data (fire-hose). Experience speeds up the ability to consume data, but experience can miss nuances that the inexperienced would catch, so you have to be careful to balance experience with inexperience in managing on-line data flow.

The same is true of off-line data flow, but you can remove the size factor a little. The bigger modality for off-line data manipulation is of course how much time the solution requires. That is, if the solution has a boundary outside the person doing the research, then that boundary is the driver, not the time the person needs to complete the task. The old captain's saying applies: "if it takes 2 hours to build a lifeboat with the materials we have, but our ship is going to sink in one hour 32 minutes, I guess we launch an unfinished lifeboat and hope we can finish it once we are safely on the lifeboat."

more to come…

.doc

Scott Andersen

IASA Fellow.







