The “My Best Friend Box”

24 10 2014
My Amazon author page!!!!

To the large number of people who emailed me – yes, I am aware of the various cloud drives you can buy and run in your home. The home cache I am referring to would in fact be two-way, rather than following the cloud-drive model (store on your home network and reuse anywhere).

A cache device would encompass what the cloud drives do and a lot more. It would have the ability to communicate with the various video services and download a movie. The services would allow this because, frankly, it reduces their load, and they can DRM the video so that it's only playable by their player. They could even time-bomb the file with a 72- or 96-hour life. Sure, someone could hack the file at that point, but since they are paying for monthly viewing, it isn't the service's problem if they break the law.
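The 72- or 96-hour life could be as simple as an expiry stamp that the player checks before playing. A minimal sketch of the idea (the lifetime value and function names are illustrative, not any real service's DRM scheme):

```python
# Sketch of the "time bomb" idea: a downloaded file carries a download
# timestamp, and the player refuses it once the viewing window closes.
# Purely illustrative; real DRM is far more involved.
import time

LIFETIME_HOURS = 72  # assumed viewing window

def still_playable(downloaded_at, now=None, lifetime_hours=LIFETIME_HOURS):
    """True while the cached copy is inside its viewing window."""
    now = time.time() if now is None else now
    return (now - downloaded_at) < lifetime_hours * 3600
```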

I would think a connection between your cache and your home automation system would be a good idea, as long as there is enough security that someone can't log in to the cache and shut off your alarm system.

The cache needs intelligence built in. It should watch the home network: let x be the normal traffic level when people are home, and y the lower baseline generated by only the sensors in the home. When traffic stays below y for 15 minutes, that signals it is time to start syncing. Syncing should then stop when traffic climbs back toward x.
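The x/y trigger above can be sketched as a small controller with hysteresis. This is an illustrative sketch; the thresholds, one-minute sampling interval and class names are all assumptions:

```python
# Hypothetical sync trigger: sample home network traffic once a minute,
# start syncing after 15 quiet minutes, stop when traffic climbs back
# toward the "people are home" level x.

SENSOR_BASELINE_Y = 0.5   # Mbps: sensors-only traffic (y)
HOME_TRAFFIC_X = 5.0      # Mbps: normal traffic with people home (x)
QUIET_MINUTES = 15

class SyncController:
    def __init__(self):
        self.quiet_minutes = 0
        self.syncing = False

    def on_sample(self, mbps):
        """Feed one traffic sample per minute; returns True while syncing."""
        if mbps <= SENSOR_BASELINE_Y:
            self.quiet_minutes += 1
        else:
            self.quiet_minutes = 0
        if not self.syncing and self.quiet_minutes >= QUIET_MINUTES:
            self.syncing = True           # house looks empty: start syncing
        elif self.syncing and mbps >= HOME_TRAFFIC_X * 0.8:
            self.syncing = False          # traffic moving back toward x: stop
        return self.syncing
```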

You could even connect to a number of cloud storage services (the exceptional product CloudHQ does that today). Then you create a data portfolio. The cache could simply place your data where you want it so that you can consume it when you need it. Sounds a lot like the Syncverse model in the end.

Another very cool option is rotating the images you share on the various photo-sharing services, including Flickr and Photobucket. Today the exceptional Amazon Fire device lets you connect to your cloud drive and play the pictures you have stored there on your television. Syncing between your various cloud drives, plus a connection to a television, would add value to the home cache concept.

It needs a catchy name. There are many intelligent products that get close and circle the edges of this idea. So a catchy name is the only thing missing, right? (OK, there are a few development pieces missing, and maybe some hardware.) But overall it's a simple concept, and it needs a cool name.

Previously I called the concept around this the Myverse: a section of the broader Syncverse devoted to information I wanted, needed and used. In the end the name isn't catchy enough. Today all the cloud drives have cool names.

Thoughts? What is a good name for this new type of solution? It would be the IoT manager for a home network, providing connections to the various IoT and Internet services consumed in your home: a device that reduces the network footprint of your house when people are home by working when you are not there.

My initial thought: "My Best Friend. The box that does for you when you aren't there to do for yourself!"


Scott Andersen

IASA Fellow

Fixing home bandwidth continued…

23 10 2014

Yesterday I talked about a concept I've written about a number of times: improving home bandwidth. The Internet of Things (IoT) and some of the newer articles and analyst studies point to hundreds of smart devices sharing data and interacting with the world around them.

The article was a bit "rude" in that it referred to the IoT helping old people. I suspect the phrase "old people," like many other concepts from the past, just isn't politically correct in the end.

What it does speak to in the end (this is the newest Gartner report on the smart devices in our homes by 2022) is the concept of connection. The reality of home bandwidth tomorrow is going to be tricky.

Today the various ISPs advertise their in-home capabilities (fastest in-home Wi-Fi, upload speeds equal to download speeds). The reality is that if you have a Wi-Fi network in your home, you will see bandwidth degradation with just two devices watching a remote video source. You can buy service with up to 100-megabit download, but frankly that remains too expensive today.

You can bounce around with some easy to implement concepts.

  • Wired network for the heavy users
  • Wi-fi for sensors and other data producing devices

The two collide in the end at your router. A home router that can handle that load keeps the two kinds of traffic separate and hopefully leaves room for a third video source.

Today most homes have one more workaround: the actual cable that connects their satellite dish or connects them to the cable company. A huge chunk of bandwidth is saved by having that separate connection. If we cut the cable, as many people are doing, we become even more reliant on a small amount of net bandwidth in our homes.

I argued in my book "The Syncverse" that this is, in the end, a synchronization problem. I still believe that. With caching and intelligent synchronization we could in effect work around limited bandwidth by spreading usage out over a longer period. For example, if you go to work at 8 am, intelligent synchronization could bring things up to date while you are away; if you work from home, it could run at, say, 1 am. That would remove a chunk of the bandwidth consumed from 5 pm to 9 pm (imagine what a network looks like when people get home from work and turn on their home devices).
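That scheduling idea can be sketched as a simple off-peak window check; the window times below are my assumptions, not a real product's:

```python
# Illustrative off-peak sync windows: avoid the 5 pm - 9 pm peak and
# sync while people are at work or asleep. Times are assumptions.
from datetime import time

SYNC_WINDOWS = [(time(9, 0), time(16, 0)),   # workday, house empty
                (time(1, 0), time(5, 0))]    # wee hours

def in_sync_window(now):
    """Return True if `now` (a datetime.time) falls in an off-peak window."""
    return any(start <= now < end for start, end in SYNC_WINDOWS)
```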

There are a number of Kickstarter and Indiegogo projects that start down this path, and a number that make the path a little bumpier (home video interaction devices, etc.). Eventually I suspect we will evaluate internet providers as much on what they do in the home as on how much data they let us move up and down from the internet. We will evaluate their service based on their understanding of how we use information and how they manage the time available to move data.

In the end it isn't about what you promise or what you say you can do; it's how what you do fits into the lives we lead. Almost as if we were in charge (not quite, but almost).


IASA Fellow

Innovating around the limits of home bandwidth…

22 10 2014

I've been thinking a lot about the broad concept of the Data of the Internet of Things (DiOT), which I originally called the Internet of Data (IoD). When we think about companies and government agencies, they have specific goals and missions that in many cases make the use of some DiOT devices less likely.

At home, however, we may be heading toward this problem faster than we are in the enterprise. Wi-Fi is a pretty robust standard: inside your house (or your business) it has a spectrum and moves data within that bandwidth. The devices that impact your home network are different from the ones you use at work. For the sake of conversation, I've created a comparison below that we'll call work/life balance. The goal is to compare activities you do at home and at work and the amount of streamed information in each case.


Life Balance: which one uses more bandwidth?

  Work: A PC connected to the network for 8 hours, checking email and web sites.
  Home: Media devices on for 4 hours, streaming media files.
  Verdict: Streaming wins. Most HD media files are about 1.2 GB; most SD files are around half a gig. What happens when two computers in your house are streaming the entire time?

  Work: A web meeting focused on a project, lasting about an hour (maybe a little over).
  Home: Skype with your friends in Germany.
  Verdict: Clearly the web meeting wins here…

  Work: Getting weather information (going to the web).
  Home: A home weather station broadcasting your information to the web.
  Verdict: Most of the data here comes from the broadcasting station.
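Putting rough numbers on the first comparison (the HD file size is the post's own estimate; the email/web figure is my assumption):

```python
# Back-of-the-envelope math for the streaming-vs-office comparison.
# HD file size is the post's estimate; the email/web rate is assumed.

HD_FILE_GB = 1.2
EMAIL_WEB_MB_PER_HOUR = 30      # assumed light email/browsing load

def evening_streaming_gb(devices=2, films_per_device=2):
    """Two devices each streaming a couple of HD films."""
    return devices * films_per_device * HD_FILE_GB

def office_day_gb(hours=8):
    """Eight hours of email and web sites."""
    return (EMAIL_WEB_MB_PER_HOUR * hours) / 1024
```

An evening of streaming moves gigabytes; a full office day of email and browsing barely reaches a quarter of one.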

I could go on for a long time, but you get the idea. In the end there are many devices in your home consuming massive amounts of information, and our home networks begin to saturate fairly quickly. I've watched mine with an old-fashioned network sniffer and found my Friday-evening and Saturday-evening traffic to be around 70% saturation. It is the only reason I haven't cut the cord: if I did (no more cable/satellite TV service), I would increase my network stress considerably.

Personally, I think the next big innovation is the cache-ahead movie system: you go into your account and pick three to five movies you plan to watch in the near term. Those movies are downloaded to a cache on your system during the work day and the wee hours of the morning. Then, when you want to watch them, they are already available.
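A minimal sketch of that cache-ahead flow, with hypothetical names and a stand-in for the actual download:

```python
# Sketch of the "cache ahead" idea: the user queues a few movies, and
# the box drains the queue off-peak so titles are local at watch time.
# No real service API is implied; the download is simulated.

class MovieCache:
    def __init__(self, max_queued=5):
        self.queue = []          # titles the user plans to watch soon
        self.cached = set()      # titles already downloaded locally
        self.max_queued = max_queued

    def plan(self, title):
        """User picks three to five movies to watch in the near term."""
        if len(self.queue) < self.max_queued:
            self.queue.append(title)

    def sync_off_peak(self):
        """Run during the work day or the wee hours: drain the queue."""
        while self.queue:
            self.cached.add(self.queue.pop(0))  # stand-in for a download

    def play(self, title):
        """True if the movie is ready locally: no peak-time streaming."""
        return title in self.cached
```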

The real bandwidth issue for home networks is that they are heavily used for only small portions of the day; there are long stretches with little bandwidth use. Using technology like caching, we could move the home network's load around a little. That would, by the way, also help the broader Internet by shifting some percentage of traffic to off-hours rather than peak times.

Bandwidth in the end is finite. Human ability to innovate around limits is not.


Scott Andersen

IASA Fellow.

The risks of transitions….

21 10 2014

The grain of sand that slipped through my fingers yesterday without quite being completed is below.

Does the risk of the transition equal the gain provided by the new solution?

  • The math for this is fairly complex. The first consideration is, of course, whether you have to transition: if you are not competitive in your market because you lack a capability, you have to add that capability. That's a forced transition, and in that case speed is the most critical aspect. You find forced capabilities most often in the red ocean of competition. (Interestingly, that may be why all the CSPs have transition frameworks – the red ocean that is IaaS.)
  • A second consideration is time. Time is the driver and the destroyer of transitions. The window that represents the optimal transition time is critical; missing that window makes the transition a failure. If the cost remains high and the transition misses its potential window, the chance of cancellation increases.
  • A third consideration is the overall cost. Time, I've been told again and again, is money. Time isn't money; if time were money, the elite would have achieved immortality years ago. Time is an asset in transitions, and money is another. Enough money can buy time. A project with a high price tag may actually have a larger window than a project with a small initial cost. That's inverse to logic, but in reality the more money being spent, the higher up the food chain the ability to cut the cord moves (how is that for mixing two metaphors and creating a new one?). The more a project costs, the higher in the organization the person who can terminate it (or approve it in the first place) has to be. Simply put, expensive projects often have board-level visibility, which can be good (of course, the inverse is also true).
  • A fourth consideration is the return on that initial cost, or ROI. There is an old saw about doing technology for technology's sake. It isn't relevant here, because no one ever really did that. Certainly some R&D shops play with technology that is beyond the pale, but it isn't beyond the need. The R&D failure of today can become the hot new thing once its market is realized. In the end, if your project can align to the mission or business goals and return value beyond its cost, it becomes an interesting project.

Measure the following:

  • We are at a competitive disadvantage (remove time as a factor)
  • We are not at a competitive disadvantage (adds time removes competitive pressure)
  • In our company projects over X must have CEO approval (adds level pressure)
  • Our company likes to see projects that return the cost in 18 months (adds ROI pressure)

We can dance around other variations as well; there are hundreds. The four listed above are the ones I am going to build out for this series. Not that they are the be-all and end-all of this discussion; rather, they are the ones I picked.

More to come!


Scott Andersen

IASA Fellow.

Expanding the cloud transformation equation…

20 10 2014

I realize that I’ve spent a lot of whitespace recently on the concept of transitions. The reason is that I’ve talked to a number of vendors over the past 10-12 months. All of them have a transitional framework that maps IT, business and mission requirements to cloud solutions. They have wonderful paths that take you from where you are to the nirvana of cloud computing.

They go from your beginning, whatever your process is, to the cloud. I am always curious why a specific CSP would have a transition framework. The risk, of course, is that, as today, everyone has one and they are all different. Wouldn't it behoove companies to push groups like IASA and The Open Group (in particular TOGAF as their framework) to build the transition frameworks? Then they would consume the framework once, rather than once for each CSP they connect with. Or perhaps a group of completely neutral SIs could contribute to a unified LLC that produces one framework for government transitions and one for commercial transitions.

You will be connecting to a lot of CSPs. No matter what, the specialization and capabilities of each CSP make the reality of the transition different, and frankly that is what creates the risk. A unified framework published by an organization that doesn't have skin in the CSP game would be of value.

The old saw here is the airplane being built in flight. That works only if you know, when you take off, that you have everything you need to finish and the time required to finish. The plane being built in flight is really a methodology: you have everything needed to deliver results. Frameworks are broader and encompass many more variables. The company that takes off in an unfinished airplane armed only with a framework is going to be in trouble. With any luck, it took off with the landing gear already attached.

Cloud is coming. There is also something beyond cloud computing that will require a cloud presence to take advantage of. But the limits require a framework that understands what can and will fail, and a team drawn from both the company and its partners working with CSPs to establish a common and unified transition plan.

I believe the answer lies in the cloud broker model. I've published this thinking in a number of places, including on this blog several times. Transition, in the end, is not about understanding the organization's mission, goals and processes; it is about supporting a framework of capabilities that reduces the risk of transition.

More risk than the organization can handle costs the transition team their jobs. More risk than the organization can handle kills the project before it starts. It is a balancing act: the right level of risk with the right level of benefit. I wrote an equation yesterday that attempts to balance the need for transition against the reality of transition.

Does the risk of the transition equal the gain provided by the new solution?

  • The math for this is fairly complex. The first consideration is, of course, whether you have to transition: if you are not competitive in your market because you lack a capability, you have to add that capability. That's a forced transition, and in that case speed is the most critical aspect. You find forced capabilities most often in the red ocean of competition. (Interestingly, that may be why all the CSPs have transition frameworks – the red ocean that is IaaS.)
  • A second consideration is time.
  • A third consideration is the overall cost.
  • A fourth consideration is the return on that initial cost, or ROI.

Risk is harder to quantify in dollar amounts; normally we can only tell what the impact in dollars is if the risk happens, or if we have to kick in mitigation or contingency plans.

Overall cost + cost of risks + time required + any pressure (good or bad) around the solution - (time to return + total value returned). If that is near zero, the transition is a good one. If it isn't near zero but the pressure (red ocean) is there, you can still move forward; just be careful.
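The balance can be expressed as a small function. Putting time and pressure into the same units as cost is a simplifying assumption for illustration, as are the function names and the tolerance for "near zero":

```python
# Rough sketch of the transition balance described above. All terms are
# assumed to be normalized into the same cost units; that normalization
# is my assumption, not part of the original equation.

def transition_balance(cost, risk_cost, time_cost, pressure_cost,
                       return_time_credit, value_returned):
    """Positive means the transition costs more than it returns;
    near zero means risk and gain roughly balance."""
    return (cost + risk_cost + time_cost + pressure_cost
            - (return_time_credit + value_returned))

def looks_viable(balance, tolerance=10.0, red_ocean_pressure=False):
    """Near-zero balance is good; red-ocean pressure can justify
    moving forward anyway, just carefully."""
    return abs(balance) <= tolerance or red_ocean_pressure
```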

It is after all a transition.


Scott Andersen

IASA Fellow.

The transition equation. Or the value of not landing on rocks when you transition to cloud computing solutions…

19 10 2014

Does the risk of the transition equal the gain provided by the new solution?

It is perhaps the simplest rule of transitions you could have: does the end justify the means? It is the opposite of the traditional saying that if you achieve your goal, the means are justified.

With transitions that is not always the case. The reality of cloud computing today is that you have to be careful. Consider the old analogy of a person at the top of a ladder, 20 feet in the air, prepared to jump into a pool of water below. While standing on the platform, and before they climbed the ladder, they were never told how deep the pool is.

Would you jump?

That's the cloud market today. We are not clear how deep the pool is. We know there are factors that limit the amount of water we can leap into: bandwidth, security and O&M planning will all impact not only the depth of the pool but also what happens when you drop your solution into it.

In the end, that is the testing solution for the transition: drop a solution in the cloud and see if it works. In an infinite pool the ripple is small, but nothing stops the ripple, so it goes on forever.

Sure, we can create all sorts of limits and concepts that would force the CSP to enable our transitional needs. But the risk isn't the first one in the pool; the risk is the one-millionth in the pool. It isn't the first rock we care about in the end, but the first rock that breaks the surface.

Each rock thrown in (or in this case each transition that occurs) displaces some water. The first few rocks into our pool will line the pool bottom. The depth of the pool is limited by a number of things so we have to be careful.

  • Not the total available bandwidth of the Internet but of that Cloud Service Provider
  • Not the total compute capacity of all clouds, just the one we are talking to

Over time we fill the pool with rocks. That isn't a bad thing, just something to consider. The water in the pool has to go somewhere; as we add rocks, we increase the pressure to expand the pool. If the pool doesn't expand, we end up with rocks that break the surface and create navigation issues. Worse, jumping off that platform and landing on rocks isn't conducive to a successful transition.

Depth and breadth become our initial baseline. The next transitional baseline to consider has to be security. If the security of the organization is too far from what we are expecting, the transition leaves us vulnerable for a time (as the security team learns the new IA model).

Finally, we loop back to the reality of bandwidth. I guess for now everything will loop back to the reality that is bandwidth. It is a double-edged sword in the case of cloud computing. Traditional networks are built around Tootsie Pop security (a hard outer shell with a soft, chewy center). As you move the core out into the cloud, you also need to move the security layer further into the organization and further out into the cloud. It's a reversal of the way networks have traditionally been built.

Take, for example, Netflix, the awesome purveyor of video on the web. They are a large cloud-only company (the largest, in fact), but they are only securing access to their core and then pushing content out to you. First, they are already in the cloud, so they don't have to worry about transitions. Second, their security focuses on the user IDs and credit cards they maintain as their hard inner core. If the content they share gets compromised, they can kill all the connections to that content and serve the same content from another server.

Netflix does, however, have to fear the reality of bandwidth. Their business won't fail if bandwidth declines, though, unless that results in a large number of account cancellations. It's why they focus their development efforts on building "design to fail" solutions and solutions that consume less bandwidth.

In the end, transition planning has to come from organizations that work with multiple clouds: organizations that understand not only the nuances of your business (or mission) but also the limitations of the various cloud service providers. No matter what the CSPs tell you, they each have fatal flaws. You need a partner that sees the flaws, knows the flaws and, in the end, can make sure there is enough water in the pool so that when you jump you don't land on rocks.


Scott Andersen

IASA Fellow

The risks of transitioning to the cloud – it's about the data…

18 10 2014

I had a great hallway conversation at work about the concept of the Data of the Internet of Things (DiOT). The person pointed out that the data produced is going to continue to grow and frankly at a much faster rate than I personally would have projected.

  • It's about sensors.
  • It's about bandwidth.

Today there are millions of sensors walking around, connected to a network 100% of the time (or draining their batteries trying to connect). From cellular to Wi-Fi, personal productivity devices are creating and consuming data rapidly. I have a friend who has 11 different email accounts synced with his iPhone; he runs two different businesses from his phone, and the accounts handle different aspects of each.

There are many more data sources than email. Between home video sources, home weather stations and home connections, you have tons of data-producing sensors. Add the number of people in the house and the number of cellular phones, and you begin to see the flaw.

In the end, it is a cautionary tale for organizations considering cloud transformation today. Will bandwidth expand quickly enough that you can move your organization to the DiOT? It's not a thing just because I wrote it in a blog: the data created by the Internet of Things is going to consume bandwidth, and the security required for some of the sensor data is going to consume bandwidth and processing (to operate the security).

The thing is, bandwidth is limited. In fact, without a switch to IPv6, for the most part new devices won't be able to connect. The pool of IPv4 addresses remaining is tight; in fact, the number of remaining IPv4 addresses is finite.

So we have a problem. Transformations are starting. People are considering moving organizations to the cloud. Organizations are building sensors and sharing more and more data with anyone interested. Every day the number of free Wi-Fi hotspots increases, the number of cellular phones increases, and the number of remote sensor systems you can buy increases.

Plus, the number of companies running cloud applications continues to grow. More and more people are "cutting the cord" and moving away from cable and telephone providers. By the way, one of the ways we manage bandwidth today is that cable providers keep a chunk of bandwidth separate from the cellular and Wi-Fi worlds; telephone (land-line) and satellite providers do the same thing. When that bandwidth also shifts completely, it is going to reduce performance for everyone.

I have a really good friend who always tells me it's not about the data; it's what you do with the data that is the real game changer. In effect, creating a tiered data system may be the transitional component for an organization considering a leap to the cloud.

Data "tiering" is simply accepting that information has various levels of criticality: some information is real-time, some is near real-time, and some is merely nice to be able to access but isn't driving anything.

We've had this capability in networks for years; it's called quality of service. If we can map tiered data onto the quality-of-service solutions that already exist, we can begin to alleviate some of the transitional risk around bandwidth. Not all of the risk, mind you, but some of it.
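One hedched way to picture the mapping: assign each record a tier and a DSCP marking (the DSCP values 46 and 26 are common expedited/assured-forwarding conventions; the tier names and classification rules here are purely illustrative):

```python
# Hypothetical sketch of mapping data tiers onto network QoS classes.
# DSCP 46 (expedited forwarding) and 26 (assured forwarding) are common
# conventions; the classification logic is an illustration only.

TIER_TO_DSCP = {
    "real-time": 46,        # e.g. live sensor alarms
    "near-real-time": 26,   # operational data
    "nice-to-have": 0,      # best effort for bulk/archive traffic
}

def classify(record):
    """Assign a tier based on how quickly the data must move."""
    if record.get("alarm"):
        return "real-time"
    if record.get("operational"):
        return "near-real-time"
    return "nice-to-have"

def dscp_for(record):
    return TIER_TO_DSCP[classify(record)]
```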

In the end, as organizations begin their transitions, we have to consider three simple things (to be expanded upon in a later blog):

  • Where is my data today?
  • How much data do I need to share?
  • What are the tiers of the data I want to share?

Every organizational transition plan should consider not just where the data is today but where it will be in 5 and even 10 years. Preparing for the Internet burp now will reduce the end-game pain of your transition.


Scott Andersen

IASA Fellow

