As designed and implemented, a tree is a complete violation of the architecture rules. Its base does not look like it should support the large leafy expanse at the top of the structure. In fact it is amazing to me that trees don’t randomly give up the pursuit of sunlight and collapse. Perhaps they do, and there is a secret organization of anti-architects that runs around propping them back up.
Seriously, trees break all the rules: an expansive base supporting 200-300 feet of building that should, by rights, narrow at the top.
But they don’t narrow – they get wider.
How is that possible?
From a functionality perspective it is exactly what the tree needs: an expansive area to collect enough sunlight for energy.
From a requirements perspective the needs are met as well (take up the least amount of space in the forest per tree by expanding above ground and underground).
There is much to learn from the architecture of nature: the concept of matching elegance and functionality with requirements to create a functional form that provides more than the planned capacity.
Heck some trees even have apples.
My apologies to loyal readers. A long time ago I wanted to be a writer, so I have these lines in my head that I don’t really have anything else for – the title of today’s blog is one of them.
It’s actually from a sci-fi story I started 20 years ago, but it never made it past the introduction of the hero.
Which brings up a concept that is critical going forward: the concept of “what data do I keep.” My wife says I am a packrat and that if I thought I could, I would keep everything that was ever given to me. That is not completely true – I throw stuff away all the time. But like the opening line of this blog, I do have lots of crap that I keep as well.
What and how do we determine that IP is no longer relevant? I put a pure costing model out a few days ago. However, there are some variables that need to be included in the costing of disposable IP:
1. Can the problem be solved by more than one solution?
2. Do numerous people know the various solutions, increasing the probability of a quick solution?
3. Is there simply a good way and a bad way to solve the problem?
All of these play into the eventual downfall of IP. The problem, as we can see from the title, is that it’s always possible to justify a fragment of IP having value. The business rules should be concise, leading to the removal of data rather than the retention of data.
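As a sketch, the three variables above could be folded into a rough disposability score. The weights and field names below are purely my own illustration, not part of the costing model:

```python
# A minimal sketch of scoring a piece of IP for disposal.
# Weights and thresholds are illustrative assumptions only.

def disposability_score(num_known_solutions: int,
                        people_who_know_solutions: int,
                        has_clear_best_solution: bool) -> float:
    """Higher score -> stronger case for removing the IP."""
    score = 0.0
    # 1. Many alternative solutions make any single one less precious.
    if num_known_solutions > 1:
        score += min(num_known_solutions, 5)
    # 2. Widely known solutions can be re-derived quickly if lost.
    score += min(people_who_know_solutions, 10) * 0.5
    # 3. A single obvious "good way" is easy to rediscover.
    if has_clear_best_solution:
        score += 2.0
    return score
```

A unique solution that only one person knows scores 0, which matches the intuition that such IP is the most dangerous to throw away.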
The advantage of a smaller data pool is that search is now more effective. We can more easily find things related to what we are looking for, and we are not presented with numerous competing solutions to the problem.
I was talking to some friends last night about the concepts of spelling and grammar. Grammar and spelling are rules of communication, but do they at times limit the very conversation we are trying to have?
Of course there are many other barriers to communication around us. Language is the first one: even if two people share a common language (English), the regional variance (dialect) may make communication difficult.
The last thing is the idea or concept that the person is trying to share. Communication is the sharing of ideas, assuming both parties are listening; sometimes the idea itself is too radical for the “listener” to accept.
Examples of this:
The world is round…that idea caused a stir for a long time.
Earth is not the center of the universe. (another huge stir)
Earth is not the center of the solar system (see above – still a huge problem).
Human beings’ bodies cannot withstand speeds of over 35 miles an hour.
Man will never go to space.
You get the point. So the last piece of communication is the willingness and openness to a new idea.
Are you ready to communicate?
In the child’s game Go Fish, if no one you ask has the card you are looking for, you are allowed to draw from the deck. If you get what you want from the deck, your turn continues and you say “I got what I wanted.” If you play Go Fish with four people, each person holding 5 cards, then the first person to draw from the deck has a 1 in 32 chance of getting what they want (assuming exactly one copy of the wanted card remains in the deck).
There are times, with search engines, that you have less than a 1 in 32 chance of getting what you want.
The interesting thing about this is that in most cases there is a simple fix to improve the odds.
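The arithmetic above is easy to sanity-check (again assuming exactly one copy of the wanted card remains in the deck):

```python
# Quick check of the Go Fish arithmetic: 4 players x 5 cards
# are dealt out of a 52-card deck, leaving 32 cards to draw from.
DECK = 52
PLAYERS = 4
CARDS_EACH = 5

remaining = DECK - PLAYERS * CARDS_EACH   # 32 cards left in the deck
chance = 1 / remaining
print(remaining, chance)                  # 32 0.03125 -> "1 in 32"
```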
Metadata is the tagged information embedded with every document. The better the IP/IC management system, the more metadata it captures with each document.
Things like the number of revisions are pieces of metadata that are captured within most word processors by default.
But you need more than that – you need things like:
The project the IP was created for
Modifications to the concept
Problems solved by the concept
and so on…
How do you build a system that captures that information by tagging the actual creation process?
In the new transitional services model we would need to add a couple of additional pieces of data:
Cloud hosted one way
Cloud hosted two way
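Pulling the default fields, the system-captured fields, and the transitional services additions together, a record might look like the sketch below. Every field name here is my own illustration, not an existing schema:

```python
from dataclasses import dataclass, field

# A sketch of an extended metadata record for a piece of IP.
# All field names are illustrative assumptions.

@dataclass
class IPMetadata:
    # Captured by most word processors by default
    author: str
    revision_count: int
    # Captured by the IP/IC management system at creation time
    project_created_for: str
    concept_modifications: list = field(default_factory=list)
    problems_solved: list = field(default_factory=list)
    # Transitional services additions
    cloud_hosted_one_way: bool = False
    cloud_hosted_two_way: bool = False
```

The point is that the richer fields have to be captured by tagging the creation process itself; no word processor fills them in for free.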
One of the things that is critical to consider in moving forward with a transitional services solution is the concept of data management. This is something that has taken an interesting spin in recent years. When does a document, or IP itself, expire? Does IP ever expire? How long can information live before its relevance is reduced to nothing?
IP represents property, something tangible. The Kodak corporation this week discontinued the production of Kodachrome. Everything known about Kodachrome will move from practical information to historical information. Someday someone may invent a new solution that the “Kodachrome” data will impact, but for now the IP has exceeded its TTL (time to live, in the directory world).
Does data expire? Does it move from relevant to historical (as above)? What does it do when it stops applying to the problem it was solving before?
Without a data management solution within a transitional services offering, the reality is that in no time the problems that plague on-premises solutions will plague the cloud solution.
There is a finite limit, due to the energy consumed and heat produced, on the number of spinning disks the world can have.
So data management is critical.
We boil this down to two simple concepts:
1. Cost of keeping the data in the cloud.
2. Cost of the problem (solved by the data) recurring.
When the cost of 1 exceeds the cost of 2, it is time to remove the data from the cloud solution.
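The two-cost rule boils down to a one-line predicate. A minimal sketch, assuming both costs are expressed in the same units:

```python
# The two-cost rule as a predicate: remove the data when keeping it
# costs more than re-solving the problem it answers.
# Illustrative only, not a production policy engine.

def should_remove_from_cloud(cost_of_keeping: float,
                             cost_of_problem_recurring: float) -> bool:
    return cost_of_keeping > cost_of_problem_recurring
```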
What does required IP mean? Anything that someone has created that in fact solves a problem is IP. IC is the other iteration of IP, involving anything created by someone in an organization.
But IP is something that solves a problem. Why the distinction? It matters in terms of what the organization is going to place in the cloud scenario. The value of the solution has to be greater than the overall cost of the risk.
This gives us a new metaphor for IP: the concept of IP risk. Is the value of the solution greater than the cost of IP loss?
Our risk table would resemble this:

IP Cost | Problem Cost | Number of times problem applies | Cost of IP loss
X       | X            | X                               | X
This gives us an initial business rule for placing IP in the cloud. Does the end justify the means?
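One way to read the risk table and the ends-justify-the-means rule as code. The value formula (problem cost times occurrences, minus what the IP cost to build) is my own assumption, not a published model:

```python
# A sketch of one row of the risk table, with the business rule
# applied. Field names and the value formula are assumptions.

def place_in_cloud(ip_cost: float,
                   problem_cost: float,
                   times_problem_applies: int,
                   cost_of_ip_loss: float) -> bool:
    # Value of the solution: what the problem would cost us,
    # times how often it recurs, minus what the IP cost to build.
    value = problem_cost * times_problem_applies - ip_cost
    # The end justifies the means only if that value exceeds
    # the expected cost of losing the IP.
    return value > cost_of_ip_loss
```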
Do bees like red balloons? That’s the problem with creating a new metaphor/analogy: no matter what, you run the risk of making a semantic or environmental error.
Anyway – watch a bee fly. They really take the least-cost route to the flowers that they can. To a degree they have to search from side to side to ensure they haven’t missed any net-new flowers, but for the most part they fly as straight as possible.
Now when a human walks, there are obstacles and issues that bees don’t have to deal with as much. Bees have trees and hills, but relative to a straight line there is less impact. Hence the old adage: 10 miles as the crow flies, 30 if you take the road.
So honeybee search takes the least-cost, most effective route to the data we are reviewing. Barring trees, rocks and other things that even airborne routes are impacted by, the straight line is the best way to search.
If you have a thousand balloons (all red) in the air being chased by a thousand children through the streets of Paris, how many of them will be stung by a bee?
My brain doesn’t always work the way it should – this is an example of that.
The concept of Honeybee search is how do we find things. I’ve worked on teams that built and ran three different IP management systems. Each time the system got in the way of usage.
No matter what we did, the publication methodology always left the system wanting and often unable to deliver the information people needed (again, Bob Forgrave, a dear friend, coined the concept of dumpster diving for our IP system: download everything and “hope you find what you need”).
How do we move information from the napkin to the bestseller list?
As I thought more about the concept of beeline search, it became more and more apparent to me what a boon for search that would be. In a scenario where your goal is a specific set of data, what if the search engine could adapt to that?
The issue with search is always the reality of the data you are collecting/gathering/reusing. You need to build out a system that allows for a solution resembling the bee’s search:
1. Bee finds flower
2. Bee returns to hive
3. Bee communicates flower location
Now the health of the hive depends on the bee that finds the flowers finding healthy, successful flowers. Businesses rely on the same thing in relation to the data that is or isn’t selected by their IP system to present to users.
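The three steps above can be sketched as a toy “hive” that remembers which document answered which query, so later searchers go point to point instead of re-searching. The names and the query-to-document mapping are purely illustrative:

```python
# A toy version of the bee pattern: find flower, return to hive,
# communicate location. Later bees take the beeline.

class Hive:
    def __init__(self):
        self._flowers = {}   # query -> known-good document id

    def report_flower(self, query: str, doc_id: str) -> None:
        # Steps 2 and 3: the bee returns and communicates the location.
        self._flowers[query] = doc_id

    def beeline(self, query: str):
        # Later bees fly point to point instead of re-searching;
        # returns None when no one has reported a flower yet.
        return self._flowers.get(query)

hive = Hive()
hive.report_flower("transitional services pricing", "doc-042")
print(hive.beeline("transitional services pricing"))  # doc-042
```

The health of the “hive” here is exactly the point made above: the cache is only as good as the flowers the first bee reports.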
So the other day I was watching a bumblebee fly from flower to flower (while enjoying a nice cigar). I thought about the amazing gift bees have: somehow leaving a trail for their peers to find the exact flowers they found.
To me that seems to be the ultimate search pattern: point to point, with an exact targeting system.
I have a dear friend, Jim Wilt, who once proposed the concept of butterfly search: a fluttering search pattern that floats over the data, dipping only when there is something relevant. He actually printed t-shirts with butterfly search logos. I still have mine.
That concept, while a joke at the time of its creation, isn’t actually a bad solution to the high-level problem of dumpster diving. Dumpster diving (coined by Bob Forgrave, another dear friend) is the concept of going into an IP system and downloading anything that is remotely relevant to what you are looking for. Regardless of the value of any piece of IP, you download them all.
This brings me full circle to the bees. Point to point specialized/controlled data collection.
Versus the butterfly: light engagement with IP, fluttering over the entire span of the IP system.
Finally, dumpster diving: the concept of just downloading everything regardless of relevance.
Of the three, dumpster diving is of course the “worst” habit. Butterfly search actually encompasses much of what Google leverages in the way people search.
The value here is the creation of a point-to-point relevance search system: a system that conveys specific signals to the user, pointing them at the right IP every time.
The flight of the IP Bumblebee.