Do you really think brokerage of commodity clouds will be a thing?

June 5, 2015 CIO, Corporate IT, Devops, Future 1 comment

Why do we think that one day we’ll broker usage of commodity clouds against each other for the lowest price per usage?

I’m in tune with the clouderati.  I’ve read the books equating the current transformation of IT into a continuous delivery pipeline to the recent transformation of manufacturing into just-in-time.  I’ve participated in the conferences.  I’ve read the pundits’ blogs and tweets.  I’ve listened to the podcasts.

I still have a question…

Is there really a precedent for this idea of brokering multiple commodity clouds, and deploying workloads anywhere you get the best price assuming everyone provides basically the same features?

Is it like an integrator sourcing the same component from multiple manufacturers?  Is it like market brokering for the best-price electricity from the grid?  Is it like hedging fuel if you’re an airline?

Maybe, but I’m not sure those are truly analogous.  Is there an IT based precedent?

Is it like having a multi-vendor strategy for hardware, software, networking, or storage?  (Does that really work anyway?)

I don’t think so.

I’ve had conversations of late about the reality (or not!) of multi-vendor cloud, and the need for vendor management of multiple cloud offerings.  The idea seems to be that there will be a commoditization of cloud service providers, and through some market brokerage based solution, you’ll be able to deploy your next workload wherever you can get the best deal.
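To make the premise concrete, here is a minimal sketch of the decision such a brokerage tool would automate, assuming providers really were interchangeable.  The provider names and prices below are invented for illustration:

```python
# Hypothetical broker logic: given interchangeable providers quoting a
# price per instance-hour, deploy the next workload to the cheapest one.
quotes = {
    "cloud_a": 0.085,  # $/instance-hour (invented numbers)
    "cloud_b": 0.092,
    "cloud_c": 0.079,
}

def cheapest_provider(quotes):
    """Return the provider quoting the lowest price per instance-hour."""
    return min(quotes, key=quotes.get)

print(cheapest_provider(quotes))  # the broker would deploy here
```

The hard part, of course, is everything this sketch assumes away: identical features, identical management tooling, network reachability, and proximity to data.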

Keep in mind, I deal primarily with enterprise customers… not Silicon Valley customers.

There are still very few workloads that I see in the real world that fit this deploy-anywhere model.  If they exist, these apps are very thin links in the value chain limited by network complexity and proximity to data.  They typically exist to manage end-user interaction, pushing data to a central repository that doesn’t move.  Maybe there are some embarrassingly parallel workloads that can be farmed out, but how big of a problem is data gravity?  (the payload can’t be that big or the network costs eat you up)  How often do you farm out work to the cloud?  Is it so frequent that you need a brokerage tool, or will you simply negotiate for the best rate the next time you need it?

Multi-vendor IT management doesn’t come close to the dynamism suggested by such brokerage.  In my experience, each vendor develops its own management ecosystem to differentiate and make itself “sticky” to the consumer.  Some would say “lock-in,” but that’s not really fair if real value is gained.

Are all server vendors interchangeable, such that you could have virtualization running on several vendors’ machines all clustered together?  Only if you want a support nightmare.  No one really does this at scale, do they?  They may have a few racks of red servers and a few racks of blue, but they’re not really interchangeable due to the way they are managed and monitored.

Linux is a common example of this commoditization in action… Are all Linux OSes the same?  Can you just transition from RedHat to Ubuntu to SUSE on a whim?  Do you run them all in production and arbitrage between them?  You don’t, because even though they might be binary-compatible Linux, each distribution is managed differently.  They have different package management toolchains.  They’ll all run KVM, true; but you’ll need to manage networking carefully.  What about security and patching?  Are they equally secure and on the same patch release cycle?
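That toolchain divergence can be sketched in a few lines.  This is illustrative only; the package name and the distro-to-command mapping are just examples of how the “same” logical install differs per family:

```python
# Illustrative only: one logical operation ("install nginx") maps to a
# different package toolchain on each distribution family.
INSTALL_COMMANDS = {
    "rhel": "yum install -y nginx",       # RedHat family
    "ubuntu": "apt-get install -y nginx", # Debian/Ubuntu family
    "suse": "zypper install -y nginx",    # SUSE family
}

def install_cmd(distro):
    """Return the distro-specific command for the same logical install."""
    return INSTALL_COMMANDS[distro]

print(install_cmd("ubuntu"))  # prints: apt-get install -y nginx
```

Any cross-distro management tool has to paper over exactly this kind of mapping, for every operation it supports.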

Shifts between vendors only really happen across multiple purchase cycles, each with a 3-5 year term, and at the cost of a lot of human effort.

Will the purchase cycle of cloud be 3-5 months of billing?  Yes, this I could imagine.  This would allow multiple cloud vendors to compete for business, and over 12 months or so a shop could transition from one cloud to another (depending on how frequently their workload instances are decommissioned and redeployed).  And yet, network complexity and data gravity imply extreme difficulty in making the switch between clouds if app instances are clustered or must refer to common data sets; or if the data sets themselves must transition from one cloud to another (again with the network complexity and data gravity).

The only way to really engage in the brokerage model is to have very thin apps whose deployment does not rely on differentiated features of the cloud providers.  You’ll have to create deployment artifacts that can run anywhere, and you’ll have to use network virtualization to allow them all to communicate back to the hub systems of record.  Then there’s the proximity to the data to be processed.  You’d better not be reliant on too much centralized data.

It’s too early to have all the answers, but I’m suggesting that the panacea of multi-cloud brokerage imagined by the pundits will never really materialize.  If the past is any guide to the future, the differentiation of the various systems won’t allow easy commoditization.  They’ll be managed differently, and it’ll be hard to move between them.  Any toolset that provides a common framework for management will reduce usage to the least common denominator functionality.  And nothing so far is really addressing network complexity or data gravity.

The issue is more complex than the pundits and the podcasts would have you believe.  I don’t know the answers; do you think you do?  I’d love to hear your opinions.

Big IT is Swallowing OpenStack Upstarts

June 4, 2015 CIO, Corporate IT, OpenStack No comments

OpenStack partners getting swallowed by big corporations


And with that, OpenStack is now unquestionably a big-vendor-driven set of projects.  EMC acquired CloudScaling a while back, Cisco has announced its acquisition of Piston Cloud, and IBM is acquiring Bluebox.  The only meaningful independent OpenStack generalist company now is Mirantis.  (Props to HP, RedHat, and Canonical, but they also do other things.)

It’s not that anyone ever really questioned that OpenStack was being driven by corporate interests.  The cliche has always been that it has more vendor sponsors than customers.  But does that matter?  OpenStack seeks to provide a common IaaS layer that’s not owned by Amazon, so that all these corporate interests can collectively catch up to the head start Amazon enjoys.  At the same time, corporations that feel hosting their own IaaS is strategic to their business are encouraged to consider OpenStack, since their traditional IT vendors are also leveraging it as an emerging standard.

What do traditional IT vendors want with OpenStack upstarts?


What are these traditional players going to do with these OpenStack upstarts?  Ensure compatibility with existing solutions… build OpenStack-in-a-box products… provide service and support offerings around the platform… make sure there’s just enough innovation within OpenStack solutions to be competitive, but not so much that it devalues existing products too quickly… you know, the standard stuff.

Though there appeared to be more direct customers “using OpenStack” at the Vancouver Summit, it’s more nuanced than that.  The nuance is that OpenStack is not a product or a single project.  OpenStack is a collection of projects that encompass compute, networking, and storage.  Customers do not have to swallow the whole pill.  Many of the customers “using OpenStack” at the summit are really only using Nova, Glance, and Cinder; or Swift; or Ceph (not OpenStack, BTW); and very few are leveraging most or all of the projects for an all-encompassing deployment.

I think OpenStack has a future.  It’ll be up to the governance model to ensure that OpenStack remains a common playing field rather than diverging into separate, incompatible offerings.  It’ll be fun to watch the run-by-committee model and see if it can produce a truly viable IaaS while such a thing is still a relevant need in IT.

Measuring the Value of Corporate Data

February 15, 2015 CIO, Future, Research No comments

Value of Corporate Data

Increasingly, smart people are taking up the mantle of assigning economic value to the data or information assets within organizations.  Steve Todd is producing a blog series on the topic.  Dave McCrory has begun looking at gravity theory applied to data (Data Gravity) in an attempt to aid valuation, or at least guide investments.  Various scholarly papers written since the 1990s discuss methods of economic valuation.

First, who cares?  We are fully involved in the digital economy.  How do we estimate the value of an information-based company?  Why isn’t the very information asset on which many digital businesses are based represented on the balance sheet?  For how much should we insure our data assets against loss?  What is a fair price to charge for access to information?  What value can we assign information in order to collateralize a loan?

And on the flip side, what is the negative value of leakage of information into the hands of bad guys?  What about the tax implications of an information economy?  What value is being traded by an information currency every time I barter my personal data for online services or discounts at the local grocery?

Economists, venture capitalists, insurers, CFOs, and shareholders all care how valuable a company’s information assets are, but how can they assign fair value?  That’s the topic of an EMC research study in partnership with University of California San Diego researcher Dr. Jim Short to explore “all things data value”.  Jim is the Research Director for the Global Information Industry Center (GIIC).

To further this discussion, I’d like to comment on a paper I read several years ago on this topic titled “Measuring The Value Of Information: An Asset Valuation Approach” presented at the European Conference on Information Systems (ECIS ’99) by Daniel Moody and Peter Walsh.

In short, information is an asset in the technical economic sense, and it has measurable value.  Only the method of measurement is in question.  Moody and Walsh identify seven “laws” of information, on which I will comment further:

  1. “Information is infinitely shareable.”  I agree, information is not generally “appropriable” in the sense of exclusive possession.  Anyone can make a perfectly valid copy or share access to the original if they are within the network universe.  In this sense the value can be cumulative of all shared points of use, and is more valuable the more it is shared.  This is the primary driver of “value in use” of information.  On the other hand, what about copies of data?  Is duplicated information more valuable?  What if it is duplicated for DR purposes?  Perhaps only if it is still “owned” by the original party.  Data piracy is an example of copies of information returning no or negative value to the original owner.
  2. “Value of Information increases with use.”  Yep, absolutely, as opposed to other, more tangible assets that depreciate over time.  This law also leads to one of the most important methods of valuation: measuring the frequency of access of information, and by whom.
  3. “Information is Perishable.” They are suggesting that information’s value decreases over time.  I would tend to agree, but with the caveat that Big+Fast Data analytics are extracting value from old data in ways never imagined before.  There needs to be a method of predicting future value of presently unused information.  This type of future value speculation may be very hard to do.  I wonder if there are economic forces similar to holding land for long periods, in the hopes that one day gold may be discovered.  There may be categories of information holdings that are assumed to be more valuable due to past discoveries of value in similar types of datasets.
  4. “The value of information increases with accuracy.”  So I tend to think accuracy is overrated.  I mean, have you seen all the hoax articles on Facebook recently?  Fabrications and lies are valuable too.  But I get it.  Generally you want accuracy in your datasets.  This builds trust, which in turn builds value.
  5. “The value of information increases when combined with other information.”  Absolutely a gold star on this one.  This is much of the premise of Data Gravity.  Information pulls other information to it, and data sets become ever larger.  This accumulation of valuable information increases its pull of other information and applications.  Another name for this is “context.”  Information is much more valuable in context, and the buying and selling of information is a huge business today.  This law drives the “value in exchange” portion of information valuation as companies buy and sell data.
  6. “More is not necessarily better.”  Hmmm.  This law is beginning to seem a little outdated.  The authors discuss human psychology and information overload.  These days the machines are doing the analytics on our behalf, and more of this good thing does seem to be more of a good thing.
  7. “Information is not depletable.”  You don’t reduce the quantity of information as it is used.  In fact the opposite is true.  More information is created through the use of information.  This “metadata” (information about information) is often as valuable or more valuable than original content (just ask the NSA).  In fact, it is through this metadata on information usage that many aspects of value themselves are determined.

Since information is shareable, imperishable, and nondepletable, we can look at summing both the utility “value in use” and the market “value in exchange” to find the true value of a dataset.  I wonder if any of the following industries have similar valuation models:

  • Research libraries and their value to a community
  • Land holdings and their value for mineral rights
  • Methods of appraisal of antiques or other goods of subjective worth
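As a thought experiment only, a few of the laws above could be combined into a toy scoring model: value rises with access frequency (law 2), decays with age (law 3), and multiplies with linked context (law 5).  Every function name and coefficient here is invented for illustration; none of it comes from Moody and Walsh:

```python
import math

def toy_data_value(base_value, accesses_per_month, age_years, linked_datasets):
    """Toy illustration of laws 2, 3, and 5; all coefficients are arbitrary."""
    use_factor = math.log1p(accesses_per_month)  # law 2: value grows with use
    decay = math.exp(-0.3 * age_years)           # law 3: information is perishable
    context = 1 + 0.1 * linked_datasets          # law 5: combination adds value
    return base_value * use_factor * decay * context

# An unused dataset scores zero, no matter its nominal base value.
print(toy_data_value(1000, 0, 1, 5))  # prints: 0.0
```

Even a crude model like this makes the measurement questions concrete: you’d need telemetry on access frequency, dataset age, and linkage before you could put a number on anything.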

It’s an interesting topic to begin a dialogue on.  Much more needs to be discussed.  What do you think?