
Continuous Integration or Continuous Improvement?

One funny thing about DevOps is how often it is touted that constant, on-the-fly change is the way of the future in operations, and that DevOps is what enables it. While this sounds really good, and some organizations are actually doing this type of DevOps, I think it is time that those of us in the enterprise strongly question that premise.


While it is really very cool to think about moving an entire web server from a farm to the cloud with just a script, upgrading a system while it’s hot, or spinning up more instances of a server without having to configure anything, I propose that, for the average enterprise, it is simply not necessary.

I’m working on a test automation project that is being implemented for a generally available library. In that case, test automation makes perfect sense: it delivers standardized testing with standardized reports that prospective users can review before implementing or upgrading. The need for that level of effort and maintenance (remember that nearly all test systems are code too) is much less clear for a library you’ve developed internally for use among your own applications. Testing such a library should still be mandatory; the question is what the coverage needs to be. If 80% of your applications utilize 10% of the library, I think I have an automated test coverage plan for you, one that balances the cost of implementation against the benefit of having the tests run as part of the build effort.
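To make that concrete, here is a minimal sketch of what “cover the hot 10% first” might look like. The library name, modules, and functions are hypothetical stand-ins, and the coverage flags assume pytest with the pytest-cov plugin:

```python
# A sketch only: concentrate automated tests on the small slice of an internal
# library that most of your applications actually call. "ourlib" and the
# functions below are hypothetical stand-ins for your own code.

import pytest

from ourlib.core import parse_record, write_record  # the heavily used 10%


def test_record_round_trip():
    # The round-trip behavior the bulk of our applications depend on.
    record = {"id": 42, "name": "widget"}
    assert parse_record(write_record(record)) == record


def test_parse_record_rejects_garbage():
    with pytest.raises(ValueError):
        parse_record("not a record")


# Run as part of the build, enforcing coverage only for the hot module:
#   pytest --cov=ourlib.core --cov-fail-under=80 tests/
# The rest of the library still gets tested, just not held to the same bar.
```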

The same is true of moving web servers around. Let’s face it, in day-to-day operations most enterprises just don’t do this. Really don’t. So having automation scripts to move a web server because you did it once may not be the best use of your time. If you are one of the organizations that perpetually moves things around, then yes, this is a solid solution for you. But if hardware replacement cycles are the most likely reason you will next need to spin up a whole new copy of an app, or move a server from one host to another, then automating that process is probably not where your hours should go.

The thing is, DevOps is not a black-and-white endeavor. Little in IT is. Think about the organizations you’ve known (and we’ve all known them) that tried to standardize on a single language or a single database. It rarely works out, not because the decision to make such a move wasn’t serious, but because the needs of the business trump the desires of IT management to focus skill sets. DevOps is trying to simplify a complex environment with a high rate of change. That’s hard enough; don’t shoot for automating everything that moves.

Think of each automation you write as a liability. I know that sounds weird and counter to current DevOps thinking, but each script, like it or not, will be dependent upon the (changing) environment it runs in. Unless you are one of the one or two organizations I’ve worked with that have abstracted their entire infrastructure (with a huge man-hour investment, I might add), each change in your architecture shows up as a maintenance cost on your existing automation. Most of the time that cost is worth it, but ignoring it will drag your IT operations down even while you are improving things. The best you can hope for by automating little-used processes is a reduced return on your effort. The worst is a nightmare of perpetually out-of-date scripts that have to be modified just about every time they’re used.
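To illustrate what that liability looks like, here is an illustrative sketch, not anyone’s real deployment script; every host name, path, and command in it is made up:

```python
# Every hard-coded value here is an assumption about the environment, and each
# one becomes a maintenance task the moment the architecture changes.

import subprocess

WEB_HOST = "web01.internal.example.com"     # breaks when the farm is renamed or moved
DEPLOY_PATH = "/opt/webapps/storefront"     # breaks if the filesystem layout changes
RESTART_CMD = "sudo service httpd restart"  # breaks on a switch to nginx or containers


def deploy(artifact):
    """Copy an artifact to the web host and bounce the service."""
    subprocess.check_call(["scp", artifact, WEB_HOST + ":" + DEPLOY_PATH + "/"])
    subprocess.check_call(["ssh", WEB_HOST, RESTART_CMD])


if __name__ == "__main__":
    deploy("storefront-1.2.3.war")
```

Three constants, three separate ways the script quietly falls out of date the next time the environment shifts under it.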

Tools are coming that are more focused than what’s available today, and they will ease this pain – a lot. The thing is, until they’re ready and your staff has had time to learn them, they won’t help. And like any new market, this one could take a while to shake out. So for the near term, just weigh the cost/benefit equation for each process you want to automate. If it saves operator man-hours, is used frequently, and is not too terribly complex, that’s probably where you want to start.
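A back-of-the-envelope version of that weighing, with made-up numbers purely for illustration:

```python
# Rough cost/benefit arithmetic for a single candidate automation.
# All figures are invented; plug in your own.

hours_saved_per_run = 1.5        # operator time saved each time it runs
runs_per_year = 26               # how often the process actually happens
build_cost_hours = 40            # effort to write and test the automation
maintenance_hours_per_year = 10  # keeping it in step with the environment

net_hours_per_year = hours_saved_per_run * runs_per_year - maintenance_hours_per_year
payback_years = build_cost_hours / net_hours_per_year

print("Net savings: %.0f hours/year; payback in %.1f years"
      % (net_hours_per_year, payback_years))
# A frequent, simple process pays back quickly; a once-a-year hardware-refresh
# script may never recover its own maintenance cost.
```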

Yes, we’re still talking “low-hanging fruit”. That is always the best place to start, and it gives you the biggest return on your man-hours.

After all, isn’t the point of this whole exercise to get more time at the beach?

Product Roadmaps – from vendor to CIO tool

Thought I’d take a moment to drop a bit of an idea out there for making the most of vendor relationships.

You see, product roadmaps in the tech space are simply a sales and marketing tool. They are designed so that whoever you are talking to can point at them and say “those fifteen must-have items? Look! They are coming!”


How most of us really view roadmaps… 

But there is wealth in product roadmaps. We all view them with skepticism because we all know plenty of cases where the roadmap and reality didn’t mesh.

So the other day, when I was telling a friend how to get more out of roadmaps than the vendor intends, I decided I’d share the tidbit with you all too. It’s really pretty simple.

When a vendor shows you the roadmap, ask for a copy. If this is the first time you’ve gotten one, also ask for a roadmap from 12-18 months ago (they’ll likely claim they don’t have one, but ask anyway). Create a location somewhere in your org – intranet, NAS, cloud storage, wherever – to store roadmaps, and drop what you have into a directory bearing the vendor’s name.
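If you want the archive to stay consistent as it grows, here is a small sketch of one filing convention; the share path, vendor name, and file layout are mine for illustration, not anything a vendor provides:

```python
# Files each roadmap copy under <archive>/<vendor>/<YYYY-MM>-roadmap.<ext>.
# The archive location is whatever shared storage your org already uses.

from datetime import date
from pathlib import Path
import shutil

ARCHIVE = Path("/shared/roadmaps")  # intranet share, NAS, cloud-synced folder...


def file_roadmap(vendor, source_file, received):
    """Copy a roadmap into the vendor's directory, named by the month received."""
    dest_dir = ARCHIVE / vendor
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / ("%s-roadmap%s" % (received.strftime("%Y-%m"),
                                         Path(source_file).suffix))
    shutil.copy2(source_file, dest)
    return dest


# Example: file_roadmap("ExampleStorageCo", "roadmap_2014H2.pdf", date(2014, 7, 15))
```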

Then you have an actual basis for judging a vendor’s ability to deliver what’s presented in the roadmap. It won’t be perfect – a lot of things impact product and feature development – but it’s far more than what you have now, which is skepticism. Comparing last year’s (or any older) roadmap to the current product literature should give you an idea of how well they delivered. If a promised item isn’t in the feature list on the website, it’s worth asking whether they actually have it.

Now you have turned the tables on the roadmap. Instead of a potential smoke-and-mirrors list of promises, you are using it to evaluate a vendor’s willingness and ability to deliver. Comparing past promises to past performance gives you an edge in choosing the solutions that are best for your organization.

Whenever a vendor meeting is scheduled, briefly review their old roadmap, compare it to the literature on their website, and go in forearmed with knowledge of what they said would happen and what actually did.

I did this in the storage space while writing for Network Computing, and you find relatively quickly that vendor size matters far less to delivery than other factors do. Having the roadmaps on hand helps you understand which vendors – be they huge multinationals with decades of experience or tiny startups with just a couple of people – are more likely to deliver what they promise.

Of course, many vendors will be resistant to this intelligent use of their roadmaps. In fact, I didn’t tell them I was doing it at the time, because I worried they’d stop providing me this valuable tool. It is of course up to you how your org handles getting and keeping copies of roadmaps, but I heartily recommend doing so. Use the tools available, and avoid “we can’t do that yet” failures in your IT endeavors.