A Security Twist.

(Photo: USS Constitution)

Like most of you, I am not a dedicated security person, but security duties have fallen to me in one form or another in almost every role I’ve filled. The fact is that all of IT is involved in security, whether we acknowledge it or not. This was true before DevOps, and it is even more true in a DevOps environment, where an already overtaxed security group needs to show the developers and ops members of DevOps teams what to do rather than do it all themselves, because DevOps creates more work for security simply by releasing more often and speeding up the rate of change.

If you are like me, for the last year or two most of your security mind-share has been consumed by the frantic rate of change in the average IT environment. There seems to be something new to worry about almost every month, and there have been some spectacular failures to prove that we really do have to worry.

So the announcement of years-old speculative-execution bugs in just about every chip on the market was an unwelcome surprise, no doubt for you as much as for me. And it’s a surprise we were highly unlikely to have discovered independently, no matter how good our security teams are. I’m an architecture aficionado; I write compilers and linkers as a hobby, so knowledge of the inner workings of CPUs is part of my required tool set. And I had zero reason to suspect this was an issue.

Which brings me to the current direction my thoughts are taking. What else do we not know? How can we plan for such broad issues?

I fear the answers to those questions are “a whole lot” (feel free to insert an expletive) and “not much” (feel free to sigh when you say it). We are dependent upon an insane number of other engineers and developers, and on their security practices, simply to keep the IT lights on. From chips to operating systems to VMs and containers to libraries to apps to hosting providers… you name it, our infrastructure relies on others, and most of the time we don’t really know how much security acumen they applied. If you’ve been in IT for long, you’ve seen a vendor do something definitively unsafe, and those are just the instances you can see.

So what can you do? I honestly don’t have an answer at this point, but it’s a problem we are increasingly going to have to deal with. Internal configuration and coding errors create vulnerabilities, but so do coding errors in the products we use. I’m beginning to suspect we only think our devs create more of them because we can actually see when our devs create them. In this regard, at least open source lets us look, but even the much-touted “anyone can look at and update the code” is not reality. How many open source products do you use? For how many of those have you done a thorough code review? A small number of orgs can claim “all the critical ones”. I don’t think any org can claim 100% review coverage of the source for 100% of the open source apps in use. So again, you are counting on others and their acumen. “Someone looked at the open source,” you tell yourself. I would argue that Heartbleed proves that is not always true.

I guess the best you can do is prepare to be boarded. Arm the troops and watch the mainsail so the rigging doesn’t get snagged. In other words, expect that a vulnerability you are not aware of, in someone else’s code, will eventually get you, and have a reaction plan in place. That’s about the best we have, for now.
