
Enterprise IT Context for the CTO

Bob Gourley





Vendor-Side DevOps Practices Can Still Deliver Better Value By @CaldwellW | @DevOpsSummit #DevOps

How DevOps practices can positively impact application delivery for DoD (and other) clients

Vendor-Side DevOps Practices Can Still Deliver Better Value While Client-Side Government Processes Catch Up
By Wes Caldwell

With the private sector making the cultural and technological shift to better DevOps practices, it was only a matter of time before private providers to government clients began to probe how DevOps practices can positively impact application delivery for DoD (and other) clients. I heard a lot of questions targeted to this exact area at a recent event I attended in Washington, D.C., where I was on an expert panel regarding accelerating application delivery in the government.

I bring up the cultural aspect because, after all, it takes groups of humans buying into better practices before new tools and processes can take root — and there are significant challenges to adopting these in government channels. We must approve and deploy in an environment with significant cultural, regulatory and security guardrails.

The new gospel of agile development and continuous delivery is antithetical to the realities on the client side — an environment where waterfall methodologies prevail, and the need to define large-scale releases across year-long timeframes makes it very challenging to operate in an agile manner.

Accelerating application development and enabling continuous delivery processes empowers development teams to own their code from inception to deployment. For this to work, you have to break down traditional barriers between development (your engineers) and operations (IT resources in charge of infrastructure, servers and associated services). In the private sector, IT can set up a self-provisioning environment that lets development teams move at the required speed without ceding control of enterprise resource management – things such as compute, storage, and random access memory (RAM). This “self-service” model lets development teams run continuous delivery processes that rebuild their application stack from the ground up in literally minutes, while still allowing IT administration to monitor and manage overall allocation, cost, and security boundaries with ease.

But with private vendor-government relationships, development resides on the vendor side, typically in their corporate datacenters, and IT is controlled by the government entity, which makes culture-driven collaboration tougher to achieve and highlights another major hurdle: certification and accreditation (C&A) for deployment to government networks. A common scenario in the private sector — delivering a beta version in the cloud and letting the customer validate it and provide feedback — is more challenging on government projects where testing and validation are done onsite. Government software acceptance processes are in place for a reason, but they can hinder the continuous delivery methodology that is at the core of DevOps strategy.

I’m only bringing these challenges up in the service of highlighting what vendor-side development teams can do with new DevOps practices to increase the quality and feature-richness of what is delivered, if not the speed. Let’s explore this next.

Don’t Let What You Can’t Do Stop You from Doing What You Can
Vendors are constantly looking for changes they can make now to improve deployment processes and deliver more feature-rich applications, faster.

Deployment optimization has been low-hanging fruit in terms of finding improvements. One of the challenges facing application providers is deploying into several government-controlled cloud environments at once — examples include Amazon Web Services (AWS), Pivotal Cloud Foundry, and Red Hat OpenShift. Each cloud computing provider has “opinionated” ways of handling things such as load balancing, elastic scaling, service discovery, data access, and security, to name just a few. Additionally, how an application is deployed into each of these environments can vary greatly.

This is where container technologies help out. Docker is leading the pack in this area, allowing you to package an application with all of its dependencies in a standardized manner. Docker allows application providers to containerize applications and package them in a consistent way for ubiquitous deployment across multiple government cloud platforms. Docker is the next leap past virtualization: rather than bundling a full guest operating system per instance, containers share the host's Linux kernel while isolating each application's processes and dependencies, sparing the Linux host and making better use of client IT resources, which drives cost effectiveness. Coupling container architectures such as Docker with a solid continuous delivery process can really supercharge your development teams in their application development and delivery.
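As a concrete illustration of that packaging idea (a minimal sketch, not from the original post — the base image, artifact path, and port are all assumptions for a hypothetical Java service), a Dockerfile bundles the application and its runtime dependencies into one portable image:

```dockerfile
# Minimal sketch: package a hypothetical Java service and its runtime
# dependencies into a single image that deploys the same way on any
# Docker-capable platform.
FROM eclipse-temurin:17-jre

# Copy the pre-built application artifact into the image.
COPY target/app.jar /opt/app/app.jar

# The container carries its own libraries and runtime; only the Linux
# kernel is shared with the host.
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Built once (e.g., `docker build -t vendor/app .`), the same image can then be pushed to any of the government cloud platforms mentioned above that accept Docker images — the consistency across targets is the point.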

As I pointed out earlier, there are some hurdles yet to be cleared before application development and delivery can become truly optimized in government organizations. DevOps requires cultural, technical, and process evolution to fully realize its benefits. We don’t operate in a perfect world, but there are many things that an application provider to the government can do today to optimize its internal development and deployment pipelines and provide better value to its customers. Cloud architectures hold great promise in their ability to promote applications to new heights in ubiquity and scale. Companies that embrace this change and master DevOps practices will succeed in this new IT landscape.


More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder and partner at Cognitio Corp and publisher of CTOvision.com.