Building Effective, Secure Container Delivery Pipelines with Docker, rkt et al.

Opinions on how best to package and deliver applications are legion and, like many other aspects of the software world, are subject to recurring trend cycles. On the server side, the current favorite is container delivery: a “full stack” approach in which your application and everything it needs to run are specified in a container definition. That definition is then “compiled” down to a container image and deployed by retrieving the image and passing it to a container runtime to create a running instance.

Here, I’d like to talk about how we can apply lessons learned from shipping code in many different formats to build effective, secure container delivery pipelines.


Described this way, container delivery does not sound much different from most other packaging and deployment models: replace “container image” with “WAR file,” “AMI” or “OVA” and the process looks pretty much the same. A trivial point, perhaps, but one worth making: the fact that containers aren’t that different from other full stack formats means there are quite a few lessons we’ve already learned that we can apply to the process of building and delivering containers.

One thing certainly does stand out in the general discussion around container delivery, however: the strong emphasis on how containers apparently will “empower the developer” and “free them from the shackles of Ops.” Unlike delivering, say, a WAR file, .NET application or Rails app, a container is not just a bundle of traditional “application code”: it includes all the other levels of the software stack, right down to the operating system.

Most container technologies provide convenient mechanisms so that developers generally don’t have to bother with the actual details of configuring all the lower levels, but can inherit these from a base image instead. In most cases, though, the developer – as the “deliverer” of the final artifact – is still ultimately responsible for the entire container.

The point being: application developers often aren’t experts in, or even particularly interested in, the levels of the system below the application tier. More often, the temptation – and I’ve fallen victim to it myself – is to “just quickly change this setting because I read in that Stack Overflow post that it might fix the problem we’re having.”

This is especially tempting with containers as a distribution format because the “system levels” are encapsulated in the base image and so are practically invisible – a generally very desirable characteristic. With other full stack packaging formats – OVAs, VirtualBox images, AMIs and the like – the system levels are not abstracted away as much, so it feels more logical that Ops should play a role in the production process of such packages.

Not that the “in your face” presence of the system levels is something we want to repeat: arguably, one of the reasons the other formats haven’t caught on in the way many hope containers will is precisely because they have such a comparatively heavyweight feel. Still, as the recent discussions about the number of insecure images in the Docker Registry show, we need to find a way to add more Ops-side input into the container delivery process than is common today.

The points below assume that the development team is indeed ultimately responsible for the definition of the entire system. Different models are possible with containers, too. For example, you could have a PaaS-like model in which the developers provide app components that are automatically combined with an Ops-owned container definition. However, I would not call this a container delivery model because here the container definition is not the deliverable, but a runtime implementation detail.

Here, then, are my “food for thought” discussion points for building effective, secure container delivery pipelines.

1. Developers provide container definitions in SCM

The container definition – a Dockerfile, or another higher-level definition that “compiles down” to a container descriptor – is the source deliverable, and so should be stored as a versioned artifact in a source control repository.
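To make this concrete, here is a minimal sketch of what such a versioned container definition might look like; the registry name, base image, paths and port are all hypothetical:

```dockerfile
# Hypothetical Dockerfile, versioned in source control next to the app code.
# Inherit from an Ops-approved base image (see point 3 below).
FROM registry.example.com/base/openjdk:8
# Add the application artifact produced earlier in the pipeline.
COPY target/app.jar /opt/app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```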

2. Container definitions and dependencies are compiled to container images in an Ops-controlled environment

Concretely: the build or CI system that generates the container images from the container definitions should not be owned or administered by the development team. This helps ensure that all Ops-related checks that need to be executed against the container definitions and associated dependencies are carried out correctly.

Implementing this recommendation does not require the entire CI setup to be controlled by Ops. For example, you could limit direct publish access to your image registry to an Ops-managed “image build service”, which could be called from a developer-run CI server.
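As a sketch of what that boundary could look like, consider a minimal Ops-managed “image build service” that is the only component holding registry push credentials; the registry name and the policy-check hook are assumptions, not something from the original post:

```python
# Sketch of an Ops-managed "image build service". Developer-run CI calls
# build_and_push(); only this service holds push credentials for the registry.
import subprocess

REGISTRY = "registry.example.com"  # hypothetical Ops-controlled registry


def run_ops_checks(build_dir):
    """Placeholder for the Ops-side checks described in points 3 to 5b."""


def build_and_push(build_dir, name, version):
    run_ops_checks(build_dir)  # reject non-compliant container definitions
    image = f"{REGISTRY}/{name}:{version}"
    subprocess.run(["docker", "build", "-t", image, build_dir], check=True)
    subprocess.run(["docker", "push", image], check=True)
    return image
```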

3. Minimal base image catalog is enforced

Just as many build environments for application code enforce a whitelist of libraries and other allowed dependencies, your container pipeline should only process container definitions that inherit from a whitelisted set of supported base images. From a maintenance perspective, this whitelist should be as small as possible.

If exceptions need to be made – a particular project requires a component that can only run on a specific OS, for example – these should be limited to container definitions for that specific project.
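Enforcing the whitelist can be as simple as inspecting the FROM line(s) of each Dockerfile before building. A minimal sketch, with purely illustrative whitelist entries:

```python
# Minimal base image whitelist check for Dockerfiles. The whitelist
# entries below are hypothetical examples.
BASE_IMAGE_WHITELIST = {
    "registry.example.com/base/openjdk:8",
    "registry.example.com/base/alpine:3.3",
}


def check_base_images(dockerfile_path):
    with open(dockerfile_path) as f:
        for line in f:
            parts = line.split()
            # Handles multiple FROM lines (e.g. multi-stage builds) as well.
            if len(parts) >= 2 and parts[0].upper() == "FROM":
                if parts[1] not in BASE_IMAGE_WHITELIST:
                    raise ValueError("Base image not whitelisted: " + parts[1])
```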

4. Developer-provided container definitions can be pre-processed to choose a different base image

Requiring development teams to manually update their container definitions in source control whenever the base image whitelist is updated, e.g. after a security patch, is tedious and creates unnecessary delay. Instead, the image build system should be able to automatically choose a different base image, if necessary, with suitable notification back to the development team.

Container definition formats that allow symlink-style image references, such as Docker via the “latest” tag, can support this out of the box. However, you may want to exert more fine-grained control over base image choice, such as automatically replacing a reference to an explicitly-specified version such as v10.4.8 with v10.4.9 if v10.4.9 contains an important security patch.
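One way to sketch such a pre-processing step: rewrite superseded base image references before the build and report what was changed, so the development team can be notified. The version mapping below is a hypothetical example:

```python
# Pre-processing sketch: swap explicitly-pinned base images that Ops has
# superseded (e.g. after a security patch). The mapping is hypothetical.
SUPERSEDED = {
    "registry.example.com/base/openjdk:v10.4.8":
        "registry.example.com/base/openjdk:v10.4.9",
}


def preprocess_base_images(dockerfile_path):
    with open(dockerfile_path) as f:
        lines = f.readlines()
    replacements = []
    for i, line in enumerate(lines):
        parts = line.split()
        if len(parts) >= 2 and parts[0].upper() == "FROM" and parts[1] in SUPERSEDED:
            new_base = SUPERSEDED[parts[1]]
            replacements.append((parts[1], new_base))
            lines[i] = line.replace(parts[1], new_base)
    with open(dockerfile_path, "w") as f:
        f.writelines(lines)
    return replacements  # feed this back to the development team
```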

5a. Developer-provided container definitions can be scanned to enforce particular policies

Even though image inheritance means that developers generally don’t need to mess with lower-level system settings in their container definitions, nothing prevents them from doing so. For example, a container definition could change the security configuration of the OS, install insecure versions of libraries, create open mail relays, and so on.

Ideally, code review that includes Ops will prevent such changes from ever making it into the container definition. You will most likely also be running security scans against your running container instances to try to catch such problems after the fact. The ability to automatically check for “problematic” parts of a container definition at image build time – having a linter for container definitions, if you will – is an additional tool that should be in your toolbox, however.
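A minimal “linter” along these lines might scan the definition text for policy violations at image build time. The policies shown are illustrative assumptions, not a recommended rule set:

```python
# Container definition "linter" sketch: flag policy violations at image
# build time. The example policies below are illustrative assumptions.
import re

POLICIES = [
    (re.compile(r"curl[^\n]*\|\s*(ba)?sh", re.I),
     "pipes a download straight into a shell"),
    (re.compile(r"^\s*USER\s+root\b", re.I | re.M),
     "runs the application as root"),
    (re.compile(r"^\s*EXPOSE\s+25\b", re.M),
     "exposes an SMTP port (open relay risk)"),
]


def lint_container_definition(dockerfile_text):
    findings = [reason for pattern, reason in POLICIES
                if pattern.search(dockerfile_text)]
    return findings  # a non-empty result should fail the image build
```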

5b. Developer-provided containers can be “black box” tested to enforce particular policies

If the previous point can be described as “white box” testing of container definitions, then this point is about black box testing of the resulting image: ensure that your image build system is able to create instances of new container definitions in a safe, sandboxed environment and run assertions against them (using a tool such as Cucumber, for example).
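As a sketch, such a black-box smoke test could start the freshly built image in a sandbox and assert against it before the image is published; the port, health endpoint and crude wait are assumptions:

```python
# Black-box test sketch: run the new image in a sandbox and assert against
# it. Port, endpoint and timing are hypothetical; a real harness would use
# Cucumber-style scenarios and proper readiness polling.
import subprocess
import time
import urllib.request


def smoke_test(image):
    container_id = subprocess.check_output(
        ["docker", "run", "-d", "-p", "8080:8080", image], text=True).strip()
    try:
        time.sleep(5)  # crude wait for the app to come up
        try:
            response = urllib.request.urlopen("http://localhost:8080/health")
            return response.status == 200
        except OSError:
            return False  # connection refused, HTTP error, etc.
    finally:
        subprocess.run(["docker", "rm", "-f", container_id], check=False)
```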

6. Images in your registry that were created from a particular base image can be invalidated

The ability to enforce a base image whitelist at build time should prevent any new container images from referencing insecure or otherwise unsupported base images. But what about all those images already in your registry that inherited from such a base image? How do you prevent new container instances being created from those?

Consider implementing a system that allows you to check, just before spinning up a container instance from an image, whether that image is still “safe”. This can be as simple as creating a wrapper for your ‘instantiate container’ command that checks for the absence of an ‘unsupported’ tag or other piece of metadata on the image, or as advanced as a plugin or extension that hooks directly into your container runtime.
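The simple wrapper variant could look like the following sketch; the label name used to mark invalidated images is a hypothetical choice:

```python
# Wrapper sketch around 'docker run': refuse to instantiate images that
# Ops has marked as unsupported. The label name is a hypothetical choice.
import json
import subprocess
import sys


def safe_run(image, *run_args):
    out = subprocess.check_output(
        ["docker", "inspect", "--format", "{{ json .Config.Labels }}", image],
        text=True)
    labels = json.loads(out) or {}
    if labels.get("com.example.unsupported") == "true":
        sys.exit(f"Refusing to run {image}: image has been invalidated")
    subprocess.run(["docker", "run", *run_args, image], check=True)
```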

7. Images derived from a “banned” base image can be rebuilt using an updated base image automatically

When a base image is banned, you want to be able to immediately trigger new container image builds, using an updated base, for each container definition inheriting from the now banned base image. Otherwise, you won’t have any runnable container images for that application until the next code change, or until someone manually triggers the delivery pipeline for that app, since the latest version is now “unsafe”.

Note that I’m not trying to recommend this as a “standard” part of the image build process – the correct way to create new container images once a base image is invalidated is definitely to update the container definition in source control and allow the pipeline to build a new image version based on that. Rather, this capability is a stop-gap to help bridge the period until that new image is available.
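A stop-gap rebuild trigger might iterate over a catalog of container definitions and rebuild those affected against the updated base; the catalog structure, registry and naming scheme below are hypothetical:

```python
# Stop-gap sketch: rebuild every container definition that inherits from a
# banned base image against an updated base. Catalog format is hypothetical.
import subprocess

REGISTRY = "registry.example.com"  # hypothetical Ops-controlled registry


def rebuild_affected(definitions, banned_base, updated_base):
    # definitions: iterable of (build_dir, name, version, base) tuples
    for build_dir, name, version, base in definitions:
        if base != banned_base:
            continue
        dockerfile = build_dir + "/Dockerfile"
        with open(dockerfile) as f:
            text = f.read()
        with open(dockerfile, "w") as f:
            f.write(text.replace(banned_base, updated_base))
        image = f"{REGISTRY}/{name}:{version}-rebuilt"
        subprocess.run(["docker", "build", "-t", image, build_dir], check=True)
        subprocess.run(["docker", "push", image], check=True)
```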

8. Post-deployment commands can be automatically run against running containers

While all the new image versions are building, you’ll still have plenty of running container instances that use the now banned base image. In that case, it’s very useful to be able to specify commands to be run automatically against all container instances that meet a certain condition (such as inheriting from a particular base image).

Modifying containers at runtime in this way is generally frowned upon, and this capability should again only be considered a temporary fix until new image versions based on updated container definitions in source control have been built, and all running instances have been updated.

But if you have large numbers of running container instances and many images that need rebuilding (which might take quite some time – think build storm), the ability to quickly apply an “emergency band-aid” can be essential. Even if it only takes a few minutes until all the new images have been built and all running container instances have been updated, that can still be a big problem in the face of a critical security vulnerability.
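A sketch of such an “emergency band-aid” runner, assuming images are labelled with the base image they inherit from (the label key and the example patch command are hypothetical):

```python
# Emergency band-aid sketch: run a patch command in every running container
# whose image is labelled with a given base image. The label is hypothetical.
import json
import subprocess


def exec_in_matching(base_image, command,
                     label_key="com.example.base-image"):
    container_ids = subprocess.check_output(
        ["docker", "ps", "-q"], text=True).split()
    for cid in container_ids:
        out = subprocess.check_output(
            ["docker", "inspect", "--format", "{{ json .Config.Labels }}", cid],
            text=True)
        labels = json.loads(out) or {}
        if labels.get(label_key) == base_image:
            subprocess.run(["docker", "exec", cid] + list(command), check=True)

# e.g. exec_in_matching("base/openssl:1.0.1e", ["/usr/local/bin/apply-patch.sh"])
```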

And finally…

For inspiration ;-)

[Video: How Shipping Containers are Made]


With thanks to Boyd Hemphill for his thoughtful review comments.
