Blueprinting continuous delivery inside the enterprise

Lothar Schubert and Laurence Sweeney

Lothar Schubert is senior director of solution marketing and Laurence Sweeney is vice president of enterprise transformation at CollabNet, a development platform as a service (DPaaS) provider.

Lothar Schubert and Laurence Sweeney of CollabNet compare different levels of enterprise maturity and ways to bring each type of organization closer to a continuous delivery model.

Interview conducted by Bo Parker, Bud Mathaisel, Robert L. Scheier, and Alan Morrison

PwC: How do established large enterprises begin to move to a continuous delivery model? Where do they start, and how do they then integrate what they have developed using this new capability back into the core?

Lothar Schubert: We have been working closely with customers and with some of the analysts on this question, and the result was a blueprint organizations can use to deploy enterprise cloud development. The blueprint considers existing practices that we see in organizations of different sizes across different industries, and it consists of five steps, or principles, that organizations follow as they mature their development processes. (See Figure 1.)

Figure 1: Principles and maturity levels in CollabNet’s DevOps blueprint

The first step [embrace the cloud] is really about standardization as well as consolidating and centralizing the assets associated with your development processes, not just about doing stuff somewhere in the cloud. It’s as simple as centralizing use of and providing common access to shared resources in the organization: a common view of code, development processes, and development artifacts, as well as documents, wikis, and requirements.

The second step [implement community architecture] starts with mapping the business and enterprise architecture according to a taxonomy that allows different stakeholders to work together effectively. Particularly in globally distributed, large organizations, different stakeholders work on different projects. You need to have one common view and to structure your development portfolio and the different stakeholders according to the taxonomy.

PwC: Does that effort require an enterprise architecture?

Laurence Sweeney: My focus is working on these kinds of transformations, and my experience has been that when new tools such as Subversion and Git and some of the practices such as agile and DevOps come into traditional enterprises, they’re not usually brought in by the execs. They’re quite often brought in by the practitioners, and quite often it’s under the covers.

Certainly before 2009, no one had heard the word DevOps, and it was quite common for portfolio managers to focus very tightly on requirements and ensure value was being delivered. It was kind of OK that there was a black box in the middle. As long as the developers were happy and didn’t make too much noise, stuff more or less happened.

But in many cases, organizations have ended up with a very fractured development environment. It’s not unusual to walk into an enterprise and find that they have every tool under the sun, sometimes two of them. And now they want to start doing things like agile.

You start having real problems if the source code is in two or three dozen different repositories, if requirements management is all over the place, and if some teams are using some of these more modern tools and they’re completely disconnected from the corporate portfolio management. You can end up with a nightmare very quickly.

So is a new enterprise architecture necessary? The answer is yes and no. It depends on how you want to approach the problem. I see people approach the problem in three major ways.

In the first way, people close their eyes and hope the problem goes away. In other words, they work on something else, and the problem just doesn’t make it to the top of the stack because another part of the business is hemorrhaging. Let’s set that case aside for the purpose of this conversation and look just at the people who have decided they want to make a change for good reasons.

In the second way, an enterprise architect designs all this beautiful stuff up front. The architect will lay out the hierarchies and taxonomy, and that’s a really good approach if you have the bandwidth, the right people, and a genuine greenfield to work in. You’re essentially building a completely new environment and then moving people to it.

Forge.mil is a good example of this kind of greenfield opportunity. Basically it was an “if you build it, they will come” situation. It was a cloud development initiative for the DOD [US Department of Defense], and they did the enterprise architecture work up front. They laid out what they wanted to do. They built it. They constructed the projects and communities. I think that’s a very effective way to work.

The third way is sometimes a bit more pragmatic and contrasts with the full enterprise architecture approach. It is probably best described as draining the swamp. You don’t do the up-front work and build the full greenfield environment because you don’t have time.


Instead, the first thing you need to do is to get projects onto a cloud development platform such as CollabNet. You just get all the stuff into one place, or centralize it. You need to get your arms around the problem. You need to find out where all the source code, requirements, and projects are. After that, the problem space suddenly gets smaller for most enterprises.

If you approach the draining-the-swamp scenario from the enterprise architecture point of view, you will worry about everything that’s out there and will likely get bogged down. But if you are pragmatic, initially stand up the platform very vanilla, and just move people onto it, people will immediately start coming out of the woodwork and saying, “Hang on a second. This product or this project is retiring in 18 months,” for example. An ROI discussion can follow, and it provides the data to make investment decisions. You may discover that some significant part of your problem space is not worth moving, for a variety of good reasons.

Perhaps you can archive it. Perhaps it’s a project that never paid off. You can remove those projects from the list. Everything else gets moved onto the platform. That’s the first step, but it doesn’t mean you can’t have your enterprise architects working in the background.

Getting back to Step 2, once you have corralled everybody onto a single standard platform, you know where all your IP is. Ideally, along the way you are also giving some thought to how you want to structure things going forward.

PwC: What happens then in Step 3 [codify development processes] of your blueprint?

Laurence Sweeney: With startups and modern greenfield environments, codified development processes are pretty much going to be agile and will draw on a lot of the thinking from the DevOps community.

But if your enterprise has been running software for 10 to 15 years or more, you must identify which of your processes are agile and which are waterfall. Then you must ensure the right projects are in the right process. When you do an ROI analysis, you’ll find that some projects are executing just fine as waterfalls. There’s no business advantage to moving them to an agile process. On the other hand, for lots of projects there will be a business advantage to making the move, so you want to invest in codified development processes.

Lothar Schubert: One company we worked with, Deutsche Post, is a really good example of the codify development processes step of the blueprint. The company outsourced many of its development efforts and had about 200 vendors doing various development work for it. Deutsche Post also had an old application portfolio in its enterprise architecture. The challenge for Deutsche Post was that it had no common way to qualify the development processes, and it had different metrics for every vendor. Every vendor used its own methodology and code repositories. This situation was obviously a nightmare for management.

In that case, you assess the quality of the projects delivered. You assess which projects—and which vendors—are going better than others. The best ones inform your codified development process and your standard set of templates for process, the tools you can use, and the metrics by which to measure vendors.1

PwC: What about Step 4?

Laurence Sweeney: That step is to orchestrate DevOps. These steps imply a serial process, but in the best cases, you’re working on these as related work streams. Perhaps think of them as swim lanes with interconnected concerns.

Orchestrating DevOps is really about making sure you have automated your continuous integration and continuous delivery. You need to make sure those capabilities are available, so you can start collaborating enterprise-wide. For this step, you need to have good role-based access controls and allow people to see as much as can be seen within the constraints of the separation of duties. In a collaborative version control platform, trust and transparency are created, but that doesn’t mean you throw away your role-based access controls or give up on separation of duties.
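In practice, “automating continuous integration and delivery” boils down to a pipeline that builds, tests, and publishes every commit without manual steps. The following is only a minimal sketch of such a job; the repository URL, the Maven commands, and the assumption that a CI server such as Jenkins triggers it on each commit are illustrative, not details taken from the interview.

```sh
#!/bin/sh
# Minimal sketch of a CI job a server such as Jenkins might run on every commit.
# The repository URL and the choice of Maven as the build tool are placeholders.
set -e                                                    # abort on the first failing step

svn checkout https://scm.example.com/svn/app/trunk app    # fetch the latest source
cd app
mvn clean verify                                          # compile and run the automated tests
mvn deploy -DskipTests                                    # publish the versioned artifact for downstream use
```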

PwC: Git is a software instantiation of Linus Torvalds’ model of trust. Is that model consistent with what you’ve just been talking about?

Laurence Sweeney: If you look at the basic model of trust that is in Git [a distributed version control system developed by Linus Torvalds of Linux fame; see the article, “Making DevOps and continuous delivery a reality,” on page 26 for more information], it’s a very technocratic model and it’s one I actually like as a developer. It often is referred to as the benevolent dictator model. Sometimes Torvalds claims to be benevolent, and sometimes he claims to be less so.

But what is problematic is the concept of compliance. Git is a distributed version control system, and as a developer it gives me great flexibility in how I refactor my code, what my code base looks like, and ultimately where I push my code. From a compliance point of view, by contrast, if you check something in to Subversion, it’s there forever. There is no obliterate command. The only way to get stuff out of Subversion is an offline admin process with dump and filter. With Git and the rebase command, you can make the history in your repository look like and say whatever you want.
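A small command-line sketch makes the contrast concrete; the branch name and repository paths are hypothetical, but the commands are the standard Git and Subversion tools Sweeney is referring to.

```sh
# With Git, a developer can rewrite committed history locally and publish the result.
git rebase -i HEAD~3              # reword, squash, or drop the last three commits
git push --force origin feature   # replace the published branch with the rewritten history

# Subversion has no online equivalent; removing content means an offline
# administrator cycle of dumping the repository and filtering out paths.
svnadmin dump /var/svn/repo | svndumpfilter exclude /trunk/secrets > filtered.dump
```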

At CollabNet [which developed Subversion], we solved that problem with TeamForge History Protect. The solution allows developers to do what they need to and compliance teams to see exactly what’s been done. It also provides a handy “undo” button. That’s very important.

Subversion has 50 percent market share for software configuration management right now, and a lot of the people running on it are agile. I don’t think Git will replace Subversion completely. We’ve seen roughly a 50/50 split between those who want to move to Git and those who want to keep working in Subversion. Some teams prefer a single stable trunk approach, which is what Subversion does best. I think we’ll need to have two hammers in the toolbox for some time. They serve slightly different purposes.

One of the smart things the Git community did was design Git to be a client for Subversion. You can use Git quite happily on your desktop, and from the corporate compliance point of view, you’re working in Subversion.
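The bridge Sweeney describes is presumably the git svn command that ships with Git, which lets a developer work in a local Git repository while every change ultimately lands as a revision on the central Subversion server. A minimal sketch, with a placeholder repository URL and commit message:

```sh
# Sketch of working in Git locally while the system of record stays in Subversion.
git svn clone https://scm.example.com/svn/app --stdlayout app   # mirror trunk/branches/tags into Git
cd app
# ... edit files, then commit locally as often as you like ...
git commit -am "Refactor payment module"
git svn rebase    # pull new Subversion revisions and replay local commits on top
git svn dcommit   # push each local commit back to Subversion as its own revision
```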

PwC: Finally, how about the fifth step?

Laurence Sweeney: The fifth step is leveraging the hybrid cloud, or using both private clouds and public clouds. The most famous hybrid cloud use case I’m aware of right now is Zynga. The company had made use of the Amazon public cloud, but found that somewhat limiting and developed a private zCloud.2 There’s still the whole concept of being able to use the public cloud as elastic capacity when you need it. I believe one of the studios has done some other things with its rendering farms.

The ability to use that public capacity when desired is certainly very important. However, it’s really hard to manage that capability in an organized business fashion if you don’t have your act together on the other stuff. If you really don’t know where your IP is, for example, that automatically becomes problem number one and is an indicator of your organization’s level of maturity generally.


1 “Deutsche Post DHL Case Study: Enterprise Agility through DevOps,” CollabNet case study, http://www.collab.net/sites/all/themes/collabnet/_media/pdf/cs/CollabNet_casestudy_DeutschePost.pdf, accessed June 20, 2013.

2 Allan Leinwand, “The Evolution of zCloud,” Zynga blog, February 15, 2012, http://code.zynga.com/2012/02/the-evolution-of-zcloud/, accessed June 20, 2013.