
Containerized Cloud Foundry is a Key Element for Cloud Native

Sep 28, 2016 Hi-network.com

As founding members of the Cloud Native Computing Foundation and the Open Container Initiative, we believe Cloud Foundry is a key element in a cloud native platform strategy. We are also committed to contributing to the open source Cloud Foundry community.

Many readers will be familiar with Cisco's partnership with Pivotal, providing a fully assured, high-performance Pivotal Cloud Foundry environment that leverages Cisco Metapod.

However, you may be less aware of Cisco's commitment to the wider Cloud Foundry ecosystem and open source efforts.

As we look at the modern application landscape, there are multiple challenges facing our customers:

First, migration to an all-microservices infrastructure is a serious undertaking for most enterprises: potentially a complex, multi-year strategy in which the uptime of legacy and in-transition applications remains critical to the business, to operational effectiveness, and to customer SLAs.

That is to say, for companies moving towards cloud native, greenfield is a luxury most do not have. While we believe this transition will occur, successful migration requires a phased approach, with newer services, applications, and components sitting alongside, and often interoperating with, older components.

Second, the rapidly evolving ecosystems around DevOps enablement, microservices architecture, and container technologies add their own complexities, especially given the number of options available and the pros and cons of each. This complexity is then compounded by competing marketing messages and hype.

We see these issues echoed in our customer interactions, where technologies such as Cloud Foundry sit side by side with containerization/orchestration strategies and the company's legacy infrastructure, each in its own silo.

While this evolution is a known pattern that cycles through the enterprise every 5-7 years as new technologies emerge, the current shift brings additional complexity, primarily because of organizational dependencies: silos must be broken down for cloud adoption and DevOps methodologies to succeed, which in turn enables infrastructure convergence and a reduction in complexity.

Distilling the reasons behind the mixed environments we observe produces some common themes and a snapshot of the current landscape, covered below:

1. There isn't a magic bullet.

While Cloud Foundry provides an excellent way of running cloud native, and now Dockerized, HTTP(S) applications, it is inflexible when it comes to some workloads, especially those in the earlier phases of migration from legacy infrastructure, where packaging, easy deployment, and distribution (rather than *true* microservices) are the goals at this phase.
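
As a quick illustration of the Docker support mentioned above, here is a minimal, hypothetical sketch of pushing a pre-built Docker image as a Cloud Foundry application from a Python script; the app and image names are placeholders, and it assumes the cf CLI is installed, you are already logged in, and Docker support is enabled on the platform.

```python
import subprocess

# Hypothetical app and image names; assumes the cf CLI is installed,
# you are logged in, and Docker support is enabled on the target platform.
subprocess.run(
    ["cf", "push", "web-frontend", "--docker-image", "nginx:stable"],
    check=True,
)
```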

Equally, the container movement provides much more flexibility for packaging and orchestrating a range of application types; however, this route creates the complexity of choosing, managing, and assuring your own container stack and related infrastructure. Additionally, container layer re-use often allows software developers to take shortcuts and unknowingly accept risk when composing their applications from a set of services or container images.

2. Competing Ecosystems have differing focus areas.

Due to the container ecosystem's momentum around Docker, a huge array of community and vendor-backed services are becoming rapidly consumable, especially in the area of data persistence.

The Mesos project's primary goal is to enable self-managing service frameworks, and it is easy to see why the container ecosystem would benefit developers looking to rapidly test, consume, or evaluate data services. Adding to this, Kubernetes brings an opinionated orchestration tool with service discovery and replication baked in.
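
As a rough illustration of that "baked-in" replication and service discovery, the sketch below uses the official Kubernetes Python client to declare a replicated data service and a Service fronting it. The image and names are illustrative assumptions only and are not part of any ContainerCF tooling.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at your cluster

container = client.V1Container(
    name="demo-data-service",
    image="redis:6",  # illustrative data service image
    ports=[client.V1ContainerPort(container_port=6379)],
)

# Deployment: the orchestrator keeps three replicas running for us.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-data-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-data-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-data-service"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Service: consumers discover the replicas via a stable cluster DNS name.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-data-service"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo-data-service"},
        ports=[client.V1ServicePort(port=6379, target_port=6379)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```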

3. Infrastructure Automation and Deployment Effort.

True production automation is an often oversimplified or ignored concern when considering technology stacks for an application or a shared pool of resources.

What works well in small clusters may not scale well. Technologies that do scale may not cope well with multi-datacenter or multi-AZ topologies. Even if the stack components scale perfectly, the automation code and glue needed to reliably deploy and manage these components often lags far behind the capabilities of the tools themselves, leading to an investment in tooling that must be factored in when evaluating a platform.

In this area, we see a large number of roadblocks to consuming cross-ecosystem efforts, with each ecosystem and sub-project usually mandating its own automation choices. This makes unification quite hard.

Consider, for example, running a data service cluster alongside the regular day-to-day management of your Cloud Foundry cluster:

Your primary ecosystem would suggest investing time in producing a BOSH release to 'productionize' the service, whereas a production-ready Mesos framework may already exist.

Equally, a developer with a Kubernetes infrastructure, knowing Pivotal Cloud Foundry Operations Manager has a certified, production-ready RabbitMQ service, would be expected by their ecosystem to re-automate or re-package that service into their environment rather than learn and deploy BOSH and its dependencies for just one data service.

In this way, the automation pinned to each ecosystem or project increases the barrier to entry for leveraging cross-ecosystem benefits.

The above issues create barriers that prevent companies, even those with microservices and modern infrastructures, from fully unlocking the speed and agility associated with the 'DevOps' mindset, especially where legacy or brownfield environments exist.

If we do not leverage the best from both the PaaS and container ecosystems, developers will keep having to trade one ecosystem off against potential early access to innovations in another, or be stuck managing two mostly siloed infrastructures.

This picture is worse in larger companies that are trying to support their teams with a development platform: choosing a company-wide platform tied to one ecosystem cuts off access to capabilities in the other, which could drive some teams away from adoption and into another round of shadow IT.

As part of our cloud native open source efforts, we wanted to solve this problem of DevOps team choice vs. siloed infrastructures internally, with the following goals:

  • Cloud Foundry simplicity and developer UX for developers who wished to consume it.
  • Container Orchestration (Kubernetes/Marathon) for developers with their own container build pipelines.
  • Container ecosystem data services and support for non-microservices transitional applications.
  • No operational cost of two infrastructure stacks, reducing the cost of automation, resource, tooling, and infrastructure.

Introducing Containerized Cloud Foundry

We have taken the open source Cloud Foundry components, including the DEA and Diego, and packaged them to run simply and efficiently on modern container orchestrators such as Mesos/Marathon or Kubernetes. The deployment is also coupled with service discovery, allowing the components to converge without a fixed deployment/start ordering and to re-converge or move between compute nodes after failures or scheduled maintenance.

Consul becomes our BOSH in this case, providing information on the state, reachability and location of each component.
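
As a simple illustration of how a containerized component can use Consul to find its dependencies rather than relying on a fixed start order, the sketch below polls Consul's health API until a dependency (NATS, in this example) is registered and passing. The agent address and service name are assumptions for illustration, not necessarily how ContainerCF is wired internally.

```python
import time
import requests

CONSUL = "http://127.0.0.1:8500"  # assumes a local Consul agent

def wait_for_service(name, timeout=300):
    """Block until at least one healthy instance of `name` is registered in Consul."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(
            f"{CONSUL}/v1/health/service/{name}", params={"passing": "true"}
        )
        resp.raise_for_status()
        instances = resp.json()
        if instances:
            svc = instances[0]["Service"]
            return svc["Address"], svc["Port"]
        time.sleep(5)
    raise TimeoutError(f"{name} never became healthy")

# e.g. a component waiting for NATS before starting its own work
nats_host, nats_port = wait_for_service("nats")
print(f"NATS available at {nats_host}:{nats_port}")
```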

We realized that containerizing the Cloud Foundry application components would make them more consumable to the wider container communities than trying to move the container ecosystem to BOSH releases. In this way, we can augment container stack deployments such as Mesos or Kubernetes, allowing a subset of teams to leverage Cloud Foundry.

What's more, we now have the flexibility, in a single stack, to leverage Mesos frameworks and containerized data services, reduce the need for two siloed stacks, and bring the two ecosystems closer together.

Consider the potential benefits below:

  • Reduction of deployment complexity for open source Cloud Foundry.
  • Simple, rapid bare metal deployments of Cloud Foundry.
  • Cloud Foundry scale-out or upgrade operations that take seconds.
  • Service brokers taking advantage of immediately available containerized services.
  • Enterprises able to offer developers choice from a single stack.
  • Support for Cloud Foundry in a huge number of non-IaaS environments (without hacking BOSH CPIs).
  • Developers able to evaluate new data services without needing operational blessing.
  • Easier customization of Cloud Foundry components by users.
  • New community members can leverage Cloud Foundry from their existing container stack without dedicating VMs.

Containerized Cloud Foundry Tech Preview

We did not want this to be a one-off technical preview. Therefore, we produced a build pipeline that takes an existing Cloud Foundry BOSH release and produces the relevant containers for a deployment, tagged with that Cloud Foundry version, ready to be consumed from a Docker Hub-style repository.
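
The pipeline itself is being open-sourced (see below), but as a rough sketch of the idea, the following uses the Docker SDK for Python to build and push one image per Cloud Foundry job, tagged with the release version. The job names, registry organisation, and directory layout here are purely hypothetical, not the pipeline's actual conventions.

```python
import docker

client = docker.from_env()

CF_VERSION = "245"        # hypothetical Cloud Foundry release version
REGISTRY = "containercf"  # hypothetical Docker Hub organisation

# Illustrative job names and directory layout only; a real pipeline would
# derive these from the BOSH release being converted.
for job in ["cloud-controller", "router", "uaa"]:
    client.images.build(path=f"./output/{job}", tag=f"{REGISTRY}/{job}:{CF_VERSION}")
    client.images.push(f"{REGISTRY}/{job}", tag=CF_VERSION)
```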

We have also limited the complexity of configuration by picking a subset of Cloud Foundry configuration parameters (such as admin username/password, system domain name, etc.) and configuring those dynamically, using our Consul service discovery cluster as a bootstrap K/V store for the information.
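
For example, a component could read its deployment-specific settings from the bootstrap K/V store at startup roughly like this; the key names and defaults shown are illustrative assumptions rather than the exact layout the images use.

```python
import requests

CONSUL = "http://127.0.0.1:8500"  # assumes a reachable Consul agent/cluster member

def get_config(key, default=None):
    """Fetch a deployment-specific setting from the Consul K/V store."""
    resp = requests.get(f"{CONSUL}/v1/kv/{key}", params={"raw": "true"})
    if resp.status_code == 404:
        return default
    resp.raise_for_status()
    return resp.text

# Hypothetical key names; the real images may use a different layout.
system_domain = get_config("containercf/system_domain", "cf.example.com")
admin_username = get_config("containercf/admin_username", "admin")
```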

The generated Docker images are therefore generic and consumable for all deployments.

In this way, you won't need to use the pipeline to produce your own ContainerCF, as we don't bake in per-deployment items like passwords or domains; just consume the same public images for a given Containerized Cloud Foundry release.

See it in action!

Here's a short demo video of the deployment and consumption of Cloud Foundry via the cf CLI.

Sound good? What's next?

We are excited to see what the community can achieve with Containerized Cloud Foundry. To that end, we'll be open-sourcing not only the generated Containerized Cloud Foundry Docker images into a public Docker Hub repository, but also the build pipeline, to allow community members to add their own innovations.

Over the coming days, we'll be building out documentation at http://container.cf, firstly on consumption of the existing images and then on the pipeline itself.

If you're interested in learning more or contributing, follow our efforts @ContainerCF or join the #containerCF channel on the Cloud Foundry Slack.

Matt Johnson (@MattdashJ) and his team contributed this innovation and drafted the content for this blog post.
Thanks Matt!


Hot Tags: Open Source, DevOps, containers, Cisco Metacloud, Cisco Metapod, developers, PaaS, cloud native, Cloud Foundry
