March 24, 2016

How to create a service end to end in Cisco ONE ECS

Training and real world usage of the products

Sometimes training focuses more on the procedural detail of the individual components than on the real-world usage of a system.
You might miss the end-to-end architecture and the use cases you could address with the solution, so you go home at the end of the training without a complete picture.

In the case of the Cisco ONE Enterprise Cloud Suite, which is composed of a number of components, a course for beginners will teach you how to use Prime Service Catalog, UCS Director, Intercloud Fabric Director and VACS.
But after you know how to configure them and the value provided by each tool, you might still wonder "what am I going to do with this architecture?" or "how complex would it be to implement a complete project?".


I put this sample use case together to show the process of creating a brand new service in the self-service catalog, complete with the implementation of its delivery. My colleague Maxim Khavankin helped me to document all the steps.
If you download PSC and UCSD and run them with the evaluation license, you can run through this exercise easily and get familiar with the platform.

Hello World with Cisco ECS

I implemented a very simple service, just to have a context for showing the implementation.
No business logic or integration with back-end systems is involved, to keep you focused on the mechanics: you can easily extend this framework to your real use case.


The idea is to order a service in PSC, providing an input, and have UCSD deliver the outcome.
In our case the expected result is writing a "Hello <your name>" message in the log file.

Generally, workflows in UCSD use tasks from the library (you have more than 2000 tasks to automate servers, network, storage and virtualization). But a specific use case might require a task that is not available yet, so you build it and add it to the library.
I created a custom task in UCSD just to write to the log: of course, you could replace this extremely exciting logic with a call to the REST API - or any other API - of the system you want to target: infrastructure managers in your data center, enterprise software systems, your partner's API for a B2B service, etc.
 
Then I created a custom workflow in UCSD that takes your name as an input and uses the task I mentioned to deliver the "Hello World" service. The workflow can be executed in UCSD (e.g. for unit testing) or invoked via the UCSD API.

Prime Service Catalog has a wizard that explores the API exposed by UCSD, then discovers and imports all the entities it finds (including workflows) so that you can immediately offer them as services in the catalog for end users. Of course, customizations can be added with the tools provided by PSC.

So the end-to-end procedure to create a business service consists of the following steps:
  1. Create a custom task (if required)
  2. Define a workflow that uses the custom task -> define input variables
  3. Create a catalog item in UCSD -> offer the workflow from step 2
  4. Synchronize PSC and UCSD
  5. Use the wizard to import the service in PSC
  6. Customize the service in the PSC catalog with Service Designer (optional)
  7. Order the customized service
  8. Check the order status on PSC side
  9. Check the order status and outcome in UCSD

I illustrate every step with some pictures:

Create a custom task (if required)    

Custom tasks can be added to the existing library where 2000+ tasks are offered to manage servers, network, storage and virtualization.


You can group tasks in Categories so that they can be found easily in the workflow editor later. 

 

Custom tasks can have (optional) input and output parameters, which you define based on variable types: in this case I used a generic text variable to contain the name to send greetings to:


The format, constraints and presentation style can be defined:



You can skip the steps "Custom Task Outputs" and "Controller" in the wizard to create the task: we don't need them in this use case.

Finally we create the logic for our use case: a small piece of JavaScript code that executes the custom action we want to add to the automation library.

The UCSD logger object has a method to write an Information/Warning/Error message to the UCSD log file. As I wrote earlier, you could use HTTP calls here to invoke a REST API if this were a real-world use case.
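To make that concrete, here is a minimal sketch - written in Python purely for illustration, since the actual task body runs as JavaScript inside UCSD - of the kind of REST call that could replace the log-writing logic. The endpoint, payload and token are all made up for the example:

import requests

# Hypothetical target system: replace the greeting-in-the-log logic
# with a call to the REST API of the system you want to automate.
def greet_via_backend(person_name):
    response = requests.post(
        "https://backend.example.com/api/v1/greetings",  # made-up endpoint
        json={"name": person_name},
        headers={"Authorization": "Bearer <your-token>"},
        timeout=10,
    )
    response.raise_for_status()  # fail loudly if the target system complains
    return response.json()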



After you've created your custom task it's available in the automation library.
Now you have to define a workflow that uses the custom task: to pass the input that the task requires, you will define an input variable in the workflow.


The workflow is an entity that contains a number of tasks. The workflow itself has its own input and output parameters, which can be used by the individual tasks.


Input and output parameters of the workflow are defined in the same way as the tasks' inputs and outputs.
They are useful if you launch the workflow via the REST API exposed by UCSD.
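As a minimal sketch of what such an invocation might look like in Python (assuming the workflow is named "Hello World" and its input variable is named "person"; the address, the REST access key and the exact operation parameters depend on your UCSD version and environment, so check the API documentation):

import json
import requests

# Sketch: submit the workflow via the UCSD REST API.
UCSD = "https://10.0.0.10"             # hypothetical UCSD address
ACCESS_KEY = "<your-REST-access-key>"  # taken from your UCSD user profile

op_data = {
    "param0": "Hello World",  # the workflow name as defined in UCSD
    "param1": {"list": [{"name": "person", "value": "Maxim"}]},
    "param2": -1,             # no parent service request
}

resp = requests.get(
    UCSD + "/app/api/rest",
    params={
        "formatType": "json",
        "opName": "userAPISubmitWorkflowServiceRequest",
        "opData": json.dumps(op_data),
    },
    headers={"X-Cloupia-Request-Key": ACCESS_KEY},
    verify=False,  # typical lab setup with a self-signed certificate
)
print(resp.json())  # the response carries the ID of the new service request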


Now that you've created the workflow, it's time to add some tasks to it, picking from the library (exposed in the left panel of the workflow editor).
We'll only add one task (the custom task that we created): select it from the library, searching for the word "hello" if needed.
Drag and drop the task onto the editor canvas, then configure it.

You will see a screen similar to this one:


Configure the new task by giving it a name:


Map the input variable of the task to the input parameter of the workflow that you created:


If you did not have a variable holding the value for this task's input, you could still hard-code the input value here (but that's not our case: this form would look different if you hadn't mapped the variable in the previous screen).
 

The task does not produce any output value, so there's no option to map it to the output parameters of the workflow.


Finally we see the complete workflow (one single task, in our example) and we can validate it: it's a formal check that all the tasks are connected and all the variables assigned.


Then we can execute it to check that it behaves correctly. The window that pops up when you run the workflow gives you access to the log file, so you can see the greeting appear in the log.






The next step is to expose this workflow to users in UCSD (in the GUI and via the API).

Create catalog item in UCSD -> offer workflow from Step 2   

UCSD catalog items are offered to non-admin users if you so choose. They are grouped in folders in the user interface, and you can make them visible to specific users or groups.


You can give them a name and a description and associate a service, which could be the provisioning of a resource or a custom workflow - as in our case.


The workflow is selected and associated here: 




After defining the new catalog item, you'll see it here - and in the end-user web GUI.


If the service is offered to technical users (e.g. the IT operations team), your work could be considered complete.
They can access UCSD and launch the workflow. The essential user interface of the tool is good enough for technical users who only need efficiency.

But if you're building a private cloud you might want to offer your end users a more sophisticated user interface and a complete self-service catalog, populated with all kinds of services, where you apply the governance rules of your business.

So we'll go on and expose the "Hello World" service in Cisco Prime Service Catalog.

Synchronize PSC and UCSD   

Login to PSC as admin, go to Administration -> Manage Connections.
Click on the connection to UCSD (previously defined by giving it the target IP address and credentials) and click "Connect & Import".


PSC will discover all the assets offered by UCSD.
Now you can use the wizard to import the "Hello World" service in PSC. With a few clicks it will be exposed in the service catalog.


The wizard allows you to associate an image and a description with the service. Here you can describe it at the level of detail and abstraction that is most appropriate for your users (or customers).
You have a full graphic editor that does not require any web design skills.



Additional metadata (attributes of the service) can be added, so that users can find it when searching the catalog: PSC provides a search engine out of the box.


And finally you decide who can see and order the service in the catalog: you can map it to single users, groups, roles, organizations or just offer it to everyone.

 

At this point the service is fully working in the self-service catalog and its lifecycle is managed. But, if you like, you can still apply customizations and leverage the power of PSC.

Customize the service in the PSC catalog with Service Designer (optional)   

There is a subsystem in PSC, accessible only to specific user roles, called "Service Designer". It can be used to build services from scratch or to edit existing services, like the one that the wizard generated for us. Just go there and select the "Hello World" service.


The user interface of the service is built on reusable elements called Active Forms (one active form can be reused in many services). The wizard generated an Active Form for our service, with a corresponding name.

If you select the active form and go to the panel "Display Properties" you can change the appearance and the behavior of the order form.


As an example, the only input field, named "person", can be transformed into a drop-down list with pre-populated items. Items could even be obtained from a database query or from a call to a web service, so that the list is populated dynamically.


Service Designer offers many more customization options, but here we want to stay on the easy side, so we'll stop here   :-)


Order the customized service   

Go to the home page of the Service Catalog. Browse the categories (did you create a custom category, or just put the Hello World service in one of the existing ones?). You can also search for it using the search function, accessed via the magnifying glass icon.

In this picture you also see a review made by one of the users of the catalog who has already used the service. You can add your own after you've ordered it at least once.


You will be asked to provide the required input:

When you submit a request, your order is tracked in My Stuff -> Open Orders.
This is also used for audit activities.

Check order status on PSC side  

You will see the progress of the delivery process for your order: in general it goes through different phases including, if needed, approval by specific users.



Check order status and outcome in UCSD     

If you go back to the admin view in UCSD (Organizations -> Service Requests) you will see that a new service request has been generated: double click on it to see the status.



If you click on the Log tab you can check the result of the execution of the service: the hello message has been delivered!




Now that you have seen how easy it is to build new services with PSC+UCSD, you're ready to use all the features provided by the products and the pre-built integration that makes everything very quick.

All your data center infrastructure is managed by UCSD, so you can automate provisioning and configuration of servers, network and storage (from any vendor, both physical and virtual). Once the automation is done, offering services in the self-service catalog takes just a few minutes.

References

Cisco Enterprise Cloud Suite
and its individual components:
- Cisco PSC - Prime Service Catalog 
- Cisco UCSD - UCS Director




February 23, 2016

Become a cloud provider in 3 months

This is the story of a company that decided to become a Cloud Service Provider.
They were already a successful IT outsourcer in the financial industry, with many customers' environments running in their data center.
Outsourcing was a healthy business but they started having some challenges, due to slow and inefficient provisioning processes and operations.
Every new request from a customer spawned a new project, so their customers began exploring public cloud services to get more flexibility and speed.
For this reason, the company decided to adopt the cloud delivery model and to offer their customers a self service catalog.



Of course a cloud project cannot be done overnight, so they were cautious in their approach.
Both technology and operational processes needed to be proven before embarking on such a challenge, but the traditional waterfall methodology made the expected return appear uncertain and distant.
To make things worse, they had tried to implement a PaaS project with a different vendor and they had spent a lot of money without tangible return.

I was engaged to support the evaluation of a new IaaS catalog that could evolve to PaaS and to self-service application management.
I made sure that the business and IT strategies were in sync and I proposed to start with small steps to validate the approach. I also invited them to qualify the quick wins that they would expect, to justify the investment and show the stakeholders an immediate return, so that the project would live long enough to reach the expected success.
As you know well, many projects last too long and die before showing any business return.

We analyzed the current situation and defined a future vision. This was the driver for a gap analysis and for the prioritization of user stories, that we decided to implement in short iterations (sprints of 2 weeks, according to the Agile Scrum methodology).
Their data center was mainly based on Cisco networks and servers, but this was not the main reason for selecting the Cisco software stack for the cloud project.
After the initial workshops, some product demos and talks about other projects, they understood that our people - and the partner company that implemented the project with them - were experienced enough to plan the project seriously and to chase the quick wins that we all considered so important.

The Cloud Management Platform chosen for the project was the Cisco ONE Enterprise Cloud Suite (aka ECS).



One of the most important features considered in the decision was the possibility to create flexible templates, later exposed as self-service options in the end-user catalog, for the deployment of complex applications. A set of servers with different roles, and all the networks needed to make them work, can be provisioned as a dedicated and virtually separated environment (multi-tenancy on a shared infrastructure that offers economy of scale).

As an example, the following picture shows an environment that could be ordered - fully configured - with a single click. It is based on a component of the ECS architecture named VACS (Virtual Application Cloud Segmentation):


It was easy to engage the SMEs (subject matter experts) for servers, network, storage and virtualization in the customer organization, and to ask them to define the basic policies that we would use as building blocks for all the services to be offered.
This model-based implementation is quicker to build and easier to maintain, and it can be exposed to the end users in a way that they understand and trust soon.

The automation that we built was considered useful by the SMEs (after overcoming their initial suspicion, because every good craftsman loves manual work) because it set them free from the manual operations that previously made their work tedious and error prone.
Delegating the configuration to an automated service gave their customers a faster service and a higher quality (no rework needed because of manual errors or missing information).


One more component in the architecture is the Stack Designer.
It is a tool provided by the Cisco ECS to create templates for application provisioning. It takes IaaS templates - made in the infrastructure management layer, which in our case is UCSD, to deploy a topology of servers and networks - and layers the software stack on top of them.


You can decide what software products (or custom applications) must be installed - and configured based on the input parameters provided by the end user - including monitoring agents and backup agents, and save this new template in the repository.
The integration with Puppet, an open source solution used to provision software applications, is leveraged to install and configure the entire software stack from the images in the repository.


The new template can now be offered as a self-service option in the catalog, so that end users don't need to install and configure the software stack themselves. An end-to-end solution is provided, up and running and ready to be used.
All the components of the ECS solution are pre-integrated, and this makes the project faster than you would expect. But, since they communicate through standard protocols and open APIs, every component of the architecture could be replaced by an alternative product (from a different vendor or from the open source community). You should not be afraid of vendor lock-in  :-)

Agile Delivery

In terms of project delivery, the following table shows the iterations that allowed the delivery to be completed in only 3 months.
But the amazing result is that at every sprint (i.e. every 2 weeks) new use cases were available in a usable environment.
The first demo to a real customer (a customer of my customer) was given two months after the start of the project, and the first customer was onboarded after the 5th sprint (i.e. 2.5 months).



Conclusion

This quick win demonstrates that even complex projects like building a public cloud platform can be done in a reasonable amount of time.
The era of endless projects, based on complex technology and measured in function points, has passed forever.
There are simple solutions (like ECS) that make your work easier, but a good organization and the right methodology allow for incremental building and refinement of the solution. Every iteration of the project delivers a usable result in the production environment, and you don't need to wait for the completion of the entire project to start using the solution.
If you are a service provider, you can start selling your services soon and produce a return on investment.
More services will be added incrementally and the catalog will be richer at every iteration.


References

Cisco Enterprise Cloud Suite
and its individual components:
- Cisco PSC - Prime Service Catalog 
- Cisco UCSD - UCS Director
- Cisco VACS - Virtual Application Cloud Segmentation

Fast IT
Cisco Prime Service Catalog in action: Cisco eStore

Scrum (agile development) 







February 2, 2016

Governance in the hybrid cloud

This post shows how a company can solve one of the main issues that CIOs face today: so-called Shadow IT.



This term refers to the uncontrolled usage of cloud services (IaaS, PaaS or SaaS) in a project, decided by the application developers or designers because they think it benefits the agility of the project.



Sometimes leveraging available services is really good for a project: it's useless to rebuild something that is easily available as a standardized service. Even when the IT organization of your company (or your customer, if you're a consulting company) provides the building blocks that you need for your architecture, it could be difficult to get approvals or fast enough provisioning.
So there are valid reasons to incorporate public cloud services, and we can't blame those who try to fully exploit a Service Oriented Architecture.



Unfortunately this way of assembling applications from any resource you consider useful creates trouble for the IT organization.
Besides additional costs, which arrive as a surprise (developers bill to a personal credit card or to a corporate one, but sooner or later those costs will be factored into the cost of the project), some corporate rules could be violated without anyone even being aware.
Just a few examples: storing confidential data in a database outside the company's data center, invoking services without encrypting the input/output parameters, or not ensuring end-to-end High Availability or Disaster Recovery of the entire system.


The subject of costs is easily underestimated: at development time you need very limited cloud resources, for a limited time. It costs close to zero before the application goes to a full production environment. But after that, it will need more computing power, more storage and of course more bandwidth to serve all the users. Cloud costs tend to grow surprisingly under these conditions.

So the CIO has a dilemma: try to block or limit the usage of cloud services - containing cost and risk, but appearing as the one who slows innovation down and prevents the lines of business from achieving their business results - or allow maximum freedom, with the additional risk of becoming irrelevant because anyone can bypass the IT organization?


There is a solution in the middle: IT can offer facilitated access to cloud services, adding them to a Service Catalog where users can self-serve, granting compliance by design.
Public cloud services will be selected based on agreed architectural and security policies; they will be documented, audited and reported, and possibly subject to approval from a financial standpoint.



One possible implementation of such a catalog can be based on the Cisco ONE Enterprise Cloud Suite, as I did in a recent project at one of my customers.

The Cisco ECS is a reference architecture comprising a flexible Service Catalog, an automation engine and a platform for hybrid cloud that allows the extension of your data center into a kind of "bubble" in the public cloud. In case you need additional power, you can burst your workloads into the virtual private data center while keeping all the security and networking policies you defined in your private cloud: even the IP addresses of the virtual machines do not change, and neither do the secure segmentation of the application layers or any other policy.

I'm not going to describe the Cisco ECS, because you can find the official documentation here.
I'm showing how we extended the services offered in this catalog with CliQr Cloud Center, which manages the provisioning and the lifecycle of applications in the cloud. So the great capabilities of Cisco ECS in terms of IaaS are complemented by the deployment of simple or complex applications and software stacks, which you can target at any cloud just by selecting it from a drop-down list.

I mean that the template for the deployment is not cloud dependent, and the user can - within the limits of his authorization level and the corporate policies - choose to provision it in the private cloud (e.g. on VMware in the corporate data center) or in the public cloud (e.g. AWS or Azure).
The lifecycle operations (start, stop, resume, delete, etc.) are offered as well, along with migration to a different cloud: from private to public after the QA test is done and you're ready for production, from one public provider to a more convenient one, etc.

THIS POST HAS BEEN REDACTED

After the publication of this post Cisco announced the intent to acquire CliQr (not because of the post :-) ), and our policies require that we don't speak of deals while they are in progress. I can't show the way we integrated CliQr in this project because the official statement on the reference architecture will be communicated by Cisco once the acquisition is completed.


References:
http://blogs.cisco.com/datacenter/introducing-cisco-one-enterprise-cloud-suite
http://www.cliqr.com/



October 20, 2015

DevOps, Docker and Cisco ACI - part 2

This post is a follow up to the initial discussion of the DevOps approach based on Linux containers (specifically with Docker).
Here I elaborate on the advantage provided by Cisco ACI (and some more projects in the open source space) when you work with containers.

Policies and Containers  

Cisco ACI offers a common policy model for managing IT operations.
It is agnostic: bare metal, virtual machines, and containers are treated the same, offering a unified policy language: one clear security model, regardless of how an application is deployed.


ACI models how components of an application interact, including their requirements for connectivity, security, quality of service (e.g. reserved bandwidth for a specific service), and network services.   ACI offers a clear path to migrate existing workloads towards container-based environments without any changes to the network policy, thanks to two main technologies:
  • ACI Policy Model and OpFlex   
  • Open vSwitch (OVS)   

OpFlex is a distributed policy protocol that allows application-centric policies to be enforced within a virtual switch such as OVS.
Each container can attach to an OVS bridge, just as a virtual machine would, and the OpFlex agent helps ensure that the appropriate policy is established within OVS (because it's able to communicate with the Controller, bidirectionally). 



The result of this integration is the ability to build and manage a complete infrastructure that spans across physical, virtual, and container-based environments.
Cisco plans to release support for ACI with OpFlex, OVS, and containers before the end of 2015.


Value added from Cisco ACI to containers

I will explain how ACI supports the two main networking types in Docker: veth and macvlan.
This can be done already, because it's not based only on OpFlex.

Containers Networking option 1 - veth

veth is the networking mode that leads to virtual bridging with br0 (a Linux bridge, the default option with Docker) or OVS (Open vSwitch, usually adopted with KVM and OpenStack).
As a premise, I remind you that ACI manages the connectivity and the policies for bare metal servers, VMs on any hypervisor and network services (LB, FW, etc.) consistently and easily:

Cisco ACI as the any to any network integration in the data center

On top of that, you can add containers running on bare metal Linux servers or inside virtual machines (different use cases make one of the options preferable, but from an ACI standpoint it's the same):


That means that applications (and the policies enabling them) can span any platform: servers, VMs and containers at the same time. Every service or component that makes up the application can be deployed on the platform that is most convenient for it in terms of scalability, reliability and management:


And the extension of the ACI fabric to virtual networks (with the Opflex-enabled OVS switch) allows applying the policies to any virtual End Point that uses virtual ethernet, like Docker containers configured with the veth mode.

ACI model brought to all workloads at the same time: Docker, VM, bare metal


Advantages from ACI with Docker veth:

With this architecture we can get two main results:
- Consistency of connectivity and services policy across physical, virtual and/or container (LXC and Docker) workloads;
- Abstraction of the end-to-end network policy for location independence, together with Docker portability (via shared repositories)

Containers Networking option 2 - macvlan

MACVLAN does not provide a network bridge for the ethernet side of a Docker container to connect to.
You can think of MACVLAN as a hypothetical cable where one side is the container's eth0 and the other side is the interface on the physical switch / ACI leaf.
The hook between them is the VLAN (or the trunked VLAN) in between.
In short, when specifying a VLAN with MACVLAN, you tell a container binding to eth0 on Linux to use VLAN XX (defined as access or trunked).
Connectivity is established when the match happens with the other side of the cable, at VLAN XX on the switch (access or trunk).

At this point you can match vlans with EPG (End Point Groups) in ACI, to build policies that group containers as End Points needing the same treatment, i.e. applying Contracts to the groups of Containers:
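As a hedged sketch of how this VLAN-to-EPG mapping could be scripted with the acitoolkit Python library (the APIC address, credentials, tenant and interface names, and the VLAN ID are all assumptions for the example):

from acitoolkit.acitoolkit import (Session, Tenant, AppProfile, EPG,
                                   Context, BridgeDomain, L2Interface, Interface)

# Sketch: bind VLAN 100 (carrying macvlan containers) to an EPG,
# so that ACI contracts apply to that group of containers.
session = Session('https://apic.example.com', 'admin', 'password')
session.login()

tenant = Tenant('Containers')
app = AppProfile('docker-app', tenant)
epg = EPG('web-containers', app)
bd = BridgeDomain('containers-bd', tenant)
bd.add_context(Context('containers-ctx', tenant))
epg.add_bd(bd)

# Physical port where the Docker host is attached: pod 1, leaf 101, module 1, port 5
phys_if = Interface('eth', '1', '101', '1', '5')
vlan_if = L2Interface('vlan100', 'vlan', '100')
vlan_if.attach(phys_if)
epg.attach(vlan_if)  # containers on VLAN 100 become End Points of this EPG

resp = session.push_to_apic(tenant.get_url(), tenant.get_json())
print(resp.ok)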


Advantages from ACI with Docker macvlan:

This configuration provides two advantages (the first one is common to veth):
- It extends the portability of Docker-based applications, thanks to the independence of ACI policies from the server's location.
- Network throughput increases by 5% to 15% (mileage varies; further tuning and tests will provide more detail) because there's no virtual switching consuming CPU cycles on the host.



Intent based approach

A new intent-based approach is making its way into networking. An intent-based interface enables a controller to manage and direct network services and network resources based on a description of the "intent" of network behaviors. Intents are described to the controller through generalized and abstracted policy semantics, instead of Openflow-like flow rules. The intent-based interface allows for a descriptive way to ask the infrastructure for what is desired, unlike current SDN interfaces, which are based on describing how to provide different services. This interface will accommodate orchestration services and network- and business-oriented SDN applications, including OpenStack Neutron, Service Function Chaining, and Group Based Policy.

Docker plugins for networks and volumes

Cisco is working on an open source project that aims at enabling intent-based configuration for both networking and volumes. This will exploit the full ACI potential in terms of defining the system behavior via policies, but it will also work with non-ACI solutions.
Contiv netplugin is a generic network plugin for Docker, designed to handle networking use cases in clustered multi-host systems.
It's still work in progress and details can't be shared at this time but... stay tuned to see how Cisco is leading in the open source world too.



Mantl: a Stack to manage Microservices  

And, just so you know, another project that Cisco is delivering targets the lifecycle and the orchestration of microservices.
Mantl has been developed in house, as a framework to manage the cloud services offered by Cisco. It can be used by everyone for free under the Apache License.
You can download Mantl from github and see the documentation here.

Mantl allows teams to run their services on any cloud provider. This includes bare metal services, OpenStack, AWS, Vagrant and GCE. Mantl uses tools that are industry-standard in the DevOps community, including Marathon, Mesos, Kubernetes, Docker, Vault and more.
Each layer of Mantl's stack provides a unified, cohesive pipeline, whether you are managing Mesos or Kubernetes clusters during a peak workload or starting new VMs with Terraform. Whether you are scaling up by adding new VMs in preparation for launch, or deploying multiple nodes on a cluster, Mantl allows you to work with every piece of your DevOps stack in a central location, without backtracking to debug or recompile code to ensure that the microservices you need will function when you need them to.

When working in a container-based DevOps environment, having the right microservices can be the difference between success and failure on a daily basis. Through the Marathon UI, one can stop and restart clusters, kill sick nodes, and manage scaling during peak hours. With Mantl, adding more VMs for QA testing or live use is simple with Terraform, without needing to piece together code to ensure that both pieces work well together without errors. Addressing microservice conflicts can severely impact productivity. Mantl cuts down the time spent working through conflicts with microservices, so DevOps can spend more time working on the application.






Key takeaways

ACI offers a seamless policy framework for application connectivity for VMs, physical hosts and containers.

ACI integrates Docker without requiring gateways (otherwise required if you build the overlay from within the host) so Virtual and Physical can be merged in the deployment of a single application.

Intent-based configuration makes networking easier. Plugins enabling intent-based configuration in Docker and integration with SDN solutions are coming fast.

Microservices are a key component of cloud native applications. Their lifecycle can be complicated, but tools are emerging to orchestrate it end to end. Cisco Mantl is a complete solution for this need and is available for free on GitHub.


References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/   
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/  
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices    
ACI and Containers white paper 
Cisco and Red Hat white paper   
Opendaylight and intent
Intent As The Common Interface to Network Resources
Mantl Introduces Microservices as a Stack
Project mantl



October 9, 2015

DevOps, Docker and Cisco ACI - part 1

In this post I try to describe the connection between the need for a fast IT, the usage of Linux containers to quickly deploy cloud native applications, and the advantage provided by Cisco ACI in containers' networking.
The discussion is split into two posts to make it more... agile.
A big thank you to Carlos Pereira (@capereir), Frank Brockners (@brockners) and Juan Lage (@JuanLage) that provided content and advice on this subject. 

DevOps – it’s not tooling, it’s a process optimization


I will not define DevOps again, you can find it in this post and in this book.
I just want to remind you that it's not a product or a technology: it's a way of doing things.
Its goal is to bring fences down between the software development teams and the operations team, streamlining the flow of an IT project from development to production.
Steps are:
  1. alleviate bottlenecks (systems or people) and automate as much as possible, 
  2. feed information back so problems are solved by design in the next iteration, 
  3. iterate as often as possible (continuous delivery).



Business owners push IT to deliver faster, and application development via DevOps is changing the behavior of IT.
Gartner defined Bimodal IT as the parallel management of cloud native applications (DevOps) and more mature systems that require consolidated best practices (like ITIL) and tools supporting their lifecycle.



One important aspect of DevOps is that the infrastructure must be flexible and provisioned on demand (and disposed of when no longer needed). So, if it is programmable, it fits much better into this vision.

 

Infrastructure as code

Infrastructure as code is one of the mantras of DevOps: you can save the definition of the infrastructure (and the policies that define its behavior) in a source code repository, just as you do with the code of your applications.
In this way you can automate the build and the management very easily.

There are a number of tools supporting this operational model. Some examples:



One more example of a DevOps tool is the ACI toolkit, a set of Python libraries that expose the ACI network fabric to DevOps as a code library.

 
You can download it from:


The ACI Toolkit exposes the ACI object model to programming languages so that you can create, modify and manage the fabric as needed.
Remember that one of the most important advantages of Cisco's vision of SDN is that you can manage the entire system as a whole.
There is no need to configure or manage single devices one by one, unlike other approaches to SDN (e.g. OpenFlow).
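As a minimal sketch of what working with the toolkit looks like (the APIC address and credentials are placeholders, and the object names are made up for the example):

from acitoolkit.acitoolkit import Session, Tenant, AppProfile, EPG

# Create a tenant with an application profile and an EPG,
# push the configuration to the APIC, then query the fabric.
session = Session('https://apic.example.com', 'admin', 'password')
session.login()

tenant = Tenant('DevOps-demo')
app = AppProfile('my-app', tenant)
EPG('web', app)

resp = session.push_to_apic(tenant.get_url(), tenant.get_json())
if not resp.ok:
    print('Error pushing configuration:', resp.text)

# The whole fabric is managed through the same object model
for t in Tenant.get(session):
    print(t.name)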




So you can create, modify and delete all of the following objects and their relationships:




Linux Containers and Docker


Docker is an open platform for sysadmins and developers to build, ship and run distributed applications. Applications are easily and quickly assembled from reusable and portable components, eliminating the siloed approach between development, QA, and production environments.

Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.

At a high level, Docker is built of:
- Docker Engine: a portable, lightweight runtime and packaging tool
- Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm) but that's beyond the basic overview I'm giving here.



Docker’s main purpose is the lightweight packaging and deployment of applications.   

Containers are lightweight, portable, isolated, self-sufficient "slices of a server" that contain any application (often they contain microservices).
They deliver on the full DevOps goal:
- Build once… run anywhere (Dev, QA, Prod, DR).
- Configure once… run anything (any container).  

Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux containers to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses Control Groups (cgroups), which have been in the Linux kernel even longer, to implement auditing and limiting of resources (such as CPU, memory and I/O), and Union file systems to support layering of the container's file system.
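As a hedged illustration of how these cgroup limits surface in practice, here is a sketch using the Docker SDK for Python (a later API than the Docker release discussed in this post; the image name and limits are arbitrary examples):

import docker

# Run a container with cgroup-enforced resource limits.
client = docker.from_env()
container = client.containers.run(
    "debian:latest",
    command="sleep 60",
    detach=True,
    mem_limit="256m",  # memory cap enforced via the memory cgroup
    cpu_quota=50000,   # half of one CPU (quota over the default 100000us period)
)
print(container.id)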

 

Kernel namespaces isolate containers, avoiding visibility between containers and containing faults.   Namespaces isolate:
◦     pid (processes)
◦     net (network interfaces, routing)
◦     ipc (System V interprocess communication [IPC])
◦     mnt (mount points, file systems)
◦     uts (host name)
◦     user (user IDs [UIDs])    

Containers or Virtual Machines


Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a Guest OS for each VM saves space on disk and memory at runtime, also improving performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.



Containers networking

When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces attached to it.
For every new container, Docker creates a pair of "peer" interfaces: one "local" eth0 interface and one with a unique name (e.g. vethAQI2QT) out in the namespace of the host machine.
Traffic going outside is NATted.




You can create different types of networks in Docker:

veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.   

vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.

phys:  an already existing interface specified by the lxc.network.link is assigned to the container.

empty: will create only the loopback interface (at kernel space).

macvlan:  a  macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.  It also specifies the mode the macvlan will use to communicate between  different macvlan on the same upper device.  The accepted modes are: private, Virtual Ethernet Port Aggregator (VEPA) and bridge

Docker Evolution - release 1.7, June 2015  

Important innovations have been introduced in the latest release of Docker; some are still experimental.

Plugins  

A big new feature is a plugin system for Engine; the first two plugins available are for networking and volumes. This gives you the flexibility to back them with any third-party system.
For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico.  For volumes, this means that volumes can be stored on networked storage systems such as Flocker.

Networking  

The  release includes a huge update to how networking is done.



Libnetwork provides a native Go implementation for connecting containers.  The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.
NOTE: libnetwork project is under heavy development and is not ready for general use.
There are many networking solutions available to suit a broad range of use-cases. libnetwork uses a driver / plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent Network Model to users.

Containers can now communicate across different hosts (Overlay Driver).  You can now create a network and attach containers to it.

Example:
docker network create -d overlay net1    
docker run -itd --publish-service=myapp.net1 debian:latest  
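If you prefer to script this instead of typing CLI commands, here is a hedged sketch of the equivalent operations with the Docker SDK for Python (a later, stable API rather than the experimental 1.7 syntax above; it assumes a host where the overlay driver is available):

import docker

# Create an overlay network and attach a container to it,
# mirroring the CLI example above.
client = docker.from_env()
net = client.networks.create("net1", driver="overlay")
container = client.containers.run("debian:latest", detach=True, tty=True,
                                  network="net1")
print(container.name, "attached to", net.name)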

Orchestration and Clustering for containers  

Real-world deployments are automated; single CLI commands are less used. The most important orchestrators are Mesos/Marathon, Google Kubernetes and Docker Swarm.
Most use JSON or YAML formats to describe an application: a declarative language that says what an application looks like.
That is similar to the ACI declarative language, with a high-level abstraction to say what an application needs from the network, and have the network implement it.
This validates Cisco's vision with ACI, very different from the NSXs of the world.

The next post explains the advantage provided by Cisco ACI (and some other projects in the open source space) when you use containers.


References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/   
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/ 
http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices    
ACI and Containers white paper 
Cisco and Red Hat white paper    

Some content from the Docker documentation reused based on the Apache 2 License.

September 6, 2015

The Phoenix Project - how DevOps can change your life

It’s been a long time since my last post: as promised, I only post information from my experience in the real world and I avoid echoing messages from marketing   :-)
I haven’t been idle, though: I’ve been working on customer projects that can’t be mentioned publicly (yet).

But I’ve also been on vacation, and I could finally read a great book, “The Phoenix Project”.
It is a novel and a very educational read at the same time.
I wholeheartedly recommend reading it (though I’m not earning anything from the book) because I enjoyed it a lot and I learned important lessons that deserve to be spread - for our common benefit as an IT community.








You are not required to be an IT professional but, if you are, you will benefit the most and the book will recall many familiar stories.
Since I’ve led some mission critical projects, and I still carry the marks of both tragedy and triumph, this story reminded me of those great moments.
If you are new to DevOps, you can read my introductory posts in this blog.

Essentially, The Phoenix Project describes the evolution of IT in a company that, on the verge of a complete failure, pioneers DevOps and revolutionizes the way they work.
The impact on the core business is huge and their strategy creates a gap with the competition thanks to agility and flexibility.
Personal lives are affected too, because the new organization ends the tribal war among Development, Operations, Security and the business stakeholders: they establish respect, trust and satisfaction for all the involved parties.
Of course the DevOps methodology is not a magic wand that works miracles for them: it is the outcome of a new way of thinking and working together.
This is a story of people, rather than technology.

If every IT team put itself in the shoes of the others, instead of finger pointing, they could help each other to reach a common goal.
If the whole IT is not a counterpart of the LOBs but is a partner (understanding why they are asked something instead of focusing on how to do it), they can offer a huge value to the company… and be highly rewarded (see the coup de théâtre at the end of the story).
This would stop the “dysfunctional marriage” between two parties that don’t understand each other and suffer from a forced relationship.
In my experience, most business people see IT as the provider of a service that is never satisfactory.
On the other side, IT sees that business people don’t understand the complexity and the effort required and ask for impossible things.
In most cases, they are bound to a traditional way of working and don’t even raise their head to see that they already own what’s needed to win.
They are overwhelmed by current tasks, troubleshooting and budget cuts, so they can’t think strategically.

The great idea, here, is importing the concepts and the experience from Lean Manufacturing into IT.
They start looking at the IT organization as a production plant and optimizing its operation.
Finding bottlenecks and avoiding rework are the first steps, then automation follows to free the smart guys from the routine work and so the quality skyrockets.
At the end of the story, the release of new features required by the business no longer takes months (with high risk at roll out): they can deploy 10 project builds per day!

That is not impressive if you consider that these days some companies achieve thousands of deployments per day thanks to Continuous Integration and Continuous Deployment.
But it is light years ahead of what most of my customers are doing, though some are exploring DevOps now.
Of course, one organization cannot change overnight.
You shouldn’t see the adoption of DevOps as a single step, and be scared by the effort.
In the book, they learn gradually and improve accordingly: you could do the same.
They go through a process that is made of Three Ways, until they master all.
A brief description of the three ways follows, thanks to Richard Campbell:

The First Way – Systems Thinking
• Understand the entire flow of work
• Seek to increase the flow of work
• Stop problems early and often – Don’t let them flow downstream
• Keep everyone thinking globally
• Deeply understand your systems

First Way Goals
• One source of truth – Code, environment and configuration in one place
• Consistent release process – Automation is essential (one click)
• Decrease cycle times, Faster release cadence

The Second Way – Feedback Loops
• Understand and respond to the needs of all customers (internal and external)
• Shorten and amplify all feedback loops
• With feedback comes quality

Second Way Goals
• Defects and performance issues fixed faster
• Ops and InfoSec user stories appear as part of the application
• Everyone is communicating better
• More work getting done

The Third Way – Synergy
• Consistent process and effective feedback result in agility
• Now use that agility to experiment
• You only learn from failure – So fail often, but recover quickly

Third Way Goals
• Ability to anticipate, even define new business needs through visibility in the systems
• Ability to test and optimize new business opportunities in the system while managing risk
• Joy

You should not think that The Phoenix Project is a technical book: though I’ve learned new things or reinforced concepts I knew already, the value I found in it is motivational.
It really moves you to action, and you want to measure the immediate improvement you can get.
More, you want to partner with other stakeholders to achieve common goals.

The Essence of DevOps
• Better Software, Faster
• Pride in the Software You Build and Operate
• Ability to Identify, Respond and Improve Business Needs

My final take from this story is that everybody in IT (as in other fields) should:

- take risks and innovate - if you fail, the result will probably not be worse than standing still
- invest time - at the cost of delaying important targets - to think strategically: the return will more than repay the effort
- study what others have done already: learning by example is much easier
- always try to understand your counterpart before fighting on principle; there could be a common advantage if you shift your perspective

Some useful references:
Other DevOps books:
- Visible Ops Handbook (Gene Kim)
- Web Operations (Allspaw/Robbins)
- Continuous Delivery (Humble/Farley)
- Lean Startup (Eric Reis)