
June 20, 2016

Is Agile dead (already)?


I've been pushing Agile development for a long time, as opposed to traditional methodologies like waterfall... or to no methodology at all.
I no longer deliver IT projects myself, but I help customers and partners to plan and deliver theirs.
The most important goal I set is achieving quick wins, as I described in become-cloud-provider-in-3-months.
A quick win encourages all the stakeholders (the project team, their clients, the lines of business that provide the budget, everybody up to the CEO).
Not only does it demonstrate that the solution works, but it also provides a concrete measurement of the return on investment.
Generally, projects are not undertaken because they are smart, but because they are supposed to generate a financial gain (more revenue or lower expenses). Even when the goal is described as a faster go to market, the ultimate target is generating more revenue.

Agile development makes projects easier and faster


Agile development is not the only way to achieve a quick win, but it helps.
It also helps reduce project risk because, if you have to fail, you fail early (and save wasted effort).
So, when a colleague sent me this article to solicit my comment, I almost felt insulted by the author... though I'm pretty sure he was not referring to me  :-)

A note on the author: 
Matthew Kern has long experience in the field, so he knows what he's talking about.
He’s been writing many posts since 2015 to explain that Agile is dead.
He definitely knows the Agile methodology and its usage, so he deserves respect.
What's more, he published a follow-up to that post offering the correct interpretation: probably because he received too many protests.

Nevertheless my first impression was negative, because he was criticizing my fundamental beliefs.
But reading it carefully, I understood that he's not wrong. He criticizes the evolution of the Agile methodology and the way some have used it as a marketing tool, also in the light of newer trends like DevOps.

the feedback loop in devops


In my opinion, some overstatements in the article - starting with the title - are a means to get visibility.
Indeed, in the conclusion he explains what he really means (and I partially agree): he refers to the “Agile” brand, to politics and to commercial usage (literature, consulting, marketing...).

When he says that Agile doesn't work for large enterprises, I would distinguish between vendors of software products and customers running their own projects. 
The lifecycle of a software application is completely different in these two scenarios, and so are the business requirements, the expected quality of the product, the variety of users, and the frequency of updates and bug fixes. 

When he says that many projects fail, he highlights a fact that is common to all methodologies.
But, at least, with Agile you fail early (that is one of the objectives: better to fail in one month than after 1-2 years of unproductive activity eating your time and money).

it's better to fail before you fly too high


So, if we focus on the hype, on brands and marketing activity, Agile is being replaced by DevOps (which can be considered its evolution, extending the practice to Operations with continuous delivery and feedback), and later even DevOps will be replaced by the next hype.

But they both produce value for developers and for IT: you can see it in the cultural shift and in the individual interpretation of the principles, rather than in codified best practices. As an example, my colleagues in Cisco Advanced Services started using Agile with visible benefits both for themselves (less bureaucracy) and for customers (better and faster projects).

In conclusion, definitions are important and they help to spread knowledge.
But theory matters mostly to professors, while good practice makes developers and project managers happy.
If they adopt the principles of Agile and work - even using Scrum informally - implementing those guidelines and producing good results, would you stop them?
It’s better to be Agile than not... 

it is better to be agile than not...

References


April 8, 2015

Software Defined Networking For Dummies


A very simple, yet complete description of what SDN is, now available as a free ebook that you can download from http://www.cisco.com/go/sdnfordummies


Software defined networking (SDN) is a new way of looking at how networking and cloud solutions should be automated, efficient, and scalable in a new world where application services may be provided locally, by the data center, or even the cloud. This is impossible with a rigid system that’s difficult to manage, maintain, and upgrade. Going forward, you need flexibility, simplicity, and the ability to quickly grow to meet changing IT and business needs.

Software Defined Networking For Dummies, Cisco Special Edition, shows you what SDN is, how it works, and how you can choose the right SDN solution. This book also helps you understand the terminology, jargon, and acronyms that are such a part of defining SDN.
Along the way, you’ll see some examples of the current state of the art in SDN technology and see how SDN can help your organization. 


You can find additional information about Cisco’s take on SDN by visiting:
http://cisco.com/go/aci
http://cisco.com/go/sdn
http://blogs.cisco.com/tag/sdn

March 11, 2015

Cloud Computing as an extension of SOA

When I started explaining my view of Cloud Computing as an extension of SOA (Service Oriented Architecture), some didn't take it seriously.
I delivered some TOI sessions to raise awareness of topics that Cisco was approaching in its transformation into an IT company: software architecture, distributed systems, IT service management. I reused some of the concepts and slides that I had created when I was a SOA evangelist.

The feedback was positive and generated a useful discussion, but I also got a few comments like: "this is old stuff, cloud is different" and "don't be nostalgic".
Since then, indeed, I have seen many articles comparing Cloud and SOA.

And it is natural: both architectures (actually, cloud is a consumption model more than an architecture) are based on the concept of a Service. To be precise, to offer and consume cloud services you need to build a SOA.



It is easy to understand: to begin with, the consumer of a cloud service wants to delegate the build, the ownership and the operations to a third party, which assumes responsibility for the SLA.
The service is considered a function that someone else provides to you, and you only care about the interface to access it (and its quality and price). You are interested only in the protocol and the user interface - or the API - plus the URL where you get the service.



The actual implementation is not your business. The service (IaaS, PaaS, SaaS) can run on any platform, in any part of the world, fully automated or manual, implemented in any of hundreds of programming languages. You just don't care, as long as the SLA is respected.



Definitions

The best-known definition of cloud computing is the one from NIST:
 

While SOA was defined, when I was at BEA Systems (one of the SOA pioneers), in this way:
"SOA is an architectural approach that enables the creation of loosely coupled, interoperable business services that can be easily shared within and between enterprises."


A slightly more technical definition is: "Service-Oriented Architecture is an IT strategy that organizes the discrete functions contained in enterprise applications into interoperable, standards-based services that can be combined and reused quickly to meet business needs."

You can find a discussion of the SOA reference architecture (sorry, it's limited to my Italian readers...) here. Also IBM has a good definition of SOA here.

 

SOA concepts that apply to Cloud 

There are some concepts that you find in both models: each one would deserve a dedicated post, or maybe a book. I will try to give some essential detail here.

  • The concept of Service: Consumer and Provider’s responsibility
  • Distributed systems, where remote API are invoked over standard protocols
  • Separation of concerns: interface vs implementation
  • Interface and Contract
  • Reuse and Loose Coupling
  • Service Repository and Service Catalog
  • Service Lifecycle
  • Service Assurance
  • Strategy and Governance

Basic detail 

 

Distributed systems

A distributed system is made of components that are deployed separately, in most cases remotely. Each of them provides a lower-level functionality that can be used as a building block in the solution of a business need.
To interoperate, they need connectivity and a well-defined framework for sending and receiving data, and for managing security, transaction consistency, availability and many other non-functional requirements.

To make the development of such a complex system easier, the software industry has separated the concept of interface from the actual implementation.
The interface of a software component specifies the functions it implements, the parameters it expects and returns, their format, the conversation style (sync/async) and the security constraints. It is an artifact that can be produced - and deployed - before the actual implementation is ready: you can generate a stub (or mock) component that always returns fake data but at least replies to clients, allowing an end-to-end test of the architecture.

So different developers can split the implementation of the system into components that are built in parallel, based on the definition of the interfaces they present to each other. The basic integration test can be executed against a stub, to ensure that the conversation works. This also helps rapid prototyping and agile development.

The separation of the interface from the implementation is fundamental when a distributed system is designed.
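As a minimal sketch of this idea (my own illustration; the service name and data are made up), an interface can be defined and stubbed long before the real implementation exists:

```python
# A stub implements the agreed interface and returns fake but well-formed data,
# so other teams can run end-to-end tests against it while the real
# implementation is still being built.

from abc import ABC, abstractmethod

class InventoryService(ABC):
    """The interface: operations, parameters and return format."""
    @abstractmethod
    def get_stock(self, product_id: str) -> dict:
        ...

class InventoryServiceStub(InventoryService):
    """Stub implementation: always returns fake data."""
    def get_stock(self, product_id: str) -> dict:
        return {"product_id": product_id, "quantity": 100, "warehouse": "TEST"}

class InventoryServiceImpl(InventoryService):
    """Real implementation, developed in parallel (e.g. querying a database)."""
    def get_stock(self, product_id: str) -> dict:
        raise NotImplementedError("backed by the real inventory system")

# A client can be integration-tested against the stub:
print(InventoryServiceStub().get_stock("SKU-123"))
```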


A Service = Contract + Interface + Implementation 
The set of the above-mentioned artifacts identifies a service.
As I stated, the implementation is not relevant for the consumer of the service - but it must exist, otherwise the service cannot be delivered.
The interface is the only visible part of the service, because it is what the consumer will use. Depending on the service, it could be a GUI or the API that a client program invokes.
The most important part is the Contract: the agreement (generally defined in a document) defining who has the right to consume the service, the credentials, the price, the SLA, the constraints (e.g. the response time is guaranteed up to 1000 transactions per second), and more.


A given interface could be offered with two distinct contracts, e.g. with different security requirements. Or a different price, or a different SLA, etc.
If you do that, a new service is generated (a different triple of contract+interface+implementation):


And of course you can differentiate the interface (e.g. synchronous vs asynchronous, which is pretty easy if you use a service bus). Also the addition of a new interface will generate a new service:


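To make the triple concrete, here is a toy sketch (my own illustration, not from the original post): the same implementation offered under a different contract, or through a different interface, is by definition a different service.

```python
# Service identity as the triple (contract, interface, implementation):
# changing any element of the triple yields a new service.

from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    contract: str        # e.g. SLA, price, allowed consumers
    interface: str       # e.g. "REST/sync" or "queue/async"
    implementation: str  # opaque to the consumer

base = Service("gold SLA, 1000 tps", "REST/sync", "backend-v1")
cheaper = Service("best effort, free", "REST/sync", "backend-v1")
async_variant = Service("gold SLA, 1000 tps", "queue/async", "backend-v1")

# Same implementation, but a different contract or interface => a different service
print(base == cheaper)         # False
print(base == async_variant)   # False
```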

Reuse and Loose Coupling 

The effort of building a service in a way that makes it reusable is bigger than just implementing a local component in a software project.
Potential consumers of the service will only trust it if it is robust enough, it scales, it is secure, etc.
You need to provide information on what the service does, how to use it, and how you support it.
So a business justification is needed for the additional effort to create a reusable service, either for internal usage (SOA) or as a cloud service.

The integration between service consumers and providers should not create tight dependencies, to allow for innovation and maintenance. Coupling refers to the degree of direct knowledge that one element has of another. The separation of the interface from the implementation plays an important role here, because one could change the implementation without affecting the published interface.
In case of major changes, versioning the interface helps.
See also these definitions of loose coupling on Wikipedia and Techtarget.
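A short sketch of loose coupling through a stable interface (again my own illustration, with invented names): the backend implementation can be swapped without touching what consumers of the published interface depend on.

```python
# Consumers only know the published interface (get_customer); the
# implementation behind it can change freely, and a new interface version is
# only introduced when the contract itself has to change.

class LegacyCustomerDb:
    """Old implementation detail, hidden behind the interface."""
    def fetch(self, customer_id):
        return {"id": customer_id, "name": "ACME"}

class NewCustomerApi:
    """New implementation; the published interface does not change."""
    def fetch(self, customer_id):
        return {"id": customer_id, "name": "ACME", "segment": "enterprise"}

class CustomerServiceV1:
    """Published v1 interface: consumers depend only on get_customer()."""
    def __init__(self, backend):
        self._backend = backend          # implementation is swappable

    def get_customer(self, customer_id):
        data = self._backend.fetch(customer_id)
        # v1 contract: only id and name are guaranteed to consumers
        return {"id": data["id"], "name": data["name"]}

# Swapping the implementation does not affect consumers of the v1 interface:
service = CustomerServiceV1(LegacyCustomerDb())
service = CustomerServiceV1(NewCustomerApi())
print(service.get_customer(42))
```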


Service Repository and Service Catalog

I said that you need to provide information on the service and, eventually, market it. If potential consumers don't know that it exists, they will never use it. They also need descriptive info and technical details.
This is true when you build services for the enterprise architecture, even more so if you want to sell them in the cloud. 

An important element of the Service Oriented Architecture was the Service Repository: a central place where all the artifacts produced by projects are exposed for reuse, complemented by the Registry, which offers links to the service end points.
Now we have the concept of the Service Catalog, managing the entire life cycle of a cloud service: from inception to decommissioning, passing through cost models and tenant management.
You can find a definition of a service catalog and its usage in this excellent free book: Defining IT Success Through the Service Catalog

 

Service Lifecycle

When a new service is created, you need to design its provisioning process - which could include fully automated or manual steps, including authorizations - its cost model, the management of the resources allocated for a tenant, the assurance of the quality of the service, billing and end-user reporting, and the decommissioning that returns the resources to the shared pool.

It is good to have tools to manage all these phases of the life cycle. A choice of CMS (Cloud Management Systems) is offered by Cisco, which has a solution for a ready-to-run cloud implementation with pre-built services (Cisco Intelligent Automation for Cloud, aka IAC) and the just-released Cisco ONE Enterprise Cloud suite, a flexible environment where you can create new services with very little effort, in a bottom-up approach (from the infrastructure to the catalog).
Both suites use Cisco Prime Service Catalog (PSC) as the front end. PSC is ranked very high by analysts when they examine the features of service catalogs on the market.

 

Service Assurance

Monitoring the infrastructure is essential, if you are a service provider. But it is not enough, because you can't immediately correlate the health status of the infrastructure with the quality of the services that consumers perceive (availability, response time, completeness of the result...).
More sophisticated tools are needed to report the services' health score to the Operations team and to the end users, and to allow troubleshooting.
Root cause analysis is the investigation of the ultimate cause of a service failure, which could be due to software, servers, network, or storage.
Impact analysis is the notification of the list of services impacted by a fault in the infrastructure, which helps the Operations team restore the services before consumers complain about a violation of the SLA.
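As a toy illustration of impact analysis (my own example, not a description of any specific Cisco tool), a service-to-infrastructure dependency map is enough to answer "which services are hit if this component fails?":

```python
# Given a map of services to the infrastructure components they depend on,
# list the services affected by a component fault.

service_dependencies = {
    "web-shop":  {"lb-1", "esx-cluster-a", "db-cluster-1"},
    "billing":   {"hyperv-host-3", "db-cluster-1"},
    "reporting": {"kvm-host-7", "object-store"},
}

def impacted_services(failed_component: str) -> list[str]:
    """Return the services whose dependency set contains the failed component."""
    return [name for name, deps in service_dependencies.items()
            if failed_component in deps]

print(impacted_services("db-cluster-1"))   # ['web-shop', 'billing']
```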

Strategy and Governance

IT governance provides the framework and structure that links IT resources and information to enterprise goals and strategies. Furthermore, IT governance institutionalizes best practices for planning, acquiring, implementing, and monitoring IT performance, to ensure that the enterprise's IT assets support its business objectives.

In recent years, IT governance has become integral to the effective governance of the modern enterprise. Businesses are increasingly dependent on IT to support critical business functions and processes; and to successfully gain competitive advantage, businesses need to manage effectively the complex technology that is pervasive throughout the organization, in order to respond quickly and safely to business needs.

In addition, regulatory environments around the world are increasingly mandating stricter enterprise control over information, driven by increasing reports of information system disasters and electronic fraud. The management of IT-related risk is now widely accepted as a key part of enterprise governance.

It follows that an IT governance strategy, and an appropriate organization for implementing the strategy, must be established with the backing of top management, clarifying who owns the enterprise's IT resources, and, in particular, who has ultimate responsibility for their enterprise-wide integration.

I discussed this topic with reference to SOA (only in Italian, again... sorry) in SOA è solo tecnologia? and in 6 errori da non fare in un progetto SOA.

 

Enterprise Service Bus

The ESB is a core component in the SOA Reference Architecture. It has the role of a mediation layer between the consumers and the providers of any service, managing the match of available interfaces, the security, the quotas and - in general - the enforcement of the Contract.
The ESB is the backbone of an Enterprise Architecture where new projects benefit from reusing already implemented services.

When you think about cloud, the interface to the available services is offered publicly to consumers. Very often, it consists of a set of APIs to provision and consume the services. An ESB is not strictly required to expose your implementation as a service, but it can certainly help.
Creating multiple interfaces, as new contracts are defined for a service, is just a few clicks' work. There are many ESBs available as commercial products; the next paragraph lists the capabilities of one example, but the same capabilities are commonly available on the market and in open source (a toy routing sketch also follows the list below).

ESB Core Capabilities (courtesy of MuleSoft - http://www.mulesoft.com/platform/soa/mule-esb-open-source-esb):
  • Service Mediation
    Separate business logic from protocols and message formats for rapid, nimble development and long-term flexibility.
  • Service Orchestration
    Coordinate and arrange multiple services and expose them as a second-generation composite application.
  • Service Creation & Hosting
    Expose app functionality as a service and create an efficient standards-based architecture or host existing services in lightweight containers.
  • Message Routing
    Direct messages based on content or predetermined rules and filter, aggregate, or re-sequence as required.
  • Data Transformation
    Transform data to and from any format across heterogeneous transport protocols and data types or enhance incomplete messages.
  • Event Handling
    Deliver synchronous and asynchronous events, transactions, streaming, routing patterns, and a SEDA architecture.
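A minimal sketch of the "Service Mediation" and "Message Routing" ideas above (my own illustration, not MuleSoft code): the mediation layer inspects each message and decides which provider should receive it, so consumers never address providers directly.

```python
# Toy content-based router: the destination is chosen from the message content,
# keeping routing rules out of both consumers and providers.

def route(message: dict) -> str:
    """Pick a destination service based on message content."""
    if message.get("type") == "order" and message.get("amount", 0) > 10_000:
        return "manual-approval-service"
    if message.get("type") == "order":
        return "order-service"
    return "dead-letter-queue"

print(route({"type": "order", "amount": 250}))      # order-service
print(route({"type": "order", "amount": 50_000}))   # manual-approval-service
print(route({"type": "unknown"}))                   # dead-letter-queue
```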

So are SOA and Cloud identical?

Of course not. They have a lot of common concerns, but while SOA was created to address IT and business needs in a single Enterprise context, Cloud is a wider model that offers commercial services across companies.
There's still the private cloud model, where services are offered internally.
Here we have the same self-service consumption model, so the automation of provisioning is critical, as is the quality of the Service Catalog that you offer to consumers.

The most important lesson from SOA that we can reuse in Cloud is that the human factor is sometimes more impactful than the technology.
Change management is one of the key initiatives that help overcome resistance (both in the IT organization, when a new operational model is adopted, and among consumers who are offered a new way of using applications or implementing new projects). 

Proper documentation of the services is key, and defining a go-to-market strategy before you start your journey is fundamental: technology should not be adopted because it's smart or because others are doing the same.
It should always be functional to business requirements and be aligned with the corporate strategy.

January 19, 2015

The Elastic Cloud project - Methodology

This post is the continuation of The Elastic Cloud Project - Architecture.
Here I will explain how we worked in the project: the sequence of activities that were required and the basic technologies we adopted.
The concepts are mostly explained by using pictures and screen shots, because an image is often worth 1000 words.
If you are interested in more detail, please add a comment or send me a message: I’ll be glad to provide detailed information.

To begin with, we had to:
  • map the data model of the products used to understand what objects should be created, for a Tenant, in all the layers of the architecture
  • create sequence diagrams to make the interaction clear to all the members of the team - and to the customer
  • understand how the API exposed by Openstack Neutron and from Cisco APIC work, how they are invoked and what results they produce
  • implement workflows in the CPO orchestrator to call the APIC controller and reuse the existing services in Cisco IAC
  • integrate Hyper-V compute nodes in Openstack Nova
  • create a new service in the Service Catalog to order the deployment of our 3-tier application

Some detail about the activities above:

1 - Map the data model of the products used to understand what objects should be created, for a Tenant, in all the layers of the architecture



I know that some of you still don’t know Cisco ACI… I promise that I will post an “ACI for dummies” soon.   :-)


  
This picture shows how concepts in Openstack Neutron map to concepts in Cisco ACI:


2 - Create sequence diagrams to make the interaction clear to all the members of the team


3 - Understand how the API exposed by Openstack Neutron and from Cisco APIC work, how they are invoked and what results they produce

This is a call to the Cisco APIC controller, using XML


This is a call to the Openstack Nova API, using JSON:

To do this, we used a REST client to learn the behavior of each call and how the parameters need to be passed.
A REST call is essentially an HTTP call (GET or POST) where the body contains XML or JSON documents.
Some HTTP headers are required to specify the content type and to hold security information (like a token for single sign-on, which is returned by the authorization call and must be resent in all the following calls to be recognized).
So we adopted Postman, a plugin for the Chrome browser (the latest version is also released as a standalone application), to practice with the REST calls; then, after we learned how to manage them, we just copied the same content (plus the headers) into the “http call” tasks in the CPO workflow editor.
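As a minimal sketch of this pattern (using Python requests rather than the CPO "http call" tasks; the APIC address, credentials and tenant payload below are placeholders, not the project's real values):

```python
# 1) authentication call returns a token; 2) the token is resent in every
# following call. The /api/aaaLogin.json endpoint and the APIC-cookie name are
# the documented Cisco APIC conventions; error handling is omitted.

import requests

APIC = "https://apic.example.com"   # placeholder address

login = requests.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}},
    verify=False,
)
token = login.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]

# Subsequent REST call, carrying the token as a cookie and an XML body
tenant_xml = '<fvTenant name="Customer1" status="created"/>'
resp = requests.post(
    f"{APIC}/api/mo/uni.xml",
    data=tenant_xml,
    headers={"Content-Type": "application/xml"},
    cookies={"APIC-cookie": token},
    verify=False,
)
print(resp.status_code)
```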



The XML or JSON variables that we passed are essentially static documents with some placeholders for the current values: the Tenant name, the Network name, etc. are filled in according to the user input.
Of course the XML element tags are described in the APIC product documentation; you don’t have to reverse engineer their meaning   ;-)
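A small sketch of the placeholder idea (my own illustration; the element and variable names are made up):

```python
# The XML body is kept as a static template and the orchestrator only fills in
# the values coming from the user input, such as tenant and network names.

from string import Template

tenant_xml = Template(
    '<fvTenant name="$tenant_name">'
    '<fvBD name="$bridge_domain"/>'
    '</fvTenant>'
)

# values collected from the service request form
payload = tenant_xml.substitute(tenant_name="Customer1", bridge_domain="BD-web")
print(payload)
```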
Another way to get the XML ready to use is to export it from the APIC user interface: if you select an object that has already been created (either through the GUI or the API), you can export the corresponding XML definition:



This is how we copied the XML content from the test made in Postman and replaced some elements with placeholders for current values (which are variables in the workflow designer):

This is how the variables appear in the workflow instance viewer, after the process has been executed because a user ordered the service:


4 - Implement workflows in the CPO orchestrator to call the APIC controller and reuse the existing services in Cisco IAC

An example of the services that Cisco IAC provides out of the box.
They are also available through the API exposed by the product, so we created a custom workflow that reused some of the services as building blocks for our use case implementation.
This is the workflow editor, where we created the orchestration flow:



5 - integrate Hyper-V
At the time of this project, a direct support for Microsoft Hyper-V was not available in Openstack Nova.
But a free library was available from Cloudbase, so we decided to install it on our Hyper-V servers so that the virtual data center (VDC) we had created in Cisco IAC, thanks to the integration with Openstack, could also use Hyper-V resources to provision the VMs.
More detail on the integration can be found here: http://www.cloudbase.it/openstack/
In the current Openstack release (Juno), Hyper-V servers are managed directly.


6 - create a new service in the Service Catalog

Conclusion

This project's complexity derived from being among the first teams in the world to try the integration of so many disparate technologies: Cisco software products for Service Catalog and Orchestration, three hypervisors (ESXi, Hyper-V, and KVM), physical networks (Cisco ACI) and virtual networks in all the hypervisors, and Openstack.
I didn't mention it, but load balancers and firewalls were also integrated.
Maybe I will post some detail about the Layer 4 - Layer 7 service chaining in the coming weeks.
We had to learn the concepts before learning the products. Actually, the investigation of the API and their integration was the easiest part... and it was also fun, given my ancient past as a programmer   :-)

Now, with the current release of the products involved in this project, everything would be much easier.
Their features are more complete (actually, the integration of the Neutron API into the management of Virtual Data Centers in IAC was fed back to our engineering during this project).
Skills available in the field are deeper and more widespread.

I've already implemented the same use case with alternative architectures twice.
Cisco UCS Director was used once, replacing the IAC orchestration and pre-built services.
And, in another variation, the Openstack APIs were integrated directly instead of reusing the existing services that manage the Openstack VDC in IAC.
Just to have more fun... ;-)

January 15, 2015

The Elastic Cloud Project - Architecture

This post is the continuation of The Elastic Cloud Project post.

There is a team at Cisco, called System Development Unit, that creates reference architectures and CVD (Cisco Validated Design).
They work with the product Business Units to define the best way to approach common use cases with the best technology.
But at the time of this project, they hadn’t completed their job yet (some of the products were not even released).
So we had to invent the solution based on our understanding of the end-to-end architecture and integrate the technologies in the field.

As I explained before, the most important components were:
- servers - Cisco UCS blades and rack mount servers
- network - Cisco ACI fabric, including the APIC software controller
- virtualization - ESXi, Hyper-V, KVM
- cloud and orchestration software - Cisco PSC and CPO, Openstack (PSC and CPO, plus pre-built services, make up Cisco Intelligent Automation - IAC)
IAC can integrate different “element managers” in the datacenter, so that their resources are used to deliver the cloud services (e.g. single VM or Virtual Data Centers - VDC).
Element managers include vmware vCenter and Openstack, so an end user can get a VDC based on one of these platforms.
There is an automated process in IAC, called CloudSync, that discovers all the resources available in the element managers and allows the admin to select those he wants to use to provision services (resource management and lifecycle management are among the features of the product).

The ACI architecture
I will cover it in detail in one of my next posts, but essentially ACI (http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html), which stands for Application Centric Infrastructure, is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance. 
Cisco ACI consists of:



The policies that you create in the software controller (APIC) are enforced by the fabric, including physical and virtual networks.
You describe the behavior your application needs from the network, not the configuration you need.
This makes it easier for the application designer to collaborate with network managers, because the required behavior can be described graphically by the Application Network Profile.
A profile contains End Point Groups (EPG, representing deployment units of the application: both physical and virtual servers) and Contracts (which define the way EPGs can communicate).
A profile can be saved as an XML or JSON document, stored in a repository, made part of the DevOps lifecycle, used to clone an environment, and managed by any orchestrator.
ACI is integrated with the main virtualization platforms (ESXi, Hyper-V, KVM).
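As a simplified sketch (my own illustration, not a verbatim APIC export), the Application Network Profile of the 3-tier application in this project could be represented as a document like this:

```python
# End point groups for each tier plus the contracts between them; such a
# document can be stored in a repository or passed to an orchestrator.

import json

app_profile = {
    "name": "elastic-cloud-3tier",
    "end_point_groups": [
        {"name": "web", "members": "web servers (KVM)"},
        {"name": "app", "members": "application servers (Hyper-V)"},
        {"name": "db",  "members": "database servers (ESXi)"},
    ],
    "contracts": [
        {"consumer": "web", "provider": "app", "allow": "tcp/8080"},
        {"consumer": "app", "provider": "db",  "allow": "tcp/1433"},
    ],
}

print(json.dumps(app_profile, indent=2))
```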

To deploy our 3-tier application on 3 different hypervisors, we had to manage vmware and Openstack separately - but in a single process, because everything had to be provisioned with a single click.
Initially we based our custom implementation of the new service on the standard IAC services, using them as building blocks.
So we did not have to implement the code to create a network, create a VDC, create a Virtual Server, trigger CloudSync, or integrate the virtual network with the hardware fabric.
This sequence of operations was common to the Openstack environment and the vmware environment.
The main workflow was built with two parallel branches: the Openstack branch (creating 2 web servers on one network and 1 application server on another network) and the vmware branch (doing the same for the database tier).



The problem is that the integration of IAC with Openstack, in the 4.0 release that we used at that time, only dealt with Nova - which, in turn, manages both KVM and Hyper-V servers.
No Neutron integration was available out of the box, hence no virtual networks for the Openstack-based VDC.
So we built the Neutron integration from scratch (implementing direct REST calls to the Neutron API) to create the networks.

The ACI plugin for Neutron does the rest: it talks to the APIC controller to create the corresponding EPG (End Point Group).
This implementation has been fed back to IAC 4.1 by the Cisco engineering, so in the current release it is available out of the box.

Solution for Openstack
The Cisco ACI plugin was installed in Openstack Neutron, to allow it to integrate with the APIC controller.
This is transparent to the Openstack user, who goes on working in the usual way: create network, create router, create VM instances.
Instructions are sent by Openstack to APIC, so that the corresponding constructs are deployed in the APIC data model (Application Profiles, End Point Groups, Contracts).
The orchestrator can then use these objects to create a specific application logic, spanning the heterogeneous server farms and allowing networks in KVM to connect to networks in ESXi and Hyper-V.
So the workflow that we built only needed to work with the native API in Openstack.

Logical flow:
— web tier --
create a virtual network for the web tier via the Neutron API (a minimal sketch of this call is shown after the flow below)
     the Neutron plugin for ACI calls - implicitly - the APIC controller and creates a corresponding EPG.
     the Neutron plugin for OVS creates a virtual network in the hypervisor's virtual switch
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the web server and attach it to the network created for the web tier
     this was initially done by reusing the existing IAC service “Provision a new VM"
— application server tier — 
create a virtual network for the app tier via Neutron API
     the Neutron plugin for ACI calls - implicitly - the APIC controller and creates a corresponding EPG.
     the Neutron plugin for OVS creates a virtual network in the hypervisor's virtual switch
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the application server and attach it to the network created for the app tier
     this was initially done by reusing the existing IAC service “Provision a new VM"
— connect the tiers via the controller —  
connect the two EPGs with a Contract, which specifies the business rules of the application to be deployed
     this is done via the APIC Controller’s API, creating the Application Profile for the new application in the right Tenant
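A minimal sketch of the Neutron call mentioned in the web tier step above (the endpoint address and token are placeholders; /v2.0/networks and the X-Auth-Token header are standard Neutron API conventions):

```python
# Create a virtual network for the web tier via the Neutron REST API.
# The token is obtained from Keystone beforehand; error handling is omitted.

import requests

NEUTRON = "http://neutron.example.com:9696"   # placeholder endpoint
TOKEN = "gAAAA-placeholder-token"             # placeholder Keystone token

resp = requests.post(
    f"{NEUTRON}/v2.0/networks",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json={"network": {"name": "web-tier-net", "admin_state_up": True}},
)
network_id = resp.json()["network"]["id"]
print(network_id)   # used later to attach the web server VM
```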


Solution for vmware
The APIC controller has a direct integration with vmware vCenter, so the integration is slightly different from the Openstack case:
The operations are performed directly against the APIC API and, when you create an EPG there, APIC uses the vCenter integration to create a corresponding virtual network (a Port Group) in the Distributed Virtual Switch.
So we added a branch to the main process to operate on APIC and vCenter, completing the deployment of the 3-tier application with the database tier. 

Logical flow:
— database server tier — 
call the APIC REST interface, implementing the right sequence (authentication, create Tenant, Bridge Domain, End Point Group, Application Network Profile).
     specifically, an EPG for the database tier is created in the APIC data model, and this triggers the creation of a port group in vCenter.
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the database server and attach it to the network created for the database tier
     this was initially done by reusing the existing IAC service “Provision a new VM"


Service Chaining
The communication between End Point Groups can be enriched by adding network services: load balancing, firewalling, etc.
L4-L7 services are managed by APIC by calling external devices, which could be either physical or virtual.
This automation is based on the availability of device packages (sets of scripts for the target device), and a protocol (OpFlex) has been defined to allow the declarative model supported by ACI to be adopted by all 3rd-party L4-L7 devices. 
Cisco and its partners are working through the IETF and open source community to standardize OpFlex and provide a reference implementation.


In the next post, I will describe the methodology we used to integrate the individual pieces of the architecture, how we learned to use the APIs exposed by the target systems (APIC and Openstack), and how we inserted these calls into the orchestration flow.

Link to next post: The Elastic Cloud Project - Methodology


January 7, 2015

The Elastic Cloud Project

The Elastic Cloud was a pilot project that I led as a software architect for a customer.
They wanted to evaluate the complexity and the overall quality of an end-to-end solution provided by Cisco for Cloud Computing.
So we built a complete infrastructure, including hardware and software, for a public cloud.
The number of use cases was limited, and I will tell you about the most curious one.
When I was told about the requirement, I thought it didn’t make sense in the real world… but later I understood the rationale  :-)

The request was to deploy a three-tier application with a single click, provisioning all the needed infrastructure at the same time: (virtual) servers, network and storage.
So this is an example of a software-defined datacenter: not only SDN but the entire stack, from the hardware to the software application. 
The strange aspect of this requirement is that every tier of the application should run on a different virtualization platform: web servers on KVM, application servers on Hyper-V, database server on ESXi.
It was a technical demonstration, sure, but it also reflected a real world use case. They have some customers that, for legacy applications, have certification constraints that mandate a specific hypervisor for at least one part of the application. So the deployment cannot be standardized on a single platform.
Our customer wanted to verify that Cisco is able to orchestrate a multi-vendor infrastructure and that the advantages of stateless computing (I will explain it in a different post, but essentially it is joining SDN with the quality of a hardware infrastructure) are not compromised by the heterogeneity of the virtualization layer.
They said that no other vendor was able to implement the entire use case on the 3 major hypervisors together.

This is a very high level description of the use case:



We decided to implement the self service process in the portal provided by Cisco Prime Service Catalog.
The orchestration layer was Cisco Process Orchestrator, interfacing with Openstack and vmware vCenter directly.
Openstack had compute nodes running KVM and Hyper-V.
Virtual networks were created, on demand, in the virtual switches in each hypervisor and then joined by the physical network connecting the virtualized servers.
The configuration of the network was done through the APIC software controller, which is part of the Cisco ACI architecture (Application Centric Infrastructure).


There is a solution, offered by Cisco, that provides cloud services out of the box.
It is called Intelligent Automation for Cloud (IAC) and it is based on the Prime Service Catalog (PSC) as the front end, the Cisco Process Orchestrator (CPO), and pre-built services created by Cisco engineering.
IAC integrates your infrastructure by interfacing with vCenter, vCD, Openstack and other so-called “element managers” and has all the tools to manage the resource lifecycle.
We decided to reuse some of the existing services as building blocks, wrapping them in a custom process implementing the logic of the specific use case.
So we created a workflow in the visual editor of CPO that invokes the existing atomic services: create a virtual network, create a VDC (virtual data center), create a virtual server.
Then we added explicit calls to the API of the target systems that were not integrated in IAC: the APIC Controller for the ACI fabric, and Neutron as the manager of the networking in Openstack (the 4.0 release of IAC only managed Nova for provisioning VMs in Openstack; now Neutron is also integrated, in IAC 4.1).

So the effort in this project was only dedicated to understanding the overall logic of the use case and implementing the needed API calls.
It was not particularly difficult, because the target APIs are all based on a REST interface (both for APIC and Neutron), so invoking them from CPO was child's play.
We created a process with 3 branches, one for each tier of the application, creating all the needed networks and virtual machines, then “plumbing” it all together with the ACI fabric through the API of the APIC controller.

We were forerunners, trying to implement an exotic use case before the Business Units looked at it… now everything is available out of the box    ;-)
So we faced the following issues:
  • Lack of reference architecture and some products features.
  • A dispersed team (time zones and locations) made coordination difficult.
  • Fragmented skills: none of us had the complete knowledge of all the products and technologies, due to the amount of innovation involved. 
  • Multitasking: many of us were working part time, engaged on different projects.
  • Generous support from individuals and organizations in the company, but limited governance
  • Product limitations discovered along the way (and solved with... creativity)
  • Usage of beta code and daily builds for some of the products
  • Limited documentation available  

In the next post I will tell the entire story and how we were able to demonstrate the main concepts:
  • Cisco ACI is one of the best solutions for SDN and is not limited to software overlay only
  • Cisco has an end-to-end solution for cloud, including software, hardware and people who can design the architecture
  • Open source solutions can be easily integrated into a commercial architecture, providing additional value
  • The contributions Cisco provided to Openstack allow customers to manage our network fabric from Neutron seamlessly