First, let’s look at what cloud computing can offer and its delivery models. Then we will look at the Intercloud.
Cloud computing delivers infrastructure, platform, and software (applications) as subscription-based services in a pay-as-you-go model. In industry these are respectively referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Clouds aim to power the next generation of data centers by architecting them as a network of virtual services (hardware, database, user interface, application logic) so that users can access
and deploy applications from anywhere in the world, on demand, at competitive costs that depend on their QoS (Quality of Service) requirements. This offers significant benefits to IT companies by freeing them from the low-level tasks of setting up basic hardware (servers) and software infrastructure, enabling more focus on innovation and creating business value for their services.
Types of Cloud Delivery Models
Public - Services offered to the general public over the Internet; examples include Microsoft OneDrive, Apple iCloud, and Dropbox
Private - Infrastructure operated for a single organization; the public does not have access
Hybrid - Mix of public and private
Community - A community cloud is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally.
Now let’s look at how the Intercloud came about.
The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based. The term was first used in the context of cloud computing in 2007 when Kevin Kelly opined that "eventually we'll have the Intercloud, the cloud of clouds". It became popular in early 2009 and has also been used to describe the datacenter of the future.
In July 2009 in Japan, an effort called the Global Inter-Cloud Technology Forum (GICTF) was launched with the stated goal: "We aim to promote standardization of network protocols and the interfaces through which cloud systems interwork with each other, and to enable the provision of more reliable cloud services than those available today". As of mid-2012 it had over 85 member companies and had published proposed use cases as well as technical documents.
In July 2010 in France, the First IEEE International Workshop on Cloud Computing Interoperability and Services (InterCloud 2010) was held, bringing researchers together and yielding many published papers. The workshop became an annual meeting, with InterCloud 2011 held in Turkey, followed by InterCloud 2012.
In February 2011 the IEEE launched a technical standards effort called P2302 - Standard for Intercloud Interoperability and Federation (SIIF). The stated goal of the working group is to produce a standard described as follows: "This standard defines topology, functions, and governance for cloud-to-cloud interoperability and federation. Topological elements include clouds, roots, exchanges (which mediate governance between clouds), and gateways (which mediate data exchange between clouds). Functional elements include name spaces, presence, messaging, resource ontologies (including standardized units of measurement), and trust
infrastructure. Governance elements include registration, geo-independence, trust anchor, and potentially compliance and audit. The standard does not address intra-cloud (within cloud) operation, as this is cloud
implementation-specific, nor does it address proprietary hybrid-cloud implementations." As of mid-2012 they have over 50 member companies and have published a Working Draft 1.0.
In March 2012 "Intercloud" made the Wired Magazine Jargon Watch list.
In June 2012, at the 5th International Conference on Cloud Computing (CLOUD 2012), the IEEE announced an Intercloud Test Bed with the stated goal: "The test bed will be a cloud infrastructure comprised of assets from participating universities and industry partners. It will be used to develop and test protocols that will be formalized in the IEEE P2302 interoperability standard."
In December 2012, Cisco Systems commissioned Forrester Consulting to delve deeper into the growing interest in IaaS, and in the hybrid model specifically. Forrester asked 69 IT decision-makers in the US, UK, France, and Germany who were interested in or already using a service provider for cloud IaaS about their cloud strategy and found that 76% are planning to implement a hybrid scenario.
The majority of these hybrid adopters plan to use IaaS as a complement to their on-premises servers and storage, but a significant number will also be looking to their service provider for primary support, using their in-house resources only for peak load or special needs. This will no doubt change the dynamic of how IT professionals at all levels will work in the coming years.
In October 2013 the IEEE announced a Global Testbed initiative comprising 21 cloud and network service providers, cloud-enabling companies, and academic and industry research institutions from the United States, the Asia-Pacific region, and Europe. The members have volunteered to provide their own cloud implementations and expertise to a shared testbed environment. They will also collaborate to produce a working prototype and an open-source global Intercloud.
In January 2014 Cisco announced Cisco Intercloud as a means through which customers can lower total cost of ownership while paving the way for interoperable and highly secure public, private, and hybrid clouds.
There are two software distribution models currently in use with the Intercloud:
1. On-Premises (On-Site) - your organization takes on the cost of hardware, software, and support
2. Hosted - the most popular method, as someone else absorbs the cost and support
Hosted cloud workloads are commonly delivered in one of four forms:
1. SaaS - Software as a Service
2. PaaS - Platform as a Service
3. DBaaS - Database as a Service
4. IaaS - Infrastructure as a Service
The Intercloud scenario is based on the key concept that no single cloud has infinite physical resources or a ubiquitous geographic footprint. If a cloud saturates the computational and storage resources of its infrastructure, or is asked to use resources in a geography where it has no footprint, it should still be able to satisfy such requests for service allocations sent from its clients. The Intercloud would address these situations by letting each cloud use the computational, storage, or other resources of the
infrastructures of other clouds. This is a precise analogy to how the Internet works: a service provider, to which an endpoint is attached, will access or deliver traffic from/to source/destination addresses outside of its service area by using Internet routing protocols with other service providers with whom it has a pre-arranged exchange or peering relationship. It is also analogous to the way mobile operators implement roaming and inter-carrier interoperability. Such forms of cloud exchange, peering, or roaming may introduce new business opportunities among cloud providers if they manage to go beyond the theoretical framework.
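To make the delegation idea concrete, here is a minimal Python sketch. The Cloud class, its method names, and the regions are all invented for illustration; real Intercloud federation would rely on standardized protocols such as those drafted in IEEE P2302, not ad-hoc objects like these.

```python
# Toy sketch of Intercloud-style delegation (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Cloud:
    name: str
    region: str
    capacity: int                               # free VM slots
    peers: list = field(default_factory=list)   # pre-arranged peering partners

    def request(self, vms: int, region: str) -> str:
        # Serve locally if we have footprint in the region and spare capacity.
        if self.region == region and self.capacity >= vms:
            self.capacity -= vms
            return self.name
        # Otherwise delegate to a peer, analogous to Internet transit/peering.
        for peer in self.peers:
            if peer.region == region and peer.capacity >= vms:
                peer.capacity -= vms
                return peer.name
        raise RuntimeError("no cloud in the federation can satisfy the request")

eu = Cloud("eu-cloud", "eu", capacity=10)
us = Cloud("us-cloud", "us", capacity=10, peers=[eu])

us.request(5, "us")   # served locally
us.request(3, "eu")   # no EU footprint, so the request is delegated to the peer
```

The point of the sketch is the fallback path: a saturated or out-of-footprint cloud does not reject the request, it forwards it to a peer with which it has a pre-arranged relationship, just as the text describes for Internet peering.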
How Does It Work and Which Tech Companies Are Leading the Way
On-demand flexibility for hosting workloads on-premises or in the cloud creates seamless integration between the data center and the public cloud provider for on-demand computing. A key enabling technology is the hypervisor, the virtualization layer on which cloud workloads run and which makes it possible to move them between clouds. Some of the biggest players are Amazon Web Services (AWS), Microsoft Azure, VMware vCloud Hybrid, Rackspace, and Citrix. There are others, but they are not the prime focus of this article. Now let’s continue looking at the Intercloud.
A hypervisor or virtual machine monitor (VMM) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine; each virtual machine is called a guest machine. Cisco, for example, has created an Intercloud networking tool called the Nexus 1000V, a distributed virtual switch developed specifically to help companies manage the connectivity between clouds and switches so that users have a seamless way of sharing data between clouds.
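The host/guest relationship can be modeled in a few lines of Python. This is purely an illustrative model with invented names; real hypervisors such as KVM or ESXi work at the hardware-virtualization level, not as a Python class.

```python
# Minimal model of the host/guest relationship a hypervisor manages
# (illustrative only; not a real virtualization layer).
class Hypervisor:
    def __init__(self, host_name: str, host_memory_mb: int):
        self.host_name = host_name          # the host machine
        self.free_memory_mb = host_memory_mb
        self.guests = {}                    # guest machines it runs

    def create_vm(self, name: str, memory_mb: int) -> None:
        # The hypervisor carves host resources into isolated guests.
        if memory_mb > self.free_memory_mb:
            raise MemoryError("host cannot back another guest of this size")
        self.free_memory_mb -= memory_mb
        self.guests[name] = {"memory_mb": memory_mb, "state": "running"}

vmm = Hypervisor("host-01", host_memory_mb=8192)
vmm.create_vm("guest-a", 2048)
vmm.create_vm("guest-b", 4096)
# host-01 now backs two guest machines, with 2048 MB left unallocated
```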
Cisco InterCloud, the first of the two new products, is infrastructure software designed for hybrid cloud environments that allows organizations to combine and move workloads, such as data or applications, across different public or private clouds. Data centers can take advantage of Cisco's Nexus 1000V tool, which helps them manage connectivity in a world of many clouds.
Cisco Nexus 1000V Intercloud
With Cisco InterCloud, customers can build secure hybrid clouds and extend their existing data center to public clouds as needed, on demand. They can connect on-premises data center infrastructure to multiple
service providers and take advantage of flexible, pay-as-you-grow capacity, ultimately achieving lower costs and faster delivery of resources. That means you can more securely and seamlessly burst capacity to handle increased demand, rapidly provision new applications, improve disaster recovery, and migrate workloads with confidence, all without compromising security and control.
Features and Capabilities
Cisco InterCloud is a highly secure, open, and flexible solution that enables complete freedom in workload placement according to business needs. It ensures that the same network security, quality of service (QoS), and access control policies previously enforced in the data center also apply in the public cloud. And as capacity is added, there is no demarcation between the internal and external cloud.
Key features include:
• Self-service consumption of hybrid resources with end-user and IT portals
• Workload provisioning and bi-directional migration across on-premises and cloud resources
• End-to-end security with consistent policy enforcement across the hybrid cloud
• A single point of management and control for physical and virtual workloads across multiple private and public clouds
• A choice of cloud providers and hypervisors
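The consistent-policy idea from the list above can be sketched as a policy object that travels with a workload when it migrates. All names here are hypothetical; this is not Cisco's API, just an illustration of the principle.

```python
# Illustrative sketch: a security policy that follows a workload across
# clouds. Class and field names are invented, not the Cisco InterCloud API.
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityPolicy:
    allowed_ports: tuple
    encryption_required: bool

@dataclass
class Workload:
    name: str
    policy: SecurityPolicy
    location: str = "private-dc"

    def migrate(self, target_cloud: str) -> None:
        # The policy object moves with the workload, so the same rules
        # are enforced in the public cloud as in the data center.
        self.location = target_cloud

web = Workload("web-tier", SecurityPolicy(allowed_ports=(443,), encryption_required=True))
web.migrate("public-cloud")
# web.policy is unchanged after migration: the same ports and encryption
# requirement apply regardless of where the workload runs.
```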
So how does this type of solution really work?
It is built around a switching gateway at the edge of your private cloud that provides a highly secure link to your cloud provider. The gateways at both ends encrypt the traffic, creating a more secure communication channel between the two sites. This gives companies a safe, isolated environment using their own virtual network overlay and existing infrastructure, and it allows companies like Cisco to embrace Amazon's and Microsoft's cloud solutions to offer increased functionality for moving workloads between public clouds.
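As a toy illustration of gateway-to-gateway encryption, consider two edge gateways sharing a pre-shared key. This is a hand-rolled sketch only; real gateway links use vetted protocols such as IPsec or TLS, never a homemade cipher like the one below.

```python
# Toy sketch of an encrypted, authenticated channel between two edge
# gateways (illustration only; production links use IPsec or TLS).
import hashlib
import hmac

class Gateway:
    def __init__(self, shared_key: bytes):
        self.key = shared_key

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        # Derive a keystream from the shared key and a per-message nonce.
        stream, counter = b"", 0
        while len(stream) < length:
            stream += hashlib.sha256(
                self.key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return stream[:length]

    def seal(self, nonce: bytes, plaintext: bytes) -> bytes:
        # Encrypt, then append an HMAC tag so tampering is detected.
        ct = bytes(p ^ k for p, k in
                   zip(plaintext, self._keystream(nonce, len(plaintext))))
        tag = hmac.new(self.key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def open(self, sealed: bytes) -> bytes:
        nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
        expected = hmac.new(self.key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("traffic tampered with in transit")
        return bytes(c ^ k for c, k in zip(ct, self._keystream(nonce, len(ct))))

key = b"pre-shared-between-the-two-sites"
private_edge, provider_edge = Gateway(key), Gateway(key)
wire = private_edge.seal(b"0123456789abcdef", b"migrate workload: web-tier")
assert provider_edge.open(wire) == b"migrate workload: web-tier"
```

The design point mirrors the text: each site's gateway seals traffic before it leaves the private network, and the peer gateway verifies and opens it, giving an isolated channel over shared infrastructure.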
Conclusions and Future Directions
Development of fundamental techniques and software systems that integrate distributed clouds in a federated fashion is critical to enabling the composition and deployment of elastic application services. I believe the outcomes of this research vision will yield significant scientific advances in understanding the theoretical and practical problems of engineering services for federated environments. The resulting framework would facilitate federated management of system components and protect customers with guaranteed quality of service in large, federated, and highly dynamic environments. The different components of the proposed framework offer powerful capabilities for managing both services and resources, and their end-to-end combination aims to dramatically improve the effective usage, management, and administration of cloud systems. This would provide enhanced scalability, flexibility, and simplicity for the management and delivery of services in federations of clouds.
The business potential of cloud computing is recognized by several market research firms, including IDC, which reported that worldwide spending on cloud services would grow from $16 billion in 2008 to $42 billion in 2012. Furthermore, many applications making use of these utility-oriented computing systems, such as clouds, emerge simply as catalysts or market makers that bring buyers and sellers together.
By 2016, over 3 billion connected users will drive an increase of more than 8x in mobile data traffic compared to 2012. By 2020, there will be over 30 billion connected devices. Moreover, information is growing at 2x per year, driven by massive growth in structured data (traditional databases) and unstructured data (e-mail, web content, videos, social media).
IT is under pressure to become much more agile and efficient while turning this increasing variety and volume of data into actionable insights. At the same time, data centers are often pushed to capacity, while IT resources and costs are constrained. There is also a need to continually enhance security to keep ahead
of increasingly sophisticated hackers. These challenges are driving the need for IT to evolve quickly toward a more efficient, automated, and secure infrastructure.
Years from now we may even see a self-aware Cloud.