Upgrade orchestration is essential! We're both delighted and frustrated by OpenStack's pace of innovation: by the time we get the current release working, the new hotness arrives. Last year it was enough to just install OpenStack; now we believe an upgrade plan is required. As the founders of Crowbar, we are leaders in cookbook design for OpenStack and have deep experience orchestrating OpenStack deployments. This community discussion reviews our recommendations and orchestration design for a proposed upgrade pattern. If you're interested in cookbooks that are testable and minimize complexity, this session is for you! We want orchestrations between versions that focus on the specific use cases around migration scenarios: incremental, fastest-possible, change of operating system, or VM migration. If you agree that migrations between versions are also very important, look no further!
Comcast has been building out its private cloud to host its next-generation Cloud TV platform (http://techcrunch.com/2012/05/21/comcast-x1/). As part of that effort, the Comcast Silicon Valley Innovation Center has developed, and is in the process of open sourcing, compatible versions of Amazon's Simple Notification Service (SNS) and Simple Queue Service (SQS) that we plan to contribute to or integrate with OpenStack. We've built these services on top of Redis (http://redis.io/) and Cassandra (http://cassandra.apache.org/) for multi-data-center availability, extreme scalability, and very low latency.
I'd like to present the work we've done to date and get feedback from the OpenStack community. By the time of this conference our code will be open sourced on GitHub and available to the community.
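To make the compatibility claim concrete, here is a minimal sketch of what talking to an SQS-compatible endpoint looks like. This is our own illustration, not Comcast's code: the helper name, queue path, and version string are placeholders, and only the Action/parameter naming follows Amazon's public SQS Query API conventions.

```python
# Sketch: assembling an SQS-style "Query API" request. A compatible service
# accepts the same Action names and parameters that Amazon SQS does, so an
# existing SQS client only needs its endpoint URL changed.

def build_sqs_request(action, queue_path="/", **params):
    """Build the query parameters for one SQS-style API call."""
    query = {"Action": action, "Version": "2011-10-01"}  # version is illustrative
    query.update(params)
    return queue_path, query

# Example: enqueue a message on a hypothetical compatible endpoint.
path, query = build_sqs_request(
    "SendMessage",
    queue_path="/myaccount/myqueue",
    MessageBody="hello from an SQS-compatible service",
)
# An HTTP client (e.g. requests.get(endpoint + path, params=query)) would
# then submit this to the service, exactly as it would to Amazon SQS.
```

Because the wire format matches, clients written against Amazon's service should work against the compatible one unmodified.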
We will discuss how AppFog uses the open source Cloud Foundry project to extend OpenStack and create true hybrid clouds. We will dive into inter-cloud connections and workload portability across various instances of OpenStack, and demonstrate how developers can simply create applications in their language of choice, deploy them to public OpenStack-based clouds (e.g. HP Public Cloud, Rackspace), and easily move workloads from one cloud to another.
The Ceilometer project was started six months ago on the realization that every public cloud provider wanting to use OpenStack would need to write exactly the same code to properly meter the use of their infrastructure. Ceilometer's stated goal is not to provide a full billing solution; it deliberately limits itself to the first phase: collecting the information needed to establish billing lines.
The project description states:
"Ceilometer aims to deliver a unique point of contact for billing systems to acquire all the counters they need to establish customer billing."
Although many workloads are moving to virtual machines, there are circumstances where dedicated physical servers are required. To meet these different requirements, the provider typically manages these services in heterogeneous environments, using cloud automation for VMs and provisioning physical servers manually. In this session we share a solution for seamless provisioning and management of physical servers together with virtual machines in a unified OpenStack environment controlled through a single API.
This collaborative session focuses on creating/maintaining community OpenStack deployment Chef cookbooks. The goal of community cookbooks is that they can become the upstream source for OpenStack deployments and shared best practices. In this session, we will review the current state of the code, flag items to be resolved, identify upcoming features and discuss design impacts required to implement those features. Even if you’re not contributing to the upstream effort, this is a great place to join in the discussion around best practices and OpenStack operations.
Ceph is an open source distributed object store, network block device, and file system. Ceph can be used for object storage through its S3-compatible REST interface. It can also provide storage for network block devices, with the thin provisioning and copy-on-write cloning features necessary to support large-scale virtualization. With the Folsom release, Cinder makes block storage for backing VMs a first-class feature in OpenStack. Block devices can be created from images stored in Glance, and with RBD behind both, new VMs can be created faster while using less space. This session will cover the current status of the integration, and discuss the technical implications and the advantages of block storage within the OpenStack cloud operating system.
Linux brought us mainstream open source software. OpenStack gave us the open source cloud, but we aren't done yet. Open Compute has arrived. The Open Compute Project is an open source hardware platform that will change the way we design and deploy data centers. From software definition to supply chain management to efficiency, Open Compute will prove to be a game changer in the scale-out data center.
With one production service already available at Rackspace and another in the works at HP, cloud database services are quickly changing the landscape around OpenStack Compute. Join Rackspace and HP for a lively discussion on how we are adding value to OpenStack with database services (Project RedDwarf). As more companies move their applications and data to the cloud, it is becoming increasingly difficult to manage and maintain database systems on default virtual servers. RedDwarf simplifies database management in the cloud while providing a model for extensible service deployment that will be used to deliver not only database services, but also other services in the future. In this session you will get a chance to hear about RedDwarf's progress and future plans and learn how you can become active in the community.
Crowbar was the first open source OpenStack-focused deployment framework and has been gaining significant traction as the foundation of both Dell and SUSE private clouds. Crowbar makes deployments fast, repeatable and maintainable. This session will give an update on Crowbar's progress and capabilities, such as late-binding deployment and sophisticated network configurations. We'll take time to explain where Crowbar is going as we expand to include OpenStack upgrades, improved network modeling, expanded CMDB support, heterogeneous operating systems, and pull-from-source deployment.
This talk provides an overview of Heat, a peek inside the CloudFormation template language, and a live demonstration of Heat technologies. Heat provides an Apache 2 licensed CloudFormation orchestration engine that orchestrates cloud infrastructure resources such as storage, networking, instances, and applications into a repeatable running environment for OpenStack IaaS platforms. Heat also provides several advanced features, such as authentication, nested stacks, high availability, and auto-scaling, which will be demonstrated.
The audience will learn how Heat applies to OpenStack cloud environments using repeatable orchestration templates. OpenStack Summit attendees can learn about the emerging CloudFormation template standard and its impact on Linux and open source cloud communities. Real-life examples and live demonstrations make this medium-difficulty technical material approachable.
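For readers new to the template language, a minimal CloudFormation-style template of the kind Heat consumes might look like the following. The top-level sections and the AWS::EC2::Instance resource type follow the published CloudFormation format; the resource name, image, flavor, and key name are placeholders, not output from Heat itself.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative minimal template: one instance",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "F17-x86_64-cfntools",
        "InstanceType": "m1.small",
        "KeyName": "my_key"
      }
    }
  }
}
```

Heat reads a template like this and drives the OpenStack APIs to create the declared resources as one repeatable stack.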
Let’s eliminate the lag between coding and deploying! As we drive towards DevOps continuous deployment, it makes sense that our deployment scripts should be able to bypass packaging and pull directly from source code. That’s exactly what the Crowbar team has created as an option for Folsom deployments. This is a central use case for feature development because you're testing code that is ahead of trunk; however, we see the same use cases for deployments that have bug fixes, proprietary features, pre-release features or any drift from trunk. This feature is the path to maximum control of your OpenStack deployment.
This is a brainstorming session to discuss the OpenStack API:
Support – track the clients and versions that we support, and how to keep track of them and communicate to the OpenStack community
Testing – brainstorm and agree on version support, clients, and data formats used for testing, as well as backward compatibility, changes to API implementations, the process for deprecating APIs, and core products after integration and functional testing
Hyper-V Server is a powerful and free virtualization platform developed by Microsoft. Hyper-V Nova Compute functionality has been restored and merged into the Nova Compute code base in time for the Folsom release, thanks to the combined effort of Microsoft, Cloudbase Solutions and the great developers in our community.
In this session we'll demonstrate how to set up a Hyper-V 2012 based Folsom infrastructure for running Linux, Windows and FreeBSD instances.
We'll highlight Hyper-V Server's great features, like Live Migration, snapshotting, and Replica, all within a Folsom compute infrastructure.
Nova (OpenStack Compute) gives us agility and flexibility for computing infrastructures. Virtualization plays an important role in Nova in providing that agility and flexibility; however, it carries a performance penalty. For example, virtualized servers show significant degradation in response time and context-switch performance compared to bare-metal servers. Some (non-x86) machine architectures of interest to technical computing users have poor or non-existent support for virtualization. Also, some users simply want the bare-metal machine itself, without virtualization. One alternative to virtualization for provisioning hardware in a cloud environment is bare-metal provisioning: rebooting the machine into a fresh system image before handing control to the user, and wiping the local hard drive when the user is done with the resources.
NTT docomo and USC/ISI are proposing the "General Bare-Metal Provisioning Framework on OpenStack" (http://wiki.openstack.org/GeneralBareMetalProvisioningFramework) to solve these problems.
OpenStack delivers a massively scalable cloud management framework for use by any organization running on standard hardware. Joshua McKenty, one of OpenStack’s founders at NASA and a driving force behind its continued success, will discuss how OpenStack got its start, how it’s changing the cloud computing landscape, and the future of OpenStack in this new era of cloud computing.
In particular, Joshua will describe the major components of OpenStack — the virtual machines, virtual hard-drives, object storage, virtual networks, image registry, and the dashboard — as well as the associated sub-projects and the OpenStack services that combine to provide an incredibly flexible self-service infrastructure platform. He’ll also talk about some of the diverse use cases for OpenStack and the exciting ways in which companies are extending the capabilities and accessibility of OpenStack through partnerships and technology development.
Jonathan Bryce, Executive Director of the OpenStack Foundation, starts Day 2 of the Summit by talking about why OpenStack—both the software and the community—will be successful.
“Ubuntu's vision for Open Cloud computing and role in OpenStack”
Agility and scale are the twin goals of cloud computing. Companies want to improve developer agility and productivity, and they also want the benefits of scale economics, both internally and from their public cloud providers. With production deployments of OpenStack on Ubuntu 12.04 LTS at many sites, including telcos, service providers and enterprises, and with Ubuntu being the OS deployed at the largest scale on public clouds, we have gained valuable insight into the challenges and opportunities that both present. Mark will share knowledge from Canonical's support of real OpenStack deployments together with news and roadmaps for cloud users that are building out large-scale public and private clouds with Ubuntu and OpenStack.
After only two years, OpenStack has become the de-facto standard open source cloud computing platform and is the fastest growing open source project in history. With over 5,000 members from over 850 different organizations in over 80 countries supporting over 500 active developers that have contributed over 500,000 lines of code, OpenStack is the foundation of numerous products and services. The future of OpenStack will depend on ensuring that customers, service providers, and product companies are able to build the most innovative products and services on top of the OpenStack platform. Chris C. Kemp, co-founder of OpenStack and CEO of Nebula, will discuss the challenges and opportunities ahead for OpenStack, the Foundation, the community, and the opportunity for OpenStack to disrupt enterprise computing in the next five years in the same way that mobile has disrupted personal computing.
At the Folsom design summit, OpenStack developers decided we would embrace infrastructure high availability, a standard feature in other cloud stack solutions. Infrastructure HA is particularly important to those users who are looking to OpenStack as a means of data center consolidation involving legacy applications, and who are not at liberty to execute a major application redesign from the ground up.
Since then, we have seen the addition of an in-progress High Availability Guide, contributors writing integration code for the Pacemaker high-availability stack (the standard Linux HA infrastructure layer), and rapid improvement in OpenStack's high availability features. We are also seeing exciting progress in storage technologies like Swift and Ceph, which come with high availability built in.
This session is a progress report and outlook on high availability features in Folsom and Grizzly.
Fortune 2000 companies are eager for OpenStack to open up cost models around cloud computing, much like Linux did for operating systems, but most have not yet taken the plunge. Why? In this talk, hear tales from the field about the most commonly cited features that enterprises say are missing before they will fully embrace OpenStack for both private and public cloud usage.
Modern web and mobile applications demand a highly available, distributed object storage system that supports highly concurrent workloads. OpenStack Swift solves these problems at large service providers, top web properties and large enterprises. This talk provides an overview of Swift's architecture, where you will learn about its components, and covers use cases including high-volume websites, mobile application development, custom file-sharing applications, data analytics, and providing private/public storage infrastructure-as-a-service. It is also for those who want to understand Swift's design goals and how best to make use of this component of OpenStack, and a great introduction for anyone interested in using or learning more about Swift.
It's time to take Fog to the next level. Fog is the leading Ruby abstraction library for the OpenStack API and is embedded in several ecosystem products. With the addition of Quantum, there is a need to extend Fog's models to comprehend cloud networking. Our vision includes adding both hidden functionality, like setting up networks by default, and explicit functions that expose the power of elastic networking. The goal of this session is to discuss the best ways to surface this functionality and to coordinate development so that we do not duplicate or fork efforts.
Moderator: Gretchen Curtis, Piston
Sean Michael Kerner
A presentation to introduce new members of the OpenStack community to Nova, including a brief history of the project and an overview of the supported features.
The open source configuration management and automation framework Chef is used to deploy and manage many large public and private installations of OpenStack and supports a wide variety of deployment scenarios. Chef for OpenStack is a project based on the healthy exchange of code, ideas and documentation for deploying and operating OpenStack with Chef. With involvement from Intel, Dell, HP, Rackspace and many others there is a community of collaboration between users, developers and operators. This session will discuss the currently available resources and documentation, the evolution and layout of the project and the roadmap going forward.
Joshua McKenty will take a look at the future of OpenStack.
Let me tell you a dirty little secret: while OpenStack is a great project, it is extremely complicated for an individual with an engineering/operations focus, rather than a programming focus, to get to their first code contribution.
My name is Colin, and I am an engineer. Although I initially got involved with OpenStack in the context of operations, I was quickly drawn into contributing code to the project. I found that many of the tools and workflows used to contribute to OpenStack are completely foreign to those (like me) with an operations focus.
In this session I will go over the biggest challenges I faced as an engineer contributing, and review the tools and techniques I used to get past them. This information will be presented with the goal of arming engineers who are just getting involved with the knowledge and tools necessary to get to their first successful contribution and beyond.
The importance of community - Leveraging the power of the meeting
Talking your employer into supporting OpenStack and the CLA
Setting up your dev environments - getting beyond Devstack
Getting Git: using the Git repository for those who don't code for a living
Testing your code - what do you mean it doesn't build?
How to give back, and get other people involved in the community.
In this case study, MercadoLibre, the e-commerce leader in Latin America, will share how it satisfied its huge infrastructure-provisioning needs with OpenStack as its technology changed.
They will share a brief story about how they moved from a high-ops virtualization environment to a real cloud OS built around the OpenStack Compute (1,000+ compute nodes), Swift, Keystone and Glance core services, and how they managed to move from Cactus to Essex.
As an open innovation project, OpenStack attracts contributions from an overwhelming number of individuals and companies. Often, contributions are tactical, of the scratch-your-own-itch type. While welcome, this narrow focus tends to add technical debt to the project and lower the overall quality of the result. To ensure its long-term health, a project needs strategic contributions, focused on improving the quality, coherence and security of the end product.
In this presentation, Thierry Carrez, Release Manager for OpenStack, will introduce the contribution process within OpenStack, present the different types of contributions, and explain why some are more desirable than others. Mark McLoughlin, Principal Engineer at Red Hat and Project Technical Lead for OpenStack-Common, will follow up explaining how Red Hat is successfully contributing strategically to OpenStack and how other companies can follow this example.
Limitations of hardware-dependent networks are preventing enterprises from realizing the full potential of cloud computing, and therefore vastly limiting the return on their investment. Traditional networks don't scale the way storage and compute resources can, and generally the only option is to scale up (purchasing bigger networking devices). This approach has several cons: it is not linearly scalable, it is expensive, and it can cause service interruptions. The solution: virtualize the network.
In this session, Ben Cherian will educate the audience on what network virtualization is and the potential for this modern approach.
Operating OpenStack successfully requires investing in automated process and configuration management. This approach, known as DevOps, is changing how cloud applications and infrastructure are deployed and managed. We’ve assembled a panel of top industry experts to discuss lessons learned and challenges remaining as OpenStack embraces DevOps. Our panel speakers represent Puppet (Dan Bode), Chef (Matt Ray), Juju (Jorge Castro) and DevStack (Jesse Andrews). Monty Taylor will lead the discussion as moderator. Come prepared to learn about operating OpenStack and hear about the pros and cons of these different tools.
We are all participating in building OpenStack. Just as Linux distributions helped would-be Linux users manage the complexity and configuration of myriad libraries, file placement, and executables to successfully get the system to boot and run, all indications are that OpenStack distributions are poised to help would-be OpenStack users quickly get a fully functional, configured cloud up and running. Companies are bringing unique value-added capabilities to the OpenStack core while providing full enterprise support and services for their distributions. In this panel discussion, Dell will moderate a conversation with experts from Red Hat, SUSE, Canonical, Morphlabs and Dell about the importance of OpenStack distributions in the evolution of OpenStack and how they can support the needs of different markets and customer profiles.
How OpenStack will score in the Brazilian Cloud! - Renato Armani
OpenStack in India - opportunities galore - Sriram Subramanian
OpenStack for Vietnam's growth, and how OpenStack works with Government to make its impacts in developing countries - Trung Nguyen
The OpenStack way in China - Yujie Du
Hear about how to get started deploying OpenStack and XenServer, including a demo of a new tool to help move your existing images into a XenServer-based OpenStack cloud. Also come along to hear more about how Rackspace deploys XenServer in its public cloud.
In this session we will explain our vision of how OpenStack will gain relevance in the datacenter and become the cornerstone from which all the services required for operation, existing or new, are managed. This vision is the result of the feedback StackOps gets from users and customers worldwide. There are already service providers basing all their operations on OpenStack, many new OpenStack services being launched in different parts of the world, and large companies betting on OpenStack as their main revenue generator for the future.
This change will be a process that will occur at different paces in the many different regions, sectors and businesses, due to factors of economic, legal, cultural and technological nature. In all cases, the result will be a total abstraction of most of the current datacenter procedures through automation, with the consequent gain in efficiency.
This process has a common denominator in all cases: it needs to be non-disruptive from the business perspective. This will be achieved by balancing out:
simplicity, not only in the deployment phase but, most importantly, in daily operation
a high level of customization to match the specifics of each business
integrability with existing applications and/or services
For all three elements, the OpenStack ecosystem has the challenge to find the right balance for each customer and evolve to fill in the existing gaps.
This talk first provides an overview of dodai-deploy, a software management tool that can be used to install OpenStack across multiple machines. It then reviews the history of dodai-deploy and shows the new "Install as a Service" feature.
Enterprises have been steadily embracing private cloud environments and are increasingly finding use cases for the public cloud. We are past the point of the cloud being limited to shadow IT and startups. We are now entering the next wave of innovation and adoption which will drive the migration of production workloads to a cloud model and will require seamless public to private interoperability.
This move will create a greater demand for cloud platforms that provide innovative, open services in a secure environment, backed by business level SLAs, and supported by quality service. Adopting a hybrid delivery model to ensure flexibility and scale will no longer be an option but rather a necessity.
Zorawar 'Biri' Singh, SVP & GM, HP Cloud Services, will share how HP's customers are currently taking advantage of this hybrid delivery capability through HP's Converged Cloud vision and strategy. He'll discuss the untapped opportunities and benefits to moving production workloads to the cloud and the potential for OpenStack to lead this next wave of adoption.
WebEx is a leader in on-demand collaboration and the second largest vendor of SaaS for business applications in the world. Earlier this year, the company placed a strategic bet on OpenStack to build a private cloud platform for many of its mission-critical apps. In collaboration with Mirantis, WebEx was able to successfully traverse the implementation path and is currently onboarding a number of production workloads onto its OpenStack cloud. In this talk, WebEx will share more about the project and the rationale behind choosing OpenStack, discuss some of the key roadblocks on the path to production, and touch upon the future roadmap.
DreamHost is launching several public cloud offerings, and along the way we have learned a lot. The goal of this talk is to share some of the lessons, tips, and tricks we have learned while designing public cloud architectures.
Talk Outline and Notes:
The problems: Scale, Speed, Monitoring, Uptime, Security, Cost.
The domains: Networking, Storage, Hypervisors.
Scale: The pervasive problem. There are obvious issues (data center size, network switching architecture, etc.), but there are also not-so-obvious problems: DNS zone sizes and rebuild times, ARP/ND table sizes, growing beyond Ethernet VLANs, and the multiplication factor on small delays. IPv6 is a requirement, not a nice-to-have, because we're out of IPv4 addresses.
Speed: Disk I/O, memory I/O, network I/O, and CPU time all matter. Performance cannot be cloud-washed any longer. Beyond the user-focused problems there are pressing concerns inside the provider as well: how fast can you expand? Automation is a requirement here; it may cause some initial delays but will pay off long term.
Monitoring: Start with simple service monitoring via Nagios, then go deeper. Agents on everything. "Graph all the things" (we use ...).
Uptime: Decouple everything. Have multiple paths. Maintenance windows are a thing of the past. HA is no longer optional.
In a little over two years, OpenStack has shattered adoption benchmarks set by previous open source projects and gained acceptance as the future of the data center but has your career kept up with its blistering pace? Come join Niki Acosta, Cloud Evangelist at Rackspace in an engaging panel session to learn about the career paths of OpenStack heavyweights and how you can accelerate your career with OpenStack. Panelists will include John Purrier, CTO at AppFog; Gretchen Curtis, Co-Founder and Chief Strategy Officer at Piston Cloud Computing; Mike Metral, formerly of Sandia Labs and current Enterprise Architect at Rackspace; and John Dickinson, Director of Technology at SwiftStack and PTL for OpenStack Swift.
Did you know that VMware is a top-10 code and bugfix contributor to OpenStack, and that VMware has helped customers with OpenStack production deployments? In this session, learn how customers are using OpenStack with VMware offerings like ESX, Cloud Foundry and RabbitMQ. Walk away with a better understanding of VMware’s plans with OpenStack and the software-defined data center, as well as VMware’s efforts around expanding Nicira’s contributions to Quantum and Open vSwitch.
While traditional HA and clustering techniques work for many cases, they are also frequently at the root of catastrophic failures. In this presentation we will cover other well understood and proven approaches to providing greater uptime. Load balancing and service distribution patterns provide an alternative that allows for better horizontal scaling, greater aggregate throughput, and have failure characteristics that reduce the chance of cascading failures. In addition, these alternative approaches are typically simpler to implement and have fewer moving parts, which means they are less prone to failures and have significantly less operational overhead. In this session we'll discuss general principles around using equal-cost multi-pathing (ECMP) for IP flow load balancing, using routing protocols judiciously for managing ECMP flows, and show what happens in various failure conditions and how this is different from traditional load balancers, HA pairs, or clustering.
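The core ECMP idea the session builds on can be sketched in a few lines: hash each flow's 5-tuple onto one of the equal-cost next hops, so packets of a flow stay on one path (no reordering) while distinct flows spread across all paths. This is an illustrative sketch of the principle, not router code; real implementations hash in hardware.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pin an IP flow to one of several equal-cost next hops by 5-tuple hash."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
# The same flow always lands on the same next hop:
a = ecmp_next_hop("192.0.2.10", "198.51.100.7", "tcp", 51515, 443, hops)
b = ecmp_next_hop("192.0.2.10", "198.51.100.7", "tcp", 51515, 443, hops)
# If one path fails, only the flows mapped to it are rehashed onto the
# survivors, unlike an active/passive HA pair where everything fails over.
surviving = [h for h in hops if h != a]
```

The failure behavior is the point: capacity degrades gracefully by one path's worth of flows rather than triggering a wholesale cutover.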
Rackspace’s Enterprise Business Intelligence group (EBI) was seeking a way to move away from its current data warehouse solution. They were looking for a cost-effective way to scale out new infrastructure in order to meet the increasing business demands of users, house increasing amounts of data, and customize the collection of data. For this, they utilized Hadoop, Cassandra and PostgreSQL with an OpenStack cloud and built the Analytical Compute Grid (ACG).
The Analytical Compute Grid (ACG) is a solution that enables Rackspace to:
House an ever-growing set of data collected from multiple business units.
Allow for quick collection of data.
Rapidly scale up and down to meet fluctuating demands.
Provision a wide variety of open source virtual machines.
Utilize open source technology to move away from enterprise license fees and avoid vendor lock-in to any one particular product.
The team selected OpenStack to be the heart of the Analytic Compute Grid for the following reasons:
OpenStack offers a rich and robust API that allows the ACG engine to interface with OpenStack to perform all of the necessary dynamic scaling functions.
ACG needs to rapidly create and destroy virtual machines. OpenStack provides the necessary speed of provisioning and scale to accomplish these tasks.
ACG utilizes OpenStack images to create system VMs; an OpenStack image contains all the components necessary for a VM to join the ACG system.
OpenStack allows us to configure images with different data stores:
Cassandra database for columnar data structures
PostgreSQL for relational data structures
The Hadoop Distributed File System for large unstructured and noisy data
As a result, ACG enables users to select the optimal data store for the information collected. ACG provides SQL-like syntax for data retrieval via a standard JDBC interface, regardless of the underlying data store type.
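The "one interface, many stores" idea above can be sketched as a thin dispatch layer. All names here are our own illustration, not ACG's actual implementation: a catalog maps each table to the store that holds it, so callers issue one kind of query without caring whether Cassandra, PostgreSQL, or HDFS answers it.

```python
# Sketch of a facade that routes a SQL-like query to the store backing the
# table, hiding the heterogeneous data stores behind one entry point.

class AnalyticsFacade:
    def __init__(self):
        # table name -> (store name, handler); the handlers stand in for
        # real Cassandra/PostgreSQL/HDFS drivers.
        self.catalog = {}

    def register(self, table, store, handler):
        self.catalog[table] = (store, handler)

    def query(self, table, sql):
        """Dispatch to whichever store holds this table."""
        store, handler = self.catalog[table]
        return store, handler(sql)

facade = AnalyticsFacade()
facade.register("clicks", "cassandra", lambda sql: f"columnar result for {sql!r}")
facade.register("orders", "postgresql", lambda sql: f"relational result for {sql!r}")

# The caller never names the backing store:
store, result = facade.query("orders", "SELECT * FROM orders")
```

A real JDBC-style layer would also parse the SQL and translate it per store; the catalog-based dispatch is the part the talk's design implies.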
Startups and enterprises alike have placed strategic bets on monetizing the OpenStack wave in various ways. In this talk, Boris Renski, an ecosystem insider and board member of the OpenStack Foundation, will offer his views on how various organizations in the OpenStack ecosystem are trying to monetize it today. He will also share his perspective on what works and what doesn’t, based on his experience growing Mirantis’ OpenStack business to 60+ people in just under 18 months.
Everyone loves it when things are fast, and that holds true whether you're visiting http://www.livingsocial.com or hitting the OpenStack Nova API and requesting, "Please show me all the instances I've got running". Nobody ever writes in asking for support because things are too fast.
With the rapid growth of Sina Weibo, which now has over 350 million registered users around the world, and its affiliated services, such as the Weibo open API and the WeiGame platform, which are now the most popular social and online gaming platforms in China, we needed a robust and flexible infrastructure to host those applications and platforms.
In early 2011, we initiated Sina Web Services (SWS), the first public IaaS cloud based on OpenStack in China. Thanks to the extensible architecture of OpenStack and the help of the OpenStack community, we were able to build our IaaS platform and push it into production in a very short time.
We have accumulated much experience over 1.5 years of developing and operating an OpenStack public cloud. Since OpenStack is still far from a 'turnkey' solution, in order to put it to production use we had to develop several necessary services in addition to the current OpenStack projects. In this presentation, I will dive into the actual deployment and network topology of our production environment. I will also share the extensions we have made to OpenStack, such as Keystone integration with our existing identity system, OpenStack security enhancements, and Swift performance optimizations. I will also make public the design and architecture of our own services, such as our LBaaS implementation; our user and admin console, whose UI and underlying components we designed ourselves and which is now completely different from Horizon; and Dough and Kanyun, which are now community projects addressing the metering and billing needs of OpenStack.
Regarding the operation of OpenStack, I will also talk about how we manage our internal branch alongside the official OpenStack code base, and how we built our CI system to achieve highly automated operations.
The OpenStack project consists of dozens of sub-projects, each of which is managed via a number of email lists and forums as well as software engineering tools such as issue trackers, version control, continuous integration, etc. Achieving a coherent view of all this information is tedious and immensely difficult. In this session, you will learn how the OpenStack Community team is piloting the Wikidsmart platform from zAgile to integrate information across different systems for a unified view in real-time dashboards. Questions can now be answered instantly with faceted search of concepts across all the different repositories. And people and artifacts can be traced across different repositories in order to reconcile people and their corresponding contributions.
With the Nicira acquisition, the spotlight is on network virtualization. The Folsom release will introduce OpenStack’s networking component for the first time. And indeed, networking may turn out to be the most strategic component in OpenStack, and the key to realizing a vision of the software-defined data center. It’s a big opportunity and in addition to Nicira, there are many start-ups already tackling the challenges to network virtualization. This panel will include representatives from a few of these companies: Midokura, BigSwitch and 1-2 others, in a space that is quickly becoming more strategic to OpenStack success. Topics will include VMware’s acquisition of Nicira, estimated market opportunity for network virtualization (or SDN), realities of enterprise adoption, challenges to standard OpenStack networking (automation, scale, security etc.) and different technical approaches. The panel will conclude with predictions from the panel and an interactive Q&A with the audience.
The default Nova configuration works well for many use cases, but there are a myriad of options (many underused) that can help Nova better suit your needs. This talk will discuss some of these options, with a focus on new options in Folsom.
Nova’s configuration options:
3 RPC backends
3+ DB backends
5+ virtualization layers
500+ configuration options
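To make those dimensions concrete, here is a hedged sketch of how a few of these choices might appear in a Folsom-era nova.conf. Exact flag names and values vary by release and deployment, so treat this as illustrative rather than a reference configuration:

```ini
[DEFAULT]
# RPC backend: one of the three supported implementations
# (kombu/RabbitMQ, qpid, or zmq)
rpc_backend = nova.openstack.common.rpc.impl_kombu
# Database backend, selected via an SQLAlchemy connection string
sql_connection = mysql://nova:password@127.0.0.1/nova
# Virtualization layer: libvirt (KVM/QEMU), xenapi, vmwareapi, ...
compute_driver = nova.virt.libvirt.LibvirtDriver
libvirt_type = kvm
```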
At the beginning of 2012, eBay decided to overhaul its technology platform to support an increased focus on rapid innovation and developer efficiency. To support this effort we built a developer cloud as an extension to our production cloud to give developers maximum flexibility and freedom while protecting business-critical infrastructure.
This talk will present the challenges of designing a network infrastructure for both scale and isolation, and how, by using software-defined networks, we were able to accommodate these two dimensions. We will also go through the automation aspects, presenting how the combination of OpenStack and Quantum allowed the rapid deployment of our developer cloud.
OpenStack has started what will be a complete rebuild of the enterprise data center. It will take time and won't be a straight path, but this shift represents one of the most fundamental changes to enterprise infrastructure. Many existing companies will adapt and thrive; many more will fail. The massive disruption in the enterprise IT market will create enormous opportunities for startups in the years ahead. My talk will cover some of my observations as a venture investor and why now is the time to think about OpenStack as a foundation for your startup.
CERN, the European Laboratory for particle physics, supports over 10,000 scientists worldwide in their quest to find out what the Universe is made of and how it works. As part of a multi-year project to modernise and streamline its tools and processes, CERN is using OpenStack and other leading open source tools to double the computing resources available for physicists to around 15,000 servers by 2015.
This talk will review the current state of OpenStack at CERN, the early experiences of deployment such as LDAP integration and configuring with Puppet along with the plans for production.
At the OpenStack Design Summit and Conference in October 2011, Rackspace announced that it would be moving the OpenStack open source project to a separate foundation. Following that, OpenStack invited community participation to help guide the process of moving to a foundation. There have been a number of lessons learned during this process, and a couple of members of the OpenStack Drafting Committee would like to share those with you. This session will be led by Alice King, Vice President & Associate General Counsel, Rackspace, and Eileen Evans, Vice President & Associate General Counsel, Hewlett-Packard Company.
When Morphlabs created mCloud - cloud software intended to allow service providers and enterprises to create EC2-like private clouds - Eucalyptus was the only open source compute cloud project in the game.
That all changed in July 2010 with the launch of OpenStack.
With multiple production deployments of Eucalyptus-based mCloud, find out how Morphlabs moved from Eucalyptus to OpenStack in 6 weeks.
Christopher Aedo will go into detail around pain points and challenges the team faced.
Nicira runs its Worldwide (WW) Field and Sales Engineering team's customer-facing demo and evaluation infrastructure, as well as its Engineering DevTest, Build, and QA automation infrastructure, on an internal private OpenStack cloud built using OpenStack Essex components, OpenStack Quantum, and a front-end UI that lets users deploy complex, multi-tier enterprise applications with multiple private networks, with just a click of a button!
In this presentation, we will provide users with an overview of the Nicira OpenStack cloud and our complex enterprise customer use cases for an OpenStack cloud, and explain how we have on-boarded the following applications to the cloud:
A complex training environment for Nicira's Network Virtualization Platform that requires 9 VMs, 4 private networks, and a topology with a varying number of NICs on each VM.
A training lab that uses OpenStack, Quantum and Nicira's Network Virtualization Platform to create a complete OpenStack deployment within an OpenStack Tenant!
The ability to dynamically add/remove users to "projects" based on the user's current assignment.
The default Essex Nova scheduler is slow when scheduling multiple VMs, taking about 26 seconds to schedule 100 VMs in our lab environment. This talk will cover the process we took to identify the bottlenecks in both the Essex and Folsom schedulers, covering:
how eventlet changes everything
interplay between sql and Nova
understanding Nova logs
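One reason multi-VM scheduling can be slow is structural: if each request rescans every host, total work grows with the product of VMs and hosts. The toy sketch below is not Nova code; the host names and the "most free RAM wins" weigher are invented purely to illustrate that cost shape.

```python
# Minimal sketch (not Nova code) of why scheduling N VMs one at a
# time can be slow: the naive loop re-examines every host for every
# request, so total work is O(N * H) before any database or RPC
# overhead is even counted.

def pick_host(hosts):
    """Return the host with the most free RAM (a toy weigher)."""
    return max(hosts, key=lambda h: hosts[h])

def schedule_naive(hosts, num_vms, ram_per_vm):
    """Schedule one VM at a time, rescanning all hosts each time."""
    placements = []
    for _ in range(num_vms):
        host = pick_host(hosts)          # O(H) scan per VM
        hosts[host] -= ram_per_vm        # claim the resources
        placements.append(host)
    return placements

hosts = {"node1": 8192, "node2": 4096, "node3": 16384}
print(schedule_naive(dict(hosts), 5, 2048))
```

In a real scheduler, eventlet context switches, per-request SQL round trips, and logging add to this per-VM cost, which is why profiling all three (as the talk's bullets suggest) matters.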
Since its creation, OpenStack has enabled developers to build easy-to-use, massively scalable public and private clouds. This session will look at a new use for OpenStack that is helping organizations of all sizes thrive in a changing IT environment: cloud-hosted virtual desktops.
The consumerization of IT is changing the way the workforce operates, with mobility, remote workers and BYOD presenting tough challenges for IT administrators. Since its introduction, virtual desktop infrastructure (VDI) has looked like a promising solution, but it has fallen short due to high CapEx costs and complicated on-site infrastructure requirements. Solution providers such as Dell, NaviSite, Rackspace and NetApp are instead turning to the cloud to host virtual desktops that are affordable, quick to provision and easy to manage. With OpenStack as the foundation for cloud-hosted virtual desktops, service providers can reduce operational costs while deploying Windows desktops that are secure, scalable and accessible on any device.
In this session, Desktone Vice President of Engineering Ken Ringdahl will discuss how OpenStack can simplify virtual desktop deployments. The discussion will include:
Examples of use cases for cloud-hosted virtual desktops founded on OpenStack.
The unique properties of desktops when running on the cloud, and how these can be addressed in OpenStack type architectures.
How the underlying storage, compute and networking infrastructure of OpenStack enables service providers to choose a preferred infrastructure vendor.
The ability of tenant automation to drive down onboarding costs.
Best practices for integrating cloud-hosted virtual desktops with other IaaS initiatives to share infrastructure, reduce equipment and enable faster deployment.
In this talk, the patent context of open source and Linux will be outlined, including the evolution from the SCO litigation through the Mobile Patent Wars. As part of this history, the formation and operational evolution of OIN from passive to active deterrent will be discussed, with an eye toward providing OpenStack Foundation members the benefit of OIN's experience as well as offering a set of considerations designed to aid in shaping the Foundation's IP policy. This session will be presented by Keith Bergelt, CEO of Open Invention Network, in the hope of ensuring that the Foundation's IP policy advances the OpenStack project's goals and is coherent with OIN's community-based approach to the preservation of freedom from patent aggression.
As enterprises grow to adopt OpenStack, use cases around resilient storage, and the vendors they already have procurement relationships with, keep popping up. There are two foundational functions that need to be supported within OpenStack in order to build a rich storage system that meets the demands of the enterprise. I'd like to focus on what the community wants to see in these features and how they should be built.
Storage Tiering - This allows less critical instances to be placed on commodity storage, and dev/test environments to be placed on lower-grade storage. Can this be accomplished by the multi-provider code being submitted, or by a hint to the storage system underneath, or do we need to put in something new? What semantics would we like to put in place here?
Shared Volumes - Many clustering filesystems and solutions such as SQL Server rely on shared storage volumes being presented to multiple instances. Currently, volumes can only be mounted to one instance within Cinder. Should we remove this restriction for all volumes (traditional local-disk installations are incapable of this), or create a new type of volume to support instance sharing?
Enterprise IT is increasingly challenged not only to reduce costs but also to respond to business imperatives with agility and innovation. Intel IT has addressed this challenge since 2010 by deploying a private cloud and continuously evolving its infrastructure. To improve the speed and availability with which we deploy new cloud services, we recently augmented our private cloud with OpenStack along with our own internal code. Das Kamhout, Intel IT Principal Engineer, will map the journey Intel took to integrate OpenStack into our enterprise environment and what enterprise IT needs from OpenStack. Join this session to discuss how Intel and the OpenStack community can work together to catalyze this transformation in the datacenter.
Cloud computing workloads are increasingly sophisticated and specialized. End users continue to recognize the value of choice in their computing infrastructure (i.e. being able to select the 'right tool for the job' in their data centers). Why is computational architectural diversity important? How do you work with a relatively new community on enabling choice? Is it possible to do so in a manner that is viewed as a "win-win" for everyone? Please join us for an overview of how IBM is working with the OpenStack community to expand the choices available to our customers, enabling them to optimize their workloads.
The past five years have been a time of significant transformation in the legal and policy side of open source, particularly con
In recent times, there has been a growing interest in evaluating the feasibility of running High Performance Computing (HPC) applications on cloud computing environments. Flexibility, scalability, and dynamic provisioning capab
OpenStack is a maturing force in the cloud ecosystem and has significant security-related “growing pains”. No environment is more challenging for deployment than a public cloud. Our business is to allow people to run code and place files deep within our infrastructure. With customer data touching most systems, this can be a dangerous proposition. In this talk I will discuss some of the architectural hurdles we have had to deal with and the countermeasures we have deployed over and above what you’d expect to see in a private cloud. We’ll walk through a security wish list that would make OpenStack the most secure cloud platform in the world and discuss how to move in that direction.
In a number of OpenStack projects, systems communicate via a messaging/RPC mechanism. The safety and reliability of this mechanism is vital to the security of OpenStack clouds. However, this messaging layer currently relies on implicit trust based on basic network connectivity. In Grizzly, there exists a blueprint to add cryptographic trust between systems.
Eric Windisch is currently developing this trust mechanism based on feedback from the Folsom design summit. He will highlight the requirements of a trusted messaging system and the architecture of this solution.
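The core idea behind message-level trust can be sketched with a signed envelope: each message carries a MAC computed with a sender key, so the receiver can verify origin and integrity rather than trusting the network. This is an illustration of the concept only, not the blueprint's actual design; a real system also needs key distribution, timestamps, and replay protection.

```python
# Hedged sketch of message-level trust for an RPC layer: sign each
# message with a shared HMAC key so the receiver can detect tampering.
# This is NOT Nova/oslo code; names and envelope format are invented.
import hashlib
import hmac
import json

def sign_message(key, payload):
    """Wrap a payload in an envelope carrying an HMAC-SHA256 tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    digest = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": payload, "hmac": digest}

def verify_message(key, envelope):
    """Recompute the tag and compare in constant time."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["hmac"])

key = b"shared-secret"
env = sign_message(key, {"method": "run_instance", "args": {"id": 1}})
print(verify_message(key, env))           # True for an untampered message
env["body"]["args"]["id"] = 2
print(verify_message(key, env))           # False after tampering
```

A compromised host without the key can neither forge nor silently alter messages, which is the property the blueprint aims to add to the RPC layer.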
72% of the 21 million health care records that have been compromised in the United States since September of 2009 should have been trivially protected using comprehensive encryption of the data before being written to disk. See: http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/breachtool.html.
A busy OpenStack compute node might spin up hundreds or thousands of instances per day. Ephemeral, block, and object storage -- each and every one of these should always be encrypted before being written to the underlying physical media. Multiple excellent file and disk encryption solutions exist in Linux, such as eCryptfs and dmcrypt. With cryptographic co-processor acceleration (AES-NI) available on most modern CPUs, encryption is essentially "free".
The lack of quality sources of entropy in cloud computing environment is a problem that has gained considerable attention this year, and has consequences that permeate the entire fabric of cryptography in enterprises. Virtual machines typically lack physical hardware devices that provide random noise, such as microphones, wireless adapters, or serial bus interrupts. Monitoring network interrupts generated by traffic (such as ARP requests) is one of the few sources of unpredictable input in cloud networks, but even that traffic can be somewhat scarce in some networks. Without sufficient randomness, servers routinely generate vulnerable TLS certificates and predictable RSA/DSA private SSH keys.
In this session, we’ll discuss a draft RFC proposing a network protocol for peer-to-peer exchange of randomness, review an open source implementation of that protocol in C, consider the results of some entropy quality tests, and propose its inclusion as an OpenStack Incubator project. We’ll consider the opportunity for collaboration among cloud guests to interchange randomness in ways that defy predictability from outside observers, internal users, and offline attackers alike.
We'll also discuss other potential solutions to the problem, such as passing through Intel's new DRNG to guests, extending Nova to seed guests with better entropy through a virtio or disk device, as well as other suggestions brought by attendees.
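On Linux, the symptom described above is directly observable: the kernel exposes the current size of its entropy pool under /proc, and guests with few hardware event sources often report low values. A small sketch of checking it (Linux-specific path; returns None elsewhere):

```python
# Read the Linux kernel's current entropy-pool estimate, in bits.
# Virtual machines without hardware noise sources frequently report
# low values here, which is the problem this session addresses.

def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None  # not Linux, or /proc unavailable

bits = entropy_available()
print(bits)
```

A guest that observes a persistently low number here is exactly the kind of peer that would benefit from the proposed randomness-exchange protocol or from hypervisor-seeded entropy.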
The presentation will look into the new security challenges that network virtualization presents, and the issues faced by both traditional tools and emerging approaches in addressing these challenges. It will discuss the importance of integrating security considerations in the design and deployment of network virtualization. It will also explore the new ideas and technologies in network virtualization security offered by networking companies in the OpenStack ecosystem.
There is a growing demand from cloud service providers and consumers alike for better transparency into the system infrastructure and hardware platform used for their services. This impacts the audit and the resultant trustworthiness of the compute environment. Methods based purely on trusted computing (TC) solutions have proven difficult to implement and scale over the last decade. However, there has been continued extensive research in this area to address the challenges, because of the increasing unmet need. While the original intentions of TC - to ensure trustworthiness of a platform - still hold, there is an opportunity today to simplify the implementation. The key idea is to include platform attributes in an Attribute-Based Identity Management system (IdM) to gain better visibility into the platform and use it to deduce the security state of the system. Incorporating the platform attributes will enable service providers to predict the behavior of the platform and enforce policies to protect digital content. Such a trust model may also reduce the burden on the user, and may allow cases where platform credentials are sufficient, avoiding the need for user credentials if they are not needed for the service. This would preserve the privacy of the user, provide higher security assurance and audit-based risk assessment, and improve the usability of the overall cloud system.
In this presentation we will present architectural considerations for platform-attribute-based IdM for a cloud identity platform. We will show how access control policies can leverage platform attributes for security decision making as well as for fine-grained auditing. We will demonstrate how this maps to key real-world security, identity management, and auditing processes from prevalent standards initiatives including the Cloud Security Alliance, OASIS, and the Open Data Center Alliance.
We will also show how this model opens doors for extended research in (1) privacy-preserving cryptographic primitives that can enforce platform-attribute-based IdM policies; (2) real-world examples of security policies based on platform capabilities (with or without user credentials); and (3) a scalable and seamless mutual attestation model in a cloud provider and cloud consumer environment. A better view and understanding of the hardware platform capabilities (beyond just the TPM registers), and how they integrate with an attribute-based IdM, is key to leveraging the transparency and trustworthiness advantages of the proposed model.
This talk will describe the R&D recently performed at the University of Kent to add federated identity management to OpenStack. Specifically, the Keystone pipeline has been modified by adding a new middleware component that calls a discovery service and a credential validation service, in order to facilitate outgoing and incoming federated access, respectively. A client library has been built that makes use of these new Keystone services. Several OpenStack clients have been modified to make calls to these new library APIs, so that federated access to Keystone services is possible. The technique that has been employed is designed to be federated identity management protocol agnostic, so that different FIM systems can be plugged in, such as OpenID, OAuth, SAML, PKI, Kerberos, etc. The working prototype uses SAML requests and responses.
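The general shape of a pipeline middleware that delegates to a pluggable assertion validator can be sketched as follows. This is purely illustrative: the header name, function names, and validator interface are invented for this example, and the actual Kent middleware differs.

```python
# Hypothetical sketch of a WSGI middleware in the style of a Keystone
# pipeline filter: requests carrying a federated assertion are routed
# through a pluggable, protocol-agnostic validator (SAML, OpenID, ...).
# All names here are invented for illustration.

def federation_middleware(app, validate_assertion):
    """Wrap `app`; validate_assertion(token) -> local user name or None."""
    def middleware(environ, start_response):
        assertion = environ.get("HTTP_X_FEDERATED_ASSERTION")
        if assertion is not None:
            user = validate_assertion(assertion)
            if user is None:
                start_response("401 Unauthorized", [])
                return [b"invalid federated assertion"]
            # Record the mapped local identity for downstream filters.
            environ["REMOTE_USER"] = user
        return app(environ, start_response)
    return middleware

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ.get("REMOTE_USER", "anonymous").encode()]

# Plug in a toy validator; a real one would speak SAML, OpenID, etc.
wrapped = federation_middleware(
    demo_app, lambda tok: "alice" if tok == "valid" else None)
```

Because the validator is injected, swapping SAML for OpenID or Kerberos changes only that callable, which is the protocol-agnostic property the prototype aims for.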
As OpenStack continues to mature, it is increasingly important for the community to be proactive in improving security. The OpenStack Security Group (OSSG) is a new effort led by Nebula and HP to bring together security professionals who can work to address this need. Our goal is to create a group that complements the Vulnerability Management Team by working to improve the security in each project's software architecture, contributing software to address security relevant blueprints and bugs, and providing cross-project security assessments. This talk will introduce the OSSG and describe some of our early success stories, while starting a conversation about the best path forward for OpenStack security.
For many of the same reasons that software-as-a-service is catching on with enterprise buyers, delivering web services on top of infrastructure-as-a-service architectures is appealing to the SaaS developers. Operational agility, lower CapEx, and a broad array of tools and services are on tap that make both public and private IaaS clouds a great platform to build on. But how do you do this securely, especially in the public cloud where you have no access to the network or hypervisor your servers are running in?
Furthermore, for many SaaS providers, the person charged with security considerations isn’t a CSO or IT specialist, but rather, a “DevOps” guru – someone with their hands in both development and operations. While the traditional security professional is focused on compliance and security rules, this new crop is more concerned with continuous development and high availability.
In this session, CloudPassage Chief Evangelist, Andrew Hay, will break down the top security considerations that are specific to the cloud and offer practical steps for securing cloud-based application development. He’ll also address the following:
Why perimeter-centric and hypervisor-based security doesn’t work in the cloud
Which components of cloud security are the customer’s responsibility and which belong to the service providers
Which layers of security are the must-haves for those just getting started
Why the cloud server itself has to be self-defending (i.e. if you put a server out into the cloud, usually it’s being attacked within 30 minutes)