Have you ever completed a build and wondered what exactly changed? Typically, output logs and parameterized build input data such as SCM branches, bug tracking issues and notes entered by the developer are lost once the build has completed. At best, Jenkins keeps this information for a limited time. The only historical reference is, perhaps, in a report. Consequently, this data cannot easily be reused for future builds or reviewed during the auditing process. Using Jenkins, Groovy and Neo4j, this data can persist for the life of a project. This presentation will describe the simple steps taken to save this information for posterity.
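As a rough sketch of the idea (the endpoint URL, node label and property names here are assumptions for illustration, not the presenter's actual code), a post-build Groovy step could write build metadata to Neo4j through its transactional Cypher HTTP endpoint:

```groovy
// Hypothetical post-build Groovy step: persist build metadata as a
// Build node in Neo4j via the transactional Cypher endpoint.
// Requires a Neo4j server listening on localhost:7474.
import groovy.json.JsonOutput

def payload = JsonOutput.toJson([statements: [[
    statement : 'CREATE (b:Build {job: {job}, number: {num}, branch: {branch}, notes: {notes}})',
    parameters: [job: 'portal-build', num: 42, branch: 'feature/audit', notes: 'fixes JIRA-123']
]]])

def conn = new URL('http://localhost:7474/db/data/transaction/commit').openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.setRequestProperty('Content-Type', 'application/json')
conn.outputStream.withWriter { it << payload }
println conn.inputStream.text   // Neo4j echoes results or errors back as JSON
```

Once the metadata lives in the graph, a later build or an auditor can query it with Cypher long after Jenkins has rotated the build away.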
Automated testing throughout the continuous integration (CI) process allows you to detect errors instantly, work more efficiently and, ultimately, deliver software faster. But implementing and scaling automated testing is often complicated, time-consuming, and downright frustrating!
In this talk, BlazeMeter CEO & Founder Alon Girmonsky will explore ways to take the pain away - with the help of a new open source test automation framework. This framework makes it easy to scale automated testing while using Jenkins and JMeter. Join this session and learn how to:
- Take away the pain of integrating JMeter and Jenkins
- Dynamically create JMeter tests from a configuration file and run them from Jenkins on every commit
- Run hundreds of tests in parallel and automatically feed results back into Jenkins
- Simultaneously run any number of JMeter tests from a single command line
- Make JMeter scripts version-control-friendly and human-readable
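The abstract doesn't spell out the framework, but BlazeMeter's open source Taurus tool matches this description; a minimal Taurus-style scenario, kept in version control and run on every commit, might look like this (URLs and numbers are illustrative):

```yaml
# Illustrative Taurus-style config: a JMeter-compatible load scenario
# described in human-readable YAML rather than JMX XML.
execution:
- concurrency: 50        # virtual users
  ramp-up: 1m
  hold-for: 5m
  scenario: smoke

scenarios:
  smoke:
    requests:
    - url: http://example.com/
    - url: http://example.com/login
      method: POST
```

A Jenkins build step can then invoke `bzt smoke.yml`, and the same file diffs cleanly in code review — which is the "version-control friendly" point above.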
This presentation will cover an actual use case of mobile automation to proactively drive mobile website and application testing activities. Topics covered include:
- Using one script to test mobile and desktop websites
- Ensuring smoke test execution as part of the continuous integration (CI) process
- Providing timely reporting to make go/no-go decisions
- Using the Perfecto Mobile plugin with Eclipse and Java
- Leveraging the Perfecto Mobile cloud for mobile execution
- Using Selenium WebDriver
- Using TestNG and ReportNG
- Integrating with Jenkins
Results achieved include:
- Execution time reduced by 80%
- Coverage increased by 100% over manual testing
- Jenkins triggers automation of scripts across desktop browsers, including Firefox, IE, Safari and Chrome
- Jenkins schedules regression execution
Get up and running with Jenkins in just a few days with no prior knowledge. Jenkins allows for many different configurations and plugins for creating a true continuous delivery/continuous integration pipeline from build through test. Job design, source control of Jenkins artifacts, parametrization and abstraction are key elements to ensure a reliable pipeline is achieved. These best practices will get you off to a fast start.
Why is it that more teams talk about extending build automation to include functional and non-functional testing than actually manage to do it? Is the challenge implementing automated tests that don't require babysitting? Is it reliably running tests in parallel on real devices? Or is the real point of failure a lab that triggers a 40% false-negative rate? Are there other obstacles?
This session will explore these issues and offer attendees lessons learned from multiple successful projects that are driving fast feedback through implementing a robust CI practice using Jenkins.
The speakers will highlight:
- Two preconditions for success:
a. A test-ready lab offering the ability to mimic end-user environments
b. Automation that executes continuously and in parallel across multiple targets
- Requirements for strong test planning:
a. Designing a progressive test plan that accounts for daily, nightly and weekly feedback windows, along with determining appropriate device coverage
b. Inserting non-functional tests for early identification of performance challenges
Presentation examples will feature Eclipse, Selenium Remote WebDriver, TestNG and Perfecto’s cloud-based lab.
If you are taking the quality of your software seriously, you have numerous automated tests across many different Jenkins jobs. But getting a grip on all of your automated tests -- and then figuring out whether your software is good enough to go live -- becomes harder and harder as you speed up the delivery of your software. In this session, Andrew will share tips on how naming conventions, partitioning of testware and mirroring the application's structure in the test code help you best handle automated testing with Jenkins. He will also provide insight into how to keep the setup manageable and will share practical experiences from managing a large portfolio of automated tests. Finally, Andrew will showcase several practices that help manage all of your results, plus add aggregation, trend analysis and qualification capabilities to your Jenkins setup. These practices will help you draw the right conclusions from your tests and deliver code faster, with the confidence that your systems won't fail in production.
Correct this if it's wrong, but as a software developer you have two main dreams - to enjoy your coding and to not have to care about anything else but code. Setting up an environment and maintaining a CI/CD cycle for your software can be complicated and painful. The good news is, it doesn't have to be! In this talk, Mark will demo some of the most popular alternatives for a cloud-based development life cycle: from CI builds with DEV@cloud, through artifact deployment to a binary repository and finally, rolling out your release on a truly modern distribution platform.
Production Jenkins installations often rely upon a large number of custom scripts and third-party plugins. While this eases initial deployment of Jenkins and allows for greater customization, it can often lead to a worse user experience and hard-to-debug problems, such as failures not being properly captured in Jenkins build output. This talk will focus on experiences gained at Delphix writing Jenkins plugins. Delphix is a heavy user of Jenkins: thousands of builds, and all testing at the company, are managed with it. Recently, work began on internal plugins for several parts of the Jenkins build infrastructure, including the provisioning of test environments. This talk will demonstrate how these plugins provide an improved user experience - for example, by handling cancelled and failed jobs in a smoother fashion - and make some suggestions about what to focus on when writing your own plugins.
The Workflow functionality for Jenkins lets you integrate complex, long-running processes with Jenkins as a management interface, going beyond basic continuous integration (CI). Since the 1.0 release in November 2014, there have been many new features (and bug fixes!) and people have started using it in earnest. Whether you have already begun setting up your own flows or are just interested in hearing what is possible, come and learn where Workflow is at today. This session will cover the basics of Workflow, the more important changes made in the past few months and ideas for the future. Current users, come prepared with questions and suggestions!
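For attendees who haven't seen it yet, a minimal Workflow script (in the 1.x style discussed here; the repository URL, stage names and deploy script are illustrative, not from the session) looks roughly like this:

```groovy
// Minimal Workflow script: allocate an executor, check out, build,
// then pause for human approval before deploying.
node {
    stage 'Build'
    git url: 'https://example.com/repo.git'   // illustrative repo URL
    sh 'mvn -B clean verify'

    stage 'Deploy'
    input 'Deploy this build to staging?'     // human approval gate
    sh './deploy.sh staging'                  // hypothetical deploy script
}
```

Because the whole flow is one Groovy script, it survives Jenkins restarts and can express loops, retries and human gates that chained freestyle jobs cannot.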
Come see how we transformed a stovepiped organization anchored in slow, fear-inducing manual processes into a streamlined team delivering code continuously across a large, federated big-data application using Jenkins and Chef. It's rare when we DevOps professionals get a chance to start fresh, but in this situation our customer brought us in to kick-start their continuous integration/continuous delivery (CI/CD) process using Jenkins and Chef on a greenfield project. We started with a software development group that had made various attempts at CD over a period of two years with little success. Within three months, we had a full continuous delivery pipeline with release builds, automated testing, automated deployment and push-button production releases. Rich will show how he and his team integrated Jenkins, Git, Maven, SonarQube, jUnit, Robot, TestNG and Chef to deliver code changes through multiple test environments in a matter of minutes, allowing for rapid production deployments with a simple manual trigger in the pipeline. The co-ownership of the "infrastructure as code" between the development groups and technical operations teams allowed for rapid, pain-free changes in the software configuration across all environments.
This talk will dig deeply into continuous integration (CI) and the key factors that make up the overall CI process. It will cover the relationships and process flows between change management, configuration management and release/build management. Additionally, we will explore how the CI process, when coupled with a solid performance engineering discipline across the product lifecycle, can result in a better user experience for web and mobile applications. We will speak about the entire lifecycle, the "conveyor belt" of the application lifecycle, with concentration on the Big 3 processes that support the overall CI strategy. Included will be a real-world example of how SOASTA uses Jenkins for its conveyor belt and how this process enabled SOASTA to complete over 100 product releases in 2014 and still maintain customer SLA's for its SaaS product offerings.
Continuous delivery (CD) is a competitive differentiator and development and operations teams are under pressure to deliver software faster. The DevOps world is going through a storm of changes - Docker being the key one. This session by Kohsuke and Harpreet will introduce a set of plugins that address various aspects of CD with Docker.
At FINRA, our CloudBees Jenkins Enterprise-based ecosystem supports over 100 applications, with hundreds of folders, thousands of jobs and corresponding build pipeline views. The need to effectively manage this ecosystem became obvious rather quickly. This talk is about some of the issues we've encountered and how by implementing a list of plugins, i.e. Folders, Role-Based Access Control, Folder Templates, Jenkins Build Pipeline and Job DSL - among others - we were able to reduce the overall effort of creating and managing the Jenkins-based ecosystem by 70-80%.
Drilling Info uses chat ops every day. This talk will be about the exciting, new realm of chat ops in software development and how to use it with Jenkins, as well as ways Drilling Info is using it. Come learn what chat ops entails and how chat ops can deliver information to your team as well as teach new members about your processes. This presentation will also go over some of the more popular frameworks like Slack (Slackbot and slash commands), Hubot and Jenkins bot and how to use them to interact with Jenkins, as well as how to extend them to do more. When the information being shared is in the middle of the conversation, everyone knows what is going on.
Agilex is building a medical data portal application that aggregates data from many sources. As part of the continuous integration process, the Agilex team has set up a complete environment with actual implementation VMs of the different data sources. A total of 12 or more VMs must be launched per build. Once launched, the team then deploys the portal and runs acceptance tests. The provisioning process is complicated by the need to observe specific startup dependencies, and overall performance is slow. The amount of output generated by the VM startup process is large and challenging to analyze. In order to review the startup performance and identify options for improvement, the provisioning and test process was instrumented and the startup timeline was graphed for review. This allowed the team to verify that dependencies between the VMs are observed and showed opportunities for optimizations to reduce build times.
Moving to DevOps in large, complex enterprise IT environments is an incremental process - one that requires changes to culture, process and technology. The "technology" part includes Jenkins, Nexus, Puppet, Docker and more. In this session, you'll hear about first-hand experiences building successful enterprise-scale DevOps practices, specifically looking at the role Jenkins played alongside other key technologies in the continuous tool chain. Learn about additional practices to support the goal of driving down cycle times. Finally, no DevOps practice is complete without accounting for compliance and security requirements. Jenkins plays a key role there, too.
This talk will focus on how to set up a continuous delivery (CD) pipeline in Jenkins for an application with a microservices architecture. Michael will talk briefly about what it means to deliver multiple cooperating but independent services and some of the challenges this presents. He will demonstrate the Delivery Pipeline plugin to organize and visualize the pipeline. Also discussed will be how other plugins, such as the CloudBees Jenkins Enterprise Templates plugin, can be useful in this environment.
This presentation will highlight an integrated development process that involves Java and non-Java code built with CloudBees Jenkins Enterprise and deployed to Cloud Foundry. A software lifecycle of continuous delivery from source code control (Git) to Jenkins build (Maven and Gradle) to live deployment on a Cloud Foundry instance will be shown. We will demo using Jenkins to do a blue/green application deployment. Blue/green deployment, as defined in continuous delivery and well described by Martin Fowler, is having two environments that you can easily switch between without downtime. With a Cloud Foundry blue/green Jenkins deployment, you can push a new version of the application and have a software router add that to an existing version of the application's route. The two versions are then load-balanced, allowing for testing of the new version and easy replacement or fall-back to the existing version. Topics covered include:
- Running jobs on private and public clouds, with deployment to either or both
- Jenkins running on a PaaS and integrated into the PaaS
- The full development lifecycle automated with Jenkins
We will run a hands-on demo and show the beauty and simplicity of an integrated build pipeline with Jenkins and Cloud Foundry.
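The route swap at the heart of blue/green on Cloud Foundry can be sketched with standard cf CLI calls from a Jenkins build (the application names, domain and hostnames are assumptions, not the presenters' setup):

```groovy
// Hedged sketch of a blue/green switch driven from a Workflow script.
// 'myapp-blue' is the live version; 'myapp-green' is the new one.
node {
    sh 'cf push myapp-green -n myapp-staging'             // deploy new version on a temp route
    // ...smoke-test myapp-staging.example.com here...
    sh 'cf map-route myapp-green example.com -n myapp'    // green joins the live route (load-balanced)
    sh 'cf unmap-route myapp-blue example.com -n myapp'   // retire blue once green looks healthy
}
```

If the smoke tests fail, simply skipping the `map-route`/`unmap-route` steps leaves blue untouched - which is the easy fall-back the abstract describes.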
Since DevOps is based on continuous delivery, anything that breaks the continuity is a bottleneck. Often, QA becomes that bottleneck due to an unstable test environment, unavailable test data and/or manual processes. This presentation explores how Jenkins, together with other automation tools and techniques, can help to address process bottlenecks and achieve a true DevOps state.
The OpenStack CI has grown to require over 4,000 Jenkins jobs to build and test across a multitude of configurations. Supporting that many jobs can be quite insane, so we created the Jenkins Job Builder (JJB) to help us automate and manage that complexity. This talk is an introduction to JJB and how it helped us scale out our CI.
Ravello Systems has relied heavily on Jenkins since the early days. During the company's first four years, their Jenkins setup changed and evolved to the point that it was out of control. Ravello decided to take all the experience from those years and create a new and improved Jenkins setup. This session will share the lessons that were learned the hard way - and explain how the CI process was optimized. The process of re-doing the CI mechanism for a large, ever-evolving development group will be discussed, as well as mechanisms for revision control in Jenkins, job infrastructure and architecture guidelines for maximal flexibility and various other considerations. The overall continuous integration and testing strategies - which are completely cloud-based and cover all of Ravello's varied components (from the hypervisor through networking and storage layers, distributed backend systems and all the way to an HTML5-based UI and a cross-platform client-side application) - will be described.
vCloud Air is the ultimate destination for development and testing in the cloud. Many organizations are implementing DevOps initiatives to increase development throughput and shorten product delivery timeframes. The combination of vSphere and vCloud Air provides you with the ability to deploy applications on-premises, in the cloud or in hybrid environments with no changes, thanks to the underlying vSphere platform. In this session, we will present how the DevOps team can use Jenkins, VMware Code Stream and other existing tools they love to implement hybrid continuous integration and continuous delivery pipeline architectures.
Create "Easy buttons" that enable users of all skill levels to leverage the power of Jenkins. Learn fun Groovy coding tricks to enhance your build, test and deploy pipelines.
Sanoma is the largest publisher in The Netherlands and Finland, running some of the largest websites and mobile applications in those regions. Since 2010, Sanoma has been using Hadoop to process big data and gain insights into their products, customers and advertisers. After using traditional ETL tools for data ingestion and process management, Sanoma moved to Jenkins in 2012 to automate the big data infrastructure. This presentation is about the past, present and future of the Sanoma Data Platform and the role Jenkins plays within it. Attendees will get a brief introduction of the challenges involved with big data and the way they have been tackled at Sanoma. Sander will also touch on the pains and joys he and his team get from Jenkins.
In this session, Darryl will demonstrate the combined capabilities of Jenkins and deployment automation. The strengths of Jenkins are already well-understood with its intuitive interface and extensive plugin ecosystem. This hands-on presentation will extend the build pipeline to enterprise deployment scenarios, where more control and governance are required for production environments.
You will learn:
· Why you should use a dedicated deployment automation tool
· How to manage the deployment of enterprise, multi-tier applications
· How to use pipelines to control deployments through multiple environments
· About gates and approval processes
· How to integrate to other tools in the deployment toolchain
Jenkins is the clear #1 continuous integration (CI) and continuous delivery (CD) tool because it is very effective when it comes to automated testing and deployment of software in complex and diverse environments. Jenkins has achieved this position in spite of the fact that it doesn’t look as “trendy” as some of the other tools available in the CI/CD space.
In this talk, Gus and Tom will detail ongoing efforts to evolve the Jenkins UI. The goal of the initiative is to give Jenkins a more modern look and feel. They’ll talk about the challenges inherent in this effort, as well as some new Jenkins UI building tools and patterns being experimented with at CloudBees.
People interested in the Jenkins UI should attend this talk. Please bring your bag of ideas and get involved with this effort.
Is there an easier way to manage Jenkins jobs than via the UI, XML files or the API? There IS a simpler solution: you can use Jenkins Job Builder (an OpenStack upstream project). Jenkins Job Builder offers the following:
- Simple descriptions of Jenkins jobs in YAML format
- Job descriptions kept in a human-readable text format
- Descriptions kept in a version control system, making changes and auditing easier
- A flexible template system, so creating many similarly configured jobs is easy (avoids duplication)
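As a hedged illustration of the template system (the job, project and variable names here are invented for this example), a JJB description might look like:

```yaml
# One job-template stamped out once per module listed under the project.
- job-template:
    name: '{module}-unit-tests'
    builders:
      - shell: 'tox -e py27 {module}'

- project:
    name: example-ci
    module:
      - api
      - worker
    jobs:
      - '{module}-unit-tests'
```

Running `jenkins-jobs update jobs.yaml` then creates or updates `api-unit-tests` and `worker-unit-tests` in Jenkins, and the YAML itself lives in version control alongside the code it tests.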
Jenkins and IBM UrbanCode Deploy can be used together to automate the end-to-end continuous delivery process. See how Jenkins passes builds to IBM UrbanCode Deploy to automate the deployment of applications, middleware configurations and database changes into development, test and production environments, thus delivering higher quality software in a repeatable fashion.
Jenkins workflow automation accelerated and improved a critical development team process at IBM. This process includes running code to build a custom database that is integral to a production application. One of the code steps took 20+ hours to run before workflow automation and now takes two hours to complete. Anti-patterns from long running jobs were eliminated. The key for the solution was using Jenkins workflow in parallel steps. We will walk through how the workflow job runs as configured and the challenges associated with running tasks in parallel. With the workflow job in Jenkins, anyone on the team can now run their latest code to produce new ‘Chef Watson’ databases. No special knowledge is required. This has freed up the team to do what they do best, while letting Jenkins oversee the previously tedious task of running a variety of parsing steps. When someone makes code changes, now they quickly run this process across the full dataset and get fast feedback. As a result, quality issues are quickly identified and the database is delivered to production more frequently. Context will be provided on how this key process fits into the bigger continuous delivery picture. Attendees will see how Tom's team has boosted overall productivity and quality with Jenkins.
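The parallel pattern described above, in hedged outline form (the branch and script names are invented placeholders, not the team's actual parsing steps):

```groovy
// Run independent parsing steps concurrently instead of serially.
node {
    def branches = [:]
    ['recipes', 'ingredients', 'pairings'].each { part ->
        branches[part] = {
            sh "./parse-${part}.sh"   // hypothetical per-dataset parsing script
        }
    }
    parallel branches   // Workflow waits for all branches to finish
}
```

Splitting a 20-hour serial job into branches like these is what lets the total wall-clock time drop to a couple of hours, bounded by the slowest branch.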
James has delivered four enterprise-ready plugins to automotive, banking and telecommunications/OEM industries. In this session, he will demonstrate how to use the Credentials plugin for authentication, Global Configuration for shared resources, Support REST Services for job automation and Workflow to automate your pipeline. James will demonstrate why all of these are important to an enterprise.
This session will highlight experiences with transitioning a large, federal agency towards agile development/continuous delivery best practices. Over 150 projects were moved onto Jenkins and the agency had to manage support and administration of thousands of jobs running continuously. The project’s background and some of the architectural and administrative implementations to scale Jenkins services will be discussed.