What plugins, tools and behaviors can help you get the most out of your Jenkins setup? How do you learn this without all of the pain? We’ll find out together as we go over a set of Jenkins power tools, good habits to get into and best practices to follow that will help with any Jenkins setup.
As modern developers, we all want to take advantage of continuous delivery, microservices and the cloud. Jenkins pipelines are a great enabler for this model, staging software builds into sections, but this also requires flexibility in your application delivery platform. So, how do we achieve an agile platform for a Jenkins pipeline? Companies across all industries, and of any size, need to iterate and release software updates quickly to differentiate their business. Join Sufyaan Kazi as he covers some of the challenges in delivering complex software and modern software architectures. He will also cover ideas for creating a platform and agile infrastructure to support continuous delivery and highlight how modern Platform as a Service (PaaS) systems like Cloud Foundry complement the use of Jenkins.
Attendees will learn about a system which solves a code reproducibility problem in the automotive systems area. The code reproducibility problem is defined as being able to rebuild a binary identical software release for a specific amount of time (up to 15-20 years after initial release) and to rebuild a software application with only small changes, keeping the unchanged parts binary identical with the related release. The problem complexity grows due to: external dependencies in a changing environment, a requirement to perform cross-platform builds, scaling issues (hundreds of gigabytes of source code) and more. The Technisat Dresden team achieved their goals by combining several tools in the right way: Jenkins + custom Jenkins plugin, SaltStack, KVM and SVN/Git.
This talk will address Jenkins-based continuous integration (CI) in the area of embedded systems, which include both hardware and software components. An overview of common automation cases, challenges and their solutions based on Jenkins CI services will be presented. The specifics of Jenkins usage in the hardware area (available plugins and workarounds, environment and desired high availability features) will also be discussed. The session will cover several automation examples and case studies.
Jenkins is an incredibly flexible application. In this session, Stephan will show attendees how the Jenkins infrastructure is set up at bitExpert AG and how the team uses Jenkins on a daily basis. You will learn about their favourite Jenkins plugins and how they use PHP tools like Composer, Phing, PHPUnit and others in Jenkins jobs. Last but not least, some practical insights will be given about how to use Jenkins and Satis to build your own internal Composer package repository.
Correct this if it's wrong, but as a software developer you have two main dreams: to enjoy your coding and not to have to care about anything but the code. Setting up an environment and maintaining a CI/CD cycle for your software can be complicated and painful. The good news is, it doesn't have to be! In this talk, Yoav will demo some of the most popular alternatives for a cloud-based development life cycle: from CI builds with DEV@cloud, through artifact deployment to a binary repository and finally, rolling out your release on a truly modern distribution platform.
If you are taking the quality of your software seriously, you'll have numerous automated tests across many different Jenkins jobs. But getting a grip on all of your automated tests -- and then figuring out whether your software is good enough to go live -- becomes harder and harder as you speed up software delivery. Viktor will share tips on how naming conventions, partitioning of testware and mirroring the application's structure in the test code help you best handle automated testing with Jenkins. Viktor will also provide insight into how to keep this setup manageable and will share practical experiences of managing a large portfolio of automated tests. Finally, he will showcase several practices that help you manage all your results, plus add aggregation, trend analysis and qualification capabilities to your Jenkins setup. These practices will help you draw the right conclusions from your tests and deliver code faster, with the confidence that your systems won't fail in production.
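The partitioning idea above can be sketched in a few lines: if test names encode the application module they cover, results from many Jenkins jobs can be grouped back onto the application's structure. This is only an illustration; the double-underscore convention and the test names below are hypothetical, not taken from the talk.

```python
from collections import defaultdict

def group_tests_by_module(test_names):
    """Group test names of the form '<module>__<feature>__<scenario>'
    by the application module they mirror."""
    groups = defaultdict(list)
    for name in test_names:
        module = name.split("__", 1)[0]  # the convention puts the module first
        groups[module].append(name)
    return dict(groups)

results = group_tests_by_module([
    "checkout__payment__rejects_expired_card",
    "checkout__payment__accepts_visa",
    "catalog__search__handles_empty_query",
])
```

With a convention like this in place, per-module trend analysis and aggregation become a matter of grouping rather than manual bookkeeping.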
Jenkins is the clear #1 continuous integration (CI) and continuous delivery (CD) tool because it is very effective when it comes to automated testing and deployment of software in complex and diverse environments. Jenkins has achieved this position in spite of the fact that it doesn’t look as “trendy” as some of the other tools available in the CI/CD space.
In this talk, Gus and Tom will detail ongoing efforts to evolve the Jenkins UI. The goal of the initiative is to give Jenkins a more modern look and feel. They’ll talk about the challenges inherent in this effort, as well as some new Jenkins UI building tools and patterns being experimented with at CloudBees.
People interested in the Jenkins UI should attend this talk. Please bring your bag of ideas and get involved with this effort.
So it is release crunch time and your developers are hacking away. Your Jenkins CI build queue is getting longer because of a lack of free slaves. What do you do? Allocate more dedicated hardware? OK, but what happens when the crunch time is over? Those slave instances will then be sitting idle. Not good! Do you wish that nodes in your infrastructure could be better utilized? Is your infrastructure a victim of "static partitioning"? How awesome would it be if there were a way to scale to multiple Jenkins slaves automatically, whenever needed - and then also scale back down when the work is done? This talk is about solving these problems. Pradeepto will cover all the concepts necessary to understand Apache Mesos: its architecture, what it does and how it does it. He will then demo Jenkins running over Apache Mesos and demonstrate the ability to scale up and down as needed. Code will be shared for better understanding of the solution and the concepts behind it.
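The elastic behavior described above boils down to a simple decision rule: add slaves while builds are queued, release them when they sit idle. In the real setup, Mesos and its Jenkins integration make this decision; the function below is only a toy sketch of the policy, with made-up parameters.

```python
def scaling_decision(queued_builds, idle_slaves, current_slaves, max_slaves):
    """Return how many slaves to add (positive) or remove (negative).

    Illustrative only: the actual scheduling is done by Apache Mesos,
    which offers resources to Jenkins as demand rises and falls.
    """
    if queued_builds > 0 and current_slaves < max_slaves:
        # Scale up, but never beyond the resource ceiling.
        return min(queued_builds, max_slaves - current_slaves)
    if queued_builds == 0 and idle_slaves > 0:
        # Nothing waiting: give the idle slaves back to the cluster.
        return -idle_slaves
    return 0
```

During crunch time the first branch fires and slaves are added; afterwards the second branch returns the idle capacity, which is exactly the "scale back down" half of the story.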
Moving to DevOps in large, complex enterprise-IT environments is an incremental process - one that requires culture, process and technology. Technology like Jenkins, Nexus, Puppet, Docker and more. In this session, you'll hear about first-hand experiences building successful enterprise-scale DevOps practices, with a specific look at the role of Jenkins working with other key technologies in the continuous tool chain. Learn about additional practices to support the goal of driving down cycle times. And no DevOps practice is complete without accounting for compliance and security requirements; Jenkins can play a key role there, too.
This talk will dig deep into continuous integration (CI) and the key processes that make up CI. We will discuss the relationships and process flows between change management, configuration management and release/build management, and how the CI process, when coupled with a solid performance engineering discipline across the product lifecycle, can result in a better user experience for your web and mobile applications. We will speak about the entire lifecycle, the "conveyor belt" of the application lifecycle, with concentration on the "Big 3" processes that support the overall CI strategy. We will include a real-world example of how SOASTA uses Jenkins and other open source solutions for its "conveyor belt" and how this process enabled SOASTA to deliver over 100 product releases in 2014 and still maintain customer SLAs for its SaaS product offerings.
Learn how to practice configuration as code by using the Job DSL plugin for Jenkins. Find out how to organize Job DSL scripts and apply code reuse and refactoring to your Jenkins configuration. This talk will cover advanced techniques for large scale installations and show how to extend the Job DSL for your favorite plugins.
Continuous delivery (CD) is a competitive differentiator and development and operations teams are under pressure to deliver software faster. The DevOps world is going through a storm of changes - Docker being the key one. This session by Kohsuke and Harpreet will introduce a set of plugins that address various aspects of CD with Docker.
Masood will illustrate the achievements and challenges faced whilst implementing a continuous delivery (CD) framework for a major retailer by using a rigorous but simple development process, integrated with Jenkins build pipelines. The pipelines have been carefully architected to orchestrate the various build, deployment, testing and release stages of ecommerce applications. The presenter will conclude with future goals regarding a cloud-based CD process using Jenkins.
The speaker has delivered four enterprise-ready plugins to automotive, banking and telecommunications/OEM industries. In this session, you will learn how to use the Credentials plugin for your authentication needs, global configuration for shared resources, support REST services for job automation and Workflow in your pipeline. We’ll demonstrate why this is all important in the enterprise.
Why do mobile development teams struggle to include functional and non-functional testing during build automation? Is the challenge implementing automated tests? Or parallel execution on real devices? We’ll discuss how to drive fast feedback through implementing CI using Jenkins, specifically:
· Techniques to automate test execution
· Techniques to identify performance issues
This presentation will highlight an integrated development process built with Jenkins and Cloud Foundry. We will show a continuous delivery lifecycle from source code control (Git) to Jenkins build (Maven and Gradle) to live deployment on Cloud Foundry. We will use Jenkins to do a blue/green deploy of an application by deploying two environments, then switch request routing between the two without downtime. The two versions are then load balanced, allowing for testing of the new version and easy replacement or fall back to the existing version. We will run a hands-on demo and show the usability and flexibility of this integrated build pipeline.
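The route-switching step of the blue/green deploy described above can be expressed as a short sequence of `cf` CLI commands: mapping the shared route to the new app first load-balances traffic across both versions, and unmapping the old app completes the cutover without downtime. The sketch below only builds that command sequence; the app names, host and domain are invented for illustration.

```python
def blue_green_switch(domain, host, old_app, new_app):
    """Return the cf CLI commands that shift traffic from old_app to new_app.

    While both commands are in flight, the route is mapped to both apps,
    so Cloud Foundry load-balances requests between the two versions.
    """
    return [
        f"cf map-route {new_app} {domain} --hostname {host}",
        f"cf unmap-route {old_app} {domain} --hostname {host}",
    ]

# Hypothetical example: cut 'shop.example.com' over from blue to green.
commands = blue_green_switch("example.com", "shop", "shop-blue", "shop-green")
```

Running the commands in the opposite order (map the old app back, unmap the new one) gives you the easy fallback mentioned in the abstract.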
Jenkins is all about streamlining the flow of greatness from coders to customers. But don't stop there - even that pipeline can be accelerated with the adoption of a DSL to simplify and optimise the workflow from developer check-in to release. In this session, chief mechanic Sven Erik Knop will demonstrate how to lubricate the CD machine with a new DSL to connect a coder check-in with code review, build, test and release.
Ravello Systems has relied heavily on Jenkins since the early days. During the company's first four years, their Jenkins setup changed and evolved to the point that it was out of control. Ravello decided to take all the experience from those years and create a new and improved Jenkins setup. This session will share the lessons that were learned the hard way - and explain how the CI process was optimized. The process of re-doing the CI mechanism for a large, constantly developing group will be discussed, as well as mechanisms for revision control in Jenkins, job infrastructure, architecture guidelines for maximal flexibility and reuse and various other considerations. The overall continuous integration and testing strategies - which are completely cloud-based and cover all of Ravello's varied components (from a hypervisor through networking and storage layers, distributed backend systems and all the way to an HTML5-based UI and a cross-platform client-side application) - will be described.
vCloud Air is the ultimate destination for development and testing in the cloud. Many organizations are implementing DevOps initiatives to increase development throughput and shorten product delivery timeframes. The combination of vSphere and vCloud Air provides you with the ability to deploy applications on-premises, in the cloud or in hybrid environments with no changes, thanks to the underlying vSphere platform. In this session, we will present how the DevOps team can use Jenkins, VMware Code Stream and other existing tools they love to implement hybrid continuous integration and continuous delivery pipeline architectures.
This session is about using Jenkins, AWS and Docker to run your own PaaS and move from DevOps to NoOps. The goal of NoOps is to improve the process of deploying applications while reducing the time to market - all without sacrificing quality. At Choose Digital, the developers own the complete process, from writing code through production deployment. This is enabled by a simple combination of GitHub, Jenkins, Docker and AWS Elastic Beanstalk. After our Platform as a Service (PaaS) provider left the market, we successfully replicated our own PaaS-like environment using Jenkins and a plugin that we wrote to enable blue/green deployments with AWS Elastic Beanstalk. These changes allowed us to continue our normal deployment flows without having to change our development processes.
Do you have complex pipeline scripts that you run for your continuous integration (CI)? Do you want to access Jenkins from those scripts? What about searching and accessing the artifacts? Or would you like to block a job until something has finished correctly? Interact with slaves? Would you like to have some kind of synchronization between your jobs and pipeline scripts and yet keep your job configurations as simple as possible? This talk is about getting more programmatic control of your Jenkins instance from your pipeline scripts using the Python API. We will look at the power of the API by showing working code examples and demoing the results. This talk will be very friendly to Jenkins beginners and intermediates alike. We will walk through the concepts and actual code during the presentation.
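The "block a job until something has finished" idea mentioned above usually comes down to a polling loop with a timeout. Here is a minimal, generic sketch of such a helper; in a real pipeline script the condition would wrap a call into a Jenkins Python client (for example, checking whether a build returned by the jenkinsapi library is still running), but the example below uses a stand-in condition so it stays self-contained.

```python
import time

def wait_for(condition, timeout=60, interval=5, sleep=time.sleep):
    """Poll condition() until it is truthy, or raise TimeoutError.

    condition: zero-argument callable, e.g. one that asks Jenkins
    whether the last build of a job has completed (hypothetical usage).
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)

# Stand-in for a Jenkins query: pretend the build finishes on the third poll.
attempts = []
def build_finished():
    attempts.append(1)
    return len(attempts) >= 3

done = wait_for(build_finished, timeout=60, interval=1, sleep=lambda s: None)
```

Keeping the synchronization in the script, rather than in job configuration, is what lets the job definitions themselves stay simple.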
Big data is now everywhere, from mobile media analytics, banking, industry, avionics and even in medicine to monitor expansion of epidemics. In this session, Luca will show how continuous integration and continuous delivery is applied to a big data scenario that poses new challenges to the existing Jenkins framework. He will present the implementation of an agile build and deployment process used in big data software development projects for media and financial organizations in London. The talk will start with a presentation of the workflow and then will explain how existing Jenkins plugins were leveraged, as well as how integration with Docker, Mesos and the Hadoop ecosystem was achieved.
Have you ever completed a build and wondered what exactly changed? Typically, output logs and parameterized build input data such as SCM branches, bug tracking issues and notes entered by the developer are lost once the build has completed. At best, Jenkins keeps this information for a limited time. The only historical reference is, perhaps, in a report. Consequently, this data cannot easily be reused for future builds or reviewed during the auditing process. Using Jenkins, Groovy and Neo4j, this data can persist for the life of a project. This presentation will describe the simple steps taken to save this information for posterity.
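As a rough illustration of persisting build metadata in a graph, the sketch below composes a parameterized Cypher statement that links a job node to a build node carrying the build's inputs. The node labels, relationship name and property names are invented for this example; the talk's actual schema may differ, and the statement would be executed through a Neo4j driver rather than built in isolation.

```python
def build_record_cypher(job_name, build_number, inputs):
    """Compose a Cypher MERGE statement recording one build of a job,
    attaching the parameterized build inputs (SCM branch, issue key, ...)
    as properties so they outlive Jenkins' own build history."""
    props = ", ".join(f"{k}: ${k}" for k in sorted(inputs))
    query = (
        "MERGE (j:Job {name: $job}) "
        f"MERGE (b:Build {{number: $number, {props}}}) "
        "MERGE (j)-[:RAN]->(b)"
    )
    parameters = {"job": job_name, "number": build_number, **inputs}
    return query, parameters

# Hypothetical build: release job #42, built from 'main' for issue JIRA-7.
query, params = build_record_cypher("release", 42, {"branch": "main", "issue": "JIRA-7"})
```

Because the data lives in the graph rather than in build logs, it can be queried later for audits or reused to seed future builds.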
Camunda is an open source, Java-based framework for business process automation. As a middleware technology, Camunda integrates with six different Java application servers (in different versions) and supports six different database products. The team at Camunda maintains five supported versions of Camunda itself, adding two versions every year. Maintaining the necessary continuous integration (CI) infrastructure based on virtual machines became increasingly problematic, with poor build reproducibility and limited scalability. Feedback cycles for developers were unacceptably long. Recently, Camunda switched from the virtual machine model to a container model based on Docker. The Camunda team now develops infrastructure as code and applies microservice-like separation of concerns. In the talk, Daniel will share the new CI architecture and present lessons learned.
Christophe will share his experience at Wyplay, a provider of connected TV middleware, in Jenkins scaling and best practices. At Wyplay, the Jenkins master server had grown to include 34 attached slaves, 78 plugins and 527 jobs. That would not have been a problem if there were no major issues - but that was not the case. The team had issues related to performance, reliability, ability to upgrade and security.
Because of these issues, project teams started to build their own master servers and on several of them, the issues (particularly security) were even greater than on the original Jenkins master server.
The Wyplay team decided to migrate to a system where they could easily generate and manage new masters for each project. They migrated the 500 original jobs to this new infrastructure. Christophe will explain in his talk how Wyplay used Docker to support the migration to the new architecture. In addition to this master infrastructure, Wyplay also makes use of Docker for launching dedicated slaves for small jobs using the Docker plugin. Tips on how this infrastructure can be enhanced will be provided.
Sanoma is the largest publisher in The Netherlands and Finland, running some of the largest websites and mobile applications in those regions. Since 2010, Sanoma has been using Hadoop to process big data and gain insights into their products, customers and advertisers. After using traditional ETL tools for data ingestion and process management, Sanoma moved to Jenkins in 2012 to automate the big data infrastructure. This presentation is about the past, present and future of the Sanoma Data Platform and the role Jenkins plays within it. Attendees will get a brief introduction to the challenges involved with big data and the way they have been tackled at Sanoma. We will also touch on the pains and joys we get from Jenkins.
In a large company where several dozen projects and branches each need their own pipelines, you cannot afford to maintain all of the jobs manually. For security reasons and because of knowledge limitations, Clear2Pay does not want to open up Jenkins job configurations on the centralised master. Instead, the Clear2Pay team offers project teams a "shopping list" that they can use to automatically generate their own pipelines for all branches, without requiring the Jenkins administration team to intervene. Projects just update a jenkins.properties file in the SCM branch and the pipeline for that branch is updated accordingly. This allows the number of projects to scale, each getting its own pipeline on the Jenkins master, without having to worry about administering hundreds of jobs.
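The "shopping list" mechanism above can be sketched as: parse the branch's jenkins.properties, then derive the per-branch jobs from the stages it requests. The property key (`stages`) and the `project-branch-stage` naming scheme below are hypothetical stand-ins for whatever convention Clear2Pay actually uses.

```python
def parse_properties(text):
    """Parse simple 'key = value' lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def pipeline_job_names(project, branch, props):
    """Derive per-branch job names from the requested pipeline stages."""
    stages = [s.strip() for s in props.get("stages", "build").split(",")]
    return [f"{project}-{branch}-{stage}" for stage in stages]

# Hypothetical shopping list checked in on a release branch.
props = parse_properties("# shopping list\nstages = build, test, deploy\n")
jobs = pipeline_job_names("payments", "release-2.1", props)
```

A seed job on the master would then create or update exactly these jobs, which is what keeps the administration team out of the loop as branches come and go.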
In this session, you will learn how you can easily apply continuous delivery practices with Jenkins CI. From build and deploy to test, maintenance and monitoring, many processes can be easily orchestrated with Jenkins. Nobuaki will cover a very simple case for which he implemented continuous delivery with Jenkins CI, Azure and Selenium. This project is a basic case of continuous delivery - especially noteworthy because it targets a Windows program. :-)
LPCXpresso is a multi-platform IDE for developers of embedded software to run on NXP Semiconductors' ARM-based microcontrollers. NXP needs to test that the debugger can execute programs on numerous different development boards that connect to the USB ports of host computers. Besides building a complex software product, the Jenkins installation drives an automated test farm consisting of home-built software-controlled USB switches ("cows") that control a huge array of combinations of test board, debug probe and host platform. This talk will give a tour of the NXP farm, including video of the cows in action, and will describe the features of Jenkins that are used to make it work, with particular emphasis on dynamic selection of combinations within matrix jobs, parameterized triggers and the Summary Display plugin. Finally, plans to migrate to the new Workflow plugin will be discussed. NXP believes the Workflow plugin will simplify the structure and make it more maintainable.
Learn how to use the Gradle JPI plugin to enable a 100% Groovy plugin development environment. We will delve into Groovy as the primary programming language, Spock for writing tests and Gradle as the build system.
Attendees will learn about the concept of using several nodes as a single Jenkins slave. At Yandex, we’re deploying software environments consisting of many nodes from the cloud and we can leverage existing Jenkins slave management mechanisms - such as scheduling, automated provisioning and load-balancing - via this single-slave abstraction. This greatly decreases the time needed to support lots of separate slave nodes. At Yandex, we have created a plugin for these purposes and it will be presented in this session.