Video recording and production done by DevOpsDays.
Are you frustrated at work, unable to get everyone to see the obvious? Are you blocked by political machinations, incompetence, sloth, or other embodiments of human failing? I'm here to give you the good news: it is probably your fault, which means you can do something about it.
Like most people, you probably suffer from Skilled Incompetence: Skilled, because you produce actions effortlessly and unconsciously; Incompetence, because your actions generate outcomes other than what you want. Without being aware of it, the strategies you enact generate the very mistrust and resistance that are the source of your frustration.
In this talk I will introduce the work of Chris Argyris, his models for understanding our action strategies, and tools that can help you understand your own behavior better, build trust with others, and create better outcomes by seeking learning rather than winning.
Ultimately this talk will attempt to convince you to take up the challenge of changing your own behavior. Despite what you may espouse about wanting a learning culture, your frustration is a signal that what you want is for other people to learn. Saying you are open to change is different from acting that way. I hope by the end of the talk you want to know the truth about yourself enough to put these tools into practice.
A keynote based on John's blog post on burnout - itrevolution.com/karojisatsu/
DevOps is growing in popularity and even usage in "the real world." It seems like we're slowly getting past unicorns only and seeing many "horses" do DevOps. Soon we'll see the mainstream market - the "donkeys" - start to pick it up and bend DevOps to its will.
This talk will go over "the state of the union" of DevOps and provide some guidance for how to prepare for the donkey apocalypse.
Do you want to implement DevOps ideas in your team? Then be aware that implementing DevOps means a lot more than just changing the general set-up of how the work is organized. It demands many changes in personal behaviour and attitude - first of all from the initiator! Beyond powerful arguments, it is absolutely essential to convince others of the necessity of personal change. So you - as the initiator - have to think about punchy arguments as well as reflect on yourself - and all this before involving the whole team! By starting your own personal change you will also set off a viral process in your team. Your own attitude, your way of communicating, and your attempts to recognize other people's points of view are possible sources of your own change. Sooner or later multipliers will follow you step by step and help you spread your ideas! A motivational loop will start and reach more and more people.
In this session you will learn:
- which questions you have to ask yourself before starting the personal change
- why personal changes need time to grow
- why leading people to change is much more effective than pushing them to change
- how to identify the different roles of all stakeholders in a team
- how to make multipliers work for your ideas
- how to deal with “NoNos”
- how this smart approach to personal change management will have a motivational effect on you and your team
- that it will also work if you are not “the big boss” of the team
But most of all you will learn that there always has to be someone to start with the change – YOU!
While it’s easy to pay lip service to the idea of innovating by failing fast, humans are both neurally geared and financially incentivized to avoid failure. We’ve all heard the tales of woe: blameless reviews that were anything but blameless, encouragement to work on an experimental project with punishment being the primary result of its failure, and the associated fear of doing anything new, speculative or untried. The results are simple: individuals, teams and companies that stagnate slowly.
So how can we create an environment that makes failing fast safe for the participants and their organizations?
In this talk, we’ll cover key strategies for creating an environment that fosters rapid innovation in your organization, including:
Conducting effective and truly blameless project post-mortems
Building a corporate culture & HR processes that encourage failing "the right way"
Measuring the impacts of positive failure – and failing to fail – on your organization’s bottom line
Attendees will leave this presentation with concrete strategies for conquering their own fear of failure and for helping their organizations do the same.
Businesses are speeding up development and automating operations to remain competitive and to get large organizations to scale. Project-based monolithic application updates are replaced by product teams owning containerized microservices. This puts developers on call, responsible for pushing code to production, fixing it when it breaks, and managing the cost and security aspects of running their microservices. In this world, operations skill sets are either embedded in the microservice development teams or applied to building and operating API-driven platforms. The platform automates stress testing, canary-based deployment, and penetration testing, and enforces availability and security requirements. There are no meetings or tickets to file in the delivery process for updating a containerized microservice, which can happen many times a day and takes seconds to complete. The role of site reliability engineering moves from firefighting and fixing outages to building tools for finding problems and routing those problems to the right developers. SREs manage the incident lifecycle for customer-visible problems, and measure and publish availability metrics. This may sound futuristic, but Werner Vogels described it as “You build it, you run it” in 2006.
The aim of continuous delivery has always been to reduce risk by increasing the rate of feedback. Its predecessor, continuous integration, had the same goal and led to the development silo swallowing the testing silo. Continuous delivery, by way of the DevOps movement, is now taking the next step: development is swallowing operations. This has led to the emergence of so-called 'full stack' developers; developers who claim to be able to do both the dev and the ops.
But the stack is not full. As software becomes bigger and more complex, the only way to make useful software is with teams of software developers. Teams require soft skills, and those skills are conspicuously absent from most people's definition of 'full stack developer.'
In this talk I will explain why, to build cutting edge software, technical skills are simply not enough. All 'modern' software developers must be strong communicators, must display empathy for others and must be resilient in the face of criticism; skills that we hardly ever interview for, and that we almost never know how to develop in our staff.
This talk will be of interest to anyone who has a technically strong team that fails to perform, anyone who spends too much of their working day in petty conversations about tool choices, or any developer who wants to know how to move into the upper echelons of the profession.
100% uptime is impossible. Modern architectures are designed around failure but what does that mean for the human aspect of incident management? This talk will consider how to prepare for outages, how to structure the response, and how those experiences and techniques differ for small and large companies.
Key topics will include:
On call - rotations, scheduling, systems and policies
Preparing for downtime - teams, systems and product architecture
Checklists and playbooks
How we actually handle incidents
As an Operations Engineer at Germany’s leading fashion website STYLIGHT, my team and I are constantly looking for ways to reach monitoring zen – with mixed results. Despite using leading tools like New Relic, Loggly, and DataDog, we still aren’t getting the answers we need from our data. My proposed talk will address #monitoringsucks in 2015, what this newfangled data science thing means, and what the tools of the future look like.
Ever since the #monitoringsucks trend kicked off a conversation about the state of monitoring tools in 2011, there has been a flurry of activity resulting in new solutions, improved tools, and applications generating tons of data. However, we are still faced with the same issues almost 4 years later. Alerts still generate far too much noise to be useful. Dashboards aren’t actionable and require human interpretation. The volume of log, time series, and other data makes it difficult to collate, visualize, and interpret in the mythical single pane of glass. How do we definitively solve these problems?
Data science. Using advances in data science and machine learning that are already being applied to “sexy” problems at companies around the globe, we can finally reach a tipping point when it comes to #monitoringsucks issues. New data science tools can pinpoint problems before they hit a static threshold, group alerts from a variety of sources into a single logical error, and prevent eye strain from studying hundreds of graphs. In this talk, I will be discussing the virtues – and pitfalls – of new monitoring entrants like Kale from Etsy, Bosun from Stack Exchange, and Twitter’s open source R package AnomalyDetection.
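As a minimal illustration of the kind of threshold-free detection described above (this sketch is not taken from Kale, Bosun, or AnomalyDetection, which use far more sophisticated statistics), a rolling z-score flags points that deviate from recent behavior rather than crossing a fixed limit:

```python
# Minimal sketch: flag metric values that deviate sharply from the
# trailing window, instead of comparing against a static threshold.
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Return indices whose value deviates from the trailing `window`
    points by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A noisy but stable metric with one sudden spike at index 15:
# only the spike is flagged, with no hand-tuned absolute limit.
metric = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101,
          99, 102, 100, 98, 101, 500, 100, 99, 101, 100]
```

The same series would defeat a static threshold: any limit low enough to catch the spike on a low-traffic service would fire constantly on a high-traffic one.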
The Ignite talks from day two of devopsdays Amsterdam 2015
The Ignite talks from day one of devopsdays Amsterdam 2015
To be a data-driven company, it is not enough to measure every tiny aspect and detail of your business or service; you also have to be sure that the raw data is accurate. Assuring the quality of ever-growing unstructured clickstream data is very hard, though, especially if the accountable team is your dedicated data team but the logs are written and produced by all the other product teams.
We faced this problem recently, and this is how we tackled it. First, forget unstructured data: you can achieve sustainable quality only with structured logs. Then convince your fellow developers in the product teams of the importance of the project. Make the migration as smooth as possible by leveraging the nature of structured data: provide tools that build upon the schema of the JSON logs, and automate monitoring and alerting to make it easier to sustain the quality of the logs.
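A minimal sketch of the kind of schema-driven check such tooling performs (the schema and field names here are hypothetical examples, not taken from the talk; a real pipeline would likely use a library such as jsonschema):

```python
# Minimal sketch: validate structured JSON log lines against a simple
# schema so broken events can be detected and alerted on automatically.
# The schema and field names below are hypothetical examples.
import json

# Hypothetical clickstream event schema: field name -> expected type.
SCHEMA = {"event": str, "user_id": int, "timestamp": str}

def validate_log_line(line):
    """Return a list of problems found in one JSON log line (empty = valid)."""
    problems = []
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return ["not valid JSON"]
    for field, expected in SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"wrong type for {field}")
    return problems
```

Because the schema is explicit, the same definition can drive producer-side tooling in the product teams and consumer-side monitoring in the data team.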
All in all, we focused on communication between teams, selling rather than telling, and creating demand for better-quality data. In the end, the data team, the product teams, and the company all won.