All sorts of reasons are used to justify Enterprise Architecture. At the top of the list is usually a desire for high efficiency and minimization of risk. In other words, Enterprise Architecture knows best. We repeatedly end up back at this model because, ultimately, you can’t have the wild west in large technology organizations. Or…can you?
Yes you can, with Simon Wardley’s Pioneers, Settlers, and Town Planners model. Take advantage of organizational theft to ensure amazing people in each of these three groups contribute to increasing the velocity and ongoing sustainability of your organization. While many know about this model, the mechanics (like theft, exciting!) are counterintuitive, and it is often understandably difficult to relate them to real-world problems.
This talk aims to provide real-world tooling/technical practice adoption examples and key things you need to know to begin shifting your complex organization to this model. And yes, even Enterprise Architecture has a place in the new frontier. I consider this one of the best organizational constructs out there for achieving velocity and managing technical risk/debt. Let’s get it out there and keep testing it!
Sharing and transparency are core tenets of DevOps. One outgrowth of this principle is a plethora of open source software and tooling, as well as blog posts and conference talks. Despite all of this available information, it can be difficult to understand how the pieces fit together for a particular infrastructure, which is where reference implementations can provide greater learning and understanding than copy/pasting multiple blogs together. The trouble with much of the available reference material and example architectures is that they only cover initial setup or the first steps along the path to a production environment. One answer to this problem is to open source the actual configuration and design of real-world systems. This serves multiple purposes, from the level of the individual to the level of the business. You can use these points to help address resistance to releasing your work:
- People working outside of the company can gain a greater understanding of the challenges being faced, which also helps when recruiting
- When soliciting advice or help from the community, it is easy to link to the specific piece of code or configuration that is causing the problem
- Knowing that your work is public encourages loose coupling and more conscientious management of sensitive information, which makes your code easier to maintain
While this idea may seem naïve and impossible to pursue in some companies, there is a continuum along which you can participate. If you have a brand new project you may be able to release everything publicly from day one, but if you have an established infrastructure then you can still factor out modules that can be shared.
The old staging methodology is broken for modern development. In fact, the staging server is a holdover from when we built monolithic applications. Find out why microservice architectures are driving ephemeral testing environments and why dev shops of every size should deliver true continuous deployment.
Staging servers slow down development with merge conflicts, slow iteration loops, and labor-intensive processes. To build better software faster in 2017, containers and infrastructure as code are key. DevOps professionals miss this talk at their own peril.
AWS Lambda ushers in a new way of thinking about solving technical problems. Unfortunately, when the only tool you have is a hammer everything starts to look like your thumb. In this talk, come explore the design patterns that work well with Lambda, through a discussion of what failure looks like.
This talk starts off with a quick introduction to what AWS Lambda is and where it came from. From there, we transition into demonstrating the absolute worst possible ways in which to use it, along with what makes these simple mistakes terrible ideas at scale. You will laugh, you will cry, you will immediately begin migrating your code away from anywhere the speaker might conceivably have access to it.
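As a taste of the kind of mistake that is harmless in a demo and terrible at scale (a sketch invented for illustration here, not necessarily one of the speaker's examples): re-creating expensive resources on every invocation instead of initializing them once per container.

```python
# Sketch of a classic Lambda anti-pattern: per-invocation setup of something
# expensive (a DB connection, a large client) instead of reusing it across
# warm invocations. The "connection" here is a stand-in, not a real AWS call.

call_count = {"connections_opened": 0}

def make_connection():
    # Stand-in for an expensive setup step.
    call_count["connections_opened"] += 1
    return object()

# Anti-pattern: a brand-new connection on every single invocation.
def handler_bad(event, context):
    conn = make_connection()
    return {"status": "ok"}

# Better: initialize once at module load; warm invocations reuse it.
_conn = make_connection()

def handler_good(event, context):
    return {"status": "ok"}

for _ in range(3):
    handler_bad({}, None)
    handler_good({}, None)

# Three invocations each: handler_bad opened 3 connections, handler_good
# reused the single shared one, so 4 were opened in total.
print(call_count["connections_opened"])  # 4
```

At one request per second the difference is invisible; at thousands per second, the bad handler exhausts connection pools and dominates your latency.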
What went wrong? Why does this always happen? How can we ensure it Never Happens Again? For most of the internet age, engineering teams have focused on finding a cause of an outage. A belief existed, and persists, that all errors or behaviors can be traced back to a single causal entity. The Root Cause Analysis is conducted in service of finding that entity, and correcting it. By doing so, we have been taught, we prevent recurrence of the error in question.
Much of RCA thinking comes from manufacturing and electrical systems, where simple causality can exist: an oft-failing fuse is caused by poor wiring. In computing environments, there is rarely so simple a cause. Even the simplest application nests dependencies, logic, bottlenecks, and inefficiency. Wrap that application in an operating system, on a server, on a network, on the internet, managed by process and actioned by people, and we add enough complexity to force us to reconsider the Root Cause Analysis approach.
Modern tools and practices, like DevOps, enable engineering teams to adopt significant complexity at relatively low operational cost. Once unthinkable, microservice architecture in a public cloud environment is now a common choice for new software projects. Consider, for a moment, the layers of complexity captured in that decision. Now consider how opaque the agents in those systems are to the operators (us).
Emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities. In theory, complex systems exhibit highly unpredictable behavior, and generate surprising patterns. In practice, teams operating complex engineering systems always see deeply interrelated causality - a blend of people, process, and the systems themselves. So why do we still focus our after action analysis on a Single Cause?
In this talk, we’ll explore these conflicting realities for incident management teams. Attendees will learn about the differences between Root Cause Analysis and newer techniques such as the postmortem. While this is a technical talk with examples from both simple and complex infrastructures, much time will be spent considering the impact of people and process on those same systems. Attendees will leave with some actionable ideas to bring back to their teams to improve their own after action analysis activities.
Many of the poor security stances we see are the result of security paralysis. We’re presented with two options, being insecure or being secure, with little understanding of how to get from one state to the other. With APTs, 0-days, logoed vulnerabilities that make us think we’re all gonna die, and the difficulty of understanding these and other security subjects, many of us choose to just work on other areas of our environment that need our attention; it’s not like there isn’t enough work to do. Why bother investing your time and effort into something you don’t feel you can do well?
But security isn’t a single state. It’s an iterative process that adapts to your needs and risk profile. This session will take people through the process of going from bad to better today in a way that they can then reapply to improve again tomorrow. We’ll walk through the security topics that we obsess about and contrast them with the ways many organizations are actually breached. From there we’ll evaluate our risks, analyze our constraints, and finally apply this mode of thinking to make a bad situation better even if still not perfect.
You won’t walk away from this with the knowledge to prevent a breach from a determined state-sponsored adversary. But you will walk away with an understanding of how to evaluate your risks and needs, weigh paths forward, and finally take action to make forward progress, a process you can apply to a nagging security issue in your environment.
Another day, another high-profile security incident. Forty percent of all data breach incidents occur from attacks on web applications. With DevOps accelerating the pace at which software is developed and deployed, it’s critical to integrate proper security thinking into the DevOps process. Without this, rapid software development can introduce security flaws.
The cybersecurity labor crunch is expected to hit 3.5 million unfilled jobs by 2021. So where do you turn for help when the demand for qualified cybersecurity professionals is high, but the supply is low?
In addition, all security professionals aren’t created equal. How do you identify the security skills needed in DevSecOps?
AppSec engineers have been called unicorns, and in this talk we will make these mythical creatures a reality and discuss:
* The skills needed to be a successful AppSec engineer
* Scenarios in which these skills are used in DevSecOps
* How to identify and groom talent within your own organization
* Ways to scale your team
Just within the last ten years (±3), we have seen at least two separate communities evolve from the generic idea of Systems Administration/Operations. The first, DevOps, grew up very much in public; the second happened more within the halls of “special” companies like Google and Facebook and is only now starting to gain visibility and traction in the wider world. They have a number of priorities in common but also diverge in some very interesting ways. In this talk we’ll explore the key things we can learn from each group and see what might happen if we cross the streams.
Companies adopting microservices are recognizing the importance of the service mesh: a transparent layer that adds application-level resilience, observability, and security to your infrastructure. In this talk, we’ll cover what a service mesh is, why you might need one, and how it works. We’ll show a service mesh in action, using the open source Envoy, Istio, and Ambassador projects running on Kubernetes.
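To make "application-level resilience" concrete, here is a hedged sketch (invented for illustration, not taken from the talk) of the retry-with-backoff logic that a mesh sidecar such as Envoy applies transparently, so that every service team does not have to reimplement it:

```python
import time

def call_with_retries(request_fn, max_attempts=3, base_delay=0.01):
    """Retry a flaky upstream call with exponential backoff.

    This is the kind of policy a service mesh sidecar enforces outside
    the application, rather than code you would actually ship per-service.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Simulate an upstream service that fails twice, then succeeds.
attempts = {"count": 0}

def flaky_service():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("upstream unavailable")
    return "200 OK"

result = call_with_retries(flaky_service)
print(result)  # "200 OK", after two transparent retries
```

The point of the mesh is that this logic (plus timeouts, mTLS, and metrics) lives in the proxy layer, configured declaratively, instead of being copy-pasted into every service.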
As technology becomes a bigger part of our lives, so does our power to affect people’s lives: not always for the better.
Technology can be a lifesaver (e.g., a maps app), but there has been an increase in cases where technology has done undeniable damage to its users. Despite intending to connect people and serve as a positive resource, tech companies such as Facebook, Uber, Twitter, AirBnB, and others have also become vehicles for hate, assaults, murder and more.
Though the stories we hear are extreme, fear not, for we have the power to make a difference. This power to make a change does not solely fall on CEOs and company founders. Even the entry level dev has the potential to make a positive change for countless users. It is imperative that we use our power for good.
DevOps is the best, isn’t it? It feels great to be empowered to accomplish your goals without costly handoffs from, and consultations with, functional teams. But what happens to an organization as it scales its DevOps teams past one or two? Silos. And what do you get with silos? Waste. Unless, of course, you can get the silos talking to each other.
In this talk, we’ll delve into one possible solution to this problem: community-building. DevOps teams need agency, without superfluous and misguided consistency edicts. They will also benefit from access to vetted materials, best practices, and a real-time sounding board to work through the challenges of navigating a shared space. The knock-on benefits of a well-built community are many:
- a continuous “hallway track” for developers to bounce ideas off of one another
- a forum for motivated individuals to hone their leadership skills
- a flattened organization where everyone gets a chance to contribute and learn
- an environment where live support for simple and/or well-solved problems is crowdsourced
All you need is a seed, a platform, and a few guiding principles, some of which may not be immediately obvious. We’ve all heard of making your vocation your vacation. This is kind of like that. Your organization should be productive, of course, but maybe a little less “concrete, beams, and rivets” and a little more “crayons, glue, and stickers”.
Here’s what attendees can expect to take away from this talk:
- a clear idea of why communities are important to a development organization
- a high-level view of communication archetypes, and how community-building differs from them
- an actionable set of ingredients for building a community wherever DevOps is being practiced
There are a lot of great things about the cloud, but the “destroy and rebuild” philosophy, which is really good for building a continuous delivery pipeline, really sucks when applied to troubleshooting production problems. When your application goes haywire, the most valuable engineering skill is not the ability to bring up a copy of your system, or even knowledge of your technology stack (although it doesn’t hurt). It is the skill of understanding and solving problems.
Finding the root cause of an issue and mitigating it with minimal disruption in production is a must-have skill for engineers responsible for managing and maintaining production systems, which nowadays includes ops, DBAs, and devs alike. In this talk I will discuss the skills required to troubleshoot complex systems, the traits that prevent engineers from being successful at troubleshooting, and some techniques, tips, and tricks for troubleshooting complex systems in production.
How does the QA role fit into a world of DevOps and Continuous Delivery? Is it still even relevant? This session will examine these questions and propose ways in which a QA mindset contributes to a successful DevOps and Continuous Delivery practice.
Topics will include:
* A short history of software quality assurance: Waterfall - Agile - DevOps
* Definitions: what QA is and what it isn’t. Who are we?
* Why QA and Ops traditionally have a complementary mindset
* Why a QA mindset is still relevant in a “full-stack”, “NoOps” world
* The quality implications of “Infrastructure as Code”
* What changes about QA in a DevOps world? What skills are needed and what opportunities exist for QA-minded individuals
* Q&A about QA in a DevOps environment
Many organizations have some kind of incident response process to coordinate during a major service outage. Some operationally mature companies incorporate a formal Incident Commander role in their process for a faster, more effective response. The Incident Commander serves as the final decision-maker during a major incident, delegating tasks and listening to input from subject matter experts in order to bring the incident to resolution. Whether or not a company has a formal process that includes an Incident Commander, most companies believe that their most senior engineer is best suited to lead an incident response. I challenge this assumption.
I have learned firsthand that you do not need to be highly technical, let alone senior, to effectively lead a coordinated response to a major incident. Comfort with a structured process and soft skills such as communication are actually more important than technical knowledge for an effective Incident Commander.
All organizations need to maximize the number of people able to lead a major incident response, both to avoid burning out their most senior technical leaders and to increase the overall availability of their service. Audience members will learn how to develop an inclusive incident response process that welcomes more Incident Commanders without compromising response effectiveness, and that they can immediately apply at their own organizations.
Feelings are messy and uncomfortable, so why can’t you just ignore them? Because it doesn’t work; you’ll write substandard code and be a suboptimal teammate.
In this talk you’ll learn how emotions are affecting your work and what you can do about it. I use the metaphor of an API as a way to talk about how your emotional system functions. I’ll go over some of the code in my own emotional API, and how I have refactored it using a toolbox full of techniques.
This will improve your coding, your relationships, and the arc of your career.
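In the spirit of the metaphor, a playful sketch (invented here, not the speaker's actual code) of what "refactoring your emotional API" might look like: default handlers for incoming events, deliberately replaced with better ones.

```python
# A toy model of the talk's metaphor: your emotional system as an API whose
# default event handlers you can notice, inspect, and refactor.

class EmotionalAPI:
    def __init__(self):
        # Default, unexamined reactions to incoming events.
        self.handlers = {"harsh_code_review": "defensiveness"}

    def handle(self, event):
        # Unknown events fall through to a default response.
        return self.handlers.get(event, "confusion")

    def refactor(self, event, new_response):
        # The hard part in real life: noticing a default reaction
        # and deliberately replacing it with a chosen one.
        self.handlers[event] = new_response

me = EmotionalAPI()
before = me.handle("harsh_code_review")   # "defensiveness"

me.refactor("harsh_code_review", "curiosity about the feedback")
after = me.handle("harsh_code_review")    # "curiosity about the feedback"
```

The serious point underneath the joke: reactions are not fixed; they are handlers you can inspect and rewrite with practice.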
Kubernetes is a powerful, operational platform for containerized applications. However, the developer workflow on Kubernetes – how you code, deploy, update, and monitor your services – is much less mature.
How should you lay out your Git repo? How do you create loosely coupled services? How do you support deploying your service at any time?
In this talk, we’ll explore these questions and more. We’ll discuss the journey toward a rapid development workflow, share best practices, and walk through the process we followed to get there.
Microservices are an increasingly popular approach to building cloud-native applications. Dozens of new technologies that streamline microservices development, such as Docker, Kubernetes, and Envoy, have been released over the past few years. But how do you actually use these technologies together to develop, deploy, and run microservices?
In this presentation, we’ll cover the nuances of deploying containerized applications on Kubernetes, including creating a Kubernetes manifest, debugging and logging, and how to build an automated continuous deployment pipeline. Then, we’ll do a brief tour of some of the advanced concepts related to microservices, including service mesh, canary deployments, resilience, and security.
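To preview the "Kubernetes manifest" piece, here is a hedged sketch of the shape of a minimal Deployment, expressed as a Python dict for readability (the service name, image, and port are placeholders, not from the talk):

```python
# The structural skeleton of a Kubernetes Deployment manifest. In practice
# this would be YAML; the dict form makes the nesting explicit.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-service"},          # placeholder name
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "hello-service"}},
        "template": {
            "metadata": {"labels": {"app": "hello-service"}},
            "spec": {
                "containers": [{
                    "name": "hello-service",
                    "image": "example.com/hello-service:1.0",  # placeholder image
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

# A common gotcha: the pod template's labels must match the selector,
# or the API server rejects the Deployment.
assert (deployment["spec"]["selector"]["matchLabels"]
        == deployment["spec"]["template"]["metadata"]["labels"])
```

A continuous deployment pipeline then amounts to rendering a manifest like this with a new image tag and applying it on every merge.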