Get ready for three days full of mind-blowing talks: a single track so you won't miss anything, 24 of the best speakers you could ever imagine, and the most exciting content. What else?
Containers are the next model of compute, after VMs and bare metal. And you all know containers are here to stay. They're the leaner, faster and more portable alternative, and one day every app will run in a container. Containers will be ubiquitous because of the wide range of problems they solve, and the huge ecosystem building solutions around them.
The core concepts in container platforms are all open - the image and runtime specifications, the registry, the engines and the orchestrators. The promise of portability makes containers a safe choice for the next generation of software delivery. Companies are making that choice and investing in containers for everything from legacy apps to new cloud-native projects.
In this session I'll demonstrate two of the main use cases for containers - moving existing apps to the cloud, and building apps in lean, modern technology stacks. I'll take an existing ASP.NET 3.5 WebForms app from a Windows Server 2003 VM, migrate it to Docker with no code changes, and then run it in Azure. Then I'll deploy a brand-new .NET Core Web API running on Nano Server in a container alongside the WebForms app.
We will discuss the different stages that Facebook went through for operations automation as the number of servers grew, introduce some key concepts applicable to any company that wants to scale its operations teams as its fleet grows, and describe the frameworks we developed in Python to automate away most of the manual operations work.
For nearly a decade, Chef Software has been helping companies big and small automate themselves into more flexible, agile, responsive organizations. This talk will highlight the lessons we've learned (and that you can learn too) about automating not only infrastructure but also security and compliance with tools like InSpec, and about building and deploying with tools like Habitat. We'll look at the challenges and benefits of providing integrated workflows to help all of your teams meet their requirements in an efficient and safe manner.
Today's data centers aren't what they used to be. We are constantly moving from one set of technologies to another, and we often find it hard to connect newly deployed applications to the rest of our infrastructure. See how to use Consul to connect applications across your infrastructure. Consul is an open source tool for service discovery and monitoring that also provides a globally distributed key-value store. In this talk we will cover how Consul can help bridge the gap between the various types of applications that might exist in a datacenter at any given time. As we move towards microservice-oriented architectures, we find ourselves using cluster schedulers like Kubernetes and Nomad. We will discuss how Consul's service discovery features help connect applications running in such environments, and explore how to connect applications running in traditional environments to ones running in these cluster schedulers using Consul's API and built-in DNS support. We will do a live demo of connecting two applications running in separate environments using Consul and showcase its most important features.
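To make the service discovery idea concrete, here is a minimal sketch of the payload an application instance might send to Consul's agent HTTP API (`PUT /v1/agent/service/register`). The service name, port and health path are illustrative, not from the talk.

```python
import json

def consul_registration(name, port, health_path="/health"):
    """Build a service registration payload for Consul's
    PUT /v1/agent/service/register endpoint (names are illustrative)."""
    return {
        "Name": name,
        "Port": port,
        "Check": {
            # The Consul agent polls this HTTP endpoint to decide
            # whether this service instance is healthy.
            "HTTP": f"http://localhost:{port}{health_path}",
            "Interval": "10s",
        },
    }

payload = consul_registration("billing-api", 8080)
print(json.dumps(payload, indent=2))
```

Once registered, other applications, whether on a scheduler or on a traditional VM, can resolve the service through Consul's DNS interface as `billing-api.service.consul`.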
"With microservices every outage is like a murder mystery" is a common complaint. But it doesn't have to be! This talk gives an overview of how to monitor distributed applications. We dive into:
System metrics: Keep track of network traffic and system load.
Application logs: Collect structured logs in a central location.
Uptime monitoring: Ping services and actively monitor their availability and response time.
Application metrics: Get metrics and health via REST or JMX.
Request tracing: Trace requests through a distributed system with Zipkin to show how long each call takes. And we will do all of that live, since it is so easy and much more interactive that way.
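The "application logs" point above hinges on logs being structured rather than free text. A minimal sketch of what that means, using only the Python standard library (the `service` field name is an assumption for illustration):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, ready to
    ship to a central log store (field names are illustrative)."""
    def format(self, record):
        return json.dumps({
            "ts": record.created,
            "level": record.levelname,
            "service": "checkout",  # hypothetical service name
            "msg": record.getMessage(),
        })

# Write to an in-memory buffer here; a real service would write to
# stdout or a file tailed by a log shipper.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed")

line = json.loads(buf.getvalue())
print(line["level"], line["msg"])
```

Because every line is parseable JSON, the central log store can filter and aggregate by field instead of grepping through prose.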
I will show how Schibsted, a heterogeneous company with services on-premises and in several public clouds, is creating a global network to enable private service-to-service communication across all our platforms. This will extend the scope of global services and also increase efficiency by consolidating towards a common hybrid infrastructure while automating and abstracting networking tasks. The talk will include a general introduction to the problem, the proposed architecture, and the tooling to make it possible following a DevOps approach.
My dad once told me about a guy from Birmingham, a martial artist who'd learned taekwondo, who could do jujitsu and would box.
Now, in his wisdom he devised his own style of martial arts. The principle was simple: if you couldn't do it in a phone booth, it was no good to you in a pub fight.
With two paragraphs about Birmingham, kung-fu and pub fights you might be wondering how this could ever have anything to do with practical DevOps.
Introducing ChatOps. And here's the tag line: when you're in the pub, down the gym or stuck on a plane to New York, what's the maximum you can do from just your phone?
We don't all lug around a laptop 24/7, and I know I don't always have a handbag big enough for mine. So if your production environment was burning down, and all you had was your phone, data and Slack, could you save it all?
In this talk I will cover lessons learnt when building a distributed team to deliver an infrastructure capable of handling several billion requests per month across the world. We will discuss fostering engineering mentality, agile teamwork, scalable knowledge and user focus in a team. The focus will be on the DevOps and infrastructure domains, but most of the content can be applied in areas like application development. Expect examples, challenges, questions, plenty of puzzlement and practical advice to survive the team-building experience.
Nowadays, startups, medium-sized companies and even big companies try to follow the DevOps path. For startups and medium-sized companies, this transformation is relatively easy compared with a well-established company. Turkcell is the leading telco in Turkey, and it has begun to work agile, so a DevOps mindset was inevitable. In our talk, we will try to convey our experience to the audience through the seven steps we went through.
1) How did the need reveal itself? We will try to explain the factors which forced us to think about DevOps.
2) How did we decide where to start? We could have chosen an all-out transformation, but we chose two applications as pilots, since it was hard to reserve resources for applying DevOps while we had to keep delivering to production.
3) What factors did we consider while choosing the pilot applications? We chose a service-based application and a front-end application to gain different kinds of experience.
4) What happened in the process of building the DevOps pipeline? We examined what we do and how we handle test and deployment issues within the waterfall methodology, and we used DevOps methods to solve the bottlenecks of deployment and testing.
5) Test automation? We use UFT and Selenium as test automation tools. We focused especially on front-end testing and integrated six front-end applications into test automation.
6) Challenges? The slowness of decision making in big enterprises was a huge challenge. Also, since we were working in the marketing solutions domain, there was huge demand from the business, and we had to carry out both regular development and DevOps initiatives with the same amount of resources.
7) How much effort did we spend? What were our KPIs? What are we planning to do in the future, and how will we use the lessons learned on this journey?
Mesos has a two-level scheduler architecture: Mesos handles low-level infrastructure scheduling operations, while another layer on top (the frameworks) handles all the application-specific operations and logic. In this talk, we will discuss the pros and cons of having separate layers, and the limitations caused by the gap between the development cycles of the two layers.
Microservices architecture has changed how companies develop and deploy applications. This change has affected the testing process as well: new techniques have emerged and existing ones have been enhanced.
Come to this session to learn how service virtualization, contract testing, smart testing and testing in production can increase deployment velocity from once a week to N times per day, and let you deploy each service independently without worrying about breaking compatibility between services anymore.
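The core idea behind contract testing can be sketched in a few lines: the consumer declares which fields of the provider's response it relies on, and the provider's build verifies it still satisfies that contract. This toy check is a stand-in for real tools such as Pact; the field names are illustrative.

```python
def satisfies_contract(response, contract):
    """Check that a provider response contains every field the consumer
    relies on, with the expected type (a toy stand-in for contract
    testing tools like Pact)."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# The consumer's contract: it only cares about these two fields.
consumer_contract = {"id": int, "status": str}

# The provider may return extra fields; that does not break the contract.
provider_response = {"id": 42, "status": "shipped", "carrier": "DHL"}

assert satisfies_contract(provider_response, consumer_contract)
# Dropping a field the consumer depends on is a breaking change:
assert not satisfies_contract({"id": 42}, consumer_contract)
```

Because only the consumer's declared needs are checked, the provider stays free to evolve everything else independently, which is what makes independent deployment safe.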
Kubernetes is changing the way we manage our infrastructures, even in big traditional companies.
At everis, we are using OpenShift, Red Hat's "enterprise Kubernetes" distribution, together with Docker as an enterprise-wide development and continuous delivery platform.
During this talk, I will cover the path we started two years ago, our successes and our mistakes, on the journey to cloud-based development. We will also look a bit more in depth at the OpenShift tools that make it the perfect CI/CD platform.
During this session, I will explain how we created a data pipeline to store and analyze trillions of domain events produced by microservices. Then, we will discuss how the underlying infrastructure is built in AWS using automation tools like Terraform, Puppet and Jenkins. We will also talk about the Kafka ecosystem and how we use it together with Cassandra.
At the end of the talk, we will show in a small demo the journey of a domain event passing through this pipeline until it is stored in S3 for further analysis.
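A key design detail in a Kafka-based event pipeline like this is the choice of message key, since it determines which partition an event lands on and therefore per-entity ordering. Below is a hedged sketch of that idea; the event shape is invented for illustration, and SHA-1 stands in for Kafka's actual default partitioner (which uses murmur2).

```python
import hashlib
import json

def partition_for(key, num_partitions):
    """Map an event key to a partition the way a keyed Kafka producer
    would: same key -> same partition, preserving per-entity ordering.
    (Kafka's default partitioner uses murmur2; SHA-1 is illustrative.)"""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# A hypothetical domain event on its way into the pipeline.
event = {"type": "UserRegistered", "user_id": "u-123", "ts": 1700000000}
record = {
    "key": event["user_id"],  # all events for u-123 share one partition
    "value": json.dumps(event),
    "partition": partition_for(event["user_id"], 12),
}
print(record["partition"])
```

Keying every event by its entity ID keeps each user's history ordered within one partition, while still letting consumers (for example, a Cassandra writer or an S3 archiver) scale out across partitions.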
With the physical data center we used a Castle & Moat approach based on networking, but in the cloud, we have no clear perimeter and need to take an application-centric approach to security.
This talk examines the areas of vulnerability inside a typical microservice application, areas like authentication and authorization, secrets management, and credential management. We examine these problems from an operational perspective and how we can solve them by leveraging the power of Vault, an open-source tool from HashiCorp.
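For a taste of how Vault fits into secrets management, here is a sketch of the HTTP request a client would issue to read a secret from Vault's KV version 2 engine (`GET /v1/secret/data/<path>`). The address, token and secret path are placeholders, not a live server.

```python
def vault_read_request(addr, token, path):
    """Build the HTTP request a client would send to read a secret
    from Vault's KV v2 engine; addr, token and path are illustrative."""
    return {
        "method": "GET",
        "url": f"{addr}/v1/secret/data/{path}",
        # Vault authenticates every request with a short-lived token,
        # instead of long-lived credentials baked into the application.
        "headers": {"X-Vault-Token": token},
    }

req = vault_read_request("http://127.0.0.1:8200", "s.example-token", "billing/db")
print(req["url"])
```

The operational win is that the application never holds the database password at rest: it holds a revocable token, and Vault can rotate or lease the underlying credential behind that path.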
Although QA and DevOps can be seen as two different roles, the truth is that they have more in common than we think.
When we are creating the structure of our test suites, we need to take into account some practices and tools DevOps engineers normally use in their daily work: from integrating test automation with the CI tool to catch bugs as soon as possible, to using containers to speed up test execution.
In this talk we are going to discover the different tips and tricks that make QA and DevOps best friends.
As a beginner, finding your place and grasping the idea of DevOps depends very much on your personal motivation and the environment you work in. They didn't teach you this in school.
In this talk, I am going to share my journey. You will see a story that adopts DevOps as a mindset rather than as people, tools and so on. The talk will be a story-line supported by real-world examples, such as using the tools and communicating within teams, drawn from the experiences of a junior engineer's road-map to DevOps.
We do love technology, don't we? Great infrastructure and tooling take cloud adoption and mobile app development to new heights. By deeply integrating DevOps paradigms into everything, technology makes it easier than ever to build, deploy, and manage software across clouds and on-premises.
The promises of DevOps are not fulfilled by technology alone. Its adoption can only be successful if organizations nail all three aspects of DevOps: People, Process, and Technology.
Over the course of five years my team has been involved in hundreds of customer and partner engagements, working hands-on with organizations of all sizes, from small startups to large enterprises. We worked on ways to improve the flow of value in their software lifecycle, helping them become better and faster sooner through DevOps practices. We have learned at least as much as we have taught.
This session takes a deeper look at the intersection of technology and people in a DevOps world and summarizes our diverse hands-on experiences with the cultural and social aspects of DevOps adoption.
This presentation shares how the Skyscanner Security Engineering team has integrated security into our continuous integration and delivery (CI/CD) pipeline. Skyscanner's microservice architecture is a highly distributed and constantly changing environment supported by Kubernetes and Amazon ECS. Furthermore, Skyscanner has strong lean and agile principles, which impose fast-paced delivery along with integrated learning cycles and decentralized decision-making. This challenges security, which must shift left and become more integrated with the CI/CD pipeline without becoming a blocker.
Not so long ago, in the monolithic application era, it was relatively easy to set up development and integration environments: you just needed to run your application and start working on it.
Nowadays, with tons of different cloud services and the microservices approach to applications, providing and maintaining many development and integration environments for developers, QA or designers can be a nightmare. Not only because of the multiple processes you have to spawn, but also because of the heavier-than-ever use of cloud managed services.
In this talk, we'll show you our proposal for provisioning ephemeral, dynamic development and integration environments that make use of cloud managed services, built on Kubernetes.
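One common building block for ephemeral environments is giving each feature branch its own Kubernetes namespace, so everything can be deleted in one command when the branch merges. A minimal sketch, assuming a branch-to-namespace naming scheme of our own invention (the `env-` prefix and labels are illustrative):

```python
import re

def ephemeral_namespace(branch):
    """Derive a throwaway Kubernetes Namespace manifest for a branch,
    so each feature branch gets an isolated environment
    (prefix and label names are illustrative)."""
    # Namespace names must be DNS-compatible: lowercase letters,
    # digits and dashes, at most 63 characters.
    name = re.sub(r"[^a-z0-9-]", "-", branch.lower())[:63].strip("-")
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": f"env-{name}",
            # Labels let a cleanup job find and delete stale environments.
            "labels": {"ephemeral": "true", "branch": name},
        },
    }

ns = ephemeral_namespace("feature/JIRA-123_new-checkout")
print(ns["metadata"]["name"])
```

Deleting the namespace tears down every in-cluster resource of the environment at once; the cloud managed services it used would still need their own lifecycle handling, which is exactly the hard part the talk addresses.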
When recruiting and onboarding new grads and others who haven't worked in site reliability, how do we build (and become) the engineers we want to work with? While seasoned engineers debug, fix issues in production, engage with clients, automate mundane tasks, and build new tools to streamline their workflows, in school new grads are mostly taught only how to build things from scratch, not how to support, maintain, and protect them.
In this talk, I'll share my experience and describe how Facebook has made me bring ideas and people together, not only to realize my potential, but also to make a difference at the company.
New grads have to be able to “drink from the firehose” of new information, learn by doing, and make connections throughout the company. On top of that, there's also the Production Engineering role and philosophy which has learned to embrace and grow people who haven't done exactly this type of work before. At Facebook, the goal is to have impact, while doing the things we enjoy. Connecting those two dots is the key.
Creating a software development Value Stream Map is an exercise to identify delays caused by people, processes and tools. We have been running DevOps focused "hackfests" with customers. Prior to each event we meet to create a Value Stream Map of the software development processes.
In addition to improving understanding of the existing state, the exercise makes it possible to identify opportunities for improvement. Given that engagements only last a few days, it has been vital in pinpointing which DevOps practices we should focus our effort on.
The Value Stream Mapping process has not just been useful for the hackfest but has proved to be an extremely valuable process for all involved.
The value stream mapping exercise wasn't just useful for laying out the technologies and processes; it was also something of a trust and familiarisation exercise for the teams and individuals. We found it extremely valuable.
Isolating and realising how much 'waste' there was proved really interesting to me. I knew there was quite a lot, but identifying exactly where and how much essentially gave us a green light to carry out further work to improve our build system.
During the session I will describe Value Stream Mapping and the process we carry out. I will then present a number of real world case studies and discuss some of the more interesting areas of waste that we have been able to identify.
We will share how the Skyscanner Security Engineering team has created a service to make sense of what's going on in a highly distributed microservices environment. By gathering data from a variety of security and internal tools, and mapping it across the multiple teams throughout the organisation, we can more effectively understand the security posture of our services, allowing us to make decisions based on the risks discovered. We will cover the challenges, our road to building this solution, and our future goals.
Managing infrastructure as artifacts of code, instead of hardware, is key to scaling software organizations. Cloud APIs and automation tools bring many techniques from software engineering to platform operation, including version control, automated testing, configuration management and reliable duplication. Programmable infrastructure becomes invaluable as organizations and applications scale and decompose.
Automating the provisioning, configuration and deployment of complex applications requires some design choices on top of AWS services. Specifically, when some resources are shared among tenants, such as databases, and others dedicated, such as distributions, these automations can become quite complex. Learn how to automate complex multi-tenant applications using CloudFormation and other tools from Amazon Web Services.
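One way to tame that complexity is to generate a per-tenant CloudFormation template in which shared resources arrive as Parameters and dedicated resources are declared per stack. A hedged sketch of that split; the logical names, the S3 bucket and the bucket naming scheme are assumptions for illustration, not the talk's actual design.

```python
def tenant_template(tenant):
    """Sketch of a per-tenant CloudFormation template: the shared
    database endpoint is injected as a Parameter, while the tenant's
    bucket is a dedicated resource (logical names are illustrative)."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            # Shared among tenants, so it is passed in, not created here.
            "SharedDbEndpoint": {"Type": "String"},
        },
        "Resources": {
            # Dedicated per tenant, so each stack creates its own.
            "TenantBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": f"app-{tenant}-assets"},
            },
        },
    }

tpl = tenant_template("acme")
print(sorted(tpl["Resources"]))
```

Keeping shared infrastructure in a separate stack and wiring it in through parameters (or cross-stack exports) means a tenant stack can be created or deleted without ever touching the shared database.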
Building your infrastructure on bare-metal servers is a very interesting and cost-effective option, especially in this container era, where we can leverage the power of orchestrators like Kubernetes. Bare metal is of course less flexible, and it comes with tasks long forgotten in our cloud-based days: boot management and configuration.
CoreOS is a good and popular choice as a base OS for a container-based system. It is also interesting because of the tools it provides to make booting and OS configuration much simpler tasks.
In this talk I'm going to describe two of them, Matchbox and Ignition, and show how we can integrate them into our Terraform infrastructure description using the specific providers for them. This is part of a general approach towards immutable infrastructure provisioning.
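To give a feel for what Matchbox ends up serving to each machine, here is a minimal Ignition-style config (the first-boot provisioning format used by CoreOS) that writes `/etc/hostname`, generated with a short Python sketch. The structure follows the Ignition 2.x spec; the hostname and the choice of file are illustrative, and in practice such configs are usually rendered by Terraform's Matchbox provider rather than by hand.

```python
import json

def ignition_config(hostname):
    """Build a minimal Ignition 2.x config that writes /etc/hostname
    on first boot (hostname is illustrative)."""
    return {
        "ignition": {"version": "2.2.0"},
        "storage": {
            "files": [{
                "filesystem": "root",
                "path": "/etc/hostname",
                "mode": 420,  # decimal for octal 0644
                # Ignition fetches file contents from a URL; a data URL
                # embeds the content inline.
                "contents": {"source": "data:," + hostname},
            }]
        },
    }

cfg = ignition_config("node-01")
print(json.dumps(cfg, indent=2))
```

Because Ignition runs exactly once on first boot, the machine comes up already configured, which is what makes the immutable, reprovision-instead-of-patch model practical on bare metal.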
This is our excellent speaker lineup. Check it out! We are still receiving papers, so more speakers will be introduced.
Avinguda Diagonal, 547, 08029