Get your badges and swag while we wait for the conference to start!
Welcome to the DevOps Barcelona Conference!
This talk explores Fastly's journey in building and scaling its platform, highlighting key architectural principles and addressing the inherent challenges of achieving scalable growth. The focus is on understanding how Fastly prioritizes availability, horizontal scaling, data ownership, and a platform-centric approach.
We’ll discuss the critical role of real-time monitoring and user feedback in our engineering cycles, ensuring that our platform evolves in response to actual usage patterns. Through concrete case studies, we’ll illustrate how these practices have led to measurable improvements in performance and user experience.
Join us to learn how Fastly’s dedication to continuous improvement helps create a better internet where all experiences are fast, safe, and engaging.
Join us in this insightful session as we dive into the world of serverless architectures, explore common cost mistakes, and learn actionable tips for cutting down waste and reducing your AWS bill.
Whether you're looking to cut down on CloudWatch costs or improve cost-efficiency for your serverless application, we've got some helpful tips just for you.
See you back at 11.15! Don't forget to visit our sponsors' booths
When deploying an application to Kubernetes, each container in a pod should define CPU requests and limits. It is commonly understood how CPU requests affect the scheduling of your pod and of future pods on the same node. But beyond scheduling, CPU requests and limits also affect how your containers are created and can heavily impact their performance and their energy footprint.
In this talk we will help clarify some misconceptions about CPU requests and limits by explaining, in a developer-friendly way, how they translate to some Linux internals. We will offer some quick tips on how to understand those effects, minimise them, and select good values to reduce your application's energy footprint while ensuring its performance.
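To make that translation concrete, here is a rough, hedged sketch in Python, assuming the commonly documented cgroup v1 behaviour where CPU requests map to cpu.shares and CPU limits to a CFS quota; this is simplified compared to the real kubelet code:

```python
# Hedged sketch of how Kubernetes commonly maps CPU values to cgroup v1 settings.
# Assumes the documented formulas (requests -> cpu.shares, limits -> CFS quota
# over a 100 ms period); the real kubelet implementation has more detail.

CFS_PERIOD_US = 100_000  # default CFS period: 100 ms


def cpu_shares_from_request(request_millicores: int) -> int:
    """CPU request (millicores) -> cpu.shares (relative weight under contention)."""
    return max(2, request_millicores * 1024 // 1000)


def cfs_quota_from_limit(limit_millicores: int) -> int:
    """CPU limit (millicores) -> cfs_quota_us (hard cap of CPU time per period)."""
    return limit_millicores * CFS_PERIOD_US // 1000


if __name__ == "__main__":
    # A container requesting 250m and limited to 500m:
    print(cpu_shares_from_request(250))  # 256 shares
    print(cfs_quota_from_limit(500))     # 50000 us of CPU time per 100 ms period
```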
Are you navigating the complexities of compliance frameworks like SOC2, CIS, and HIPAA and seeking a more efficient path? This talk breaks down these frameworks simply and shows you a time-saving trick, making it perfect for anyone wanting to make their organization's compliance journey much easier.
I'll start by outlining the basics of these frameworks and highlighting the challenges businesses face in implementing them.
As the creator and maintainer of the terraform-aws-modules projects, I'm excited to share how using these open-source Terraform AWS modules can streamline the compliance process. I'll walk you through real-life examples showing how such solutions significantly reduce the effort and time required for compliance.
At the end of the talk, attendees will get actionable insights on using Terraform AWS modules for efficient compliance management.
Free time for lunch! Lots of nice options around the area!
See you back at 14.45! Don't forget to visit our sponsors' booths if you want to participate in the raffle
GenAI is not a brand-new technology, yet it has become a hot topic in the last couple of years. As many organisations adopt it for different business cases, SREs and DevOps engineers have a great deal to say about its best use cases.
This talk puts GenAI on the DevOps map and deep dives into GenAI applications within the DevOps/SRE realm.
In this talk, I will revisit concepts and technologies linked to GenAI, such as transformers, LLMs, and RAG, and see them applied to observability, in particular in the shape of the AI Assistant, with a focus on some use cases for DevOps engineers. By the end of this talk, we will understand whether GenAI is really a must-try technology.
Are your applications really cloud native? As a developer, you must be concerned about who can access resources in your system.
You probably think of authentication and authorization like any other logic: ifs and elses executed before performing critical operations.
Did you know that Kubernetes Role-Based Access Control and authentication can be wisely combined with other cloud native technologies to compose a platform that helps you avoid spaghetti code, implement application security best practices like a true cloud native developer, and delegate some of the burden to other layers of your system?
Attendees of this session will learn how to leverage Kubernetes to build Zero Trust authorization the cloud native way. The talk will demo use cases of tailor-made data security leveraging cloud native technology, including Envoy and Open Policy Agent, that reclaim security policies as a proper concern, decoupled from the application's code and at the same level as Deployments and Services.
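As a hedged illustration of that decoupling (not the speaker's demo), here is a minimal Python sketch contrasting in-code checks with a query to Open Policy Agent's REST Data API; the policy path and input fields are hypothetical:

```python
# Minimal sketch: moving an authorization decision out of application code
# and into Open Policy Agent. The policy path and input fields are hypothetical;
# OPA's Data API is queried with POST /v1/data/<policy path>.
import requests

OPA_URL = "http://localhost:8181/v1/data/myapp/authz/allow"  # hypothetical policy path


def allowed_in_code(user: dict, action: str) -> bool:
    # The "ifs and elses" approach: policy is tangled with application logic.
    return bool(user.get("role") == "admin" or (action == "read" and user.get("active")))


def allowed_via_opa(user: dict, action: str, resource: str) -> bool:
    # The decoupled approach: the application only builds the input document;
    # the decision lives in a Rego policy managed outside the codebase.
    response = requests.post(
        OPA_URL,
        json={"input": {"user": user, "action": action, "resource": resource}},
        timeout=2,
    )
    response.raise_for_status()
    return response.json().get("result", False)
```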
See you back at 16.45!
Last chance to get all the stickers to participate in the raffle!
In the rapidly evolving landscape of cybersecurity, botnets remain a significant threat to Kubernetes and containerization environments. In this talk, we will present a comprehensive overview of our latest research on new botnet groups, delving into their organizational structures, codebases, and tactics. We will explore how these malicious actors share information, select their targets, and offer their services.
By sharing our findings, we hope to raise awareness and facilitate a better understanding of these threats, ultimately contributing to the development of more effective countermeasures.
Botnets represent a significant and evolving threat in the cybersecurity landscape. This presentation aims to shed light on the inner workings of these networks based on extensive research and real-world examples. Attendees will gain insights into:
- Organization and Structure: Understanding how modern botnets are set up and managed.
- Code Analysis: A deep dive into the types of code used by botnet operators to exploit container vulnerabilities.
- Information Sharing: Exploring whether and how these networks share data amongst themselves.
- Target Selection: Analyzing the methods and criteria used by botnets to choose and attack applications.
Our aim is to provide a global view of the current state of botnets, offering valuable knowledge that can aid in the detection, analysis, and mitigation of these threats. This talk is designed for security professionals, researchers, and anyone interested in understanding the complexities and dangers posed by botnets in today’s digital world.
A story of how our infrastructure evolved over time to accommodate an increasing number of users: from on-premises to the cloud and back again.
How does one build an infrastructure that handles more than a couple of users? How do you go from 100 to 1,000 to 100,000 to tens of millions? What happens when, due to popular demand, hundreds of thousands of users hit your servers at the same time?
I'll tell you the story of how a small team of people managed to move software and services from one server to two, then to dozens in the cloud, and then back on-premises. What we encountered along the way, where we failed, and how we solved it.
End of the 1st day announcements and raffle time!
Ready to observe your GitHub Actions from a central repository? At Elastic, we implemented our custom OpenTelemetry Collector receiver to collect GitHub Actions logs and combined it with the existing traces receiver to observe all workflows in our GitHub organization. Learn about the challenges we encountered, how we solved them, and see how centralized logs, traces, and metrics empower the analysis and visualization of GitHub workflows.
At Elastic, we use GitHub Actions in multiple repositories for our CI/CD pipelines. However, we faced challenges with decentralized logs, which made troubleshooting issues that spanned multiple workflow runs or repositories difficult.
In this session, we explain how we centralized GitHub Actions telemetry using OpenTelemetry Collector and how it helped us improve our analysis and visualization of GitHub workflows.
Initially, we focused on scanning logs to detect security vulnerabilities and creating a unified platform for searching, analyzing, and visualizing logs, complete with custom alerts and notifications.
As our project progressed, we realized the broader advantages of centralized logs combined with traces and metrics, which we are going to explore with real-world examples.
We will examine how we handled spikes in log volume, navigated GitHub Actions API rate limits, and ensured data integrity while implementing the custom OpenTelemetry Collector receiver for GitHub Actions log collection.
We planned to use OpenTelemetry Collector as the primary log receiver and exporter. To ensure reliability, we intended to queue webhook events with a proxy service, which sends them to the collector at a controlled pace and retries failed requests.
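For illustration only, a minimal Python sketch of the queue-and-retry idea described above; the collector endpoint, pacing values, and structure are assumptions, not the actual Elastic design:

```python
# Illustration only: a queue-and-retry proxy for webhook events, roughly
# matching the design described above. The collector URL and pacing values
# are hypothetical, not the actual implementation.
import queue
import threading
import time

import requests

COLLECTOR_URL = "http://otel-collector:9999/github-webhooks"  # hypothetical endpoint
events: "queue.Queue[dict]" = queue.Queue()


def enqueue(event: dict) -> None:
    """Called by the webhook HTTP handler: accept quickly, forward later."""
    events.put(event)


def forward_events(rate_per_second: float = 5.0, max_retries: int = 3) -> None:
    """Drain the queue at a controlled pace, retrying failed deliveries."""
    while True:
        event = events.get()
        for attempt in range(1, max_retries + 1):
            try:
                requests.post(COLLECTOR_URL, json=event, timeout=5).raise_for_status()
                break
            except requests.RequestException:
                time.sleep(2 ** attempt)  # simple exponential backoff
        events.task_done()
        time.sleep(1.0 / rate_per_second)


threading.Thread(target=forward_events, daemon=True).start()
```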
We will discuss how to fine-tune the receiver for log volume efficiency and optimize the collector's reliability. Visualizations will showcase the impacts of various configuration changes on performance, and we will explain why we did not implement the proxy service.
Finally, we will share real-world examples of how centralized logs, traces, and metrics have empowered our analysis and visualization capabilities:
- showcasing how we leveraged detection rules to find leaked secrets and sensitive information in logs, making it easier to identify and remediate security vulnerabilities;
- showing how we used traces to identify bottlenecks and the most frequently failing runs in order to optimize our workflows;
- demonstrating how centralized logs helped us identify the frequency of flaky commands and prioritize optimization and troubleshooting efforts;
- sharing how we crafted informational dashboards using the provided traces and metrics to help us find optimization opportunities.
I will explore how an engineer can build their own Cloud Native Platform without losing their cool. I will take a deep dive into the realm of Platform Engineering. What is it? Is it just a buzzword? Can we learn how to build a platform in 50 minutes and demonstrate its value?
Platform Engineering has become a hot topic in the tech industry. It promises to streamline operations, foster innovation, and accelerate product development. But is it just another industry fad, or is it a game-changer that is here to stay?
Together, we will embark on an exhilarating journey to build a platform from scratch. This hands-on approach will provide attendees with practical insight into the intricacies of Platform Engineering. We'll break down the processes, discuss the tools, and navigate the best practices to construct a robust and scalable platform.
By exploring the practical aspects of Platform Engineering, we aim to demystify the hype and equip participants with the knowledge and skills to leverage this emerging field effectively. Whether you're a seasoned pro or a curious novice, join us as we uncover the real value of Platform Engineering. Let's build, learn, and hack together, as part of a supportive and collaborative community!
Transactional infrastructure is not suited for processing large amounts of data for analytics. In this talk, participants will learn about data architecture fundamentals and get deep insights into building an enterprise-grade data lake with a business intelligence frontend on AWS, using AWS analytics services such as Glue, Athena, Lake Formation, Kinesis and QuickSight.
(While the presentation is based on AWS, the fundamental concepts are transferable to other environments.)
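For orientation, a minimal and hypothetical boto3 sketch of querying a Glue-catalogued table with Athena could look like the following; the database, table, and result-bucket names are made up and not part of the talk:

```python
# Hypothetical sketch: running an Athena query against a Glue-catalogued table.
# Database, table, and output bucket names are illustrative only.
import time

import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS events FROM clickstream GROUP BY event_type",
    QueryExecutionContext={"Database": "datalake_curated"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (succeeds, fails, or is cancelled).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```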
Using the GitOps methodology to deploy infrastructure as code with Crossplane and ArgoCD:
- Configuration of Crossplane so it has the rights to deploy infrastructure from one tooling cluster to the rest of the target accounts
- Implementation of new infrastructure Compositions in order to deploy infrastructure as CRDs
- Lifecycle of these Compositions and deployment of the infrastructure as separate tenants
- Limitations encountered while maintaining this methodology
- Roadmap for the evolution of this toolset.
Many successful paradigms in engineering and computer science are the result of two distinct approaches colliding with each other, leading to broader and more powerful applications. In this talk, we’ll look at the parallel backgrounds of two established paradigms: SQL and Observability.
We’ll trace back the history of both paradigms: how they managed to avoid each other despite SQL being the lingua franca of data manipulation, and how industry standardization, fuelled by open-source innovation, has now propelled SQL back into the game as an observability language. We’ll also highlight case studies and benchmark results to give attendees the elements they need to answer a simple question: is SQL-based observability applicable to my use case? We’ll cover the current limitations of this approach as well, leaving the conclusions for the attendees to draw.
More info at: https://clickhouse.com/blog/the-state-of-sql-based-observability
Join us as we discuss innovative methods designed to reduce toil within Security Operations (SecOps) at Amplify Education.
First, we'll detail the use of custom security rules within Datadog, exploring Datadog's built-in scanner detection rules as well as our own methods. Then, we'll discuss a custom tool called IP Blocker that utilizes AWS Web Application Firewall (AWS WAF), Datadog, and other sources to automate blocking of IPs.
Next, we'll discuss the advantages of harnessing Datadog workflows for automating a broad range of SecOps procedures, a strategy that the Amplify DevSecOps team has successfully implemented. Finally, we'll discuss some of the problems that Amplify has run into with our implementation of AWS WAF with a combination of AWS-managed and custom rules.
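As an illustrative, hedged sketch of the basic mechanism behind a tool like the IP Blocker mentioned above (not Amplify's actual implementation), an offending IP can be appended to an AWS WAF IP set via boto3; the IP set name, ID, and scope below are hypothetical:

```python
# Illustrative sketch only: adding an offending IP to an AWS WAF (wafv2) IP set,
# the basic building block behind an automated IP-blocking tool.
# The IP set name, ID, and scope are hypothetical.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

IP_SET_NAME = "blocked-ips"                          # hypothetical
IP_SET_ID = "00000000-0000-0000-0000-000000000000"   # hypothetical
SCOPE = "REGIONAL"                                   # or "CLOUDFRONT"


def block_ip(cidr: str) -> None:
    """Append a CIDR to the IP set, using WAF's optimistic locking token."""
    current = wafv2.get_ip_set(Name=IP_SET_NAME, Scope=SCOPE, Id=IP_SET_ID)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.add(cidr)
    wafv2.update_ip_set(
        Name=IP_SET_NAME,
        Scope=SCOPE,
        Id=IP_SET_ID,
        Addresses=sorted(addresses),
        LockToken=current["LockToken"],
    )


block_ip("203.0.113.7/32")  # example address from the documentation range
```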
In our upcoming presentation, we'll explore a cutting-edge architectural solution for real-time SMS and email notifications, particularly geared towards responding to earthquake events. This system is designed to handle rapid data transmission, listening for event changes every second, making it ideal for real-time critical alert scenarios. Central to our discussion will be the integration of Lambda functions and Confluent Kafka, coupled with advanced multithreading techniques and DynamoDB lock strategies.
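As a hedged sketch of one common DynamoDB lock strategy of this kind (a conditional write, with a hypothetical table name and key layout, not necessarily the one used in the talk):

```python
# Hypothetical sketch of a DynamoDB-based lock using a conditional write:
# only one concurrent worker can create the lock item, so only one of the
# parallel notification senders processes a given earthquake event.
# Table name, key names, and TTL handling are illustrative assumptions.
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
LOCK_TABLE = "notification-locks"  # hypothetical


def acquire_lock(event_id: str, owner: str, ttl_seconds: int = 60) -> bool:
    """Return True if this worker won the lock for the given event."""
    try:
        dynamodb.put_item(
            TableName=LOCK_TABLE,
            Item={
                "event_id": {"S": event_id},
                "owner": {"S": owner},
                "expires_at": {"N": str(int(time.time()) + ttl_seconds)},
            },
            ConditionExpression="attribute_not_exists(event_id)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another worker already holds the lock
        raise
```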
A focal point of our presentation will be addressing the challenges and innovative solutions involved in integrating Confluent Kafka with Lambda functions to enable serverless operation of both producers and consumers. This is a key element in ensuring the quick and efficient distribution of notifications through parallel methods. Additionally, we will delve into the implementation of an automated scaling mechanism, which is vital for optimising the performance of the Serverless Notification ecosystem.
Our aim is to provide a comprehensive insight into how these technologies can be effectively combined to develop a robust and efficient system, capable of delivering critical real-time alerts for situations like earthquake occurrences, ultimately playing a crucial role in saving human lives.
How to properly size a service through performance testing and take those metrics into production. The key lies in Observability.
In the last 5 years, the Lidl Plus product has grown from 2 stores in Zaragoza to 13,000 stores across Europe, and from 100,000 users in 2018 to 90 million in 2024. To carry out this titanic work in an organized and budget-friendly manner, emphasis was placed on two relevant points:
- Monitoring and Observability
- Performance Testing
Basic monitoring transitioned to a culture of Observability, which not only provided visibility into system metrics but also into the complete flow and user experience. When we talk about observability, we no longer talk about isolated systems but about understanding what happens as a whole.
Performance testing was highly relevant throughout the rollout period, inferring the volume that each country would bring based on the number of tickets coming in from the stores. Performance tests were conducted for each critical product, and end-to-end tests were constantly performed to measure the user experience of the Lidl Plus app.
We lacked real-time visibility from the application to the backend. Over the past 5 years, we have worked on that traceability to measure the "happiness" of our users, moving from tools like Firebase or Dynatrace to the current solution based on OpenTelemetry.
We will show the current stack and the ability to infer performance data for a product before it goes into production, validating workload hypotheses and feeding improvements back into the tests once they are in production.
We have optimized the costs of the conference to make it as affordable as possible for everyone. Save money by buying your tickets at a 40% (blind ticket), 30% (early ticket), or 15% (regular ticket) discount before jumping to the Late Ticket! Are you a group? Get an additional 2% to 10% discount when buying more than 3 tickets. If you need an invoice, don't worry: our payments platform ti.to allows you to fill in your billing data and download the invoice after the purchase.
We feel passionate about and inspired by everyone in the DevOps community. From the small Open Source Projects to the Big Cloud Players. This Conference is for every SysAdmin, Ops, DevOps, Developer, Manager or Techie who wants to level up. For those teams that want to make a dent both in their companies and in the community.
We've crafted the best possible DEVOPS conference just for you so you can see the future and get ready before anybody else. We'll gather top-notch speakers and an awesome community eager to share a lot of knowledge. All of this will take place in the very center of the gorgeous city of Barcelona in an excellent venue. What are you waiting for? Go get your tickets!
1. CONTENT IS KING
We want you to learn as much as possible, so we'll bring you the best speakers and the best content.
We don't select mainstream and/or introductory content; we cherry-pick the most advanced and interesting topics that we believe are trending and useful for the current community.
2. ONE SINGLE TRACK
We want you to enjoy the conference without worrying, so there is no need to choose between talks: you can attend all of them!
3. COMFORTABLE
The Auditori AXA is a modern and comfortable venue located in the very center of Barcelona. Easy to reach and well located for exploring the city center when the conference is over.
It has comfortable seats, amazing audio quality, and a huge high-quality projector for spending a whole day listening to the best speakers in the world.
4. NETWORKING AND FUN
We want you to have fun and meet new people, so we'll provide you with a lot of opportunities to do so. Sponsors, speakers, and attendees will be able to meet and talk during the breaks.
If you are lucky, you may even win a game console!