This is a comprehensive, introductory course that covers HashiCorp's Vault. The course is aimed both at Vault administrators operationalizing Vault and at developers writing applications that use Vault secrets. The first part of this course covers the operational components of Vault, including:
• Initializing a Vault
• Understanding secrets and leases
• Mounting and configuring secret backends with Vault
• Configuring and parsing audit backends with Vault
• Deploying Vault in an HA environment
The second part of this course covers techniques for integrating Vault secrets into your applications including:
• Using Consul Template and Envconsul
• Communicating directly with Vault in your application
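As a taste of the application-integration half, here is a minimal Python sketch of handling a secret read from Vault's HTTP API. The response envelope fields (`lease_id`, `lease_duration`, `renewable`, `data`) follow Vault's documented JSON format; the sample values themselves are invented for illustration.

```python
import json

# Shape of a Vault HTTP API secret response (dynamic backends / KV v1);
# envelope field names follow Vault's documented JSON format.
SAMPLE_RESPONSE = json.loads("""
{
  "lease_id": "database/creds/readonly/abc123",
  "lease_duration": 3600,
  "renewable": true,
  "data": {"username": "v-readonly-xyz", "password": "s3cr3t"}
}
""")

def parse_secret(resp):
    """Extract the secret payload and its lease metadata."""
    return {
        "secret": resp["data"],
        "lease_id": resp.get("lease_id") or None,
        "ttl_seconds": resp.get("lease_duration", 0),
        "renewable": resp.get("renewable", False),
    }

parsed = parse_secret(SAMPLE_RESPONSE)
print(parsed["secret"]["username"])  # v-readonly-xyz
print(parsed["ttl_seconds"])         # 3600
```

A real client would track `ttl_seconds` and renew or re-read the secret before the lease expires, which is exactly the lease lifecycle the course walks through.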
A key part of any organization's testing strategy is the performance test harness. Without access to accurate telemetry and results, it is difficult to reason about the limits of a system's performance. We trust various tools to put our services under stress, but how can we be sure that the outputs from these tests reflect reality?
This full-day course explores how to validate that your load-testing harness is producing accurate results. Attendees will develop and iterate on a load-test harness to measure the responsiveness of a simple microservice. The course will cover how to measure and report system throughput and latency, and how to measure the system-under-test to understand where bottlenecks lie.
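To make the measurement concerns concrete, here is an illustrative Python sketch (not the course's actual harness) that summarizes throughput and latency for one test run; naive percentile math like this is exactly the kind of thing a harness must be validated against.

```python
import statistics

def summarize(latencies_ms, wall_clock_seconds):
    """Report throughput and latency percentiles for one load-test run.

    latencies_ms: per-request latencies in milliseconds
    wall_clock_seconds: total elapsed time of the run
    """
    ordered = sorted(latencies_ms)

    def pct(p):  # nearest-rank percentile
        idx = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "requests": len(ordered),
        "throughput_rps": len(ordered) / wall_clock_seconds,
        "mean_ms": statistics.fmean(ordered),
        "p50_ms": pct(50),
        "p99_ms": pct(99),
        "max_ms": ordered[-1],
    }

# 1000 simulated latencies: mostly fast, with a slow 1.5% tail
samples = [10.0] * 985 + [250.0] * 15
report = summarize(samples, wall_clock_seconds=20.0)
print(report["throughput_rps"])  # 50.0
print(report["p99_ms"])          # 250.0
```

Note how the mean (13.6 ms here) hides the tail that p99 exposes; comparing such summaries against the system-under-test's own telemetry is how you check the harness tells the truth.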
Key takeaways:
In software as well as science, we have seen long periods of normal science and puzzle solving and disruptive paradigm shifts. This talk will provide a new view of the DevOps movement through the lens of scientific and cognitive revolutions. It will show how to make strategic choices in recruitment, organization, culture, and technology to best situate an organization for success in this brave new world.
For any product, empathizing with your users - going beyond a checklist of requirements - helps foster a stronger connection with them and a higher-quality product. In DevOps, that product might be your CI/CD platform, and your users the engineers who need to deploy code.
At Adobe, we've developed a CI/CD platform used by dozens of internal teams deploying upwards of hundreds of times a day for a variety of high-visibility applications. We place an emphasis on user (deploying engineers) happiness and deployment integrity, and utilize our strengths in UI/UX to design and build a CI/CD interface and system that is not only beautiful and easy to use, but empowers engineers to manage every step of the deployment process with confidence.
This talk will cover the values we embody as we think about deployments:
1. How aesthetics benefit both the user and codebase
2. Designing a process that empowers every engineer to manage their deploy from start to finish, from day 1, safely, effectively and quickly
3. How features of our CI/CD platform benefit the user with effortless deploy queuing, multi-deploys, branch/merge management, rollback tools and more
4. How our team structure and culture allow us to quickly identify and iterate on improvements to the CI/CD platform
Development of a feature doesn’t stop at deployment. Your involvement continues for the lifetime of the product. If you want great power to control the choice of tooling and approaches, then you must accept the great responsibility of ensuring it works - and remains working - in production.
In this talk, I’ll explore the topic of why developers need to support their own features in production. I’ll cover the benefits of this approach, which include a greater understanding of your product, its usage and performance, and how this data can be fed back to improve your product. I’ll also talk about the downsides of being on-call and share some strategies from Ops teams on how to handle these issues. You’ll come away from this talk feeling empowered to own your own work.
At OpenGamma, we designed and built our web services to run in a fully serverless manner, primarily using AWS Lambda. This talk will look at how we did it and what problems we encountered. Was it worth it? Come along to find out!
APIs are key to managing the challenges of digital transformation and to building future-proof, modern, agile software solutions. In addition, APIs are essential for creating new digital economies and evolving new business models. But consistent implementation guidelines, reference architectures, and design best practices and methodologies are often missing. A variety of tools, including API gateways, Mobile Backend as a Service (MBaaS), and analytics, can accelerate and simplify the digital challenge, but when and how to use them is one of the most important questions. This session removes the uncertainty of where to start, sharing different classes of APIs, how they can be implemented, best-practice recommendations, and an example reference architecture to help you succeed on your digital journey.
Follow on a journey about how a successful Swedish FinTech company revamped their organization to implement a completely new distributed system running on the Java platform and using Continuous Delivery. Today, autonomous development teams can push code to production as frequently as they like with fully automated delivery pipelines. We take a look at high-performing teams, their ways-of-working, and the technologies used to deliver quality software at a high pace. We also take a look at the future and what challenges lie ahead.
The Agile and DevOps movements place a lot of emphasis on the autonomy, self-organization, and responsibility of teams. A common misconception is that there is little or no role for leaders of such teams. On the contrary! In this rapidly changing world, where competition is fierce and the pressure to deliver is high, effective leaders create environments that allow teams to thrive. They enable, coach and inspire.
But what makes a leader an effective leader? Attend this talk to learn more about various leadership theories, how they can be applied in practice, and my thoughts on leadership based on experiences as a tech lead, scrum master, and from my time in military service.
With so much media attention focused on the rise (and fall) of cryptocurrency valuations, it is easy to lose sight of the wider blockchain opportunity. Add to that the uncoordinated nature of the decentralized model and the growing number of scams and mismanagement scandals, and the result is a very fragmented market.
We all intuitively know that appropriate use of blockchain technology has real potential to change how we operate across borders and within societal groups, bypassing middle layers and removing the trusted centre. The question is: how far can blockchain go, and what is next?
This keynote will feature real, live blockchain applications that are making that change, including vibrant global communities transacting with their values, provenance-of-product research, impact investment, and institutional consultancy research and advice. Having spent more than six years researching and applying the movement of non-financial values, and two years in blockchain applications, I will share our insights, learnings, and future aspirations to achieve total value in finance with blockchain.
Running applications and services across several cloud providers and/or data centers can bring many benefits for organizations; in some cases it is even a mandatory requirement. Making your application stack work across multiple cloud providers can be problematic, as providers differ in, for example, networking configuration. To make things even more difficult, you need a way to secure inter-service communication between cloud providers. In practice, this means cumbersome network configuration with VPNs and other networking security solutions. Luckily, containers and modern container overlay networks can solve this complexity for you.
In growing companies, we often deal with hard limits on resources and time. In the market, there is an explosion of SaaS tools that let a company plug and play its "Business Intelligence" department, while in the open source world, software like Kafka, Luigi, and Snowplow provides the building blocks of large pipelines.
As a result, more than ever, companies are facing a build-vs-buy decision with a strong bias toward SaaS for speed of implementation. Unfortunately, this decision often leads to a complete re-engineering of the platform when a company reaches the next level of capabilities or needs compliance. What happens when you have to make sub-second decisions with unstructured data coming from a third party on a different continent? Or factor in GDPR compliance?
In this talk, we present a considered, deep technical comparison of the options available for data infrastructure in a growing company, taking money and human resources into account. We also introduce our "Not-Go-Back" practice at Curve and how we define an evolutionary approach to data engineering decisions, one that scales with the company without requiring the platform to be re-architected.
Many software projects use build pipelines with tools like Jenkins, SonarQube, Artifactory, etc. But often those pipeline tools are installed and maintained manually. There are risks to this approach, and in case of failure it often takes a long time to get a running pipeline again.
This session shows how to automate the creation of a build pipeline. Kai will explain how to create a Docker based infrastructure where Jenkins, SonarQube and Artifactory are pre-configured and deployed. Then Kai will use Terraform to deploy this infrastructure to AWS. The pipeline is ready for operation in just a few minutes, as Kai will demonstrate in a live demo.
By now I bet your company has hundreds, maybe thousands of services. Heck, you might even consider some of them micro in stature! And while many organizations have plowed headlong down this particular architectural path, your spidey senses might be tingling. How do we keep this ecosystem healthy?
In this talk, I will go beyond the buzzwords into the nitty-gritty of actually succeeding with a service-based architecture. We will cover the principles and practices that will make sure your systems are stable and resilient while allowing you to get a decent night's sleep!
Jenkins is one of the most popular tools for Continuous Integration and Continuous Delivery today. It’s easy to set up and get started with. But what are the best practices for building delivery pipelines with Jenkins? Should you use the traditional build jobs or opt for the new Jenkins pipelines? What options are there for, e.g. visualization or infrastructure as code support? How do I structure my shared libraries so that they’re easy to maintain?
In this talk, I’ll share my many years of experiences from working with Jenkins for Continuous Delivery. We’ll take a look at pros and cons of different approaches but also how Jenkins compares to its competitors, where it shines, and where it leaves room for improvement.
Changing behaviors to change culture is not rocket science. However, most of us actually find rocket science to be much easier! In this session, Morgan will explain how you can build a Behavior Framework for your organization to implement behavioral changes.
Microservices are no longer just the latest buzzword; they stand for a broader paradigm shift toward Continuous Delivery, RESTful services, and agile development. Teams develop independent (micro) services with their own life cycles, so that new functionality reaches production in a very short time. For online services such as Netflix and Spotify, this is vital to staying competitive.
Microservices set high standards for architecture and infrastructure: asynchronous, message-based applications that are containerized and automatically deployed, scaled, and managed. This presentation covers the differences between a monolith and microservices, and the advantages and disadvantages of each approach.
Modern software development architecture has almost completed its evolution towards being properly component-based: this can be seen by the mainstream embracing Self Contained Systems (SCS), microservices, and serverless. We all know the benefits this can bring, but there can be many challenges delivering applications built using these styles in a continuous, safe, and rapid fashion.
This talk presents a series of patterns based on real-world experience, which will help architects identify and implement solutions for continuous delivery of contemporary architectures. Key topics and takeaways include:
Does your application or service use a database? When that application changes because of new business requirements, you may need to make changes to the database schema. These database migrations could lead to downtime and can be an obstacle to implementing continuous delivery/deployment.
How can we deal with database migrations when we don’t want our end-users to experience downtime and want to keep releasing? In this talk, we’ll discuss non-destructive changes, rollbacks, large data sets, useful tools and a few strategies to migrate our data safely with minimum disruption to production.
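One widely used family of non-destructive changes is the expand/contract (parallel change) pattern. Here is a hedged Python sketch using SQLite; the table and column names are invented for illustration, and a production migration would of course run against the real database in batches.

```python
import sqlite3

# Expand/contract migration sketch: "rename" a column without downtime by
# adding the new column, backfilling, and only dropping the old column once
# no deployed code still reads it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
db.execute("INSERT INTO users (fullname) VALUES ('Ada Lovelace')")

# Expand: add the new column; existing readers and writers are unaffected.
db.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Migrate: backfill (in small batches on a large production table; one shot here).
db.execute("UPDATE users SET display_name = fullname WHERE display_name IS NULL")

# Contract: only after every deployed application version writes display_name
# would a later release run: ALTER TABLE users DROP COLUMN fullname

row = db.execute("SELECT display_name FROM users").fetchone()
print(row[0])  # Ada Lovelace
```

Because each step is backward compatible, the application keeps serving traffic throughout, and the contract step can be rolled back simply by not shipping it yet.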
Since the introduction of Agile, a lot of companies have tried to impose the same framework at scale without adapting it to the customer context or to a wider strategic vision. Agile has never been about frameworks or locking your customers into one way of doing things.
I will show examples of bad Agile practices and why the Agnostic Agile Oath is needed: one size does not fit all! I will introduce the Agnostic Agile Oath, its creators, and the reasons it was created. I will then review the twelve agreements of the oath, citing real-life examples for each. Agile is not the final goal; it is only the means to reach the real goal.
Much of the adoption of DevOps tools and processes focuses on the benefits of delivering high-quality code at industrial scale. Although we all recognize that good monitoring is critical to the availability of a service, it may not be obvious that the act of monitoring can have a profound effect on the attitudes and culture of the teams involved. The right sort of monitoring and appropriate dashboarding can improve the morale and effectiveness of all the teams involved; the wrong sort of monitoring or badly considered dashboards can have the opposite effect.
This talk examines how what you share will define you. Through real examples and live demos, the speakers will show you how to design status and trend displays that will make your teams more effective without overloading them. The talk will also include case studies with various types of teams to highlight how you can apply this thinking to help make any group more effective.
If you want to be taken seriously, you need to provide Docker images to your users. It's easy — everybody is uploading containers to Docker Hub, right? Unfortunately, reality is never as easy as it sounds at first. This talk gives an overview of Elastic's ongoing journey to providing official Docker images:
People’s relationship with their finances in the future will be nothing like they imagine today. People want their finances managed without effort, but they still expect the best products and value. There is a parallel to be drawn with investments, which are now managed on autopilot. This change will spill over into retail banking, and banks will fall behind unless they accept and embrace it. By drawing information from multiple sources and using powerful analytics, people will be offered advice to act on today to improve their finances for tomorrow. How will people engage with their money in the future? Managing money will become effortless. Money will be on mobile. Money will be on autopilot.
Microservices, yay! Monoliths, boo! A microservices architecture brings a raft of benefits over an old-school monolithic beast, but at a cost. Live coding with Gradle, Dropwizard, and Swagger, I will show an approach I have been using to defer the cost of a microservices architecture until you need it, while avoiding the problems associated with a monolith.
Error messages in logs are the starting point of a search for clues. But if you're running in a distributed microservice environment, correlating log entries and tracing calls between services becomes important. Logging frameworks can add context information that is logged together with the message. If you need more functionality, Zipkin can help: it provides correlation IDs to tie log entries together across systems. You can add this information to your existing logs or forward it to a central Zipkin server. Traces can be augmented with tags so you can search them online. This presentation gives an introduction to the possibilities of Log4j and shows the additional possibilities of Zipkin and Spring Sleuth.
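To illustrate the core idea, here is a small Python sketch (the standard `logging` module standing in for Log4j) that injects a hypothetical correlation ID into every log line; Zipkin and Sleuth do the same thing with trace and span IDs propagated between services.

```python
import io
import logging

class CorrelationFilter(logging.Filter):
    """Attach a correlation ID to every record passing through a logger,
    so entries from the same request can be joined across services."""

    def __init__(self, correlation_id):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True  # never drop records, only enrich them

# Capture output in a string for demonstration; a real service would
# write to stdout or a file shipped to a central log store.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(
    logging.Formatter("[%(correlation_id)s] %(levelname)s %(message)s")
)

logger = logging.getLogger("checkout-service")  # hypothetical service name
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.addFilter(CorrelationFilter("req-42"))   # ID taken from the request

logger.info("payment authorized")
print(stream.getvalue().strip())  # [req-42] INFO payment authorized
```

With the same ID stamped on log lines in every service a request touches, a simple search for `req-42` reconstructs the whole call path.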
Development teams often focus on getting code to production while losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you will likely have someone taking on that role. We can all benefit from looking at the principles and practices of SRE that we can bring to bear on our own projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
The concept of technical debt always looks different depending on whether you ask a developer or a sponsor. Why is it so complicated for sponsors to understand? Why do developers struggle to explain it? I'm sharing my failed and successful attempts at dealing with technical debt in three different countries and contexts. Each time I was in a different role with a different perspective, but in every case I needed to deal with technical debt.
Agenda of the talk:
There are many known models and ways out there to scale Agile: SAFe, LeSS, Spotify, Nexus, and others. Which one would you choose to scale Agile at your organization? Have you ever thought about mixing them all, picking up the best of each? Would it be bottom-up or top-down? Just the IT division or the whole company? I'll share my experience participating in a global Agile transformation, together with 65 other Agile coaches, at one of the biggest financial organizations in Europe. In this talk, I'll explain why it started, when, and how, and what the successes and challenges are.
Being able to observe the state of a running application is the key to understanding a system's behavior and to being more efficient when failure happens. This talk focuses on why observability should be an integral part of system design, and on techniques for building a clearer picture of distributed systems in production.
Find out how you can build better software by uncovering dark debt in your culture. When an organization takes on technical debt unknowingly, we call it "dark debt". And while we’ve developed practices for identifying and managing technical debt, decisions that create cultural debt are still very hard to see.
By understanding what cultural debt is, when we typically take it on, and what to watch out for, we can protect ourselves from dark cultural debt. Some symptoms of dark cultural debt include:
If you are experiencing these and wondering why, you are likely dealing with the consequences of decisions that unknowingly created cultural debt.
In this workshop you will learn how to provision infrastructure in AWS using tools for automating everything. We will cover how to use Terraform for provisioning basic infrastructure on AWS, including VPCs, networking, security groups (firewall'ish) and deployment of applications on Elastic Beanstalk in an autoscaled and load balanced environment. We will also set up a hosted database and a bastion host (jump host) for connecting to servers inside your private subnet. As a bonus you will learn how to handle secrets when working in an environment built for continuous delivery.
Docker is a popular choice in tech today. However, containers alone are not enough to bring complex applications into production. Load balancing, fault tolerance, continuous integration and delivery, logging/monitoring, and release management are some of the other important aspects for successfully rolling out software products.
Kubernetes helps with these tasks by bringing containers to the cloud: it makes many “small” hosts behave like a single large host, which can then benefit from automation. However, Kubernetes is just a piece of technology meant to simplify the release and development process.
Finally, OpenShift from Red Hat is a well-rounded approach towards DevOps that brings everything together.