Published on: September 12, 2019
7 min read

Is serverless the end of ops?

What is Serverless architecture, what are the pros and cons of using it and where will it go in the future?


We’re not playing tricks when we say serverless isn’t actually serverless. It’s not that servers aren’t doing the work; it’s just that they aren’t your servers. In these exciting times of automation, not having to worry about servers seems pretty appealing.

Serverless architecture has an annual growth rate of over 700% and shows no signs of slowing down. Its popularity is largely due to the operational efficiency it promises. Instead of worrying about infrastructure, you can essentially outsource those responsibilities to your cloud provider. Once you specify the resources your code requires, the cloud provider provisions the servers and deploys your code. Even better, you only pay for what is used.

The dream of serverless computing is pretty simple: Developers deploy into infrastructure they don’t have to manage, set up, or maintain. Once they upload a simple cloud function, it just works. Since organizations are only paying for what they use, the system is infinitely scalable, and because it’s all managed by a cloud provider, the provider takes over security as well.
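
In practice, "uploading a simple cloud function" often looks something like the sketch below: a Python handler in the style of AWS Lambda's HTTP proxy integration. The function name and event shape are illustrative, not tied to any particular deployment.

```python
# A minimal Lambda-style handler in Python. The provider provisions and
# scales the underlying servers; you are billed per invocation.
import json


def handler(event, context):
    """Respond to an HTTP-triggered invocation with a small JSON greeting."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```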

With a serverless architecture carrying all of the ops load, what does that mean for sysadmins?

Serverless: The end of ops?

Serverless hype hasn’t been without skepticism. On the ops side of things, there has been some concern that serverless is trying to force ops out of the picture. A successful DevOps team structure is all about dev and ops working together but, as we well know, there are some challenges to overcome. For one: dev and ops teams are incentivized by vastly different things. Development wants faster feature delivery, whereas operations wants stability and availability. These two goals are at odds with each other. By bypassing ops altogether, serverless unintentionally reinforces the “ops as a barrier” trope.

Getting to the point: No, serverless is not the end of ops as we know it. Ops looks after monitoring, security, networking, support, and the overall stability of a system. Serverless is just one way of managing systems, but it isn’t the only way. The sysadmin work is still happening – you’re just outsourcing it with serverless, and that’s not necessarily a bad (or good) thing.

Even with so many new technologies and methodologies out there – Kubernetes, serverless, containerization – the basics of computing remain the same. It’s only when we understand the fundamentals and commit to building reliable code that we can make the most of these new platforms.

In a recent interview, Google Staff Developer Advocate Kelsey Hightower said one of the biggest challenges is the “all-or-nothing” approach: “Either I’m all serverless, or I’m all Kubernetes, or I’m all traditional infrastructure. That has never made sense in the history of computing.” Ultimately, you don’t have to choose: Pick the platforms that work best for the job. Monoliths are easy to build and run, and microservices and Kubernetes can help organizations scale faster. Serverless is just another tool that teams can use to keep innovating.


Serverless pros and cons

As with any architecture, there are going to be some benefits and some disadvantages. It’s important to weigh the pros and cons carefully against your organization’s needs.

Less operational overhead

This is frequently listed as one of the biggest advantages of serverless. Security patches, server upgrades, and other maintenance are already taken care of, which can free up resources for more important things.

Scalability

You just upload your code or function and your cloud provider handles the rest. Serverless allows as many functions to be run (in parallel, if necessary) as needed to continually service all incoming requests. Or you can have serverless run an entire application (with frontend, backend, etc.) and still reap the benefits. Because you’re not boxed into a certain pricing structure or number of minutes, serverless can be infinitely scalable (in theory).

Lower operating costs

You’re only using what you need, and all costs are based purely on usage. Spending scales up and down with demand, which is more representative of how companies actually operate.

One example of this concept is comparing a rideshare service to the costs of owning a vehicle. With a car, there are costs you pay regardless of usage (insurance, registration, car payment), there are costs you pay depending on the usage (gas, maintenance), and then there are additional costs tied to unforeseen circumstances (accidents, that pothole again). With a rideshare, you’re just paying to go from point A to point B – all car costs we listed previously are being taken care of by someone else.
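
To make the pay-per-use math concrete, here’s a rough back-of-the-envelope sketch. All prices and usage figures below are illustrative placeholders, not quotes from any provider.

```python
# Back-of-the-envelope comparison of per-use vs. always-on pricing.
# All figures are illustrative placeholders, not real provider quotes.

requests_per_month = 2_000_000
avg_duration_s = 0.2                 # average execution time per request
memory_gb = 0.5                      # memory allocated to the function

price_per_million_requests = 0.20    # request charge (illustrative)
price_per_gb_second = 0.0000167      # compute charge (illustrative)

serverless_cost = (
    requests_per_month / 1_000_000 * price_per_million_requests
    + requests_per_month * avg_duration_s * memory_gb * price_per_gb_second
)

always_on_vm_cost = 0.05 * 24 * 30   # a small VM at $0.05/hour, running 24/7

print(f"Serverless: ${serverless_cost:.2f}/month")   # ~$3.74
print(f"Always-on:  ${always_on_vm_cost:.2f}/month") # ~$36.00
```

The flip side, covered below under unpredictable costs, is that the same elasticity means the bill grows as fast as usage does.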

Less control

Often cited as the biggest con: what you gain in reduced operational costs, complexity, and engineering lead time comes at the price of increased vendor dependency and less oversight. You have to place a lot of trust in the cloud vendor, since you’ll be unable to manage the server yourself. Not having control of your system means that if errors happen, you’re reliant on someone else to fix them. In business, no one cares more about your problems than you do.

Potential security risks

While cloud vendors will manage security for you, and are generally well equipped for that task, it’s the architecture of serverless itself that can introduce vulnerabilities into the system. The risk is especially acute for serverless applications built on top of microservices, where independent pieces of software interact through numerous APIs. Gartner warns that APIs will become the major source of data breaches by 2022.
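
One common mitigation is to treat every function’s API boundary as untrusted and validate input before it touches anything downstream. Here’s a minimal sketch in the same Lambda-style Python as above; the payload fields are hypothetical.

```python
# Minimal input validation at a serverless function's API boundary.
# The expected payload shape is hypothetical; the point is to reject
# anything unexpected before it reaches downstream services.
import json


def handler(event, context):
    try:
        payload = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "malformed JSON"}

    order_id = payload.get("order_id")
    if not isinstance(order_id, str) or not order_id.isalnum():
        return {"statusCode": 400, "body": "invalid order_id"}

    # Only now hand the validated payload to downstream services.
    return {"statusCode": 202, "body": json.dumps({"accepted": order_id})}
```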

Unpredictable costs

How can we list costs as both a pro and a con? That’s mainly due to the elasticity serverless offers. Since everything is event-triggered rather than paid for up front, elasticity becomes a double-edged sword: You’re not paying for cloud usage you don’t need, but because serverless is so easy to use, you may end up using more than you planned.

For another real-world example of this concept in action, let’s examine ketchup, mainly the introduction of the plastic squeeze bottle.

Heinz ketchup had been served in the iconic glass bottles we all know and love since 1890, but in 1983 the Heinz corporation unveiled the squeezable plastic bottle to consumers. This was heralded as a huge innovation – consumers could squeeze more precisely, the bottles were unbreakable which reduced losses in shipment, and the ergonomic design made it perfect for hands of all sizes. After the introduction of the new squeezable bottle, ketchup sales went up by 3.7% from the prior year. Why? Now that ketchup could be dispensed more easily, people used a lot more of it. Instead of tapping on a glass bottle hoping for a drop, the ketchup cup runneth over.

With serverless being so easy to use, it’s best to assume that developers will use it more than you expect.
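
If your provider supports it, one practical guardrail is to cap how many copies of a function can run at once. As a sketch, here’s what that looks like on AWS Lambda via boto3; the function name and limit are hypothetical examples.

```python
# Cap concurrent executions of a function so "easy to use" doesn't
# quietly become "easy to overspend". The function name and limit
# below are hypothetical examples.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_concurrency(
    FunctionName="resize-thumbnails",     # hypothetical function
    ReservedConcurrentExecutions=50,      # hard ceiling on parallel runs
)
```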

Where are we on our serverless journey?

So much of the literature about serverless comes from the cloud providers themselves, so of course it focuses on the most idealized vision of what serverless can be. As a result, those in the ops community have felt like they were being forced out, and organizations have been too busy paying attention to the benefits to see the potential downsides.

Serverless opens up a lot of opportunities in DevOps, and offers a unique solution for many use cases. Does this mean that sysadmins everywhere will soon be out of a job? Probably not. Serverless is just another tool in the toolbox, and at GitLab we’re exploring how to help users leverage Knative and Kubernetes to define and manage serverless functions. We’re also looking into how we can be even more multi-faceted: Some users want to work with a Kubernetes cluster, while others want to push a serverless function to AWS Lambda. We can already help with monoliths and microservices, and we’re actively working on supporting serverless as well.

Interested in joining the conversation? Join us in our public epic, where we discuss this topic and can answer any questions you might have. Everyone can contribute.

Photo by Tomasz Frankowski on Unsplash
