
A Microservice Success Story

Written by Ernir Þorsteinsson · Dec 9, 2020, 16:13 · 9 min read

Picnic “went microservices” a while ago.





The migration happened around the same time the growth of the tech team started picking up, and the previously, well, monolithic tech team was divided into dedicated product teams. As it turned out, the cut was three-fold in our case:

  • The people were re-organized into product teams;





  • The code was split across multiple git repositories;





  • The deployments were made more granular.





In the beginning, this led to each product team maintaining approximately one microservice. But the company kept growing, and with growth comes greater scope. We started spotting more opportunities for breaking things down. Our own Jakob has written about the principles involved before, but what does this process look like?





Identifying the Use Case





Our case study is Picnic’s Runner App, used by our drivers on the road to receive navigation instructions, manage the recyclables picked up from customers, provide ETAs back to our customers, and so on. The “so on” part has grown significantly since development of the application started in 2018. One notable addition is a set of safety features, for which we collect GPS and accelerometer data from our Runners (delivery vehicle drivers) in order to provide them with feedback on their driving.





Looking at the structure of the Runner App’s backend, we realized that our neat little service was now fulfilling more than one purpose. On the one hand, we had the original functionality, primarily responsible for managing the state of trips and user interactions. On the other hand, we had developed significant functionality around various sensor data, collectively referred to as “telemetry”.









Just because two parts of our application are not the same doesn’t mean we must introduce a new service. There are plenty of reasons not to go microservices. It took us a while to get to the point where we could properly support what we already had.





Nevertheless, we wrote an internal RFC detailing the plans, considered the options, and came to the conclusion that we needed a separate service. Let’s look at our reasoning.





Behavior Characteristics





The two parts of the application did not behave in the same way.





Managing trip states is a relatively heavy, transactional process. The requests made to the backend are relatively few, as we are mostly tracking concrete actions taken by the Runners. Each request is vitally important and must be verified for integrity — failures mean immediate disruptions for both customers and Runners. Here, we make heavy use of our open-source Jolo library for instantiating object graphs, which implies in-memory processing.





Meanwhile, handling live sensor data is entirely different. It involves high-frequency requests where each payload is ephemeral and unlikely to be tracked individually, and the data types involved are relatively simple, if voluminous. We can make more extensive use of stream processing here, fully leveraging the reactive Spring stack.
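To make the contrast concrete, here is a minimal sketch of what a streaming ingestion endpoint on the reactive Spring (WebFlux) stack could look like. The type, repository and endpoint names are hypothetical and purely illustrative, not our actual code.

```java
// Minimal sketch of a streaming telemetry ingestion endpoint on the reactive
// Spring (WebFlux) stack. All names are illustrative.
import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// A simple but voluminous data type: one GPS/accelerometer sample.
class TelemetrySample {
    @Id
    String id;
    String runnerId;
    double latitude;
    double longitude;
    double acceleration;
}

// Hypothetical reactive repository; persistence happens without blocking.
interface TelemetryRepository extends ReactiveCrudRepository<TelemetrySample, String> {}

@RestController
class TelemetryController {

    private final TelemetryRepository repository;

    TelemetryController(TelemetryRepository repository) {
        this.repository = repository;
    }

    // Accepts a high-frequency stream of samples; each payload is processed as
    // it arrives and is not tracked individually afterwards.
    @PostMapping(value = "/telemetry", consumes = MediaType.APPLICATION_NDJSON_VALUE)
    Mono<Void> ingest(@RequestBody Flux<TelemetrySample> samples) {
        return repository.saveAll(samples).then();
    }
}
```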





More than a simple difference in endpoint implementation, this implied using different application frameworks. And that in turn meant splitting the application.





The Database Matters





Databases rarely come up in discussions surrounding microservices; those discussions focus more on the structure of the services themselves, how they communicate, and how to maintain performance. Which I find rather surprising.





Microservices as independent deployments mean certain calls are pulled from the well-optimized database layer to the notoriously difficult network layer. In this case, identifying that we could split the application on a database level without trashing our existing behavior in terms of network requests was key when deciding where the new service boundary should be.
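To illustrate what gets pulled onto the network: a lookup that used to be a single in-process join becomes an HTTP round trip once the data lives behind another service. The sketch below is purely hypothetical; the service host, endpoint and TripDetails type are not part of our actual API.

```java
// Illustration only: a lookup that used to be one SQL join inside a single
// database becomes a network call once the data lives behind another service.
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Hypothetical response type.
class TripDetails {
    String tripId;
    String runnerId;
}

class TripClient {

    private final WebClient webClient = WebClient.create("http://trip-service");

    // Formerly: SELECT ... FROM telemetry JOIN trip ON ... (in-process, optimized
    // by the database). Now: an HTTP round trip with latency, retries and
    // partial failures to worry about.
    Mono<TripDetails> fetchTrip(String tripId) {
        return webClient.get()
                .uri("/trips/{id}", tripId)
                .retrieve()
                .bodyToMono(TripDetails.class);
    }
}
```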





Another consideration was that the significantly different data model opened up the possibility of using another persistence strategy. The original service was built on Postgres, but the streaming nature of the to-be-independent service made reactive MongoDB with geospatial queries a strong contender. In the end, we didn’t manage to come up with a MongoDB document structure that we trusted to be future-proof, so we stuck with our trusted Postgres + PostGIS. But we took away the learning that these databases can evolve independently, and so far they have.
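For a sense of the geospatial queries in question, here is a sketch of the kind of PostGIS lookup such telemetry data lends itself to, written as a Spring Data native query. The table, column and entity names are hypothetical, not our actual schema.

```java
// Sketch of a PostGIS-backed geospatial lookup: find telemetry samples within
// a given radius of a point. All names are hypothetical.
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

@Entity
@Table(name = "telemetry_sample")
class TelemetrySampleEntity {
    @Id
    Long id;
    // The location column is a PostGIS geography(Point, 4326); mapping omitted.
}

interface TelemetrySampleRepository extends JpaRepository<TelemetrySampleEntity, Long> {

    // ST_DWithin on geography values takes the radius in meters and can use a
    // spatial index on the location column.
    @Query(value = "SELECT * FROM telemetry_sample"
            + " WHERE ST_DWithin(location,"
            + " CAST(ST_SetSRID(ST_MakePoint(:longitude, :latitude), 4326) AS geography),"
            + " :radiusMeters)", nativeQuery = true)
    List<TelemetrySampleEntity> findNear(@Param("longitude") double longitude,
                                         @Param("latitude") double latitude,
                                         @Param("radiusMeters") double radiusMeters);
}
```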





I consider database design to be a huge part of designing and scoping a microservice, and if a microservice split is being considered, a good starting point is to look at how the database could be split. If functionality cannot be split at the database level, how much benefit could possibly be extracted from splitting the codebase at the application level? In fact, I would recommend treating the database split as a necessary first step. Were I designing a new system today and wanted the classic microservice benefits of loose coupling and interface segregation without taking on all of their difficulties, I would look into a modular application with a multi-database setup.
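As a rough sketch of that last idea: a single Spring Boot deployable can wire up one DataSource per module, so each module owns its own schema while everything still ships as one application. The property prefixes and bean names below are hypothetical.

```java
// Sketch of a modular application with a multi-database setup: one deployable,
// two independent DataSources. Property prefixes and names are hypothetical.
import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
class DataSourceConfig {

    // Trip-state module: its repositories are wired against this DataSource only.
    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.trips")
    DataSource tripsDataSource() {
        return DataSourceBuilder.create().build();
    }

    // Telemetry module: a separate database that can evolve, or be replaced,
    // independently, without requiring a service split.
    @Bean
    @ConfigurationProperties("app.datasource.telemetry")
    DataSource telemetryDataSource() {
        return DataSourceBuilder.create().build();
    }
}
```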





Independent Failures





Microservices are a bit like cats. They sometimes make your life harder, but once you have them you don’t want to imagine life without them. Their behavior can be hard to predict. And much of the internet is devoted to discussing them.





[Photo: My beloved cats come in various sizes, as do my microservices.]




Unlike my cats, however, I am not required to love all of my microservices equally.





If the telemetry gathering functionality of the Runner App backend goes down, it’s no more than a mild inconvenience for our drivers. But if the service managing trip states has so much as a hiccup, it can fully block our drivers and ruin the day of our customers.





We really don’t want the nice-to-have functionality to bring down the operationally critical must-have functionality. And that is something truly independently running microservices can give us and monolithic deployments really can’t. All services can go down, but with microservices, we have a tool to degrade gracefully.
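As a rough illustration of what that graceful degradation can look like, telemetry publishing can be made fire-and-forget, so that an outage of the telemetry service never blocks the trip flow. The service host, endpoint and payload below are hypothetical.

```java
// Sketch of graceful degradation: telemetry publishing is fire-and-forget, so
// an outage of the nice-to-have telemetry service never blocks the critical
// trip flow. The service host, endpoint and payload are hypothetical.
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

class TelemetryPublisher {

    private final WebClient telemetryClient = WebClient.create("http://telemetry-service");

    // The payload stands in for a batch of GPS/accelerometer samples.
    void publish(Object payload) {
        telemetryClient.post()
                .uri("/telemetry")
                .bodyValue(payload)
                .retrieve()
                .toBodilessEntity()
                // If the telemetry service is down, log and move on; trip state
                // handling is unaffected.
                .doOnError(e -> System.err.println("Telemetry unavailable: " + e.getMessage()))
                .onErrorResume(e -> Mono.empty())
                .subscribe();
    }
}
```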





And Not Much Else





We deemed that to be sufficient. Note that while making this decision we were not after independent deployment processes, development workflows or code repositories. We were still one team, working on one product with one release cycle. Within the team, the newly split microservices are simply an aspect of the technical implementation. We would not have gained anything from independently managing dependencies, version declarations or anything else that splitting the codebase would have entailed, so we didn’t touch those aspects at all.





If you are headed towards adopting a microservice architecture, I recommend thinking long and hard about what you actually want and need to get out of it, rather than constraining yourself to someone else’s idea of what constitutes a “correct” microservice setup.





Contributors to Success





This is a success story: at this point, the product team has gotten everything we hoped for from adopting the microservice pattern. It could have gone differently; it could have been a “why we have abandoned microservices” post. Instead, here are some of the things I believe were major factors contributing to success.





This Was Not a Slippery Slope





The team has done more than split the telemetry and trip state functionality into separate microservices. In fact, by now, the team manages four different microservices.





We have only added microservices after weighing and considering our options. We do not routinely add services as part of our growth: we may scale our existing services, but the company’s growth alone does not necessitate a new deployment. The number of microservices grows only when the application’s domain requires it.





The Technologies Were Up to the Task





Distributing your application logic means you now have a whole host of new problems to deal with. You’re not just replacing database calls with HTTP calls; the change has a deep impact on what kind of application you’re developing.





Fortunately, we had already solved these problems at Picnic during the original microservice migration. We have a Kubernetes cluster managed in-house, a message broker set up and ready to go, a smooth deployment pipeline, and an authentication system that can handle a distributed setup. Each of these required a significant amount of sweat and tears to get running.





Investing in the technologies to make microservices actually possible to work with can become the single most labor-intensive project your tech team will ever take on. Make sure you’re doing it for good reasons.





Releases and Dependency Management Did Not Increase in Complexity





It’s easy to find yourself in a situation where you spend your days doing tedious multi-step deployments, juggling dependency versions, and working in multiple code repositories for a single feature. Microservices mean a technically more complex system, but that does not mean the developers’ day-to-day workflow needs to suffer.





This is, for the most part, a question of structuring the projects in a way that suits the developers — a topic that has little to do with microservices themselves. Update your deployment processes, have a logical hierarchy to your dependencies, split code into as many repositories as your workflow requires. When racing towards a microservice pattern, don’t lose sight of those aspects of a monolithic application which are highly convenient. Odds are you can preserve the good bits.





Next Steps





The software discussed here is still evolving, and tasks still remain. Our testing can improve: we have automated unit tests, integration tests and component tests in this part of the codebase, but end-to-end testing is still an ongoing topic. The number of services our clients would like to interact with is leading us towards the topics of API gateways and cross-service data aggregation via GraphQL. I don’t yet know how this microservice story is going to end, but as long as we keep iterating carefully and with both eyes open, I’m confident it will have a happy ending!

