Crafting a web application that can run in the cloud is a bit different from building one that runs on a single web server, and some developers learn this the hard way. To avoid last-minute surprises and be well prepared for a cloud migration, here's a list of the things you need to consider while you're still developing the app.
Develop a mindset first
The first and probably the most important thing you need to develop is a cloud mindset. We need to assume that the app is always running in a distributed way, across multiple, disposable instances, even if you plan to deploy it on a single server with everything bundled in it. This way, deploying or migrating to the cloud becomes a much easier process that requires minimal effort. Notice the word disposable we used to describe our instances; this is going to be key for the things we consider next.
You also need to keep in mind that hacky solutions like connecting via SSH to run a script, or directly updating the database, are off the table as well (they should be off the table anyway) since, in our theoretical example, we have 30 server instances, the databases are sharded and there is a cluster of cache servers. We can't "quickly update" our CSS files or upload a file via FTP either, since every file is stored in an object store which doesn't support manual updates and is served through a Content Delivery Network (CDN), which means the original file is cached across the globe.
Sounds complicated? Well, it's a different mindset, which means it might sound complicated at first but will then become second nature to you and to the way you craft your applications. Not taking the server for granted and keeping in mind that all services are external and distributed is the key concept. Now let's break it down into smaller parts.
Storing files & media

Media files, user uploads, static assets and everything in between can only live on the server for a short period of time. The storage on cloud instances is usually ephemeral, which means that whenever the instance gets terminated, its disks are terminated as well. There are ways to add persistent volumes, or even elastic file systems that scale to your needs, but this comes with a cost and it's not the common case for a web application that, say, handles user uploads.
Most cloud providers, like AWS, DigitalOcean and Azure, offer something called Object Storage, which can be eloquently described as "a bucket of files". Such buckets can contain media files, the application's static files, PDF documents, or even larger files that need to be downloaded by multiple users simultaneously.
There are two common patterns for uploading files to an object store: a) uploading the files directly, as soon as the user uploads them to the app, via an API, an SDK or a framework library, or b) syncing the server's file storage with the object store and deleting the local files afterwards. Pattern (a) is the common case and the best practice; (b) is only used for cloud migrations, where we need to move the files the application has used up to now into the cloud.
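Pattern (a) might look like the following sketch, assuming AWS S3 and the boto3 SDK; the bucket name and URL format are illustrative, and other providers expose equivalent APIs:

```python
def object_url(bucket, key):
    # The object's path is now a URL; this is what we store in the database.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def store_upload(fileobj, key, bucket="my-app-uploads"):
    """Stream an uploaded file straight to the object store (pattern a)."""
    import boto3  # assumed available in an AWS deployment

    s3 = boto3.client("s3")
    s3.upload_fileobj(fileobj, bucket, key)
    return object_url(bucket, key)
```

The upload handler never touches the instance's ephemeral disk, so it keeps working no matter which (or how many) web servers are running.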
Every object (file) in the object store comes with its own permissions and path, as it would on a plain server, with the only exception that the path is now a URL. So, instead of storing a file path in the database, you now store the full URL of the file.
A minor caveat here is that if your files need to be public and accessed directly and frequently by users, an object store might prove to be slow, which means you need to serve the files via a CDN that "sits" in front of the object store and heavily caches the files until they get modified or the cache expires.
Static assets can be stored in the object store as well, since the web servers may hold different file paths depending on your build process. What needs to be taken into account here is that, since the assets are stored in the object store and served by a CDN, they are heavily cached. The most popular solution to this issue is to have a predictable file hash, generated by a build tool, so that whenever the application serves a different file name, the CDN will retrieve the new file from the object store and cache it globally all over again.
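A minimal sketch of such content hashing in a build step (the function name is illustrative; bundlers like webpack or Vite do this for you):

```python
import hashlib
from pathlib import Path

def hashed_name(path):
    """Return e.g. 'app.3f2b8a91.css': new content means a new file name,
    so the CDN fetches and caches the fresh asset instead of serving a
    stale copy from its edge locations."""
    p = Path(path)
    digest = hashlib.md5(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"
```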
Managing user sessions

Storing users' sessions on the server is another cloud anti-pattern. In our theoretical example, where our application is spread across 30 web server instances, we can't have the sessions stored there because the user might be directed to a different instance on every subsequent request. Some load balancers mitigate this issue with "sticky sessions", an option that, when enabled, instructs the load balancer to route subsequent requests to the same web server instance, but this option comes with a hidden performance cost.
The actual solution to the session issue is to use a centralized cache store, such as Redis or Memcached, for storing the sessions. Luckily, all major frameworks and languages have built-in support for keeping sessions in a separate store, enabling the web server to forget all about session management and hence making our app cloud-friendly.
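Under the hood, such a framework-provided session store boils down to something like this sketch, assuming a Redis-like client exposing `get`/`setex` (e.g. redis-py's `redis.Redis()`); the class and key names are illustrative:

```python
import json
import secrets

class CentralSessionStore:
    """Sessions live in a shared store, so any web server instance can
    serve any user's next request."""

    def __init__(self, client, ttl=3600):
        self.client = client  # e.g. redis.Redis(host="cache.internal")
        self.ttl = ttl

    def create(self, data):
        sid = secrets.token_hex(16)  # random, unguessable session id
        self.client.setex(f"session:{sid}", self.ttl, json.dumps(data))
        return sid

    def load(self, sid):
        raw = self.client.get(f"session:{sid}")
        return json.loads(raw) if raw else None
```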
Retrieving & storing information in cache
Speaking of cache, we can't really have the cache live inside our web server, simply because we have too many web servers and a user's request might hit a random one where the cache item is missing. Again, a centralized cache store, where we store and retrieve items via TCP, is key here. This way, our cache service can scale up or down depending on our caching needs, and our items remain available even if a web server instance gets replaced.
If you find yourself needing more control over the cache server though, for example if you force-empty your cache periodically or whenever your database schema changes, you might need to do that programmatically by creating a few scripts that run on your deployment tool.
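The everyday interaction with such a shared store is the cache-aside pattern, sketched below against the same Redis-like client assumed earlier (the forced flush would simply be a deployment script calling the store's flush command):

```python
import json

def cache_aside(client, key, compute, ttl=300):
    """Try the shared cache first; on a miss, compute the value and store
    it so every web server instance can reuse it until the TTL expires."""
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)
    value = compute()
    client.setex(key, ttl, json.dumps(value))
    return value
```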
Running scheduled jobs

Scheduled jobs (e.g. cron jobs) can run on your web servers or on dedicated worker instances that do nothing else. The important thing to consider here is that the worker instances can scale as well. The problem that arises, now that our theoretical scenario has 15 worker instances, is that we don't want them to process the same data over and over again, because a) it would be a waste of resources and b) we might run into problems. Think, for example, of a scheduled job that processes some data and then emails the users about the outcome. We wouldn't want our users to receive the same email 15 times, would we?
There are two possible solutions to this issue: the first is a mutually exclusive lock and the second is a queue. These two approaches might sound similar, but they aren't. With the first approach, we "lock" the data that is about to be processed, e.g. by updating a boolean flag in the database, so that the next worker process that runs excludes this data from what it will process, and so on and so forth.
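A sketch of the flag-based lock, using SQLite and a hypothetical `jobs` table for brevity; in a real multi-instance setup the claim must be atomic, e.g. a single UPDATE or PostgreSQL's SELECT ... FOR UPDATE SKIP LOCKED:

```python
import sqlite3

def claim_batch(conn, limit=100):
    """Mark rows as claimed so the next worker run excludes them."""
    rows = conn.execute(
        "SELECT id FROM jobs WHERE claimed = 0 ORDER BY id LIMIT ?", (limit,)
    ).fetchall()
    ids = [r[0] for r in rows]
    if ids:
        placeholders = ",".join("?" * len(ids))
        conn.execute(
            f"UPDATE jobs SET claimed = 1 WHERE id IN ({placeholders})", ids
        )
        conn.commit()
    return ids
```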
The second option is to have a queue (for example, a first-in-first-out queue) backed by a central store like Redis, where we enqueue the data required for a job to run; the worker processes then retrieve and delete this data from the queue, so that it's only available to one specific process.
One caveat here is the failure strategy: whenever a worker fails for some reason, it needs to handle that failure somehow, so that no job (or data) gets lost. Practically, you'd have to enqueue the message with its original payload again so that another worker picks it up, and/or notify the team that something went wrong.
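The queue approach with the re-enqueue-on-failure strategy can be sketched as follows, assuming a Redis-like client exposing `lpush`/`rpop` (LPUSH plus RPOP yields first-in-first-out ordering; the queue name is illustrative):

```python
import json

QUEUE_KEY = "jobs"

def enqueue(client, payload):
    client.lpush(QUEUE_KEY, json.dumps(payload))

def work_one(client, handler):
    """Pop one job; if the handler fails, push the original payload back
    so another worker can retry it and nothing gets lost."""
    raw = client.rpop(QUEUE_KEY)
    if raw is None:
        return False  # queue is empty
    try:
        handler(json.loads(raw))
    except Exception:
        client.lpush(QUEUE_KEY, raw)  # re-enqueue the original payload
        raise  # surface the failure (and/or notify the team)
    return True
```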
Updating the database

At the beginning of this article, we mentioned that we can't perform manual database updates, an action which is an anti-pattern in general, not only in cloud-based applications. In cloud environments, however, the database might have replicas or be sharded, making a manual update even more difficult.
The solution is to treat every database update as a code change, meaning that your application should leverage database migrations: scripts which handle database versioning in order to update its schema and data.
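A minimal sketch of how migration tools work, using SQLite for brevity; real tools like Django migrations or Alembic follow the same idea of versioned scripts, applied exactly once and tracked in the database itself:

```python
import sqlite3

# Ordered, versioned schema changes; each one runs exactly once.
MIGRATIONS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)",
    2: "ALTER TABLE users ADD COLUMN name TEXT",
}

def migrate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for version in sorted(MIGRATIONS):
        if version not in applied:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()
```

Because the applied versions live in the database, every environment (and every replica's primary) converges to the same schema by running the same code.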
Crafting a web application for a cloud-based environment relies mainly on a different mindset: one that forgets all about "the server" and thinks about "the services". Most frameworks and tools these days make our lives easier by embedding cloud tools in our workflow; however, we as developers have the responsibility to enable these technologies and use them to the benefit of our users and ourselves.