Recently, Rust has become an increasingly popular choice for writing web services and web apps. Although Rust deployments are typically not given first-class support, there is still a variety of platforms you can use to deploy a Rust web app. In this article, we'll go over your options, the advantages and disadvantages of each, and the best way to deploy Rust for your use case.
## How do you deploy Rust?
When you compile a Rust program, you get a self-contained executable. You can then run that binary from anywhere - including your own self-hosted server!
The compiled nature of Rust programs means they are normally best run in containers, or alternatively on a VPS. You can also run them as serverless functions through supported platforms like AWS Lambda. Each of these deployment methods has its own tradeoffs - which we'll be looking at below. Interested in writing a Rust API? We also have a guide to writing and deploying a Rust API with the Axum framework, which you can find here.
## Different types of hosting
### VPS
VPS (Virtual Private Servers) deployments allow you to deploy to a machine that you have full control over. By SSHing into it, you can add or remove any software on the VPS that you want (or don't want!) to use.
The basic process mostly involves setting up Nginx or Apache (or a similar reverse proxy), grabbing the files you need from your Git repository on GitHub or GitLab, and compiling the program itself. You can then set the application up as a systemd service so it automatically starts whenever the machine boots.
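As a sketch, a systemd unit for a compiled Rust binary might look like this (the unit name, user, and paths are placeholders; adjust them for your own setup):

```ini
# /etc/systemd/system/myapp.service (hypothetical name and paths)
[Unit]
Description=My Rust web app
After=network.target

[Service]
# Path to the release binary you compiled on the server
ExecStart=/home/deploy/myapp/target/release/myapp
Restart=on-failure
User=deploy

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now myapp` and it will start on boot and restart on failure.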
Unfortunately, a couple of issues come along with this: you need to handle all problems yourself (unless the VPS provider itself has an outage), and it's typically less scalable than other deployment options. A VPS has a hard maximum on resources and you pay for a set amount, so if you're not making full use of those resources, you may be overpaying. This is especially relevant to Rust, as Rust web applications typically have a low memory footprint: most use around 50-150 MB, depending on what your application does.
Here's a list of advantages and disadvantages for using a VPS:

Advantages:
- Full control over the machine and the software running on it
- Simple, predictable pricing

Disadvantages:
- You handle maintenance and outages yourself
- A hard resource ceiling: you pay for a set amount whether you use it or not
### Serverless
Serverless deployments are quite popular nowadays and a great way to host functions that see usage but don't run all the time. When developers talk about "serverless", they normally mean serverless functions: Rust code that runs on a server when a given endpoint is hit, and otherwise doesn't run at all.
Serverless is typically used to solve the "scale-to-zero" problem. Sometimes you may have an endpoint that isn't used often, but your Rust code still consumes memory because it's part of a deployed application. By using serverless, we can move the endpoint to a separate function that consumes no memory while it isn't serving HTTP traffic. Cloudflare and AWS both have their own version of serverless functions (Cloudflare Workers and AWS Lambda, respectively) that simply run when an HTTP request hits a given endpoint. This lets companies save money by only running code when required, and it's reflected in the pricing: AWS Lambda gives you up to 1 million serverless function invocations per month for free!
However, there are some caveats. You often need to adapt your code to the platform so that it can serve your serverless functions; for Rust, this typically means writing multiple binaries. You also won't be able to use regular Rust backend frameworks, though given that one serverless function equates to one endpoint, that's not too much of a loss. Additionally, cold starts can make your functions respond slowly at first while the machine "warms up", which can be a serious problem for applications that require low latency.
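To illustrate the adaptation involved, here's a rough sketch of what a single AWS Lambda endpoint looks like using the `lambda_http` crate instead of a regular framework (crate setup and versions omitted; treat this as an outline rather than a drop-in binary):

```rust
use lambda_http::{run, service_fn, Body, Error, Request, Response};

// One serverless function = one endpoint handler.
async fn handler(_req: Request) -> Result<Response<Body>, Error> {
    let resp = Response::builder()
        .status(200)
        .body(Body::from("Hello from Lambda!"))?;
    Ok(resp)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // lambda_http adapts API Gateway / function URL events into HTTP types.
    run(service_fn(handler)).await
}
```

Each additional endpoint typically becomes another binary like this one, which is why serverless Rust projects often end up with several `[[bin]]` targets.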
There is also the potential issue of over-engineering: creating several serverless functions when you only need one. Coupled with cold starts, this can be quite a performance (and wallet) drain while also hurting the maintainability of those functions. Good engineering practices can mitigate this.
Here's a list of advantages and disadvantages for deploying via serverless:

Advantages:
- Scales to zero: you only pay when your code actually runs
- Generous free tiers (e.g. AWS Lambda's 1 million free invocations per month)

Disadvantages:
- Code must be adapted to the platform, often as multiple binaries
- Cold starts hurt latency-sensitive applications
- Easy to over-engineer into too many small functions
### Managed serverless
While a VPS has a hard ceiling on resources and serverless forces you to adapt your Rust code to the platform, managed serverless lets you pay only for what you use while still running whatever you want. This is done by packaging your Rust application as a Docker container image, which is then built and run on a Kubernetes cluster (for example) or a similar orchestrator.
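As an illustration, such container images are typically built in two stages so the final image stays small (`myapp` is a placeholder binary name here; the base images are just common choices):

```dockerfile
# Build stage: compile the release binary with the full Rust toolchain
FROM rust:1 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Runtime stage: copy only the binary into a slim base image
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The platform then runs this image for you, scaling instances up and down with demand.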
This has the huge advantage that anything that fits in a container image can be deployed, letting you ship software quickly and efficiently. Many managed serverless platforms also offer database integrations and other kinds of infrastructure, which saves you the time of finding other platforms or tools to use with your Rust code.
On the other hand, managed serverless inherits certain disadvantages from both VPS and serverless. The host itself typically runs on AWS, GCP, or another large cloud platform, so a provider outage can cause a domino effect of outages. Some companies have found ways around this, but otherwise you're still at the mercy of whatever provider is used under the hood. However, this also comes with an advantage: these platforms can leverage their provider's infrastructure.
See this short list of advantages and disadvantages for managed serverless below:

Advantages:
- Pay only for what you use, while deploying anything that fits in a container
- Built-in integrations (databases and other infrastructure) on many platforms
- Leverages the underlying cloud provider's infrastructure

Disadvantages:
- Outages at the underlying provider (AWS, GCP, etc.) can cascade to your app
- Less direct control over the machine than a VPS
## Shuttle
So, where does Shuttle fit into this?
Shuttle uses AWS under the hood and aims to reduce the work cloud deployments require: it uses its own runtime and dockerizes your Rust application for you, with dependency caching, when you deploy to Shuttle's servers. We use Infrastructure from Code, meaning you declare your infrastructure in code instead of through configuration files. When your application runs, the runtime knows what to provision based on your annotations. This brings a couple of advantages:
- No Docker knowledge required
- No configuration files
- Just run `shuttle deploy` and you're done
Of course, we are talking about our own product here - we're biased. Our product is also somewhat early-stage when it comes to provisioning resources and various use cases. However, if you're looking to quickly deploy a Rust web app (for example, an MVP or POC), we believe Shuttle is a great fit for you!
## Deploying your Rust app to Shuttle
Let me show you how quick it is to get a Rust web app deployed on Shuttle. We'll use Axum, one of the most popular Rust web frameworks.
First, install the Shuttle CLI:
Linux and macOS:
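At the time of writing, the docs suggest piping the install script like so (verify the URL against the current Shuttle documentation, as it may change):

```shell
curl -sSfL https://www.shuttle.dev/install | bash
```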
Windows (PowerShell):
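Something along these lines (again, check the current docs for the exact command):

```shell
iwr https://www.shuttle.dev/install-win | iex
```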
Alternatively, using Cargo:
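Installing through Cargo compiles the CLI from source, which takes longer but works on any platform with a Rust toolchain:

```shell
cargo install cargo-shuttle
```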
Create a new Rust project using the Axum template:
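Assuming a recent CLI version, `shuttle init` can scaffold directly from the Axum template:

```shell
shuttle init --template axum
```

The CLI will prompt you for a project name and create the directory for you.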
This generates a basic Axum web service with Shuttle annotations already set up, along with all necessary dependencies. The Rust project structure looks like this:
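A freshly generated project is a standard Cargo layout, roughly:

```
my-project/
├── Cargo.toml
└── src/
    └── main.rs
```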
Notice the #[shuttle_runtime::main] macro - this tells Shuttle how to run your application. The return type shuttle_axum::ShuttleAxum indicates we're deploying an Axum web service.
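For reference, the generated `src/main.rs` looks roughly like this (the Axum template at the time of writing; exact contents may differ slightly between CLI versions):

```rust
use axum::{routing::get, Router};

async fn hello_world() -> &'static str {
    "Hello, world!"
}

// Shuttle's macro replaces the usual #[tokio::main] entrypoint
// and hands the router to the Shuttle runtime.
#[shuttle_runtime::main]
async fn main() -> shuttle_axum::ShuttleAxum {
    let router = Router::new().route("/", get(hello_world));
    Ok(router.into())
}
```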
To test locally:
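Shuttle runs your app locally through its runtime:

```shell
shuttle run
```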
Your application will be available at http://localhost:8000:
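You can hit the endpoint with curl and see the template's hello-world response:

```shell
# assuming the default Axum template is running on port 8000
curl http://localhost:8000
```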
When you're ready to deploy:
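Deployment is a single command, run from the project directory:

```shell
shuttle deploy
```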
Going to the Shuttle console, you'll see your app building.
That's it. Shuttle handles building and deploying your application. You'll get a URL where it's live - typically something like `<your-project-name>.shuttle.app`.
Visit the public URL and you'll see your app running in production.
Your Rust web app is now deployed and accessible to the world - all done in minutes without writing a single configuration file.
Want to add a database? Just add the annotation to your main function:
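As a sketch, here's what the annotation looks like with Shuttle's shared Postgres resource (this assumes the `shuttle-shared-db` crate with its Postgres/SQLx feature enabled in your Cargo.toml):

```rust
#[shuttle_runtime::main]
async fn main(
    // Shuttle provisions the database and injects a ready-to-use pool.
    #[shuttle_shared_db::Postgres] pool: sqlx::PgPool,
) -> shuttle_axum::ShuttleAxum {
    // Pass `pool` into your router via application state as needed.
    let router = axum::Router::new();
    Ok(router.into())
}
```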
Shuttle provisions the database automatically when you deploy. No configuration files, no separate database setup, no environment variables to manage manually - it's all declared in your Rust code. Learn more about provisioning databases with Shuttle.
## Automating deployments with GitHub
Once you've deployed manually, you'll likely want to automate the process. Shuttle lets you connect your GitHub repository directly from the console for automatic deployments on push.
Navigate to your project settings in the Shuttle Console, connect your GitHub repository, and enable automatic deployments. Select which branch to track, and you're done. Now whenever you push to GitHub, Shuttle automatically pulls your latest Rust code, builds it, and deploys the updated application.
Learn more about GitHub integration with Shuttle.
## Finishing up
Thanks for reading! I hope you enjoyed this guide on deploying a Rust web application. With so many deployment choices available, it can be difficult to figure out the best fit for your specific use case - but Rust applications are easier to deploy than ever, from simple APIs to full web apps.
Feel free to join our Discord server if you have any questions or need help deploying your Rust application.