Amazon’s Free Tier for new AWS accounts helps startups get going with an impressive array of cloud technologies at no cost for the first year. But what happens when your Free Tier period expires or you exceed its limits?
Many AWS customers are shocked by the cost of services as their business scales. Underutilized servers, excessive outgoing data transfer costs, and pay-as-you-go service fees can add up quickly. Here are a few ways to minimize your AWS bill without sacrificing performance or availability.
Monitor Your Usage
The first step to cutting costs is understanding which services and usage patterns contribute most to your AWS bill. The AWS Billing Dashboard shows a breakdown of your costs by service, which will help you determine where to focus your cost optimization efforts.
Cost Explorer
Cost Explorer breaks down which services, instance types, and AWS Regions contributed to your usage fees each month. That breakdown is a good indication of where cost optimization efforts will pay off, since it shows which services make up the majority of your monthly costs.
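If you prefer to pull these numbers programmatically, the Cost Explorer API exposes the same data. Below is a minimal boto3 sketch that prints a month’s cost per service; the date range and the UnblendedCost metric are assumptions you would adjust to match how you review your bill.

```python
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print each service and its cost for the month, largest first.
groups = response["ResultsByTime"][0]["Groups"]
groups.sort(key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]), reverse=True)
for group in groups:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```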
Budgets
AWS allows you to set budgets with alert thresholds for your current month’s projected invoice. AWS will continuously monitor your usage throughout the month and notify you via email if it looks like you may exceed your budget threshold.
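As a sketch of what that looks like in code, the snippet below uses boto3 to create a monthly cost budget that emails an alert when forecasted spend crosses 80% of a $1,000 limit. The account ID, amount, and email address are placeholders.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # your AWS account ID
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,           # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "billing@example.com"}
            ],
        }
    ],
)
```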
You can also attach automated actions to a budget so that AWS responds when you exceed a threshold. For example, a budget action can apply a Service Control Policy that limits user privileges, restricts the EC2 instance types that can be launched, or prevents users from launching new server instances altogether.
Cost Anomaly Detection
AWS Cost Anomaly Detection uses machine learning to detect unusual activity in your AWS account. The service will send alerts and analyze the cause of the anomaly to help you understand what action should be taken. This is especially useful if you automate server orchestration or have concerns about malicious users gaining access to your AWS account.
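Anomaly detection can also be enabled through the Cost Explorer API. The sketch below creates a monitor that watches per-service spend and a daily email subscription for it; the monitor name, email address, and dollar threshold are assumptions.

```python
import boto3

ce = boto3.client("ce")

# Create a monitor that tracks spend per AWS service.
monitor_arn = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "per-service-spend",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)["MonitorArn"]

# Subscribe an email address to daily anomaly summaries above a $100 impact.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "cost-anomaly-alerts",
        "MonitorArnList": [monitor_arn],
        "Subscribers": [{"Type": "EMAIL", "Address": "billing@example.com"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,
    }
)
```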
Saving Money in EC2
Elastic Compute Cloud (EC2) allocates server instances for running your software. Different instance types are best suited to different types of jobs.
Starting with On Demand Pricing
The typical way to allocate an EC2 instance is through On Demand Pricing. This is the most expensive way to pay for EC2, and it is especially common when evaluating the compute resources needed for a new application or workload. Pricing is straightforward: you pay for the time the server is running. On Demand Pricing is a reasonable starting point for new application deployments, but it generally means you are paying for some amount of under-utilization (e.g., running a server when no one is using it).
Spot Instances and Spot Fleets
Spot Instances are a cost-saving option for EC2 that make use of AWS’s spare capacity. Using Spot Instances can decrease your server costs by as much as 90% compared to On Demand Pricing, but it comes with trade-offs. Spot Instances are not guaranteed to remain available and can be terminated by AWS with a 2-minute warning, so using them requires some planning.
AWS’s Spot Fleet service lets you define a target capacity and the maximum price you are willing to pay for Spot Instances. Spot Fleets give your deployment some resiliency against any single termination event impacting service availability. Spot pricing varies with demand, so relying exclusively on Spot Instances only makes sense for certain services or workloads.
For example, Spot Instances may not make sense as the go-to way to increase your web application server’s throughput. It can, however, augment throughput for an application running on dedicated instances that handle some minimum level of throughput. Spot Instances are ideal for background jobs that are not as time-sensitive, such as continuous integration server tasks or bulk data processing.
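As an illustration, the boto3 sketch below launches a single one-time Spot Instance for an interruptible background job. The AMI ID, instance type, and maximum price are placeholders; a production setup would more likely go through Spot Fleet or Auto Scaling.

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder worker AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",            # don't relaunch after interruption
            "InstanceInterruptionBehavior": "terminate",
            "MaxPrice": "0.05",                        # max $/hour you're willing to pay
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```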
Auto Scaling
You can optimize spend for your On Demand Instances with Auto Scaling. Auto Scaling can increase or decrease your desired capacity by turning more instances on or off. You set the metric—such as a CPU utilization threshold—that determines when to scale up or down. Additionally, Auto Scaling supports scheduled changes in capacity based on time of day.
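The sketch below shows both ideas with boto3: a target-tracking policy that keeps average CPU around 60% for an existing Auto Scaling group, and a scheduled action that drops capacity overnight. The group name, target value, and schedule are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: add or remove instances to hold average CPU near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="target-60-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)

# Scheduled capacity change: scale down every night at 01:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="overnight-scale-down",
    Recurrence="0 1 * * *",
    DesiredCapacity=1,
)
```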
A common practice in web application availability is to run your application in an “N+1” configuration. That means you have however many instances you need to handle traffic, plus one spare in case there is a sudden increase in requests or an unexpected failure with one of your other instances. Auto Scaling is ideal for managing variable traffic needs for EC2-hosted web applications, and supports running one or more spare instances, whatever makes the most sense for your workload. Properly configured Auto Scaling rules ensure you have the server capacity you need without overspending on EC2.
Choose the Right Instance Type
Amazon offers several job-optimized instance families. For example, you can purchase servers with higher-end GPUs to run machine learning tasks. Memory-optimized instances are ideal for cache-heavy applications, and CPU-optimized instances help with real-time, computationally intensive jobs. Unless you already know where your workload’s bottlenecks are, it usually makes sense to start with a general purpose instance type.
Each of the optimized instance types has its own pricing structure and generally costs more than a general purpose instance. For the right kind of work, however, a more expensive optimized instance can accomplish more than several general purpose instances. For example, running a computationally expensive task on several general purpose instances could cost more than running the same task on fewer CPU-optimized instances.
Reserved Instance Pricing
If you know that you will keep a certain number of instances of a certain type running for a year or more, Reserved Instance Pricing offers deep discounts. Compared to On Demand Pricing, Reserved Instance Pricing can help you realize up to a 72% discount with a 1-year or 3-year commitment.
That may sound like a long time to commit to a specific instance class in a specific AWS Region, but Amazon lets you sell Reserved Instance commitments on the Reserved Instance Marketplace once you no longer need them. Re-selling commitments should not be your primary plan to save money, but it offers some peace of mind knowing you can still recoup costs if your needs change before the commitment term ends.
Savings Plans
A Savings Plan offers roughly the same cost savings as Reserved Instance Pricing compared to On Demand Pricing, but it is more flexible. Savings Plans don’t require a commitment to a specific instance size, or even to a specific AWS Region. Instead, you commit to a consistent amount of compute usage (measured in dollars per hour) across EC2.
Update to the Latest Generation Instances
AWS regularly uses its pricing model to encourage customers to move to the latest generation of EC2 instance types. Updating to the latest generation can shave a few cents per day off each instance. That’s not a dramatic saving on its own, but it adds up across larger systems.
Direct Uploads to S3
Eliminating unnecessary work is another way to optimize EC2 costs. Uploading media is a common use case for web applications. A user may be required to upload a profile photo as part of signing up for an account on your app, for example.
Traditionally, web application servers have handled user uploads themselves. However, adopting other AWS services can save you money by offloading that work from your web application servers. S3 direct uploads take your web application server out of the loop for uploading large files.
Instead of your server receiving a file and then placing it in an S3 bucket or other storage, your web application can hand the user a short-lived, presigned upload URL so the browser ships the file directly to an S3 bucket. This reduces the workload on your web application server, freeing up resources to handle other web requests. Since data ingress (uploading) to AWS is generally free, this optimization is essentially a free way to reduce CPU utilization on your web application server while speeding up file uploads.
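For example, such a presigned URL can be generated server-side with boto3; the bucket name, object key, and expiry below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={
        "Bucket": "my-app-uploads",              # placeholder bucket name
        "Key": "profile-photos/user-123.jpg",
        "ContentType": "image/jpeg",             # client must send a matching Content-Type
    },
    ExpiresIn=300,  # URL is valid for 5 minutes
)

# Return upload_url to the browser, which then PUTs the file directly to S3,
# bypassing your application server entirely.
print(upload_url)
```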
Right-Size Your Instances
Oftentimes web app servers are over-provisioned. That is, the resources available on the server exceed the needs of your web app. Examine your CPU and memory utilization to determine if a smaller EC2 instance makes sense. Note that there is a difference in network performance for the smallest instance sizes, but generally the performance impact is negligible for workloads that can be handled by those instances.
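CloudWatch already records the utilization data you need for this decision. The sketch below pulls two weeks of CPU utilization for one instance; the instance ID is a placeholder, and consistently low averages are a hint that a smaller size would do.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(days=14),
    EndTime=now,
    Period=3600,                         # one datapoint per hour
    Statistics=["Average", "Maximum"],
)

datapoints = stats["Datapoints"]
if datapoints:
    avg = sum(d["Average"] for d in datapoints) / len(datapoints)
    peak = max(d["Maximum"] for d in datapoints)
    print(f"Average CPU over 14 days: {avg:.1f}% (peak {peak:.1f}%)")
```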
Right-sizing your capacity may also involve choosing different instance types. As mentioned earlier, choosing the right instance type for multi-instance environments can reduce your AWS bill. For example, if your application is fully utilizing available CPU but isn’t fully utilizing available memory, you may find that a CPU-optimized instance type makes more financial sense for your application than a general instance type or some other optimized instance type.
Web App Performance Optimization
So far we’ve discussed ways you can optimize the cost of hardware itself. However, optimizing your software can help you reduce costs as well. “Expensive” (as in “computationally expensive”) functions in your code can contribute to an expensive AWS bill. Start optimizing your web app by analyzing the slowest endpoints with a performance monitoring tool.
New Relic, for example, can show you which endpoints are accessed most frequently and which endpoints have the slowest response times. You can then drill down into the specific operations or service calls within a slow endpoint to identify the bottleneck.
Performance optimization can reduce your application’s CPU utilization, memory footprint, or response times. All of these improvements will increase throughput, which means you can use fewer (or smaller) EC2 instances to host your web application.
Saving Money in S3
AWS’s Simple Storage Service (S3) is an object storage service, commonly used for storing multimedia or other large files that need to be available across multiple servers or services. A common use case for S3 is storing photos, videos, or other files uploaded by your web app’s users. Per gigabyte, S3 storage costs roughly a quarter of the price of EC2’s Elastic Block Store (EBS) volumes, and it can be used by several other AWS services.
Caching Responses for Asset Requests
Forming a caching strategy for images, CSS, and JavaScript resources can drastically reduce requests to your S3 bucket. While it’s free to upload data to AWS, downloading files from AWS can get expensive. The fewer requests your users have to make to retrieve resources, the more money you can save on your AWS bill.
Every time a user visits a web page, their browser will request images and other assets that need to be displayed on the page. You can reduce your S3 data transfer costs by setting appropriate cache response headers for your assets. For example, you can instruct a requesting browser to cache a user’s profile image for 24 hours. The browser will then use the first copy of the image it fetches for an entire day before attempting to download an updated profile image.
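When you upload assets with boto3, the cache policy is just another parameter on the upload call. A minimal sketch, assuming a placeholder bucket and a 24-hour lifetime:

```python
import boto3

s3 = boto3.client("s3")

with open("avatar.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-app-assets",                  # placeholder bucket name
        Key="profile-photos/user-123.jpg",
        Body=f,
        ContentType="image/jpeg",
        CacheControl="public, max-age=86400",    # browsers may reuse this for 24 hours
    )
```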
File Compression
Compressing files you store in S3 conserves storage space and reduces outgoing data transfer fees. All modern web browsers support gzip compression, so serving gzip-compressed text assets has essentially no downside: compression decreases storage costs, bandwidth costs, and response times.
Text files benefit the most from gzip compression; you will likely see a 70% or greater reduction in file size for HTML, CSS, and JavaScript files.
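A minimal sketch of what this looks like in practice: compress the asset with Python’s gzip module and set the Content-Encoding header on upload so browsers decompress it transparently. The bucket and key names are placeholders.

```python
import gzip
import boto3

s3 = boto3.client("s3")

# Compress the asset once, at deploy time.
with open("app.js", "rb") as f:
    compressed = gzip.compress(f.read())

s3.put_object(
    Bucket="my-app-assets",               # placeholder bucket name
    Key="js/app.js",
    Body=compressed,
    ContentType="application/javascript",
    ContentEncoding="gzip",               # browsers decompress automatically
)
```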
Images do not see these dramatic gains from gzip, however. If you are storing images or videos, you may need to do some processing of your own to reduce file size. Image compression tools such as ImageMagick can compress images or resize large images down to dimensions that fit your use case. For example, you may wish to generate thumbnails or resized copies instead of always serving the full-sized image. Storing multiple resized copies of an image increases storage costs, but serving those smaller copies reduces your overall costs by cutting data transfer fees.
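If your pipeline is in Python, the Pillow library is one alternative to ImageMagick for this. The sketch below generates a JPEG thumbnail before upload; the dimensions and quality setting are assumptions to tune for your use case.

```python
from PIL import Image

def make_thumbnail(source_path: str, dest_path: str, max_size=(400, 400)) -> None:
    """Save a smaller, compressed copy of an image, preserving aspect ratio."""
    with Image.open(source_path) as img:
        img = img.convert("RGB")     # ensure a JPEG-compatible mode
        img.thumbnail(max_size)      # shrinks in place, keeps aspect ratio
        img.save(dest_path, format="JPEG", quality=85, optimize=True)

make_thumbnail("upload-original.jpg", "upload-thumb.jpg")
```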
Similarly, video compression, thumbnail generation, or clipping can reduce your video storage and transfer costs. Video manipulation is computationally intensive, so you may decide to offload this work from your web app server to an AWS Lambda function (or use an AWS service designed for video processing, such as AWS Elemental MediaConvert).
Data Retention
A simple way to reduce S3 storage costs is to delete old files that are no longer used. S3 Lifecycle configurations can expire objects, automatically deleting them once they reach a certain age. You can scope Lifecycle rules to specific prefixes (file paths), object tags, or other parameters to build sophisticated data retention policies.
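As a sketch, the boto3 call below adds a rule that permanently deletes anything under a hypothetical tmp/ prefix 90 days after it was created. The bucket name and retention period are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-uploads",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-temporary-files",
                "Filter": {"Prefix": "tmp/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},   # delete 90 days after creation
            }
        ]
    },
)
```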
Storage Classes
S3 supports a few storage classes to help you balance costs with availability. S3 Standard is the default storage class, which is useful for general purpose data storage that may be frequently accessed.
S3 Standard-Infrequent Access (IA) storage offers redundant data storage across multiple Availability Zones (data centers), just like S3 Standard. It is optimized for long-lived but infrequently accessed data that still needs millisecond response times. S3 Standard-IA has lower storage costs than S3 Standard, but slightly higher retrieval costs, which makes it a good candidate for use cases such as user-uploaded file archives and multimedia backups.
For infrequently accessed data that can tolerate a lower level of resiliency (e.g., assets you can programmatically regenerate if needed), you can choose S3 One Zone-IA storage. S3 One Zone-IA keeps data in a single Availability Zone. It is even cheaper than S3 Standard-IA, but is more susceptible to physical loss from hardware failures.
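Choosing a storage class is as simple as a parameter on the upload. A minimal sketch, with a placeholder bucket and archive file:

```python
import boto3

s3 = boto3.client("s3")

with open("2023-archive.zip", "rb") as f:
    s3.put_object(
        Bucket="my-app-archives",       # placeholder bucket name
        Key="exports/2023-archive.zip",
        Body=f,
        StorageClass="STANDARD_IA",     # or "ONEZONE_IA" for re-creatable data
    )
```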
Cold Storage with S3 Glacier
S3 Glacier is a set of low-cost storage classes suitable for rarely accessed data. Cold storage costs a fraction of a penny per gigabyte per month, significantly reducing long-term storage costs compared to the more readily accessible storage classes.
S3 Glacier Instant Retrieval maintains similar retrieval performance to S3 Standard-IA at a lower storage cost, but with a higher cost for data retrieval. This trade-off may make sense for rarely accessed data such as PDFs of tax documents or invoices from prior calendar years.
S3 Glacier Flexible Retrieval is another low-cost storage class that can be accessed on the order of minutes. This may be a useful storage solution for large, rarely-accessed archives such as whole system backups.
S3 Glacier Deep Archive storage is for data that rarely needs to be accessed, and has a retrieval time on the order of hours. This storage class can be used for compliance assets such as long-term access logs or historical records.
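Rather than uploading directly into a Glacier class, it is common to let a Lifecycle rule tier data down over time. The sketch below transitions backups to Glacier Instant Retrieval after 90 days and to Deep Archive after a year; the bucket, prefix, and day counts are assumptions.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-backups",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER_IR"},     # Glacier Instant Retrieval
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # Deep Archive
                ],
            }
        ]
    },
)
```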
CloudFront for Serving Assets
AWS CloudFront is a content delivery network (CDN) that reduces latency for file retrieval and has cheaper data transfer fees than serving files directly from an S3 bucket. Implementing CloudFront saves you money while delivering files faster than S3 alone can.
Saving Money in RDS
The Relational Database Service (RDS) is a managed database hosting solution. The pricing model for RDS is similar to EC2, so you can use some of the same strategies to save money in RDS.
Reserved Instance Pricing
You can purchase Reserved Instance commitments for RDS in the same way you can for EC2. In exchange for your commitment to use a specific instance type in an AWS Region, you can realize up to a 72% discount over a 1-year or 3-year period when compared to On Demand Pricing.
Update to the Latest Generation Instances
AWS regularly uses its pricing model to encourage customers to move to the latest generation of RDS instance types. Updating to the latest generation of RDS instances can shave a few cents per day off each instance.
The typical RDS use case depends on a single primary database, so this approach does not offer as large an opportunity for cost savings as it does with EC2. However, as your user base or data storage needs grow, this strategy becomes more useful with advanced database configurations such as sharding, Multi-AZ Deployments, or Read Replica deployments.
Consider Adopting Amazon Aurora
Amazon Aurora is a fully managed database service that is compatible with MySQL and PostgreSQL. It is built on an SSD-backed, virtualized storage layer, and replicated instances share the same underlying storage as the primary database, which costs less than traditional RDS architectures that duplicate storage per instance. Amazon Aurora has performance, availability, and security benefits as well.
However, switching to Aurora will not automatically decrease costs. Aurora is best-suited for applications that have high availability requirements, where you’d otherwise use a Multi-AZ Deployment. When you strictly compare a simple RDS setup to an Aurora setup, you may find that Aurora costs slightly more at face value. However, the overall benefits will save maintenance time, reduce management complexity, and help you save money in the long run if you need a highly available system that supports a large user base.
Data Retention Policies
Every production environment needs database backups. AWS RDS can create snapshots and incremental backups to ensure you have reliable copies of your data. Backups carry a monthly cost for each gigabyte of storage used, so evaluating your backup schedule can save some money. While you should not shorten your backup retention solely to save a few dollars, you might consider tweaking parameters such as snapshot frequency or incremental backup frequency if you have a particularly large data store.
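Backup retention is one of the parameters you can adjust directly. Below is a hedged boto3 sketch, assuming a placeholder instance identifier and a 7-day window; weigh any reduction against your recovery requirements first.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="production-db",   # placeholder instance name
    BackupRetentionPeriod=7,                # keep 7 days of automated backups
    ApplyImmediately=True,
)
```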
More importantly, you can review what unused data is stored in your database. You may have data duplicated from other sources, such as login attempt records that could be derived from a server log instead of being stored in the database. Likewise, older records in certain database tables may be truncated to reduce storage, assuming those records are no longer needed. Keeping only the data your system needs to function is also sound security practice: hackers can’t steal data you don’t have.
Saving Money in Other AWS Services
Savings Plans are beginning to be rolled out to more AWS services. If your cloud infrastructure utilizes other services, review the relevant pricing pages to see what cost saving opportunities exist.
Also evaluate the overall cost of each service you use in AWS. Determine if the service utilization and its business value make sense when compared to the cost. If you find the value does not justify the cost, consider evaluating alternative services. You may find another AWS service that can accomplish your goal for less money. Alternatively, you may find ways to accomplish your goal by cutting out the service entirely: custom developed software, self-hosted open source services, or third-party cloud-hosted services may give you some economical alternatives.