After migrating to AWS or refactoring your current cloud architecture, you may be surprised by the size of your monthly AWS bill. Even with a breakdown of what you’re being charged, if your bill is higher than you estimated, it’s not always obvious what’s causing it. Every scenario is different, but here are some ways to troubleshoot your AWS costs and bring them down.
It’s common for EC2 instances to be the biggest line item on your bill, and with good reason. The cost of even moderately sized instances adds up quickly. Right-sizing your servers can be a relatively painless way to reduce costs.
AWS offers a cost management tool to generate suggestions for EC2 changes, or you can do it yourself using CloudWatch. Go to CloudWatch -> Metrics to view information about your instances. If you’re seeing consistently low CPU Utilization, there could be an opportunity to downgrade to a smaller instance.
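As a quick sanity check, you can pull average CPU utilization datapoints from CloudWatch and flag instances that never get busy. Here’s a minimal Python sketch; the datapoints and the 10% threshold are illustrative assumptions, not AWS recommendations:

```python
# Sketch: flag an instance as a downsize candidate from CloudWatch-style
# average CPU datapoints. The sample data and the 10% threshold below
# are hypothetical; pick a threshold that fits your workload.

def is_downsize_candidate(cpu_averages, threshold=10.0):
    """Return True when average CPU utilization stays below the threshold."""
    if not cpu_averages:
        return False  # no data, no conclusion
    return max(cpu_averages) < threshold

# Hourly CPU averages (percent) for one instance over a sample window.
datapoints = [3.2, 4.1, 2.8, 5.0, 3.7]
print(is_downsize_candidate(datapoints))  # True: consistently under 10%
```

In practice you’d look at a longer window (weeks, not hours) and at memory and network metrics too before committing to a smaller instance type.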
Alternatively, if you only see rare spikes in usage, consider setting up auto scaling. By default, your setup will include a minimal number of instances. When AWS detects a spike in usage, it will automatically create more, and you only pay for what you use. If you already use auto scaling, review your settings. Confirm you’re not running too many instances by default or spinning up too many instances too quickly.
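The settings review above can be sketched as a simple audit. This Python sketch uses hypothetical values, and the warning thresholds (like the 3× burst ceiling) are assumptions — the field names only loosely mirror Auto Scaling group settings:

```python
# Sketch: sanity-check Auto Scaling group settings for cost surprises.
# All values and thresholds here are illustrative assumptions.

def audit_asg(min_size, desired, max_size, avg_running):
    """Return warnings when a group defaults to more capacity than it uses."""
    warnings = []
    if desired > avg_running:
        warnings.append("desired capacity exceeds average running instances")
    if min_size > 1 and avg_running <= 1:
        warnings.append("minimum size keeps idle instances alive")
    if max_size > desired * 3:  # arbitrary 3x burst ceiling for illustration
        warnings.append("maximum size allows a large burst of instances")
    return warnings

print(audit_asg(min_size=3, desired=4, max_size=10, avg_running=0.8))
```

A clean result is an empty list; each warning points at a setting worth re-examining in the Auto Scaling console.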
Many developers store objects in AWS S3 without considering the different storage tiers available. To figure out which one is right for you, consider three questions:
- How often is the object accessed? S3 offers price-saving options if the object is only accessed once per quarter or less.
- When I request the object, does it need to be retrieved immediately, or can it be delayed? Most users will expect immediate access to their files, but you can take advantage of the Glacier tier to retrieve decades-old documents with a few hours’ notice.
- Can the object be recreated if it’s lost? It may sound like a strange question, but some objects can be regenerated on demand, like thumbnails for videos. While S3 offers very high durability by default, you can trade some of that redundancy for cost savings.
S3 offers storage tiers for a variety of access situations. They even provide an analytics tool to help you decide what storage tier is right for you.
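The three questions above can be boiled down to a rough decision helper. This Python sketch uses illustrative cutoffs (real break-even points depend on S3 pricing and your access patterns); the return values match S3 storage class identifiers:

```python
# Sketch: map the three questions about an object to an S3 storage class.
# The thresholds are assumptions for illustration, not pricing guidance.

def suggest_storage_class(accesses_per_year, needs_immediate_access, recreatable):
    """Suggest an S3 storage class from rough access patterns."""
    if accesses_per_year >= 12:       # accessed roughly monthly or more
        return "STANDARD"
    if not needs_immediate_access:
        return "GLACIER"              # hours-long retrieval is acceptable
    if recreatable:
        return "ONEZONE_IA"           # less redundancy, lower cost
    return "STANDARD_IA"

print(suggest_storage_class(2, False, False))  # GLACIER
```

S3’s own analytics tool should be the final word, but a helper like this is useful for a first pass over your buckets.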
Taking backups of your EC2 instances can be vital for disaster recovery, but you need to be sure you’re not storing unnecessary data. If your data doesn’t change frequently, or there’s no use case for keeping several versions of it, you can safely delete old backups. In AWS Lifecycle Manager, you can configure how often backups are taken and how long to retain them.
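Before turning on a retain-last-N rule, it can help to dry-run the pruning logic against your existing snapshots. A minimal Python sketch with a hypothetical snapshot list:

```python
# Sketch: which snapshots would a retain-last-N rule delete? This mirrors
# the effect of a retention rule; the snapshot dates are hypothetical.

def snapshots_to_delete(snapshot_dates, retain_count):
    """Keep the newest retain_count snapshots; return the rest for deletion."""
    newest_first = sorted(snapshot_dates, reverse=True)
    return sorted(newest_first[retain_count:])

daily = ["2024-05-0%d" % d for d in range(1, 8)]  # seven daily snapshots
print(snapshots_to_delete(daily, retain_count=3))  # the oldest four
```

Running a check like this against your snapshot inventory makes it obvious how much storage a given retention setting would free up.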
After you’ve confirmed your EC2 instances, storage tiers, and backup lifecycle are in good shape, there are a few questions you can ask about the rest of your environment.
- Is there anything running that can be shut off? This may sound obvious, but there are often resources running that haven’t served a purpose in a while. Check for old files in S3, abandoned EC2 instances, EBS volumes that aren’t attached to anything, and Elastic IP addresses that can be released.
- Is my development or staging environment using more resources than it should? Developers often set up resources for testing and troubleshooting, then forget to clean them up afterward. Also, confirm any development resources are right-sized. For example, if you need an S3 library for development, do you need to copy every production file or just a small subset?
- Are there resources such as Lambda functions or CloudFront distributions being accessed more frequently than they should be? It’s easy to forget about these pay-for-what-you-use services, but it’s important to confirm their usage aligns with your expectations. Sloppy code can result in Lambda functions being invoked more often than necessary, and caching CloudFront resources can also reduce costs.
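Hunting for idle resources can start from a simple inventory audit. This Python sketch assumes a hand-built inventory format, not an actual AWS API response:

```python
# Sketch: flag idle resources from a hand-built inventory. The inventory
# shapes and resource names below are hypothetical.

def find_idle(volumes, addresses):
    """Return EBS volume IDs with no attachment and Elastic IPs with no instance."""
    idle = []
    idle += [v["id"] for v in volumes if not v.get("attached_to")]
    idle += [a["ip"] for a in addresses if not a.get("instance_id")]
    return idle

volumes = [{"id": "vol-1", "attached_to": "i-abc"}, {"id": "vol-2"}]
addresses = [{"ip": "203.0.113.10"}]
print(find_idle(volumes, addresses))  # ['vol-2', '203.0.113.10']
```

The same pattern extends to any pay-while-provisioned resource: build a list, check each entry for a consumer, and question anything that has none.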
Need some additional help troubleshooting your cloud costs? Don’t hesitate to contact us.