The ultimate guide to RDS optimization

Amazon RDS (Relational Database Service) relieves engineering teams of the operational complexity typically associated with database management, including backups, replication, patching, and high availability. But this convenience comes at a price: costs that are sometimes reducible, sometimes coverable, and in other cases completely opaque.
At Opsima, we frequently refer to a key FinOps metric: Coverage Rate. It represents the percentage of workloads covered by Savings Plans or Reserved Instances, relative to your total On-Demand eligible consumption. In short, it tells you how much of your AWS usage is being proactively optimized and how much is still leaking cash.
When it comes to RDS, poor optimization is the norm and usually stems from two root causes:
- Instances are launched using default configurations, which are rarely cost-efficient.
- Databases are duplicated across all environments (dev, staging, prod) without being tailored to actual needs.
Although you don’t directly manage the underlying compute, storage, traffic, or control plane, RDS pricing fully reflects that hidden complexity. It’s a multi-layered system, and if you’re not careful, it’s full of costly surprises.
In this guide, we break down the top 3 optimization levers that can help you reduce up to 80% of unnecessary RDS costs, with just 20% of the effort.
Rightsize your RDS instance
The most common source of under-optimization in RDS stems from oversized instances. This situation typically arises for three main reasons:
- Out of caution, engineering teams often over-provision, selecting instances larger than required to ensure safety margins, for example provisioning a db.r6i.4xlarge when a db.r6i.xlarge would be more than sufficient. This is especially frequent in early-stage startups or smaller technical teams.
- A lack of visibility, either due to insufficient observability tooling or because actual usage patterns are not yet known, such as during the rollout of a new application or an expansion into a new region.
- The absence of a consistent right-sizing routine. In many cases, teams do not review instance sizing regularly. Instead, they conduct large optimization sprints every six months, by which point substantial unnecessary costs may have already accumulated.
These scenarios are entirely understandable and often reflect team maturity, available resources, and the trade-offs between engineering time and financial return. More broadly, it’s worth acknowledging the inherent difficulty and the complex trade-offs between time invested and the potential reward. That’s precisely why the following three recommendations focus on simple, effective measures that are easy to implement:
- Apply the golden rule: CPU < 40% and Memory < 50%. If a database consistently uses less than 40% of CPU and 50% of RAM, the instance is likely over-provisioned, potentially by a factor of two or three. Downsizing can typically be done without compromising performance.
- Use T3 or T4g burstable instances for dormant or short-lived environments. These low-cost instances provide sufficient baseline performance for lighter workloads. Based on a CPU credit system, they accumulate credits during idle periods and consume them during spikes such as nightly test runs.
- For variable workloads, consider Aurora Serverless v2. While marginally more expensive than traditional instance-based engines under steady usage, it offers real-time elasticity. This means teams avoid overcommitting or under-provisioning, paying only for the capacity actually used.
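The golden rule above is easy to automate against your metrics. Here is a minimal sketch: the function name and thresholds are ours (not an AWS API), and the samples would typically come from CloudWatch averages over a review window.

```python
# Minimal sketch: flag an RDS instance as over-provisioned using the
# "golden rule" above (CPU < 40% and memory < 50%, sustained).
# Function name and thresholds are illustrative, not an AWS API.

def is_over_provisioned(cpu_samples, mem_samples,
                        cpu_threshold=40.0, mem_threshold=50.0):
    """Return True if utilization stays consistently below both thresholds.

    cpu_samples / mem_samples: percent utilization over the review window
    (e.g. daily averages pulled from CloudWatch for the last 30 days).
    """
    if not cpu_samples or not mem_samples:
        return False  # no data: do not recommend a downsize
    return (max(cpu_samples) < cpu_threshold
            and max(mem_samples) < mem_threshold)

# Example: an instance averaging ~25% CPU and ~34% memory
cpu = [22, 27, 25, 30, 18]
mem = [33, 35, 31, 36, 34]
print(is_over_provisioned(cpu, mem))  # True -> candidate for downsizing
```

Using `max()` rather than the mean keeps the check conservative: a single spike above a threshold is enough to rule out a downsize recommendation.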
Turn Off What You’re Not Using
A second common area for RDS optimization stems from forgotten or unused components: elements quietly running in the background, often overlooked because they result from manual actions. AWS does not proactively flag these resources, yet they accumulate over time and contribute to unnecessary costs. Typical examples include manual snapshots, legacy backups, and outdated RDS engine versions.
- Manual snapshots are never automatically deleted. Each one may weigh several gigabytes and is billed at $0.095 per GB per month. These snapshots are often created during tests, migrations, or deployments and then forgotten. For instance, a 500 GiB manual snapshot left in storage will cost $47.50 per month. Over a year, that’s $570 for a file that may no longer be needed.
- Outdated RDS engine versions can trigger hidden extended support fees. When a version like MySQL 5.7 reaches the end of standard support, AWS may continue running it, but silently transitions it into an “extended support” phase, billing extra fees per vCPU-hour. These costs can add up significantly and often go unnoticed.
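The snapshot arithmetic above is worth scripting into your audits. A minimal sketch, using the $0.095 per GB-month backup storage rate quoted above:

```python
# Back-of-the-envelope cost of forgotten manual snapshots, using the
# $0.095 per GB-month backup storage rate quoted in the text.

SNAPSHOT_RATE_PER_GB_MONTH = 0.095

def snapshot_cost(size_gib, months=1, rate=SNAPSHOT_RATE_PER_GB_MONTH):
    """Storage cost in USD for a snapshot of size_gib kept for `months`."""
    return round(size_gib * rate * months, 2)

print(snapshot_cost(500))       # 47.5  -> $47.50 per month
print(snapshot_cost(500, 12))   # 570.0 -> $570 per year
```

Run this over the output of a snapshot inventory and the "forgotten file" costs become immediately visible.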
These issues are not the result of mismanagement, but of limited time and tooling. With a minimal investment and the right FinOps observability or audit stack, most of these inefficiencies can be corrected in under two hours. Recommended actions:
- Conduct a monthly manual snapshot audit to remove unused snapshots or archive them in cold storage such as S3 Glacier.
- Implement a retention policy with AWS Backup and define automatic snapshot lifecycles per environment to avoid long-term accumulation.
- Monitor RDS engine versions: set up a simple alert system to track lifecycle milestones and avoid falling into extended support fees.
Optimize your RDS storage: the IO1 / IO2 trap
As mentioned earlier, caution is needed when using AWS’s default settings. One of the most underestimated and costly levers lies in the type of Elastic Block Store (EBS) storage selected for RDS databases. By default, AWS allows the use of IO1 or IO2 volumes, which are designed for extremely high performance in terms of IOPS and latency.
In 90% of cases, these volumes are over-provisioned and, more importantly, up to 20 times more expensive than necessary. Take the example of 200 GiB with 14,000 provisioned IOPS.

For the same use case, GP3 is around 15 times cheaper, while still delivering 80% of the performance. The only real difference lies in latency: sub-millisecond for IO1/IO2 versus single-digit milliseconds for GP3. For this reason, we strongly recommend reviewing development and test environments, which typically do not require such low latency.
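To make the 200 GiB / 14,000 IOPS example concrete, here is a rough monthly comparison. The constants are illustrative us-east-1 EBS list rates (io1: $0.125/GB-month plus $0.065/IOPS-month; gp3: $0.08/GB-month with the first 3,000 IOPS included, then $0.005/IOPS-month) and should be treated as assumptions: actual RDS storage rates vary by engine and region.

```python
# Rough io1 vs gp3 monthly cost for the 200 GiB / 14,000 IOPS example.
# Prices are illustrative us-east-1 EBS list rates, not a quote.

def io1_monthly(size_gib, iops):
    return size_gib * 0.125 + iops * 0.065      # storage + provisioned IOPS

def gp3_monthly(size_gib, iops):
    extra_iops = max(0, iops - 3000)            # first 3,000 IOPS included
    return size_gib * 0.08 + extra_iops * 0.005

io1 = io1_monthly(200, 14_000)   # about $935/month
gp3 = gp3_monthly(200, 14_000)   # about $71/month
print(f"io1: ${io1:.0f}/mo, gp3: ${gp3:.0f}/mo, ratio: {io1 / gp3:.1f}x")
```

With these list rates the ratio lands around 13x; the exact multiple depends on region and engine, but the order of magnitude is what matters.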
Use RIs to cover your RDS compute

The last major source of optimization in RDS concerns its compute component, often overlooked by FinOps teams. Around 60% of this component can be covered by RDS-specific Reserved Instances (RIs). A parallel with financial markets helps explain how RIs work. There are three main levers for adjusting the coverage rate: duration, billing method, and engine type.

Duration: We can think of duration just like a financial bond. If one compares a 5-year U.S. bond to a 20-year bond, the principle is the same: the longer the duration, the more attractive the financial return. It works the same way here. We can choose between a 1-year or a 3-year Reserved Instance for our databases. The longer the commitment (3 years), the less flexibility we retain, but in return, AWS offers a higher discount rate.

Billing method: From a cash-flow perspective, the value of money changes over time. The more we pay upfront, the sooner AWS can reinvest that capital. There are three payment options: All Upfront, Partial Upfront, and No Upfront. For this reason, AWS offers an additional discount of around 4% for All Upfront compared with No Upfront.
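The mechanic is easy to see by blending the upfront fee into an effective hourly rate. All prices below are made-up placeholders, not AWS list prices; only the shape of the calculation matters.

```python
# Effective-rate comparison of RI payment options for one hypothetical
# instance. The dollar figures are placeholders, NOT AWS list prices;
# the point is the mechanic: more cash upfront -> lower effective rate.

HOURS_PER_YEAR = 8760

def effective_hourly(upfront, hourly_rate, term_years=1):
    """Blend an upfront fee and a recurring hourly rate into one number."""
    hours = HOURS_PER_YEAR * term_years
    return (upfront + hourly_rate * hours) / hours

on_demand   = 0.48                              # hypothetical $/hour
no_upfront  = effective_hourly(0, 0.30)         # pay as you go
all_upfront = effective_hourly(2540, 0.0)       # everything paid up front

for label, rate in [("on-demand", on_demand),
                    ("no upfront", no_upfront),
                    ("all upfront", all_upfront)]:
    print(f"{label:<12} ${rate:.3f}/h ({1 - rate / on_demand:.0%} off on-demand)")
```

With these placeholder numbers, All Upfront comes out a few percentage points cheaper than No Upfront, which is the order of magnitude of the extra discount described above.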

Engine type: commercial engines such as Oracle and SQL Server carry additional costs due to licensing fees, as they are not open-source technologies. Aurora is an interesting option, and here it’s truly a matter of choice: it is a more “managed” service, which makes it more expensive, but it also offers better scalability and reduced maintenance effort. For these reasons, On-Demand pricing for the same instance type can vary significantly across engines, and so does the coverage rate achievable through Reserved Instances.

As a result, a 1-year Reserved Instance covers more on Oracle than on MySQL, because AWS offers deeper discounts on the more expensive engine types.
Please note: DB RIs are more flexible than traditional ones, as they can be shifted from one instance size to another. If you have one DB instance and need to scale it to a larger capacity, your reserved DB instance is automatically applied to the scaled database instance. This size flexibility applies to DB instances in the same AWS Region, with the same database engine, and within the same instance class type. For example, a reserved DB instance for a db.r5.large can be applied to a db.r5.xlarge, but not to a db.r6g.large, as db.r5 and db.r6g are different instance class types.
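The instance-class rule above can be sketched as a one-line check. The helper name is ours, not an AWS API, and it assumes the region and engine already match:

```python
# Minimal sketch of RDS RI size flexibility: a reservation applies across
# sizes within the same instance class type (same AWS Region and database
# engine assumed). The helper name is ours, not an AWS API.

def same_instance_family(class_a, class_b):
    """db.r5.large and db.r5.xlarge share the family 'db.r5'."""
    def family(instance_class):
        return instance_class.rsplit(".", 1)[0]  # strip the size suffix
    return family(class_a) == family(class_b)

print(same_instance_family("db.r5.large", "db.r5.xlarge"))  # True
print(same_instance_family("db.r5.large", "db.r6g.large"))  # False
```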
Optimization Is Not a One-Time Project
Most teams tend to treat RDS as a static asset: “we launch it and move on.” However, this is a misconception, as usage constantly evolves:
- Feature flags are turned on or off, changing usage patterns
- Teams create test environments and sometimes forget to decommission them
- Queries become more or less efficient over time
With the right discipline and proper tooling in place, we can reduce RDS costs by 30–60% without compromising on performance.
Opsima optimization engine
At Opsima, we specialize in helping organizations optimize their cloud costs with advanced algorithmic tools. Our optimization engine leverages reinforcement learning to deliver maximum savings (best AWS pricing) while minimizing risk.
By adapting to your unique usage patterns and business needs, our solution ensures you stay flexible and efficient in an ever-changing cloud environment. If you’re ready to take your cloud cost management to the next level, reach out to learn more about how Opsima can support your goals.
