March 13, 2025

Why I Needed to Fine-Tune PostgreSQL on a Budget VM

I did not plan to run PostgreSQL in production on a tiny VM. It just happened.

PostgreSQL on Budget VM

The setup was simple, almost suspiciously simple.

2 GB RAM. 2 CPU cores. A budget VM that looked innocent until production traffic showed up.

And the workload? Heavy writes. Continuous pipelines. No mercy.

The Moment Reality Hit

At first, everything worked fine. Development was smooth. Tests passed. Confidence was high.

Then production happened.

Writes started pouring in. Pipelines ran nonstop. Tables grew faster than my optimism. Latency spikes appeared like jump scares in a horror movie.

Upgrading the VM was not an option. Budget said no. Timeline said no. Reality said "optimize or suffer."

So I chose optimization.

Production reality

Step One: Respect Hardware

The biggest mistake people make is treating small machines like big ones.

PostgreSQL loves memory. My VM did not have much to offer.

So I had to be strict.

shared_buffers = 512MB 
effective_cache_size = 1536MB
work_mem = 8MB
maintenance_work_mem = 64MB

This was not about perfection. It was about survival.

I left enough RAM for the OS to breathe and enough for Postgres to do real work. Anything more would have been reckless.
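Settings edited in postgresql.conf do not take effect until a reload (or, for shared_buffers, a restart), so it is worth double-checking what the server actually loaded. One way to do that:

```sql
-- Confirm the values Postgres actually loaded after the reload/restart
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'maintenance_work_mem');
```

Note that shared_buffers is reported in 8 kB pages, so the raw setting value will not read "512MB" directly.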

Memory usage stabilized after tuning

The Silent Killer: Connections

The next problem was not CPU or disk.

It was connections.

Each connection quietly ate memory like it was free. On a 2 GB machine, nothing is free.

So I capped it.

max_connections = 50

Then I added PgBouncer.

That single move changed everything. Memory stabilized. Latency dropped. The database stopped panicking under pressure.

Lesson learned. PostgreSQL is not a fan of connection chaos.
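For reference, the PgBouncer setup was roughly along these lines. This is an illustrative sketch, not my exact config; the database name and file paths are placeholders:

```ini
; pgbouncer.ini - minimal sketch for a small VM (appdb and paths are placeholders)
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction       ; reuse server connections between transactions
default_pool_size = 20        ; real Postgres connections per database
max_client_conn = 200         ; app-side connections PgBouncer will accept
```

The point of transaction pooling is that hundreds of application connections share a handful of real Postgres backends, which is what keeps memory flat on a 2 GB box.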

Heavy Writes Need Discipline

The pipelines were write-heavy. Inserts. Updates. More inserts. More updates.

Autovacuum became my unlikely hero.

I did not disable it. I tuned it.

autovacuum_max_workers = 2 
autovacuum_naptime = 30s
autovacuum_vacuum_cost_limit = 1000

Once autovacuum kept up with the writes, table bloat stopped creeping in like a silent performance thief.

Autovacuum activity keeping tables healthy
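To know whether autovacuum was actually keeping up, I leaned on pg_stat_user_tables. A query in this spirit shows where dead tuples are piling up:

```sql
-- Spot tables where dead tuples accumulate faster than autovacuum clears them
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

If n_dead_tup keeps climbing and last_autovacuum stays stale, the workers or cost limit need another look.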

Indexes Saved Me and Hurt Me

At first, I indexed everything. Then writes slowed down. Then I panicked.

So I audited indexes.

SELECT * FROM pg_stat_user_indexes WHERE idx_scan = 0;

The results were humbling. Many indexes were doing absolutely nothing.

I removed them carefully. Writes got faster. Disk usage dropped. Maintenance became lighter.

Indexes are powerful. Unused ones are expensive decorations.
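Before dropping anything, it helps to see how much disk each unused index occupies and to skip indexes that back constraints. A variant of the audit query, sketched along those lines:

```sql
-- Unused indexes with their on-disk size; skip ones backing unique/primary key constraints
SELECT s.indexrelname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
  AND NOT i.indisprimary
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

One caveat: idx_scan counters reset on statistics reset, so judge "unused" over a long enough window.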

Parallel Queries on a Tiny CPU

PostgreSQL wanted to be helpful and run things in parallel.

My CPU disagreed.

I limited it.

max_parallel_workers_per_gather = 1 
max_parallel_workers = 2

After that, CPU usage became predictable. No more random spikes. No more sudden slowdowns during peak writes.

Sometimes doing less is doing better.

Logging Without Burning Disk

When something went wrong, I needed answers. But logging everything would have killed performance.

So I logged smart.

log_min_duration_statement = 500ms 
log_autovacuum_min_duration = 1s
log_checkpoints = on

This showed me slow queries, vacuum behavior, and I/O pressure without turning my disk into a complaint journal.

Slow query logs revealing bottlenecks

Query Optimization Was Non-Negotiable

On this VM, bad queries had consequences.

I stopped using SELECT *. I paginated everything. I ran EXPLAIN ANALYZE religiously. I cached aggressively in the application layer.

One bad query could undo weeks of tuning. One good query could save the whole system.
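The pagination change that paid off most was switching from OFFSET to keyset (seek) pagination, which stays cheap on any page. A sketch of the pattern, using a hypothetical events table:

```sql
-- Keyset pagination: constant cost per page, unlike OFFSET which scans past skipped rows
-- (the events table and its columns are hypothetical)
SELECT id, payload, created_at
FROM events
WHERE id > 41230          -- last id seen on the previous page
ORDER BY id
LIMIT 100;
```

With an index on id, every page is a cheap range scan, no matter how deep you paginate.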

The Result I Did Not Expect

After all the tuning, something surprising happened.

The system became boring.

No crashes. No panic alerts. No late night emergencies.

The pipelines ran continuously. Writes stayed fast. Latency stayed flat.

On a budget VM.

Calm production dashboard under heavy write load

Final Thoughts

Would I recommend running PostgreSQL with heavy writes on a 2 GB RAM, 2 core VM?

Only if you are willing to respect the limits.

This setup taught me more about PostgreSQL than any oversized cloud instance ever did. When resources are limited, every decision matters.

PostgreSQL does not need luxury. It needs discipline.

And when treated right, it will quietly carry your production workloads without asking for more.