Posted on October 15, 2020

A Beginner's Guide to Postgres Performance Monitoring

When setting up a new Postgres instance, there are a few things I do to make sure that if something goes wrong, I'll be able to figure out what happened. Many people think you need to set up additional software to monitor Postgres performance, but you can get pretty far with the built-in tooling. This post covers the following tools, all of which are provided by Postgres:

  • pg_stat_statements
  • The Slow Query Log (log_min_duration_statement)
  • statement_timeout

pg_stat_statements

The pg_stat_statements extension provides an aggregate view of how long your queries are taking. You can enable pg_stat_statements by:

  1. Running sudo apt-get install postgresql-contrib-${YOUR_POSTGRES_VERSION}. This installs the Postgres contrib package, which provides additional functionality that is part of the official Postgres project but isn't bundled with the default installation.
  2. Adding pg_stat_statements to shared_preload_libraries in your postgresql.conf file, like so: shared_preload_libraries = 'pg_stat_statements'.
  3. Restarting Postgres and running CREATE EXTENSION pg_stat_statements (a quick verification is sketched below).
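
To confirm the extension actually loaded, there's a quick check you can run from psql (CREATE EXTENSION IF NOT EXISTS is idempotent, so it's safe to re-run):

{% c-block language="sql" %}
-- Confirm the extension is available and installed:
SELECT name, installed_version
FROM pg_available_extensions
WHERE name = 'pg_stat_statements';

-- Safe to run even if you already created it in step 3:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
{% c-block-end %}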

After enabling pg_stat_statements, you'll see a view called pg_stat_statements. The pg_stat_statements view has a ton of information on what queries are taking up the most time:

{% c-block language="sql" %}
> \d pg_stat_statements;
                    View "public.pg_stat_statements"
       Column        |       Type       | Collation | Nullable | Default
---------------------+------------------+-----------+----------+---------
 userid              | oid              |           |          |
 dbid                | oid              |           |          |
 queryid             | bigint           |           |          |
 query               | text             |           |          |
 calls               | bigint           |           |          |
 total_time          | double precision |           |          |
 min_time            | double precision |           |          |
 max_time            | double precision |           |          |
 mean_time           | double precision |           |          |
 stddev_time         | double precision |           |          |
 rows                | bigint           |           |          |
 shared_blks_hit     | bigint           |           |          |
 shared_blks_read    | bigint           |           |          |
 shared_blks_dirtied | bigint           |           |          |
 shared_blks_written | bigint           |           |          |
 local_blks_hit      | bigint           |           |          |
 local_blks_read     | bigint           |           |          |
 local_blks_dirtied  | bigint           |           |          |
 local_blks_written  | bigint           |           |          |
 temp_blks_read      | bigint           |           |          |
 temp_blks_written   | bigint           |           |          |
 blk_read_time       | double precision |           |          |
 blk_write_time      | double precision |           |          |
{% c-block-end %}
When optimizing a database, I start by finding which queries to optimize. I specifically look for two kinds of queries: the ones that consume the most total execution time across all calls, and the ones that take the longest on average. You can get the queries that consume the most total time by running:

{% c-block language="sql" %}
SELECT total_time, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
{% c-block-end %}
The total_time column is the total amount of time spent running the query. You can think of it as average query time * number of times the query was run.
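
Since the view also exposes calls, you can check that identity directly (plain SQL arithmetic; total_time is reported in milliseconds):

{% c-block language="sql" %}
-- total_time / calls should line up with mean_time for each statement
SELECT query,
       calls,
       total_time,
       total_time / calls AS computed_mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
{% c-block-end %}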

You can get the queries that take the longest on average by running:

{% c-block language="sql" %}
SELECT mean_time, query
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
{% c-block-end %}
Once I have the queries that are taking up the most time, I'll use the slow query log to find runnable instances of the query.

Relevant Links:

  • Postgres Docs on pg_stat_statements
  • Citus Blog Post on pg_stat_statements

The Slow Query Log

The issue with pg_stat_statements is that it gives you normalized versions of each query. For example, if you run the query SELECT * FROM users WHERE id = 1, pg_stat_statements will show SELECT * FROM users WHERE id = ?. While that's nice for looking at queries from an aggregate perspective, when you want to optimize a query, you need an actual instance of it with all the parameters filled in. That's where the slow query log comes in. When you enable the slow query log, Postgres will log the full text of any query that takes longer than a configurable threshold. I usually set the threshold to 100ms.

You can enable the slow query log by setting the log_min_duration_statement parameter in postgresql.conf to the desired threshold in milliseconds:

{% c-block language="sql" %}
log_min_duration_statement = 100
{% c-block-end %}
After updating the file and running SELECT pg_reload_conf() to reload it, Postgres will start logging every query that takes longer than the threshold.
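
Each query that clears the threshold shows up in the Postgres log with its duration and full text. An entry looks roughly like this (the duration and statement here are illustrative):

{% c-block language="sql" %}
LOG:  duration: 312.456 ms  statement: SELECT * FROM users WHERE id = 1
{% c-block-end %}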

After obtaining a query from pg_stat_statements, I'll search the slow query log for a particular instance of the query I can run. I'll then use EXPLAIN ANALYZE to understand why the query is taking so long.
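
Running EXPLAIN ANALYZE on the literal query pulled from the log (reusing the users example from earlier) executes the query and prints its actual plan:

{% c-block language="sql" %}
-- Executes the query and reports per-node timing and row counts:
EXPLAIN ANALYZE SELECT * FROM users WHERE id = 1;
-- Look for sequential scans on large tables, and for estimated row
-- counts that are far from the actual ones; both are common culprits.
{% c-block-end %}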

Relevant Links:

  • Postgres Docs on Logging Configuration
  • pgBadger - a tool for analyzing the Postgres slow query log

statement_timeout

While not for performance monitoring per se, statement_timeout is a setting you should set regardless. When a query takes longer than the statement_timeout, Postgres will abort it. Setting statement_timeout is the easiest way to prevent runaway queries from overloading your DB. Even though the Postgres docs recommend against setting it globally, I like setting a global timeout of 15s:

{% c-block language="sql" %}
statement_timeout = 15000  # 15 seconds, specified in milliseconds
{% c-block-end %}
Then if there are queries that I know are going to take longer, I can raise the timeout for just those queries (when you use a Postgres library and set a timeout, that's likely using statement_timeout under the hood). If I'm running a query from psql and hitting the timeout, it's pretty easy to turn off the timeout by running:

{% c-block language="sql" %}
SET statement_timeout = 0;
{% c-block-end %}
By setting a global statement_timeout and then raising it when necessary, I ensure that no query takes longer than 15 seconds, unless I've explicitly given that query permission to take longer.
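
For queries that legitimately need more time, SET LOCAL scopes the override to a single transaction, so the global default stays in force everywhere else (the 60s value here is just an example):

{% c-block language="sql" %}
BEGIN;
-- Applies only until COMMIT/ROLLBACK; other sessions keep the default:
SET LOCAL statement_timeout = '60s';
-- ... run the known-slow query here ...
COMMIT;
{% c-block-end %}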

Relevant Links:

  • Postgres Docs on Connection Settings
  • Crunchy Data Blog Post on statement_timeout


Michael Malis
Founder & CEO