JOINing against VIEWs can be harmful

I had a recent code review problem that was very curious at first glance, but it came down to the use of a complex VIEW in an even more complicated and frequently used reporting query.

I’ll just paste an edited version of the review below.

tl;dr: Don’t use product_info (a view, not a table) in this query; move the WHERE clauses for product_name and version_string into the infos CTE; and strictly limit the number of columns in the tables being joined.

This query is unfortunately doomed because it uses product_info, a view which already contains data from product_versions. The view also pulls in four other tables that we don’t care about for this query.

As a result, you get a self-join many times over. A hint at the horrors of what Postgres decides to do with this is here:

Unique  (cost=10248.32..10248.35 rows=1 width=294)
   CTE infos
     ->  Hash Right Join  (cost=301.82..1683.83 rows=40195 width=96)
           Hash Cond: (pvb.product_version_id = pv.product_version_id)
           ->  Seq Scan on product_version_builds pvb  (cost=0.00..768.71 rows=42271 width=16)
           ->  Hash  (cost=282.46..282.46 rows=1549 width=84)
                 ->  Hash Right Join  (cost=218.53..282.46 rows=1549 width=84)
                       Hash Cond: (pv.product_version_id = pi.product_version_id)
                       ->  Seq Scan on product_versions pv  (cost=0.00..40.29 rows=1629 width=35)
                       ->  Hash  (cost=199.17..199.17 rows=1549 width=53)
                             ->  Subquery Scan on pi  (cost=179.81..199.17 rows=1549 width=53)
                                   ->  Sort  (cost=179.81..183.68 rows=1549 width=62)
                                         Sort Key: product_versions.product_name, product_versions.version_string
                                         ->  Hash Join  (cost=5.70..97.73 rows=1549 width=62)
                                               Hash Cond: ((product_versions.product_name = product_release_channels.product_name) AND (product_versions.build_type = product_release_channels.release_channel))
                                               ->  Seq Scan on product_versions  (cost=0.00..40.29 rows=1629 width=52)
                                               ->  Hash  (cost=5.03..5.03 rows=45 width=42)
                                                     ->  Hash Join  (cost=2.34..5.03 rows=45 width=42)
                                                           Hash Cond: (product_release_channels.release_channel = release_channels.release_channel)
                                                           ->  Hash Join  (cost=1.23..3.29 rows=45 width=34)
                                                                 Hash Cond: (product_release_channels.product_name = products.product_name)
                                                                 ->  Seq Scan on product_release_channels  (cost=0.00..1.45 rows=45 width=22)
                                                                 ->  Hash  (cost=1.10..1.10 rows=10 width=12)
                                                                       ->  Seq Scan on products  (cost=0.00..1.10 rows=10 width=12)
                                                           ->  Hash  (cost=1.05..1.05 rows=5 width=8)
                                                                 ->  Seq Scan on release_channels  (cost=0.00..1.05 rows=5 width=8)

Whenever you see so many nested joins, subquery sorts, and sequential scans mushed together in a staircase, that’s a signal that we should investigate whether the query we’re running is really what we thought it was.

While @peterbe dug through code with me, he mentioned that product_info was a view! Now all the self-JOINs made sense and I started refactoring.

The product_info view was being expanded into its component parts, which already included product_versions (resulting in a self-join), along with a bunch of junk that, for the purposes of this query, we don’t really care about. So, as the first step, I just made a copy of the SELECT query from the view (you can get that by running \d+ product_info in psql, or you can dig it out of the socorro/external/postgresql/procs/views section of our code).
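
You can also pull a view’s defining query straight from the catalog; a one-liner sketch, using the view this post is complaining about:

    -- pretty-printed definition of the product_info view
    SELECT pg_get_viewdef('product_info'::regclass, true);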

Here’s my proposal for what should go into infos:

         SELECT 
                product_versions.product_version_id
                , product_versions.version_string
                , 'new'::text AS which_table
                , product_versions.product_name
                , product_versions.release_version
                , product_versions.build_type
                , product_version_builds.build_id
                , product_versions.is_rapid_beta
                , product_versions.rapid_beta_id
                , product_versions.version_sort
        FROM product_versions
                LEFT JOIN product_version_builds USING (product_version_id)
        WHERE  %(product name and versions)s

We really need to move the product name and version filtering into this portion of the query, because otherwise we end up doing a horrible self-join on a 42,000-row table!
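
For the concrete case in the plans below (Firefox 26.0a2), that placeholder would expand to something like:

    WHERE product_versions.product_name = 'Firefox'
      AND product_versions.version_string = '26.0a2'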

Here’s what the self-join looks like in the EXPLAIN:

   ->  Sort  (cost=8564.48..8564.49 rows=1 width=294)
         Sort Key: i1.version_sort, i1.product_version_id, i1.product_name, i1.version_string, i1.which_table, i1.release_version, i1.build_type, i1.build_id, i1.is_rapid_beta, i2.is_rapid_beta, ((((i2.product_name)::text || ':'::text) || (i2.version_string)::text))
         ->  Merge Join  (cost=7755.52..8564.47 rows=1 width=294)
               Merge Cond: ((i1.product_name = i2.product_name) AND (i1.release_version = i2.release_version) AND (i1.build_type = i2.build_type))
                Join Filter: (((i1.product_name = 'Firefox'::citext) AND (i1.version_string = '26.0a2'::citext) AND (i1.version_string = i2.version_string)) OR ((i1.rapid_beta_id = i2.product_version_id) AND (i2.product_name = 'Firefox'::citext) AND (i2.version_string = '26.0a2'::citext) AND (i2.is_rapid_beta IS TRUE)))
               ->  Sort  (cost=3877.76..3978.25 rows=40195 width=233)
                     Sort Key: i1.product_name, i1.release_version, i1.build_type
                     ->  CTE Scan on infos i1  (cost=0.00..803.90 rows=40195 width=233)
               ->  Sort  (cost=3877.76..3978.25 rows=40195 width=133)
                     Sort Key: i2.product_name, i2.release_version, i2.build_type
                     ->  CTE Scan on infos i2  (cost=0.00..803.90 rows=40195 width=133)

[image: sad kitten]

This is pretty sad. That Sort sits at the top of Mt. Sadness, and there’s a series of sorts further down that are just HUGE, because we’re tossing around 45k records that must be joined to each other, each with an estimated row width of 294 bytes.

The obvious (but sadly not always effective) thing to try is to see if we can filter our rows out earlier. Because we’re using infos, conveniently, that looks possible without too much trouble.

That just leaves sorting out the rapid beta self-join, which based on my tests should be pretty easy to continue to do in the body of the main SELECT, at line 125.

With the changes I proposed, this query takes ~200 ms on stage, and the query plan looks like:

                                                                                QUERY PLAN                                                                                 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=37.07..37.08 rows=1 width=294) (actual time=221.131..221.149 rows=31 loops=1)
   CTE infos
     ->  Nested Loop Left Join  (cost=0.00..35.18 rows=26 width=64) (actual time=0.136..0.459 rows=150 loops=1)
           ->  Index Scan using product_version_version_key on product_versions  (cost=0.00..7.27 rows=1 width=52) (actual time=0.111..0.112 rows=1 loops=1)
                 Index Cond: ((product_name = 'Firefox'::citext) AND (version_string = '26.0a2'::citext))
           ->  Index Only Scan using product_version_builds_key on product_version_builds  (cost=0.00..27.58 rows=33 width=16) (actual time=0.019..0.268 rows=150 loops=1)
                 Index Cond: (product_version_id = product_versions.product_version_id)
                 Heap Fetches: 150
   ->  Hash Join  (cost=0.84..1.86 rows=1 width=294) (actual time=0.943..47.334 rows=22500 loops=1)
         Hash Cond: (i1.product_version_id = i2.product_version_id)
         Join Filter: ((i1.version_string = i2.version_string) OR ((i1.rapid_beta_id = i2.product_version_id) AND (i2.is_rapid_beta IS TRUE)))
         ->  CTE Scan on infos i1  (cost=0.00..0.52 rows=26 width=233) (actual time=0.141..0.236 rows=150 loops=1)
         ->  Hash  (cost=0.52..0.52 rows=26 width=69) (actual time=0.778..0.778 rows=150 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 8kB
               ->  CTE Scan on infos i2  (cost=0.00..0.52 rows=26 width=69) (actual time=0.002..0.664 rows=150 loops=1)
 Total runtime: 221.321 ms
(16 rows)

A practical guide to using Alembic

I spent some time guiding a coworker through using Alembic for the first time with Socorro this morning and what follows are my notes from that meeting.

I’ve been using Alembic, a database schema migration tool, for about three months now, and I’m really liking it. As my first stab at user education, I wrote a blog post called A lightspeed tour of Alembic that doubled as a slide deck for an internal team talk.

Setting things up initially was pretty simple, but explaining it to a coworker after I’d set everything up for myself proved slightly more difficult. Below are my notes on the differences between Alembic and some other migration tools.

Terminology

Alembic calls each migration a revision. Revisions know what order to be run in because each revision is given a down_revision to identify its parent. If down_revision is None, that revision is the very first revision according to Alembic. You can put your whole schema in that revision, or you can just start adding changes to this initial revision. Alembic doesn’t complain either way.
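
To make the chaining concrete, here’s a minimal sketch of a very first revision file; the file name, identifier, and table are made up, and a later revision would point its down_revision at this one:

    # alembic/versions/1a2b3c4d5e6f_initial.py (made-up name and identifier)
    from alembic import op
    import sqlalchemy as sa

    revision = '1a2b3c4d5e6f'   # this revision's id
    down_revision = None        # None marks the very first revision

    def upgrade():
        # example only: a single throwaway table
        op.create_table('example', sa.Column('id', sa.Integer, primary_key=True))

    def downgrade():
        op.drop_table('example')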

A best practice would likely be putting your entire model into the first revision. I may go back and “fix” this for us later; for now, I opted to have the default use case be creating a fresh database with a tool we call setupdb_app.py.

If you’re looking to migrate to Alembic, you’ll also need to use SQLAlchemy. I used sqlautocode for my initial schema reflection, and there’s a newer tool, sqlacodegen, you may want to check out for generating your SQLAlchemy models for the first time.
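
If you’re generating models from an existing database, the sqlacodegen invocation looks roughly like this (the connection string and output path are just examples):

    sqlacodegen postgresql:///breakpad --outfile models.py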

Preparation: edit config and activate a virtualenv

Our environment was set up per the Alembic tutorial for creating an environment. I ran:

alembic init alembic

I also put an alembic.ini-dist file into our project’s config/ directory, and modified alembic/env.py to include our model.

To get started working with an existing install, copy alembic.ini-dist to config/alembic.ini and modify it to fit your environment; the connection string and the path to the alembic directory are the two most important settings. We have a script called setupdb_app.py which creates databases from our models.py. This script takes --database_name as a command-line argument; my default for our project is breakpad.
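
For reference, those two settings live in the ini file itself; a trimmed sketch (the values shown are examples, not our real ones):

    [alembic]
    # path to the directory holding env.py and versions/
    script_location = alembic

    # connection string for the database to migrate
    sqlalchemy.url = postgresql://postgres@localhost/breakpad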

We use a virtualenv called socorro-virtualenv. The virtualenv is created automatically if you run make test. If you’re creating a standalone virtualenv, you can do that with virtualenv socorro-virtualenv. Activate this with . socorro-virtualenv/bin/activate.

Creating a revision

  1. Create a fresh database to work from. For Socorro, the command is: PYTHONPATH=. socorro/external/postgresql/setupdb_app.py --database_name=breakpad
  2. Edit models.py with the change to the schema
  3. Run: PYTHONPATH=. alembic -c config/alembic.ini revision -m 'your message about the migration'. The output will include the name of the new file.
  4. Edit the new file in alembic/versions/*.py as needed (see the sketch below)
  5. Run: PYTHONPATH=. alembic -c config/alembic.ini upgrade +1
  6. Test your downgrade with PYTHONPATH=. alembic -c config/alembic.ini downgrade -1

If all goes well, your revision is ready! If something goes wrong, edit and try again. The revision will automatically roll back if there are any errors.
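
For step 4, the generated file contains empty upgrade() and downgrade() functions to fill in. A sketch, continuing the made-up chain from the terminology section (the table and column are hypothetical):

    from alembic import op
    import sqlalchemy as sa

    # these identifiers are written by `alembic revision`; shown here as placeholders
    revision = 'deadbeef1234'
    down_revision = '1a2b3c4d5e6f'

    def upgrade():
        # hypothetical change: add a nullable column
        op.add_column('reports', sa.Column('release_channel', sa.Text(), nullable=True))

    def downgrade():
        op.drop_column('reports', 'release_channel')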

Downgrades are a little tricky to properly execute. In an ideal world, you’d be able to revert the underlying code, but preserve only the commit containing the migration. More on this in a future blog post!

Creating a revision using --autogenerate

This is very similar to the above, with the addition of --autogenerate to your revision command. This should do the right thing, but definitely check your generated file for accuracy.

  1. Create a fresh database to work from. For Socorro, the command is: PYTHONPATH=. socorro/external/postgresql/setupdb_app.py --database_name=breakpad
  2. Edit models.py with the change to the schema
  3. Run: PYTHONPATH=. alembic -c config/alembic.ini revision --autogenerate -m 'your message about the migration'. The output will include the name of the new file.
  4. Edit the new file in alembic/versions/*.py as needed
  5. Run: PYTHONPATH=. alembic -c config/alembic.ini upgrade +1
  6. Test your downgrade with PYTHONPATH=. alembic -c config/alembic.ini downgrade -1

If all goes well, your revision is ready! If something goes wrong, edit and try again. The revision will automatically roll back if there are any errors.

Production deployment

You’ll need to deploy an alembic.ini on your production database system, and probably a virtualenv to support your Python modules.

We deploy our virtualenvs with our application, so this step was pretty simple for everything except Alembic itself. The deployed virtualenv bakes in full, static paths to the Python binaries, and it has some dependencies for actually running Alembic that I haven’t figured out yet. To get around this, I created a virtualenv locally on the system for the postgres user. Having your postgres user run the migrations locally is a must for me, because I need filesystem access to pull in new versions of the user-defined functions stashed in the directory my model lives in.

I just deploy a new release of our application code on the database server locally, and then I run alembic against the versions directory that’s deployed.

FAQ

And here’s an FAQ for the common problems folks ran into:

OOPS I forgot to create a database before I created a revision!

To “fix” this, try:

  1. Create the database from scratch using your current models.py.
  2. Run: PYTHONPATH=. alembic -c config/alembic.ini downgrade -1
  3. Run: PYTHONPATH=. alembic -c config/alembic.ini upgrade +1

Assuming your downgrade function works, this should allow you to reverse the latest revision and then test your migration.

Error message: “Only a single head supported so far.”

See Working with Branches.

I’m using schemas, and alembic doesn’t recognize them when I try to use --autogenerate.

See include_symbol. And be sure to add this to both the “offline” and “online” versions of the revision code in env.py.
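
A sketch of what that looks like in env.py; the schema name is just an example:

    # near the top of env.py
    def include_symbol(tablename, schema):
        # limit autogenerate to the schemas you care about ('my_schema' is an example)
        return schema in (None, 'my_schema')

    # then add include_symbol=include_symbol to the context.configure(...) call in
    # BOTH run_migrations_offline() and run_migrations_online().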

Error message: Target database is not up to date.

This means you’ve got a file in your versions directory that contains one or more migrations that haven’t been applied to the current database. You can either apply them with alembic upgrade head or have a look in that directory and remove the migration(s) that you don’t want.
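
If you want to see the mismatch before choosing, these two commands are handy (same -c flag as above):

    # what revision the database is currently stamped with
    PYTHONPATH=. alembic -c config/alembic.ini current

    # what revisions exist in alembic/versions/
    PYTHONPATH=. alembic -c config/alembic.ini history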

Facebook acquires Instagram (and a Postgres cluster) for $1 billion

Today’s big story is that Facebook just acquired Instagram for $1 billion.

Instagram is a hugely popular image sharing app – until recently only for iPhone. Last week they released an Android version, which caused many of my friends to finally start using it. 😉

But you know what’s an even bigger story? Instagram runs Postgres. And, their engineering team has been posting juicy tidbits about PGFouine and pgbouncer over the last year.

I love that companies are succeeding with Postgres in a huge way.

We’re seeing Postgres become the default technology choice for new tech companies. As those companies grow or are acquired, we’ll see Postgres becoming an important part of many more organizations.

It’s no coincidence that Bruce posted this morning about “The New Postgres Era”. The quality of the Postgres community’s software products is recognized, and the developers who use our software are succeeding.

Inheritance and sharding with Postgres

A friend told me about their sharding scheme last night, and it made me very curious about how others are handling this problem. This question about database design turns into a devops issue, so it’s really something that the entire development group, devops, and the DBAs need to be aware of and concerned about. And it’s not a problem exclusive to Postgres.

Postgres Open 2012, SQL Server Guys discover Postgres, Tom Lane’s quote of the week & GSoC

Happy Friday!

Postgres Open‘s website is back up! Next week, we’ll be announcing sponsorship opportunities for 2012 and opening our Call for Speakers shortly after.

Some IT guys discovered Postgres and made a series of videos about what they’re learning. They came from Windows backgrounds, so their observations were a great introduction to what it’s like for non-Linux-y people when they first try Postgres out.

Security and maintenance release for PostgreSQL: versions 9.1.3, 9.0.7, 8.4.11 and 8.3.18

Today, the PostgreSQL Global Development Group released new versions of all active branches. This includes three security bugfixes, two of which are pretty obscure, and one of which fixes a possible security issue when restoring unsanitized output from pg_dump. Details about the security issues are included in the release announcement.

Where to find me at #LCA2012

I’m going to be pretty busy while in Melbourne and Ballarat for the next 10 days.

Here’s my itinerary:

There’s a rumor that Stewart Smith and I might do a Q&A about databases in the cloud. If it happens, it will involve lots of pessimism and swearing.

Drop me a note if you want to meet up! I’ll be in Ballarat until early Friday morning.

Then I fly back to LA to give a keynote at SCaLE that Sunday (blog post about that coming).