Everyday Postgres: Top 10 psql ‘\’ commands I use

I have been thinking about the kinds of questions people have about Postgres if they’re mostly users of MySQL. One thing that comes up a lot is how to use the psql command-line.

I’m going to do a series of posts based on what I actually do every day with Postgres. This isn’t going to be an exhaustive look at all the features, but just the kinds of things I find useful.

Here’s a look at the kinds of commands I regularly use on a production system:

selena@wuzetian:~ #1642 15:13: awk '{print $1}' /tmp/cmds   | uniq -c | sort -n -r
     47 \e
     22 \d
     13 \x
     12 \df+
     10 \q
      9 \df
      6 \ef
      6 \d+
      5 \o
      5 \h

Here are the kinds of commands I use on my local system:

selena@wuzetian:~ #1645 15:15: awk '{print $1}' /tmp/local_cmds  | uniq -c | sort -n -r
     89 \d
     43 \e
     28 \df+
     14 \x
     14 \d+
     13 \df
     11 \c
     10 \h
      4 \a
      3 \ef+

There’s not a whole lot of difference between the two. I pretty clearly use the database locally to look at schema definitions over and over again!

Here’s what each of these commands does:

  • \d+: Examine a table. By default in 9.2, this prints the table name, followed by the columns, their types, keys, indexes and constraints. The plus also causes all child tables that inherit from the parent to be listed.
  • \e: Opens an editor defined by your EDITOR environment variable, and puts the most recent command entered in psql into the buffer. You can point EDITOR at a non-command-line editor, too!
  • \df+: Prints information about a user-defined function, including the function’s whole definition (that’s what the + does); best when combined with \x and probably \a as well.
  • \q: Quits psql. You can also quit with ^D.
  • \ef [function]: Opens up your editor and puts the function into the buffer. Without a function, it provides a convenient template for creating a new function.
  • \o [filename]: Opens a local file and writes the output of whatever commands you run next into it. Stop writing to the file with another \o.
  • \h: Help for SQL commands
  • \c [databasename]: Connect to [databasename] on local database cluster
  • \a: Print output “unaligned”, or without adding whitespace to make columns align. Good when trying to print machine-readable output to the terminal.
  • \x: Print output “expanded”. This causes output to be printed out like: “Column: Value”, rather than the normal tabular/spreadsheet style. Useful in lots of contexts, especially when you’ve got some columns that have a very large text field.
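To make the \x, \a and \o trio concrete, here’s a minimal psql session sketch (the table and file names below are made up):

-- toggle expanded output: each column of the next result prints as its own "Column | Value" row
\x
SELECT * FROM crash_reports LIMIT 1;

-- toggle unaligned output and redirect results to a file, then switch back to the terminal
\a
\o /tmp/signatures.txt
SELECT signature, count(*) FROM crash_reports GROUP BY signature;
\o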

And here’s a few useful commands that didn’t make the top 10 lists:

  • \?: Help for \ commands
  • \timing: Turn timing of all commands on, reports in ms.
  • \s: Print out your psql history to STDOUT.
  • \i [filename]: execute the contents of [filename]
  • \! [command]: execute a command in the local shell

Finally, when you start up psql, you have a few options. My favorite combination when generating machine-readable output is to add -AX -qt (axe cutie! hat tip to Greg Sabino Mullane for that mnemonic): unaligned output, skip the .psqlrc, quiet mode and tuples-only. Another very useful psql option is -E, which prints the SQL that psql’s backslash commands run behind the scenes. This will help you learn about information_schema items and all the internal tables used to provide system information.
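As a concrete example of the machine-readable combination in a shell one-liner (the database name here is hypothetical):

# -A unaligned, -X skip .psqlrc, -q quiet, -t tuples only: pipe-delimited rows, no headers or footers
psql -AX -qt -d mydb -c "SELECT product_name, version_string FROM product_versions" | head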

The shortcuts really worth spending a bit of time exploring are \e* and \d*. Both provide quite a bit of useful functionality, with relatively easy-to-remember letter combinations.

Eliminating duplicate code: our backfill functions for Socorro

Last Friday, I spent some time refactoring a user defined function in Socorro that was taking a little too long to run each day.

This meant splitting up one function into about 8 separate functions. Our functions are designed to backfill themselves when a failure occurs. However, if we need to remove an incorrect daily report and re-run the functions from scratch, we’ve typically written a special function for every report called backfill_REPORTNAME that handles the cleanup work.

This means we’ve got a lot of boilerplate code that it would be really nice to replace. So, I took this opportunity to create a utility function, and hopefully I’ll never have to write another backfill_REPORTNAME function again!

Here it is:

CREATE OR REPLACE FUNCTION backfill_named_table(tablename text, updateday date) 
    RETURNS boolean
    LANGUAGE plpgsql
AS $function$
DECLARE
    update_proc_name TEXT := 'update_' || tablename;
BEGIN

-- Check if requested table for backfilling exists
PERFORM 1 FROM information_schema.tables WHERE table_name=tablename;
IF NOT FOUND THEN
    RAISE INFO 'table: % not found', tablename;
    RETURN FALSE;
END IF;

-- Check that requested function for update exists
PERFORM 1 FROM pg_proc WHERE proname = update_proc_name;
IF NOT FOUND THEN
    RAISE INFO 'proc: % not found', update_proc_name;
    RETURN FALSE;
END IF;

EXECUTE format('DELETE FROM %I WHERE report_date = %L', tablename, updateday);

EXECUTE format('SELECT %I(%L, FALSE)', update_proc_name, updateday);

RETURN TRUE;

END;
$function$
;

Here’s the file with the code.

I’ve been trying to switch over to using format() instead of || in my queries, because it tends to be much more readable.

You’ll see that I’ve got checks that the table for backfilling exists and that the user-defined function for the update exists. The type checking on the function’s arguments ensures that updateday is a valid date. If you think there’s any improvement I could make on this, definitely let me know in the comments.
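For reference, calling it looks like this (the report name here is hypothetical; it assumes both the table and a matching update_* function exist):

-- re-run a single day for a made-up report table named signature_summary;
-- backfill_named_table() will look for an update_signature_summary(date, boolean) function
SELECT backfill_named_table('signature_summary', '2013-08-05');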

JOINing against VIEWs can be harmful

I had a recent code review problem that was very curious at first glance, but it came down to the use of a complex VIEW in an even more complicated and frequently used reporting query.

I’ll just paste an edited version of the review below.

tl;dr: Don’t use product_info (a view, not a table) in this query, move WHERE clauses for product_name and version_string into the infos CTE, strictly limit the number of columns in tables being joined

This query is unfortunately doomed because it uses product_info — a view which already contains data from product_versions. The view also pulls in four other tables that we don’t care about for this query.

As a result, you get a self-join many times over. A hint at the horrors of what Postgres decides to do with this is here:

Unique  (cost=10248.32..10248.35 rows=1 width=294)
   CTE infos
     ->  Hash Right Join  (cost=301.82..1683.83 rows=40195 width=96)
           Hash Cond: (pvb.product_version_id = pv.product_version_id)
           ->  Seq Scan on product_version_builds pvb  (cost=0.00..768.71 rows=42271 width=16)
           ->  Hash  (cost=282.46..282.46 rows=1549 width=84)
                 ->  Hash Right Join  (cost=218.53..282.46 rows=1549 width=84)
                       Hash Cond: (pv.product_version_id = pi.product_version_id)
                       ->  Seq Scan on product_versions pv  (cost=0.00..40.29 rows=1629 width=35)
                       ->  Hash  (cost=199.17..199.17 rows=1549 width=53)
                             ->  Subquery Scan on pi  (cost=179.81..199.17 rows=1549 width=53)
                                   ->  Sort  (cost=179.81..183.68 rows=1549 width=62)
                                         Sort Key: product_versions.product_name, product_versions.version_string
                                         ->  Hash Join  (cost=5.70..97.73 rows=1549 width=62)
                                               Hash Cond: ((product_versions.product_name = product_release_channels.product_name) AND (product_versions.build_type = product_release_channels.release_channel))
                                               ->  Seq Scan on product_versions  (cost=0.00..40.29 rows=1629 width=52)
                                               ->  Hash  (cost=5.03..5.03 rows=45 width=42)
                                                     ->  Hash Join  (cost=2.34..5.03 rows=45 width=42)
                                                           Hash Cond: (product_release_channels.release_channel = release_channels.release_channel)
                                                           ->  Hash Join  (cost=1.23..3.29 rows=45 width=34)
                                                                 Hash Cond: (product_release_channels.product_name = products.product_name)
                                                                 ->  Seq Scan on product_release_channels  (cost=0.00..1.45 rows=45 width=22)
                                                                 ->  Hash  (cost=1.10..1.10 rows=10 width=12)
                                                                       ->  Seq Scan on products  (cost=0.00..1.10 rows=10 width=12)
                                                           ->  Hash  (cost=1.05..1.05 rows=5 width=8)
                                                                 ->  Seq Scan on release_channels  (cost=0.00..1.05 rows=5 width=8)

Whenever you see so many nested joins, subquery sorts and sequential scans mushed together in a staircase, that’s a signal that we should investigate whether the query we’re running is really what we thought it was.

While @peterbe dug through code with me, he mentioned that product_info was a view! Now all the self-JOINs made sense and I started refactoring.

The product_info view was being deconstructed into its component parts, which already included product_versions (resulting in a self-join), and it pulled in a bunch of junk that, for the purposes of this query, we don’t really care about. So, as the first step, I just made a copy of the SELECT query from the view (you can get that by running \d+ product_info in psql, or you can dig it out of the socorro/external/postgresql/procs/views section of our code).
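(You can also pull a view’s definition straight out of the catalog; this is plain Postgres, nothing Socorro-specific:)

-- returns the SELECT behind the view, pretty-printed
SELECT pg_get_viewdef('product_info'::regclass, true);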

Here’s my proposal for what should go into infos:

         SELECT 
                product_versions.product_version_id
                , product_versions.version_string
                , 'new'::text AS which_table
                , product_versions.product_name
                , product_versions.release_version
                , product_versions.build_type
                , product_version_builds.build_id
                , product_versions.is_rapid_beta
                , product_versions.rapid_beta_id
                , product_versions.version_sort
        FROM product_versions
                LEFT JOIN product_version_builds USING (product_version_id)
        WHERE  %(product name and versions)s

We really need to move the product name and version filtering to this portion of the query, because otherwise we end up doing a horrible self-join on a 42,000-row table!

Here’s what the self-join looks like in the EXPLAIN:

   ->  Sort  (cost=8564.48..8564.49 rows=1 width=294)
         Sort Key: i1.version_sort, i1.product_version_id, i1.product_name, i1.version_string, i1.which_table, i1.release_version, i1.build_type, i1.build_id, i1.is_rapid_beta, i2.is_rapid_beta, ((((i2.product_name)::text || ':'::text) || (i2.version_string)::text))
         ->  Merge Join  (cost=7755.52..8564.47 rows=1 width=294)
               Merge Cond: ((i1.product_name = i2.product_name) AND (i1.release_version = i2.release_version) AND (i1.build_type = i2.build_type))
               Join Filter: (((i1.product_name = 'Firefox'::citext) AND (i1.version_string = '26.0a2'::citext) AND (i1.version_string = i2.version_string)) OR ((i1.rapid_beta_id = i2.product_version_id) AND (i2.product_name = 'Firefox'::citext) AND (i2.version_string = '26.0a2'::citext) AND (i2.is_rapid_beta IS TRUE)))
               ->  Sort  (cost=3877.76..3978.25 rows=40195 width=233)
                     Sort Key: i1.product_name, i1.release_version, i1.build_type
                     ->  CTE Scan on infos i1  (cost=0.00..803.90 rows=40195 width=233)
               ->  Sort  (cost=3877.76..3978.25 rows=40195 width=133)
                     Sort Key: i2.product_name, i2.release_version, i2.build_type
                     ->  CTE Scan on infos i2  (cost=0.00..803.90 rows=40195 width=133)

[image: sad kitten]

This is pretty sad. The Sort at the top is the peak of Mt. Sadness. There’s a series of sorts further down that are just HUGE, because we’re tossing around 45k records that must be joined to each other, each at an estimated width of 294 bytes.

The obvious (but sadly not always effective) thing to try is to see if we can filter our rows out earlier. Because we’re using infos, conveniently, that looks possible without too much trouble.

That just leaves sorting out the rapid beta self-join, which based on my tests should be pretty easy to continue to do in the body of the main SELECT, at line 125.

With the changes I proposed, this query runs in ~200 ms in stage, and the query plan looks like:

                                                                                QUERY PLAN                                                                                 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=37.07..37.08 rows=1 width=294) (actual time=221.131..221.149 rows=31 loops=1)
   CTE infos
     ->  Nested Loop Left Join  (cost=0.00..35.18 rows=26 width=64) (actual time=0.136..0.459 rows=150 loops=1)
           ->  Index Scan using product_version_version_key on product_versions  (cost=0.00..7.27 rows=1 width=52) (actual time=0.111..0.112 rows=1 loops=1)
                 Index Cond: ((product_name = 'Firefox'::citext) AND (version_string = '26.0a2'::citext))
           ->  Index Only Scan using product_version_builds_key on product_version_builds  (cost=0.00..27.58 rows=33 width=16) (actual time=0.019..0.268 rows=150 loops=1)
                 Index Cond: (product_version_id = product_versions.product_version_id)
                 Heap Fetches: 150
   ->  Hash Join  (cost=0.84..1.86 rows=1 width=294) (actual time=0.943..47.334 rows=22500 loops=1)
         Hash Cond: (i1.product_version_id = i2.product_version_id)
         Join Filter: ((i1.version_string = i2.version_string) OR ((i1.rapid_beta_id = i2.product_version_id) AND (i2.is_rapid_beta IS TRUE)))
         ->  CTE Scan on infos i1  (cost=0.00..0.52 rows=26 width=233) (actual time=0.141..0.236 rows=150 loops=1)
         ->  Hash  (cost=0.52..0.52 rows=26 width=69) (actual time=0.778..0.778 rows=150 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 8kB
               ->  CTE Scan on infos i2  (cost=0.00..0.52 rows=26 width=69) (actual time=0.002..0.664 rows=150 loops=1)
 Total runtime: 221.321 ms
(16 rows)

VPN Problems and Ubuntu: killing off the dnsmasq zombie

I’ve been having problems with VPN, DNS and Ubuntu for a year. But, I’m also pretty lazy when it comes to spending time on configuration. And configuring VPNs is like last on my list of ways I’d like to spend my time.

In short, I’d rather reboot than figure out exactly why my networking just stopped working.

REBOOT.

Fortunately, I had an easy (for me) work-around for most of my VPN needs: use SSH and a jump-host for getting to servers. I found it annoying when I wanted to look at a website on protected network space, or had a service on an unusual port that I wanted to test things against. I would work around with SSH tunnels, or I would fire up my Mac, whose VPN settings worked flawlessly.

That all said, I thought today, a sunny, lovely fall day in Portland, I would fix my VPN.

And so, my buddy @uberj_ helped me get things sorted.

The root cause of all my VPN heartache was the dnsmasq daemon controlling my DNS. And, related, network-manager. There are a few places that document exactly how to disable dnsmasq:

  • DNS in Ubuntu 12.04 http://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
  • Disabling dnsmasq as your local DNS server in Ubuntu http://mark.orbum.net/2012/05/14/disabling-dnsmasq-as-your-local-dns-server-in-ubuntu/

However, they leave out one important step: killing off the existing dnsmasq process. For the unlucky, restarting network-manager does not kill off dnsmasq.

So, to find and kill dnsmasq, do the following:

 sudo service network-manager stop
 kill `ps -C dnsmasq -o pid=`
 sudo service network-manager start

Then, start your VPN and check out the contents of /etc/resolv.conf. If all went well, you’ve got nameserver addresses other than 127.0.0.1 in the file.

Yay!

Sadly, this was not the end of my story.

After a few minutes, NetworkManager started dnsmasq up again!

[image: zombie dnsmasq]

So, like any reasonable sysadmin, I opened up the /etc/NetworkManager/NetworkManager.conf file and changed the dns=dnsmasq line to dns=/dev/null. My guess was that you can probably put just about anything other than dnsmasq into that line to permanently disable the plugin.

I ran sudo service network-manager restart, checked /etc/resolv.conf and felt pretty smug.

I also tried uninstalling the dnsmasq-base package, but unfortunately that takes out a number of other packages I appear to need. So, I left /dev/null in my NetworkManager.conf, and updated this blog post.

But wait...

While editing this blog post, dnsmasq took over my DNS settings again.

A clue as to what was happening was in /var/log/syslog:

Oct 18 10:20:10 localhost dnsmasq[30535]: started, version 2.59 cache disabled
Oct 18 10:20:10 localhost dnsmasq[30535]: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
Oct 18 10:20:10 localhost dnsmasq[30535]: DBus support enabled: connected to system bus
Oct 18 10:20:10 localhost dnsmasq[30535]: warning: no upstream servers configured

It turns out that dnsmasq was still getting revived by NetworkManager. Why NetworkManager doesn’t seem to care about its own configuration settings was beyond my willingness to investigate today. So, I did some more searching about truly killing off dnsmasq for good.

And I found this thread, and this sample configuration file. In the output for the dnsmasq process from ps:

nobody   30777 30759  0 10:21 ?        00:00:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/var/run/sendsigs.omit.d/network-manager.dnsmasq.pid --listen-address=127.0.0.1 --conf-file=/var/run/nm-dns-dnsmasq.conf --cache-size=0 --proxy-dnssec --enable-dbus --conf-dir=/etc/NetworkManager/dnsmasq.d

I dug into the thread, and the suggestion was to set port=0 in the config. So I created a file called custom in /etc/NetworkManager/dnsmasq.d and ran sudo service network-manager restart.
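For clarity, the whole file is just one directive (the filename custom is arbitrary):

# /etc/NetworkManager/dnsmasq.d/custom
# setting port=0 tells dnsmasq not to listen for DNS queries at all
port=0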

And then I got this in my syslog:

Oct 18 10:21:10 localhost dnsmasq[30777]: started, version 2.59 DNS disabled

FINALLY.

FINALLY!

An experiment in attention

I’ve had recurring thoughts about attention and who I give mine to. In the last week, I’ve been mentioned in a couple of “women in tech” Twitter lists. This seems to happen about quarterly: someone will create a list of 50 or so, or maybe 100+, women.

I spent a couple days looking through my 5k followers and assembled a list of the women and women’s groups who are following me. I probably missed a few, and I know I followed a few people who don’t consider themselves women. Sorry! Just let me know and I’ll add/drop as needed.

So, why bother with a list like this?

Before I created this list, I was following about 920 people. Which, in itself, is a sort of ridiculous number. How could I possibly pay attention to that many people?

I really don’t, right? I just check into twitter, sample the firehose, and then step away for minutes, hours or days.

When I do sample the tweet stream, whose voices do I listen to? There are certainly a few close friends whose feeds I look at directly, and a few other people I’m interested in whose backlogs I’ll catch up on. Otherwise, it’s whoever is the most vocal.

What I noticed is: most of the voices I hear from when I sample the feed are men. It wasn’t anywhere near balanced. That’s on social activity, technical rants, technical praise and blogging.

I’d like to be skewed toward women’s voices for a while. Particularly on tech issues. So, I just added about 450 women to my feed who were already following me.

My next step may be replacing my primary feed with this list I’ve made. I wrote a tool a while back to extract URLs and RSS feeds from my friends’ twitter profiles and feeds. The code isn’t awesome, just rough and practical. But you could do something similar using it. You need a set of read/write API keys (create an app, then make it read/write), but I did the “hard” work for you:

./opml.py --consumer_key [key here] --consumer_secret [secret here] --access_token [token here] --access_token_secret [token secret here]  --to_follow [file of twitter handles]

I left some crud in there that links the script directly to my account for the list. Sorry! If anyone actually wants to use this, I’ll clean it up. (just ping me on github)

Anyway, doing this kind of attention hacking for yourself isn’t hard. It is drudgery to go through all your followers and guess who is what gender. But it is interesting to spend a few hours contemplating what the people you’re giving your attention to have in common, and how you might hack it a bit to hear from different perspectives from time to time.

I’m turning comments off because I don’t care to hear from anyone who thinks I’m somehow being sexist by changing who I pay attention to. Cheer up, haters! I’m sure plenty of people are already paying attention to you.

Fancy SQL Monday: format() instead of quote_*()

In the comments, Isaac pointed out that using format() dramatically increases the readability of SQL. I liked the look of his query, so I dug a little deeper.

As of version 9.1 (first released in 2011), a new function is listed in Postgres’ built-in string function documentation:

format(formatstr text [, str “any” [, …] ]): Format a string. This function is similar to the C function sprintf; but only the following conversion specifications are recognized: %s interpolates the corresponding argument as a string; %I escapes its argument as an SQL identifier; %L escapes its argument as an SQL literal; %% outputs a literal %. A conversion can reference an explicit parameter position by preceding the conversion specifier with n$, where n is the argument position.

We also have examples linked in the definition for various quoting strategies for dynamic SQL.

This is an example where the Postgres documentation probably should have reversed the order of what is mentioned.

It turns out that format() makes it much easier to avoid using the quote_*() functions. The code looks a lot more like a python """ string (you can have arbitrary whitespace in there!), with flexible options for usage. The only feature missing is named parameters.
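To illustrate with a made-up example (not code from Socorro): both statements below build the same DELETE, but the format() version reads much closer to the SQL it produces.

-- old style: concatenation with quote_ident()/quote_literal()
SELECT 'DELETE FROM ' || quote_ident('my table') || ' WHERE note = ' || quote_literal('it''s done');

-- format(): %I escapes an identifier, %L escapes a literal
SELECT format('DELETE FROM %I WHERE note = %L', 'my table', 'it''s done');

-- both return: DELETE FROM "my table" WHERE note = 'it''s done'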

My application requires Postgres 9.2 at this point (for JSON datatype), so my plan is to refactor a few functions using format() instead of quote_ident() in particular.

Are there situations where you’d prefer to use quote_*() other than for backward compatibility? It seems as though format() is far safer, particularly for the quoting and nullable problems mentioned on the Quote Literal Example documentation.

Catalog SQL Friday: using DO when we’ve got tables as variables

Just a quick note about modifying constraints:

There’s no such thing as ALTER CONSTRAINT. So, if you want to safely change a CHECK constraint, like on a partition, you need to DROP and ADD it in a single transaction.
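Spelled out for a single partition with made-up names, the pattern is just:

BEGIN;
ALTER TABLE reports_20131014
    DROP CONSTRAINT IF EXISTS reports_20131014_date_check;
ALTER TABLE reports_20131014
    ADD CONSTRAINT reports_20131014_date_check
    CHECK (date_processed >= timestamptz '2013-10-14'
       AND date_processed <  timestamptz '2013-10-21');
COMMIT;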

Below is a snippet that finds partitions and their CHECK constraints based on a WHERE clause, then DROPs the existing constraint and ADDs back the correct one. It doesn’t take much sleuthing to figure out what the problem was. 🙂

 DO $$
  DECLARE myrecord record;
  DECLARE theweek text;
  BEGIN
    FOR myrecord IN SELECT relname, conname from pg_constraint
      JOIN pg_class ON pg_constraint.conrelid = pg_class.oid
      WHERE consrc ~ 'without' and split_part(relname, '_201', 1)
      IN (select table_name from report_partition_info
      WHERE partition_column = 'date_processed') LIMIT 1
    LOOP
       EXECUTE 'ALTER TABLE ' || quote_ident(myrecord.relname)
       || ' DROP CONSTRAINT IF EXISTS '
       || quote_ident(myrecord.conname) || ';';

       theweek = substring(myrecord.relname from '........$');

       EXECUTE 'ALTER TABLE ' || quote_ident(myrecord.relname)
       || ' ADD CONSTRAINT ' || quote_ident(myrecord.conname)
       || ' CHECK ((date_processed >= timestamptz('
       || quote_literal(to_char(date(theweek), 'YYYY-MM-DD')) || '))'
       || ' AND (date_processed < timestamptz('
       || quote_literal(to_char(date(theweek) + 7, 'YYYY-MM-DD'))
       || ')));';

       RAISE NOTICE 'DONE: %', myrecord.relname;
    END LOOP;

END$$; 

Here’s the gist version for easier reading.

The couple of things I learned in this process were a nice feature of substring() that let me easily return the date portion of my partition names, and split_part(), which let me return the parent table name and compare it to my list of partitionable tables for the specific partition column. I recently added support for partitioning on a different column for certain tables, so I have to differentiate for this fix. The string function docs are pretty great.
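For example, with a made-up partition name, the two calls behave like this:

-- the last eight characters are the week the partition starts
SELECT substring('signature_summary_20131014' from '........$');  -- returns 20131014

-- everything before the date gives back the parent table name
SELECT split_part('signature_summary_20131014', '_201', 1);       -- returns signature_summary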

I didn’t do any optimization of this — just got it working and am now testing it in our stage environment. The final script is going to perform the changes on a month’s worth of partitions at a time to help reduce the chance of deadlocking.

If you have thoughts on how I could have done this more efficiently, let me know in the comments!

Why I gave $1024 to the Ada Initiative

[image: AdaCamp participants]

I’m one of the founding board members and advisors to the Ada Initiative, a 501(c)3 charitable non-profit that supports women in open technology and culture. This year, I am supporting the organization through a donation of $1024.

My reasons for supporting the Ada Initiative, advocating for their work, and now donating are many. I started out working with open source software in 1996, as a sysadmin setting up backups for a small research lab at the University of Oregon. I’d installed Linux from floppies in 1994, and surrounded myself with other open source advocates, sysadmins, hackers and finally, computer science students. Open source software has defined my career, and most of what I’ve chosen to do with my life.

In those few years in college, I met only two women who were interested in open source software – one that I brought with me from Chemistry into the Computer Science program and a woman who worked in Academic Services. I met no other professors, fellow students or friends online to collaborate or talk with.

All my mentors and friends were men. And, to be honest, I never thought much about that. Growing up, I tended to hang out with boys or spend a lot of time alone. As I started to think about computers and programming as fundamentally changing society, as tools that enabled and could dramatically change lives, I felt discomfort that only men seemed to know the things I knew.

A critically important aspect of being involved in the work that the Ada Initiative does is simply introducing women to each other in our communities. I’ve attended all three AdaCamps – in Melbourne, AU, Washington DC and San Francisco. Each gathering was larger than the last, and I’ve had important conversations at each that have changed my thinking and my activism in significant ways.

My most striking realization is how little work is done to educate adult women about computer science, about open source and open culture. Most outreach efforts focus on children, with the implication that adults are somehow “not worth the effort”, “beyond help”, or that adults will simply find the things that interest them on their own.

My experience with the Ada Initiative and with PyLadies contradicts everything that I had assumed, and everything that some advocates for early CS education for girls have implied — that the only way to change diversity is to start with a new generation of kids.

Adult women want the same skills that anyone interested in computers wants. But there are important social and economic barriers to pursuing those goals.

The grassroots movement to educate adult women in writing software is inspiring. It reinforces my belief that it is never too late to learn these skills, and the women who seek out these classes inspire me with their dedication and fearlessness. Because of what I’ve learned and experienced, I’m now taking a break from the Ada Initiative advisors board to focus on my work with PyLadies.

Many people share my discomfort with a world whose code is only written by men. We’re making important changes in the way women view open technology and culture. We’re showing women that they definitely can learn and master these skills, that there are huge benefits to doing so and that women fundamentally belong in our communities. That work, I believe, also leads to the kind of intersectional awakening about diversity that all community builders should have.

The work of the Ada Initiative is difficult, wonderful and making a huge difference in my life. Please join me in supporting the Ada Initiative in 2013.

Using JSON data type in production with Socorro

Back in June, we started using the JSON datatype in production for Mozilla’s Socorro. Our implementation analyzes crashes from Firefox and other Mozilla products, with an HBase backend and Postgres 9.2 processing and serving reports to middleware and a Django front-end.

The guts of the application that stores raw data into Postgres is implemented in a crashstorage class, and an example of how we use this field in report generation is in a stored procedure for rolling up our “Top Crashers By Signature” reports.

The idea is to make the metadata we store for each crash available to SQL queries in Postgres, in its raw JSON form. We are currently still storing core crash report data in a normalized table (extracted from the original JSON). We are considering re-writing everything to just use a JSON column at the core. There may be some significant negative performance impacts, so more testing is needed before we move forward.
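As a rough sketch of the storage side (the table and column names here are illustrative, not the actual Socorro schema):

-- raw crash metadata goes in as-is; Postgres only validates that it is well-formed JSON
CREATE TABLE raw_crashes (
    uuid            text        PRIMARY KEY,
    raw_crash       json        NOT NULL,
    date_processed  timestamptz NOT NULL
);

INSERT INTO raw_crashes (uuid, raw_crash, date_processed)
VALUES ('00000000-0000-0000-0000-000000000000',
        '{"ProductName": "Firefox", "Version": "26.0a2", "BuildID": "20131014004001"}',
        now());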

Previously, we only stored metadata in HBase. While it is convenient to have a year’s worth of data in HBase for running arbitrary queries against, once we settle on reports, it is far faster and more convenient to be able to use Postgres with JSON functions and SQL to rollup aggregates. I’m sure this isn’t true for everyone. We’re working with large, but manageable data sets – around 30 GB per week of JSON, and only 2-3 weeks of data at a time.

The advantage of having the JSON in Postgres is that we are no longer adding columns to our base tables. It’s not terribly difficult to add columns, but it requires some special work to make the database schema migrations as low-lock as possible on tables with weekly partitions. Cutting out the DBA as an intermediary before developers can deploy new features and reports is a huge win for automation and sustainability of the application.

My preference is that developers never require a DBA to be present when deploying an app — even when there are schema changes. There are lots of features in Postgres that significantly reduce the need for DBA involvement — streaming replication, pg_basebackup, zero downtime column ADD, DROP and ALTER, transactional DDL and CREATE INDEX CONCURRENTLY to name a few. We’re not quite there for Socorro, but my plan is to get there.

The ideal role of the Postgres DBA is more to write database code, monitor and improve performance. That kind of role feels a lot more like a partner to developers, rather than a separate, mysterious entity. In much the same way that DevOps and configuration management enable great cooperation between IT and developers, many of Postgres’ maintenance features enable greater cooperation and less friction between DBAs and developers deploying new features.

There are still a few configuration and initialization issues to work through before Postgres can be seen as completely past its reputation as a difficult-to-deploy database. Some of that can be solved with more widely used configuration recipes for tools like Puppet and Chef. My colleagues in IT have been working on a Puppet module that they plan to release that automatically sets up a master database and as many replicas as you’d like to configure.

My hope is that people more widely share their automation stories and code!

In any case, the JSON datatype is solving an important problem for our team, and significantly reducing the demand for schema changes. I’d love to hear more reports from users trying out the JSON datatype in the field.

Fancy SQL Monday: Why use OVER() instead of a CROSS JOIN?

EDIT: I had to add this, because it was cracking me up:

FOR LIKE EVERY REASON EVER

I’m reimplementing some expensive database queries, moving them from our middleware into materialized view tables. We make pretty extensive use of Common Table Expressions (CTEs), and we generate many reporting queries that calculate averages and percentages of a total. One way this could be done is with a CROSS JOIN, which is a cartesian product of two tables: add the total to our original table of event counts, and then calculate the percentage. For information about the JOIN types supported by PostgreSQL, see the SELECT…FROM documentation.

Or we could use OVER(), one of several Window Functions supported by Postgres. From the Postgres documentation, Window Functions are:

A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row — the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result.

A few other folks have written about Window Functions: Postgres Guide to Window Functions, Window Functions: Postgres’s best kept secret, SQL Window Functions (examples in Ruby). “All built-in and user-defined aggregate functions — such as count, max, bit_or, or xmlagg” — are available for computation as a window function, quoting Open Logic, who phrased it well.

If you read about OVER(), you’ll see that all the examples specify a PARTITION clause. But you don’t have to specify a PARTITION to take advantage of the feature.
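Here’s a tiny, self-contained illustration (inline VALUES, not the Socorro tables) of OVER () with no PARTITION next to a PARTITION BY, for contrast:

SELECT category
    , report_count
    , SUM(report_count) OVER ()                      AS total_count     -- grand total over all rows: 17
    , SUM(report_count) OVER (PARTITION BY category) AS category_count  -- per-category totals: 15 and 2
FROM (VALUES ('Firefox', 10), ('Firefox', 5), ('Fennec', 2)) AS t(category, report_count);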

We had a query containing two CTEs and a final SELECT that produced the matview data. Here’s an example of that query:

WITH crashes as (
  SELECT
    product_name as category
    , version_string
    , SUM(report_count) as report_count
  FROM signature_summary_products
  JOIN signatures USING (signature_id)
  WHERE signatures.signature = 'libflashplayer.so@0x1f2a14'
  AND report_date >= now()::date - '15 day'::interval
  AND report_date < now()::date 
  GROUP BY product_name, version_string
),
totals as (
  SELECT
    category
    , version_string
    , report_count
    , SUM(report_count) OVER () as total_count
  FROM crashes
)
SELECT category
  , version_string
  , report_count
  , round((report_count * 100::numeric)/total_count,3)::TEXT
  as percentage
FROM totals
ORDER BY report_count DESC;

The part under consideration is:

    , SUM(report_count) OVER () as total_count

So, you see that OVER() has no PARTITION defined, meaning that the SUM will be calculated over the entire result.

Here is that same query, implemented using SUM() and a CROSS JOIN:

WITH crashes as (
  SELECT
    product_name as category
    , version_string
     , SUM(report_count) as report_count
  FROM signature_summary_products
  JOIN signatures USING (signature_id)
  WHERE signatures.signature = 'libflashplayer.so@0x1f2a14'
    AND report_date >= now()::date - '15 day'::interval
    AND report_date < now()::date 
  GROUP BY product_name, version_string
),
totals as (
  SELECT
    SUM(report_count) AS total_count
  FROM crashes
)
SELECT category
  , version_string
  , report_count
  , round((report_count * 100::numeric)/total_count,3)::TEXT
as percentage
FROM crashes CROSS JOIN totals
ORDER BY report_count DESC;

So what does Postgres do differently with OVER() versus SUM() plus a CROSS JOIN?

Here’s the EXPLAIN output from the OVER() query, pared down to the relevant section:

   CTE totals
     ->  WindowAgg  (cost=0.00..0.03 rows=1 width=72) (actual time=0.112..0.114 rows=3 loops=1)
           ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=72) (actual time=0.097..0.100 rows=3 loops=1)
   ->  CTE Scan on totals  (cost=0.00..0.04 rows=1 width=104) (actual time=0.121..0.129 rows=3 loops=1)

The important bit to have a look at is WindowAgg right after CTE totals.

Now compare that to the EXPLAIN output from the SUM() plus CROSS JOIN query:

   CTE totals
     ->  Aggregate  (cost=0.02..0.03 rows=1 width=8) (actual time=0.015..0.015 rows=1 loops=1)
           ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=8) (actual time=0.001..0.004 rows=3 loops=1)
   ->  Nested Loop  (cost=0.00..0.07 rows=1 width=104) (actual time=0.191..0.205 rows=3 loops=1)
         ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=72) (actual time=0.162..0.164 rows=3 loops=1)
         ->  CTE Scan on totals  (cost=0.00..0.02 rows=1 width=32) (actual time=0.006..0.006 rows=1 loops=3)

You can see here that we now have an Aggregate (for the SUM()) plus a Nested Loop (for the CROSS JOIN). This example query only has three results returned, but our more typical queries involve 10k or more rows returned.

Avoiding the Aggregate and Nested Loop will save us lots of memory and processing time on every run of a very expensive query.