Using JSON data type in production with Socorro

Back in June, we started using the JSON datatype in production for Mozilla’s Socorro, which analyzes crashes from Firefox and other Mozilla products. Our deployment pairs an HBase backend with Postgres 9.2, which processes reports and serves them to the middleware and a Django front-end.

The guts of the application that store raw data in Postgres are implemented in a crashstorage class, and an example of how we use this field in report generation is in a stored procedure for rolling up our “Top Crashers By Signature” reports.

The idea is to make the metadata we store for each crash available to SQL queries in Postgres, in its raw JSON form. We are currently still storing core crash report data in a normalized table (extracted from the original JSON). We are considering rewriting everything to use just a JSON column at the core, but that may have significant negative performance impacts, so more testing is needed before we move forward.
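
For illustration, here is a minimal sketch of what that looks like. The table and column names are hypothetical, not Socorro’s actual schema:

-- Hypothetical table: raw crash metadata lands in a json column,
-- alongside whatever normalized columns we still maintain.
CREATE TABLE raw_crashes (
    uuid UUID PRIMARY KEY
    , raw_crash JSON NOT NULL
    , date_processed TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Postgres validates JSON on the way in; malformed input is rejected.
INSERT INTO raw_crashes (uuid, raw_crash)
VALUES ('a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6'
        , '{"ProductName": "Firefox", "Version": "22.0"}');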

Previously, we only stored metadata in HBase. While it is convenient to have a year’s worth of data in HBase for running arbitrary queries against, once we settle on reports, it is far faster and more convenient to use Postgres with JSON functions and SQL to roll up aggregates. I’m sure this isn’t true for everyone. We’re working with large but manageable data sets: around 30 GB of JSON per week, and only 2-3 weeks of data at a time.
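
Once the JSON is queryable, the rollups are just SQL. Here is a sketch of the kind of aggregate we mean, again using the hypothetical table from above. One caveat: the ->> extraction operator arrived in Postgres 9.3; on 9.2 the json type validates and stores the data, but pulling fields out in plain SQL takes a helper function.

-- Count recent crashes per product, reading the product name
-- straight out of the stored JSON (->> requires 9.3+).
SELECT raw_crash->>'ProductName' AS product_name
    , count(*) AS report_count
FROM raw_crashes
WHERE date_processed >= now() - '7 days'::interval
GROUP BY raw_crash->>'ProductName'
ORDER BY report_count DESC;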

The advantage of having the JSON in Postgres is that we are no longer adding columns to our base tables. Adding columns isn’t terribly difficult, but it requires special work to keep the database schema migrations as low-lock as possible on tables with weekly partitions. Cutting out the DBA as an intermediary before developers can deploy new features and reports is a huge win for automation and for the sustainability of the application.

My preference is that developers never require a DBA to be present when deploying an app, even when there are schema changes. There are lots of features in Postgres that significantly reduce the need for DBA involvement: streaming replication, pg_basebackup, zero-downtime column ADD, DROP and ALTER, transactional DDL, and CREATE INDEX CONCURRENTLY, to name a few. We’re not quite there for Socorro, but my plan is to get there.
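
To make that concrete, here is a sketch of two of those features in action, reusing the hypothetical raw_crashes table from earlier:

-- Transactional DDL: if anything in the migration fails, the schema
-- change rolls back cleanly instead of leaving a half-applied mess.
BEGIN;
ALTER TABLE raw_crashes ADD COLUMN legacy_processing INTEGER;
-- ... run migration sanity checks here ...
COMMIT;  -- or ROLLBACK

-- CREATE INDEX CONCURRENTLY builds an index without blocking writes;
-- note that it must run outside a transaction block.
CREATE INDEX CONCURRENTLY raw_crashes_date_processed_idx
    ON raw_crashes (date_processed);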

The ideal role of the Postgres DBA is to write database code and to monitor and improve performance. That kind of role feels a lot more like a partner to developers than a separate, mysterious entity. In much the same way that DevOps and configuration management enable great cooperation between IT and developers, many of Postgres’ maintenance features enable greater cooperation and less friction between DBAs and developers deploying new features.

There are still a few configuration and initialization issues to work through before Postgres can be seen as completely past its reputation as a difficult-to-deploy database. Some of that can be solved with more widely used configuration recipes for tools like Puppet and Chef. My colleagues in IT have been working on a Puppet module, which they plan to release, that automatically sets up a master database and as many replicas as you’d like to configure.

My hope is that people will share their automation stories and code more widely!

In any case, the JSON datatype is solving an important problem for our team and significantly reducing the demand for schema changes. I’d love to hear more reports from users trying out JSON in the field.

Fancy SQL Monday: Why use OVER() instead of a CROSS JOIN?

EDIT: I had to add this, because it was cracking me up:

FOR LIKE EVERY REASON EVER

I’m reimplementing some expensive database queries, moving them from our middleware into materialized view tables. We make pretty extensive use of Common Table Expressions (CTEs), and we generate many reporting queries that calculate averages and percentages of a total. One way to do this is with a CROSS JOIN (a Cartesian product of two tables): compute the total, then join it back to the original table of event counts to derive the percentages. For information about the JOIN types PostgreSQL supports, see the SELECT…FROM documentation.

Or we could use OVER(), one of several Window Functions supported by Postgres. From the Postgres documentation, Window Functions are:

A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row — the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result.

A few other folks have written about Window Functions: Postgres Guide to Window Functions, Window Functions: Postgres’s best kept secret, SQL Window Functions (examples in Ruby). As Open Logic phrased it well, “all built-in and user-defined aggregate functions — such as count, max, bit_or, or xmlagg — are available for computation as a window function.”

If you read about OVER(), you’ll see that all the examples specify a PARTITION BY clause. But you don’t have to specify a PARTITION BY to take advantage of the feature.

We had a query containing two CTEs and a final SELECT that produced the matview data. Here’s an example of that query:

WITH crashes as (
  SELECT
    product_name as category
    , version_string
    , SUM(report_count) as report_count
  FROM signature_summary_products
  JOIN signatures USING (signature_id)
  WHERE signatures.signature = 'libflashplayer.so@0x1f2a14'
  AND report_date >= now()::date - '15 day'::interval
  AND report_date < now()::date 
  GROUP BY product_name, version_string
),
totals as (
  SELECT
    category
    , version_string
    , report_count
    , SUM(report_count) OVER () as total_count
  FROM crashes
)
SELECT category
  , version_string
  , report_count
  , round((report_count * 100::numeric)/total_count,3)::TEXT
  as percentage
FROM totals
ORDER BY report_count DESC;

The part under consideration is:

    , SUM(report_count) OVER () as total_count

So, you see that OVER() has no PARTITION BY defined, meaning that the SUM will be calculated over the entire result set.
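
For contrast, if we had specified PARTITION BY, each row’s window would be restricted to rows sharing a value. A hypothetical variant (not a query we use) would yield a per-category total instead of the grand total:

    , SUM(report_count) OVER (PARTITION BY category) as category_total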

Here is that same query, implemented using SUM() and a CROSS JOIN:

WITH crashes as (
  SELECT
    product_name as category
    , version_string
    , SUM(report_count) as report_count
  FROM signature_summary_products
  JOIN signatures USING (signature_id)
  WHERE signatures.signature = 'libflashplayer.so@0x1f2a14'
    AND report_date >= now()::date - '15 day'::interval
    AND report_date < now()::date 
  GROUP BY product_name, version_string
),
totals as (
  SELECT
    SUM(report_count) AS total_count
  FROM crashes
)
SELECT category
  , version_string
  , report_count
  , round((report_count * 100::numeric)/total_count,3)::TEXT
  as percentage
FROM crashes CROSS JOIN totals
ORDER BY report_count DESC;

So what is the difference, to Postgres, between the OVER() version and the SUM() plus CROSS JOIN version?

Here’s the EXPLAIN ANALYZE output from the OVER() query, pared down to the relevant section:

   CTE totals
     ->  WindowAgg  (cost=0.00..0.03 rows=1 width=72) (actual time=0.112..0.114 rows=3 loops=1)
           ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=72) (actual time=0.097..0.100 rows=3 loops=1)
   ->  CTE Scan on totals  (cost=0.00..0.04 rows=1 width=104) (actual time=0.121..0.129 rows=3 loops=1)

The important bit to have a look at is WindowAgg right after CTE totals.

Now compare that to the EXPLAIN ANALYZE output from the SUM() plus CROSS JOIN query:

   CTE totals
     ->  Aggregate  (cost=0.02..0.03 rows=1 width=8) (actual time=0.015..0.015 rows=1 loops=1)
           ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=8) (actual time=0.001..0.004 rows=3 loops=1)
   ->  Nested Loop  (cost=0.00..0.07 rows=1 width=104) (actual time=0.191..0.205 rows=3 loops=1)
         ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=72) (actual time=0.162..0.164 rows=3 loops=1)
         ->  CTE Scan on totals  (cost=0.00..0.02 rows=1 width=32) (actual time=0.006..0.006 rows=1 loops=3)

You can see here that we now have an Aggregate (for the SUM()) plus a Nested Loop (for the CROSS JOIN). This example query returns only three rows, but our more typical queries return 10k or more.

Avoiding the Aggregate and Nested Loop will save us lots of memory and processing time on every run of a very expensive query.

Speaking at the Cash Music Summit today

I’m headed off to the Cash Music Summit in Portland, OR today.

Here’s the blurb I wrote out for Jesse’s zine:

What’s open source got to do with it?

Free software sounds like a ’70s-era free-love pipe dream. The idea, and a copyright hack to enable it, were born in 1984, brought to term by an academic who just wanted to fix a problem he had with his printer.

Free and open source software underpins Cash Music’s platform. Why should that matter to you? Software developers prefer working with code they can freely read, understand and modify. But the benefits to end users are not always as clear.

There’s a method to the madness that is giving away something many believe is worth more if kept secret. It’s not just about sharing, but also accountability, choice and freedom. And, it’s about creating communities we love contributing to, with the kind of people we love to collaborate with.

I’m not sure exactly which story I’m going to tell today – I’ve got 10 minutes though and a pretty sweet picture of Tina Turner.

If you’ve not seen Cash Music or learned about its mission, check out this NYTimes article about their work.