Fancy SQL Monday: format() instead of quote_*()

In the comments, Isaac pointed out that using format() dramatically increases the readability of SQL. I liked the look of his query, so I dug a little deeper.

As of version 9.1 (first released in 2011), a new function is listed in Postgres’ built-in string function documentation:

format(formatstr text [, str “any” [, …] ]): Format a string. This function is similar to the C function sprintf; but only the following conversion specifications are recognized: %s interpolates the corresponding argument as a string; %I escapes its argument as an SQL identifier; %L escapes its argument as an SQL literal; %% outputs a literal %. A conversion can reference an explicit parameter position by preceding the conversion specifier with n$, where n is the argument position.

The documentation also links to examples of various quoting strategies for dynamic SQL.

This is a case where the Postgres documentation probably should have reversed the order in which these are mentioned.

It turns out that format() makes it much easier to avoid using the quote_*() functions. The code looks a lot more like a Python """ string (you can have arbitrary whitespace in there!), with flexible options for usage. The only feature missing is named parameters.
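Here’s a minimal sketch of the difference inside a PL/pgSQL function; my_table, my_constraint and my_date are hypothetical variables, not anything from my actual code:

-- Concatenation with quote_*():
EXECUTE 'ALTER TABLE ' || quote_ident(my_table)
  || ' ADD CONSTRAINT ' || quote_ident(my_constraint)
  || ' CHECK (date_processed >= ' || quote_literal(my_date) || ')';

-- The same statement with format():
EXECUTE format('
    ALTER TABLE %I
    ADD CONSTRAINT %I
    CHECK (date_processed >= %L)',
  my_table, my_constraint, my_date);

The format() version reads like the SQL you’d type by hand, with %I and %L handling identifier and literal quoting.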

My application requires Postgres 9.2 at this point (for JSON datatype), so my plan is to refactor a few functions using format() instead of quote_ident() in particular.

Are there situations where you’d prefer to use quote_*() other than for backward compatibility? It seems as though format() is far safer, particularly for the quoting and nullable problems mentioned on the Quote Literal Example documentation.

Catalog SQL Friday: using DO when we’ve got tables as variables

Just a quick note about modifying constraints:

There’s no such thing as ALTER CONSTRAINT. So, if you want to safely change a CHECK constraint, like on a partition, you need to DROP and ADD it in a single transaction.

Below is a snippet for finding partitions and their CHECK constraints based on a WHERE clause. Then we DROP the existing constraint and ADD back the correct one. It doesn’t take much sleuthing to figure out what the problem was. 🙂

DO $$
DECLARE
  myrecord record;
  theweek text;
BEGIN
  FOR myrecord IN
    SELECT relname, conname
    FROM pg_constraint
    JOIN pg_class ON pg_constraint.conrelid = pg_class.oid
    WHERE consrc ~ 'without'
      AND split_part(relname, '_201', 1) IN
        (SELECT table_name FROM report_partition_info
         WHERE partition_column = 'date_processed')
    LIMIT 1
  LOOP
    -- Drop the existing (incorrect) constraint
    EXECUTE 'ALTER TABLE ' || quote_ident(myrecord.relname)
      || ' DROP CONSTRAINT IF EXISTS '
      || quote_ident(myrecord.conname) || ';';

    -- The last eight characters of the partition name are the week's start date
    theweek := substring(myrecord.relname from '........$');

    -- Add back a correct CHECK constraint covering one week of data
    EXECUTE 'ALTER TABLE ' || quote_ident(myrecord.relname)
      || ' ADD CONSTRAINT ' || quote_ident(myrecord.conname)
      || ' CHECK ((date_processed >= timestamptz('
      || quote_literal(to_char(date(theweek), 'YYYY-MM-DD')) || '))'
      || ' AND (date_processed < timestamptz('
      || quote_literal(to_char(date(theweek) + 7, 'YYYY-MM-DD'))
      || ')));';

    RAISE NOTICE 'DONE: %', myrecord.relname;
  END LOOP;
END$$;

Here’s the gist version for easier reading.

The couple of things I learned in this process were a nice feature of substring() that allowed me to return the date portion of my partition names easily, and split_part(), which allowed me to return the parent table name and compare it to my list of partitionable tables for the specific partition column. I recently added support for partitioning certain tables on a different column, so I have to differentiate for this fix. The string function docs are pretty great.
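For example, given a hypothetical partition named reports_20130708, following the naming pattern above:

SELECT substring('reports_20130708' from '........$');  -- returns '20130708'
SELECT split_part('reports_20130708', '_201', 1);       -- returns 'reports'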

I didn’t do any optimization of this — just got it working and am now testing it in our stage environment. The final script is going to perform the changes on a month’s worth of partitions at a time to help reduce the chance of deadlocking.

If you have thoughts on how I could have done this more efficiently, let me know in the comments!

Using JSON data type in production with Socorro

Back in June, we started using the JSON datatype in production for Mozilla’s Socorro. Our implementation analyzes crashes from Firefox and other Mozilla products, with an HBase backend and Postgres 9.2 processing and serving reports to middleware and a Django front-end.

The guts of the application that stores raw data into Postgres is implemented in a crashstorage class, and an example of how we use this field in report generation is in a stored procedure for rolling up our “Top Crashers By Signature” reports.

The idea is to get the metadata we store for each crash into Postgres in its raw JSON form, available to SQL queries. We are currently still storing core crash report data in a normalized table (extracted from the original JSON). We are considering rewriting everything to just use a JSON column at the core, but there may be some significant negative performance impacts, so more testing is needed before we move forward.
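As a rough sketch of the shape of that storage (the table and column names here are illustrative, not our exact schema):

-- Illustrative only: raw crash JSON stored alongside the key
-- used to join back to the normalized report data.
CREATE TABLE raw_crashes (
    uuid            text NOT NULL,
    raw_crash       json NOT NULL,
    date_processed  timestamptz NOT NULL
);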

Previously, we only stored metadata in HBase. While it is convenient to have a year’s worth of data in HBase for running arbitrary queries against, once we settle on reports, it is far faster and more convenient to be able to use Postgres with JSON functions and SQL to rollup aggregates. I’m sure this isn’t true for everyone. We’re working with large, but manageable data sets – around 30 GB per week of JSON, and only 2-3 weeks of data at a time.

The advantage of having the JSON in Postgres is that we are no longer adding columns to our base tables. It’s not terribly difficult to add columns, but it requires some special work to make the database schema migrations as low-lock as possible on tables with weekly partitions. Cutting out the DBA as an intermediary before developers can deploy new features and reports is a huge win for automation and sustainability of the application.

My preference is that developers never require a DBA to be present when deploying an app — even when there are schema changes. There are lots of features in Postgres that significantly reduce the need for DBA involvement — streaming replication, pg_basebackup, zero downtime column ADD, DROP and ALTER, transactional DDL and CREATE INDEX CONCURRENTLY to name a few. We’re not quite there for Socorro, but my plan is to get there.

The ideal role of the Postgres DBA is more to write database code, monitor and improve performance. That kind of role feels a lot more like a partner to developers, rather than a separate, mysterious entity. In much the same way that DevOps and configuration management enable great cooperation between IT and developers, many of Postgres’ maintenance features enable greater cooperation and less friction between DBAs and developers deploying new features.

There are still a few configuration and initialization issues to work through before Postgres can be seen as completely past its reputation as a difficult-to-deploy database. Some of that can be solved with more widely used configuration recipes for tools like Puppet and Chef. My colleagues in IT have been working on a Puppet module that they plan to release that automatically sets up a master database and as many replicas as you’d like to configure.

My hope is that people more widely share their automation stories and code!

In any case, the JSON datatype is solving an important problem for our team, and significantly reducing the demand for schema changes. I’d love to hear more reports from users trying out the JSON datatype in the field.

Fancy SQL Monday: Why use OVER() instead of a CROSS JOIN?

EDIT: I had to add this, because it was cracking me up:

FOR LIKE EVERY REASON EVER

I’m reimplementing some expensive database queries, moving them from our middleware into materialized view tables. We make pretty extensive use of Common Table Expressions (CTEs). And we generate many reporting queries that calculate averages and percentages of a total. One way this could be done is with a CROSS JOIN (a cartesian product of two tables), adding the total, and then the percentage calculation, to our original query that produces counts of events. For information about JOIN types supported by PostgreSQL, see the SELECT…FROM documentation.

Or we could use OVER(), one of several Window Functions supported by Postgres. From the Postgres documentation, Window Functions are:

A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row — the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result.

A few other folks have written about Window Functions: Postgres Guide to Window Functions, Window Functions: Postgres’s best kept secret, SQL Window Functions (examples in Ruby). “All built-in and user-defined aggregate functions — such as count, max, bit_or, or xmlagg — are available for computation as a window function,” quoting Open Logic, who phrased it well.

If you read about OVER(), you’ll see that all the examples specify a PARTITION clause. But you don’t have to specify a PARTITION to take advantage of the feature.

We had a query containing two CTEs and a final SELECT that produced the matview data. Here’s an example of that query:

WITH crashes as (
  SELECT
    product_name as category
    , version_string
    , SUM(report_count) as report_count
  FROM signature_summary_products
  JOIN signatures USING (signature_id)
  WHERE signatures.signature = 'libflashplayer.so@0x1f2a14'
  AND report_date >= now()::date - '15 day'::interval
  AND report_date < now()::date 
  GROUP BY product_name, version_string
),
totals as (
  SELECT
    category
    , version_string
    , report_count
    , SUM(report_count) OVER () as total_count
  FROM crashes
)
SELECT category
  , version_string
  , report_count
  , round((report_count * 100::numeric)/total_count,3)::TEXT
  as percentage
FROM totals
ORDER BY report_count DESC;

The part under consideration is:

    , SUM(report_count) OVER () as total_count

So, you see that OVER() has no PARTITION defined, meaning that the SUM will be calculated over the entire result.

Here is that same query, implemented using SUM() and a CROSS JOIN:

WITH crashes as (
  SELECT
    product_name as category
    , version_string
    , SUM(report_count) as report_count
  FROM signature_summary_products
  JOIN signatures USING (signature_id)
  WHERE signatures.signature = 'libflashplayer.so@0x1f2a14'
    AND report_date >= now()::date - '15 day'::interval
    AND report_date < now()::date 
  GROUP BY product_name, version_string
),
totals as (
  SELECT
    SUM(report_count) AS total_count
  FROM crashes
)
SELECT category
  , version_string
  , report_count
  , round((report_count * 100::numeric)/total_count,3)::TEXT
  as percentage
FROM crashes CROSS JOIN totals
ORDER BY report_count DESC;

So what’s the difference, to Postgres, between using OVER() and using SUM() plus a CROSS JOIN?

Here’s the EXPLAIN output from the OVER() query, pared down to the relevant section:

   CTE totals
     ->  WindowAgg  (cost=0.00..0.03 rows=1 width=72) (actual time=0.112..0.114 rows=3 loops=1)
           ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=72) (actual time=0.097..0.100 rows=3 loops=1)
   ->  CTE Scan on totals  (cost=0.00..0.04 rows=1 width=104) (actual time=0.121..0.129 rows=3 loops=1)

The important bit to have a look at is WindowAgg right after CTE totals.

Now compare to the EXPLAIN output from a SUM() plus a CROSS JOIN query:

   CTE totals
     ->  Aggregate  (cost=0.02..0.03 rows=1 width=8) (actual time=0.015..0.015 rows=1 loops=1)
           ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=8) (actual time=0.001..0.004 rows=3 loops=1)
   ->  Nested Loop  (cost=0.00..0.07 rows=1 width=104) (actual time=0.191..0.205 rows=3 loops=1)
         ->  CTE Scan on crashes  (cost=0.00..0.02 rows=1 width=72) (actual time=0.162..0.164 rows=3 loops=1)
         ->  CTE Scan on totals  (cost=0.00..0.02 rows=1 width=32) (actual time=0.006..0.006 rows=1 loops=3)

You can see here that we now have an Aggregate (for the SUM()) plus a Nested Loop (for the CROSS JOIN). This example query only has three results returned, but our more typical queries involve 10k or more rows returned.

Avoiding the Aggregate and Nested Loop will save us lots of memory and processing time on every run of a very expensive query.

What I mean when I talk about collaboration with teachers: part I

I’ve given a few talks about my experience learning to teach. This is an edited version of my speaking notes for the keynote I gave at the Computer Science Teachers Association conference. This is the best distillation of my thoughts about the value of open source in my life, and what motivates me to contribute and teach. The first half is over 2000 words, so I’m breaking this into two posts. The next part I publish will be the second half of the talk – about the classes I’ve taught, and my lessons learned about what people need to know to get started in free and open source software.

I am a beginner teacher. I’ve only just started writing lessons and teaching classes to adult women who are learning or practicing their programming. All of what I share today is based on my personal experiences working with first-time, adult programmers. My plan today is to tell you a little bit about me and what motivates me to teach and contribute to open source, share with you the successes of some of our beginner adult programming efforts, and finally what I think open source communities offer teachers.

And I want to start by giving away my punchline. When it comes to working with the open source community – of which I’m a member and a leader, and there are many, many leaders without any kind of central authority – I can say for sure today that we’ll come to you.

I’ve been working for the past couple years to find like-minded open source community members, and for those of you in the audience today, I am making a commitment to you – if you want it – to find an open source person to come and talk about what it is that they do to your classroom.

Just contact me (you can leave a comment below – just indicate if you’d prefer I not make your comment public), and I will make this happen, either through Mozilla or through my open source collaborators. I’ve spent the last 16 years going to conferences, and I would like to introduce that network of people to you.

I want to start with something Julie Horvath said recently. She wrote a blog post about women in tech and it struck such a chord with me. The first sentence really stopped me dead in my tracks.

I didn’t grow up thinking I could do anything I wanted to.

When I look at this again, I feel overwhelmed by how much it matches what the women I’ve taught say they think about programming.

I see this every time I walk into a classroom to teach beginning programmers. This is a photo from a class on algorithms, people doing a pen and pencil exercise in groups. Several women said afterward they finally felt confident that they could explain what algorithms were. That before coming and working on this in these groups, they literally had never really thought about how algorithms related to programming or what it might mean to implement or create their own algorithms.

I’ve come to think of this as a possibilities problem. People truly have no idea what is possible for them in computer science. And in my teaching experience in particular, many women coming to these classes have a very limited view of what they can accomplish. They don’t know what the job opportunities are, they don’t realize how programming can be used in their lives outside of work, and they know very little about how a computer works or what the main components of a computer are.

When I think about what I really need to do — what my focus is in teaching that I do — I think about changing the scope of what people think is possible. Broadening the scope, and enhancing whatever details I can that make studying programming and ultimately computer science relevant to the lives of the people coming to these classes.

So if we were just to attach a little overdeveloped importance to this idea of expanding the scope of possibility, we could call this “possibility engineering.”

In my experience, there are two basic things I have to do – I need to raise awareness, and then I need to offer encouragement. That’s in addition, of course, to teaching the real skills that people need. And as classroom teachers, you’re all aware of the need for these two things. I’ve found that these issues are often left out of how outreach and teaching in open source communities is structured.

This is a picture of a sticky note I drew of how I felt while learning to use a new programming language or trying to learn a new module in Python. The top of the graph is “euphoria” or “happiness”, and the bottom of the graph is “despair” or unhappiness. You can see I have a lot of ups and downs!

The peaks are when I’m reading documentation for the first time, succeeding with experiments and implementing code. The valleys are when I actually try the tutorials and they don’t completely work, when I write code that fails and when I’m trying to refactor my test suite. In the end, my emotions level out and if I’m lucky, I end up satisfied with the tool I chose to work with.

Here’s what I think happens sometimes with the women who come to PyLadies and then never come back. They initially are very happy, but then something happens that causes them to give up.

In one case, I know exactly what happened — a woman attended the workshops, tried things on her own that didn’t work, and then finally had something break with Python on her Windows laptop, and she never came back.

What happens when PyLadies succeeds? What does the emotional graph look like?

What I’ve seen in the 60+ women that keep coming to meetings is that they continue to have difficult experiences – things break, they don’t know how to fix them.

But they all come back to the group. They ask questions, they commiserate over things that don’t work, and they get the help they need to see that they are improving, at the same time as they are making friends and being supported. The in-person experiences are key.

Before we go on, it’s important to acknowledge a key truth about what it is that teachers are teaching. Computer Science is a way of thinking and solving problems. It’s not a company or a product.

This is of course obvious to all of you in this room – but it’s such an important idea to come back to in all of our work. It’s about getting kids or adults to understand the basics of what a computer is and what it does, and how it stores data about what, where, how and when we do things. We need people to understand these concepts in the same way that we need people to be able to read. When our society is increasingly assisted, augmented and controlled with the help of computers, democracy is at stake when most people have no idea how a computer and software work.

The role of open source groups like PyLadies, of non-profits like Mozilla, is ultimately to empower people: to spread knowledge, dispel myths and invite exploration.

But these groups are mostly helping out people who are already out of high school.

There’s a fair amount of research at this point about what many people think about computers when they’re in high school. I’ve mostly read about what girls think, and try to keep that in mind when I’m advertising my courses. Which brings me to what I thought computer science was all about when I was in high school.

What I knew was:

  • Computers were for playing games
  • Computers were for anti-social boys
  • You’ll find lots of inappropriate, animated ASCII art on computers

And I think that highlights a problem with how we’re collectively handling explaining computer science to the world. We can’t rely on ad-hoc self-education, or discovery learning to help people understand how the whole world is changing.

Here’s a list of job titles from my colleagues in the industry. Many of these are jobs that didn’t exist 20 years ago, some are jobs that didn’t exist five years ago. So much is changing so fast.

Despite that, we have some real principles – computer science principles – underpinning it all. That’s where we need to focus, while at the same time exposing people to this wealth of possibility.

So, how did I, a person who thought computers were for gaming, for boys, and probably a little bit seedy, get from there to thinking it might be possible to join an open source community and move on to actually changing something I cared about?

In 2000, I made my first contribution to an open source project. I was working at Intel, managing network equipment monitoring, and I’d found a problem with how I’d set everything up: I needed to modify something like 7000 files to fix it. So I wrote a simple script.

Not too long after that, someone else had a similar problem and posted about it to a mailing list. So I decided, I might as well help that guy out and post the script. Then, I did.

And what happened next totally changed my life. The maintainer of the project not only thanked me and asked a bunch of questions, he accepted my patch, committed it to the main repo, and added me as a contributor on the project’s site.

Mind Blown

I changed the source code of a tool I used every day.

I felt deliciously powerful, so important! And incredulous that something that I’d written that was so obviously terrible, was good enough to be part of a piece of software that I not only used every day, but thought was incredibly great.

And other people used it! I know because I got bug reports later.

Today, I’m a major contributor to the PostgreSQL community, and I founded a chapter of PyLadies in Portland. I’m also deeply involved in many aspects of open source community organizing, like running conferences and helping out with the Ada Initiative. It’s hard to overstate how much that patch affected the rest of my life.

I’m super passionate about open source software and I really think collaborating with teachers is awesome. And what I think in particular is great about collaboration between us is getting the open source community to understand teaching at scale. By that, I mean learning how to teach everyone — the way that we teach in our public education system.

Public education is a grand experiment, and a very successful one. Despite the many issues we have with the administration of it, we have a literacy rate that enables us to sustain a democracy and a system for getting an incredible amount of information to most of our republic’s citizens.

We should be using this system to teach everyone about computer science.

Beyond that, I want open source communities to figure out how to teach at that kind of scale. Not only do we need computer science in the classrooms — we need free and open source principles and tools to be taught as well.

We can get there with community members reaching out to teachers as a first step.

And an important part of that is learning what the process of developing lessons and teaching students in classrooms is all about. This is the huge thing we (free and open source developers and community members) can learn from you (teachers).

Teachers and open source communities have a lot in common. Some of the more important things are:

  • Minimal resources
  • Teach anyone who shows up
  • Change the world by sharing ideas

My dream in this is that we’ll find a way to provide effective computer science education for everyone.

We’re trying to find that minimal set of concepts that will make people feel empowered at a keyboard, a kiosk or any computer they interact with in their lives. That they understand what’s being said in the newspaper about computers, that they can ask questions without feeling shamed or stupid, and that they can learn more if they choose.

And I don’t mean at all to say that you’re all signing up for teaching everyone. But I am signing up to at least try to do this for the adults in my life that need and want it.

Second half of this talk coming shortly…

What Open Source developers can do for teachers: volunteer to speak in a classroom

I keynoted the Computer Science Teachers Association annual conference on July 16th. I gave a talk about how open source developers can help teachers, and what I’ve learned about teaching programming to beginners. The slides are available, although not completely informative about what I said. I plan to write up what I said in that talk after OSCON is finished. Here’s part 1 of the talk.

My pitch to the teachers was: send me an email, and I will find a free and open source community member to give a 15-20 minute talk in your classroom. Lots of teachers are taking me up on it.

To get this done, I set up a form for FOSS community members to provide their contact information. I’ve gotten 163 people as of 8:20am PT July 24. Thanks so much everyone! I know quite a few of the people who have filled out the form, and there are a ton of new people as well. I’m incredibly excited about this. We can all do a lot of good by actually meeting the teachers in our local areas!

Anyone is welcome to sign up, in any location. We have gotten volunteers from every continent except Africa so far.

Most of my teacher contacts are in the US, and I know a few teachers in Germany. If you know teachers who’d like speakers, I’d love to hear from you as well.

I’ve also started a conversation with a few developers about creating an app for connecting teachers and FOSS community members. We’ve got some code in a repo, and a couple people interested in an API for use in some neat remixing applications.

If you’ve got some Django or front-end experience, and you’d like to donate a little time, we’d be happy to have you join us in developing a tool for managing the contact process. I really want the introductions between people to continue to be a “warm handoff”, but it would be nice to remove me as a single-point-of-failure.

I’m setting up a meeting next week. Just ping me either on this post or via email.

A practical guide to using Alembic

I spent some time guiding a coworker through using Alembic for the first time with Socorro this morning and what follows are my notes from that meeting.

I’ve been using Alembic, a database schema migration tool, for about three months now, and really liking it a lot. As my first stab at user education, I created a blog post, A lightspeed tour of Alembic, that served as a slide deck for an internal team.

Setting things up initially was pretty simple, but explaining it to a coworker after I’d set everything up for myself proved slightly more difficult. Below are my notes on the differences between Alembic and some other migration tools.

Terminology

Alembic calls each migration a revision. Revisions know what order to be run in because each revision is given a down_revision to identify its parent. If down_revision is None, that revision is the very first revision according to Alembic. You can put your whole schema in that revision, or you can just start adding changes to this initial revision. Alembic doesn’t complain either way.
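For illustration, a bare-bones revision file might look like the sketch below. The revision identifiers, table and column names here are made up; the real identifiers are generated by alembic revision, and the real schema comes from your own models.

"""add a column to the reports table (illustrative example)

Revision ID: 3b2c1d4e5f6a
Revises: 1a2b3c4d5e6f
"""
from alembic import op
import sqlalchemy as sa

# Alembic orders revisions using these two identifiers; a down_revision
# of None marks the very first revision.
revision = '3b2c1d4e5f6a'
down_revision = '1a2b3c4d5e6f'


def upgrade():
    op.add_column('reports', sa.Column('build_id', sa.TEXT()))


def downgrade():
    op.drop_column('reports', 'build_id')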

A best practice would likely be putting your entire model into the first revision. I may go back and “fix” this for us later. I opted to just have the default use case be to create a database fresh with a tool we call setupdb_app.py.

If you’re looking to migrate to using Alembic, you’ll also need to use SQLAlchemy. I used sqlautocode for my initial schema reflection, and there’s a newer tool, sqlacodegen, you may want to check out for generating your SQLAlchemy models for the first time.

Preparation: edit config and activate a virtualenv

Our environment was set up per the alembic tutorial for creating an environment. I ran:

alembic init alembic

I also put an alembic.ini-dist file into our project’s config/ directory, and modified alembic/env.py to include our model.

To get started working with an existing install, you’ll need to modify alembic.ini-dist, and copy it to config/alembic.ini to fit your environment – setting the connection string and the path to the alembic directory are the two most important settings. We have a script which creates databases from our models.py called setupdb_app.py. This script takes --database_name as a command-line argument. My default for our project is to use breakpad.
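A minimal config/alembic.ini sketch, with an example connection string and path (adjust both for your environment):

[alembic]
# path to the directory containing env.py and versions/
script_location = alembic
# connection string for the database to migrate
sqlalchemy.url = postgresql://breakpad_rw@localhost/breakpad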

We use a virtualenv called socorro-virtualenv. The virtualenv is created automatically if you run make test. If you’re creating a standalone virtualenv, you can do that with virtualenv socorro-virtualenv. Activate this with . socorro-virtualenv/bin/activate.

Creating a revision

  1. Create a fresh database to work from. For Socorro, the command is: PYTHONPATH=. socorro/external/postgresql/setupdb_app.py --database_name=breakpad
  2. Edit models.py with the change to the schema
  3. Run: PYTHONPATH=. alembic -c config/alembic.ini revision -m 'your message about the migration'. The output will include the name of the new file.
  4. Edit the new file as needed alembic/versions/*.py
  5. Run: PYTHONPATH=. alembic -c config/alembic.ini upgrade +1
  6. Test your downgrade with PYTHONPATH=. alembic -c config/alembic.ini downgrade -1

If all goes well, your revision is ready! If something goes wrong, edit and try again. The revision will automatically rollback if there are any errors.

Downgrades are a little tricky to properly execute. In an ideal world, you’d be able to revert the underlying code, but preserve only the commit containing the migration. More on this in a future blog post!

Creating a revision using --autogenerate

This is very similar to the above, with the addition of --autogenerate to your revision command. This should do the right thing, but definitely check your generated file for accuracy.

  1. Create a fresh database to work from. For Socorro, the command is: PYTHONPATH=. socorro/external/postgresql/setupdb_app.py --database_name=breakpad
  2. Edit models.py with the change to the schema
  3. Run: PYTHONPATH=. alembic -c config/alembic.ini revision --autogenerate -m 'your message about the migration'. The output will include the name of the new file.
  4. Edit the new file as needed alembic/versions/*.py
  5. Run: PYTHONPATH=. alembic -c config/alembic.ini upgrade +1
  6. Test your downgrade with PYTHONPATH=. alembic -c config/alembic.ini downgrade -1

If all goes well, your revision is ready! If something goes wrong, edit and try again. The revision will automatically rollback if there are any errors.

Production deployment

You’ll need to deploy an alembic.ini on your production database system and probably a virtualenv to support your python modules.

We deploy our virtualenvs with our application, so this step was pretty simple for everything except for alembic itself. The virtualenv put in full, static paths for the python binaries and had some dependencies that I haven’t figured out yet for actually running alembic. To get around this, I created a virtualenv locally on the system for the postgres user. Having your postgres user run the migrations locally is a must for me because I need to access the filesystem to pull in new versions of user defined functions stashed in the directory my model lives in.

I just deploy a new release of our application code on the database server locally, and then I run alembic against the versions directory that’s deployed.

FAQ

And here’s an FAQ for the common problems folks ran into:

OOPS I forgot to create a database before I created a revision!

To “fix” this, try:

  1. Create the database from scratch using your current models.py.
  2. Run: PYTHONPATH=. alembic -c config/alembic.ini downgrade -1
  3. Run: PYTHONPATH=. alembic -c config/alembic.ini upgrade +1

Assuming your downgrade function works, this should allow you to reverse the latest revision and then test your migration.

Error message: “Only a single head supported so far.”

See Working with Branches.

I’m using schemas, and alembic doesn’t recognize them when I try to use --autogenerate.

See include_symbol. And be sure to add this to both the “offline” and “online” versions of the revision code in env.py.
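Here’s a sketch of what that can look like in env.py; the schema name is just an example:

def include_symbol(tablename, schema):
    # Tell --autogenerate which schemas' objects to consider.
    return schema in (None, 'my_schema')

Then pass include_symbol=include_symbol to context.configure() in both run_migrations_offline() and run_migrations_online().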

Error message: Target database is not up to date.

This means you’ve got a file in your versions directory that contains one or more migrations that haven’t been applied to the current database. You can either apply them with alembic upgrade head or have a look in that directory and remove the migration(s) that you don’t want.
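With our config layout, that looks like:

PYTHONPATH=. alembic -c config/alembic.ini upgrade head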

Why give credit to reviewers?

This is a lightly edited version of an email I sent to pgsql-hackers today.

Josh Berkus asked:

> How should reviewers get credited in the release notes?

Without getting into how the community might decide to do this, I thought it might be helpful to share the reasons why I believe recognizing and expressing gratitude to reviewers is a helpful, useful and gratifying exercise for the Postgres community.

I support crediting reviewers in a more formal way than we currently do for a few different reasons.

First, I believe it’s worth finding a way to say “Hey, you just did something great for Postgres”, publicly, to a bunch of people who could have spent their valuable time and energy in some other way.

Second, reviewers get better at their work by reviewing multiple times – so I’d like to encourage people to review more than once.

Third, reviewers don’t always need to be expert developers, or experts at Postgres, to get started, but many people who do open source work have no idea this is true. Public recognition helps make it clear that we have people who give useful reviews while still being relative novices.

We also have several different kinds of reviews:

  • “does it compile”
  • style/typo/easily seen bug passes
  • in-depth discussion of design choices, use case, interface
  • complex testing cases
  • performance testing
  • pre-commit checks for more subtle bugs or committer preferences.

All of those, except probably the very last, can be done by people who are familiar with Postgres or its configuration, but aren’t necessarily Postgres or C experts.

Fourth, we have very few accepted ways to recognize contributions to Postgres. Naming people in the release notes is one method this community has consistently supported for publicly saying “hey, you just did something great for Postgres”. The complete list of ways I’m aware of is:

  1. Recognizing major, minor and emeritus contributors
  2. Making someone a committer
  3. Being part of the -core group
  4. Naming authors by name in commit messages (but without consistent metadata, making it difficult to count well)
  5. Naming authors in release notes

That’s pretty much it. That’s great for the people who have already secured positions through seniority, or because they’re amazing C hackers. I don’t know if I need to lay out for everyone the benefits of public recognition – if you want me to, I can enumerate them. But the benefits of public recognition are huge — both in a social and a financial sense.

Currently, the only way I know of to be recognized for work on Postgres that is not seniority or code-related is #1. If you’re a reviewer, there’s almost no chance you’ll be recognized in our list of contributors without some additional, very significant contribution to our community. (Please let me know if I’m mistaken about this — I only know what I know!) Adding names to Release Notes (or some variant of Release Notes) seems like a minor concession for work that we as a community need, value and want to encourage.

We are so few in terms of patch contributors – somewhere between 300-400 people contribute code to PostgreSQL each year based on the names I try to pull out of commits. I haven’t counted how many reviewers we have who do not also contribute patches separately.

Giving people appreciation for the review work they’re doing, for free, is a good thing for everyone. Naming more names helps describe the true scope of our community. Spreading gratitude is good for those who thank and those who receive thanks (like, proven scientifically!). And we increase the number of people who benefit directly from the work that they do here, by giving them something they can point their boss, their company and their colleagues to.

So, when we’re debating how recognition might be done, please don’t lose sight of why this is important in the first place.

More about JavaScript and PostgreSQL

People asked a lot of questions about what you can do with the JSON datatype and PLV8! My slides are available from the talk at this dropbox link. Speakerdeck seems to be busted for the moment. And here’s my gist with the ‘liberate()’ function.

Here are some links to resources I’ve found for using PLV8 and the JSON datatype:

And folks who took notes from my talk:

JSON, PLV8 and Postgres 9.3

Black Skimmer

I’m presenting a talk about JSON and PLV8 (available on speakerdeck or on dropbox) at the JSConf 5th year Family Reunion, in Amelia Island, FL today. My husband and I took a couple extra days vacation between the conference and the Memorial Day holiday to go birding. So, in the slide deck, I included pictures we took of several of the 40+ species we’ve seen.

The Great Florida Birding Trail starts on Amelia Island, and wow — there are so many amazing species, several of which are endangered, that we’ve seen in the short time we’ve been here. It’s hard to do justice to how beautiful the area is. There’s a wikipedia entry, and some strange politics that have preserved the natural areas nearby, like Cumberland Island.

Regardless, Amelia Island is a stunning backdrop for a conference. I’m a bit of a fish out of water here, but I’ve met so many people excited about the JSON datatype and what it means for their development environments. And, tons of people who wish I would have told them about this a year ago when Postgres 9.2 first started supporting it!

My talk is at 10:30am, right after my fellow Mozillian and creator of JavaScript, Brendan Eich, presents a talk about the future of browser VMs.