Day 3 at PgConf.EU: the future, replication, performance and the closing keynote

I was room host for Simon Riggs, Magnus Hagander and Greg Smith today before giving my final talk this afternoon.

The morning started with Simon Riggs talking about his wishlist for the future of Postgres – including some boundary-stretching ideas for bi-directional replication (a way to possibly support multi-master architecture for Postgres). Simon named his talk “Postgres Futures”, but also called it his personal “shopping list” of features he’d like to see implemented, or implement himself. Magnus did a deep dive into the replication protocol and how to use pg_basebackup with 9.1. Greg’s talk on benchmarking is always fantastic, and I learn something new every time. He included some graphs from FusionIO testing he’d done in the last couple of weeks.

I also gave my last talk of the conference, “Managing Terabytes”, about my experiences managing 8.x clusters of a terabyte or larger for several companies. I reorganized this talk from the last time I’d given it, and I think it came across quite a bit more clearly to the audience. One developer suggested that I should have tried a series of updates to catalog tables to recover page space. I’m designing a little test case to help someone do this in the future if they run into this problem with older versions of Postgres. HOT (8.3 and later) essentially fixes this issue, by the way.
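
Roughly, the test case I have in mind looks like the sketch below. It is only a sketch: it assumes a throwaway database named bloattest on a pre-8.3 (pre-HOT) scratch cluster, the loop counts are made up, and the catalog updates need superuser, so please do not point this at anything you care about.

    #!/bin/bash
    # Sketch of a catalog-bloat test case. Scratch clusters only.
    DB=bloattest

    # 1. Bloat pg_attribute by creating and dropping many temp tables; each
    #    psql call is its own session, so the catalog rows die off quickly.
    for i in $(seq 1 1000); do
      psql -q -d "$DB" -c "CREATE TEMP TABLE t_$i (a int, b text); DROP TABLE t_$i;"
    done

    psql -d "$DB" -c "SELECT pg_relation_size('pg_catalog.pg_attribute') AS before_bytes;"

    # 2. The suggested recovery: vacuum so the dead space becomes reusable,
    #    rewrite the live rows with a no-op update so they migrate into that
    #    free space, then vacuum again so empty pages at the end of the
    #    relation can be truncated.
    psql -d "$DB" -c "VACUUM pg_catalog.pg_attribute;"
    psql -d "$DB" -c "UPDATE pg_catalog.pg_attribute SET attname = attname;"
    psql -d "$DB" -c "VACUUM pg_catalog.pg_attribute;"

    psql -d "$DB" -c "SELECT pg_relation_size('pg_catalog.pg_attribute') AS after_bytes;"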

The keynote was shared by Ed Boyajian and Bruce Momjian. Ed mentioned that Oracle had reported its best earnings ever on the most recent shareholder call. In spite of that, there’s a rising tide of Oracle users who are looking for alternatives, given how strongly they’re locked into Oracle’s technology. He said that he recommends companies use Postgres as a strategic lever to negotiate with Oracle. And as IT departments strapped for cash try to figure out how to fund new data initiatives, they’re turning to products that are free.

Bruce then quoted the opening keynote by Ram Mohan – “With open source, support is a whole new level.” Bruce’s comment was that what Ram did when he started with Afilias 10 years ago was heresy by the conventional IT wisdom of the time.

Bruce also said that he’d always thought Postgres would ultimately only ever be a niche player among databases. But with all the progress we’ve made as a project, and the new markets being explored, he sees much greater possibilities for the project.

He asked the audience about the speed at which bugs had been fixed – within 24 hours, a few days or a single week. Only one hand was raised for a bug requiring more than 1 week to be fixed, among probably 40-50 hands raised for much faster fixes.

Bruce also noted that developers are often moved to work and stay with Postgres as a project, because they have decided that “this is an important thing for me to do in my life.”

PgConf EU was a great conference, and I’d be happy to be invited back, wherever they decide to hold it in 2012.

Day 2 at PgConf.EU: hallway track and the marketing of Postgres

The hallway track is always my favorite part of the conference. I had to give a full-length and a lightning talk today, so much of my time was spent making sure I was really prepared and then giving the talks!

But between talks, I got to chat with Heroku, 2ndQuadrant and EnterpriseDB folks about what they think is coming next in the world of enterprise development and Postgres.

One topic that I touched on in those conversations and my lightning talk (Postgres needs an aircraft carrier) was that our plan for world domination needs to get quite a bit more specific and actionable.

For the open source community, the right question is not “are we ready to tackle the enterprise?” — the right question is: Which market segment and customer group are we going to target for complete market domination?

One area that we definitely already dominate is online poker. We have had a few blog posts about it, but not a whole lot else. Another is GIS through PostGIS.

I created a survey to try and capture some scenarios from the developers who work with customers every day solving problems. We need to know more about the people using Postgres and the way that they use the database.

If we can get 30 responses, I’ll publish the results. It’s a bit long, and requires some thought, so I imagine it will take some time to get them all.

If you have a customer that you think represents a good target market for Postgres, take 10 minutes and fill out the survey for us!

Day 0: PgConf.EU

Yesterday was spent settling into the Casa 400 and reconnecting with the European Postgres community!

The hotel allowed us to check in very early and so I got to settle in, grab lunch and a nap before we set out for the evening.

We had informally decided to go on a pub crawl with whoever was already in town. The decision making about where to go started around 5pm.

We now have about five years of experience trying to get 20 or more people into bars and restaurants without calling ahead, and last night luck was with us!

I broke down our lessons learned as follows:

  • Start the planning one hour before the intended departure time.
  • Have a printed map, even if it is not consulted.
  • Ask for help from locals.

Thanks to Greg, we actually asked a local bar owner about where to go, and he called a friend’s restaurant for us!

We ended up at Nels for dinner, with nearly 25 people taking over most of the restaurant. I was lucky enough to have a friend in town from Portland who joined us for a long conversation about marketing, PostgreSQL, geek cruises and aircraft carriers. I think I have a topic now for a lightning talk.

After that, we walked around searching for ice cream for Ads. We ended up at Pasta e Basta, which had ice cream and singing wait staff. After hearing “Hit the road, Jack”, “Here comes the sun” and a few other tunes, we closed out the night at the hotel bar.

I got to meet the author of pgChess, Gianni Ciolli, and catch up with Jonathan Katz, Dimitri Fontaine and Peter Geoghegan.

All in all, a great start to the week at the conference! I’ll be room hosting this afternoon in room #3.

Headed to PgConf.EU

I’m headed to Amsterdam for PgConf.EU and very excited for my very first European Postgres conference.

I’m giving two talks – Managing Terabytes, and Mistakes Were Made. Both are cautionary tales about the things that one can do terribly wrong with database management, and system operations management. My goal with these talks is to start a conversation about what we can learn from failure.

I encourage everyone to share their stories about what fails. Not only are they great “campfire stories” for entertainment, but they help us all learn faster, and they teach us what ultimately works when everything is failing.

In the same vein, UpdatePDX is putting on another “tales of failure” set of short talks the following week back in Portland. I’ll be leading the charge with a short story of my own, followed by at least two other tales of failure.

Update releases for 9.1.1, 9.0.5, 8.4.9, 8.3.16 and 8.2.22

Today the PostgreSQL Global Development Group released branch updates for all supported versions. You can go ahead and download them now!

There were quite a few fixes for somewhat obscure crashes, fixes for memory leaks discovered by some valgrind testing, and a couple big fixes for GiST indexes, like this:

* Fix memory leak at end of a GiST index scan

gistendscan() forgot to free so->giststate.

This oversight led to a massive memory leak — upwards of 10KB per tuple
— during creation-time verification of an exclusion constraint based on a
GIST index. In most other scenarios it’d just be a leak of 10KB that would
be recovered at end of query, so not too significant; though perhaps the
leak would be noticeable in a situation where a GIST index was being used
in a nestloop inner indexscan. In any case, it’s a real leak of long
standing, so patch all supported branches. Per report from Harald Fuchs.
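
For context, the kind of thing that exercised this code path is adding a GiST-backed exclusion constraint to an already-populated table, since every existing row gets verified against the index; at upwards of 10KB leaked per tuple, that adds up fast. A hypothetical 9.0-or-later illustration, with made-up table and column names:

    # Hypothetical illustration only: creation-time verification of a
    # GiST-backed exclusion constraint on an already-populated table.
    psql -c "CREATE TABLE booths (spot circle);"
    psql -c "INSERT INTO booths SELECT circle(point(i, i), 0.4) FROM generate_series(1, 100000) AS i;"
    # The ALTER TABLE below checks each existing row against the new GiST
    # index, which is where the per-tuple leak accumulated.
    psql -c "ALTER TABLE booths ADD EXCLUDE USING gist (spot WITH &&);"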

There were a few fixes for catalog or catalog index corruption, and avoidance of buffer overflows which could cause a backend crash. There were also a few fixes that will improve the performance of VACUUM over time.

The release notes have all the details. Most of these fixes were already included in the 9.1.0 release (there are only 11 new commits in 9.1.1), so users of 8.2 through 9.0 are about to pick up a great many bugfixes.

Another thing to note – 8.2 will reach end of life in 2011! You ought to upgrade anyway, just to get HOT and to get yourself into a position to use pg_upgrade for future upgrades. But now, you’ve got extra incentive.
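
For reference, once you’re on a version pg_upgrade can read (8.3 or newer as the old cluster), an in-place major upgrade looks roughly like the sketch below. The paths are made-up Debian-style locations, and you’d stop both clusters and run it as the postgres user:

    # Illustrative only: upgrading an 8.4 cluster to 9.1 in place.
    pg_upgrade \
      --old-bindir=/usr/lib/postgresql/8.4/bin \
      --new-bindir=/usr/lib/postgresql/9.1/bin \
      --old-datadir=/var/lib/postgresql/8.4/main \
      --new-datadir=/var/lib/postgresql/9.1/main \
      --link    # hard-link data files instead of copying them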

My Postgres Performance Checklist

I am asked fairly frequently to give a health assessment of Postgres databases. Below is the process I’ve used and continue to refine.

The list isn’t exhaustive, but it covers the main issues a DBA needs to address.

  1. Run boxinfo.pl on a system
    Fetch the script from http://bucardo.org/wiki/Boxinfo. Run as the postgres user on the system (or a user that has access to the postgres config).
  2. Check network.
    What is the network configuration of the system? What is the network topology between database and application servers? Any errors?
  3. Check hardware.
    How many disks? What is the RAID level? What is the SLA for disk replacement? How many spares? What is the SLA for providing data to the application? Can we meet that with the hardware we have?
  4. Check operating system.
    IO scheduler set to ‘noop’ or ‘deadline’, swappiness set to 0 (http://www.pythian.com/news/1913/what-exactly-is-swappiness/). A quick check script for this and a couple of the later items is sketched after this list.
  5. Check filesystems.
    Which filesystem is being used? What parameters are used with the filesystem? Typical things: noatime, ‘tune2fs -m 0 /dev/sdXY’ (get rid of root reserved space on the database partition), readahead set to at least 1MB (8MB might be better).
  6. Check partitions.
    What are the partition sizes? Are the /, pg_xlog and pgdata directories separated? Are they of sufficient size for production, SLAs, error management, backups?
  7. Check Postgres.
    What is the read/write mix of the application? What is our available memory? What is the anticipated number of transactions per second? Where are stats being written (tmpfs)?
  8. Check connection pooler.
    Which connection pooler is being used? Which system is it running on? Where will clients connect from? Which connection style (single statement, single transaction, multi-transaction)?
  9. Backups, disaster recovery, HA
    Big issues. Must be tailored to each situation.
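
For items 4, 5 and 7, I usually script the basic checks (this is the sketch item 4 points to). It assumes a Linux box with an ext3/ext4 data volume on /dev/sdb1 mounted at /pgdata and a local psql; the device, paths and target values below are examples to adjust per system, not prescriptions:

    #!/bin/bash
    # Rough sanity checks for checklist items 4, 5 and 7. Run the block-device
    # and tune2fs checks as root. Device, partition and mount point are examples.
    DEV=sdb           # block device holding the data filesystem
    PART=/dev/sdb1    # partition holding pgdata
    MNT=/pgdata       # mount point for pgdata

    # Item 4: I/O scheduler (want noop or deadline) and swappiness (want 0).
    cat /sys/block/$DEV/queue/scheduler
    sysctl vm.swappiness

    # Item 5: mount options (noatime?), readahead (>= 2048 sectors, i.e. 1MB),
    # and root-reserved space on the data partition (tune2fs -m 0 reclaims it).
    grep " $MNT " /proc/mounts
    blockdev --getra $PART
    tune2fs -l $PART | grep -i 'reserved block count'

    # Item 7: a few basics from the running server, including where the
    # statistics temp files are written (ideally a tmpfs; 8.4 and later).
    psql -At -c "SHOW shared_buffers;"
    psql -At -c "SHOW checkpoint_segments;"
    psql -At -c "SHOW stats_temp_directory;"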

What’s your checklist for analyzing a system?

Seeking: Database Disaster Stories

I’m going to give another “Mistakes Were Made” talk at PgConf.EU next month.

I have many disaster stories of my own, but am always looking for more! Stories of data-destruction and tales of unexpected failure are welcome.

You can leave them in the comments, or email me.

The talk focuses on the ways in which systems fail, and the typical kinds of failure we find in web operations. Types of failure I focus on are:

* Failure to Document
* Failure to Test
* Failure to Verify
* Failure to Imagine
* Failure to Implement

Stories that fall outside those categories are especially welcome.

I look forward to your tales of woe!

9.1 presentation at Windy City Perl Mongers

I recently updated my PostgreSQL 9.1 slides for a presentation at the Windy City Perl Mongers.

We discussed 10 features that the Postgres community decided to emphasize in our press releases. The crowd was primarily people who had never used Postgres before, which was a bit of a different audience for me.

It was great to be able to compare notes with folks who are supporting Oracle and SQL Server, and see a lot of excitement for trying out 9.1.

When I’m traveling around, I’ll be looking for more non-Postgres user groups to give talks like this. Let me know if you’d like me to come speak at yours!

Postgres Open: next year (!), resources, video

Postgres Open is over!

I wanted to share a few resources, and remind attendees to fill out our survey. I really appreciate the detailed comments I’ve been getting! Keep them coming.

I wanted to specially thank our program committee:

Robert Haas
Josh Berkus
Gavin Roy
Greg Smith

They were the people who put together and edited the website, found sponsors, recruited speakers, voted on talks, gave talks and tutorials and executed the many tasks needed to make the conference a success. We plan to make key members of the Postgres community part of the operation of the conference going forward. We’re really just emulating the way that PgCon is run.

I have some more thoughts about what makes a conference “community-operated”, and once my budget numbers are settled, I’m going to share what running the conference costs, both in terms of my time and in dollars. It’s important to understand the costs involved, how much time is required, and what that means for you as a sponsor, speaker, attendee or volunteer supporting what we are doing.

NEXT YEAR: September 17-19, 2012

I’m pleased to announce that next year’s conference will be held September 17-19, 2012 at the Westin Michigan Avenue in Chicago. So mark your calendars now!

The conference will continue to be operated as a non-profit, with proceeds going toward operation of the following year’s event, and a very small percentage going to Technocation, Inc – our fiscal sponsor and a 501(c)3 organization dedicated to developing educational opportunities and resources for software professionals.

We had fantastic support from our sponsors this year, and hope to expand that next year.

In particular, the support of 2ndQuadrant, EnterpriseDB, Heroku and VMware was instrumental in pulling this event together. We really only started planning in May. It feels good to now have a whole year ahead of us!

With greater sponsor support, we can help fund some of the things that attendees asked for, like soda (at $8 per soda, I feel as though we should get some kind of gold plating for this), conference t-shirts, and a closing party.

Please get in touch if you or a company you know is interested in sponsorship for 2012!

Slides:

Speakers are uploading or linking their slides to the PostgreSQL wiki. If the slides you’re looking for aren’t there, please ping the speaker or me.

Streaming Video:

Streaming content will be available for about 30 days.

I will be getting all the video on flash drives this week. My plan is to upload it to either Vimeo or YouTube. I don’t really have the resources to provide individual copies of the videos, but if we find a location for raw data upload, I’ll pass that along to you all.

Looking toward Chicago: Postgres Open, local user groups, parties and on to October!

I’ve been incredibly busy this past month, and not blogging – being a free agent has possibly made me busier than I was before!

Postgres Open’s schedule is in near-final state. We’ve started adding talks to our Demo room on Thursday, and are looking forward to a keynote from Charles Fan, SVP at VMware, about recent developments in VMware’s cloud offerings for Postgres.

We’ll also be getting a more in-depth look at Heroku’s new postgres.heroku.com on-demand database service, as well as an open source tool they wrote called WAL-E.

Thanks to Heroku, we’ll be streaming much of the content from the conference live, so you’ll be able to catch the keynotes and many of the talks, even if you’re not there. And we’ll be sharing the videos after.

I believe we’re the first Postgres conference to do this! Someone correct me if I’m wrong. 🙂

While I’m in Chicago, I’m planning to drop by the Windy City Perl Mongers for a reprise of my 9.1 talk from OSCON.

We’re also planning a couple parties for Postgres Open, and hopefully inviting a few of the local user groups to join us.

After that, I’m headed in October to PostgreSQL Conference EU, where I’ll be giving a talk about terabyte Postgres databases (and the problems you run into with them), and a database-specific “Mistakes Were Made” talk about operations and the tools we need to use to help us make fewer mistakes.