Slides (as of this moment) are here: Mistakes were made. I changed quite a bit of the beginning and end, given how big the audience is. In previous talks, we’ve usually ended with a fun “omg, here’s the craziest story I know” session. I imagine we’ll get a little bit of that today.
Postgres folks will note a relevant picture on slide 13. 🙂
This is my first keynote! Thanks so much to SCALE for inviting me. There were at least 1500 registered attendees as of Friday, so looking forward to a big crowd.
A couple questions:
1) What is the boundary between testing and verification?
2) Do you have experiences moving an organization from a culture of “it’ll probably work / we’ll deal with problems as they happen” to “you haven’t convinced me yet / what if it doesn’t work”? That would make a good (though perhaps long) blog post.
Good questions. Someone asked me the first one while I was at LCA.
To me, testing is automated, or at least scripted. Verification has to do with making sure that you’re “doing the right thing”: that what you’re trying to do, and what you’re testing, is actually what you want to change. Verification also probably has more to do with people and policy than with automation and repeatability.
I imagine there may be a formal definition for this in QA or business process circles. If I come across it, I’ll note it here.
Regarding moving an organization along — yes, I have some experience with this. It’s really hard and thankless if you’re at the bottom or middle of the hierarchy. It gets much better when you have support from supervisors or the heads of companies. Or if you can do this just for yourself.
Many of the suggestions I make are changes that you can apply to your own work incrementally. My best experience when there’s a “culture of chaos” has been to just control my own work and show others how calm I am when things break, and how things break less often for me.
Maybe I will write about that more soonish. 🙂
OK, the way I’m picturing it now is that you first verify (which involves trying things, discussions about edge cases you find, etc.). Then, when you reach a consensus that it behaves correctly, you write automated tests to ensure that it stays working that way.
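To make that verify-then-test flow concrete, here’s a tiny hypothetical sketch in Python. The function, its name, and the edge case are all invented for illustration: imagine verification discussions settled that splitting an empty tag string should yield an empty list, and the tests then lock in that consensus so it stays working that way.

```python
def split_tags(raw: str) -> list[str]:
    """Split a comma-separated tag string, dropping empty entries.

    Hypothetical example: the "empty input yields []" behavior is the
    kind of edge case you'd settle during verification discussions.
    """
    return [t.strip() for t in raw.split(",") if t.strip()]


# Once consensus is reached, automated tests preserve the agreed behavior:
def test_split_tags():
    assert split_tags("a, b") == ["a", "b"]
    assert split_tags("") == []          # edge case found during verification
    assert split_tags(" , ,a") == ["a"]  # blank entries are dropped


test_split_tags()
print("all checks passed")
```

The point isn’t the tests themselves; it’s the ordering. The assertions encode decisions that were made by people first, so later changes that silently alter the behavior get caught automatically.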