Naming conventions are poison.

I’m not referring to sensible naming conventions like “table names are always [singular|plural]” or “method names should be short but descriptive.” Those naming conventions are fine. They’re safe. They protect us from the stupidity of future generations of us (or at least uncaffeinated versions of ourselves).

Poisonous naming conventions are the naming conventions that assume something about the physical implementation of a thing. When you name an attribute `DCreatedAt` because it’s a date, or `objAppointment` because it’s an object, or even when you name a table `t_Users` or `tbl_Permissions`, you’re poisoning the world. It’s not about having to type more or read more – I can guarantee you that your code is shit anyway. The poison comes from the realization that your decision is only valid for today. Nothing says that your date will remain a valid date. Nothing says that the account number will always be a number; starting tomorrow someone could add letters to the numbers and `n_AccountNumber` stops meaning anything.

You’ve poisoned your future self. You can’t look at a piece of code and anticipate what you’re going to find. With rigid physical naming conventions, the instant that something is changed in the model there are a limited number of choices available.

1. Do nothing. At this point the model is suspect and we have to resort to metadata and remembering exceptions to make sure we know which columns are really integral, which are dates, and which are strings. At this point, I hate you. Forever.
2. Change everything. Every database query that uses the column will need to change. Every use of an object’s attributes will need to change. Miles of code will be changed and tested. At this point I hate you, but not forever.

If there were no explicit reference to the underlying implementation, then I don’t need to hate you. I can just be confused that the metadata has changed and mildly annoyed that `AccountNumber` is now a string instead of an integer, but whatever. Some form of IntelliSense or metadata inspection is going to help me figure out what data type is living underneath that attribute. I don’t need to care – the software is caring for me.

The same thing happens when we drop down to the physical level – when I encounter a table named `tbl_PileOfGoats` I have to wonder what the original data modeler was thinking, or if they were even thinking. Ignoring the problem of modeling a pile of goats, there’s the problem of implementation. What constitutes a pile of goats? What if at some later date, we need to split out goat piles into multiple tables to support long term goat management technology? Logically we might end up with `Goats`, `PilesOfGoats`, and `GoatsInPiles`. At that point we have to replace the table `tbl_PileOfGoats` with a view named `tbl_PileOfGoats`. Suddenly, I can’t even trust tables to actually be tables in the database.
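
When that day comes, the only way to keep old code running is to hide the new tables behind a view that still wears the prefix. A sketch, using hypothetical definitions:

-- The refactored model: goats, piles, and the relationship between them.
CREATE TABLE Goats        (goat_id serial PRIMARY KEY, name text);
CREATE TABLE PilesOfGoats (pile_id serial PRIMARY KEY, location text);
CREATE TABLE GoatsInPiles (goat_id int REFERENCES Goats,
                           pile_id int REFERENCES PilesOfGoats);

-- Legacy code still expects a table named tbl_PileOfGoats, so the
-- "tbl_" prefix now decorates a view. The name is a lie.
CREATE VIEW tbl_PileOfGoats AS
SELECT p.pile_id, p.location, g.goat_id, g.name
FROM   PilesOfGoats p
JOIN   GoatsInPiles gp ON gp.pile_id = p.pile_id
JOIN   Goats g ON g.goat_id = gp.goat_id;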

By forcing a rigid, implementation-based naming convention on our systems, we remove any notion of modeling in the true sense of the word; we destroy any abstraction we could make by forcing users and consumers to be grossly aware of the implementation details. It shouldn’t matter if `CreatedAt` is stored as a date, a time, or milliseconds since the theatrical release date of Breakin’ 2: Electric Boogaloo; it’s just a value that is referenced to produce some sort of meaning. That meaning is separate from the implementation. In an object-oriented world, `CreatedAt` doesn’t even need to be a single value. C# and Java give developers the ability to create properties that hide getter and setter methods – several other attributes, objects, or even external service calls can be used to provide the value of an attribute.

There’s no value in forcing a one time implementation decision into the naming conventions of your model. It limits expansion – not by imposing a physical limitation, but by imposing limitations in the mental model of the people using your system. When a few key identifiers in a system can’t be trusted to supply correct information, none of them can be trusted. A system with only a few incorrect types encoded in attribute names is as useful as a system with no types encoded in attribute names.

The next time you need to design a database, remember that you’re modeling the data and not the implementation.

Community Keynotes

I go to a lot of conferences. If you ask my co-workers, I probably go to too many conferences. Going to a lot of conferences, I get a chance to see a lot of keynotes, closing keynotes, and plenary discussions. Different conferences have different keynotes, but the one thing that sticks out in my head is that the keynotes and opening talks at a conference set the mood for the entire event.

Conference organizers, take note: we form our impressions based on the first things we see. For first-time visitors, the first thing they see at a conference is frequently the keynote. It sets the stage for the learning they’re about to experience. Three conferences stand out this year for their high quality keynotes: OSCON, CodeMash, and Surge.

What Makes For a Good Keynote?

I don’t know what makes for a good keynote, but I do know that it should reflect your conference. The keynote sets the mood; attendees get a feel for what their day is going to be like at a conference based on how they feel after the keynote. Bombard them with two hours of marketing material and they’re not going to feel good about the rest of their day. Reward them with two hours of intellectually stimulating content and they’re going to look forward to another six hours of learning.

Attending OSCON is a heady experience – there are concurrent sessions on different topics. I could go from a talk about Big Data to Go, Google’s new programming language, to a talk about HTML and CSS in a two hour time period. The keynotes were no different: they showcased the breadth of interests present at the conference. Talks ranged from DIY biotech startups to open source community to launching robots into space.

Surge featured a keynote from Bryan Cantrill talking about various failures throughout his career. At a conference full of operations staff and professional generalists, Bryan’s remarks rang true with all of the attendees. We’ve all been in an “oh shit” situation where millions of dollars hang in the balance based on the code we’ve written and the actions we’re about to take. Bryan summed up the feel of the conference and set the stage for the next few days of learning.

CodeMash is a great event that’s run outside of Cleveland in the winter. If you’ve ever been to Cleveland in January, you’ll know that the primary reason to visit Cleveland in January is to remind yourself why you live somewhere else. There were several feet of snow on the ground and my car was indistinguishable from a snow bank. CodeMash is renowned as a fun and educational conference. Chad Fowler delivered a hilarious and topical keynote about moving beyond hearsay and misunderstanding and opening your eyes to the world around you.

What Do All Three Conferences Share?

The common theme is that all three conferences are community conferences. Sure, OSCON is put on by a huge book publisher, but it wouldn’t be a success without the people who volunteer to make it happen. OmniTI supports Surge, but they aren’t the only sponsor; Surge was a vendor-agnostic event. And CodeMash is put on by a non-profit group with the goal of making the event as cheap as possible for attendees. These three events are built for the community.

All three keynotes opened my eyes to the conferences. I knew what to expect: the stage was set. A good keynote sets the expectations of attendees. A keynote doesn’t have to be timeless, and there’s a place for a product demonstration, but a keynote should say something. When a keynote says “We have you trapped, watch this demonstration,” attendees notice: they shuffle papers, they play games on their phones, they do anything possible to mentally escape from the situation. Unfortunately, it does a disservice to the conference because most attendees won’t remember who gave a bad keynote; they’ll just remember that the conference had a bad keynote.

When a keynote is good, people notice. They sit up and take part. They cheer, they clap, they tweet, and they blog. The most important part, though, is that a good keynote draws the audience in and makes them more than a passive set of ears; the audience is brought into the fold and becomes a member of an exclusive community open only to the people in the room. Nobody else can take part. A good keynote – like a good conference – embraces, entertains, and educates.

A bad keynote is a commercial that I can’t turn off.

Ten Reasons PostgreSQL is Better Than SQL Server

Why would anyone want to use PostgreSQL instead of SQL Server? There are a lot of factors to consider when choosing how to store your data. Sometimes we need to look deeper than the standard choice and consider something new. If you’re starting a brand new project, where should you store your data? Here are ten reasons why you might want to consider PostgreSQL over SQL Server.

Releases Every Year

Let’s face it, waiting three to five years for new functionality to roll out in any product is painful. I don’t want to constantly be learning new functionality, but I also don’t want to write hack solutions to critical business problems because I know something is coming down the pipe but can’t wait a few more years for it to arrive. Rapid release cycles guarantee that the PostgreSQL development team is able to quickly ship the features that users need and make frequent improvements.

Starting with version 9.0, the PostgreSQL release cycle switched to a yearly cycle. Before that, PostgreSQL released whenever the features were done; looking at the release history on Wikipedia, major versions still rolled out about once every 18 months. An 18-month release cycle isn’t bad for any software product, much less a mission critical one like a database.

True Serialization

Snapshot isolation guarantees that all reads in a transaction see a consistent snapshot of data. In addition, a transaction should only commit if the ways that it changes data don’t conflict with other changes made since the snapshot was taken. Unfortunately, snapshots allow anomalies to exist. It’s possible to create a situation where two valid transactions occur that leave the database in an inconsistent state – the database doesn’t pass its own rules for data integrity.

Serializable snapshot isolation was added to PostgreSQL in version 9.1. SSI emulates strict serial execution – transactions behave as if they are executing one after another. If there is a conflict, or even a potential conflict, the database engine throws an error back to the caller (who is left to figure out the appropriate next step).

Serializable snapshot isolation sounds painful. The kicker is that it lets the database guarantee an even stronger level of consistency. Applications can be developed to assume that data modifications will sometimes fail and to retry the failed transactions. The true benefit is that well written software can avoid data inconsistencies and maintain the illusion that all is operating as it should be.
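
From the application’s point of view it’s just another isolation level plus a willingness to retry. A minimal sketch (the table and values are made up):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- If a concurrent serializable transaction creates a conflict, the COMMIT
-- (or an earlier statement) fails with SQLSTATE 40001 and the application
-- simply retries the whole transaction.
UPDATE accounts
SET    balance = balance - 100
WHERE  account_id = 1;
COMMIT;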

Sane Defaults, Ridiculous Tuning

Okay, to be fair, PostgreSQL ships with some ridiculously conservative shared memory settings. Most of the other defaults are conservative too, but general enough for most workloads. Many deployments won’t need many changes at all (probably just increasing shared_buffers to 25% of total RAM to start).

Once a PostgreSQL installation is up and running, there are a number of settings that can be changed. The best part, though, is that most of these settings can be changed at the server, database, user, or even individual query level. It’s very common to have mixed workload servers – most activity on the server is basic CRUD, but a small percentage of the activity is reporting that needs to be aggressively tuned. Instead of moving the individual reports out to separate servers, separate databases, or separate resource pools in the same database, we can simply tune a few queries with the appropriate parameters, including the memory to allocate for sorting and joins.
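
As a rough sketch, a single report can be handed its own memory budget without touching the server-wide defaults (the names and sizes here are invented):

-- Just this transaction: give the report a bigger sort/hash memory allowance.
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT customer_id, count(*)
FROM   orders
GROUP BY customer_id;
COMMIT;

-- Or pin the setting to a reporting role or to a whole database.
ALTER ROLE reporting_user SET work_mem = '256MB';
ALTER DATABASE reports SET work_mem = '128MB';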

Unlogged Tables

Are you sick of trying to get minimally logged bulk inserts to work? Me too. Instead of trying various mechanisms to minimally log some tables, PostgreSQL gives us the option of creating an unlogged table – simply add the UNLOGGED directive to a create table statement and everything is ready to go.
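
A sketch, using an invented staging table:

CREATE UNLOGGED TABLE staging_orders (
    order_id   integer,
    ordered_at timestamptz,
    amount     numeric(10,2)
);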

Unlogged tables bypass the write ahead log; they aren’t crash safe, but they’re stupid fast. Data in an unlogged table will be truncated after a crash or an unclean shutdown; otherwise it’ll still be there. Unlogged tables are also excluded from replication to a standby server. This makes them ideal for ETL or other data manipulation processes that can easily be repeated from source data.

KNN for Geospatial… and More

Yeah, I hear ya, SQL Server will have this soon, but PostgreSQL already has it. If K Nearest Neighbor searches are critical for your business, you’ve already gone through some pain trying to get this working in your RDBMS. Or you’ve given up and implemented the solution elsewhere. I can’t blame you for that – geospatial querying is nice, but not having KNN features is a killer.

PostgreSQL’s KNN querying works on specific types of indexes (there are a lot of index types in PostgreSQL). Not only can you use KNN querying to find the 5 nearest Dairy Queens, but you can also use a KNN search on other data types. It’s completely possible to perform a KNN search and find the 10 phrases that are closest to “ice cream”.
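
Both cases look the same in SQL: an index that supports distance ordering, an ORDER BY on the distance operator, and a LIMIT. A sketch with invented tables (the text search relies on the pg_trgm extension):

-- Five nearest Dairy Queens to a point, using a GiST index on a point column.
CREATE INDEX idx_locations_geo ON locations USING gist (geo);

SELECT name
FROM   locations
ORDER BY geo <-> point '(41.5, -81.7)'
LIMIT  5;

-- Ten phrases closest to 'ice cream', using trigram distance.
CREATE EXTENSION pg_trgm;
CREATE INDEX idx_phrases_trgm ON phrases USING gist (phrase gist_trgm_ops);

SELECT phrase
FROM   phrases
ORDER BY phrase <-> 'ice cream'
LIMIT  10;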

KNN search capability makes PostgreSQL a serious contender for anyone looking at implementing geospatial querying. The additional flexibility puts PostgreSQL in a leadership position for many other kinds of search driven applications.

Transaction-Controlled Synchronous Replication

One of the easiest ways to keep another copy of your database is to use some kind of database replication. SQL Server DBAs will largely be used to transactional replication – a dedicated agent reads the SQL Server log, collects outstanding commands, and then ships them over to the subscriber where they are applied.

PostgreSQL’s built-in replication is closer to SQL Server’s mirroring than to SQL Server’s replication (with the bonus that PostgreSQL’s standby is readable). Log activity is hardened on the primary and then streamed to the secondary. This can happen either synchronously or asynchronously. Up until PostgreSQL 9.1, replication was an all or nothing affair – every transaction was either synchronous or asynchronous. Starting with 9.1, developers can control this per transaction by setting the synchronous_commit configuration value for that single transaction. This is important because it makes it possible to write copious amounts of data to logging tables for debugging purposes without paying the cost of synchronously committing the writes to those log tables.
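
A sketch of what that looks like for a noisy debug-logging table (the table name is made up):

BEGIN;
-- This transaction doesn't wait for the standby (or even the local WAL flush)
-- to acknowledge the commit; fine for debug logging, not for money.
SET LOCAL synchronous_commit = off;
INSERT INTO debug_log (logged_at, message)
VALUES (now(), 'something noisy happened');
COMMIT;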

Any time we have more choice in how we develop our applications, I’m happy.

Writeable CTEs

CTEs are great for reads, but doing anything more complex with them raises other issues. An example will make this much easier. Let’s say I want to delete stale data, but I want to store it in an archive table. To do this with SQL Server, the easiest route (from a development standpoint) is to elevate my isolation level to at least snapshot, if not serializable, to guarantee that no data changes out from under me between the archive and the delete. I could also load the PK values of the comments to be deleted into a temp table and reference that multiple times.

Both methods work, but both have problems. The first requires that the code be run at a specific isolation level, which relies on settings that may not be in place. The code could also be copied out of the procedure and run in SSMS, leading to potential anomalies where a few rows are deleted but not archived. That’s no big deal for spam comments, but it could be critical in other situations. The second method isn’t necessarily bad, but it involves extra code noise. That temporary table isn’t necessary to solve our problem; it’s a byproduct of dealing with isolation levels.

PostgreSQL has a different way to solve this problem: writeable CTEs. The CTE is constructed the same way it would be constructed in T-SQL. The difference is that in PostgreSQL, the data can be modified inside the CTE. The output is then used just like the output of any other CTE:

-- Archive table to hold the spam comments we're about to delete.
CREATE TABLE spam_comments (
    comment_id    integer,
    email_address text,
    created_at    timestamptz,
    comment_text  text
);

WITH deleted_comments AS (
    DELETE FROM comments
    WHERE comment_text LIKE '%spam%'
    RETURNING comment_id, email_address, created_at, comment_text
)
INSERT INTO spam_comments
SELECT *
FROM deleted_comments;

This can be combined with default values, triggers, or any other data modification to build very rich ETL chains. Under the covers it may be doing the same things that we outlined for SQL Server, but the conciseness is beneficial.

Extensions

Ever want to add some functionality to SQL Server? What about keeping that functionality up to date? This can be a huge problem for DBAs. It’s very easy to skip a server when you roll out new administrative scripts across your production environment. Furthermore, how do you even know which version you have installed?

The PostgreSQL Extension Network is a centralized repository for extra functionality. It’s a trusted source for open source PostgreSQL libraries – no sneaky binaries are allowed. Plus, everything in PGXN is versioned. When you update PGXN-provided functionality, the extension mechanism takes care of the update path for you – it knows how to get from the version you have to the version you want.
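
On the database side, the moving parts are small; a sketch using a stock contrib extension as the example:

-- Install the packaged extension into the current database.
CREATE EXTENSION pg_trgm;

-- See what's installed, and at which version.
SELECT extname, extversion FROM pg_extension;

-- Upgrade to the newest packaged version; the extension's own
-- upgrade scripts figure out the path from the installed version.
ALTER EXTENSION pg_trgm UPDATE;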

There are extensions for everything from K-means clustering to Oracle compatibility functions to remote queries against Amazon S3.

Pushing this functionality out into extensions makes it easy for developers and DBAs to build custom packages that look and act like core functionality of PostgreSQL without trying to get the package through the PostgreSQL release process. These packages can then be developed independently, advance at their own rate, and provide complex functionality that may not fit within the release plan of the PostgreSQL core team. In short, there’s a healthy ecosystem of software being built around PostgreSQL.

Rich Temporal Data Types

One of my favorite features of PostgreSQL is the rich support for temporal data types. Sure, SQL Server 2008 finally brought some sophistication to SQL Server’s support for temporal data, but it’s still a pretty barren landscape. Strong support for temporal data is critical in many industries and, unfortunately, a lot of work goes into working around the limitations of SQL Server’s temporal support.

PostgreSQL brings intelligent handling of time zones. In addition to supporting the ISO 8601 standard (1999-01-08 04:05:06 -8:00), PostgreSQL supports identifying the time zone by an abbreviation (PST) or by a location identifier (America/Tijuana). Abbreviations are treated as a fixed offset from UTC; location identifiers change with daylight saving rules.

On top of time zone flexibility, PostgreSQL has an interval data type. The interval data type is capable of storing an interval of up to 178,000,000 years with precision out to 14 digits. Intervals can measure time at a number of precisions from as broad as a year to as narrow as the microsecond.
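
A few examples of what that looks like in practice (the values are arbitrary):

-- All of these pin down the same kind of thing: an absolute point in time.
SELECT timestamptz '1999-01-08 04:05:06 -8:00';
SELECT timestamptz '1999-01-08 04:05:06 PST';
SELECT timestamptz '1999-01-08 04:05:06 America/Tijuana';

-- Interval arithmetic, then render the result in a specific zone.
SELECT (now() + interval '3 days 4 hours') AT TIME ZONE 'America/Tijuana';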

Exclusion Constraints

Have you ever tried to write any kind of scheduling functionality using SQL Server? If you have, you’ll know that a business requirement like “two people cannot occupy the same conference room at the same time” is difficult to enforce with code and usually requires additional trips to the database. There are many ways to implement this purely through application level code and none of them lead to happy users or developers.

PostgreSQL 9.0 introduced exclusion constraints for columns. In short, we define a table and then add a constraint made up of a set of comparisons; for any two rows, at least one of those comparisons must be false. Exclusion constraints are backed by indexes, so these operations are as quick as our disks and the index that we’ve designed. It’s possible to use exclusion constraints in conjunction with temporal or geospatial data to make sure that different people aren’t reserving the same room at the same time or that plots of land don’t overlap.
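
A sketch of the conference room case. Range types (tsrange) arrived in releases after this feature did, and btree_gist is needed so that plain equality on the room can live in the same GiST index:

CREATE EXTENSION btree_gist;

CREATE TABLE room_reservations (
    room   text,
    during tsrange,
    -- No two rows may share a room AND have overlapping reservation ranges.
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

INSERT INTO room_reservations VALUES ('101A', '[2011-06-01 10:00, 2011-06-01 11:00)');
-- This one overlaps the first reservation and fails with an
-- exclusion constraint violation instead of silently double-booking.
INSERT INTO room_reservations VALUES ('101A', '[2011-06-01 10:30, 2011-06-01 11:30)');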

There was a presentation at the 2010 PGCon that went into the details of exclusion constraints. While there is no video, the slides are available and they contain enough examples and explanations to get you started.

Bonus Feature – Cost

It’s free. All the features are always there. There are no editions of PostgreSQL – the features always exist in the database. Commercial support is available from a number of companies, some of them even provide additional closed source features, but the core PostgreSQL database is always available, always free, and always contains the same features.

Getting Started

Want to get started with PostgreSQL? Head on over to the download page and pull down a copy for your platform of choice. If you want more details, the documentation is thorough and well written, or you can check out the tutorials in the wiki.

My Code Isn’t Fat, It’s Just Robust

I’ve been working on implementing some infrastructure code for a client. We’re building robust partition swapping to make it easy to load data without disrupting user queries. We’re doing everything else the right way, but partition swapping makes it really easy to correct a bad load of past data.

The upside is that this code is really easy to write. There are enough examples, samples, and prior implementations out there that a lot of the basics can be easily implemented. Even the complex parts of implementing the partition swapping are fairly trivial. The trick is making the code robust enough to handle almost any failure scenario.

Table partitioning is good to use in different ETL scenarios, but we never want it to fail. If it does fail, we want to make sure that we’re in a recoverable state. Likewise, this code needs to be automated and recover from any potential failures.

It turns out that the actual functionality is just a few lines of code. The robust error handling, logging, and recovery code is about 30 times longer than the functionality. It can be difficult to go through the code and update all of the error handling and logic in response to minor changes to business requirements, but the end product is a stable piece of functionality.
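
To give a sense of the proportions, here’s a hedged sketch of the shape of that code in SQL Server syntax (the object names are invented). The handful of lines in the TRY block is the feature; in the real thing, the CATCH block and its logging dwarf it:

BEGIN TRY
    BEGIN TRANSACTION;
    -- The actual functionality: swap the freshly loaded staging table
    -- into partition 3 of the live table.
    ALTER TABLE dbo.Sales_Staging SWITCH TO dbo.Sales PARTITION 3;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    -- The robustness: record what happened, leave the staging table intact,
    -- and surface the error so the load can be retried or corrected.
    INSERT INTO dbo.etl_log (logged_at, message)
    VALUES (SYSDATETIME(), ERROR_MESSAGE());
    RAISERROR('Partition swap failed; see dbo.etl_log.', 16, 1);
END CATCH;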

Taking out the Trash

If you’re like me, you probably don’t think a lot about what you throw into the trash on your computer – you just put files and folders in there to be deleted and, eventually, you empty the trash. The other day I dropped a 40GB virtual machine into the trash and then told OS X to empty the trash. A few hours later, it was still chugging away at pretending it was deleting the trash. There were no messages in Console about files being locked and unable to be cleaned up and no warnings of any other kind.

Being the resourceful individual that I am, I dropped into the terminal and ran sudo rm -rf .Trash/*. That process hung as well. I checked the console and all of the various logs again and there was nothing going on.

As a last resort before giving up and switching to the backup computer, I rebooted the Mac into single user mode (hold down Command-S during the boot sequence). Once I was in single user mode, OS X dropped me off at a command prompt as the root user. A few quick cds later and I was able to once again rm -rf .Trash/*. This time it worked like a charm.

I have no idea what happened and there were no logs to point me in the right direction. I can only assume that OS X got hung up trying to read the contents of the virtual hard disk which is, in reality, nothing more than a bundle of files, much like an OS X application. Maybe there was just too much junk…

Upcoming Training

AppDev Virtual Chapter – The Database is Dead, Long Live the Database

I’ll be speaking on Tuesday June 28th for the PASS Application Development virtual chapter. Things kick off at 5PM Pacific, so it’s a bit late for those of you on the east coast and might be a bit early for people in the UK. I’ve given this talk twice before – once at Stir Trek in Columbus and once on SQL Cruise. If you want to know what the fuss is about, you should join the Live Meeting.

Abstract:
If relational databases are so great, why are people talking about NoSQL? Shouldn’t we explore other ways to store and manipulate data? We’ll look at four scenarios – caching, session state, flexible data models, and batch processing – and discuss how traditional databases perform in each situation and what other options exist on the market. At the end of this session, attendees will have a better understanding of how different workloads perform in RDBMSes, best practices, and alternative storage solutions to make your life easier.

OSCON – Refactoring SQL

I’m really excited to be presenting at OSCON this year. The title of the talk is “Refactoring SQL”. It’s aimed squarely at shit hot application developers who refactor SQL code just like they refactor their application code – by bundling everything into loops and cursors and user defined functions. The point is, refactoring SQL is not like refactoring application code. We’ll go over a bunch of different techniques to squeeze extra performance out of your RDBMS so you can sleep easy after yelling at your DBA that it’s all his fault. You might even find places to make sweeping performance improvements. I’ve been tuning SQL applications for performance, throughput, and concurrency for a few years; I may not have written the book on the subject, but I’ve screwed up enough to know a few things.

You can buy tickets here, and use the code os11fos for 20% off. They pay me in beatings; the more tickets that are sold, the less the speakers get beaten.

Stir Trek: Thor Edition (the Aftermath)

I spent the weekend at Stir Trek: Thor Edition in Columbus, OH. While I had a blast speaking about databases, I had even more fun attending and learning.

Programming the Cloud with HTTP/REST

I knew about REST before I attended this talk and I’ve done a bit of REST programming (right before I decided to nerd out on data), but Mike Amundsen did a great job of convincing me why I should care about REST as a programming paradigm for web developers. I’m not going to go out and start writing code to build my own REST services, but I have a better grasp of how I can work with developers to build robust data driven services and applications and do some incredibly cool things in the process.

CSS 3: Something for Everyone

John Keyes delivered a presentation about the basics of CSS 3. I knew there were some great features of CSS 3, but I also knew that the spec is incredibly broad. I was a bit worried that John’s presentation was going to be a whirlwind tour with very little depth. Instead of a shallow romp through CSS 3, John delivered a solid presentation that worked through some core features of CSS 3 in a practical manner and built upon them to demonstrate new techniques with real world value. Except for maybe the demo that made a rectangle swirl into a circle, that was just cool.

John gets mad props for filling his slide deck with Jurassic Park references. I had some laughs while I got my learn on, and that’s a good thing. I’ve worked with John on presentations in the past and he’s become a phenomenal presenter, programmer, and web developer in the time that I’ve known him.

Real World jQuery

Matt Nowack had the difficult job of speaking right after lunch. He gave a great talk about jQuery 1.5 and 1.6. It turns out that Matt wrote the talk for jQuery 1.5 and did a great job of introducing some of the newest features of my favorite JavaScript library. jQuery 1.6 came out recently and it made parts of Matt’s talk irrelevant. Matt took it in stride and wrote new content earlier in the week and delivered a top notch presentation that was educational and entertaining. I overheard one attendee say that they were rushing off to Matt’s presentation because it was bound to be good. They were right and I was also right to pay them 15 bucks and a box of Milk Duds to yell it at the top of their lungs. Good job, Matt.

The Rest of My Time

I spent the rest of my time preparing and giving my talk The Database is Dead, Long Live the Database. If you attended, the resources page will always be online at http://brentozar.com/stir-trek-thor-edition. If you missed it and you wanted to see it, you’ll be able to catch it on June 28th at 8:00 PM Eastern through the PASS Application Development Virtual Chapter.

I was incredibly flattered when Matt Casto asked me to speak at this event, way back at Code Mash. I’m glad that he was clearly drunk and thought I would make a great speaker. You can’t imagine how happy I am that he accepted the bribes I sent his way, just in case he sobered up and didn’t remember asking me to speak. Luckily, most of that’s a lie. I did, however, have an awesome time in Columbus hanging out with old friends, hopefully making a few new ones, and only telling one STD joke during my presentation; nice try, guy in row three!

In summation: thanks Matt, thanks Stir Trek, and thanks Columbus!

PASS 2011 Session Abstracts

Every November, a bunch of database geeks gather for the Professional Association for SQL Server’s (PASS) international Summit. This year it’s going to be held October 11-14 in Seattle, Washington. I didn’t submit last year since I was involved with the abstract selection process. This year I’m not involved, so I decided to submit a few abstracts.

Rewrite Your T-SQL for Great Good!

Refactoring SQL is not like refactoring application code. This talk will cover proven SQL refactoring techniques that will help you identify where performance gains can be made, apply quick fixes, improve readability, and help you quickly locate places to make sweeping performance improvements. Jeremiah Peschka has years of hands on experience tuning SQL applications for performance, throughput, and concurrency.

Why I submitted this session: I submitted this session because it’s a fun session to give, it crosses boundaries between DBA and developer, and I’ve given it a few times before.

The Database is Dead, Long Live the Database

If relational databases are so great, why are people talking about NoSQL? Shouldn’t we explore other ways to store and manipulate data? We’ll look at four scenarios – caching, session state, flexible data models, and batch processing – and discuss how traditional databases perform in each situation and what other options exist on the market. At the end of this session, attendees will have a better understanding of how different workloads perform in RDBMSes, best practices, and alternative storage solutions to make your life easier.

Why I submitted this session: I wrote this session when I was asked to speak at Stir Trek: Thor Edition. Writing it has been a lot of fun and has started the process of crystallizing a lot of the ideas in my head around data storage. This talk focuses on a few areas where relational databases don’t do a good job and proposes solutions to pick up the slack.

Rules, Rules, and Rules

Computers are governed by the rules of physics: electrons, drive heads, and disk platters can only move so fast. Database systems are built according to those rules: memory is faster than disk which is faster than the network. Database schemas and queries are built within the rules of database systems. You will hit the limitations of these rules. If you know what the rules are and why they are in place, you’ll know when it’s time to break them… and how to succeed.

Why I submitted this session: This is also a session I’ve given before. Andy Leonard asked me to speak at the inaugural SQLPeople event about my passion. One of my passions is learning about computer science and how it can be applied to databases in a practical way. (There’s a lot of purely theoretical information that only matters when you’re implementing an RDBMS.) This session is an extended version of the talk I gave at SQLPeople. I’m incredibly excited about it and I’ll be bummed if it doesn’t get accepted.

The Other Side of the Fence: Lessons Learned from Leaving Home

Traveling the world changes your outlook on things, home just doesn’t look quite the same once you’ve traveled. The same can be said for SQL Server; working with databases like PostgreSQL, Cassandra, and Hadoop forced Jeremiah Peschka to re-learn concepts that he took as a given. Learn from his experiences about the importance of understanding isolation levels, data storage and retention, querying patterns, and even database functionality in this talk drawn from his experiences as a DBA, consultant, and developer.

Why I submitted this session: There’s a theme going on here – I’ve learned a lot about database and application design and how it’s sometimes necessary to move outside of my comfort zone to build an effective system. This is a 3.5 hour session that will cover a lot of features in SQL Server. I learned a lot working with other databases, and I hope that this information helps some other people.

In the Event That Everything Should Go Terribly Right

Astute readers and internet stalkers will have noticed that I left my job at Quest Software back in March. I wasn’t unhappy, I just had the opportunity to take my show on the road and go solo. I’ve had the idea of being my own boss in the back of my head for a long time. Suddenly I was confronted with a situation where a former pipe dream was all too real. I talked it over with a few friends and made the plunge.

Right around the same time, I started talking to Brent about his plans. This turned into talking to Brent and Tim about their plans. Then we looped Kendra in. It turns out that we all have similar goals and dreams. It only made sense to join forces and fight crime together! After evaluating the insurance costs of fighting crime we decided to become consultants. And thus Brent Ozar PLF was born.

I’ve never been more excited to work with a group of people. Brent, Tim, and Kendra have always gotten along. I’ve never felt more supported and challenged by a group of people. My business partners are three friends who have always encouraged me to excel. Whether it’s been learning about SQL Server, Ruby, or non-relational databases, these three have been there supporting me every step of the way, even when we’ve disagreed.

I could make a list of all of the other reasons that I’m looking forward to building this business, but it all boils down to the way we interact. Brent, Tim, and Kendra challenge me to be better at everything I do. Whether it’s my SQL Server skills, writing, or presenting, they’re always there helping me get better. I couldn’t ask for a better core group of friends to join me on this new endeavor.

What does the future hold? In terms of business, I’m excited to be building a business with Brent, Tim, and Kendra. Our interests are similar enough that we complement each other but they’re diverse enough that I know we’re going to educate and challenge each other.

You can learn more about our services at http://brentozar.com. If you want to get in touch, you can do that too.

PostgreSQL Update Internals

I recently covered the internals of a row in PostgreSQL, but that was just the storage piece. I got more curious and decided that I would look into what happens when a row gets updated. There are a lot of complexities to data, after all, and it’s nice to know how our database is going to be affected by updates.

Getting Set Up

I started by using the customer table from the pagila sample database. Rather than come up with a set of sample data, I figured it would be easier to work with an existing set of data.

The first trick was to find a customer to update. Since the goal is to look at an existing row, update it, and then see what happens to the row, we’ll need to be able to locate the row again. This is actually pretty easy to do. The first thing I did was retrieve the ctid along with the rest of the data in the row. I did this by running:

SELECT  ctid, *
FROM    customer
ORDER BY ctid 
LIMIT 10 ;

The first ten rows

This gives us the primary key of a customer to mess with as well as the location of the row on disk. We’re going to be looking at the customer with a customer_id of 1: Mary Smith. Using that select statement, we can see that Mary Smith’s data lives on page 0 and in row 1.

Updating a Row

Now that we know who we’re going to update, we can go ahead and mess around with the data. We can take a look at the row on the disk using the get_raw_page function to examine page 0 of the customer table. Mary Smith’s data is at the end of the page.

Why is Mary’s data the first row in the table but the last entry on the page? PostgreSQL starts writing data from the end of the page but writes item identifiers from the beginning of the page.
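
get_raw_page comes from the pageinspect contrib module, and heap_page_items makes its output readable. Assuming pageinspect is installed as an extension, the look at page 0 is roughly:

CREATE EXTENSION pageinspect;

-- Item pointers and tuple headers for every row on page 0 of customer.
SELECT lp, lp_off, t_xmin, t_xmax, t_ctid
FROM   heap_page_items(get_raw_page('customer', 0));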

We already know that Mary’s row is in page 0, position 1 because of the ctid we retrieved in our first query. Let’s see what happens when we update some of Mary’s data. Open up a connection to PostgreSQL using your favorite interactive querying tool. I use psql on the command prompt, but there are plenty of great tools out there.

BEGIN TRANSACTION;

UPDATE  customer
SET     email = 'mary.smith@gmail.com'
WHERE   customer_id = 1 ;

Don’t commit the transaction yet!

When we go to look for Mary’s data using the first select ordered by ctid, we won’t see her data anywhere.

The first ten rows, after the update

Where did her data go? Interestingly enough, it’s in two places right now because we haven’t committed the transaction. In the current query window, run the following command:

 SELECT ctid, xmin, xmax, * FROM customer WHERE customer_id = 1;

The data has moved to a new page

After running this, we can see that the customer’s row has moved off of page 0 and is now on page 8 in slot 2. The other interesting thing to note is that the xmin value has changed. Transactions with a transaction id lower than xmin won’t be able to see the row.

In another query window, run the previous select again. You’ll see that the row is still there with all of the original data present; the email address hasn’t changed. We can also see that both the xmin and xmax columns now have values. This shows us the range of transactions where this row is valid.

The row after an update, from a different transaction

Astute readers will have noticed that the row is on disk in two places at the same time. We’re going to dig into this in a minute, but for now go ahead and commit that first transaction. This is important because we want to look at what’s going on with the row after the update is complete. Looking at rows during the update process is interesting, but the after effects are much more interesting.

Customer Page 0 - After the Update

Looking at page 0 of the customer table, we can see that the original row is still present. It hasn’t been deleted yet. However, PostgreSQL has marked the row as being “old” by setting the xmax value as well as setting the t_ctid value to 00 00 00 08 00 02. This tells us that if we look on page 8 in position 2 we’ll find the newest version of the data that corresponds to Mary Smith. Eventually this old row (commonly called a dead tuple) will be cleaned up by the vacuum process.

Customer Page 8 - After the Update

If we update the row again, we’ll see that it moves to a new position on the page, from (8,2) to (8,3). If we dig back in and look at the row, we’ll see that the t_ctid value in Mary Smith’s record at page 8, slot 2 is updated from 00 00 00 00 00 00 to 00 00 00 08 00 03. We can even see the original row in the hex dump from page 8. We can see the same information much more legibly by using the heap_page_items function:

select * from heap_page_items(get_raw_page('customer', 8));

There are three rows listed on the page. The row with lp 1 is the row that was originally on this page before we started messing around with Mary Smith’s email address. lp 2 is the first update to Mary’s email address.

Looking at t_infomask2 on row 2 we can immediately see two things… I lied, I can’t immediately see anything apart from some large number. But, once I applied the bitmap deciphering technology that I call “swear repeatedly”, I was able to determine that this row was HEAP_HOT_UPDATED and contains 10 attributes. Refer to htup.h for more info about the bitmap meanings.

The HOTness

PostgreSQL has a unique feature called heap only tuples (HOT for short). The HOT mechanism is designed to minimize the load on the database server in certain update conditions:

  1. A tuple is repeatedly updated
  2. The updates do not change indexed columns

For definition purposes, an “indexed column” includes any column in the index definition, whether it is directly indexed or used in a partial-index predicate. If your index definition mentions it, it counts.

In our case, there are no indexes on the email column of the customer table. The updates we’ve done are going to be HOT updates since they don’t touch any indexed columns. Whenever we update a row, PostgreSQL writes a new version of the row and updates the t_ctid column of the previous version to point at the new one.

When we read from an index, PostgreSQL is going to read from the index and then follow the t_ctid chains to find the current version of the row. This lets us avoid additional hits to disk when we’re updating rows. PostgreSQL just updates the row pointers. The indexes will still point to the original row, which points to the most current version of the row. We potentially take an extra hit on read, but we save on write.
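
The statistics collector keeps score, so it’s easy to check that these updates really were HOT (n_tup_hot_upd counts updates that didn’t have to touch any index):

SELECT relname, n_tup_upd, n_tup_hot_upd
FROM   pg_stat_user_tables
WHERE  relname = 'customer';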

To verify this, we can look at the index page contents using the bt_page_items function:

SELECT  * 
FROM    bt_page_items('idx_last_name', 2) 
ORDER BY ctid DESC;

We can find our record by moving through the different pages of the index. I found the row on page 2. We can locate our index row by matching up the ctid from earlier runs. Looking at that row, we can see that it points to the ctid of a row with a forwarding ctid. PostgreSQL hasn’t changed the index at all. Instead, when we do a look up based on idx_last_name, we’ll read from the index, locate any tuples with a last name of ‘SMITH’, and then look for those rows in the heap page. When we get to the heap page, we’ll find that the tuple has been updated. We’ll follow the update chain until we get to the most recent tuple and return that data.

If you want to find out more about the workings of the Heap Only Tuples feature of PostgreSQL, check out the README.
