
Querying Hive with Toad for Cloud Databases

I recently talked about using the Toad for Cloud Databases Eclipse plug-in to query an HBase database. After I finished up the video, I did some work loading a sample dataset from Retrosheet into my local Hive instance.

This 7-minute tutorial shows you brand new functionality in the Toad for Cloud Databases Eclipse plug-in and demonstrates how you can use it to perform data warehousing queries against Hive.
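If you’d like a feel for what those queries look like, here’s a minimal HiveQL sketch (the table and column names are hypothetical stand-ins for a Retrosheet-style game log, not the exact schema from the video):

-- Hypothetical warehousing query against a Retrosheet-style game log.
-- Counts home wins per team per season; names are illustrative only.
SELECT home_team, season, COUNT(*) AS home_wins
FROM game_logs
WHERE home_score > visitor_score
GROUP BY home_team, season;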

Querying HBase with Toad for Cloud Databases

We recently released a new version of Toad for Cloud Databases as an Eclipse plug-in. While this functionality has been in Toad for Cloud since the last release, this video shows Toad for Cloud running in Eclipse and demonstrates some basic querying against HBase using Toad for Cloud’s ability to translate between ANSI-compliant SQL and native database calls.

[media id=5 width=680 height=560]

If the video above doesn’t work for you, everything is available full size at NoSQLPedia – Querying HBase.

CloudDBPedia is Changing Its Name to NoSQLPedia

That’s right, we’re changing the name from CloudDBPedia to NoSQLPedia.

When we originally started the site, the focus in the technology world was on cloud computing and cloud databases. Over time, the industry has changed and people have started focusing on NoSQL as the terminology of choice for not-so-relational databases. And, let’s face it, NotSoRelationalDBPedia.com is a bit lengthy for quick and easy typing.

The name change doesn’t reflect any change in focus for the site. We’re still going to be bringing you the best in blogging and community driven technical content about emerging database technology. As time goes on, we’ll be adding more and more into the mix and I think that you’ll be happy about what we’ve got in the pipeline.

Upcoming Talks – Next Week

Next week I’ll be in the San Francisco Bay Area. More specifically, I’ll be giving three lightning talks at three separate Cloud Camps. It’s the same talk each time: a general intro to NoSQL and cloud databases.

Silicon Valley Cloud Camp in Santa Clara, CA.
Cloud Camp Santa Clara also in Santa Clara, CA.
Cloud Camp SF @ QCon in San Francisco, CA.

All of these events start at 6:30 PM and the Lightning Talks start at 6:45 PM. I have no idea when I’m going to take the stage, but it promises to be good. If you’re in the area and would like to hang out, hit me up in the comments and we can arrange a time to talk.

Cassandra Tutorial – Installing Cassandra

I was feeling a little bit crazy the other day and I installed Cassandra on my home computer. Twice. Just because. Okay, it wasn’t really a “just because” moment: I installed Cassandra twice so I could get the hang of it in case anyone asked me how to get started. The process was relatively painless, but there are some prerequisites you need to have installed first. I thought it would be good to create this short Cassandra installation tutorial to walk you through it.

This video walks through everything you need to do to get Cassandra installed and running on your computer, apart from installing the OS. If you don’t have an OS, I don’t know how you’re reading this. Probably witchcraft.

[media id=3 width=800 height=600]

Three Things to Watch with NoSQL

Last week I painted a rosy picture of NoSQL databases. Before you deploy one, though, you need to be aware of the potential pitfalls.

1) Architecture

NoSQL databases have different architectural concerns than traditional RDBMSes.

If you’re working with a Bigtable- or Dynamo-style database, your network backbone will need to be able to handle the increased traffic from replication and from materializing MapReduce data. Dynamo-style databases are very chatty: reads and writes are only guaranteed when a specific number of servers respond to a query.

MapReduce queries run across many nodes in a cluster. One stage of processing a MapReduce query temporarily materializes intermediate results on each node. The intermediate results may then be shipped to another server for additional processing or aggregation, or simply combined into the final result set.

2) Querying Strategy

RDBMSes are able to perform a mix of batch and ad hoc querying. They produce reasonably fast results for doing online aggregation and analysis of data. It’s just as easy to write a query that returns a single row from a relational database as it is to write a query that aggregates data from several thousand rows.
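As a quick illustration (with hypothetical table and column names), both of these are one-liners against a relational database:

-- Point lookup: returns a single row.
SELECT *
FROM orders
WHERE order_id = 42;

-- Ad hoc aggregate: summarizes several thousand rows just as easily.
SELECT customer_id, SUM(order_total) AS lifetime_value
FROM orders
GROUP BY customer_id;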

Not all NoSQL databases are created equal. Hadoop and Cassandra are built to perform large scale analytical queries. Their implementations of MapReduce are designed to process batches of analytical results; frequently these results will be pre-aggregated rather than delivered in real time. Riak and MongoDB, on the other hand, are designed to handle interactive querying.

3) Data Modeling

Data modeling is a tricky field of study in the world of relational databases. A huge number of books have been written on the subject, dealing with both the theoretical and practical aspects of modeling data. In short, while relational data modeling is tricky, it is also a well understood field of study.

Data modeling in the non-relational world is an entirely new field. Many problems that have been solved in relational data modeling are new and require a different way of thinking. Bigtable databases, while their structure may initially appear familiar, require different modeling techniques to get the most out of the data store. MongoDB’s document oriented approach makes a great deal of sense to developers, but it still poses a number of data modeling questions about how to link related objects and how deeply nested object graphs should be in the database. Key-value stores, like Riak, have very little structure and give the application developer a huge amount of freedom to model data.

Summation & Further Reading

NoSQL Databases hold a lot of promise to help businesses rapidly respond to data growth. Whenever you’re planning to make a major infrastructure decision, you need to weigh the pros and cons before determining which solution is right for you. While NoSQL databases provide answers to a lot of questions, they also pose a different set of questions that require a different set of skills to answer.

NoSQL Glossary
MongoDB – Sharding with the Fishes
Hadoop for Batch Processing (from the Yahoo! Developer Network)
Schema Design in Riak
Schema Design for Riak (presentation slides)

Five Reasons to Use NoSQL

You’re not using a NoSQL database right now? I’m not all that surprised. Many IT shops are still evaluating moving from SQL Server 2000 to SQL Server 2005, much less these non-relational databases. A lot of people don’t even know what they would use a NoSQL database for. Does it replace the RDBMS? Work alongside it? Do something else?

1) Analytics

One reason to consider adding a NoSQL database to your corporate infrastructure is that many NoSQL databases are well suited to performing analytical queries. Developers can use the same querying languages to perform analytical queries that they’re using to perform atomic queries. Typically this will be some variation of a MapReduce query, but it’s also possible to query data using Pig or Hive. Don’t worry too much about these weird language terms: MapReduce is a fancy way of saying “SELECT and then GROUP BY” and doing it in a way that is entirely confusing to people who are used to SQL.
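As a rough sketch of that analogy (hypothetical names, and every MapReduce framework spells this differently), here’s the SQL side with the MapReduce phases annotated:

-- The WHERE clause plays the role of the map phase: each node scans its
-- local rows and emits a (page_url, 1) pair for every matching record.
-- The GROUP BY and COUNT(*) play the role of the reduce phase: pairs with
-- the same key are brought together and collapsed into a single total.
SELECT page_url, COUNT(*) AS hits
FROM page_views
WHERE view_date >= '2010-01-01'
GROUP BY page_url;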

Many NoSQL systems boast phenomenal write performance. When you combine high write performance with batch processing it is easy to pre-aggregate data, summarize results, and still guarantee ad hoc query performance.

2) Scale

NoSQL databases are designed to scale; it’s one of the primary reasons that people choose a NoSQL database. Typically, with a relational database like SQL Server or Oracle, you scale by purchasing larger and faster servers and storage or by employing specialists to provide additional tuning. Unlike relational databases, NoSQL databases are designed to easily scale out as they grow. Data is partitioned and balanced across multiple nodes in a cluster, and aggregate queries are distributed by default. Scaling is as easy as racking a new server and executing a few commands to add it to the cluster (yeah, it really is that easy). Data will start flowing and you’ll be back in business in no time.

3) Redundancy

In addition to rapid scalability, NoSQL databases are also designed with redundancy in mind. These databases were designed and built for massive scales, where the rarest hardware problems go from being freak events to eventualities. Hardware will fail. Rather than treating hardware failure as an exceptional event, NoSQL databases are designed to handle it. While hardware failure is still a serious concern, it is addressed at the architectural level of the database, rather than requiring developers, DBAs, and operations staff to build their own redundant solutions. Cassandra uses a number of heuristics to determine the likelihood of node failure. Riak takes a different approach and can survive network partitioning (when one or more nodes in a cluster become isolated) and repair itself.

4) Flexibility

What’s the use of a database if it’s not flexible? While the data modeling issues are completely different in NoSQL, there is a large amount of flexibility in how data is stored for performance.

Databases modeled after Bigtable, such as Cassandra, provide flexibility around how data is stored on disk. It’s possible to create derived column families; in plain English, you can design your database to duplicate frequently accessed data for rapid query response. This is, of course, based on the assumption that writes and storage space are cheap.
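In relational terms, a derived column family is a bit like maintaining a second, denormalized copy of the data keyed for the read pattern. A hypothetical sketch:

-- A derived, read-optimized copy of order data. The same rows also live in
-- a normalized orders table; writes go to both copies, and this one exists
-- purely so the hot query becomes a cheap lookup by customer.
CREATE TABLE recent_orders_by_customer (
  customer_id INT,
  order_date  DATE,
  order_total DECIMAL(10,2)
);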

Databases based on the Bigtable model also have another benefit – outside of key structure it’s possible to store a variety of disparate data in the same table. Structure is largely irrelevant. Relational databases have adopted features to solve similar problems (such as sparse columns in SQL Server), but they carry overhead. Storing wildly different columns in multiple rows of the same column family is so cheap as to be invisible in a NoSQL database.

Lastly, key-value stores provide an incredible level of flexibility. Data is stored as an arbitrary value, so key-value databases make it possible to store images, Word documents, strings, integers, and serialized objects within the same database. This requires more responsibility and creative thinking on the part of application developers and architects, but it also lets the people designing the system build a completely custom solution that fills their needs.

5) Rapid Development

Let’s face facts: everyone wants their application to be faster, have more features, and they want it yesterday. NoSQL databases make it easy to change how data is stored or change the queries you’re running. Massive changes to data can be accomplished with simple refactoring and batch processing rather than complex migration scripts and outages. It’s even easier to take nodes in a cluster offline for changes and add them back into the cluster as the new master server; replication features will take care of syncing up data and propagating the new data design out to the other servers in the cluster.

MongoDB – Now With Sharding

Version 1.6 of MongoDB was released last week while I was at devLink. What makes this an important point release? Two things: sharding and replica sets.

Sharding is a way of partitioning data. Data is sharded by a key, which could be based on zip code, customer number, SKU, or any other aspect of the data. Sharding data in MongoDB occurs on a collection-by-collection basis, so if you want related data to be on the same server you’ll need to pick a sharding key that is present in every collection. This design decision makes sense because some collections grow faster than others, and the point of sharding is, in part, to spread load across multiple servers.
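Turning sharding on looks something like this in the mongo shell (the database, collection, and key names here are made up for illustration; see the Sharding Introduction for the details):

// Run against the admin database: enable sharding for a database,
// then shard one of its collections by customer number.
db.runCommand({ enablesharding : "mystore" });
db.runCommand({ shardcollection : "mystore.orders",
                key : { customerNumber : 1 } });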

What makes this important? Previously, sharding was difficult to set up and administer, and it required custom application code to be written and maintained. More importantly, though, sharding gives MongoDB the ability to scale across multiple servers with minimal effort.

What would happen if a single node in a set of sharded servers got very busy? Well, MongoDB would detect that one of the nodes is growing faster than the others and would start balancing the load across the other servers. This might seem to violate my earlier statement about how we define our own sharding key. Here’s the catch: the sharding key determines how data is divided into chunks, and when things get busy, MongoDB moves those chunks between servers. Those changes are reported throughout the entire cluster, so every server knows where the data lives, although nobody outside of the cluster really needs to care.

Replica Sets

Replica sets are a new and improved way to perform data replication in MongoDB. Basically, we set up replication in a cluster of servers. If any single server fails, another server in the replica set will pick up the load. Once we’re able to get the dead server back up and running, the replica set will automatically start up a recovery process and our users will never know that there was an outage.

There can be only one master server at any given time, so this protects us from master server failures. Through the magic of network heartbeats, we can be aware of all of the servers in the replica set. Interestingly, the master server is determined by a priority setting that is assigned to each server. This way, we could use older hardware to serve as a backup (or read-only server) to the master and use faster machines in the replica set to take over from the master in the event of any kind of hardware failure.
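Configuring a replica set with those priorities looks roughly like this in the mongo shell (host names and priority values are invented for illustration):

// Initiate a three-member replica set. A higher priority makes a server
// a preferred master; the old, slow box gets the lowest priority so it
// mostly serves reads and acts as a backup.
rs.initiate({
  _id : "rs0",
  members : [
    { _id : 0, host : "db1.example.com:27017", priority : 1 },
    { _id : 1, host : "db2.example.com:27017", priority : 1 },
    { _id : 2, host : "old-box.example.com:27017", priority : 0.5 }
  ]
});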

How It Works

MongoDB Sharding Diagram

Basically, here’s what happens (if you want more details, please see the Sharding Introduction):

  1. The mongos server is a router that makes our complicated MongoDB setup look like a single server to the application.
  2. The mongod config servers maintain the cluster’s metadata: they know where data is stored and will attempt to balance the shards if any single node gets out of whack.
  3. Replica sets provide localized redundancy for each shard.

Gotchas

There are a few things to be aware of when you’re considering sharding with MongoDB:

  1. If a configuration server goes down, you can no longer reallocate data when shards become write hot spots. The shard metadata must be writable for data to be repartitioned. You can still read and write data, but load will not be redistributed.
  2. Choose sharding keys wisely. A coarse sharding key will do you no good: if too many documents share the same key value, that data can end up on one node and you will be unable to split it onto multiple nodes.
  3. Some queries will use multiple shards – make sure you understand data distribution, querying patterns, and potential sharding keys.

Photo Credits

glass litter by psyberartist – Creative Commons Licensed
I thought I saw a puddy cat… by Keven Law – Creative Commons Licensed

Object-Oriented Programming Concepts For DBAs

This all came about when Aaron Nelson (blog | twitter) asked me a few questions about object-oriented concepts. I did the best that I could to answer his questions, but I decided that this needed a real answer that will, I’m hoping, help to bridge the language gap between DBAs and developers.

Note to the reader: Please keep in mind that this is based on the .NET framework and specifically my experience with C#. If you want to highlight differences for your own language of choice in the comments, feel free.

Class

A class is a definition that we’ll be using later when we create objects. It’s a generic blueprint for building something else. It describes all of the properties that an object will have once we create one. In addition, classes also describe all of the behaviors (called methods) that an object will have.

DBAs, you can think of a class as similar to DDL. Just as DDL defines the shape of every row in a table, a class defines the objects that we’ll create later.
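To make the analogy concrete, here’s a hypothetical pairing (the names are invented): a tiny class and DDL describing roughly the same shape.

// A class is a blueprint: it describes the properties that every
// Employee object will have, just as DDL describes every row.
public class Employee {
  public int EmployeeId;
  public string Name;
}

And the roughly equivalent DDL:

CREATE TABLE Employees (
  EmployeeId INT,
  Name       VARCHAR(100)
);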

Object

“An object can be said to be an instance of a class.” WTF does that mean?

Well, an object is an instance of a class. That means that any time we create an object, we base it off of some kind of template. The object, whatever it may be, is just an implementation based off of some kind of blueprint. In the .NET world, an object can only be defined by one class.

Once we create an object (using something called a constructor), it’s different from the class. Like a zombie, the object takes on a life of its own. Let’s take a look at an example.

Cars have common properties – they all have four wheels, an engine, and a number of doors that is greater than 0. Individual cars have a lot of differences: different paint and upholstery colors, different trim levels, stereos, engines, transmissions, etc. When you go down to the car dealership (the constructor), you don’t just ask for a generic car. You have to be more specific and tell the dealer that you want a bright red 1978 Lincoln Continental with white wall tires. The dealer will get you the car (the constructor instantiates the object) and you can pimp in style.
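In C#, that trip to the dealership might look like this (the Car class and its constructor parameters are hypothetical):

public class Car {
  public string Color;
  public int Year;
  public string Model;

  // The constructor: how we ask the dealer for a specific car.
  public Car(string color, int year, string model) {
    Color = color;
    Year = year;
    Model = model;
  }
}

public class Dealership {
  public static Car SellMeACar() {
    // The dealer hands us a concrete object built from the blueprint.
    return new Car("bright red", 1978, "Lincoln Continental");
  }
}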

Depending on how things are set up, we can change the properties of our object after we’ve created it. Properties can be modified with new values, or we can call methods on the object to make complicated changes to the object’s state.

In a roundabout way, an object is similar to a row. It’s a single instance of data. This is a bit of a crappy metaphor because a single object can contain deep structure that would be difficult to represent in a single row in the database.

Fields

A field is a basic variable – a string or number. It’s the simplest way to store data in an object. You can think of it as roughly akin to a column. You may also hear people refer to fields as member variables, instance variables, or any of a number of terms.

Properties

In the .NET world properties are wrappers around fields. They aren’t quite methods, but they aren’t quite fields. One of the interesting things about properties is that they let us enforce rules or build complex ways to describe an object from a set of simple data points.

public class Person { 
  // other nonsense goes here...

  // some fields
  private string _firstName;
  private string _lastName;

  public string FirstName {
    get { return _firstName; }
    set { _firstName = value; }
  }

  public string LastName {
    get { return _lastName; }
    set { _lastName = value; }
  }

  public string FullName {
    get { return _firstName + " " + _lastName; }
  }
}

In the real world, we’d do things far more complex than concatenating first name and last name to make a full name. We’d probably use a comma and put things in the order of last name first. Or maybe we’d calculate order price based on county sales tax and order line items and shipping fees.

An astute reader might notice that properties bear an uncanny similarity to constraints in a database. Let’s look at a different example. A class for a clock might look like this:


public class Clock {
  private int _hours;
  public int Hours {
    get { return _hours; }
    set {
      if (value >= 0 && value <= 23) {
        _hours = value;
      } else {
        throw new NotOnPlanetEarthException();
      }
    }
  }
}

And a similar table would look like this:


CREATE TABLE clock (
  [hours] INT CHECK ([hours] BETWEEN 0 AND 23)
); 

It’s possible to create deeply nested structures of fields and properties. You can, for example, model a store thusly:


using System.Collections.Generic;

public class Store {
  private List<Employee> _employees;
  private List<Product> _products;

  // I know this isn't how I should really model it. Shut up.
}

This is all well and good in code. And there’s a relatively obvious way to model it in the database using join tables (employees can work in more than one store and products can be sold in more than one store).


CREATE TABLE Stores (
  StoreId INT IDENTITY(1,1)
  -- real implementation left out because I don't want to write it
);

CREATE TABLE Employees (
  EmployeeId INT IDENTITY(1,1)
);

CREATE TABLE Products (
  ProductId INT IDENTITY(1,1)
);

CREATE TABLE StoreEmployees (
  StoreId INT,
  EmployeeId INT
);

CREATE TABLE StoreProducts (
  StoreId INT,
  ProductId INT
);

The problem is that these two models don’t work well together. We call this object-relational impedance mismatch. I wrote about a lot of the problems in an older blog post, O/R-Ms: Panacea or Polio Braces?, so I won’t get into it here. Needless to say, there are some major problems that can arise.

Methods

Methods are the workhorses of object-oriented programming. These are the verbs of programming. A method is a bunch of statements that are executed in order and achieve some result. Methods are roughly analogous to stored procedures or functions (depending on your database).
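A hypothetical sketch of that analogy (the names are invented): a method on a class next to the stored procedure you might write for the same job.

public class Account {
  private decimal _balance;

  // A method: a named bundle of statements that changes the object's
  // state, much like a stored procedure changes a row.
  public void Deposit(decimal amount) {
    _balance = _balance + amount;
  }
}

And the stored procedure equivalent:

CREATE PROCEDURE DepositIntoAccount
  @AccountId INT,
  @Amount MONEY
AS
  UPDATE Accounts
  SET Balance = Balance + @Amount
  WHERE AccountId = @AccountId;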

Inheritance

The programming world would be incredibly painful if we had to keep repeating code every time we want to do the same thing. Inheritance helps us solve this problem, in some cases.

All cars have similar properties – they have engines, doors, wheels, seats, and various other car things. Some cars have different traits – a 1967 Plymouth Fury Station Wagon drives a lot differently than a 2010 Aston Martin DB9.

public class Car {
  // details go here
}

public class StationWagon : Car {
  // more details
}

public class SportsCar : Car {
  // very fast details
}

Inheritance is a way to work with the same root data and behaviors but provide additional specialization. Our SportsCar class might provide a SpinTires or BurnOut method whereas the StationWagon is not capable of performing a burn out.

Another way to think about it is to say that inheritance (and also interfaces) is a way of describing “is a” relationships. A SportsCar is a Car. Anywhere that we can use a Car, we can also use anything else that is a Car. So, we can use a StationWagon in any method that expects a car. But, since a StationWagon isn’t a SportsCar, it won’t work there. See how that works?
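A quick hypothetical sketch of that “is a” rule (the Wash and DragRace methods are invented for illustration):

public class Garage {
  // Any Car will do here: a StationWagon or a SportsCar works too.
  public static void Wash(Car car) { /* scrub scrub scrub */ }

  // This method demands a SportsCar specifically.
  public static void DragRace(SportsCar car) { /* vroom */ }

  public static void Demo() {
    Wash(new StationWagon());   // fine: a StationWagon is a Car
    Wash(new SportsCar());      // fine: a SportsCar is a Car
    DragRace(new SportsCar());  // fine
    // DragRace(new StationWagon());  // compile error: not a SportsCar
  }
}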

If this seems like a huge set of abstractions, you’re absolutely right. A lot of the concepts of object-oriented programming can be implemented using purely procedural code, but it’s considerably easier to express some concepts using object-oriented code. There is some performance overhead, but many developers consider the performance penalty a fair trade for the ease of development.

Interfaces

The duckroll fully implements IWheeledVehicle

Interfaces are where things get a little bit tricky for some people. An interface is a contract. When we say that a class implements an interface, we’re really saying that the class has all of the methods defined in the interface. An interface is an abstract type that can’t be directly created. Interestingly enough, though, we can create classes that implement an interface and then use the class anywhere that the interface is accepted.

If we create an IWheeledVehicle interface (interfaces in the .NET world are typically named with an I), we can define methods that accept an IWheeledVehicle parameter. We can use any object that provides an implementation of IWheeledVehicle with that method. Even if that object is a duckroll.
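Here’s a minimal hypothetical sketch (the member names are invented):

// The contract: anything claiming to be an IWheeledVehicle
// must provide these members.
public interface IWheeledVehicle {
  int WheelCount { get; }
  void Roll();
}

// The duckroll holds up its end of the contract...
public class Duckroll : IWheeledVehicle {
  public int WheelCount { get { return 2; } }
  public void Roll() { /* waddle forward */ }
}

// ...so a Duckroll works anywhere an IWheeledVehicle is accepted.
public class Demo {
  public static void TakeForASpin(IWheeledVehicle vehicle) {
    vehicle.Roll();
  }
}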

In Summation

Object-oriented programming is a strange concept when you first encounter it, but it provides useful abstractions that make it easier for developers to work with data. Understanding how developers might be consuming data is going to make it easier for you to work with them and create amazing applications.

If you’re going to be doing a lot of work with developers for any length of time, it would potentially be a good idea to take a look at a book or two on the language they’re using. You don’t need to become proficient, but you should understand the concepts they’re working with.

Special thanks to Matt Nowack (blog | twitter) and Mike Peschka (blog | twitter) for reviewing this and providing feedback. Also, thanks to Aaron Nelson (blog | twitter) for the original questions that sparked this blog post.

Would the Fastest Cloud Please Step Forward?

Not so fast, there, buddy.

CloudSleuth has released a new tool that allows users and customers to see how the different cloud providers stack up. As of 6:45 PM EST on August 1, Microsoft’s Azure services come out ahead by about 0.08 seconds.

0.08 seconds per what?

Turns out that it was the time to request and receive the same files from each service provider. Is this a meaningful metric for gauging the speed of cloud service providers? Arguably, yes. There was considerable variance across the different cloud providers, but that is also to be expected since there are different internet usage patterns in different parts of the world.

More importantly, using speed as a metric points out something much more important: As we develop more and more applications in the cloud, we will need new benchmarks to determine the effectiveness of an application. We can no longer just look at raw query performance and raw application response time. We need to consider things like global availability and global response times. In the past, we could get away with storing data on a remote storage server and caching commonly used files in memory on our web servers or in a separate reverse proxy server. Moving to the cloud makes it more difficult to accomplish some of this advanced performance tuning. Arguably, our cloud service providers should be handling these things for us, but that may not be the case.

With the proliferation of cheaper and cheaper cloud-based hosting (did you know you can host basic apps for free with Heroku?), the problems that used to plague only large customers can now bother smaller customers. As soon as you begin distributing data and applications across data centers, you can run into all kinds of problems. If you are sharding data based on geographic location, you may have problems when a data center goes down. Worse than that, because you’re in the cloud, you may not notice it.

This isn’t meant to be fear mongering; I’ve come to the conclusion that the cloud is a great thing and it opens up a lot of exciting options that weren’t available before. But moving applications into the cloud carries some risks and requires new ways of looking at application performance. Latency is no longer as simple as measuring the time to render a page or the time to return a database query. We need to be aware of the differences between different cloud providers’ offerings and why performance on one provider may not directly equate to performance with a different provider.
