A quick look at MongoDB – part 1

NoSQL databases are all the rage lately, with some touting them as the demise of the traditional RDBMS. I don’t share that view, but thought it worthwhile to begin exploring the new technologies. A coworker of mine had gone through a seven-week online course for MongoDB through MongoDB University (https://university.mongodb.com/) and was very happy with the outcome, so I thought I’d give it a shot, too.

The course was free and took 2–4 hours a week to complete. It consisted of video “lectures” with ungraded quizzes at the end to help reinforce the material. Each week had a few homework assignments to turn in. Some of the material was dated, but they made a strong effort to point out those areas. I was able to complete the course with a minimal amount of effort, yet still felt very good about the knowledge I’d gained.

So why MongoDB? It was created by a group of developers, so one of the first attractions to the technology is that you interact with the data through JavaScript. A developer doesn’t have to worry about learning SQL; they can stick with their “native” language. Additionally, all data is stored in JSON (JavaScript Object Notation) documents, a format most programmers already know. The underlying data structure of the “database” more closely resembles OO programming constructs. One more attraction is that interaction with the data is “schema-less”: there is no set structure that must be adhered to when getting information out of the JSON documents. But one of the main advantages of MongoDB is that it scales well horizontally, without the need to purchase high-end hardware. This is ideal for dealing with “Big Data” (yes, I used the buzzword).
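
To make that concrete, here’s a quick sketch from the mongo shell (the people collection and its fields are just made up for illustration):

    // Insert a JSON document -- no CREATE TABLE or schema definition first.
    db.people.insert({
        name: "Jane",
        title: "DBA",
        skills: ["SQL Server", "MongoDB"],            // arrays nest naturally
        address: { city: "Sacramento", state: "CA" }  // so do sub-documents
    });

    // Query with a plain JavaScript object instead of SQL:
    db.people.find({ "address.state": "CA" });

    // "Schema-less": documents in the same collection don't have to match.
    db.people.insert({ name: "Bob", favoriteMovie: "Rocky II" });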

How does MongoDB scale on commodity hardware? It uses a concept called “sharding”. Think “partitioning”. Data is physically separated through the use of a “shard key”, and access is managed through a central routing process (mongos). Each server contains a subset of the data, “partitioned” across each node. Node one would contain IDs 1 through 1,000,000, node two would have 1,000,001 through 2,000,000, node three 2,000,001 through 3,000,000, and so on:

[Image: MongoDB shard]
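
Setting that up looks roughly like this from the mongo shell, connected to the cluster’s central routing process (the imaging database, documents collection and documentId shard key below are all hypothetical):

    // Enable sharding for a database, then shard one of its collections
    // on a numeric ID so ranges of IDs land on different nodes, as above.
    sh.enableSharding("imaging");
    sh.shardCollection("imaging.documents", { documentId: 1 });

    // See how the ranges ("chunks") are spread across the shards:
    sh.status();
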
In addition, MongoDB uses the concept of “replica sets” (think “replication”). A replica set contains multiple servers, each holding a copy of the data; this allows read-only secondaries to help distribute load. It also maintains high availability:

[Image: MongoDB replica set]
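
Standing up a three-member replica set from the mongo shell is a short exercise; here’s a rough sketch with made-up hostnames:

    // Run from one of the three mongod servers to form the set:
    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "mongo1.example.com:27017" },
            { _id: 1, host: "mongo2.example.com:27017" },
            { _id: 2, host: "mongo3.example.com:27017" }
        ]
    });

    // Allow this connection to read from a secondary, spreading read load:
    rs.slaveOk();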

A blending of the two configurations creates a highly available, high-performing environment:

[Image: Replica set and sharding]
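
In the blended configuration, each shard is itself a replica set. Registering the shards looks something like this (hypothetical names again):

    // Each shard is added as "replicaSetName/member1,member2,...":
    sh.addShard("rs0/mongo1.example.com:27017,mongo2.example.com:27017");
    sh.addShard("rs1/mongo4.example.com:27017,mongo5.example.com:27017");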

Servers can be added or removed without interruption of service, making this an inexpensive way to scale horizontally. It also allows for maintenance without sacrificing high availability: I can do a “rolling” patch or upgrade across the servers while still allowing access to the data, and I’m not completely down if a server goes offline.
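
A rolling patch of a replica set member, for example, might start like this from the mongo shell (the 300-second timeout is just an example value):

    // Ask the current primary to step down and not seek re-election for
    // 300 seconds, so a secondary takes over writes while we patch:
    rs.stepDown(300);

    // Confirm which member is now PRIMARY before taking the old one offline:
    rs.status();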

This was part 1 of a series of blog posts, offering a brief overview of MongoDB and its architectural advantages. Stay tuned for more posts.

T-SQL Tuesday #049: What’re we waitin’ fer?!?!?

Had to channel my inner Mickey for the title of this post…Rocky II is one of the first movies that I remember seeing as a re-run on cable…but I digress.

When I read Robert’s post and subject for this month’s T-SQL Tuesday, my mind immediately went to one of my first experiences with wait stats, as it was a very poignant “ah-HAH!” moment for me. I had been an administrator for several SQL Server instances for a while, but knew very little about SQL Server’s wait stats. A few nights before my incident, I had been reading through Jonathan Kehayias’ (B|T) “Troubleshooting SQL Server: A Guide for the Accidental DBA”, which has a section devoted to finding and diagnosing issues around waits. I had saved the supplied queries and was developing a method for saving wait information and analyzing it over time, to help with my administration.

So there I was at work, taking care of whatever was on the radar that day. Cue my users: emails and phone calls began coming in, telling me that our campus imaging database was unresponsive and that the application was down. Our monitoring software wasn’t reporting any unusual resource strain on the instance, and the other databases weren’t having any problems. Enter my shiny new query: I ran it and found a wait on BACKUPTHREAD. It turned out that my hourly backup of the imaging database had hung; using the very same query, I found that I wasn’t having any problems with I/O. I was unable to kill the backup process, and a phone call to Microsoft had me restart the instance.

It was awesome to be able to use something that was pretty fresh in my mind and be able to get to the root of an issue in just a few minutes.  I remember thinking “heck, that was pretty badass.”

Thanks to Robert Davis (B|T) for this month’s topic.

Here’s some more information about T-SQL Tuesdays (#tsql2sday on Twitter):

http://sqlblog.com/blogs/adam_machanic/archive/2009/11/30/invitation-to-participate-in-t-sql-tuesday-001-date-time-tricks.aspx

T-SQL Tuesday #047 – The stuff we all get!

My story isn’t so much about quality as it is about quantity.

PASS Summit 2011 was my first time attending that most awesome of events, and I was a bright-eyed noob. I had no idea what to expect and was already a bit overwhelmed by travel accommodations, light rail rides and hotel check-in. Walking into the Tuesday night welcome reception did nothing to ease the nerves.

At the reception came my first “score”: a blue umbrella. After the reception was casino night with SQLServerCentral.com…this yielded an iPod shuffle and a t-shirt. Two hours into the trip and I was already playing with house money.

Over the next three days, I amassed a pile of t-shirts, 2 hats, some rubber duckies, a new backpack, a flashlight, bouncy balls that lit up, and more pens and note pads than Kinko’s. I nearly had to check an extra bag to haul all my booty home. My wife loved it ‘cuz I’m a “t-shirt and jeans” kind of guy…she wouldn’t have to buy me clothes for another year.

[Image: “Don’t be a tool…” t-shirt]

My favorite shirt was bright orange and read, in misaligned letters across the front, “Don’t be a tool…”. It’s still my go-to camisa for Saturdays around the house.

I’ll be attending this year’s Summit and am looking forward to more SWAG!

Thanks to Kendall Van Dyke (B|T) for this month’s topic.

Here’s some more information about T-SQL Tuesdays (#tsql2sday on Twitter):

http://sqlblog.com/blogs/adam_machanic/archive/2009/11/30/invitation-to-participate-in-t-sql-tuesday-001-date-time-tricks.aspx

T-SQL Tuesday #046 – Rube Goldberg in order to CONNECT to SQL Server

My first T-SQL Tuesday post, and am I excited! My story is going to vary slightly from the original subject in that I’m going to outline a Goldberg machine that was created in order to CONNECT to SQL Server, showing how much my former employer valued the Microsoft database. Thanks to Rick Krueger (@DataOgre|B) for a great way to tell some fun IT stories.

MPE – IMAGE/SQL

Flash back to the ’80s, where a young man creates a general ledger accounting program for the California state university he’s working for. The app is created using the MPE programming language, connecting to an IMAGE/SQL database on a then-cutting-edge HP3000 server. The app is so successful that the young man starts a company and begins selling his product. He recruits students from the college and works out of his garage.

COBOL – ORACLE/INFORMIX

As the industry evolved and sales grew, the application needed to scale beyond its aging hardware: enter the world of UNIX. Cobol is chosen as the new, state-of-the-art language, and Oracle or Informix is now the database. Customers could choose any flavor of UNIX they liked, as long as the Micro Focus Cobol compiler was compatible and Oracle/Informix would install. But what to do with the old MPE -> IMAGE/SQL code?

Instead of re-writing, the company chose to create a “transport” layer of code that would allow MPE to co-exist with Cobol, making calls through that layer to the “shiny new database”. All new development was done in Cobol, but the MPE code was so fundamental that it still played a HUGE role in the functionality of the application.

THE INTERNET

The industry continued to evolve: enter the brave new world of the internet. Instead of clunky telnet clients that had to be installed on each desktop, the company decided it needed to create a web application, one where clients only needed an internet browser to access an internally hosted IIS server. But IIS is a Windows product, and although all new development was being done in .NET, the MPE/Cobol code was still the backbone of the product. For most modules, the .NET code was making calls to the MPE/Cobol code, which then interacted with the Oracle/Informix database living on the UNIX server.

[Image: Mouse Trap game board]

Which one of these do you picture as the bathtub filling with water to tip a scale to release a ball bearing that slides down a ramp to…

SQL SERVER, FINALLY

In order to remain current in the industry and reduce the cost of the hardware clients needed to host the app, SQL Server was offered as an alternative database. But legacy code is legacy code, and MPE “transported to” Cobol does not play nicely in Windows. The solution: emulate UNIX on Windows! Using MKS (a UNIX emulation toolkit for Windows), the MPE/Cobol code could now be used to “interact” with the SQL Server database. The interaction was done using SQL Server logins (no sense in leveraging Active Directory at this point), and reporting was done using the proprietary solution offered with the software (SSRS???).

Interestingly enough, the company was successful to the point that it attracted the attention of a corporate giant and was sold for a handsome sum.

In summation, “you can’t tell a book if the title’s covered” (http://www.imdb.com/character/ch0029466/quotes).

Hopefully you’ve never had to support this type of machine. But I do look back on those help desk days with fondness, as I had no clue what I was doing but was surrounded by great people.