Storing Big Data with EMC Isilon

What is BIG DATA? 1PB … 16TB … 32GB? Honestly, it means something different to everyone, but no matter what it means to you, the main goal is to make managing it simple and easy. Isilon could be just the solution you are looking for if your current data management platform or strategy is becoming challenging.

What is Isilon?

So let me start by saying that Isilon is a scale-out NAS solution, meaning that as you need to expand your storage you simply buy more nodes and stack them together (think LEGOs). In that respect it is very much like the HP P4000 LeftHand solution, but the P4000 is block-level storage while Isilon is primarily file-level storage with some block storage features.

Here is a Visio-style diagram of the Isilon architecture:

Why Isilon is awesome

The beauty of Isilon is its OneFS, and it means just what it says: there is only one file system that spans all nodes in the cluster, and it manages everything: the RAID, the volumes, and the file system. This is how Isilon scales so easily with very little management. As you add nodes to the cluster, no configuration is needed to realize the increase in capacity. Your data is also automatically load balanced across a back-end InfiniBand network so that all data is distributed evenly. OneFS also looks for hot data and spreads it out among nodes so that no one node is working harder than the others.

The only configuration decision you need to make at all with Isilon is how much protection you need. To do that you simply pick how many nodes in your cluster can be lost, and how many drives per node you are willing to lose; after that the OneFS magic kicks in and does the rest.
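To get a feel for the capacity trade-off behind that protection decision, here is a toy sketch (my own illustration, not the OneFS API) of how an N+M protection level splits raw capacity between data and protection:

```python
# Hypothetical sketch of the capacity cost of N+M protection.
# Function name and numbers are illustrative, not part of OneFS.

def usable_fraction(data_units: int, protection_units: int) -> float:
    """Fraction of raw capacity left for data under N+M protection."""
    return data_units / (data_units + protection_units)

# Example: stripe each write as 4 data units + 2 protection units
# (survives the loss of any 2 units), leaving ~67% usable capacity.
print(usable_fraction(4, 2))
```

The more failures you ask the cluster to survive, the more raw capacity goes to protection instead of data; that one knob is essentially the whole configuration decision.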

So you're probably thinking by now that this is all great, but why Isilon? Why not a typical EMC VNX array like I'm used to? After all, the Celerra architecture (and now the VNX) is tried and true. My two main reasons, though, are BIG DATA and ease of management. An EMC VNX array is not capable of presenting a volume bigger than 16TB; while that is a lot of space, it may just not be big enough for your big data needs. The other disadvantage of your typical array is that when you do add another node (P4000) or another shelf of disks (EMC VNX, NetApp, etc.) you will have to log in to the SAN and tell it what to do with those new disks. They will not automatically grow capacity by themselves, whereas Isilon will literally do just that.

Oh, did I mention that it only takes about 60 seconds to add a new node to a cluster? Seriously, that is it; take a look for yourself right here. You power on the box and either use the CLI to add it to the cluster (which is super simple: select "join existing cluster" and then pick the cluster to join), or log in to the GUI and use it to add the node. Either way, it will literally take you longer to rack and cable the box than it will to add it and have people using it. You're not going to get that with your typical SAN array.

So how big does Isilon get? Right now Isilon supports volumes up to 15 petabytes (reference here), and there are rumors that it may be going even higher! I guess you could call that big data.

Where I see Isilon being used

Because Isilon focuses on NAS services, including NFS and CIFS, I see Isilon being used at any company that has lots of people updating data: a design firm with tons of graphics files, or maybe a mechanical manufacturer with CAD drawings. On the other side of the spectrum, hospitals that do MRIs and other imaging need massive amounts of storage too. I remember working on one such machine that mounted a Linux NFS server to which those images were pushed. It worked OK, but if that Linux server went down (and it did; that's why I was there) they were unable to pull up images older than a week. If that customer had been using Isilon they would have had two major benefits: 1) they could have expanded their storage VERY easily as they needed, and 2) they would not have needed to call me when one box (one node) went down, because Isilon has built-in redundancy between nodes.
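From the imaging host's point of view, the cluster is just another NFS export. A rough sketch of what that mount would look like on a Linux client (the hostname and export path here are made up; substitute your own cluster name and path under /ifs):

```shell
# Hypothetical example: mounting an Isilon NFS export on a Linux imaging host.
# "isilon.example.com" and "/ifs/medical/images" are placeholder names.
sudo mkdir -p /mnt/images
sudo mount -t nfs isilon.example.com:/ifs/medical/images /mnt/images

# Verify the export is mounted
df -h /mnt/images
```

The nice part is that the client side never changes: nodes can be added behind that one mount point and the extra capacity just shows up.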

Remember, too, that Isilon can do replication. So if you have lots of drawings or CAD files that need to be shared between your Ohio office and your California office, you can do that as well.

Isilon can also be used as VMware vSphere storage by utilizing its NFS capabilities. Check here for the best practices guide.
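For the vSphere side, mounting an Isilon export as an NFS datastore is a one-liner from the ESXi shell (hostname, export path, and datastore name below are hypothetical; see the best practices guide for the recommended layout):

```shell
# Hypothetical example: mounting an Isilon NFS export as a vSphere datastore.
# Replace the host, share, and volume name with your own values.
esxcli storage nfs add \
  --host=isilon.example.com \
  --share=/ifs/vmware/datastore01 \
  --volume-name=isilon-ds01

# Confirm the datastore is mounted
esxcli storage nfs list
```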

More Information

For more information check out Jason McCarthy's blog. Jason is a vSpecialist working for EMC whose focus is on Big Data and Isilon. He has also published two articles so far on setting up SmartConnect, which I did not mention above, but basically think of SmartConnect as the Isilon version of PowerPath. It tells your hosts about all the possible connection points to the Isilon cluster and helps load balance and provide more fault tolerance for applications.
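In its basic mode, SmartConnect answers DNS lookups for the cluster's zone name with a different node IP each time, which is how client connections get spread across nodes. A toy model of that round-robin behavior (my own illustration, not the actual SmartConnect implementation):

```python
# Toy model of SmartConnect basic (round-robin) DNS balancing:
# each lookup of the cluster's zone name returns the next node's IP.
from itertools import cycle

class RoundRobinZone:
    def __init__(self, node_ips):
        self._pool = cycle(node_ips)

    def resolve(self, zone_name: str) -> str:
        # every DNS query hands out the next node IP in rotation
        return next(self._pool)

zone = RoundRobinZone(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
answers = [zone.resolve("cluster.example.com") for _ in range(4)]
print(answers)  # the 4th client wraps back around to the first node
```

Jason's two SmartConnect posts cover the real setup, which works by delegating a DNS zone to the cluster.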

Part I  and Part II

More Screenshots

The Homepage

Share Types:

New Share (although it's not recommended):



8 Responses to "Storing Big Data with EMC Isilon"

  1. Another advantage of Isilon vs. Celerra/VNX is file writes. If you are working with large files (say 1GB in size) and try writing to a Celerra/VNX with lots of checkpoints, the performance is horrible (often <10MB/s). See discussion on that issue here: https://community.emc.com/message/593220 The problem has to do with the copy-on-write handling of checkpoints.

    However, try doing reads/writes with an Isilon, even if it has lots of snapshots. The Isilon will write/read at around 100MB/s sustained (using 1Gb connections). Its load balancing will help keep these performance numbers even with multiple users.

  2. Coincidentally, EMC just announced that they will be updating the VNX with new snapshot technology. They will be implementing an allocate-on-write method instead of the current copy-on-write, which is what causes the extreme slowdowns when writing to Celerra/VNX volumes with lots of checkpoints. For more information, see point 4 in this post: http://virtualgeek.typepad.com/virtual_geek/2012/05/vnx-inyo-is-going-to-blow-some-minds.html

    I am personally happy to see the change but am still a disgruntled EMC customer nonetheless because of such a poorly performing Celerra these past couple of years since purchase. I will be curious to see if EMC will provide the code update to current Celerra owners.

  3. Yes, we are sure snapshots cause the slowdown. On a file system with no snapshots, performance is good. As soon as we create a single snapshot, performance is cut (about) in half. Additional snapshots shrink performance even further very quickly. We go from about 100MB/s down to about 10MB/s. We wrestled with this for months with EMC until they finally admitted that this is how it works and there isn't much we could do about it.

  4. Hi Justin

    As per my understanding, in the case of a write IO, the IO is divided into 128KB stripe units at the captain node and then sent across all the nodes in the cluster.
    Let's suppose we have a 6-node cluster with a 4+2 protection level, and a write IO of size 512KB needs to be written to the cluster.
    Isilon will divide it into 128KB stripe units (128 * 4) and the ECC code will be calculated for these.

    Node: 1    2    3    4    5     6
    Data: D1   D2   D3   D4   ECC1  ECC2

    If each node has 25 drives in it, will the 128KB stripe unit (D1) on node 1 be distributed among all the drives of node 1, or will it be written only to a single drive?

    Thanks
    Gautam

  5. I would say you should get in touch with an Isilon SE or support person, as the training that I received on the product was never that deep. But I would assume that it is protected on each node… though at the same time you can select different levels of protection, if I recall properly.
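The 4+2 layout described in the question above can be sketched in a few lines (a toy model for illustration only; real OneFS placement decisions are more involved and per-node drive striping is the part the question asks about):

```python
# Sketch of a 4+2 stripe: a 512KB write split into four 128KB data
# units plus two ECC units, one unit placed on each of six nodes.
# Illustrative only -- not how OneFS actually computes placement.

STRIPE_UNIT = 128 * 1024  # 128KB stripe unit size

def layout(write_size: int, data_units: int = 4, ecc_units: int = 2):
    """Return the per-node placement for one stripe of an N+M write."""
    assert write_size == data_units * STRIPE_UNIT, "one full stripe only"
    placement = {f"node{i + 1}": f"D{i + 1}" for i in range(data_units)}
    for j in range(ecc_units):
        placement[f"node{data_units + j + 1}"] = f"ECC{j + 1}"
    return placement

print(layout(512 * 1024))
# {'node1': 'D1', 'node2': 'D2', 'node3': 'D3', 'node4': 'D4',
#  'node5': 'ECC1', 'node6': 'ECC2'}
```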
