by Linda Capcara
In part one of a two-part article series, Frank Berry, storage industry analyst, founder of IT Brand Pulse, and editor of TheFibreChannel.com, recently spoke with StorageIO founder Greg Schulz about Fibre Channel SAN integration with OpenStack, why Rackspace is using Fibre Channel, and more.
Frank: What are you seeing for FC SAN Integration with OpenStack?
Greg: When the conversation is around OpenStack, it is usually in the context of commodity, low-cost capabilities. From a storage standpoint, a lot of the OpenStack conversations also tend to gravitate toward OpenStack Swift, or object-based storage.
The interesting thing is that if OpenStack Swift or object is your sole focus, what could be lying underneath it is regular servers with regular storage and regular LUNs being served up. Some would say that defeats the whole purpose of leveraging low-cost commodity hardware, and it probably would in a smaller-scale type of environment. On the other hand, if you have a bunch of storage lying around and you can access that storage via Fibre Channel, why can't you serve that storage up to a server that, in turn, gets repurposed as a giant OpenStack Swift server?
In the back end of almost every object storage system out there today is block storage. It tends to be block DAS, but back in the mid-1990s a lot of things were DAS, and we saw a shift toward shared storage systems – a shift from proprietary to networked storage with things like Fibre Channel supporting things like SCSI FCP.
What that means is that, from a Fibre Channel perspective, OpenStack is just something else you can layer on top. We were focusing there on Swift, since everybody likes to talk about objects because they're something new and shiny. Yet objects have been around for a long time.
Let's shift the conversation around OpenStack to something else called OpenStack Cinder, which is the block capability for OpenStack – in other words, where you save your virtual machine images and things like that. There again, a lot of the OpenStack Cinder block implementations are associated with iSCSI, but iSCSI is not the only transport Cinder can sit on.
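As a rough illustration of this point, Cinder's multi-backend support can expose a Fibre Channel array alongside an iSCSI backend from the same cinder.conf. This is a minimal sketch only: the section names and the `<vendor>` driver placeholder are assumptions for the example, not details from the interview, though `enabled_backends`, `volume_backend_name`, and `volume_driver` are standard Cinder options.

```
# cinder.conf sketch (hypothetical backend names; pick a real vendor FC driver)
[DEFAULT]
enabled_backends = fc-array,iscsi-lvm

[fc-array]
volume_backend_name = FC_ARRAY
# A vendor-supplied Fibre Channel driver would go here:
volume_driver = cinder.volume.drivers.<vendor>.<FCDriver>

[iscsi-lvm]
volume_backend_name = ISCSI_LVM
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
```

A matching volume type (for example, `openstack volume type set --property volume_backend_name=FC_ARRAY fc-tier`) would then steer new Cinder volumes onto the Fibre Channel array while everything else lands on iSCSI.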
There are a lot of opportunities here. Just as in the past you could layer NAS on top of a SAN, you can layer OpenStack on top of a SAN – you can layer a lot of these different things on top of a SAN. What that really means is that for a new, greenfield environment you may not go that way. But for an existing environment – an enterprise that happens to have some of these technologies lying around that it wants to leverage – there's no reason you can't take part of an array, allocate it over to OpenStack, and serve it up as object, serve it up as block, serve it up as Manila file shares, or put OpenStack Trove on top of it for a database. Ultimately, all of those, somewhere in the stack, leverage block storage, which you certainly could do with Fibre Channel.
Frank: What’s your thinking on why Rackspace or the non-hyperscale guys are using FC SANs? Just because they know it?
Greg: Up to a year or so ago, a lot of people were shocked to hear that Rackspace is actually using EMC VMAX, other EMC storage, and NetApp filers – they were appalled. They were convinced all cloud providers have to run OpenStack on commodity hardware, because that's all they knew; that's all they had read about.
The reality is that Rackspace, as part of its services, lets you actually subscribe and get your entire VMAX or your entire NetApp. You can have your own private SAN with Rackspace, or you can share part of a SAN with them. They aren't unique – there are a lot of providers that cross between the traditional colo and cloud worlds that give you that capability. As part of that, they are going to use iSCSI where they can for the lower cost. But where there's a higher-performance need to connect to your server located at Rackspace, or at Equinix, or take your pick, they are going to leverage Fibre Channel, because they have the economies of scale to make it work.
You can make commodity work at scale. But at some point you may realize that instead of having 10K or 20K JBODs, maybe it makes sense to put a big array back in and either SAS-attach it or Fibre Channel-attach it. At a smaller scale, though, the commodity approach doesn't work.
What it really comes back to is that you have different tools in your connectivity toolbox. But if all you have is one tool that looks like a hammer, everything is going to look like a nail. So if all you have in your toolbox is iSCSI, or SAS, or SATA, or NVMe, or object, or NFS, or Swift, or NTB, or whatever it is, everything is going to look that way.
But if you have that flexibility, what you can start to figure out is when you can use IVA and when you can use Fibre Channel. You realize – hey, on Fibre Channel I can use other ULPs. Hey, wait a minute, it's not just FCP; you certainly have FICON on there. You might even have this new thing called NVMe over Fabrics on Fibre Channel at some point. People have to start looking beyond the speed and bandwidth of Fibre Channel. It's a transport that can carry other upper-level protocols and other upper-level payloads. It's not just about SCSI.