LOCATION: Santa Clara, CA
We spoke with RackWare CEO and Co-Founder Sash Sunkara about the company's unique platform and its commitment to advancing both the RackWare Management Module and the cloud disaster recovery industry as a whole.
When we started, Amazon had come out with Elastic Compute Cloud (EC2) and there was a lot of buzz. But it was mostly smaller companies using it: SMBs that didn't have IT departments, or startups that didn't want to invest in infrastructure. Enterprises weren't that enamored; they faced a lot of technical challenges, and there were a lot of security issues [regarding cloud computing and data storage]. We believed there was value that enterprises could gain from the cloud: the agility, the flexibility and the potential cost savings. But the challenge was that people didn't want to rebuild applications.
Enterprises have a big investment in their current application set, and they've done a lot of testing and qualification [to optimize and tailor their applications]. Rewriting any of this would be challenging and costly, and it would be a significant OPEX burden. What we do, and the significant difference in our technology, is grab enterprise applications as-is, move them to the cloud and maintain much of the configuration they had in the enterprise. We build a bridge so that enterprises can easily move in, move out or move to a different cloud. Once they're [in the cloud space], they can monitor their workloads, move those workloads to a different cloud or scale them on demand.
With RackWare's disaster recovery capability, we're saying, "Look, disaster recovery shouldn't mean you have to build another data center, buy redundant hardware or be locked into a specific vendor. You should have a high level of automation so that a large IT staff is not required to either deploy or test DR." That's the big difference in our software today. We're really talking about anywhere-to-anywhere DR: the flexibility to choose the cloud you want, plus a high level of automation that reduces the burden on your IT staff and reduces how much application developers have to get involved. The benefit is that these folks can easily test their DR scenarios, giving them peace of mind that their applications are going to work in case of an event.
Over the last 12 months we've had significant company momentum: we've seen a 75 percent increase in new customers, and we just hit the 200-customer mark in July. We've seen a lot of traction because there are a ton of enterprises that cannot really afford to rewrite their applications, but they also face big pressure to reduce the cost of doing business, the cost of their infrastructure and the OPEX to maintain that infrastructure.
Yes, it's a hard problem we're trying to solve, but we're not new to this issue; we've been working on it since late 2008. We spent 2009 and 2010 really building the core technology [of RackWare], and now we've been able to build management capability on top of that to make it much easier to use and maintain the cloud.
I'll just walk through a scenario: Let's say you're a big enterprise and you're thinking about potentially moving to a public cloud. You can install our software in your environment or in the cloud, and we can go in and automatically discover what's out there: what hardware you're running, what software, what the applications and their dependencies are, and what the networking infrastructure looks like.
We can give an inventory of what you have running, and we can also monitor the performance requirements of your applications. We can take all of that data, analyze it and let clients know which applications are good for the cloud. Once we do that, our software can go in via the network and capture a snapshot of the application. Once we capture that snapshot, we can replicate or clone that image and upload it to the cloud, let's say Amazon or Rackspace or CenturyLink.
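The discover-snapshot-replicate-upload flow described here can be sketched in a few lines. This is a minimal illustration of the sequence of steps, not RackWare's actual API; every function and field name below is a hypothetical stand-in.

```python
# Illustrative sketch of the migrate-to-cloud flow: discover a workload,
# snapshot it, clone the image, and tag it with a target cloud.
# All names are hypothetical stand-ins, not RackWare's real interfaces.

def discover(host):
    # The real product inspects hardware, software, application
    # dependencies and networking over the network; stubbed here.
    return {"host": host, "apps": ["webapp", "db"], "cloud_ready": True}

def snapshot(inventory):
    # Capture the application and its configuration as-is.
    return {"image_of": inventory["host"], "apps": list(inventory["apps"])}

def migrate(host, cloud):
    """Discover a workload, snapshot it, clone the image, set a target."""
    inventory = discover(host)
    if not inventory["cloud_ready"]:
        return None                      # analysis flagged it as unsuitable
    image = snapshot(inventory)          # capture the snapshot
    clone = dict(image)                  # replicate/clone the image
    clone["target"] = cloud              # e.g. Amazon, Rackspace, CenturyLink
    return clone

print(migrate("erp-01", "amazon"))
```

The point of the sketch is the ordering: analysis gates the capture, and the clone (not the original snapshot) is what gets pushed to the chosen provider.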
Once that image is in the cloud, we can launch it. Now you want to scale out. We can set a policy to say, "I have x machines running, and if you see traffic increase by more than 50 percent, add more resources automatically. If it goes below a certain percent, take away resources." Those are the migration and auto-scaling capabilities, but let's say your client said, "I want to use cloud for disaster recovery." Again, the same scenario: our software can sit in the cloud, and via a secure connection we can capture the application workload and data. We can replicate or clone it, upload it to the cloud, make sure it's tested and make sure it's running properly. Then we can use our policy framework to say, "Every hour, check for any changes on my local data center side, then capture those changes at the DR site. If anything fails on the primary side, fail over to the cloud and launch the machines in the cloud."
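The threshold policy described above ("if traffic increases by more than 50 percent, add resources") boils down to a simple decision function. The sketch below is an assumption-laden illustration of that kind of policy evaluation; the thresholds, bounds and names are invented for the example and are not RackWare's configuration.

```python
# Hedged sketch of a threshold-based auto-scaling policy: compare observed
# traffic to a baseline and add or remove one instance per evaluation.
# All thresholds and names here are illustrative assumptions.

def scale_decision(instances, baseline, observed,
                   up_pct=50, down_pct=20, floor=1, ceiling=16):
    """Return the new instance count after one policy evaluation."""
    change = (observed - baseline) / baseline * 100  # percent change
    if change > up_pct and instances < ceiling:
        return instances + 1    # add more resources automatically
    if change < -down_pct and instances > floor:
        return instances - 1    # take resources away
    return instances

# Traffic up 60 percent over baseline with 4 machines running:
print(scale_decision(4, 1000, 1600))  # -> 5
```

A real policy engine would evaluate this on a timer (the hourly check mentioned above for DR deltas works the same way) and act through the cloud provider's API rather than returning a number.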
Typically there are not many choices with physical infrastructure. You can do tape backup, which requires you to wait for hours, with a lot of manual intervention and a lot of IT staff involved. Let's say you have a Hurricane Sandy-like event. Your DR site can take hours, days or potentially weeks to come up. Other solutions that do replication carry a high cost as well. Let's say I'm running a bunch of expensive blades or expensive servers in my data center. I'd need comparable redundant hardware at my DR site, plus a staff that can maintain those servers as well as the application stacks on them. Having a duplicate data center doubles my costs.
The benefit of doing disaster recovery in the cloud is that you don't have to buy hardware; you're really just renting it. You're only paying for storage, not for CPU or memory. There's a significantly lower cost for running those machines in the cloud. When we add our high level of automation, on top of the automation the cloud brings you, you can come up within an hour in the case of a disaster.
The other piece is that because you have your data in the cloud, you don’t need a staff to manage it, and you have the high level of automation that we bring to the table. You can test DR as often as you want, and the more you test, the better off you are. Your organization will have the peace of mind to say, “In case we have an event, we know that we are going to have the right data, we know that we’re going to be up in an hour, and we’re paying a fraction of the cost avoiding a redundant data center for this purpose.”
DR solutions have really been about vendor lock-in. You buy these complicated solutions, qualify them and then can’t get away from them. You’re stuck with the technology, the costs, etc. But at RackWare, we give you the flexibility to choose the best cost and best-performing solution for your application. [We understand that] applications change over time, and that something that wasn’t important a year ago could be the most important application for your business right now. You need to be able to have that flexibility to change course depending on the business requirements.
I think our focus has always been enterprise. I think initially we had said it would be all about financial services because they’re typically leading edge. But we found that [many diverse industries seek cloud management and DR]: financial services, healthcare, big consumer brands, retail. Even brick-and-mortar stores that aren’t as technology-driven have adopted the cloud, because business is changing, technology is changing. We’re all dealing with events and disasters happening all of the time. That’s just the reality of our world these days. We’ve seen even conservative organizations adopt cloud. So the enterprise focus hasn’t changed over time, but I think the number of verticals has definitely expanded for us.
I think it's definitely about choice, but the other thing we do, especially for clients that are new to the cloud, is help them understand what they have. One of the things we find, especially with large organizations, is that people don't even know what applications they have. They have a lot of legacy software, and they don't know what it's doing because everybody is scared to touch it.
Moving to the cloud is really an opportunity to become more efficient and more secure, and to truly understand where your organization is going. The first step to understanding what your future looks like is to get an inventory of what you truly have. Ask "what are the problems?" and "what is our existing security level?" One of the things about aging applications and infrastructure is that they create security holes. Understand what you have, and figure out what belongs in the cloud; maybe put some applications in a private cloud deployment and some in a public cloud.
Then it’s making the decision about which cloud to go to, which will depend on your cost profile, your performance profile, and your SLA. The other critical piece is to not get locked in. You want to make sure that as your business changes, you can go with the best provider [for those needs].
The additional financing is going to help us continue to add higher levels of automation and scalability, and we’re adding all kinds of functions around policy and intelligence.
One of the things that we bring to the table is we have a lot of information. With our discovery capability, we have a lot of information about that environment. How do we feed that information back to our system to preemptively take care of situations that may happen to our clients? If we detect certain behaviors of a particular host, we can preemptively bring other resources to bear to take care of issues before they happen.
Those are the kinds of things that we are thinking about — using the intelligence, using the level of automation in the policy framework, to take care of situations before they can become an emergency or even high-risk. And then we are constantly working on usability, making it simple to use, adding lots of wizards and making sure that it’s an easy management framework. That’s what we’re thinking about for the next nine to 12 months, and those are the areas that we’ll be focused on.
I think we have this big battle happening, where you've got Amazon on one side and VMware on the other side. Then you've got the OpenStack contingent with the IBMs and the Red Hats and others. I think there's going to be some coalescing of the industry as people say, "we do have to make it easier for enterprises to adopt, and everything shouldn't be so different."
I think there’s going to be some level of consolidation that happens in the market, but I do believe there will be additional traction for OpenStack, with the open-source community being able to drag that forward. I really believe that OpenStack is going to continue to gain traction and I think the open-source community is going to have an impact on the cloud. I think you will have somewhat of a standardization of APIs across the industry to make it easier for folks to be able to use the cloud, move to the cloud and leverage the cloud.
Learn more about RackWare cloud management and other leading cloud data backup tools by visiting our Cloud Management research center. To find more top enterprise backup and security platforms, download our free Top 10 Enterprise Cloud Backup software report and compare pricing options, deployment models and standout features of industry-leading solutions.