A few weeks ago, Unravel traveled to New York to take part in the Strata Data Conference. Strata is always a great opportunity to meet with customers, prospects, peers and influencers across the industry. To be honest, these events can start to seem a little stale with many of the same vendors and buzzwords showing up again and again. But that wasn’t the case at Strata – there were a number of exciting changes underway and a shifting attitude that seemed palpable.
BIG DATA ON CLOUD & HYBRID ENVIRONMENTS
One thing that stood out to me right away was the growing adoption of cloud. Cloud is a trend that’s been discussed for years, but traditional enterprises had not really embraced it. At Strata, we discovered that’s no longer the case, as many enterprises have begun to run full production workloads in AWS and Azure. Those customers include healthcare companies and major financial institutions, which had previously expressed concerns regarding security and compliance. But they’ve finally come around, and when the financial sector adopts a technology, you know it’s for real.
From a Big Data perspective, the growth of cloud is a terrific development. Big Data has always been better suited for the cloud than on-premises due to its elastic compute requirements. For example, let’s say you’re an e-commerce company that needs to accommodate a surge in online traffic on Black Friday. If your Big Data is on-premises, you’d need to buy a ton of extra servers, which you wouldn’t really need the other 364 days of the year. But if you were in the cloud, you could just click a button and scale instantly, making it cheaper and easier.
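The elasticity argument above can be sketched with a toy capacity calculation. Everything here is hypothetical for illustration — the function name, the per-node throughput, and the traffic numbers are assumptions, not benchmarks:

```python
import math

def nodes_needed(requests_per_sec: float,
                 capacity_per_node: float = 500.0,
                 min_nodes: int = 2) -> int:
    """Nodes required to serve the given load, never below a floor.

    capacity_per_node is an assumed throughput per server; min_nodes
    keeps a small baseline cluster running at all times.
    """
    return max(min_nodes, math.ceil(requests_per_sec / capacity_per_node))

# A typical day vs. a Black Friday surge: an elastic cloud cluster
# grows only while the spike lasts, then shrinks back. An on-prem
# deployment would have to own the peak-sized cluster year-round.
print(nodes_needed(1_000))    # → 2 nodes on a normal day
print(nodes_needed(25_000))   # → 50 nodes during the surge
```

The point of the sketch is the gap between the two numbers: on-premises you pay for 50 nodes all year; in the cloud you pay for 2 most of the time and 50 only on the surge days.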
Despite the cloud’s ascension, it’s also become clear that traditional enterprises will not completely abandon their on-premises infrastructure. Instead, most of the companies I talked to at Strata have begun to take a hybrid approach, keeping certain workloads on-premises and moving other ones to the cloud.
USE CASE, NOT SYSTEMS
Big Data is evolving in a way that resembles the growth of the cloud. At previous events, people were still navigating the basic Big Data ecosystem. The conversation always centered on systems and vendors – How should we store and process data? Should I use Hadoop or Spark or Kafka or all three? Which analytics tools should we choose? During Strata, it became apparent that those conversations have finally been settled. We’ve now moved to a new era in Big Data where the focus has shifted from systems to use cases.
Organizations have chosen and deployed their systems and tools. Now they want to get the most out of them. They’ve captured the data; now they want to know how to apply it. That means they want to know how they can implement artificial intelligence in their products, or how they can prevent fraud, or how to create a recommendation engine. They’ve gone beyond the basics and are exploring specific use cases.
The traditional enterprises – banks, healthcare companies, Fortune 500s, etc. – are all in on this transition, just like they’ve bought into the cloud. Phase two is about driving value from Big Data, which gives these companies a considerable edge.
The reality is that Big Data is hard. There is no one size that fits all. There is no single tool or system that can be leveraged for everything customers want to do. In the past, enterprises could use a monolithic Oracle stack to fill all their needs. Today’s Big Data stack is made of many different parts, each of which does its job better than any alternative.
Enterprises have now chosen those parts. To get value out of their data, they need to figure out how best to use it to create data-driven products. That is the challenge of this next phase in Big Data.