Roman Stanek is a man on a mission. A veteran entrepreneur, he founded GoodData, an analytics company focused on customer journey insights, in 2007 with the goal of shaking up the business intelligence space and making data insights pervasive and easily accessible to everyone who needs them within an organization.
I first wrote about the company here about a year ago. In a new phone interview, Stanek said:
The notion of data democratization is all about companies trying to provide their employees access to the right information and technology at the right time so they can react faster and help companies become more efficient. Scaling data insights from 5% pervasive to 95% pervasive is analogous to the DevOps movement, which aims to put easier and more accessible technology in the hands of operations people. In the case of DataOps, that means making sure that analytics are business-user friendly and embedded seamlessly in the application at the point of work. You don't need an IT geek to get to the good stuff.
In the spirit of radical openness, GoodData has just released a new version of the GoodData.UI framework for faster and easier development and delivery of data-driven applications. GoodData also open-sourced the library, making the best practices, principles, and tools available to all application developers.
The framework and component library reflect years of expertise in helping companies build interactive, data-driven applications. The company hopes that open-sourcing GoodData.UI will help make analytics and insights truly pervasive. Said Stanek:
The low adoption of analytics and the subsequently limited access to insights are two of the fundamental problems that plague all business intelligence projects. If the industry wants to increase analytics adoption to be closer to 100 percent, data and insights can no longer be isolated from workflows and processes. We've built GoodData.UI to give embedded BI developers a proven design pattern for architecting insightful user experiences and data-driven decision-making.
The Snowflake effect
Last month, Snowflake, a cloud data platform, posted the largest IPO in software industry history, with a valuation of $70 billion. Few people know the data business better than Stanek, so I was anxious to talk to him about what it meant:
If you don’t know the history you might think Snowflake had always been a Silicon Valley darling working on a popular data solution but that’s just not true. For years, the industry was enamored with Hadoop and Snowflake was seen as an outlier.
For those of you with long memories, Apache Hadoop was unveiled on April 1, 2006 and was immediately seen by most industry leaders as the future of data warehousing. Inspired by Google's MapReduce work, Hadoop's primary goal was to improve the flexibility and scalability of data processing by splitting the work into smaller functions that run on commodity hardware. Its intent was to replace enterprise data warehouses based on SQL. But for all the hype, Hadoop was not without serious problems.
Unfortunately, Hadoop was far too complex, slow, and unwieldy to use efficiently. Its lack of speed was compounded by its focus on unstructured data: you had to be a "flip-flop wearing" data scientist to truly make use of it, and its unstructured-data foundations were a poor fit for enterprise warehousing. And so Silicon Valley floundered with Hadoop for ten years.
In 2012, Marcin Zukowski and his colleagues Benoit Dageville and Thierry Cruanes started Snowflake, a data warehousing company available exclusively in the public cloud. Snowflake took a different approach, Stanek explained:
Marcin and his teammates rethought the data warehouse by leveraging the elasticity of the public cloud in an unexpected way: separating storage and compute. Their message was this: don’t pay for a data warehouse you don’t need. Only pay for the storage you need, and add capacity as you go.
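The economics Stanek describes can be made concrete with a toy cost model: in a coupled architecture you pay for a fixed cluster sized to peak load around the clock, while in a decoupled one you pay separately for bytes stored and for compute actually used. All prices and workload figures below are invented for illustration, not Snowflake's actual rates.

```python
# Toy model, illustrative prices only: contrast a coupled warehouse
# (fixed cluster, billed 24/7) with decoupled storage and compute
# (billed independently, compute only while queries run).

STORAGE_PRICE_PER_GB_MONTH = 0.02   # hypothetical $/GB-month
COMPUTE_PRICE_PER_HOUR = 2.00       # hypothetical $/compute-hour

def coupled_cost(peak_nodes, node_price_per_hour, hours_in_month=730):
    """Cluster runs at peak size all month, idle or not."""
    return peak_nodes * node_price_per_hour * hours_in_month

def decoupled_cost(storage_gb, compute_hours_used):
    """Storage and compute are metered separately."""
    return (storage_gb * STORAGE_PRICE_PER_GB_MONTH
            + compute_hours_used * COMPUTE_PRICE_PER_HOUR)

# A workload that stores 5 TB but only queries ~40 hours a month:
print(coupled_cost(peak_nodes=4, node_price_per_hour=2.00))      # 5840.0
print(decoupled_cost(storage_gb=5000, compute_hours_used=40))    # 180.0
```

The gap widens as storage grows faster than query activity, which is exactly the "don't pay for a data warehouse you don't need" argument.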
Stanek says too much of today’s data is still fragmented and disjointed but he believes Snowflake’s success is the first of many steps:
Now that the data is mobilized, the whole value chain ecosystem will have to realign. Snowflake will gain a true set of competitors, which will change the data landscape as we know it. Rather than slow and cumbersome data warehouses, the world's data will be stored in standardized cloud storage, which will redefine how data is managed in every company. I call this the "realignment of the data value chain."
The data value chain is the process by which data is extracted, cleansed, transformed, loaded, and stored. Today’s on-prem data value chain is fragmented. Data constantly moves between various systems and applications, adding friction to gaining insights. In the future, data will be created, managed, accessed, analyzed, and integrated in a well-structured and unified cloud data warehouse.
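The stages listed above can be sketched as a minimal pipeline. The function names, record fields, and data are purely illustrative, and the "warehouse" is just a dictionary standing in for a real store:

```python
# Minimal sketch of the data value chain: extract -> cleanse ->
# transform -> load. Everything here is illustrative stand-in code.

def extract(source):
    """Pull raw records from a source system (here: an in-memory list)."""
    return list(source)

def cleanse(records):
    """Drop records with missing fields and normalize whitespace."""
    return [{"customer": r["customer"].strip(), "amount": r["amount"]}
            for r in records
            if r.get("customer") and r.get("amount") is not None]

def transform(records):
    """Aggregate spend per customer."""
    totals = {}
    for r in records:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["amount"]
    return totals

def load(totals, warehouse):
    """Write aggregates into the 'warehouse' (here: a plain dict)."""
    warehouse.update(totals)
    return warehouse

raw = [
    {"customer": " acme ", "amount": 100},
    {"customer": "acme", "amount": 50},
    {"customer": None, "amount": 25},   # dropped during cleansing
    {"customer": "globex", "amount": 75},
]
warehouse = load(transform(cleanse(extract(raw))), {})
print(warehouse)  # {'acme': 150, 'globex': 75}
```

The friction Stanek describes comes from each of these stages living in a different system; a unified cloud warehouse collapses them into one place.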
Data analytics is a discipline within the enterprise software market where only a few truly innovative companies survive and thrive. The data warehousing segment has long been in need of a reboot: it is slow, expensive, and often fixated on the rear-view mirror. Snowflake is pointing in an alternative direction that, on its face, is attractive in time to value and competitive in pricing.
Stanek believes the 'new' approach to data warehousing could ultimately spawn an even larger success story than Snowflake, and he predicts that the new data value chain will result in the software industry's first $100 billion IPO. That's a bold statement, but here's the thing: Stanek is one of the smartest guys I talk to in an industry hip-deep in very smart guys, so I'm not going to disagree. At least not for now.