A while back, Shopify posted a story about how they avoided a $1 million query in BigQuery. They detail the data engineering that reduced the cost of the query to $1,300 and share tips for lowering costs in BigQuery.
Kunal Agarwal, CEO and Co-founder of Unravel Data, walks through Shopify’s approach to clustering tables as a way to bring the price of a highly critical query down to a reasonable monthly cost. But clustering is not the only data engineering technique for running far more efficiently in terms of cost, performance, and reliability; Kunal brings up a half dozen others.
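For readers unfamiliar with clustering in BigQuery, the idea is to physically co-locate rows that share values in chosen columns, so queries that filter on those columns scan (and pay for) far less data. A minimal sketch of what that looks like in BigQuery DDL, with hypothetical project, dataset, table, and column names:

```sql
-- Hypothetical example: rebuild an orders table partitioned by date
-- and clustered by the columns queries most often filter on.
-- Queries with WHERE shop_id = ... then scan only the matching blocks.
CREATE TABLE my_project.my_dataset.orders_clustered
PARTITION BY DATE(created_at)
CLUSTER BY shop_id, order_status
AS
SELECT *
FROM my_project.my_dataset.orders;
```

The choice and order of clustering columns matters: BigQuery sorts data by the first column first, so lead with the column your most expensive queries filter on.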
Few organizations have the data engineering muscle of Shopify. Applying all these techniques to keep costs low and performance high can mean a painstaking slog through thousands of telemetry data points, logs, and error messages.
Unravel’s customers understand that they cannot possibly have people comb through hundreds of thousands of lines of code or more to find the areas to optimize; that work is done better and faster with automation and AI, whether your data runs on BigQuery, Databricks, or Snowflake.
Kunal shows what that looks like in a 1-minute drive-by.
Get started here