NOVEMBER 2020

Our first virtual conference for customers & friends of Unravel was a huge success. We loved hearing #datalovers share how they solve the most pressing challenges in DataOps and IT.

Untold 2020 was full of enthusiasts and evangelists who live, breathe and love everything data. Our community, #datalovers, represents many of the world's most innovative thinkers from the most admired businesses, bursting with knowledge and inspiring insights. #datalovers at DBS Bank, Adobe, 84.51° and Mastercard shared the latest DataOps innovations in their enterprises.

Watch these fun, data-driven users share their experiences in the videos below.

Using DataOps to Improve Software Quality across the Life Cycle

Senthil Murugan, Lead Development Engineer, DBS

Ensuring good performance of big data applications in production can present challenges across the software development life cycle (SDLC). Join big data expert Senthil Murugan, Lead Development Engineer at DBS Bank, as he presents Using DataOps to Improve Software Quality across the Life Cycle. Senthil drills down into the ins and outs of developing Spark applications and the tooling support for non-Spark developers, and shares the secrets of putting together gatekeeping mechanisms with Unravel to significantly improve software quality and performance.

Monitoring & Troubleshooting Mission-Critical Data Pipelines at Adobe

Diwakar M B, Adobe

Getting fine-grained visibility into mission-critical data pipelines in any large business is challenging. Out-of-the-box tools and standard APM tools only scratch the surface, providing limited insights restricted to infrastructure monitoring. Join Diwakar M B of Adobe as he shares lessons learned from using Unravel to help support thousands of nodes running Hadoop clusters and massively parallel processing databases at scale. You will learn how to automate root-cause analysis across applications, platforms and infrastructure; the impact of fine-grained visibility into Hive/Pig/Tez/MapReduce/Spark jobs at the YARN container level; how to detect delays and predict pipeline completion time; and how to integrate all of this with enterprise schedulers.

How 84.51° Slashed Operational Costs & Improved DataOps Efficiency by Solving Problems with Small Files

Jeff Lambert, Rajesh Vunnam & Suresh Devarakonda, 84.51°

Hear from 84.51° data experts Jeff Lambert, Rajesh Vunnam and Suresh Devarakonda as they give a 30,000 ft view into their management of YARN and Impala. They will share how they solved challenges associated with small files and used Unravel to troubleshoot issues with their big data pipelines. 84.51° will also draw from their executive dashboards to share key learnings for helping your business improve efficiency and reduce operational costs.

Migrating Apache Spark and Hive from on-premises to Amazon EMR

Sandeep Uttamchandani
Former Chief Data Architect at Intuit
& CDO & VP Engineering at Unravel Data

In this session, you will hear from Sandeep Uttamchandani, formerly Chief Data Architect at Intuit, on how Intuit migrated its on-premises big data platform to Amazon Web Services. You will learn how the company moved its analytics, data processing (ETL), and data science workloads running on Apache Hive and Spark to Amazon EMR to reduce costs, increase availability, and improve performance. This session focuses on the key motivations and benefits of a move to the cloud, and also details key architectural changes and best practices.

How to Better Optimize and Manage Your Big Data Clusters

Bob Jackson & Birla Putchakayala, Mastercard

Enterprises across all sectors have invested heavily in big data infrastructure: Hadoop, Impala, Spark, Kafka, and more. The need to turn data into insights, and insights into business value, is no different at Mastercard. Join Bob Jackson and Birla Putchakayala as they share their best practices for optimizing and managing big data clusters as those clusters get bigger, more complex, and require the services of more and more data scientists and engineers.

CONTACT UNRAVEL

THANKS TO OUR #DATALOVERS FOR MAKING UNTOLD 2020 A HUGE SUCCESS!
WE HOPE YOU ENJOY YOUR SWAG BOX.

Thank you to our supporters