SnapLogic has announced an expanded partnership with Databricks, adding support for Delta Lake, the open-source storage layer created by Databricks that brings reliability to traditional data lakes.
The joint solution helps customers accelerate the integration, transformation, and processing of big data workloads into Delta Lake, improving data quality and speeding time to value for advanced analytics and machine learning initiatives.
“Seamless integration of SnapLogic eXtreme’s big data processing solution with Delta Lake means our joint customers get direct and continuous access to trusted, reliable data,” said Pankaj Dugar, Vice President of ISV and Technology Partners, Databricks. “With SnapLogic eXtreme’s self-service approach, data teams get faster time to value on their analytics and machine learning projects.”
Organisations are increasingly investing in data lakes to gain actionable insights from their growing data assets. However, the high volume and complexity of data often result in data quality, reliability, and performance issues. Together, SnapLogic and Databricks are removing these roadblocks by providing a low-code, visual paradigm for data engineers to create and process data pipelines that leverage the full power of Delta Lake, including ACID transactions, scalable metadata handling, schema enforcement, and support for both batch and streaming workloads.
“Databricks and SnapLogic are committed to delivering product innovations that help organizations reduce the time, effort, and skills needed to manage their big data initiatives so they can quickly turn their data into meaningful insights that drive the business forward,” said Craig Stewart, Chief Technology Officer, SnapLogic.
“By teaming up with Databricks, we aim to remove the key technical barriers to data lake and big data management so our customers can accelerate their analytics and machine learning initiatives and focus on delivering real business value.”