With the rise of cloud-native, the conversation about infrastructure costs has spread from R&D directors to every person in R&D:

  • “How much does a VM cost?”
  • “Can we use that managed service? How much will it cost us with our workload?”
  • “I need a stronger machine with more GPU power; how do we make it happen within the budget?”

When deciding on a big data / data lake strategy for a product, cost management is one of the main chapters.

On top of the budget for hiring technical people, we need a strategy for service and infrastructure costs. That includes the provider we want to work with, the pricing tiers they offer, the system’s needs, the R&D team’s needs, and each service’s pros and cons.

Chris Seferlis discusses one of the newer and lesser-known data services in Azure: Data Explorer.

If you’re looking to run extremely fast queries over large sets of log and IoT data, this may be the right tool for you. He also discusses why it isn’t a replacement for Azure Synapse or Azure Databricks but works nicely alongside them in the overall architecture of the Azure Data Platform.
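
For a feel of what querying Data Explorer looks like in practice, here is a minimal sketch using the azure-kusto-data Python client. The cluster URL, database name, and Telemetry table below are hypothetical placeholders, and device-code sign-in is just one of several supported authentication options.

```python
# Hedged sketch: run a KQL query against a hypothetical Azure Data Explorer cluster.
# Requires the azure-kusto-data package; cluster, database, and table names are made up.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://mycluster.westeurope.kusto.windows.net"  # hypothetical cluster URL
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(cluster)
client = KustoClient(kcsb)

# Count events per device over the last hour from a hypothetical IoT telemetry table.
query = """
Telemetry
| where Timestamp > ago(1h)
| summarize Events = count() by DeviceId
| top 10 by Events
"""

response = client.execute("iot-db", query)
for row in response.primary_results[0]:
    print(row["DeviceId"], row["Events"])
```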

Databricks hosted this tech talk on Delta Lake.

Data, like our experiences, is always evolving and accumulating. To keep up, our mental models of the world must adapt to new data, some of which contains new dimensions – new ways of seeing things we had no conception of before. These mental models are not unlike a table’s schema, defining how we categorize and process new information.

This brings us to schema management. As business problems and requirements evolve over time, so too does the structure of your data. With Delta Lake, as the data changes, incorporating new dimensions is easy. Users have access to simple semantics to control the schema of their tables. These tools include schema enforcement, which prevents users from accidentally polluting their tables with mistakes or garbage data, as well as schema evolution, which enables them to automatically add new columns of rich data when those columns belong. In this webinar, we’ll dive into the use of these tools.

In this webinar you will learn about:

  • Understanding table schemas and schema enforcement
  • How does schema enforcement work?
  • How is schema enforcement useful?
  • Preventing data dilution
  • How does schema evolution work?
  • How is schema evolution useful?
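
As a concrete illustration of the schema enforcement and schema evolution behaviors described above, here is a minimal PySpark sketch. It assumes Spark with the delta-spark package available; the table path and sample columns are hypothetical.

```python
# Minimal sketch of Delta Lake schema enforcement and schema evolution.
# Assumes Spark with the delta-spark package; table path and data are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.utils import AnalysisException

spark = (
    SparkSession.builder.appName("schema-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

table_path = "/tmp/users_delta"  # hypothetical location

# Create the table with an initial schema of (id, name).
spark.createDataFrame([(1, "Ada")], ["id", "name"]) \
    .write.format("delta").mode("overwrite").save(table_path)

# Schema enforcement: an append that adds an unexpected column is rejected.
new_rows = spark.createDataFrame(
    [(2, "Grace", "grace@example.com")], ["id", "name", "email"])
try:
    new_rows.write.format("delta").mode("append").save(table_path)
except AnalysisException as err:
    print("Rejected by schema enforcement:", err)

# Schema evolution: opting in with mergeSchema adds the new column automatically.
new_rows.write.format("delta").mode("append") \
    .option("mergeSchema", "true").save(table_path)

spark.read.format("delta").load(table_path).printSchema()  # now includes 'email'
```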

Here’s an online Tech Talk hosted by Denny Lee, Developer Advocate at Databricks, with Burak Yavuz, Software Engineer at Databricks.

Link to Notebook.

The transaction log is key to understanding Delta Lake because it is the common thread that runs through many of its most important features, including ACID transactions, scalable metadata handling, time travel, and more. In this session, we’ll explore what the Delta Lake transaction log is, how it works at the file level, and how it offers an elegant solution to the problem of multiple concurrent reads and writes.

In this tech talk you will learn about:

  • What is the Delta Lake transaction log?
  • What is the transaction log used for?
  • How does the transaction log work?
  • Reviewing the Delta Lake transaction log at the file level
  • Dealing with multiple concurrent reads and writes
  • How the Delta Lake transaction log solves other use cases, including Time Travel, Data Lineage, and Debugging
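
As a companion to that file-level walkthrough, here is a small sketch that writes a Delta table twice and then prints the action types recorded in each JSON commit file under _delta_log. It again assumes Spark with the delta-spark package; the table path is hypothetical.

```python
# Minimal sketch: inspect the Delta Lake transaction log at the file level.
# Assumes Spark with the delta-spark package; the table path is hypothetical.
import json
from pathlib import Path

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-log-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

table_path = "/tmp/events_delta"  # hypothetical location

# Each write below is one atomic commit, recorded as a numbered JSON file in _delta_log.
spark.range(0, 5).write.format("delta").mode("overwrite").save(table_path)
spark.range(5, 10).write.format("delta").mode("append").save(table_path)

# Every commit file is newline-delimited JSON; each line is a single action
# such as commitInfo, protocol, metaData, add (a data file added), or remove.
for commit_file in sorted(Path(table_path, "_delta_log").glob("*.json")):
    actions = [json.loads(line) for line in commit_file.read_text().splitlines()]
    print(commit_file.name, [next(iter(action)) for action in actions])
```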

In this video, Chris Seferlis continues discussing the Modern Data Platform in Azure with Part 3: Data Processing.

Tools Discussed: