In this video, learn how selecting the right partition key can make a huge difference in cost and performance with Azure Cosmos DB.

Program Manager Deborah Chen discusses how data partitioning ensures scale, why partition keys are so important for performance and cost management, and how to select the right partition key for read-heavy or write-heavy workloads.

For more information, visit: https://www.azurecosmosdb.com
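To make the partition key discussion concrete, here is a minimal sketch of creating a container with an explicit partition key using the azure-cosmos Python SDK. The account endpoint, key, container name, and the /userId path are illustrative assumptions, not details from the video.

```python
# A minimal sketch using the azure-cosmos Python SDK (v4). The endpoint, key,
# and the "/userId" partition key path are placeholders for illustration.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-key>")
database = client.create_database_if_not_exists(id="AppData")

# The partition key is fixed at container creation and cannot be changed later,
# so pick a property with high cardinality that appears in most queries.
container = database.create_container_if_not_exists(
    id="Orders",
    partition_key=PartitionKey(path="/userId"),
    offer_throughput=400,  # provisioned RU/s
)

# Point reads that supply the partition key are the cheapest operation (~1 RU).
item = container.read_item(item="order-1", partition_key="user-42")
```

The design choice this illustrates: a good key such as a user ID both spreads writes across partitions and lets the most common queries target a single partition, which is where the cost and performance difference shows up.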

For many newcomers to Azure Cosmos DB, the learning process starts with data modeling and partitioning.

How should I structure my data? When should I co-locate data in a single container? Should I de-normalize or normalize properties? What’s the best partition key for my model?

In this demo-filled session, learn the strategies and thought process one should adopt for modeling and partitioning data effectively in Azure Cosmos DB.

Using a real-world example, we explore Azure Cosmos DB key concepts—request units (RU), partitioning, and data modeling—and how understanding them guides you to a data model that yields the best performance and scalability. If you’re familiar with relational databases and want to dive into the non-relational world, this is the session for you.
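To ground the request unit (RU) concept, here is a hedged sketch of inspecting the RU charge of a query with the azure-cosmos Python SDK; the query and property names are illustrative assumptions, and the container is the one from the sketch above.

```python
# A hedged sketch of checking the RU charge of a query; the container object
# is assumed from the earlier example, and the query is hypothetical.
query = "SELECT * FROM c WHERE c.userId = @uid"
items = list(container.query_items(
    query=query,
    parameters=[{"name": "@uid", "value": "user-42"}],
    partition_key="user-42",  # scoping to one partition avoids a fan-out query
))

# One common way to read the charge: the SDK reports it in a response header.
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"{len(items)} items, {charge} RUs")
```

Watching this number while you iterate on the model is a practical way to compare candidate partition keys and document shapes.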

In BlueGranite’s recent webinar, you will see several examples of Python in action for data modeling and visualization in Power BI. You will also learn where and how Python fits into a Power BI development workflow.

You’ll also see how to balance Python with native Power BI functionality and determine what limitations must be considered when using Python in Power BI.
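For a flavor of what such a script looks like, here is a minimal sketch of a Power BI Python visual. Power BI supplies the selected fields to the script as a pandas DataFrame named dataset; the column names used here are hypothetical, and the stub merely makes the sketch runnable outside Power BI.

```python
import pandas as pd
import matplotlib.pyplot as plt

# In Power BI, `dataset` is injected by the Python visual host; this stub
# only exists so the script can also run standalone for local testing.
try:
    dataset  # noqa: F821 -- defined by Power BI at runtime
except NameError:
    dataset = pd.DataFrame({
        "Category": ["A", "B", "A", "C"],
        "Sales": [100, 250, 75, 300],
    })

# Ordinary pandas/matplotlib code: aggregate, then plot.
totals = dataset.groupby("Category")["Sales"].sum().sort_values()
totals.plot(kind="barh", title="Sales by Category")

plt.tight_layout()
plt.show()  # Power BI captures the rendered figure as the visual
```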

ThorogoodBI explores the use of Databricks for data engineering purposes in this webinar.

Whether you’re looking to transform and clean large volumes of data or collaborate with colleagues to build advanced analytics jobs that can be scaled and run automatically, Databricks offers a Unified Analytics Platform that promises to make your life easier.

In the second of two recorded webcasts, Thorogood Consultants Jon Ward and Robbie Shaw showcase Databricks’ data transformation and data movement capabilities, show how the tool aligns with cloud computing services, and highlight the security, flexibility, and collaboration aspects of Databricks. They also look at Databricks Delta Lake and how it offers improved storage for both large-scale datasets and real-time streaming data.
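To ground the transformation and Delta Lake points, here is a hedged PySpark sketch of a typical clean-and-store step; the paths and column names are placeholder assumptions, and on Databricks the spark session is provided for you (the builder line is only needed elsewhere, with the Delta libraries configured).

```python
# A hedged PySpark sketch of a typical Databricks transform-and-store step;
# paths and column names are placeholders, not from the webinar.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Clean a raw CSV drop and persist it as a Delta table.
raw = spark.read.option("header", True).csv("/mnt/raw/orders.csv")
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)
clean.write.format("delta").mode("overwrite").save("/mnt/delta/orders")

# Delta keeps a transaction log, so older versions stay queryable (time travel).
previous = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/delta/orders")
```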

In this video, Robert is joined by Phil Japikse for part 3 of a five-part series on Entity Framework Core. Aimed at folks new to EF Core, the series shows how to start with an existing SQL Server database, create entities/objects for each table, and then perform basic CRUD operations on the data.

This episode covers querying data using EF Core. We discuss the basics of Where clauses, retrieving single items or lists of items, as well as when queries actually execute.

Find the sample code here.

Episode list:

  • Part 1: Working with Existing Databases. We scaffold the DbContext and the Entities from the Northwind Database, discuss navigation properties and relationships.
  • Part 2: Change Tracking. Change tracking is one of the most compelling reasons to use an object-relational mapper (ORM) like EF Core. In this episode we discuss how the change tracker works, see it in action, and learn how to load data outside of the change tracker.
  • Part 3: Basic Queries (this episode).
  • Part 4: Querying Related Data and Using Projections (coming soon). Querying related data is simple in EF Core. In this episode we demonstrate creating joins in our LINQ queries with Include and ThenInclude. We also discuss how to use projections to shape the queried data into other objects, whether anonymous or strongly typed.
  • Part 5: Putting the CUD into CRUD (coming soon). Wrapping up our starter series on EF Core, this episode covers adding, updating, and deleting data.