Lex Fridman shared this lecture by Andrew Trask in January 2020, part of the MIT Deep Learning Lecture Series.

OUTLINE:

0:00 – Introduction
0:54 – Privacy preserving AI talk overview
1:28 – Key question: Is it possible to answer questions using data we cannot see?
5:56 – Tool 1: remote execution
8:44 – Tool 2: search and example data
11:35 – Tool 3: differential privacy
28:09 – Tool 4: secure multi-party computation
36:37 – Federated learning
39:55 – AI, privacy, and society
46:23 – Open data for science
50:35 – Single-use accountability
54:29 – End-to-end encrypted services
59:51 – Q&A: privacy of the diagnosis
1:02:49 – Q&A: removing bias from data when data is encrypted
1:03:40 – Q&A: regulation of privacy
1:04:27 – Q&A: OpenMined
1:06:16 – Q&A: encryption and nonlinear functions
1:07:53 – Q&A: path to adoption of privacy-preserving technology
1:11:44 – Q&A: recommendation systems

Matías Quaranta (@ealsur) shows Donovan Brown (@donovanbrown) how to do bulk operations with the Azure Cosmos DB .NET SDK to maximize throughput, and how to use the new Transactional Batch support to create atomic groups of operations.

Related Links:

Romit Girdhar, Microsoft (@romitgirdhar) and Chinmay Joshi, Oracle join Lara Rubbelke to explain how to interconnect Microsoft Azure and Oracle Cloud Infrastructure.

The Microsoft Azure and Oracle Cloud interoperability partnership enables you to migrate and run mission-critical enterprise workloads across both clouds, seamlessly connecting Azure services, like Analytics and AI, to Oracle Cloud services, like Autonomous Database. Learn how you now have a one-stop shop for all the cloud services and applications you need to run your business.

Just when you thought that the demise of Moore’s Law meant the end of datacenter performance gains, FPGAs enter the fray to save the day.

The wonderful serendipity is that just as the CPU can no longer be the sole and primary unit of compute in the datacenter for many workloads – for a whole host of reasons – the FPGA has come into its own, offering performance, low latency, and sophisticated networking and memory. The heterogeneous compute capabilities of modern FPGA systems on chips make them arguably compute complexes, and nearly complete systems in their own right, at the high end of the product lines from FPGA suppliers.

But FPGAs can and do play well with other devices in hybrid systems, and we think they are just beginning to find their natural places in the hierarchy of compute.

Here’s my talk from the Azure Data Fest Philly 2020 last week!

Neural networks are an essential element of many advanced artificial intelligence (AI) solutions. However, few people understand the core mathematical or structural underpinnings of this concept. In this session, learn the basic structure of neural networks and how to build out a simple neural network from scratch with Python.
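To make the "from scratch" idea concrete, here is one minimal sketch of the kind of network such a session builds: a tiny two-layer network trained on XOR using only numpy. The architecture, seed, and hyperparameters are illustrative choices, not taken from the session itself.

```python
import numpy as np

# A from-scratch two-layer network trained on XOR with plain numpy.
# All names and hyperparameters here are illustrative assumptions.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: gradients of the mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Everything a framework like PyTorch automates – the forward pass, the chain-rule gradients, the parameter updates – is written out by hand here, which is the point of building one from scratch.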

In this episode of CodeStories, Seth Juarez joins local Cloud Advocate Christopher Maneu on a tour of the Microsoft office in Paris, his remote office, and a scuba diving club.

Learn how Christopher has automated a logbook with IoT Retrofitting https://aka.ms/CodeStories/compressor.

Get the latest articles, documentation, and events from Microsoft.Source, the curated monthly developer community newsletter. Sign up here: https://aka.ms/CodeStories/Microsoft.Source

PyTorch is a project written in a combination of Python, C++, and CUDA, developed mainly in Facebook’s AI research lab.

It has shared a repository with deep learning framework Caffe2 since 2018 and is one of the main competitors to Google’s TensorFlow.

Here’s a write up of a recent update that adds distributed model parallel training.

In PyTorch 1.4, distributed model parallel training has been added to accommodate the growing scale and complexity of modern models. In the case of Facebook’s RoBERTa method, the number of parameters can run into the billions, which not all machines can handle.
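The core idea behind model parallelism can be sketched without the distributed machinery: when a model is too large for one device, its layers are partitioned across workers and activations are handed from one worker to the next. The numpy sketch below illustrates only that partitioning idea; the class and names are invented for illustration and are not PyTorch's actual distributed RPC API.

```python
import numpy as np

# Conceptual sketch of model parallelism: a 4-layer model split across
# two "workers", with activations passed between them. Illustrative
# only -- not the PyTorch distributed RPC API.
rng = np.random.default_rng(0)

class Worker:
    """Holds a contiguous slice of the model's layers (hypothetical helper)."""
    def __init__(self, layer_shapes):
        self.weights = [rng.normal(size=s) for s in layer_shapes]

    def forward(self, x):
        # Run only this worker's layers, then hand the activation onward.
        for W in self.weights:
            x = np.tanh(x @ W)
        return x

# Two layers live on each worker; neither holds the full model.
worker0 = Worker([(16, 32), (32, 32)])
worker1 = Worker([(32, 32), (32, 4)])

x = rng.normal(size=(8, 16))          # a batch of 8 inputs
activation = worker0.forward(x)       # computed on "device 0"
output = worker1.forward(activation)  # computed on "device 1"
print(output.shape)  # (8, 4)
```

In the real distributed setting the two workers are separate processes or machines, and PyTorch's RPC framework takes care of moving activations and gradients between them.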