In this fascinating episode, watch how Ofir Barzilay, Principal Engineering Manager for IoT Security, demonstrates a brute force attack (https://aka.ms/iotshow/ascforiot) on a Raspberry Pi IoT device connected to Azure IoT Hub. You will see how Ofir attacks the device to discover its password.

Watch how he downloads a payload and infects the device. You will see him gain control over the device and connect it to his command-and-control server to fully own it, showing how he can exploit it for crypto mining, DDoS attacks, and more.

At the end of the demo, Ofir demonstrates how Azure Security Center for IoT has monitored, detected, and reported on the entire attack. He also shows how Azure Security Center for IoT leverages Microsoft Threat Intelligence to flag suspicious devices. Solution builders using Azure IoT security will sleep better after watching this show.
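To make the defensive idea concrete, here is a minimal Python sketch of brute-force detection in general: counting failed SSH logins per source address in an auth log and flagging noisy sources. It is purely illustrative and is not how Azure Security Center for IoT is implemented; the log path and threshold are assumptions.

```python
import re
from collections import Counter

# Illustration of the detection idea only -- Azure Security Center for IoT
# uses its own device agent and cloud analytics, not this script.
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 20  # assumed number of failures from one source before alerting

def brute_force_suspects(auth_log_lines):
    """Count failed SSH logins per source IP and flag noisy sources."""
    failures = Counter()
    for line in auth_log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

if __name__ == "__main__":
    # Assumes a standard Debian-style auth log on the device.
    with open("/var/log/auth.log") as log:
        for ip, count in brute_force_suspects(log).items():
            print(f"possible brute force: {count} failed logins from {ip}")
```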

Eric Reiss (FatDUX Group) started working with user experience (UX) long before the term was even coined, and here he speaks about Ethics in AI.

Whenever we say, “That’s not my problem,” or, “My company won’t let me do that,” we are handing over our ethical responsibility to someone else – for better or for worse. Do innocent decisions evolve so that they promote racism or gender discrimination through inadvertent cognitive bias or unwitting apathy? Far too often they do.

We, as technologists, hold incredible power to shape the things to come. I would like to share my thoughts with you so you can use this power to truly build a better world for those who come after us!

Lex Fridman interviews Daniel Kahneman in this thought-provoking conversation.

Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book “Thinking, Fast and Slow”, which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:36 – Lessons about human behavior from WWII
8:19 – System 1 and system 2: thinking fast and slow
15:17 – Deep learning
30:01 – How hard is autonomous driving?
35:59 – Explainability in AI and humans
40:08 – Experiencing self and the remembering self
51:58 – Man’s Search for Meaning by Viktor Frankl
54:46 – How much of human behavior can we study in the lab?
57:57 – Collaboration
1:01:09 – Replication crisis in psychology
1:09:28 – Disagreements and controversies in psychology
1:13:01 – Test for AGI
1:16:17 – Meaning of life

Azure Machine Learning datasets are a great solution for managing your data for machine learning.

With datasets, you can directly access data from multiple sources without incurring extra storage cost, load data for training and inference through a unified interface with built-in support for open-source libraries, and track your data in ML experiments for reproducibility.
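As a rough illustration of that workflow, here is a minimal sketch using the azureml-core SDK; the workspace config, datastore path, and dataset name are placeholders, and the exact steps may differ in your setup.

```python
from azureml.core import Workspace, Dataset

# Connect to an existing workspace (reads a local config.json); the workspace
# and the datastore contents assumed here are placeholders.
ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Reference CSV files already sitting in the datastore -- no data is copied,
# so there is no extra storage cost.
dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "telemetry/*.csv"))

# Register the dataset so experiments can track exactly which data they used.
dataset = dataset.register(workspace=ws, name="iot-telemetry", create_new_version=True)

# Load it through the unified interface, e.g. into pandas for training.
df = dataset.to_pandas_dataframe()
print(df.head())
```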

Here’s a talk by Danny Luo on “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”.

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.

Toronto Deep Learning Series, 6 November 2018

Paper: https://arxiv.org/abs/1810.04805
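To make the abstract’s point about “just one additional output layer” concrete, here is a minimal fine-tuning sketch using the Hugging Face transformers library (not part of the talk or the paper’s original code); the model name, label scheme, and single training step are illustrative only.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load pre-trained BERT and attach a single classification head on top,
# as the paper describes for fine-tuning.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One labelled example; a real fine-tuning run would loop over a full dataset.
inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
labels = torch.tensor([1])  # 1 = positive, 0 = negative (made-up label scheme)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**inputs, labels=labels)  # returns loss and logits
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```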

Malte Pietsch delivers this keynote on “Transfer Learning – Entering a new era in NLP” at PyData Warsaw 2019.

Transfer learning has been changing the NLP landscape tremendously since the release of BERT one year ago. Transformers of all kinds have emerged, dominate most research leaderboards, and have made their way into industrial applications. In this talk we will dissect the paradigm of transfer learning and its effects on pipelines, modelling, and the engineer’s mindset.
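As a small illustration of how accessible these pre-trained transformers have become for industrial pipelines (not taken from the talk), a quick example with the Hugging Face pipeline API, with the model choice left to the library defaults:

```python
from transformers import pipeline

# Downloads a default pre-trained sentiment model on first run; which model
# is used is decided by the library version, not pinned here.
classifier = pipeline("sentiment-analysis")
print(classifier("Transfer learning has changed the NLP landscape."))
```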

There are 250 billion microcontrollers in the world today. 28.1 billion units were sold in 2018 alone, and IC Insights forecasts annual shipment volume to grow to 38.2 billion by 2023.

What if they all became smart? How would that change our world?

From venturebeat.com:

TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.
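A common TinyML workflow (an assumption here, not something the article walks through) is to shrink a trained Keras model with the TensorFlow Lite converter before deploying it to a microcontroller via TensorFlow Lite Micro. The toy model below is a placeholder standing in for a real sensor model:

```python
import tensorflow as tf

# Placeholder model standing in for whatever you trained; a real TinyML model
# would be tuned to the sensor data and memory budget of the target device.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert to TensorFlow Lite with default optimizations (weight quantization),
# shrinking the model so it can fit in a microcontroller's flash and RAM.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"TFLite model size: {len(tflite_model)} bytes")
```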