A quiet revolution is taking place in electronics hardware design. As silicon integration has continued, engineers have gradually moved from developing mostly at the component and circuit level to working more with boards, modules and subsystems.

There are many advantages to a shift toward modular design. One is a greater ability to share in the economies of scale that come from using platforms that attract many customers. Industrial users have long experience with modular hardware. The Versa Module Eurocard (VME) and CompactPCI standards gave integrators and Original Equipment Manufacturers (OEMs) working in low-volume markets access to high-performance computing. They could customise a computer’s capabilities far more extensively without having to invest time and effort in high-end printed circuit board (PCB) design. Since those days, Moore’s Law has delivered incredible gains in functionality while also reducing the cost of individual parts. The Raspberry Pi single board computer is a key example.

Until now, building machine learning (ML) algorithms for hardware has meant constructing complex mathematical models based on sample data, known as “training data,” in order to make predictions or decisions without being explicitly programmed to do so.
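The core idea of learning from sample data rather than explicit rules can be shown with a minimal sketch: a toy nearest-neighbour classifier in pure Python. The sensor readings and labels below are invented for illustration; nothing here reflects any particular product.

```python
# Toy 1-nearest-neighbour classifier: the "model" is just the training data.
# No decision rules are hand-coded; predictions come from labelled samples alone.

def predict(training_data, x):
    """Return the label of the training sample whose reading is closest to x."""
    nearest = min(training_data, key=lambda sample: abs(sample[0] - x))
    return nearest[1]

# Hypothetical training data: (sensor reading, label) pairs.
training_data = [(0.1, "idle"), (0.2, "idle"), (2.8, "active"), (3.1, "active")]

print(predict(training_data, 0.3))  # near the "idle" samples -> "idle"
print(predict(training_data, 2.5))  # near the "active" samples -> "active"
```

Real ML models generalise far better than this, but the principle is the same: behaviour is derived from data, not programmed.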

And if this sounds complex and expensive to build, it is. 

But what if there were another, more agile way?

The implications of TinyML accessibility are very important in today’s world. For example, a typical drug development trial takes about five years, as there are potentially millions of design decisions that need to be made en route to FDA approval. Using the power of TinyML and hardware, not animals, for testing models can speed up the process to just 12 months.

Another example of this game-changing technology is the ability to fix problems and create solutions for things we couldn’t dream of doing before. For example, TinyML can listen to beehives and detect anomalies and distress caused by things as small as wasps. A tiny sensor can trigger an alert based on a sound model that identifies a hive under attack, allowing farmers to secure and assist the hive in real time.
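A sketch of how such an acoustic trigger might work, reduced to a simple spectral-energy threshold. The sample rate, frequency band and threshold below are invented for illustration; a real TinyML deployment would run a trained sound model on-device rather than a hand-tuned rule.

```python
import numpy as np

SAMPLE_RATE = 8000           # Hz, assumed sensor sample rate
DISTRESS_BAND = (100.0, 300.0)  # Hz, hypothetical band where distress buzzing concentrates
THRESHOLD = 0.1              # hypothetical energy fraction that triggers an alert

def band_energy_fraction(signal, sample_rate, band):
    """Fraction of total spectral energy inside the given frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

def hive_under_attack(signal):
    return band_energy_fraction(signal, SAMPLE_RATE, DISTRESS_BAND) > THRESHOLD

# Synthetic one-second clips standing in for microphone audio.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
calm = np.sin(2 * np.pi * 600 * t)                    # tone outside the band
distress = calm + 2.0 * np.sin(2 * np.pi * 200 * t)   # strong 200 Hz component

print(hive_under_attack(calm))      # False
print(hive_under_attack(distress))  # True
```

On a microcontroller the same idea runs continuously on short audio windows, waking a radio only when the alert condition fires.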

Did you ever want to get started in robotics, but were put off by the cost?

Your phone is probably powerful enough to be the eyes, ears and brain of a robot. Now Intel researchers have released a free design that can make this possible.

Their idea, called OpenBot and inspired by Google Cardboard, is to build cheap robot bodies that use smartphones as their eyes, ears and brains. What’s more, they have published their plans along with the software that makes it all possible, so that anybody can build smart, capable robots for around $50 (provided they have a smartphone).

Azure Synapse has many features to help analyze data. In this episode of Data Exposed, Ginger Grant reviews how to query data stored in a Data Lake from Azure Synapse and how to visualize that data in Power BI.

The demonstrations show how to run SQL queries against the Data Lake without provisioning any Synapse compute or manipulating the data. Ginger also walks through the steps to connect to Power BI from within Azure Synapse and visualize the data. To help you get started with Power BI and Azure Synapse, the video walks through creating Power BI Data Source files to speed up connectivity.
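Conceptually, the serverless queries in the demo read lake files in place with `OPENROWSET`, which is real Synapse serverless SQL syntax. A minimal sketch of issuing such a query from Python follows; the endpoint name, storage path and connection details are placeholders, and the actual connection (via pyodbc) is shown but not executed here.

```python
# Placeholder workspace endpoint and lake path; substitute your own values.
ENDPOINT = "myworkspace-ondemand.sql.azuresynapse.net"
LAKE_PATH = "https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/*.parquet"

# OPENROWSET lets a serverless SQL pool read files in the lake directly:
# no dedicated compute pool to provision, and no data movement.
query = f"""
SELECT TOP 10 *
FROM OPENROWSET(
    BULK '{LAKE_PATH}',
    FORMAT = 'PARQUET'
) AS rows
"""

def run_query(connection_string, sql):
    """Execute the query and return all rows (sketch; error handling omitted)."""
    import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed
    with pyodbc.connect(connection_string) as conn:
        return conn.cursor().execute(sql).fetchall()

# Example call (requires a reachable endpoint and valid credentials):
# rows = run_query(
#     f"Driver={{ODBC Driver 18 for SQL Server}};Server={ENDPOINT};"
#     "Authentication=ActiveDirectoryInteractive;",
#     query,
# )
```

The same query can be pasted into Synapse Studio directly, which is how the video demonstrates it.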

Index:

  • 0:00 Introduction
  • 1:15 What is Azure Synapse
  • 2:27 What you can do with Azure Synapse
  • 3:40 Azure Synapse Studio
  • 5:10 Including Power BI Demo
  • 9:40 When to use Azure Synapse

Jeremy Howard provides this introductory lesson on Deep Learning for Coders.

In this first lesson, we learn what deep learning is and how it’s connected to machine learning and regular computer programming. We get our GPU-powered deep learning server set up, and use it to train models across vision, NLP, tabular data, and collaborative filtering. We do this all in Jupyter Notebooks, using transfer learning from pretrained models for the vision and NLP training.

We discuss the important topics of test and validation sets, and how to create and use them to avoid over-fitting. We learn about some key jargon used in deep learning.
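The train/validation split described here can be sketched in a few lines of pure Python. The 80/20 ratio and the fixed seed are just common conventions, not anything prescribed by the lesson.

```python
import random

def train_validation_split(items, valid_frac=0.2, seed=42):
    """Shuffle the data, then hold out a fraction for validation.

    The model never trains on the validation set, so validation accuracy
    reveals overfitting that training accuracy alone would hide.
    """
    rng = random.Random(seed)   # fixed seed => reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_frac)
    return shuffled[n_valid:], shuffled[:n_valid]  # (train, validation)

data = list(range(100))
train, valid = train_validation_split(data)
print(len(train), len(valid))  # 80 20
```

A separate test set, held back even from model selection, plays the same role one level up: it guards against overfitting your choice of model to the validation set.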

We also discuss how AI projects can fail, and techniques for avoiding failure.

Index:

  • 00:00 – Introduction
  • 06:44 – What you don’t need to do deep learning
  • 08:38 – What is the point of learning deep learning
  • 09:52 – Neural Nets: a brief history
  • 16:00 – Top to bottom learning approach
  • 23:06 – The software stack
  • 39:06 – Git Repositories
  • 42:20 – First practical exercise in Jupyter Notebook
  • 48:00 – Interpretation and explanation of the exercise
  • 55:35 – Stochastic Gradient Descent (SGD)
  • 1:01:30 – Consider how a model interacts with its environment
  • 1:07:42 – “doc” function and fastai framework documentation
  • 1:16:20 – Image Segmentation
  • 1:17:34 – Classifying a review’s sentiment based on IMDB text reviews
  • 1:18:30 – Predicting salary based on tabular data from CSV
  • 1:20:15 – Lesson Summary
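The stochastic gradient descent step covered at 55:35 can be sketched for a one-parameter linear model in pure Python. The data and learning rate below are invented for illustration; the lesson itself works through SGD with fastai and PyTorch.

```python
import random

def sgd_fit(samples, lr=0.05, epochs=100, seed=0):
    """Fit y = w * x by SGD on squared error, one sample at a time."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(samples)             # "stochastic": random sample order
        for x, y in samples:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad               # step against the gradient
    return w

# Data generated from y = 3x, so SGD should recover w close to 3.
samples = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd_fit(samples)
print(round(w, 3))  # close to 3.0
```

Deep learning replaces the single weight with millions of them and the hand-derived gradient with automatic differentiation, but the update rule is the same.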

Brian Blanchard joins Scott Hanselman to discuss how you can unblock your cloud adoption efforts using the Cloud Adoption Framework governance methodology. This agile, iterative methodology enables governance maturity without impeding migration or innovation.

Video contents:

  • [0:00:00] Overview
  • [0:00:23] What is cloud governance?
  • [0:04:32] Cloud Adoption Framework Governance Benchmark Tool
  • [0:08:17] Cloud governance guides
  • [0:11:18] Wrap-up

Related links: