Eben Kouao created a smart mirror with built-in facial recognition and Alexa integration, built on a Raspberry Pi 4 running the latest operating system, Raspbian Buster (2020).

You can find the compiled face detection image on our website: https://smartbuilds.io/ 

Introducing Smart Mirror AI (SMAI): an IoT-focused smart mirror that connects your favorite devices, bringing Alexa to your smart mirror.

deeplizard teaches us how to normalize a dataset. We’ll see how dataset normalization is carried out in code, and we’ll see how normalization affects the neural network training process.

Content index:

  • 0:00 Video Intro
  • 0:52 Feature Scaling
  • 2:19 Normalization Example
  • 5:26 What Is Standardization
  • 8:13 Normalizing Color Channels
  • 9:25 Code: Normalize a Dataset
  • 19:40 Training With Normalized Data
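As a minimal sketch of the standardization step covered in the video (a toy NumPy example, assuming the common z-score recipe; the video's own code may differ):

```python
import numpy as np

# Toy "dataset": 100 grayscale images of 4x4 pixels with values in [0, 255].
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 4, 4)).astype(np.float32)

# Step 1: compute the dataset-wide mean and standard deviation.
mean = images.mean()
std = images.std()

# Step 2: standardize -- subtract the mean, divide by the std,
# so the data has mean ~0 and std ~1 (z-score normalization).
# For RGB data shaped (N, C, H, W), you would instead compute a
# per-channel mean/std, e.g. images.mean(axis=(0, 2, 3)).
normalized = (images - mean) / std

print(normalized.mean())  # close to 0
print(normalized.std())   # close to 1
```

Feeding the network inputs on a common scale like this typically makes training more stable, which is the effect the video demonstrates.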

Microsoft Mechanics shares this working from home tip: how to change default Outlook meeting length in the desktop app to give others – and yourself – a little time between meetings.

By default, meetings in Outlook are set to 30 minutes, but did you know you can set them to end earlier as the default for future meetings? You can, and we’ll show you how to end meetings 5 or more minutes early for everything you schedule in the future.

For more tips like this, check out the working remotely playlist at https://aka.ms/WFHMechanics

How far can you go with ONLY language modeling?

Can a large enough language model perform NLP tasks out of the box?

OpenAI takes on these and other questions by training a transformer an order of magnitude larger than anything built before, and the results are astounding.

Yannic Kilcher explores.

Time index:

  • 0:00 – Intro & Overview
  • 1:20 – Language Models
  • 2:45 – Language Modeling Datasets
  • 3:20 – Model Size
  • 5:35 – Transformer Models
  • 7:25 – Fine Tuning
  • 10:15 – In-Context Learning
  • 17:15 – Start of Experimental Results
  • 19:10 – Question Answering
  • 23:10 – What I think is happening
  • 28:50 – Translation
  • 31:30 – Winograd Schemas
  • 33:00 – Commonsense Reasoning
  • 37:00 – Reading Comprehension
  • 37:30 – SuperGLUE
  • 40:40 – NLI
  • 41:40 – Arithmetic Expressions
  • 48:30 – Word Unscrambling
  • 50:30 – SAT Analogies
  • 52:10 – News Article Generation
  • 58:10 – Made-up Words
  • 1:01:10 – Training Set Contamination
  • 1:03:10 – Task Examples

Paper: https://arxiv.org/abs/2005.14165
Code: https://github.com/openai/gpt-3
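The in-context learning idea discussed in the video can be illustrated with a toy few-shot prompt: the task is specified entirely inside the prompt via examples, with no fine-tuning or gradient updates. The format below is a sketch (the translation pairs echo the paper's English-to-French example); actual GPT-3 prompts vary by task.

```python
# Few-shot "in-context learning": demonstrate the task in the prompt,
# then let the model complete the final line.
examples = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
]
query = "cheese"

prompt = "Translate English to French:\n"
for en, fr in examples:
    prompt += f"{en} => {fr}\n"
prompt += f"{query} =>"

print(prompt)
```

A large enough model, conditioned on this prompt, tends to continue with the French translation, which is the "out of the box" behavior the paper measures.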

Jon Wood shows the importance of model explainability and a few ways you can achieve it in ML.NET with linear regression models in this video.

Related links:

Paper mentioned in video and where wolf vs husky photo is from – https://arxiv.org/pdf/1602.04938.pdf

Code – https://github.com/jwood803/MLNetExamples/blob/master/MLNetExamples/ModelExplainability/Program.cs

ML.NET Playlist – https://www.youtube.com/watch?v=8gVhJKszzzI&list=PLl_upHIj19Zy3o09oICOutbNfXj332czx
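One common explainability technique for regression models is permutation feature importance, which ML.NET exposes through its own API (the linked C# code shows the ML.NET side). The core idea can be sketched in plain Python with NumPy; this is an illustration of the technique, not the ML.NET API:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression data: y depends strongly on x0, weakly on x1,
# and not at all on x2.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Fit ordinary least squares (with an intercept column) via lstsq.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def mse(X, y, w):
    pred = np.c_[X, np.ones(len(X))] @ w
    return float(np.mean((pred - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: shuffle one feature column at a time and
# measure how much the error increases. A bigger increase means the
# model relied more heavily on that feature.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(Xp, y, w) - baseline)

print(importances)
```

Here the importance for x0 dominates and x2 is near zero, matching the coefficients we built the data from; this model-agnostic check is a useful sanity test even when the model's own weights are available.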