Jupyter Notebooks are best known as the tool Data Scientists use to present Python, Spark or R code.
A Jupyter Notebook lets you share words, images, code AND code results. .NET Interactive Jupyter notebooks add C#, F# and PowerShell Core to the mix.
In this episode, Rob Sewell will introduce Jupyter Notebooks and show you how useful they can be in your daily work for Incident Resolution, Repeatable Tasks, and Demoing New Features.

Video index:

  • [01:04] Why use notebooks
  • [02:01] Demo
  • [04:15] Display notebooks in GitHub
  • [04:52] Get your own .NET Interactive notebook
  • [05:26] Edit .NET Interactive notebooks in Azure Data Studio
  • [07:37] SQL Instance Permissions

Resources:

Vicky Harp joins Scott Hanselman to show how Azure Data Studio combines the simple and robust SQL query editing experience of tools like SSMS with the flexibility and collaboration of Jupyter Notebooks. The November 2019 release of Azure Data Studio included the SQL Server 2019 Guide as a Jupyter Book, which provides a richer troubleshooting experience.

Related

Jeffrey Mew shows you how you can natively edit Jupyter notebooks in Visual Studio Code.

Jupyter (formerly IPython) is an open-source project that enables you to easily combine Markdown text and executable Python source code on one canvas called a notebook.

These notebooks contain live code, equations, visualizations and narrative text. Jeffrey shows how easy it is to work with Jupyter notebooks in Visual Studio Code.
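
Under the hood, a notebook is just a JSON document made of Markdown and code cells. As a minimal sketch (the file name and cell contents below are illustrative, not from the video), you can build one programmatically in Python with the nbformat package:

    # Minimal sketch: build a two-cell notebook with the nbformat package.
    # File name and cell contents are illustrative placeholders.
    import nbformat
    from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

    nb = new_notebook()
    nb.cells = [
        new_markdown_cell("# Demo\nThis text renders as Markdown."),
        new_code_cell("print('This cell is executable Python')"),
    ]

    # Writes a .ipynb file that Jupyter, VS Code or Azure Data Studio can open.
    nbformat.write(nb, "demo.ipynb")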

Resources:

Here’s a great collection of Jupyter notebooks that explore all the new features of SQL Server 2019.

Here are some of the ones that caught my attention.

SQL Server 2019 Querying 1 TRILLION rows

  • OneTrillionRowsWarm.ipynb – This notebook shows how SQL Server 2019 reads 9 BILLION rows/second from a 1 trillion row table with a warm cache.
  • OneTrillionRowsCold.ipynb – This notebook shows how SQL Server 2019 performs IO at ~24GB/s against a 1 trillion row table with a cold cache.
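
The warm/cold distinction is just whether the pages are already in SQL Server's buffer pool. A quick way to feel the difference from any notebook is to time the same aggregate twice; this is a hedged sketch (server, database and table names are placeholders), using Python and pyodbc:

    # Hedged sketch: time the same query twice to compare cache behavior.
    # Connection details and table name are placeholders.
    import time
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
    )
    query = "SELECT COUNT_BIG(*) FROM dbo.BigTable"  # hypothetical table

    for label in ("first run (cold-ish)", "second run (warm)"):
        start = time.perf_counter()
        rows = conn.cursor().execute(query).fetchone()[0]
        print(f"{label}: {rows} rows in {time.perf_counter() - start:.2f}s")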

Big Data, Machine Learning & Data Virtualization

  • SQL Server Big Data Clusters – Part of our Ground to Cloud workshop. In this lab, you will use notebooks to experiment with SQL Server Big Data Clusters (BDC), and learn how you can use it to implement large-scale data processing and machine learning.
  • Data Virtualization using PolyBase – The notebooks in this SQL Server 2019 workshop cover how to use SQL Server as a hub for data virtualization for sources like Oracle, SAP HANA, Azure Cosmos DB, SQL Server and Azure SQL Database.
  • Spark with Big Data Clusters – The notebooks in this folder cover the following scenarios:
    • Data Loading – Transforming CSV to Parquet (a sketch of this step follows the list below)
    • Data Transfer – Spark to SQL using the Spark JDBC connector
    • Data Transfer – Spark to SQL using the MSSQL Spark connector
    • Configure – Configure a Spark session using a notebook
    • Install – Install third-party packages
    • Restful-Access – Access Spark in BDC via RESTful Livy APIs
  • Machine Learning
    • Powerplant Output Prediction – This sample uses the automated machine learning capabilities of the third-party H2O package, running in Spark in a SQL Server 2019 Big Data Cluster, to build a machine learning model that predicts power plant output.
    • TensorFlow on GPUs in SQL Server 2019 Big Data Clusters – The notebooks in this directory illustrate fitting TensorFlow image classification models using GPU acceleration.
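
As noted in the Data Loading bullet above, the CSV-to-Parquet step comes down to a few lines of PySpark. This is a hedged sketch rather than the workshop notebook itself; the input and output paths are placeholders:

    # Hedged sketch: read CSV with an inferred schema, rewrite as Parquet.
    # Input and output paths are placeholders, not the workshop's.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/tmp/clickstream/data.csv"))

    # Parquet is columnar and compressed, so later scans do far less IO.
    df.write.mode("overwrite").parquet("/tmp/clickstream/parquet")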

Ever wondered what breed that dog or cat is? In this show, you’ll learn how to train, optimize and deploy a deep learning model with Azure Notebooks, Azure Machine Learning Service, and Visual Studio Code, all in Python. You’ll use transfer learning to retrain a MobileNet model via TensorFlow to recognize dog and cat breeds using the Oxford-IIIT Pet Dataset.
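
The transfer-learning step itself is compact. Here is a hedged Keras sketch of the idea (the hyperparameters are illustrative and the input pipeline is omitted; the Oxford-IIIT Pet Dataset has 37 breed classes):

    # Hedged sketch: transfer learning with a frozen MobileNetV2 base.
    # Hyperparameters are illustrative; the data pipeline is omitted.
    import tensorflow as tf

    # Load ImageNet weights but drop the original classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False  # reuse the learned features as-is

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(37, activation="softmax"),  # 37 pet breeds
    ])

    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    # model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets omitted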

Next, watch how to optimize that model using the Azure Machine Learning HyperDrive service, and improve the accuracy of our model to over 90%. Finally, we’ll put on our developer hat, and use Visual Studio Code and our Python Extension to deploy and test our model. Along the way you’ll see cool features like our new Jupyter-powered interactive programming experience in VS Code, our AI-powered IntelliSense feature called IntelliCode, and our Azure Machine Learning extension.
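
HyperDrive sweeps hyperparameter combinations across parallel runs and keeps the best one by a metric you name. Here is a hedged sketch with the classic azureml-sdk; the training script, metric name and search space are assumptions, not the show’s exact code:

    # Hedged sketch: a HyperDrive sweep using the classic azureml-sdk (v1).
    # train.py, the metric name and the search space are assumptions.
    from azureml.core import ScriptRunConfig
    from azureml.train.hyperdrive import (
        HyperDriveConfig, RandomParameterSampling, PrimaryMetricGoal,
        choice, uniform,
    )

    src = ScriptRunConfig(source_directory=".", script="train.py")

    sampling = RandomParameterSampling({
        "--learning-rate": uniform(1e-4, 1e-1),
        "--batch-size": choice(16, 32, 64),
    })

    hd_config = HyperDriveConfig(
        run_config=src,
        hyperparameter_sampling=sampling,
        primary_metric_name="accuracy",  # must match what train.py logs
        primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
        max_total_runs=20,
    )
    # experiment.submit(hd_config)  # 'experiment' is an azureml Experiment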

GitHub repo for all code used in the show: https://github.com/microsoft/connect-petdetector

Blog post introducing the new features in Azure Notebooks: https://github.com/Microsoft/AzureNotebooks/wiki/Azure-Notebooks-at-Microsoft-Connect()-2018

Blog post introducing our data science features in our Python extension: https://blogs.msdn.microsoft.com/pythonengineering/2018/11/08/data-science-with-python-in-visual-studio-code/

Azure Notebooks: https://notebooks.azure.com

Python Extension: https://marketplace.visualstudio.com/items?itemName=ms-python.python

Azure Machine Learning Extension: https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-ai

Visual Studio Code: https://code.visualstudio.com/

Here’s an interesting talk from PyCon Germany by Joshua Görner, a Data Scientist at BMW.

From the video description:

Interactive notebooks like Jupyter have become more and more popular in the recent past and form the core of many data scientists’ workplaces. Accessed via the web browser, they allow scientists to easily structure their work by combining code and documentation. Yet notebooks often lead to isolated and disposable analysis artifacts. Keeping the computation inside those notebooks does not allow for convenient concurrent model training, model exposure or scheduled model retraining. Those issues can be addressed by taking advantage of recent developments in the discipline of software engineering. Over the past few years, containerization has become the technology of choice for crafting and deploying applications. Building a data science platform that allows for easy access (via notebooks), flexibility and reproducibility (via containerization) combines the best of both worlds and addresses data scientists’ hidden needs.

Jupyter notebooks are great. They are interactive, customizable and can be made to beautifully illustrate data.

Unfortunately, only a small fraction of data scientists takes full advantage of the possibilities they bring. In this talk, Jakub Czakon shows you some of the coolest notebook features that will impress your peers, dazzle your clients and make your work a lot more enjoyable.