Visual Studio 2019 comes with a number of updates for Python developers. See how Visual Studio 2019 makes working with Python fun.
In this video, Lex Fridman interviews Greg Brockman, Co-Founder and CTO of OpenAI. OpenAI is a research organization developing ideas in AI that will eventually lead to a safe and friendly artificial general intelligence that benefits and empowers humanity.
Last night was the Global AI Night and it was a pleasure and an honor to speak at the one in DC. The venue was awesome and the crowd was great!
Here is a sampling of tweets. I even did a live stream after the event.
And then, I had to put on my unicorn hat to make sure the neural network learned properly. After all, Data Scientists are part unicorn. 😉
Here’s an interesting article from CodeProject defining the data science lifecycle and how it relates to business cycles and the fairly well-established SDLC framework. Although some will argue that data science is “pure science” and this cycle belongs under the “data engineering” label, organizations that fail to move innovations efficiently from “the lab” to production are not going to be competitive.
By its simplest definition, Data Science is a multi-disciplinary field comprising multiple processes to extract knowledge or useful output from input data. The output may be a predictive or descriptive analysis, a report, business intelligence, etc. Data Science has well-defined lifecycles, similar to any other project, and CRISP-DM and TDSP are two of the proven standards.
Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage. It comes with configurable abilities to:
- Automatically tier data from the IoT Edge device to Azure
- Automatically delete data from the IoT Edge device after a specified time
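Both behaviors are driven through the module's desired properties. Here is a sketch based on the linked docs; the connection string and container names below are placeholders, so check the documentation for the exact schema your module version expects:

```json
{
  "deviceToCloudUploadProperties": {
    "uploadOn": true,
    "uploadOrder": "OldestFirst",
    "cloudStorageConnectionString": "<your-azure-storage-connection-string>",
    "storageContainersForUpload": {
      "localcontainer": { "target": "cloudcontainer" }
    },
    "deleteAfterUpload": true
  },
  "deviceAutoDeleteProperties": {
    "deleteOn": true,
    "deleteAfterMinutes": 1440
  }
}
```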
Learn more: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob
Create a Free Account (Azure): https://aka.ms/aft-iot
Here’s a great tutorial for beginners (or a refresher for seasoned pros) on the basics of SQL joins: INNER, LEFT, RIGHT, and FULL JOIN.
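As a quick illustration of the first two join types (using Python's built-in sqlite3 and hypothetical employees/departments tables), an INNER JOIN keeps only rows that match in both tables, while a LEFT JOIN keeps every row from the left table:

```python
import sqlite3

# Hypothetical tables for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
cur.executemany("INSERT INTO departments VALUES (?, ?)",
                [(1, "Engineering"), (2, "Data Science"), (3, "Marketing")])
cur.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [(1, "Ada", 1), (2, "Grace", 2), (3, "Alan", None)])

# INNER JOIN: only employees with a matching department.
inner = cur.execute("""
    SELECT e.name, d.name
    FROM employees e
    INNER JOIN departments d ON e.dept_id = d.id
    ORDER BY e.id
""").fetchall()
print(inner)  # [('Ada', 'Engineering'), ('Grace', 'Data Science')]

# LEFT JOIN: all employees, with NULL where no department matches.
left = cur.execute("""
    SELECT e.name, d.name
    FROM employees e
    LEFT JOIN departments d ON e.dept_id = d.id
    ORDER BY e.id
""").fetchall()
print(left)  # [('Ada', 'Engineering'), ('Grace', 'Data Science'), ('Alan', None)]

conn.close()
```

Note that SQLite only added RIGHT and FULL OUTER JOIN in version 3.39; on older versions a RIGHT JOIN can be simulated by swapping the table order in a LEFT JOIN.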
Here’s a good, code-heavy tutorial that uses the gradient descent optimization algorithm. It also explores splitting data into three parts.
Additionally, we will divide our data set into three slices: training, testing, and validation. In our example, we have data in CSV format with the columns “height weight age projects salary”. Assuming there is a correlation between projects and salary, we will try to predict the salary given the number of projects completed. You can download the data using this link:
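Both ideas can be sketched in plain Python. This is a minimal illustration, not the tutorial's method: the linear salary-vs-projects relationship and all numbers here are synthetic stand-ins for the tutorial's CSV:

```python
import random

random.seed(42)

# Synthetic stand-in for the CSV: salary assumed roughly linear in projects.
projects = [random.uniform(1, 20) for _ in range(100)]
salary = [3.0 * p + 50.0 + random.gauss(0, 1) for p in projects]
data = list(zip(projects, salary))
random.shuffle(data)

# Three-way split: 60% training, 20% validation, 20% testing.
n = len(data)
train = data[: int(0.6 * n)]
val = data[int(0.6 * n): int(0.8 * n)]
test = data[int(0.8 * n):]

# Gradient descent on mean squared error for the model y = w*x + b.
w, b, lr = 0.0, 0.0, 0.005
for epoch in range(3000):
    grad_w = grad_b = 0.0
    for x, y in train:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(train)
        grad_b += 2 * err / len(train)
    w -= lr * grad_w
    b -= lr * grad_b

def mse(split):
    return sum(((w * x + b) - y) ** 2 for x, y in split) / len(split)

print(f"w={w:.2f}, b={b:.2f}, val MSE={mse(val):.2f}, test MSE={mse(test):.2f}")
```

The validation slice is what you would use to tune choices like the learning rate, leaving the test slice untouched for a final, unbiased score.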