Press the play button below to listen here or visit the show page

Show Notes

AI Today Podcast #82: Interview with Frank La Vigne and Andy Leonard of the Data Driven Podcast

In this podcast, we get the rare opportunity to interview fellow podcasters. Frank La Vigne and Andy Leonard, co-hosts of the Data Driven podcast, joined us on this episode of the AI Today podcast to discuss the important role data plays in AI, their take on how data will continue to be used in the future, and the idea of pervasive knowledge.


Here’s an interesting look at the cutting-edge technologies just on the horizon of AI research and what problems they can potentially solve that current techniques can’t.

Artificial intelligence (AI) is dominated by pattern recognition techniques. Recently, major advances have been made in the fields of image recognition, machine translation, audio processing and several others thanks to the development and refinement of deep learning. But deep learning is not the cure for every problem. In fact, […]

Last week, I spoke to a group of high school students about careers in STEM. Aside from being happy that STEM is now encouraged, I pointed out to them that the workforce they will be entering may look different from the one they see now. By the time they hit the workforce, digital transformation will have made short work of companies that have not become data driven. The only companies surviving and thriving will be the ones that adapt quickly.

Proving that point is this article from TechRepublic, with helpful advice on how to stay ahead of the robots.

Here’s an interesting video related to the article:

In case you were wondering when the worlds of cybersecurity and AI would collide and create new threats, it’s happening.

AI fuzzing uses machine learning and similar techniques to find vulnerabilities in an application or system. Fuzzing has been around for a while, but it’s been too hard to do and hasn’t gained much traction with enterprises. Adding AI promises to make the tools easier to […]
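
The excerpt above is truncated, but the core fuzzing loop is simple enough to sketch. Below is a minimal, purely illustrative mutation-based fuzzer in Python. The parse_record function is a hypothetical stand-in for the application under test, and the mutations here are random rather than ML-guided; in an AI fuzzer, a learned model would steer the mutate step toward inputs more likely to reach new code paths or trigger crashes.

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical target: parses 'key=value' and chokes on malformed input."""
    key, sep, value = data.partition("=")
    if sep != "=" or not key:
        raise ValueError(f"malformed record: {data!r}")
    return {key: value}

def mutate(seed: str) -> str:
    """Randomly flip, insert, or delete a few characters in the seed input."""
    chars = list(seed)
    for _ in range(random.randint(1, 3)):
        op = random.choice(("flip", "insert", "delete"))
        pos = random.randrange(len(chars) + 1)
        if op == "insert":
            chars.insert(pos, random.choice(string.printable))
        elif chars:
            pos = min(pos, len(chars) - 1)
            if op == "flip":
                chars[pos] = random.choice(string.printable)
            else:
                del chars[pos]
    return "".join(chars)

def fuzz(seed: str, iterations: int = 10_000) -> list[str]:
    """Feed mutated inputs to the target and collect the ones that blow up.

    A real fuzzer watches for unexpected crashes, hangs, or memory errors,
    not just anticipated validation exceptions; this is a simplification.
    """
    crashing_inputs = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashing_inputs.append(candidate)
    return crashing_inputs

if __name__ == "__main__":
    crashes = fuzz("name=value")
    print(f"{len(crashes)} inputs raised exceptions; first few: {crashes[:5]}")
```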

Here’s an enlightening look at the state of applying AI to security applications.

Top 5 barriers to AI security adoption

From this article in TechRepublic:

Artificial intelligence (AI) holds a great deal of promise for helping cybersecurity professionals deal with more sophisticated and dangerous threats. But the technology faces several key obstacles before it is widely adopted in this field, according to a Tuesday survey conducted by SANS Institute and sponsored by Cylance. Among […]

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun may not be household names, but their work definitely is. In fact, you’ve interacted with the descendants of their research today (or in the last few minutes).

From the facial recognition system that unlocked your phone to the AI language model that suggested what to write in your last email, the impact of their work is everywhere, earning them the title of “Godfathers of AI.” Every time I hear that term, this is the image that pops into my mind.

Recently, the trio received the Turing Award, a kind of Nobel Prize for computer science.

So, if they ever call you and ask for a favor, you’d better do it, unless you want to wake up to a decapitated MacBook lying in your bed.

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun — sometimes called the ‘godfathers of AI’ — have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses.

AI is set to disrupt every field and every industry. Healthcare, in particular, seems primed for disruption. Here’s an interesting project out of Stanford.

“One of the really exciting things about computer vision is that it’s this powerful measuring tool,” said Yeung, who will be joining the faculty of Stanford’s department of biomedical data science this summer. “It can watch what’s happening in the hospital setting continuously, 24/7, and it never gets tired.”

Current methods for documenting patient movement are burdensome and ripe for human error, so this team is devising a new way that relies on computer vision technology similar to that in self-driving cars. Sensors in a hospital room capture patient motions as silhouette-like moving images, and a trained algorithm identifies the activity — whether a patient is being moved into or out of bed, for example, or into or out of a chair.
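
To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of the general shape described above: silhouette-like frames come in, simple per-frame features are extracted, and a clip-level rule assigns an activity label. The Stanford system uses depth sensors and a trained model rather than the hand-written rule and made-up thresholds below; every name and number here is purely illustrative.

```python
import numpy as np

# Toy stand-in for the sensor output: each frame is a binary silhouette mask
# (1 where the person is, 0 elsewhere).

def frame_features(mask: np.ndarray) -> np.ndarray:
    """Extract simple features from one silhouette frame: area and centroid."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return np.zeros(3)
    area = len(ys) / mask.size
    return np.array([area, ys.mean() / mask.shape[0], xs.mean() / mask.shape[1]])

def classify_clip(frames: list[np.ndarray]) -> str:
    """Label a short clip with a coarse activity based on centroid motion.

    A trained sequence model would replace this hand-written rule.
    """
    feats = np.stack([frame_features(f) for f in frames])
    horizontal_drift = feats[-1, 2] - feats[0, 2]  # centroid x at end vs. start
    if horizontal_drift > 0.2:
        return "moving out of bed"
    if horizontal_drift < -0.2:
        return "moving into bed"
    return "resting"

if __name__ == "__main__":
    # Simulate a person (a small blob) drifting across a 64x64 frame.
    clip = []
    for step in range(10):
        frame = np.zeros((64, 64), dtype=np.uint8)
        x = 10 + step * 4
        frame[28:36, x:x + 8] = 1
        clip.append(frame)
    print(classify_clip(clip))  # -> "moving out of bed"
```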

Here’s a video focusing on the “existential risks” facing humanity and why the 21st century is like no other.

While I think some of this is fear-mongering, the video does raise some interesting ethical and policy questions.

Threats from artificial intelligence and biotechnology might wipe out humanity as we know it. The nature and level of such extreme technological risks (ETRs) are difficult to assess because they have received little serious scientific attention. At the University of Cambridge’s Centre for Existential Risk, scientists are exploring these threats and how we can manage them.

Many people new to data science might believe that the field is just about R, Python, Spark, Hadoop, SQL, traditional machine learning techniques, or statistical modeling. While those technologies are a large part of the field, the reality is more nuanced than that.

Here’s a thoughtful article from Vincent Granville on Data Science Central about this very question, along with a list of resources:

24 Articles About Core Data Science