If you’re a regular visitor to this blog, then you know that the use of machine learning and AI technologies is rapidly on the rise, and has been for a while now. This growth mirrors the increasing use of cloud service environments to leverage mass-scale computing resources.

Large cloud service providers offer ready-made machine learning and AI capabilities, models, and tools that make it easier than ever to build more intelligent applications and data mining scenarios.

But what does this mean for privacy?

Of course, this is a concern that security and privacy professionals must address. Machine learning and AI require staggering quantities of data, and some of that data is bound to be sensitive in nature. On top of this, a growing number of regulations mandate data privacy measures for cloud services, making privacy-preserving machine learning techniques all the more critical.

CogX is the London “festival of A.I. and emerging technology” that takes place each June.

This year, due to Covid-19, the event took place completely online.

(For more about how CogX pulled that off, look here.)

One of the sessions veered towards privacy.

One of the most interesting sessions I tuned into was on privacy-preserving machine learning. This is becoming a hot topic, particularly in healthcare, and the coronavirus pandemic is only accelerating interest in applying machine learning to healthcare records.

Lex Fridman shared this lecture by Andrew Trask in January 2020, part of the MIT Deep Learning Lecture Series.

OUTLINE:

0:00 – Introduction
0:54 – Privacy preserving AI talk overview
1:28 – Key question: Is it possible to answer questions using data we cannot see?
5:56 – Tool 1: remote execution
8:44 – Tool 2: search and example data
11:35 – Tool 3: differential privacy
28:09 – Tool 4: secure multi-party computation
36:37 – Federated learning
39:55 – AI, privacy, and society
46:23 – Open data for science
50:35 – Single-use accountability
54:29 – End-to-end encrypted services
59:51 – Q&A: privacy of the diagnosis
1:02:49 – Q&A: removing bias from data when data is encrypted
1:03:40 – Q&A: regulation of privacy
1:04:27 – Q&A: OpenMined
1:06:16 – Q&A: encryption and nonlinear functions
1:07:53 – Q&A: path to adoption of privacy-preserving technology
1:11:44 – Q&A: recommendation systems

Lex Fridman interviews Michael Kearns in the latest episode of his podcast.

Michael Kearns is a professor at the University of Pennsylvania and a co-author of the new book The Ethical Algorithm, which is the focus of much of our conversation, including algorithmic fairness, privacy, and ethics in general. But that is just one of the many fields in which Michael is a world-class researcher, some of which we touch on briefly, including learning theory and the theoretical foundations of machine learning, game theory, algorithmic trading, quantitative finance, computational social science, and more. This conversation is part of the Artificial Intelligence podcast.

Understanding what your AI models are doing is critically important from both functional and ethical perspectives. In this episode we will discuss what it means to develop AI in a transparent way.

Mehrnoosh introduces an awesome interpretability toolkit that enables you to use different state-of-the-art interpretability methods to explain your model’s decisions.

By using this toolkit during the training phase of the AI development cycle, you can use the interpretability output of a model to verify hypotheses and build trust with stakeholders.

You can also use the insights for debugging, validating model behavior, and checking for bias. The toolkit can even be used at inference time to explain the predictions of a deployed model to end users.
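
The toolkit itself is covered in the episode; as a rough sketch of the kind of check described above (inspecting which features drive a trained model’s predictions during development), here is an illustrative example using scikit-learn’s permutation importance as a generic stand-in for a dedicated interpretability method. The dataset and model choices are arbitrary, not from the episode.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple model on a public dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Explain the trained model: which features most affect held-out performance?
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the top five features by mean importance, to sanity-check the model
# against domain expectations or to spot suspicious reliance on one feature.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```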


Deepfakes have started to appear everywhere.

From viral celebrity face-swaps to impersonations of political leaders – it can be hard to spot the difference between real and fake.

Digital impersonations are starting to have real financial repercussions. In the U.S., an audio deepfake of a CEO reportedly scammed one company out of $10 million.

With the 2020 election not far off, there is huge potential for weaponizing deepfakes on social media.

Now, tech giants like Google, Twitter, Facebook, and Microsoft are fighting back. With Facebook spending more than $10 million to fight deepfakes, what’s at stake for businesses, and what’s being done to detect and regulate them?

Lex Fridman interviews Keoki Jackson, the CTO of Lockheed Martin.

Lockheed Martin is a company that, through its long history, has created some of the most incredible engineering marvels human beings have ever built, including planes that fly fast and undetected, defense systems that intercept threats that could take millions of lives in the case of nuclear weapons, and spacecraft that venture out into space, to the moon, Mars, and beyond, with and without humans on board.

Law enforcement agencies like the New Orleans Police Department are adopting AI-based systems to analyze surveillance footage. WSJ’s Jason Bellini gets a demonstration of the tracking technology and hears why some think it’s a game changer, while for others it raises concerns around privacy and potential bias.

As machines get smarter, they have reached the point where they learn by themselves and even make their own decisions.

Here’s an interesting look at 10 times AI displayed amazing capabilities.

There are machines that dream, read words in people’s brains, and evolve themselves into art masters. The darker skills are enough to make anyone […]