What does the mathematician and philosopher Blaise Pascal have to say about AI safety? Can there be fairness in a clearly asymmetric equation?
Dr. Hannah Fry explores the power of algorithms, AI, and pigeons in this talk from the Royal Institution.
From the YouTube description:
Algorithms are increasingly used to make decisions in healthcare, transport, finance and security. How can they best be used and what happens when things go wrong?
In this episode of BBC Click, they explore a new art installation in Paris, and the Pope weighs in on the ethics of robotics and AI.
Here’s an interesting article in Nature about the use of AI in evaluating embryos, another application of computer vision in the medical field. Could this bring down healthcare costs? What if the algorithm mislabels an embryo? Are there ethical implications?
Deep learning algorithms, in particular convolutional neural networks (CNNs), have recently been used to address a number of medical-imaging problems, such as detection of diabetic retinopathy [18], skin lesions [19], and diagnosing disease [20]. They have become the technique of choice in computer vision and they are the most successful type of models for image analysis. Unlike regular neural networks, CNNs contain neurons arranged in three dimensions (i.e., width, height, depth). Recently, deep architectures of CNNs such as Inception [21] and ResNet [22] have dramatically increased the progress rate of deep learning methods in image classification [23]. In this paper, we sought to use deep learning to accurately predict the quality of human blastocysts and help select the best single embryo for transfer (Fig. 1).
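To make the "neurons arranged in three dimensions" idea concrete, here is a minimal sketch of the convolution step at the heart of a CNN, written in pure Python for clarity. The shapes and values are illustrative only; the paper's actual models use optimized deep learning libraries and far deeper architectures.

```python
def conv2d(image, kernel):
    """Slide one kernel over a multi-channel image (valid padding).

    image:  H x W x C nested lists (height, width, depth/channels)
    kernel: kH x kW x C nested lists, matching the image depth
    Returns a (H - kH + 1) x (W - kW + 1) feature map.
    """
    H, W, C = len(image), len(image[0]), len(image[0][0])
    kH, kW = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kH + 1):
        row = []
        for j in range(W - kW + 1):
            # Dot product of the kernel with one 3-D patch of the image.
            s = 0.0
            for di in range(kH):
                for dj in range(kW):
                    for c in range(C):
                        s += image[i + di][j + dj][c] * kernel[di][dj][c]
            row.append(s)
        out.append(row)
    return out

# Example: a 3x3 summing kernel over a 4x4 "RGB" image of ones.
image = [[[1.0, 1.0, 1.0] for _ in range(4)] for _ in range(4)]
kernel = [[[1.0, 1.0, 1.0] for _ in range(3)] for _ in range(3)]
fmap = conv2d(image, kernel)
print(fmap)  # 2x2 feature map, each entry 3*3*3 = 27.0
```

A real CNN stacks many such kernels (producing a depth of feature maps) and interleaves nonlinearities and pooling, which is what lets architectures like Inception and ResNet classify images end to end.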
Here’s an interesting piece in Forbes on how AI will transform the way we conduct and audit business.
“One of the best-use cases for AI is to look at information and to identify patterns that stand out,” he says, adding that this applies just as readily to looking for unethical behavior as it does to recognizing cat pictures. “As long as there’s data related to it and you can analyze it at scale, AI can detect anomalies that humans simply can’t.”
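The Forbes piece doesn't describe a specific algorithm, so as an illustration only, here is the simplest form of the "patterns that stand out" idea: flagging values that deviate strongly from the mean (a z-score test). The expense figures below are invented for the example.

```python
# Hypothetical sketch: flag data points far from the mean.
# Real audit systems use far more sophisticated models.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return values lying more than `threshold` standard
    deviations from the mean of the sample."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Expense amounts: one entry stands out from the pattern.
expenses = [102, 98, 105, 99, 101, 103, 97, 100, 5000]
print(find_anomalies(expenses))  # [5000]
```

One caveat worth noting: a single extreme outlier inflates the standard deviation, which is why production systems often prefer robust statistics (e.g. median-based measures) or learned models over this naive test.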
OpenAI raised some eyebrows last month when it announced it had figured out a way to get an AI to write more naturally. However, it decided not to release the full research for fear that it could cause havoc.
Last month, researchers at OpenAI revealed they had built software that could perform a range of natural language tasks, from machine translation to text generation. Some of the technical details were published in a paper, though the majority of the materials were withheld for fear that they could be used maliciously to create spam-spewing bots or churn out tons of fake news. Instead, OpenAI released a smaller and less effective version nicknamed GPT-2-117M.
Siraj Raval weighs in on the controversy around OpenAI’s Text Generator.
As a machine learning project grows, so should its infrastructure. In this talk, Alejandro Saucedo covers some of the key trends in machine learning operations, as well as libraries to watch in 2019.
The talk is based on the “Awesome Machine Learning Operations” list maintained by The Institute for Ethical AI & Machine Learning, and focuses on the topics of reproducibility, orchestration and explainability.
This week I’m at (well, near) Microsoft’s headquarters just outside Seattle, Washington, attending internal, possibly even secret, training. In this impromptu Data Point, I chat with fellow attendees about AI, ethics, and the ever-present unintended consequences of technological advancement.
Press the play button below to listen here or visit the show page at DataDriven.tv
If the AI revolution will first automate away most of the jobs, what happens next? Will machines ever become conscious? How will we know? And what shall we do once machines become conscious? Do we need to grant them rights?