Frank La Vigne works at Microsoft as an AI Technology Solutions Professional, where he helps companies achieve more by getting the most out of their data with analytics and AI. He also co-hosts the DataDriven podcast. He blogs regularly at FranksWorld.com, and you can watch him on his YouTube channel, “Frank’s World TV” (FranksWorld.TV). Frank has extensive experience as a software engineer. You can find him on Twitter at @tableteer.
In this DataPoint, Frank discovers that Chuck E. Cheese tracks game-play statistics with RFID cards, and shows how to look up your own data at their in-store kiosks.
Press the play button below to listen here, or visit the show page at DataDriven.tv.
You’ll hear the term Bayes or Bayesian come up a lot in data science, but this video explores the theory with tennis balls and a table.
Fraud detection, a common use of AI, belongs to a more general class of problems — anomaly detection.
An anomaly is a generic, not domain-specific, concept. It refers to any exceptional or unexpected event in the data: a failing mechanical part, an arrhythmic heartbeat, or a fraudulent transaction.
Basically, identifying fraud means identifying an anomaly within a set of otherwise legitimate transactions. As with any anomaly, you can never be entirely sure what form a fraudulent transaction will take, so you need to account for all possible “unknown” forms.
Here’s an interesting article on doing anomaly/fraud detection with a neural autoencoder.
Using a training set of just legitimate transactions, we teach a machine learning algorithm to reproduce the feature vector of each transaction. Then we perform a reality check on that reproduction: if the distance between the original transaction and the reproduced transaction is below a given threshold, the transaction is considered legitimate; otherwise it is flagged as a fraud candidate (the generative approach). In this case, we only need a training set of “normal” transactions, and we suspect an anomaly based on the distance value.
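To make the generative approach concrete, here is a minimal NumPy sketch that stands in for the neural autoencoder with a linear one (PCA via SVD). The data, the 99th-percentile threshold, and all names here are illustrative assumptions, not taken from the article; the key idea — train on legitimate transactions only, then flag anything whose reconstruction distance exceeds a threshold — is the same.

```python
# Generative anomaly detection sketch: a linear "autoencoder" (PCA)
# trained only on legitimate transactions. All data and the threshold
# choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Training set: only legitimate transactions (2 correlated features).
legit = rng.normal(0, 1, size=(500, 2))
legit[:, 1] = 0.9 * legit[:, 0] + 0.1 * legit[:, 1]

# "Train": learn a 1-D encoding that best reconstructs the data.
mean = legit.mean(axis=0)
_, _, vt = np.linalg.svd(legit - mean, full_matrices=False)
encoder = vt[:1].T  # project down to 1 dimension

def reconstruct(x):
    # Encode then decode: the round trip loses whatever doesn't
    # fit the pattern learned from legitimate data.
    return (x - mean) @ encoder @ encoder.T + mean

def anomaly_score(x):
    # Distance between a transaction and its reconstruction.
    return np.linalg.norm(x - reconstruct(x), axis=-1)

# Assumed cutoff: 99th percentile of scores on legitimate data.
threshold = np.percentile(anomaly_score(legit), 99)

normal_txn = np.array([1.0, 0.9])   # fits the learned correlation
odd_txn = np.array([1.0, -3.0])     # breaks the correlation
print(anomaly_score(normal_txn) <= threshold)  # legitimate
print(anomaly_score(odd_txn) > threshold)      # fraud candidate
```

A neural autoencoder replaces the SVD step with an encoder/decoder network, but the scoring and thresholding logic is identical.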
Based on the histograms or box plots of the input features, a threshold can be identified. All transactions with input features beyond that threshold are declared fraud candidates (the discriminative approach). Usually, this approach needs examples of both fraudulent and legitimate transactions to build the histograms or box plots.
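The discriminative approach can be sketched in a few lines. The feature values below are made up, and the 1.5 × IQR multiplier is just the standard box-plot whisker convention, not something specified in the article:

```python
# Discriminative anomaly detection sketch: derive a cutoff from the
# box plot (IQR rule) of one input feature, then flag transactions
# beyond it. Data and the 1.5*IQR rule are illustrative assumptions.
import numpy as np

amounts = np.array([12.0, 25.5, 8.9, 30.0, 22.1, 18.7, 950.0, 27.3])

q1, q3 = np.percentile(amounts, [25, 75])
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr  # box-plot whisker: beyond it = outlier

fraud_candidates = amounts[amounts > upper_fence]
print(fraud_candidates)  # the 950.0 transaction stands out
```

In practice you would apply this per feature (or use a proper multivariate method), but the thresholding idea is the same.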
Deep learning is central to recent innovations in AI. If you don’t want to run your code in the cloud and prefer to build your own local rig optimized for TensorFlow and machine learning, then this post is for you.
One point to note is that TensorFlow uses a slightly unusual computation scheme, which can be intimidating to novice programmers. Computations are built into a ‘computation graph’, which is then run all at once. So to add two variables, a and b, TensorFlow first encodes the ‘computation’ a+b as a node in the graph. If you try to access the result before running the graph, you will not get a value (the result hasn’t been computed yet); you will just get the graph node. Only after running the graph will you have access to the actual answer. Bear this in mind, as it will clear up confusion in your later explorations with TensorFlow.
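To see the deferred-execution idea without installing anything, here is a toy pure-Python analogue. The `Node` class and its names are invented for illustration and are not TensorFlow's API (in TensorFlow 1.x the real equivalents are ops like `tf.add` and executing them with `Session.run`), but it shows the same two-phase pattern: building records the computation, and only running it produces a value.

```python
# Toy analogue of TensorFlow's build-then-run scheme. All names here
# are illustrative, not TensorFlow's API.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        # Building phase: record "add" in the graph, compute nothing.
        return Node("add", (self, other))

    def run(self):
        # Running phase: walk the graph and actually compute.
        if self.op == "const":
            return self.value
        if self.op == "add":
            return sum(n.run() for n in self.inputs)

a = Node("const", value=2)
b = Node("const", value=3)
c = a + b          # just a graph node, nothing computed yet

print(type(c))     # a Node, not a number -- the graph, not the answer
print(c.run())     # only now is the addition actually performed
```

This is exactly the confusion described above: inspecting `c` before `run()` gives you the graph, not the result.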