When working with machine learning on beacon sensor data, or indeed any data, it's important to realise that machine learning isn't magic. It isn't foolproof and is ultimately only as good as the data passed in. Because it's labelled 'AI' and 'machine learning', people often expect 100% accuracy, which often isn't possible.
By way of a simple example, take a look at the recent tweet by Max Woolf, where he shows a video of the results from the Google Cloud Vision API when it is asked to identify an ambiguous rotating image that looks like both a duck and a rabbit:
There are times when it thinks the image is a duck, other times a rabbit, and other times it identifies neither. Had the original training data included only ducks and no rabbits, the results would have been different. Had there been different images of ducks, the results would also have been different. Machine learning is only a complex form of pattern recognition. The accuracy of what you get out depends on a) the quality of the training data and b) the quality of the data presented for identification.
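You can see this uncertainty directly in the API's output. The following is a minimal sketch, assuming the google-cloud-vision Python client library and valid Google Cloud credentials are set up; the file name duck_rabbit.jpg is hypothetical. The point is that the API returns labels with confidence scores, never a 100% certain answer:

```python
# Minimal sketch: label detection with the Google Cloud Vision Python client.
# Assumes valid credentials; "duck_rabbit.jpg" is a hypothetical local image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("duck_rabbit.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each label carries a confidence score between 0 and 1, not a certainty.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

Run the same image through at slightly different rotations and the scores, and even the labels, shift. That variability is inherent to pattern recognition, not a bug in the API.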
If your application is safety-critical and needs 100% accuracy, then machine learning might not be right for you.
Read about AI Machine Learning with Beacons