Most of our sensor beacons with accelerometers offer motion-triggered broadcast. That is, they can be set up to advertise only when the beacon is moving. This can be used for motion detection or, in some scenarios, as a mechanism to significantly conserve battery life.
Some beacons, such as those in the Minew range with buttons, can be set to advertise only when the button is double pressed. This can be used for SOS-type scenarios.
The S1 temperature/humidity beacons can also be set to advertise only when the temperature or humidity goes above or below a set value. This is useful for alarm-type situations.
While triggered broadcast provides for these use cases and extends battery life, it should be remembered that because the beacons are not advertising all the time, there’s no way of knowing their location (in RTLS situations) or indeed whether the beacon is working (e.g. the battery might be flat). Beacons such as the INGICS range advertise all the time and send different advertising data when a button is pressed or the temperature/humidity changes. This allows for ‘I am still here’ functionality at the expense of shorter battery life.
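As a rough illustration, here’s a minimal Python sketch of ‘I am still here’ monitoring, assuming continuously advertising beacons and using the cross-platform bleak scanning library. The 60 second timeout is an arbitrary example value, not a recommendation:

```python
import asyncio
import time

from bleak import BleakScanner

TIMEOUT_S = 60  # flag a beacon as missing after this long without an advert
last_seen: dict[str, float] = {}

def on_advert(device, advertisement_data):
    # Record when each beacon was last heard, keyed by its address.
    last_seen[device.address] = time.monotonic()

async def main():
    scanner = BleakScanner(detection_callback=on_advert)
    await scanner.start()
    try:
        while True:
            await asyncio.sleep(10)
            now = time.monotonic()
            for address, seen in last_seen.items():
                if now - seen > TIMEOUT_S:
                    # No adverts for a while: flat battery, out of range or
                    # failed. A triggered-only beacon would always look
                    # 'missing' here, which is the trade-off described above.
                    print(f"{address} silent for {now - seen:.0f}s")
    finally:
        await scanner.stop()

asyncio.run(main())
```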
One of our customers, Activ84Health in Belgium, has an interesting product, Memoride, that we have been working on. It encourages the use of cycling fitness machines through a moving map and visual route.
The system uses beacons to detect movement of the cycling machine.
The technology is featured in the Care England Best Practice Report:
The S1 beacon is a temperature/humidity beacon that we supply in three variants. It’s not immediately obvious how the batteries should be replaced. The manufacturer, Minew, has a video showing how to access the screws to open the case:
Most, except the sensor beacons, are waterproof to IP67. All the beacons can be configured to advertise multiple channels at the same time, including iBeacon, Eddystone UID, Eddystone URL, Eddystone TLM, sensor (where available), acceleration (where available) and device info.
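As an aside, a receiver can tell these frame types apart from the advertisement data alone. The Python sketch below is based on the public iBeacon and Eddystone specifications rather than any beacon-specific code; the dictionary parameters mirror the way scanning libraries such as bleak present manufacturer and service data:

```python
# Eddystone frames arrive as service data on the 16-bit UUID 0xFEAA.
EDDYSTONE_UUID = "0000feaa-0000-1000-8000-00805f9b34fb"
APPLE_COMPANY_ID = 0x004C  # iBeacon uses Apple manufacturer data

EDDYSTONE_FRAMES = {0x00: "Eddystone UID", 0x10: "Eddystone URL",
                    0x20: "Eddystone TLM"}

def classify(manufacturer_data: dict, service_data: dict) -> str:
    """Return a human-readable advertisement type."""
    # iBeacon: Apple manufacturer data beginning 0x02 0x15 (type, length).
    apple = manufacturer_data.get(APPLE_COMPANY_ID)
    if apple and apple[:2] == b"\x02\x15":
        return "iBeacon"
    # Eddystone: the first byte of the service data is the frame type.
    eddystone = service_data.get(EDDYSTONE_UUID)
    if eddystone:
        return EDDYSTONE_FRAMES.get(eddystone[0], "Eddystone (other)")
    return "unknown"
```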
Sato beacons use the button in an innovative way. Instead of a long press turning the beacon off, the long press is detected and used for SOS-type scenarios. The beacon is instead turned off using the configuration app or programmatically via your custom app.
Dialog Semiconductor, the manufacturer of the SoC in some beacons, has an informative article on How Bluetooth Mesh and IIoT are Reimagining Factories and Warehouses. It explains how the recent introduction of Bluetooth mesh has created new opportunities in the Industrial Internet of Things (IIoT).
“The manufacturing industry is absolutely ripe for potential with Bluetooth mesh”
IDC
“Industrial sensors and smart buildings among other use cases, are expected to outpace the overall Bluetooth LE market by 3X through 2022”
Research and Markets
The article mentions preventive maintenance, air quality sensing, asset tracking, robot control systems and traditional air conditioning as possible applications for Bluetooth mesh. However, a key insight is that once a mesh network is in place it can be used for applications beyond those originally envisaged.
A lot is said about the advantages of Industry 4.0, or digital transformation, and the associated new technologies, but it’s much harder to apply these to a business that has legacy equipment and no real way of knowing where to start.
Our previous article on productivity explained how, historically, digital transformation has only been implemented in the top 5% of ‘frontier’ companies. These have tended to be very large companies with large R&D budgets that have enabled customised digital solutions. More recently, the availability of less expensive sensors and software components has extended the opportunity to SMEs. These companies are already realising gains in profitability, customer experience and operational efficiency. Unlike previous technologies, such as CRM, newer technologies such as IoT and AI are more transformative. Companies that don’t update their processes risk being overtaken by their competition, with a greater possibility of going out of business. But where do you start?
The place to start is not technology but instead something you and your colleagues fortunately have lots of experience of: your company. Take an honest look at your processes and work out the key problems that, if solved, would achieve the greatest gains. You might have ignored problems or inefficiencies for years or decades because they were thought to be unsolvable. Technology might now be able to solve some of them. So what kinds of problem? Think in terms of bottlenecks, costly workarounds, tasks limited by human effort, stoppages, downtime, process delays, under-used equipment and even under-used people. Can you measure these things and react? Can you predict when they are about to happen? This is where sensing comes in.
The next stage is connectivity. You will almost certainly need to upgrade or expand your WiFi and/or Ethernet network. However, it can be impractical to put sensors on everything and everyone and connect it all by WiFi/Ethernet. Instead, consider Bluetooth LE and sensor beacons to provide a low cost, low power solution for the last 50 to 100m. Bluetooth mesh can provide site-wide connectivity.
Initially, implement a few key improvements that offer good payback for the effort (ROI). The gains in efficiency, productivity, reduced costs and even customer experience should be enough to convince stakeholders to expand and better plan the digital transformation. This involves replacing inefficient equipment and processes using, for example, robotics and 3D printing. It also involves analysing higher-order information combined from multiple sources and using more advanced techniques, such as machine learning, to recognise patterns in order to detect, classify and predict. This solves problems of a complexity beyond what can be handled by the human mind or by an algorithm hand-written by a programmer.
While SensingKit supports beacons, it only supports them for detecting proximity. The various sensor beacon variants are not supported. SensingKit is best used when you want the smartphone, not the beacon, to do the sensing, or when you want to mix smartphone sensing with beacon proximity sensing.
The traditional IoT strategy of sending all data up to the cloud for analysis doesn’t work well for some sensing scenarios. The combination of many sensors and/or frequent updates leads to lots of data being sent to the server, often needlessly. The server and onward systems usually only need to know about abnormal situations. The data burden manifests itself as lots of traffic, lots of stored data, complex processing and significant, unnecessary costs.
The processing of data and generation of alerts by a server can also imply delays that are too long or unreliable for some time-critical scenarios. The opposite approach, doing all or most of the processing near the sensing, is called ‘edge’ computing. Some people think edge computing will one day become the norm as it’s realised that the cloud paradigm doesn’t scale technically or financially. We have been working with edge devices for some time and can now formally announce a new edge device with some unique features.
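To make the idea concrete, here is a minimal Python sketch of edge filtering, where readings are processed locally and only abnormal values are forwarded. The broker address, topic and thresholds are illustrative assumptions, not part of any product described here:

```python
import json

import paho.mqtt.publish as publish

BROKER = "broker.example.local"      # illustrative assumption
TEMP_LOW_C, TEMP_HIGH_C = 2.0, 8.0   # example alarm band

def on_reading(beacon_id: str, temperature_c: float) -> None:
    # Call this for every sensor advertisement received at the edge.
    if TEMP_LOW_C <= temperature_c <= TEMP_HIGH_C:
        return  # normal reading: send nothing, saving traffic and storage
    # Abnormal reading: forward a small event, not the raw data stream.
    publish.single("sensors/alerts",
                   json.dumps({"beacon": beacon_id, "temp": temperature_c}),
                   hostname=BROKER)
```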
Another problem with IoT is that every scenario is different, with different inputs and outputs. Most organisations start by looking for a packaged, ready-made solution to their IoT problem that usually doesn’t exist. They tend to end up creating a custom-coded solution. Instead, with SensorCognition™ we use pre-created modules that we ‘wire’ together, using data, to create your solution. We configure rather than code. This speeds up solution creation, provides greater adaptability to requirement changes and ultimately allows us to spend more time on your solution and less time solving programming problems.
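As a loose analogy (this is not SensorCognition™’s actual format), the difference between configuring and coding can be sketched in a few lines of Python: the modules are pre-written once, and the solution itself is just data describing how they are wired together:

```python
# Pre-written, reusable modules.
MODULES = {
    "threshold": lambda value, limit: value if value > limit else None,
    "to_json": lambda value: {"temp": value},
}

# The 'solution' is configuration, not code: named steps plus settings.
FLOW = [("threshold", {"limit": 30.0}), ("to_json", {})]

def run(flow, value):
    # Each step is looked up by name and given its settings from the data.
    for name, settings in flow:
        if value is None:
            break  # an earlier module filtered the value out
        value = MODULES[name](value, **settings)
    return value

print(run(FLOW, 35.5))  # -> {'temp': 35.5}
print(run(FLOW, 20.0))  # -> None (below the threshold, filtered out)
```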
However, the main reason for creating SensorCognition™ has been to provide for easier machine learning on sensor data. Machine learning is a two-stage process. First, data is collected, cleaned and fed into the ‘learning’ stage to create models. Crudely speaking, these models represent patterns that have been detected in the data and can be used to detect, classify and predict. During the production or ‘inference’ stage, new data is fed through the models to gain real-time insights. It’s important to clean the new data in exactly the same way as in the learning stage, otherwise the models don’t work. The traditional method of data scientists manually cleaning data prior to creating models isn’t easily transferable to using those same models in production. SensorCognition™ provides a way of collecting sensor data for learning and inference with a common way of cleaning it, all without using a cloud server.
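The ‘clean the same way in both stages’ principle can be illustrated with a short scikit-learn sketch (this shows the general technique, not SensorCognition™’s internals): the cleaning and the model are bound into one pipeline that is saved after learning and reloaded for inference, so the cleaning can never drift out of step:

```python
import numpy as np
from joblib import dump, load
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder training data: (temperature, humidity) readings labelled as
# normal (0) or abnormal (1). Real data would come from collected sensor logs.
X_train = np.array([[21.0, 45.0], [22.5, 50.0], [60.0, 10.0], [58.0, 12.0]])
y_train = np.array([0, 0, 1, 1])

# Learning stage: cleaning (here, just scaling) and the model are bound
# together in one Pipeline, so they cannot be applied inconsistently.
pipeline = Pipeline([
    ("clean", StandardScaler()),
    ("model", RandomForestClassifier(random_state=0)),
])
pipeline.fit(X_train, y_train)
dump(pipeline, "model.joblib")

# Inference stage: loading the pipeline replays exactly the same cleaning
# on new sensor readings before they reach the model.
pipeline = load("model.joblib")
print(pipeline.predict(np.array([[59.0, 11.0]])))  # -> [1] (abnormal)
```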
Sensor data and machine learning aren’t much use unless your solution can communicate with the outside world. SensorCognition™ modules allow us to combine inputs such as MQTT, HTTP, WebSocket, TCP, UDP, Twitter, email, files and RSS. SensorCognition™ can also have a web user interface, accessible on the same local network, with buttons, charts, colour pickers, date pickers, dropdowns, forms, gauges, notifications, sliders, switches and text labels; it can play audio or text-to-speech and use arbitrary HTML/JavaScript to view data from other places. SensorCognition™ processes the above inputs and provides output to files, MQTT, HTTP(S), WebSocket, TCP, UDP, email, Twitter, FTP, Slack and Kafka. It can also run external processes and JavaScript if needed.
With SensorCognition™ we have created a general purpose device that can process sensor data using machine learning to provide for business-changing Internet of Things (IoT) and ‘Industry 4.0’ machine learning applications. This technology is available as a component of BeaconZone Solutions.
When working with machine learning on beacon sensor data, or indeed any data, it’s important to realise that machine learning isn’t magic. It isn’t foolproof and is ultimately only as good as the data passed in. Because it’s called AI and machine learning, people often expect 100% accuracy, when this frequently isn’t possible.
By way of a simple example, take a look at the recent tweet by Max Woolf where he shows a video depicting the results of the Google Cloud Vision API when asked to identify an ambiguous rotating image that looks like both a duck and a rabbit:
There are times when it thinks the image is a duck, other times a rabbit, and other times when it doesn’t identify either. Had the original learning data included only ducks and no rabbits, there would have been different results. Had there been different images of ducks, the results would have been different. Machine learning is only a complex form of pattern recognition. The accuracy of what you get out is related to a) the quality of the learning data and b) the quality of the data presented when you try identification.
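One practical response to this kind of ambiguity is for the application to abstain when the model isn’t confident, rather than forcing a duck-or-rabbit answer. A minimal Python sketch, assuming a scikit-learn style classifier with predict_proba and an arbitrary 0.8 threshold:

```python
import numpy as np

def classify_with_abstain(model, features, threshold=0.8):
    """Return the most likely class index, or None when the model is unsure."""
    probabilities = model.predict_proba([features])[0]
    best = int(np.argmax(probabilities))
    # Below the confidence threshold, report 'don't know' instead of guessing.
    return best if probabilities[best] >= threshold else None
```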
If your application of machine learning is safety-critical and needs 100% accuracy, then machine learning might not be right for you.