
Many organisations, both large and small, have already engaged in some form of IoT pilot activity in recent years, while others are further ahead, scaling solutions across the organisation. There is new-found confidence in IoT in mainstream markets: the technology is becoming part of the everyday IT and OT infrastructure agendas of today’s businesses, organisations better understand the opportunities for business growth, and there is a desire from the top to scale IoT solutions.

Finding the right IoT technology solution for business challenges remains a major issue for many organisations. Although some early adopters are becoming more educated about technologies and their potential use cases, more needs to be done to help them navigate the IoT solutions and supplier ecosystem. Technology benchmarking is an activity that allows industry players to better understand the state of play of rapidly emerging solutions in the market.

Establishing “like for like” comparisons between products and solutions, and examining how they perform under specific use case scenarios, provides valuable insights for both demand-side and supply-side stakeholders. This talk presents the benchmarking activities we carried out over the last two years, on asset trackers and social distancing solutions, together with our findings.

ABOUT THE SPEAKERS

Ramona is the IoT Lead Engineer at Digital Catapult, working to deploy IoT “stuff that works” that makes a big difference to UK businesses and organisations. Her role involves a range of activities, from applying emerging IoT technologies to real-world problems, translating research into prototypes, demonstrators and pilots, and finding early technology adopters, to co-creating new testbeds and accelerator programmes that help innovators leverage IoT technology. Ramona is passionate about benchmarking IoT solutions and developing tools and practices to enable realistic performance evaluation. She leads the IoT benchmarking activities at Digital Catapult and is actively involved in academic and industrial benchmarking initiatives. Ramona has more than a decade of experience working with IoT technologies, having been a researcher at the Technical University of Cluj-Napoca and a senior researcher at the Nimbus Research Centre in Cork. She holds a PhD from the University of Trento and an MSc from the Technical University of Cluj-Napoca. Ramona is a strong advocate for women in STEM, working on programmes to encourage and support young women along their career paths.

Find out more about Digital Catapult.

Follow Ramona

Become part of the Community!

Session Transcript

Ramona Marfievici
Hello, I’m Ramona Marfievici, and I am the Lead Engineer in the IoT team at Digital Catapult, working, among other things, on benchmarking IoT devices and solutions. Today I’m going to talk to you about this activity, benchmarking, and show you that IoT, and computer science in general, is not a boring job behind a computer but an exciting one. My presentation is going to revolve around our benchmarking activities during COVID times, an IoT social distancing solution benchmarking activity that we worked on in the summer and autumn of 2020. And I’m not going to talk about contact tracing solutions, so do not switch to another channel.

Ramona Marfievici
Before diving into the work we did at Digital Catapult, let me introduce myself and my activities. So, who am I? Currently I am the lead engineer of the IoT team at Digital Catapult. Before that, I was a senior researcher at the Nimbus Embedded Systems Research Centre in Ireland, and before that a researcher and lecturer at the Technical University of Cluj-Napoca, my hometown in Romania, where I also graduated with a bachelor’s and a master’s in computer science. I then moved to Italy and got a PhD in information and communication technology. So how did the benchmarking activities at Digital Catapult start? Well, it all started with a project with a British company, BOC, the largest provider of industrial, medical and special gases in the UK and Ireland. They were interested in asset tracking solutions; they were interested in understanding batteries and understanding energy harvesters. And one of the questions they asked us to answer was: what is the actual energy consumption of an asset tracker that I might buy from the market, for a periodic reporting application that I might implement and run on the device? And how does this differ across the leading low power wide area network communication technologies? When they came to us, we said: okay, did you have a look at the datasheets? Did you talk with other experts? And this is what they said: yes, we talked with all the experts, the NB-IoT experts, the LoRaWAN experts, the Sigfox experts, and they were all telling us the same story, that whichever communication technology your device uses, with the standard battery the device comes with, it is going to have a lifetime of ten-plus years. And of course, all the scientists were sending them to read the datasheet, because the data about the power consumption is there: just apply a formula, and you get the battery lifetime.
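
That datasheet arithmetic is worth making concrete. Below is a minimal sketch of the calculation, with invented numbers rather than figures from any datasheet mentioned in the talk: weight the sleep and active currents by their duty cycle, then divide the battery capacity by the average draw.

```python
# A minimal sketch of the "just apply a formula" datasheet estimate.
# All numbers are illustrative assumptions, not measurements from the
# talk or from any specific device datasheet.

BATTERY_CAPACITY_MAH = 2400.0   # e.g. a typical lithium AA cell

SLEEP_CURRENT_MA = 0.005        # hypothetical deep-sleep current
ACTIVE_CURRENT_MA = 40.0        # hypothetical sample-and-transmit current
ACTIVE_SECONDS = 2.0            # active time per report
REPORT_PERIOD_SECONDS = 3600.0  # one report per hour

# Average current is the duty-cycle-weighted mean of the two states.
duty = ACTIVE_SECONDS / REPORT_PERIOD_SECONDS
avg_current_ma = ACTIVE_CURRENT_MA * duty + SLEEP_CURRENT_MA * (1.0 - duty)

lifetime_hours = BATTERY_CAPACITY_MAH / avg_current_ma
print(f"average current: {avg_current_ma:.4f} mA")
print(f"estimated lifetime: {lifetime_hours / (24 * 365):.1f} years")
```

With numbers like these, the formula happily predicts a roughly ten-year lifetime, which is exactly the kind of answer the experts were giving; the benchmarking question is whether real devices in real environments behave like those numbers.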

Ramona Marfievici
Well, when they asked us and when we talked with them, we quickly decided not to go to the datasheet and not to talk with the experts either (we are also experts in communication technology ourselves), but to run an experiment. What we did was buy from the market the most representative devices at the time, asset trackers using different communication technologies: the LoRaWAN technology they had mentioned, Sigfox, and NB-IoT. And we ran a benchmark measuring power consumption, using real devices in real environments, with a power analyzer. We ran experiments, measured the current consumption when running the same application on all the devices, immersing them in the same environment, and then derived the battery lifetime for those devices. And of course, one of the things we saw at the end of the experiment is that the lifetime was different from the one they had heard from the experts, or the one the datasheets were reporting. While we were running this project, other clients, from different companies, system integrators, SMEs, different startups, all kept coming to us asking different questions: okay, what performance am I going to get using a specific communication technology? Which asset tracker serves my application? Which geolocation solver should I use? Which antenna should I buy to have the best solution? All these kinds of questions. Although some early adopters were becoming more educated about technologies and their potential use cases, for us it was clear that something needed to be done in the IoT community to help them navigate IoT solutions and IoT suppliers. We even received a very angry email from somebody in the community, saying: you guys, I worked with a number of companies, I talked to a number of individuals, and they were all knowledgeable, but nobody could help me in the discussions about battery consumption and battery lifetime. So we thought: you know what, the IoT community needs a Which? Not a witch, but the consumer organisation Which? that you might know, because in the IoT consumer space, organisations such as Which? have started to feature product reviews for connected devices. However, the majority of IoT products and services are part of B2B solutions, and when we started the project there was no organisation providing impartial guidance to businesses to navigate the emerging IoT solution space. So we had the idea to start this Which?-style project, and the hypothesis underpinning our benchmarking activity was that an independent ground truth, established through rigorous testing of IoT devices and solutions with respect to real-world application requirements, can help increase the confidence of adopters when investing in IoT solutions and, at the same time, provide feedback to IoT suppliers on how to improve the quality of their products. What we were aiming for was to build standardised test procedures for benchmarking off-the-shelf IoT devices and solutions for common use cases and, at the end of the day, to establish a quantifiable gold standard of performance for the IoT ecosystem to aim for.

Ramona Marfievici
When we started working on this project, a lot of our clients and the companies we worked with were interested in asset tracking using LoRaWAN communication technology, and that was the first activity we focused on. So we went to the market and bought the nine or ten most representative asset trackers at that moment, and we started to benchmark them in terms of power consumption and in terms of communication performance. We did that in different environments, in dense urban and less dense urban environments, in London and in Guildford. And we even looked at them through other lenses than power consumption and communication reliability. We looked at different device characteristics: things like the form factor of the device, whether they are programmable and how, and how configurable they are. We looked at their GPS modules, because most of them also had a GPS module. We looked at the sensors available, and at other things like usability and IoT capabilities. This is what we did in terms of benchmarking the devices, but LoRaWAN asset tracking is not only about the device; it is also about the software that actually helps you locate the device. So we took a step further and also benchmarked the geolocation solvers, the software that solves the location problem. At that moment there were only four commercially available and research geolocation solvers, and we ran a couple of experiments in London to benchmark them. And it’s not only about running the experiments; it’s also about developing and implementing the whole infrastructure needed to run these experiments. So, with the help of the Metropolitan Police, in 2019 and 2020 we deployed an infrastructure of geolocation-capable LoRaWAN gateways in London. The testbed is still there and alive, so whoever wants to test geolocation solutions has the testbed available and can use it. Apart from the technical validation and benchmarking of the solutions, we also did a lot of market validation, which was done by the business development team at Digital Catapult at the time, interviewing large network and service providers and system integrators and presenting them our results and the idea of the benchmarking project. And of course, everybody found it extremely useful. Some of them acknowledged that they didn’t have the proper skills in house at that moment; others complained that there is no central repository where one can find this information about devices, no catalogues with measurements or other characteristics; that they themselves sometimes found the datasheets misleading; and that certification of these devices is not enough. So we started thinking about bespoke services around benchmarking IoT devices.
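
To give a flavour of what those geolocation solvers are solving, here is an illustrative sketch, my own simplified example rather than any of the four solvers tested: given rough distance estimates from several gateways at known positions, linearise the range equations and solve for the device position by least squares. Real LoRaWAN solvers typically work from time-difference-of-arrival or signal-strength measurements, but the underlying estimation problem looks like this.

```python
# An illustrative multilateration sketch (invented coordinates, not data
# from the London testbed): estimate a device position from per-gateway
# range estimates via linearised least squares.
import numpy as np

def multilaterate(gateways: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """gateways: (n, 2) known x/y positions in metres; distances: (n,) ranges."""
    x0, y0 = gateways[0]
    d0 = distances[0]
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving a linear system A @ p = b in the position p.
    A = 2.0 * (gateways[1:] - gateways[0])
    b = (
        np.sum(gateways[1:] ** 2, axis=1) - (x0**2 + y0**2)
        - (distances[1:] ** 2 - d0**2)
    )
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

gateways = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
true_position = np.array([420.0, 310.0])
distances = np.linalg.norm(gateways - true_position, axis=1)  # noise-free demo
print(multilaterate(gateways, distances))  # ~[420. 310.]
```

With noisy ranges the same least-squares step still applies; the differences lie in how each solver weights measurements, rejects outliers and fuses multiple uplinks, which is where the solvers we benchmarked could differ.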

Ramona Marfievici
While we were wrapping everything up and writing a report, we all faced a critical challenge, and yes, it is unfortunately still with us: COVID-19. It changed the way we were interacting and had an impact on every industry, with all the problems that arose at the beginning of 2020, in March and April. In an effort to coexist with the virus, companies were trying to understand whether they could actually bring people back to work, and bring them back to a safe and healthy environment. Social distancing was one of the factors that we could control in the outbreak at that moment, and one of the measures to contain the virus that governments across the globe had enforced. So several IoT companies that were developing real-time locating systems, everything connected to asset tracking, quickly repurposed some of their solutions in an effort to help manage social distancing and detect proximity violations in the workplace, through neighbour discovery, distance measurements and real-time alerting of users when they get too close to each other. They were doing this either through infrastructure-based or peer-to-peer solutions, like the ones you use with phones, but with dedicated devices. IoT companies on the market promised at that time to deliver effective solutions to enforce social distancing. Still, for the IoT community, making the right technology choice was particularly challenging, because these were emerging solutions in a still-immature market, where there was little usage evidence, no track record, and no previous learnings from which to make informed procurement decisions. So again, as a disclaimer, I am not talking about contact tracing, so do not switch to another channel. As I told you before, we were into benchmarking, into evaluating IoT devices. So we said: okay, let’s start evaluating the solutions that were emerging on the market for social distancing. And together with our collaborators from Imperial College and the IoT benchmarking consortium, we set up a challenge for social distancing solutions through which to benchmark how reliable they are in environments where people are asked, for example, not to come within two metres of one another. We were looking to benchmark solutions that can ensure that people working in a shared environment are not less than two metres from one another for more than a configurable time; remember that governments were still changing, at that time, the amount of time you were allowed to spend in somebody’s proximity, with different limits in different countries, so we wanted this timing to be configurable. We also wanted these solutions to send alerts immediately, either with a blinking LED, or a buzzer, or a vibration, to enable the enforcement of social distancing habits. And what did we do? Well, we started with a methodology, with testing scenarios, and with metrics. We defined three scenarios that are typical for indoor offices. The first one was a kitchen scenario, with people chatting or making a beverage. We said: okay, at the beginning the subjects stand at a distance of more than, let’s say, 4.5 metres apart from one another; we have a timekeeper; and a subject starts moving.
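
The behaviour asked of every solution can be captured in a few lines. The sketch below is my own illustration of that requirement, not any vendor’s implementation: an alert fires only once the estimated distance has stayed below a configurable threshold for a configurable dwell time.

```python
# A minimal sketch of the behaviour under test: alert when an estimated
# distance stays below a configurable threshold for longer than a
# configurable dwell time. Illustrative only; no vendor's actual logic.
from dataclasses import dataclass

@dataclass
class ProximityMonitor:
    distance_threshold_m: float = 2.0     # the "two metre" rule
    dwell_threshold_s: float = 15.0       # configurable time in proximity
    _violation_started_at: float | None = None

    def update(self, now_s: float, distance_m: float) -> bool:
        """Feed one ranging sample; return True if the alert should fire."""
        if distance_m >= self.distance_threshold_m:
            self._violation_started_at = None    # back outside the range
            return False
        if self._violation_started_at is None:
            self._violation_started_at = now_s   # violation just began
        return now_s - self._violation_started_at >= self.dwell_threshold_s

monitor = ProximityMonitor(dwell_threshold_s=5.0)
for t, d in [(0.0, 3.0), (1.0, 1.8), (3.0, 1.5), (6.0, 1.6), (7.0, 2.5)]:
    if monitor.update(t, d):
        print(f"alert at t={t}s (estimated distance {d} m)")  # fires at t=6.0
```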

Ramona Marfievici
We wanted to measure the time it takes from the point at which a subject breaks the two-metre distance to a static subject standing near the sink. This was one of the scenarios we tested. Remember, to be reliable means to be accurate: can you detect a person at two metres or less from you, and can you do it in a timely way? Precise distance measurement and timing were important for us, and of course detection must occur within well-defined time bounds, to ensure prompt user alerting and to correctly capture the effective contact detection. This is the kitchen at Digital Catapult where we ran our experiments: we measured the distances, marked each distance that we tested, and then ran the experiments. Another scenario we looked at was people walking along a corridor. Subjects walk down the corridor at a constant rate, move into a proximity violation, spend some time there, and then move out. We tested this at different walking speeds, and we did it in front of the elevators at Digital Catapult, in one of our largest corridors. Apart from this, we also looked at scenarios in which subjects were on either side of a wall. Why did we do this? Because some companies came to us and said: hey, I want to use these social distancing solutions in my shop, for example, but if two people are in different aisles, I don’t want them to be reported as being in contact, because there is actually a huge wall between them. So we said: okay, we have to investigate whether these solutions are able to understand that there is an obstacle between the users and not raise an alarm. We ran scenarios in which one user sat at a table, with a wall between that user and another user who was moving towards the wall. Of course, we were after understanding how many false positives were raised during these scenarios. This is the wall, and this is the auditorium at Digital Catapult where we ran the experiments; you can see one of our colleagues running an experiment, and I can assure you that there was somebody behind this wooden wall. That was the wooden wall we tested with, but the environments where our clients were using these social distancing solutions also had something else: thick glass walls. Imagine that somebody is in a phone booth in a company, taking a phone call, and somebody else passes by that phone booth, or stops in front of it to have a conversation with a colleague. We were interested in whether these two are reported as being in contact or not; a proximity violation should not be reported, because they were actually in different rooms, even if the distance between them was less than two metres. So we ran this kind of experiment at Digital Catapult too, on one of our floors where we have phone booths with very thick glass doors, with one of the participants walking along the corridor. This is what we put in place for the reliability measurements. As I said, we also set out to measure the power and current consumption of these devices, and we did this in two scenarios.
The devices were connected to powerful power analyzers in our Future Networks Lab at Digital Catapult. We started with two devices outside the proximity violation range, so the distance between the devices was more than two metres, and then we brought them inside the proximity violation range, measuring the current consumption of the devices throughout.
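
Those scenarios boil down to two numbers per device: how quickly an alert follows a genuine violation, and how often a wall scenario wrongly triggers one. A sketch of both metrics, with made-up event logs rather than benchmark data:

```python
# Reliability metrics for the scenarios above; the inputs are invented
# examples, not results from the benchmark.
def detection_latency_s(crossed_at_s: float, alerted_at_s: float | None) -> float | None:
    """Seconds between breaking the 2 m distance and the device alerting."""
    return None if alerted_at_s is None else alerted_at_s - crossed_at_s

def false_positive_rate(alerts_behind_wall: int, wall_passes: int) -> float:
    """Fraction of behind-the-wall passes that wrongly raised an alert."""
    return alerts_behind_wall / wall_passes

# Kitchen scenario: the timekeeper logs the 2 m crossing at t=12.0 s,
# and the device alerts at t=12.8 s.
print(detection_latency_s(12.0, 12.8))      # 0.8 s
# Wall scenario: 3 alerts raised over 20 passes behind the wooden wall.
print(f"{false_positive_rate(3, 20):.0%}")  # 15%
```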

Ramona Marfievici
Of course, these two metrics are nice: reliability, meaning how reliable a solution is at detecting two-metre proximity violations, and current consumption, which informs the battery lifetime of these devices. But that is not the only thing the end user is interested in. Users also care about other things: how configurable the device is, how easy it is to use, and what its form factor is, because you don’t want to hang a device of 300-something grammes around your neck. Are the platforms and the devices suitable for social distancing? Are they noticeable or unnoticeable? Are they comfortable? So we looked through all these lenses and tried to analyse and benchmark the devices. What we did was invite some of the most innovative companies in the IoT space, from different countries, to take part in our exciting social distancing tech experiment, which took place in London at the end of August 2020. We benchmarked all these solutions, all built to maintain social distancing and detect proximity violations, using precise scientific methodologies to identify how the devices work and to look at their design and quality. I’m not going to go into a lot of detail about the results; just let me tell you that all these social distancing solutions work the same way. First, they try to discover all their neighbours: who is here in the same room with me, for example? Then, once they have discovered their neighbours, they try to range with them, that is, to work out the distance between themselves and each neighbour in the neighbourhood. And they do this using different communication protocols. I’m not going to go into detail: you have heard about Bluetooth, which is the protocol your phone also uses for social distancing, but other technologies were used as well, like ultra-wideband, which has now been brought back to life by Apple and other companies, or a combination of the two technologies. Essentially, all these technologies work either in the 2.4 GHz band, the same as your Wi-Fi devices, or, in the case of ultra-wideband, in another communication band. It is well known that the energy consumption of a device using ultra-wideband is much higher than that of one using Bluetooth, and it is also well known in the community that the distance estimation error of Bluetooth is much, much bigger than that of ultra-wideband. So these were the contestants, and this is just a glimpse of what was happening at Digital Catapult during the days of the tests; these are the members of the team that helped with the experimentation. I’m just going to show some results, not all of them. For example, in the scenario in which people were chatting or making a beverage, all the solutions we tested were able to detect the proximity violations reliably. Now you will say: okay, Ramona, so they were all fine; what’s the difference between them? Well, the difference between them, as you might expect, is how fast they do this. I can tell you that they were all capable of detecting a proximity violation in six seconds or less. Some of them did it very, very fast: two of the devices we tested could do it in less than one second, so the moment you broke the violation distance, they raised the alarm.
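
The gap between Bluetooth and ultra-wideband ranging error is easy to see on paper. The sketch below assumes the usual log-distance path-loss model used to turn BLE signal strength into distance, with illustrative constants, and shows how a few decibels of fading swing the estimate; ultra-wideband instead measures time of flight, so its error is on the order of centimetres.

```python
# Why BLE ranging error dwarfs UWB's: BLE distance is typically inferred
# from RSSI via a log-distance path-loss model, so ordinary fading moves
# the estimate a lot. Constants are illustrative assumptions, not values
# from the benchmarked devices.
import math

TX_POWER_DBM_AT_1M = -59.0  # assumed calibrated RSSI at 1 m
PATH_LOSS_EXPONENT = 2.0    # ~2 in free space, higher indoors

def ble_distance_from_rssi(rssi_dbm: float) -> float:
    """Invert the model: RSSI = P_1m - 10 * n * log10(d)."""
    return 10 ** ((TX_POWER_DBM_AT_1M - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

rssi_at_2m = TX_POWER_DBM_AT_1M - 10 * PATH_LOSS_EXPONENT * math.log10(2.0)
for fading_db in (-6, -3, 0, 3, 6):  # multipath / body shadowing
    d = ble_distance_from_rssi(rssi_at_2m + fading_db)
    print(f"{fading_db:+d} dB fading -> estimated {d:.2f} m")
# A true 2 m separation is estimated anywhere between ~1 m and ~4 m here,
# which is why RSSI-only devices struggle with a hard 2 m threshold.
```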

Ramona Marfievici
That was the reliability. Then what did we do? We opened all the devices, of course, in order to measure their power consumption, and we looked at what was happening with the devices when there was no proximity violation and when there were proximity violations. I’m not going to explain the plots; I put them all here just to show you how differently all these devices behave, even when using the same communication protocols, because reliability, again, does not tell the whole story. Discovering your neighbours and ranging with your neighbours is energy-expensive, and there is always a trade-off between reliability and current consumption. Different implementations, even on the same radio and the same chips, consume different amounts of power. This is just one of the experiments we ran. And because of the different implementations, looking at each device’s default battery capacity and considering the same solution running with just six proximity violations per hour, we computed the lifetime of each device; of course, they have different lifetimes, because of the different implementations of the proximity violation detection. We covered reliability and power consumption; now we move to things that are perhaps more tangible. We compared the devices in terms of some must-have features. We looked at whether they have a battery indicator, because it is important for the end user to know when the battery is running low; whether you can configure different distances for proximity violations; and whether they have different types of alarms: a buzzer, vibration, LEDs, all these kinds of things. We cared about the documentation available for each device, and we looked at the different configurations and indicators each might have. Of course, we checked whether they come with a warranty, whether the companies producing these devices provide troubleshooting for them, and whether they come with a mobile application the end user can use to configure the device: whether one can configure the volume of the alarms, because some of the alarms were annoying and maybe you want to turn the volume down a bit, and whether the devices are more intelligent, so that you don’t have to push a button to activate them; instead they have an accelerometer, and when you start moving they understand the device is in use and start the radio and the Bluetooth module to find the neighbours. We also looked at the devices in terms of battery capacity and claimed battery life, at the IP rating, because some of the companies work under extreme conditions, for example, and were interested in the IP rating of the device, at operating temperatures, and at whether they can be mounted in different places, around the neck or on the arm. So we characterised them through all these lenses. And while we were running these experiments, we had visitors come to Digital Catapult, and they spent the day with us.
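
Returning for a moment to the lifetime computation mentioned above: it is again a duty-cycle average, this time weighting the measured “in violation” current against the idle discovery-and-ranging current by the assumed six violations per hour. A sketch with invented figures:

```python
# Lifetime estimate from two measured operating currents, weighted by an
# assumed six 30-second proximity violations per hour. All figures are
# invented for illustration, not results from the benchmark.
BATTERY_CAPACITY_MAH = 1000.0

IDLE_CURRENT_MA = 1.2        # neighbour discovery/ranging, no violation
VIOLATION_CURRENT_MA = 25.0  # fast ranging plus driving buzzer/LED

VIOLATIONS_PER_HOUR = 6
VIOLATION_DURATION_S = 30.0

violation_fraction = VIOLATIONS_PER_HOUR * VIOLATION_DURATION_S / 3600.0
avg_current_ma = (
    VIOLATION_CURRENT_MA * violation_fraction
    + IDLE_CURRENT_MA * (1.0 - violation_fraction)
)
lifetime_h = BATTERY_CAPACITY_MAH / avg_current_ma
print(f"average current {avg_current_ma:.2f} mA -> ~{lifetime_h / 24:.1f} days")
```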

Ramona Marfievici
One thing they have in common, as you may have noticed, is that all of them were designed before the pandemic, with other users in mind, and are now being quickly adjusted and marketed as something to help in the fight against the spread of COVID-19. So there needs to be a way to test these products, to see if they’ve been repurposed effectively. And that’s why we’re here today at a research event, where we’re looking at these distance-sensing devices to see how well they alert you when you’re within someone’s two-metre proximity. They’re doing that by running through three different scenarios. The first is the chat in the kitchen, where two people move closer to someone standing by the sink, to see how quickly the devices go off once they move within two metres. The second is the walk down the corridor, where two people walk past each other; this is done at different speeds, to see if the devices can pick up such a quick passing-by. And the third is close, but not close enough, to see if the devices can tell that a wall separates two people in close range, and so won’t go off. Now, these accessories use one of three different kinds of radio waves to detect one another: the well-known Bluetooth; the lesser-known ultra-wideband, which is used by many tracking devices and is now also appearing in smartphones; or a combination of both, to maximise range. The Bluetooth devices didn’t fare well and failed all the tests; Bluetooth isn’t very accurate if the devices are obscured. The ultra-wideband ones performed very well, as their accuracy is roughly five to ten centimetres, and the combo products did well too. But all of them failed the wall scenario, and that’s not good enough. There is a lot of hype around IoT, and the entry barrier is really low, so you’re going to find a lot of devices pretending to do things that they don’t actually deliver. Proper evaluation of devices and solutions is important.

Ramona Marfievici
Many companies are bringing their solutions to market. So, that was it: they basically spent one day with us while we were running the experiments. And I’m going to end my talk by saying that all this work was not a one-man show. This was the work of a team involving engineers, innovation managers and project managers, and we continue our benchmarking at Digital Catapult; if you’re interested in our activities, please contact me via email. So, instead of conclusions: as you’ve seen from the benchmarking activities that we ran, some solutions are extremely reliable, and some perform better in terms of power consumption and configurability. What we believe is that establishing a comparison between like-for-like products and solutions, and how they perform under specific use case scenarios, provides valuable insights for both demand-side and supply-side stakeholders. Of course, carrying out IoT benchmarking activities requires significant technology expertise, trusted and transparent processes, and the right facilities. It also requires market independence and vendor neutrality, to earn the right trust from the market. And I think the IoT team at Digital Catapult is at the point where it can do this.

Katie White
Thank you, Ramona, for your talk. It was really interesting to hear about the benchmarking project that you ran at Digital Catapult, especially at such a relevant time of COVID and social distancing. So thank you for that. We’re on to a Q&A now, and we have a question here: now that you’ve finished the benchmarking campaign, what is next for yourself and Digital Catapult?

Ramona Marfievici
The IoT team is now working on building an advanced digital capability platform, on which we are trying to bring services from different vendors and orchestrate them together in order to generate end-to-end IoT solutions and run proofs of concept. And we are going to run benchmarking activities on all the services we bring onto the platform before enrolling them there. So the work continues; we have just moved to a different level of the stack. And we continue working on our benchmarking across the projects that we have, so through our UKRI research projects and European projects we are still doing benchmarking activities, and on the Future Networks Lab accelerator programmes we still do benchmarking activities for the companies enrolled in the programmes.

Katie White
So exciting, and a few other exciting projects in the pipeline, so we’ll look forward to hearing more about those. And we also have another question: what is the biggest piece of advice you have for women wanting to start a career path similar to yours?

Ramona Marfievici
How does that saying go? When the going gets tough, the tough get going. Yeah, just do it.

Katie White
Yeah, yeah, yes. And I guess that’s it.

Ramona Marfievici
It happens to all of us. We all started at the same point; you just need to follow what you’re interested in.

Katie White
And I think that’s a perfect note to end this session on. Before we finish, I know we have another speaker after us that you’d like to introduce.

Ramona Marfievici
Yes, so the next talk is from Tejumade Afonja. She’s Co-Founder of AI Saturdays, and she’s going to talk about AI innovation: learning Nigerian accent embeddings from speech data. Stay tuned.

Katie White
Right. Thank you so much Ramona.

Thank you to our sponsors

The IoT Podcast Team

The IoT Podcast is powered by Paratus People, a leading organisation in IoT Talent Solutions.

Innovation is at the heart of IoT, and it is our passion to explore and learn more about this fast-paced and transforming sector.
