Recent advances in artificial intelligence, machine learning, computer vision, wearable sensors and smartphone technologies have enabled systems that monitor, analyse and assess food intake in terms of energy and nutrient content. To empower diabetic patients, the Diabetes Technology Research laboratory of the ARTORG Center at the University of Bern (Switzerland) has developed GoCARB, a smartphone app capable of translating food images into carbohydrate estimates.

How does it work? The user places a credit-card-sized reference object next to the dish and takes two photographs from different points of view. One of the photos is used to automatically detect, segment and recognise the food items on the plate; semi-automatic tools are also provided for correcting the results if needed. Both photos, together with the reference card, are then used to build a 3D model of each food item, from which its volume is computed. Given the volume and the food type, the carbohydrate content is calculated using nutrient databases. In-house clinical studies have shown that this approach is superior to the diabetic patient's own carbohydrate estimates, and that glucose control is correspondingly more precise.

But how close are we to real-world systems? Will artificial intelligence improve dietary assessment? What is my experience within and beyond the GoCARB project?
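The final step of the pipeline described above (volume plus food type, looked up in a nutrient database, yields carbohydrate content) can be sketched in a few lines. This is a minimal illustration, not the GoCARB implementation; the food names, densities and carbohydrate values below are invented placeholders, not real nutrient-database entries.

```python
# Illustrative sketch of the volume-to-carbohydrate step.
# Hypothetical per-food entries: density (g/cm^3) and carbs per 100 g.
NUTRIENT_DB = {
    "pasta": {"density": 0.55, "carbs_per_100g": 25.0},
    "rice":  {"density": 0.80, "carbs_per_100g": 28.0},
}

def carb_content(food: str, volume_cm3: float) -> float:
    """Estimate grams of carbohydrate from food type and estimated volume."""
    entry = NUTRIENT_DB[food]
    mass_g = volume_cm3 * entry["density"]           # volume -> mass
    return mass_g * entry["carbs_per_100g"] / 100.0  # mass -> carbohydrate

# A segmented plate: recognised food items with their estimated volumes.
plate = {"pasta": 300.0, "rice": 150.0}
total_carbs = sum(carb_content(food, vol) for food, vol in plate.items())
print(f"Estimated carbohydrate content: {total_carbs:.1f} g")
```

In the real system the volumes come from the 3D reconstruction and the nutrient values from curated databases; the structure of the calculation, however, is essentially this lookup and conversion.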
27 Sep 2018 - 29 Sep 2018