The document proposes an automatic 3D vision-based dietary inspection system for central kitchen automation. The system uses computer vision techniques to detect and locate meal boxes on a conveyor. It then identifies food ingredients in each meal using color, texture, and depth features. Food categories and quantities are determined and compared to dietary requirements. Experimental results showed an inspection accuracy of 80%, an inspection time of 1200 ms per meal, and a food quantity accuracy of 90%. The system is expected to increase meal supply capacity by over 50% and help dieticians by automating dietary inspections.
Automatic meal inspection system using LBP-HF feature for central kitchens (sipij)
This paper proposes an intelligent, automatic meal inspection system for central kitchen automation. Diets designed for patients must be personalized, for example with low sodium intake or specific required foods, so the proposed system can benefit an inspection process that is usually performed manually. In the proposed system, the meal box is first detected and located automatically with a vision-based method, and all food ingredients are then identified using color and LBP-HF texture features. Second, the quantity of each food ingredient is estimated using image depth information. Experimental results show that meal inspection accuracy can approach 80%, inspection time can reach 1200 ms per meal, and food quantity accuracy is about 90%. The proposed system is expected to increase meal supply capacity by over 50% and to save hospital dieticians time in the diet inspection process.
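The LBP texture step that underlies the LBP-HF feature above can be sketched directly. This is a minimal, hypothetical version of plain 8-neighbour LBP; the paper's LBP-HF additionally takes DFT magnitudes over the rotation set of uniform-pattern histograms to gain rotation invariance.

```python
# Minimal 8-neighbour Local Binary Pattern (LBP) texture descriptor.
import numpy as np

def lbp_codes(img):
    """8-bit LBP code for each interior pixel of a 2-D grayscale array."""
    c = img[1:-1, 1:-1]                      # centre pixels
    # 8 neighbours, clockwise from the top-left
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        codes |= (nb >= c).astype(int) << bit  # set bit if neighbour >= centre
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram used as a texture feature vector."""
    h, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)

img = np.array([[10, 20, 10, 30],
                [20, 15, 25, 10],
                [10, 25,  5, 20],
                [30, 10, 20, 10]], dtype=float)
feat = lbp_histogram(img)   # fixed-length texture feature for a patch
```

In practice such histograms would be computed per food region and fed, together with color features, to a classifier.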
In this paper, we propose a simple approach for identification and classification of high-calorie snacks for dietary assessment using machine learning. As an object detection technique, we use a point-feature matching algorithm to identify the object of interest in a cluttered scene. After detecting the object, a Bag of Features (BoF) model is created by extracting Speeded-Up Robust Features (SURF). This BoF model is used to recognize and classify snack items of different categories. We used three types of snack images for the experiments: ice cream, chips, and chocolate. Based on the experimental results, the proposed algorithm can detect and classify different types of snacks with around 85% accuracy.
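The BoF pipeline described above (extract local descriptors, cluster them into a visual vocabulary, then histogram word assignments per image) can be sketched as follows. Random 2-D vectors stand in for the SURF descriptors, and the tiny k-means is only a stand-in for a production clusterer; none of the numbers come from the paper.

```python
# Bag-of-Features sketch: vocabulary by k-means, image = word histogram.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Tiny Lloyd's k-means; returns the cluster centres (the vocabulary)."""
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def bof_histogram(descriptors, vocab):
    """L1-normalised histogram of nearest-visual-word assignments."""
    d = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = d.argmin(axis=1)
    h = np.bincount(words, minlength=len(vocab)).astype(float)
    return h / h.sum()

train_desc = rng.normal(size=(200, 2))      # pooled training descriptors
vocab = kmeans(train_desc, k=8)             # visual vocabulary
image_desc = rng.normal(size=(40, 2))       # descriptors from one image
feature = bof_histogram(image_desc, vocab)  # fixed-length image feature
```

The fixed-length `feature` vector is what would then be fed to a classifier, regardless of how many descriptors each image produced.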
Effect of dietary fibers from mango peels and date seeds on physicochemical p... (IJMREMJournal)
The present study evaluates the effects of dietary fibers from mango peels (MP) and date seeds (DS) on the quality of Arabic bread (AB). MP was added at two levels (2% and 4%) and DS at 4% and 6%, based on flour weight. Results showed that DS is a good source of dietary fiber compared to MP, and that MP at different levels improved the overall quality of AB. An adaptive neuro-fuzzy inference system (ANFIS) is used to study the properties of AB, with the proportions of mango peel (M) and date seed (D) as inputs and two output properties (crust color CC and crumb texture CT). Experimental validation runs were conducted to compare measured and predicted values. The comparison shows that this neuro-modeling technique (i.e., ANFIS) achieved a satisfactory prediction accuracy of about 85%.
Classification of Mango Fruit Varieties using Naive Bayes Algorithm (ijtsrd)
Mangoes are an important agricultural commodity in the global market for fresh products. In Myanmar, the mango variety called SeinTaLone has the best taste and is the most popular. Another variety, MaSawYin, does not taste as good but is visually similar to SeinTaLone, so some people find the two difficult to tell apart. A reliable technique is therefore needed to discriminate varieties rapidly and non-destructively. The main objective of this research was to classify the varieties of mango fruit grown in Myanmar using the Naive Bayes algorithm. The methodology involved image acquisition, pre-processing and segmentation, feature extraction, and classification of mango varieties. The RGB image was first converted to an HSV image. Then, using edge detection and morphological operations, the region of interest was segmented by considering only the hue component of the HSV image. A total of 4 shape features and 13 texture features were then extracted and given as inputs to a Naive Bayesian classifier to classify the test images. The data set had 50 mango images per variety for training and 20 images per variety for testing. Ohnmar Win, "Classification of Mango Fruit Varieties using Naive Bayes Algorithm", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26677.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/26677/classification-of-mango-fruit-varieties-using-naive-bayes-algorithm/ohnmar-win
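The two core steps of that pipeline, RGB-to-HSV conversion and a Gaussian Naive Bayes decision on a feature, can be sketched with the standard library. The hue samples below are invented placeholders for two hypothetical varieties, not measurements of real SeinTaLone or MaSawYin fruit.

```python
# Hue extraction + single-feature Gaussian Naive Bayes, stdlib only.
import colorsys
import math

def hue(r, g, b):
    """Hue in [0, 1) from 8-bit RGB values."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def train(samples):
    """Per-class mean and variance of one feature (here, hue)."""
    model = {}
    for label, values in samples.items():
        m = sum(values) / len(values)
        v = sum((x - m) ** 2 for x in values) / len(values) + 1e-6  # smoothed
        model[label] = (m, v)
    return model

def classify(model, x):
    """Pick the class whose Gaussian gives x the highest likelihood."""
    return max(model, key=lambda c: gaussian_pdf(x, *model[c]))

# invented hue samples for two hypothetical mango varieties
samples = {"A": [0.10, 0.12, 0.11], "B": [0.20, 0.22, 0.21]}
model = train(samples)
pred = classify(model, hue(230, 180, 60))   # a yellow-orange pixel
```

The real system uses 17 features (4 shape, 13 texture) rather than a single hue value, but the per-class Gaussian likelihood comparison is the same.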
India is the second largest producer of wheat in the world after China. Specifying wheat quality manually requires expert judgment and is time-consuming. To overcome this problem, machine learning algorithms can be used to classify wheat according to its quality. In this paper, wheat classification is performed using two machine learning algorithms: a Support Vector Machine (OVR) and a Neural Network (LM). For classification, images of wheat grain are captured with a digital camera and thresholding is performed. Features are then extracted from these images and the machine learning algorithms are applied. The accuracy of the Support Vector Machine is 86.8% and that of the Neural Network is 94.5%, showing that the Neural Network (LM) outperforms the Support Vector Machine (OVR).
The paper presents a solution for quality evaluation and grading of Gujarat 17 rice using image processing and soft computing techniques. It addresses a basic problem of the rice industry: quality assessment, which is traditionally done manually by a human inspector. The proposed solution provides an automated, non-destructive, and cost-effective alternative. The method, based on image processing and a multi-layer feed-forward neural network, achieves a higher degree of accuracy than human visual inspection. An algorithm based on morphological features is developed to count rice seeds, distinguishing long seeds from small seeds, and a trained multi-layer feed-forward neural network classifier identifies the quality of unknown rice seeds.
Classification of Macronutrient Deficiencies in Maize Plant Using Machine Lea... (IJECEIAES)
This paper aims to classify macronutrient deficiencies in maize plants using machine learning techniques. Two feature extraction methods are used to generate two feature sets from images of healthy and deficient maize leaves. Various machine learning classifiers including artificial neural networks, support vector machines, k-nearest neighbors, and deep networks with autoencoders are applied to the two feature sets and their classification accuracies are compared. The results show that deep networks with autoencoders achieved the highest accuracy of 100% for one feature set, while k-nearest neighbors performed best for the other feature set. This study demonstrates the effectiveness of machine learning approaches for nutrient deficiency classification in plants.
IRJET- Automatic Fruit Quality Detection System (IRJET Journal)
This document presents an automatic fruit quality detection system that uses computer vision and image processing techniques. The system captures images of fruits on a conveyor belt using a camera. It then performs image processing on the images to analyze features like color, size, and texture. It can detect defects in fruits based on pixel analysis of the images. The fruits are then sorted based on color and graded based on size. The system aims to automate and improve the efficiency of the fruit sorting and grading process compared to manual methods. It analyzes the images, detects quality factors, and controls hardware like the conveyor belt based on the analysis results.
This document summarizes several research papers on vision-based food analysis systems. It discusses papers on using deep learning techniques like Mask R-CNN and convolutional neural networks to identify and estimate nutrition from food images. Some key applications discussed are food calorie estimation, generating virtual food images to enhance experience, developing large datasets to aid food recognition, analyzing food trays with multiple items, identifying food waste, and mobile apps for recommending recipes from leftovers or identifying halal foods. The document concludes that reviewing these papers provided knowledge on problems and solutions for developing a vision-based food analysis system and highlighted the importance of this topic.
Report on "Food recognition system using BOF model" (madhuripallod)
This document is a seminar report submitted by Miss. Madhuri J.Pallod to her college on the topic of "A Food Recognition System Based On Bag-of-Feature Model". It describes research conducted on developing an image recognition system to classify food images using the bag-of-features model. The system is evaluated on a dataset of nearly 5000 food images across 11 classes. Key aspects investigated include feature extraction methods, visual dictionary learning techniques, and machine learning classifiers. The optimized system achieved a classification accuracy of 78% using dense SIFT features, hierarchical k-means clustering, and a linear SVM classifier.
Food Cuisine Analysis using Image Processing and Machine Learning (IRJET Journal)
This document discusses a proposed system for analyzing food cuisine using image processing and machine learning. The system aims to identify the cuisine of a dish by taking either an image of the food or a list of ingredients as input. It would then output the predicted cuisine, along with relevant recipe and nutritional information. The proposed approach aims to overcome limitations of prior work, such as low accuracy and inability to handle both image and ingredient-based inputs. It will utilize a large, detailed dataset and trained machine learning models like KNN to make highly accurate cuisine predictions. The system is intended to benefit food delivery apps and other services by providing customized recommendations to users based on analyzed cuisine preferences.
IRJET- A Food Recognition System for Diabetic Patients based on an Optimized ... (IRJET Journal)
This document presents a food recognition system for diabetic patients based on an optimized bag-of-features model. The system extracts dense local features from food images using scale-invariant feature transform on HSV color space. It then builds a visual vocabulary of 10,000 visual words using k-means clustering and classifies the food descriptions with a linear support vector machine classifier. The optimized system achieved a classification accuracy of around 78% on a dataset of over 5,000 food images belonging to 11 classes, demonstrating the feasibility of using a bag-of-features approach for automated food recognition to help with carbohydrate counting for diabetic patients.
RECIPE GENERATION FROM FOOD IMAGES USING DEEP LEARNING (IRJET Journal)
The document describes a system that can generate cooking recipes from food images using deep learning techniques. It proposes a novel approach that uses CNN, LSTM, and bidirectional LSTM models to extract embeddings of cooking instructions from food images. The system is able to overcome challenges like varying instruction lengths and multiple food items in an image. It was tested on an Indian cuisine dataset and showed potential for applications in information retrieval and automatic recipe recommendation.
WEB BASED NUTRITION AND DIET ASSISTANCE USING MACHINE LEARNING (IRJET Journal)
The document describes a proposed web-based nutrition and diet assistance system using machine learning. The system aims to accurately identify foods from images using a convolutional neural network (CNN) model, calculate the nutritional content of the foods, determine the user's BMI, and provide personalized diet recommendations. The system uses techniques like pre-processing, region proposal networks, feature extraction, and CNNs to classify foods, retrieve nutritional data, and suggest diets tailored for the user's BMI and food choices. The goal is to help users conveniently track their nutrition intake and maintain a balanced diet for better health outcomes.
Food Classification and Recommendation for Health and Diet Tracking using Mac... (IRJET Journal)
This document discusses a study that uses machine learning to classify food images and recommend foods for health and diet tracking. The researchers collected 1000 food images from various sources to create a dataset with 12 food classes. A convolutional neural network (CNN) model was trained on the dataset and achieved an average accuracy of 86.33% for classifying images into the correct food category. The model can also provide dietary recommendations to help people, such as diabetics, track their health goals. The researchers conclude that combining food classification with recommendation using deep learning is useful for applications like automatic food labeling and dietary assessment.
ResSeg: Residual encoder-decoder convolutional neural network for food segmen... (IJECEIAES)
This paper presents the implementation and evaluation of different convolutional neural network architectures for food segmentation. The task covers the recognition of six categories: the main food groups (protein, grains, fruit, and vegetables) plus two additional groups, rice and drink or juice. To make recognition more challenging, the networks are also tested on dishes at different stages of consumption, from serving to finishing, to verify their ability to detect when no food remains on the plate. Finally, the two best-performing networks are compared: a SegNet with a VGG-16 architecture and a network proposed in this work, the Residual Segmentation Convolutional Neural Network (ResSeg), which achieved accuracies greater than 90% and intersection-over-union greater than 75%. This demonstrates the suitability not only of SegNet architectures for food segmentation but also of residual layers for improving segmentation contours and handling dishes with complex food distributions or partially eaten food, opening the field to applications such as feeding assistants, automated restaurants, and dietary control of the amount of food consumed.
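The intersection-over-union metric reported above is computed per class from predicted and ground-truth label masks; a minimal sketch on toy 2-D masks (the example masks are invented):

```python
# Per-class IoU between two integer segmentation label masks.
import numpy as np

def iou(pred, truth, cls):
    """IoU of one class: |pred ∩ truth| / |pred ∪ truth| for that label."""
    p = (pred == cls)
    t = (truth == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else 1.0   # empty class counts as perfect

pred  = np.array([[1, 1, 0],
                  [1, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0]])
score = iou(pred, truth, cls=1)   # 2 overlapping pixels / 3 in the union
```

A mean over the six food classes would give the paper's aggregate IoU figure.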
Quality Analysis and Classification of Rice Grains using Image Processing Tec... (IRJET Journal)
This document presents a study that analyzes and classifies rice grains using image processing techniques. The study develops image processing algorithms to segment and identify rice grains in images. By measuring the size (length and breadth) of rice grains through edge detection and a caliper, the algorithms can efficiently analyze grain quality and classify grains. The algorithms generalize classifications across diverse rice varieties by also extracting Fourier features from grain images. The proposed methodology involves image pre-processing, morphological operations, edge detection, measurement, and classification to accurately quantify and categorize rice seeds based on size and shape attributes. It aims to provide an alternative approach to rice quality analysis that is more efficient and cost-effective, and that reduces reliance on manual inspection.
This document presents a framework for automatically generating cooking recipes from food images. The framework first predicts the list of ingredients present in an image using neural networks, then generates step-by-step cooking instructions by considering both the image features and the predicted ingredients simultaneously, using transformers to produce sequential instructions conditioned on image and ingredient embeddings. Trained and evaluated on a large dataset of over 1 million recipes, the framework outperforms previous baselines, accurately predicting ingredients and generating coherent, convincing recipes directly from images.
A MACHINE LEARNING METHOD FOR PREDICTION OF YOGURT QUALITY AND CONSUMERS PREF... (mlaij)
Prediction of quality and consumers' preferences is an essential task for food producers seeking to improve their market share and close any gaps in food safety standards. In this paper, we develop a machine learning method to predict yogurt preferences based on sensory attributes and analysis of sample images, using image processing texture and color feature extraction techniques. We compare three unsupervised ML feature selection techniques (Principal Component Analysis, Independent Component Analysis, and t-distributed Stochastic Neighbour Embedding) with one supervised ML feature selection technique (Linear Discriminant Analysis) in terms of classification accuracy. Results show that the supervised ML feature selection technique outperforms the traditional feature selection techniques.
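One of the unsupervised techniques compared above, Principal Component Analysis, can be sketched directly with numpy: project the features onto the leading eigenvectors of their covariance matrix. Random data stands in for the paper's texture and color features.

```python
# PCA by eigendecomposition of the covariance matrix.
import numpy as np

def pca(X, n_components):
    """Project X onto its n_components leading principal axes."""
    Xc = X - X.mean(axis=0)                      # centre the features
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :n_components]     # leading components first
    return Xc @ top

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                     # 50 samples, 4 features
Z = pca(X, n_components=2)                       # reduced feature set
```

LDA, the supervised alternative, differs in that it uses class labels to choose directions that separate classes rather than directions of maximum variance.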
This document describes a proposed hybrid technique for automatic medical image classification and retrieval using information retrieval, support vector machines, and particle swarm optimization. Key aspects of the proposed approach include extracting low-level visual features from images like color, texture, shape and integrating them with semantic metadata. A content analysis system analyzes image descriptors and assigns semantic labels. Images are indexed and classified during a training phase. The proposed system aims to reduce the semantic gap between low-level features and high-level semantics by combining content-based image retrieval with text-based retrieval and machine learning algorithms.
Semi-automatic model to colony forming units counting (IJECEIAES)
Colony forming unit counting is a conventional process carried out in bacteriological laboratories to follow the behavior of bacteria under different conditions. Various automatic and semi-automatic systems for counting colony forming units currently exist, but many laboratories still count manually, which consumes considerable time and effort from researchers and laboratory employees. This paper presents a mathematical model that segments the colony forming units and thereby counts them from a digital image of the sample. The method uses the color space information of selected points in the image and behaves well, relative to manual counting, for images with either many or few colony forming units. The results show efficiencies close to 98% with MacConkey agar.
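The counting step can be sketched as connected-component labelling of a binary mask produced by color segmentation: each 4-connected blob of foreground pixels is one colony candidate. The mask below is a toy example, not real segmentation output.

```python
# Count 4-connected foreground blobs in a binary mask (iterative flood fill).
def count_components(mask):
    """Number of 4-connected regions of 1s in a binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                    # found a new blob
                stack = [(r, c)]
                while stack:                  # flood-fill the whole blob
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y+1, x), (y-1, x), (y, x+1), (y, x-1)])
    return count

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
colonies = count_components(mask)   # three separate blobs
```

A real pipeline would first threshold the image in a suitable color space to obtain `mask`, and might split touching colonies with further processing.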
Automatic food bio-hazard detection system (IJECEIAES)
This paper presents the design of a convolutional neural network architecture for detecting food waste and generating a low-, medium-, or critical-level alarm. An architecture based on four convolution layers is used, trained on a database of 100 samples with the different hyperparameters that make up the final architecture. Confusion matrix analysis shows 100% network performance; the network's output feeds a fuzzy system that, depending on the duration of the detection, generates the different alarm levels associated with the risk.
IRJET- Estimation of Calorie Content in a Diet (IRJET Journal)
The document presents a machine learning model to estimate calorie content in foods by analyzing images. A convolutional neural network detects the food item and its geometric shape is used to calculate volume. A support vector machine model trained on food images achieves 94% accuracy in classification. The volume and density information are then used to compute the calorie content. Experimental results show the estimated calorie contents match well with actual values, demonstrating the accuracy of the proposed model. The authors suggest future work could expand the model to estimate calories in more food types and reduce errors in computation.
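The final computation described above, volume from a detected geometric shape, then mass from density, then calories, can be sketched in a few lines. The density and calorie tables here are illustrative assumptions, not the paper's values.

```python
# Calories = volume (from shape) x density (g/cm^3) x kcal per gram.
import math

# assumed lookup tables (kcal per gram and g/cm^3); illustrative only
CAL_PER_GRAM = {"apple": 0.52, "banana": 0.89}
DENSITY = {"apple": 0.77, "banana": 0.94}

def sphere_volume(radius_cm):
    """Volume for foods approximated as spheres (e.g. an apple)."""
    return 4 / 3 * math.pi * radius_cm ** 3

def calories(food, volume_cm3):
    """kcal estimate from segmented volume via density and calorie tables."""
    grams = volume_cm3 * DENSITY[food]
    return grams * CAL_PER_GRAM[food]

kcal = calories("apple", sphere_volume(radius_cm=3.5))
```

In the described system, the CNN/SVM stage supplies the food label and the geometric model supplies the volume; this arithmetic is the last step.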
This document proposes a method for visual food recognition using sparse coding. Patch-based representations of food images are used directly without extracted features. Sparse coding is applied to learn dictionaries from training patches. Atom distributions from sparse coding are then used as features to train an SVM classifier. Experiments show the approach achieves over 90% accuracy when the correct class is within the top 2 rankings, demonstrating its potential for real-world use despite the computational complexity.
This document summarizes several research papers on vision-based food analysis systems. It discusses papers on using deep learning techniques like Mask R-CNN and convolutional neural networks to identify and estimate nutrition from food images. Some key applications discussed are food calorie estimation, generating virtual food images to enhance experience, developing large datasets to aid food recognition, analyzing food trays with multiple items, identifying food waste, and mobile apps for recommending recipes from leftovers or identifying halal foods. The document concludes that reviewing these papers provided knowledge on problems and solutions for developing a vision-based food analysis system and highlighted the importance of this topic.
Report on "Food recognition system using BOF model" — madhuripallod
This document is a seminar report submitted by Miss Madhuri J. Pallod to her college on the topic "A Food Recognition System Based on the Bag-of-Features Model". It describes research on developing an image recognition system to classify food images using the bag-of-features model. The system is evaluated on a dataset of nearly 5000 food images across 11 classes. Key aspects investigated include feature extraction methods, visual dictionary learning techniques, and machine learning classifiers. The optimized system achieved a classification accuracy of 78% using dense SIFT features, hierarchical k-means clustering, and a linear SVM classifier.
Food Cuisine Analysis using Image Processing and Machine Learning — IRJET Journal
This document discusses a proposed system for analyzing food cuisine using image processing and machine learning. The system aims to identify the cuisine of a dish by taking either an image of the food or a list of ingredients as input. It would then output the predicted cuisine, along with relevant recipe and nutritional information. The proposed approach aims to overcome limitations of prior work, such as low accuracy and inability to handle both image and ingredient-based inputs. It will utilize a large, detailed dataset and trained machine learning models like KNN to make highly accurate cuisine predictions. The system is intended to benefit food delivery apps and other services by providing customized recommendations to users based on analyzed cuisine preferences.
IRJET- A Food Recognition System for Diabetic Patients based on an Optimized ... — IRJET Journal
This document presents a food recognition system for diabetic patients based on an optimized bag-of-features model. The system extracts dense local features from food images using the scale-invariant feature transform on the HSV color space. It then builds a visual vocabulary of 10,000 visual words using k-means clustering and classifies the food descriptors with a linear support vector machine classifier. The optimized system achieved a classification accuracy of around 78% on a dataset of over 5,000 food images belonging to 11 classes, demonstrating the feasibility of using a bag-of-features approach for automated food recognition to help with carbohydrate counting for diabetic patients.
RECIPE GENERATION FROM FOOD IMAGES USING DEEP LEARNING — IRJET Journal
The document describes a system that can generate cooking recipes from food images using deep learning techniques. It proposes a novel approach that uses CNN, LSTM, and bidirectional LSTM models to extract embeddings of cooking instructions from food images. The system is able to overcome challenges like varying instruction lengths and multiple food items in an image. It was tested on an Indian cuisine dataset and showed potential for applications in information retrieval and automatic recipe recommendation.
WEB BASED NUTRITION AND DIET ASSISTANCE USING MACHINE LEARNING — IRJET Journal
The document describes a proposed web-based nutrition and diet assistance system using machine learning. The system aims to accurately identify foods from images using a convolutional neural network (CNN) model, calculate the nutritional content of the foods, determine the user's BMI, and provide personalized diet recommendations. The system uses techniques like pre-processing, region proposal networks, feature extraction, and CNNs to classify foods, retrieve nutritional data, and suggest diets tailored for the user's BMI and food choices. The goal is to help users conveniently track their nutrition intake and maintain a balanced diet for better health outcomes.
Food Classification and Recommendation for Health and Diet Tracking using Mac... — IRJET Journal
This document discusses a study that uses machine learning to classify food images and recommend foods for health and diet tracking. The researchers collected 1000 food images from various sources to create a dataset with 12 food classes. A convolutional neural network (CNN) model was trained on the dataset and achieved an average accuracy of 86.33% for classifying images into the correct food category. The model can also provide dietary recommendations to help people, such as diabetics, track their health goals. The researchers conclude that combining food classification with recommendation using deep learning is useful for applications like automatic food labeling and dietary assessment.
ResSeg: Residual encoder-decoder convolutional neural network for food segmen... — IJECEIAES
This paper presents the implementation and evaluation of different convolutional neural network architectures for food segmentation. For this task, the recognition of six categories is proposed: the main food groups (protein, grains, fruit, and vegetables) plus two additional groups, rice and drink or juice. In addition, to make the recognition more complex, the networks are tested on dishes at different moments, from serving to finishing, in order to verify their ability to detect when there is no more food on the plate. Finally, a comparison is made between the two best resulting networks: a SegNet with the VGG-16 architecture and a network proposed in this work, called the Residual Segmentation Convolutional Neural Network (ResSeg), with which accuracies greater than 90% and intersection-over-union greater than 75% were obtained. This demonstrates the ability not only of SegNet architectures for food segmentation, but also of residual layers to improve the segmentation contour and to segment dishes with complex distributions or partially eaten food, opening this type of network to applications such as feeding assistants, automated restaurants, and dietary control of the amount of food consumed.
Quality Analysis and Classification of Rice Grains using Image Processing Tec... — IRJET Journal
This document presents a study that aims to analyze and classify rice grains using image processing techniques. The study develops image processing algorithms to segment and identify rice grains in images. By measuring the size (length and breadth) of rice grains through edge detection and a caliper, the algorithms can efficiently analyze grain quality and classify grains. The algorithms are able to generalize classifications across diverse rice varieties by also extracting Fourier features from grain images. The study proposes a methodology involving image pre-processing, morphological operations, edge detection, measurement, and classification to accurately quantify and categorize rice seeds based on size and shape attributes analyzed through images. The methodology aims to provide an alternative approach to rice quality analysis that is more efficient and cost-effective.
This document presents a framework for automatically generating cooking recipes from food images. The framework uses neural networks to recognize ingredients in the food image and then generates step-by-step cooking instructions by considering both the image and predicted ingredients simultaneously. The framework is trained and evaluated on a large dataset of over 1 million recipes. Evaluation shows the framework can improve recipe prediction performance over past approaches and generate convincing and accurate recipes by weighing both the food image and ingredients.
This document presents a framework for generating recipes from food images. The framework first predicts a list of ingredients present in an image using neural networks. It then generates cooking instructions by considering both the image features and predicted ingredients. The system is evaluated on a large dataset and is shown to outperform previous baselines by accurately predicting ingredients and generating coherent recipes directly from images. Key aspects of the framework include using transformers to generate sequential instructions conditioned on image and ingredient embeddings.
A MACHINE LEARNING METHOD FOR PREDICTION OF YOGURT QUALITY AND CONSUMERS PREF... — mlaij
Prediction of quality and consumers' preferences is an essential task for food producers to improve their market share and reduce any gap in food safety standards. In this paper, we develop a machine learning method to predict yogurt preferences based on sensory attributes and analysis of sample images using image-processing texture and color feature extraction techniques. We compare three unsupervised ML feature selection techniques (Principal Component Analysis, Independent Component Analysis, and t-distributed Stochastic Neighbour Embedding) with one supervised ML feature selection technique (Linear Discriminant Analysis) in terms of classification accuracy. Results show the efficiency of the supervised ML feature selection technique over the traditional feature selection techniques.
This document describes a proposed hybrid technique for automatic medical image classification and retrieval using information retrieval, support vector machines, and particle swarm optimization. Key aspects of the proposed approach include extracting low-level visual features from images like color, texture, shape and integrating them with semantic metadata. A content analysis system analyzes image descriptors and assigns semantic labels. Images are indexed and classified during a training phase. The proposed system aims to reduce the semantic gap between low-level features and high-level semantics by combining content-based image retrieval with text-based retrieval and machine learning algorithms.
Semi-automatic model to colony forming units counting — IJECEIAES
Colony forming units counting is a conventional process carried out in bacteriological laboratories, and it is used to follow the behavior of bacteria under different conditions. Different automatic and semi-automatic systems for counting colony forming units currently exist, but, in general, many laboratories continue using manual counting, which consumes considerable time and effort from researchers and laboratory employees. This paper presents a mathematical model developed to segment the colony forming units and, in this way, count them from a digital image of the sample. The method uses the color space information of some points in the image and shows good behavior for images with many or few colony forming units in the sample, according to manual counting. The results show efficiencies close to 98% with MacConkey agar.
Automatic food bio-hazard detection system — IJECEIAES
This paper presents the design of a convolutional neural network architecture oriented to the detection of food waste, generating a low-, medium-, or critical-level alarm. An architecture based on four convolution layers is used, for which a database of 100 samples is prepared. The database is used with the different hyperparameters that make up the final architecture after the training process. By means of confusion-matrix analysis, 100% network performance is obtained; the network delivers its output to a fuzzy system that, depending on the duration of the detection time, generates the different alarm levels associated with the risk.
2. 210 Computer Science & Information Technology (CS & IT)
In this study, we apply the proposed meal box detection and locating technology, LBP-HF texture features, and depth images to construct a novel approach to dietary inspection for central kitchen automation. The system flowchart of the proposed 3D vision-based dietary inspection system is shown in Figure 1.

Figure 1. The system flowchart of the proposed 3D vision-based dietary inspection system.

In Fig. 1, firstly, the sensing module extracts 3D (depth) and 2D images. Secondly, the meal box locating module locates the position of the detected meal box and segments the regions for each food ingredient. Finally, the meal contents identification module identifies the food categories and amounts to evaluate the food quality. The system operation procedures are described in Figure 2. The meal box moves on the conveyor while the sensing module extracts 3D (depth) and 2D images continuously. Once the meal box is located by the meal box locating module, the food quality can be inspected with the color, texture, and depth image features. The experimental results show that the dietary inspection accuracy can approach 80%, the dietary inspection efficiency can reach 1200 ms, and the food quantity accuracy is about 90%. The proposed system is expected to increase meal supply capacity by over 50% and to help hospital dieticians save time in the diet inspection process.

Figure 2. The system operation procedures of the proposed 3D vision-based dietary inspection system.
2. AUTOMATIC MEAL BOX INSPECTION
The baseline of the system design is based on the domain knowledge of dieticians. In this section, the methodologies of the meal box locating and the meal contents identification are described. In the diet content and amount identification process, the 2D/3D image features, e.g., depth, color [7], and textures [6], are used to train the dietary inspection system, and then the system can identify the diet categories and amounts. By using this novel automatic food recognition and amount identification system, manual operations can be reduced significantly, and the accuracy and efficiency of food arrangement can be improved.
2.1. Meal Box Detection and Locating with Multi-resolution Image Alignment
To develop a real-time vision-based dietary inspection system, analysis of the video content captured from the camera is crucial to detect and locate the meal box. In Fig. 3, we can see that the meal box moves continuously on the meal dispatch conveyor at the central kitchen, so detecting the meal box and locating its position in real time is a problem. Here, we propose a novel meal box locating method that uses multi-resolution image alignment to match the meal box template shown in Fig. 4-(b) against the captured images, from low resolution to high resolution, within the region of interest (ROI) shown in Fig. 3.

Figure 3. The ROI setting in the meal box image.

Based on careful observations, the image template of the whole meal box is difficult to generate because food can cover the left side of the meal box and no texture information exists on its right side. Hence, we extract the middle region of the meal box image shown in Fig. 4-(b) as the image template to detect and locate the position of the meal box.
Figure 4. Selection of the image template of the meal box to detect and locate its position. (a) Meal box image on the meal dispatch conveyor. (b) Image template of the meal box.

The algorithm of meal box locating is described as follows.

1. The image template and the meal box image are decomposed into specified multi-resolution levels (pyramid image representation) shown in Fig. 5.
2. Perform the pixel-based template matching (correlation matching) in the lower level (lower resolution). Then, some candidate regions are extracted.
3. Perform the pixel-based template matching in the higher-resolution images within the neighbouring regions obtained from the candidate regions in step 2.

Figure 5. Images are decomposed into multi-resolution levels (pyramid image representation).

To speed up the calculation, all the integer multiplication operations defined in Eq. (1) for the computation of correlation are calculated in advance and stored as a look-up table:

$\mathrm{GrayTable}(A, B) = \{(A - \bar{A}) \times (B - \bar{B}),\ 0 \le A \le 255,\ 0 \le B \le 255\}$ (1)

where $\bar{A}$ and $\bar{B}$ are the mean values of images A and B, respectively. The system operation flowchart of the multi-resolution meal box locating module is shown in Figure 6.
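The coarse-to-fine locating steps above can be sketched in NumPy. This is a minimal illustration under stated assumptions — a two-level pyramid with 2x2-average downsampling and zero-mean normalized cross-correlation as the correlation measure — not the paper's implementation; the deeper pyramid, the ROI restriction, and the Eq. (1) look-up-table optimization are omitted.

```python
import numpy as np

def downsample(img):
    """One pyramid level: halve resolution by 2x2 averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ncc(patch, tmpl):
    """Zero-mean normalized cross-correlation between a patch and the template."""
    p = patch - patch.mean()
    t = tmpl - tmpl.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_full(img, tmpl):
    """Exhaustive correlation matching; returns the best (row, col)."""
    th, tw = tmpl.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            s = ncc(img[r:r + th, c:c + tw], tmpl)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

def coarse_to_fine_match(img, tmpl, search=2):
    """Steps 1-3: match at the coarse level, then refine only around the
    2x-scaled candidate instead of scanning the full-resolution image."""
    cr_, cc_ = match_full(downsample(img), downsample(tmpl))  # coarse candidate
    th, tw = tmpl.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(max(0, 2 * cr_ - search),
                   min(img.shape[0] - th, 2 * cr_ + search) + 1):
        for c in range(max(0, 2 * cc_ - search),
                       min(img.shape[1] - tw, 2 * cc_ + search) + 1):
            s = ncc(img[r:r + th, c:c + tw], tmpl)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

# Synthetic check: hide a bright template in a dark noisy scene and locate it.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 50, (64, 64))
tmpl = rng.uniform(100, 255, (16, 16))
scene[20:36, 30:46] = tmpl
print(coarse_to_fine_match(scene, tmpl))  # -> (20, 30)
```

The refinement window (`search`) plays the role of the "neighbouring regions" in step 3: only a few positions are correlated at full resolution, which is what makes the multi-resolution scheme fast.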
Figure 6. The system operation flowchart of the multi-resolution meal box locating module.

2.2. Meal Box Contents Identification

This section describes the identification methods for the contents in the meal box, i.e., the processes in the "meal box location content identification module". The extracted features include the color distribution (polar histogram) and texture (LBP-HF) features within the ROI. Finally, the dietary inspection is designed with the similarity measure between the online input image and the trained image features in the database. Figure 7 illustrates the flowchart of the food quality inspection system.

Figure 7. Flowchart of food quality inspection.

2.2.1. Color Polar Histogram

Once the meal box is aligned, we can segment the regions for each food ingredient to extract the color distribution feature for identifying the food ingredient color. Here, we transfer the image of each food ingredient into YCbCr color space and use the CbCr color channels to establish the color polar histogram [7, 9] with an angle range from -127° to 127°. Figure 8 shows the polar coordinates.
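A rough sketch of this color polar histogram follows. It is an illustration, not the authors' code: the chroma angle is taken as atan2(Cr - 128, Cb - 128) over the full (-180°, 180°] range rather than the paper's -127°..127° quantization, the bin count is an assumed parameter, and the RGB-to-YCbCr conversion is not shown.

```python
import numpy as np

def color_polar_histogram(cb, cr, bins=36):
    """Polar-angle histogram H = {h_1, ..., h_m} over a segmented region.

    cb, cr: uint8 CbCr channels of one food-ingredient region. Each pixel
    votes for the bin containing theta = atan2(Cr - 128, Cb - 128), a
    simplified stand-in for the theta(Cb, Cr) binning of the paper.
    """
    theta = np.degrees(np.arctan2(cr.astype(float) - 128.0,
                                  cb.astype(float) - 128.0))
    hist, _ = np.histogram(theta, bins=bins, range=(-180.0, 180.0))
    return hist / max(hist.sum(), 1)  # normalize so regions of any size compare

# Toy region with constant chroma: all mass lands in the 45-degree bin.
cb = np.full((8, 8), 160, dtype=np.uint8)  # Cb - 128 = +32
cr = np.full((8, 8), 160, dtype=np.uint8)  # Cr - 128 = +32 -> theta = 45 degrees
h = color_polar_histogram(cb, cr)
print(int(h.argmax()))  # -> 22 (the bin covering [40, 50) degrees)
```

Normalizing the histogram makes the similarity measure between the online input region and the trained features independent of region size.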
Figure 8. Polar coordinates for the Cr and Cb color space.

Here, the color polar histogram is represented as $H = \{h_1, \dots, h_m\}$. The value for each histogram bin can be calculated with the formula in Eq. (2), where $\theta$ is the angle of the coordinate:

$h_l = \sum_{i=1}^{n} \delta[\theta(C_b, C_r) - l]$ (2)

The extracted color polar histogram feature for the image of the food ingredient of the sample image is illustrated in Fig. 9.

Figure 9. The extracted color polar histogram feature for the image of the food ingredient of the sample image.

2.2.2. Local Binary Pattern-Histogram Fourier (LBP-HF)

Local binary pattern-histogram Fourier (LBP-HF) [10] is based on the LBP method and is rotation invariant. The LBP operator is powerful for texture description. It labels the image pixels by thresholding the surrounding pixels against the value of the center pixel and summing the thresholded values weighted by powers of two. The LBP label can be obtained with the formula in Eq. (3):

$\mathrm{LBP}_{P,R}(x, y) = \sum_{p=1}^{P} s\left(f(x, y) - f(x_p, y_p)\right) 2^{p-1}$ (3)

where $f(x, y)$ is the center pixel (red dot) of image $f$ shown in Figure 10, $P$ is the number of surrounding points, $R$ is the sampling radius, and $s(z)$ is the thresholding function shown in Eq. (4):

$s(z) = \begin{cases} 0, & z < 0 \\ 1, & z \ge 0 \end{cases}$ (4)
Figure 10. LBP sampling radius. (a) (P, R) = (6, 1). (b) (P, R) = (8, 2).
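The basic LBP labeling of Eqs. (3)-(4) can be sketched as follows for the (P, R) = (8, 1) case of Figure 10. This is a minimal vectorized sketch; border handling and the neighbor ordering are our assumptions.

```python
import numpy as np

def lbp_8_1(img):
    """Sketch of the basic LBP operator of Eq. (3) for (P, R) = (8, 1).

    Each interior pixel is labeled by thresholding its 8 neighbors
    against the center value (Eq. (4)) and summing the resulting bits
    weighted by powers of two.  Border pixels are skipped for simplicity.
    """
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # The 8 neighbor offsets, visited in circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Eq. (4): bit is 1 when the thresholded difference is >= 0.
        out |= (neighbor >= center).astype(np.uint8) << p
    return out
```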
Furthermore, an extended LBP operator called uniform LBP [11] is proposed to describe the region texture distribution more precisely. A binary pattern is uniform if it contains at most two bitwise transitions from 0 to 1 or from 1 to 0 when the bit pattern is considered circular [6]. For computing the uniform LBP histogram, each uniform pattern is assigned to a specific bin and all non-uniform patterns are assigned to a single bin. The 58 possible uniform patterns of (P, R) = (8, R) are shown in Figure 11.

Figure 11. The 58 possible uniform patterns of (P, R) = (8, R).
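The circular-transition test that defines a uniform pattern can be sketched directly; counting the patterns that pass it for P = 8 reproduces the 58 bins of Figure 11.

```python
def is_uniform(pattern, P=8):
    """Sketch of the uniform-LBP test [11]: a pattern is uniform when
    its circular bit string has at most two 0/1 transitions."""
    bits = [(pattern >> i) & 1 for i in range(P)]
    # Count transitions between circularly adjacent bits.
    transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return transitions <= 2
```

For P = 8 exactly 58 of the 256 patterns are uniform: the all-zeros and all-ones patterns (0 transitions) plus the 8 × 7 patterns with exactly two transitions.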
The uniform LBP owns a rotation-invariance property: rotating the input image only shifts each pattern horizontally within its row in Figure 11, as shown in Figure 12. Based on this property, the LBP-HF [6] image feature is proposed. The LBP-HF feature is generated by applying the Discrete Fourier Transform to every row of the uniform LBP histogram (except the first and the last row). Let H(n, ·) be the DFT of the n-th row of the histogram h_I(U_P(n, r)), as shown in Eq. (5):

H(n, u) = \sum_{r=0}^{P-1} h_I\big(U_P(n, r)\big) e^{-i 2\pi u r / P}    (5)
In Eq. (5), H(n, u) is the Fourier-transformed histogram, n is the number of "1" bits, u is the frequency, h_I is the uniform LBP histogram of the image I, U_P(n, r) is the uniform LBP operator, and r denotes the rotation index within the row. We apply the feature vector consisting of three LBP histogram values (all zeros, all ones, non-uniform) and the Fourier magnitude spectrum values of the LBP-HF in Eq. (6) to describe the texture distribution of the food ingredient image:

fv_{LBP-HF} = \big[\, |H(1, 0)|, \dots, |H(1, P/2)|, \dots, |H(P-1, 0)|, \dots, |H(P-1, P/2)|, h_I(U_P(0, 0)), h_I(U_P(P, 0)), h_I(U_P(P+1, 0)) \,\big]    (6)
Figure 12. Rotation does not change the row that a pattern belongs to in Figure 11.
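The rotation-invariant part of Eqs. (5)-(6) can be sketched as below, assuming the uniform-pattern counts have already been arranged into a (P-1) × P array whose n-th row holds h_I(U_P(n, r)) for r = 0..P-1. The three extra histogram bins of Eq. (6) (all zeros, all ones, non-uniform) are omitted here for brevity.

```python
import numpy as np

def lbp_hf_features(row_hist):
    """Sketch of the LBP-HF construction (Eqs. (5)-(6)).

    A DFT is taken along each row of the uniform LBP histogram and the
    magnitudes |H(n, u)| for u = 0..P/2 form the rotation-invariant
    part of the feature vector.
    """
    H = np.fft.fft(row_hist, axis=1)      # Eq. (5): row-wise DFT
    P = row_hist.shape[1]
    mags = np.abs(H[:, :P // 2 + 1])      # |H(n, 0)| .. |H(n, P/2)|
    return mags.reshape(-1)               # flattened magnitude part of Eq. (6)
```

Because an image rotation only shifts each row circularly (Figure 12), it changes the phase of the DFT but not its magnitudes, which is exactly why this feature is rotation invariant.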
2.2.3. Data Fusion and Matching

In this study, we utilize the Bhattacharyya distance [12] to measure the similarity between the trained and test patterns, which are described with the LBP-HF texture description and the polar color histogram. The Bhattacharyya distance measurement shown in Eq. (7) can be used to measure the similarity between two discrete probability distributions:
d(y) = \frac{1}{S} \sum_{b=1}^{S} \sqrt{1 - \rho[H_b, P_b(y)]},    (7)

where \rho[H_b, P_b(y)] = \sum_{i=1}^{m} \sqrt{\dfrac{h_i \cdot p_i(y)}{\sum_{i=1}^{m} h_i \cdot \sum_{i=1}^{m} p_i(y)}}.
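The per-feature term of Eq. (7) can be sketched as follows; the final distance averages this over the S feature types (color and texture here).

```python
import numpy as np

def bhattacharyya_distance(h, p):
    """Sketch of the per-feature Bhattacharyya term of Eq. (7).

    `h` and `p` are nonnegative histograms (e.g. the polar color
    histogram or the LBP-HF vector); the coefficient rho is computed on
    the normalized bins and the distance is sqrt(1 - rho).
    """
    h = np.asarray(h, dtype=np.float64)
    p = np.asarray(p, dtype=np.float64)
    rho = np.sum(np.sqrt(h * p / (h.sum() * p.sum())))
    return np.sqrt(max(1.0 - rho, 0.0))  # guard tiny negative round-off
```

Identical distributions give distance 0 and disjoint distributions give 1, so a test meal-box region is matched to the trained ingredient whose combined distance is smallest.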
2.3. Food Quantity Measurement

For the inspection of the amount of each food ingredient, we use the depth information obtained from the depth sensor to evaluate the amount of each food ingredient. Figure 13 illustrates the captured depth information used to determine the amount of food ingredient.

Figure 13. The captured depth information used to determine the amount of food ingredient. (a) Original image. (b) Depth image for (a).
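The paper does not give the estimation formula, but the depth-based quantity measurement can be sketched as below under a simple assumption: the amount of food is approximated by integrating the height of the food surface above the empty tray bottom over the partition area. The function name, the calibration constant `pixel_area_mm2`, and the known tray depth are all illustrative assumptions.

```python
import numpy as np

def food_volume(depth_mm, tray_depth_mm, region_mask, pixel_area_mm2):
    """Sketch of depth-based quantity estimation (Section 2.3).

    `depth_mm` is the depth image of the meal box, `tray_depth_mm` the
    depth of the empty partition bottom, and `region_mask` selects one
    food partition.  Food height is clipped at zero so empty regions
    contribute nothing.
    """
    height_mm = np.clip(tray_depth_mm - depth_mm, 0.0, None)  # food height map
    # Integrate height over the partition footprint -> volume in mm^3.
    return float(np.sum(height_mm[region_mask]) * pixel_area_mm2)
```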
3. EXPERIMENTAL RESULTS

In this section, we apply the automatic meal box detection/locating module and the automatic food ingredients identification module to construct a food quality inspection system. The operation scenario is shown in Figure 14.

Figure 14. The system operation scenario.

The proposed food quality inspection system is implemented on the meal box dispatch conveyor of a Chinese food central kitchen. It automatically checks the compliance of the meal box contents against the custom-made meal orders designed by the dietician. The operation module of this automated inspection system uses an Intel i5 2.2 GHz CPU to analyze the contents of the meal box, and the capture module uses a Microsoft Kinect camera.

The performance of the food quality inspection is evaluated with two different meal boxes that have three and four food ingredient partitions, as shown in Figure 15. There are 9 dish types in the meal boxes, including one main dish and 3 or 4 vice-dishes. The efficiency of the meal box detection and locating module and the food ingredients identification module is listed in Table 1.
Figure 15. (a) 4 vice-dish meal box. (b) 3 vice-dish meal box.
Table 1. The efficiency analysis of the meal box detection and locating.

Test Video            | Meal box location detect module (1 box) | Food ingredients identification module (1 box) | Avg. time for one meal box
Video 1 (4 vice-dish) | 10.16 ms                                | 116.44 ms                                      | 126.60 ms
Video 2 (3 vice-dish) | 9.8 ms                                  | 95 ms                                          | 114.8 ms
Table 2 illustrates the accuracy analysis of the proposed food quality inspection system. The
accuracy of meal box detection and locating is higher than 85% and the accuracy of food
ingredients identification can approach 85%.
Table 2. Accuracy analysis for the food quality inspection system.

Test Video            | Meal box location detect module | Food ingredients identification module
Video 1 (4 vice-dish) | 85.3 %                          | 82.1 %
Video 2 (3 vice-dish) | 89.6 %                          | 89.3 %
For the amount inspection of each food ingredient, we apply the depth information to evaluate the amount of each food ingredient. The efficiencies of the meal box location detection module, the food ingredients identification module, and the quantity estimation module are shown in Table 3. The complete average processing time for each meal box is about 1.4 seconds. The accuracy analysis is listed in Table 4.
Table 3. Detection efficiency of the automated optical inspection system.

Test Video | Meal box location detect module (1 box) | Food ingredients identification module (1 box) | Food quantity estimation module (1 box) | Avg. time for one meal box
Video 1    | 21.36 ms                                | 94.1 ms                                        | 25.2 ms                                 | 140.66 ms
Table 4. Detection accuracy of the automated optical inspection system.

Test Video | Meal box location detect module | Food ingredients identification module | Food quantity estimation module
Video 1    | 85.3 %                          | 82.1 %                                 | 74.2 %
4. CONCLUSIONS

In the proposed system, the meal box is first located automatically with the vision-based method, and then all the food ingredients are identified by using the color and LBP-HF texture features. Second, the quantity of each food ingredient is estimated by using the image depth information. The experimental results show that the dietary inspection accuracy can approach 80%, the dietary inspection time can reach 1200 ms, and the food quantity accuracy is about 90%. The proposed system is expected to increase the capacity of meal supply by over 50% and to help the hospital dietician save time in the diet inspection process.
REFERENCES
[1] Chetima, M.M. & Payeur, P. (2008) "Feature selection for a real-time vision-based food inspection
system", Proc. of the IEEE International Workshop on Robotic and Sensors Environments, pp. 120-125.
[2] Blasco, J., Aleixos, N., Molto, E. (2007) "Computer vision detection of peel defects in citrus by means
of a region oriented segmentation algorithm", Journal of Food Engineering, Vol. 81, No. 3, pp. 535-543.
[3] Brosnan, T. & Sun, D.-W.(2004) “Improving quality inspection of food products by computer vision
– a review”, Journal of Food Engineering, Vol. 61, pp. 3-16.
[4] Matsuda, Y. & Hoashi, H. (2012) “Recognition of multiple-food images by detecting candidate
regions.” In Proc. of IEEE International Conference on Multimedia and Expo, pp 1554–1564.
[5] Yang, S., Chen, M., Pomerleau, D., Sukthankar, R. (2010) "Food recognition using statistics of pairwise
local features." International Conference on Computer Vision and Pattern Recognition (CVPR), San
Francisco, CA, pp. 2249-2256.
[6] Zhao, G. Ahonen, T. Matas, J. Pietikainen, M. (2012) “Rotation-invariant image and video
description with local binary pattern features,” IEEE Trans. Image Processing, Vol. 21, No. 4, pp.
1465 –1477.
[7] Suau, P., Rizo, R. Pujol, M. (2004) "Image Recognition Using Polar Histograms.”
http://www.dccia.ua.es/~pablo/papers/ukrobraz2004.pdf
[8] Chen, M.Y., Yang, Y.H., Ho, C.J., Wang, S.H., Liu, S.M., Chang, E., Yeh, C.H., Ouhyoung, M.
(2012) “Automatic Chinese food identification and quantity estimation.” SIGGRAPH Asia 2012
Technical Briefs
[9] Chang, P., Krumm, J. (1999) "Object Recognition with Color Coocurrence Histograms", IEEE
International Conference on Computer Vision and Pattern Recognition.
[10] Ren, J., Jiang, X., Yuan, J., Wang, G., (2014)“Optimizing LBP Structure For Visual Recognition
Using Binary Quadratic Programming”, Signal Processing Letters, IEEE, On page(s): 1346 - 1350
Volume: 21, Issue: 11.
[11] Ojala, T., Pietikäinen, M., Mäenpää, T. (2002) "Multiresolution gray-scale and rotation invariant
texture classification with local binary patterns." IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 24, No. 7, pp. 971-987.
[12] F. Aherne, N. Thacker and P. Rockett,(1998)“The Bhattacharyya Metric as an Absolute Similarity
Measure for Frequency Coded Data,” Kybernetika, vol. 34, no. 4, pp. 363-368.
[13] M.R. Chandraratne, D. Kulasiri, and S. Samarasinghe,(2007) “Classification of Lamb Carcass Using
Machine Vision: Comparison of Statistical and Neural Network Analyses”, Journal of Food
Engineering, vol. 82, no. 1, pp. 26-34.
[14] Aleixos, N., Blasco, J., &Moltó, E. (1999). “Design of a vision system for real-time inspection of
oranges.”In VIII national symposium on pattern recognition and image analysis. Bilbao, Spain.pp.
387–394