Paper: http://ceur-ws.org/Vol-2882/paper51.pdf
Amel Ksibi, Amina Salhi, Ala Alluhaidan and Sahar A. El-Rahman : Insights for wellbeing: Predicting Personal Air Quality Index using Regression Approach. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Providing air pollution information to individuals enables them to understand the air quality of their living environments; the association between people’s wellbeing and the properties of the surrounding environment is therefore an essential area of investigation. This paper proposes predicting a Personal Air Quality Index by harvesting public/open data, which are usually incomplete. To cope with the problem of missing data, we applied the KNN imputation method. To predict the Personal Air Quality Index, we apply a voting regression approach built on three base regressors: a Gradient Boosting regressor, a Random Forest regressor, and a linear regressor. Evaluating the experimental results using the RMSE metric, we obtained average scores of 35.39 for Walker and 51.16 for Car.
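The imputation-plus-voting pipeline described in the abstract can be sketched with scikit-learn roughly as follows. The synthetic data, feature choice, and hyper-parameters are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.ensemble import VotingRegressor, GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 3))  # e.g. PM2.5, NO2, O3 readings (synthetic)
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 2, 200)

# Simulate incomplete public/open data by masking ~10% of the entries.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.1] = np.nan

# KNN imputation fills each missing value from its nearest-neighbour rows.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)

# Voting ensemble of the three base regressors named in the abstract.
voter = VotingRegressor([
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("rf", RandomForestRegressor(random_state=0)),
    ("lr", LinearRegression()),
])
voter.fit(X_imputed[:150], y[:150])
pred = voter.predict(X_imputed[150:])
rmse = np.sqrt(mean_squared_error(y[150:], pred))
print(f"RMSE: {rmse:.2f}")
```

The voting regressor simply averages the three base models' predictions, which tends to reduce variance relative to any single model.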
Overview of MediaEval 2020 Insights for Wellbeing: Multimodal Personal Health Lifelog Data Analysis
Paper: http://ceur-ws.org/Vol-2882/paper11.pdf
YouTube: https://youtu.be/fBPuacAZkxs
Minh-Son Dao, Peijiang Zhao, Thanh Nguyen, Thanh Binh Nguyen, Duc Tien Dang Nguyen and Cathal Gurrin : Overview of MediaEval 2020 Insights for Wellbeing: Multimodal Personal Health Lifelog Data Analysis. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper provides a description of the MediaEval 2020 “Multimodal personal health lifelog data analysis" task. The purpose of this task is to develop approaches that process environment data to obtain insights about personal wellbeing. Establishing the association between people’s wellbeing and properties of the surrounding environment is vital for numerous research areas. Our task focuses on the internal associations of heterogeneous data. Participants create systems that derive insights from multimodal lifelog data that are important for health and wellbeing, tackling two challenging subtasks. The first is to investigate whether public/open data can be used to predict personal air pollution data. The second is to develop approaches to predict the personal air quality index (AQI) using images captured by people (plus GAQD). This task targets (but is not limited to) researchers in the areas of multimedia information retrieval, machine learning, AI, data science, event-based processing and analysis, multimodal multimedia content analysis, lifelog data analysis, urban computing, environmental science, and atmospheric science.
Presented by: Peijiang Zhao
Use Visual Features From Surrounding Scenes to Improve Personal Air Quality Data Prediction Performance
Paper: http://ceur-ws.org/Vol-2882/paper40.pdf
YouTube: https://youtu.be/SL5Hvu1mARY
Trung-Quan Nguyen, Dang-Hieu Nguyen and Loc Tai Tan Nguyen : Use Visual Features From Surrounding Scenes to Improve Personal Air Quality Data Prediction Performance. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper, we propose a method to predict the personal air quality index in an area by combining the levels of the pollutants PM2.5, NO2, and O3, measured at weather stations near that area, with photos of the surrounding scenes taken there. Our approach uses the Inverse Distance Weighted (IDW) technique to estimate missing air pollutant levels and then uses regression to integrate visual features from the photos in order to refine the predicted values. After that, those values are used to calculate the Air Quality Index (AQI). The results show that the proposed method may not improve prediction performance in some cases.
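The final step described above, converting a pollutant concentration into an AQI value, is commonly done with a piecewise-linear breakpoint formula. The sketch below uses the (pre-2024) US EPA 24-hour PM2.5 breakpoints; the task may use a different national scale, so treat the table as an illustrative assumption.

```python
def aqi_from_concentration(c, breakpoints):
    """Linearly interpolate the AQI within the breakpoint interval containing c."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("concentration outside breakpoint table")

# US EPA PM2.5 (24h, µg/m³) breakpoints: (C_lo, C_hi, I_lo, I_hi)
PM25 = [
    (0.0, 12.0, 0, 50),       # Good
    (12.1, 35.4, 51, 100),    # Moderate
    (35.5, 55.4, 101, 150),   # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),  # Unhealthy
]

print(aqi_from_concentration(35.0, PM25))  # → 99
```

The overall AQI for a site is then the maximum of the per-pollutant indices.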
Personal Air Quality Index Prediction Using Inverse Distance Weighting Method
Paper: http://ceur-ws.org/Vol-2882/paper39.pdf
YouTube: https://youtu.be/3r_oSguFPVM
Trung-Quan Nguyen, Dang-Hieu Nguyen and Loc Tai Tan Nguyen : Personal Air Quality Index Prediction Using Inverse Distance Weighting Method. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper, we propose a method to predict the personal air quality index in an area using only the levels of the pollutants PM2.5, NO2, and O3, all measured at weather stations near that area. Our approach uses one of the best-known interpolation methods in spatial analysis, the Inverse Distance Weighted (IDW) technique, to estimate missing air pollutant levels. Those levels are then used to calculate the Air Quality Index (AQI). The results show that the proposed method is suitable for predicting these air pollutant levels.
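A minimal sketch of the IDW interpolation described above: the pollutant level at a query point is a distance-weighted average of the readings from nearby stations. The power parameter p=2 and the coordinates below are common illustrative choices, not necessarily what the authors used.

```python
import math

def idw(query, stations, readings, p=2.0):
    """Inverse-distance-weighted estimate at `query`.

    stations: [(x, y), ...] station coordinates
    readings: pollutant level measured at each station
    """
    num = den = 0.0
    for xy, v in zip(stations, readings):
        d = math.dist(query, xy)
        if d == 0.0:
            return v  # query coincides with a station: use its reading
        w = 1.0 / d ** p  # closer stations get larger weights
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
pm25 = [20.0, 40.0, 60.0]
# Near the first station, so the estimate stays close to 20.
print(idw((1.0, 1.0), stations, pm25))
```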
This research aims to predict air pollution levels from a given dataset, building predictions with several models, comparing them, and selecting the most suitable solution.
An Analytical Survey on Prediction of Air Quality Index
A drastic increase in modernization gives rise to many industries and automobiles, which in turn become a very common cause of environmental issues such as air and water pollution. Air pollution affects our lives immediately: it contaminates the air we breathe and causes serious health hazards. It is therefore very important to predict the Air Quality Index for the coming days so that prompt action can be taken by the concerned authorities. Air quality readings for the different gases can be collected through physical sensors, and these readings can be used to predict the future Air Quality Index. Machine learning acts as the catalyst in this prediction scenario, producing an accurate Air Quality Index for future instances. Most learning systems need a huge amount of data, which is not always available, so there is a need to predict the Air Quality Index using a considerably smaller amount of past instance data. This paper concentrates on analyzing past work on predicting the Air Quality Index using machine learning, evaluating its flaws, and estimating new possible prediction approaches. Suraj Kapse | Akshay Kurumkar | Vighnesh Manthapurvar | Prof. Rajesh Tak, "An Analytical Survey on Prediction of Air Quality Index", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-6, October 2019. URL: https://www.ijtsrd.com/papers/ijtsrd28072.pdf Paper URL: https://www.ijtsrd.com/engineering/information-technology/28072/an-analytical-survey-on-prediction-of-air-quality-index/suraj-kapse
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Presentation on the topic of sensing air-quality at city level based on Twitter data given at the IEEE Image, Video, and Multidimensional Signal Processing (IVMSP) 2018 workshop in Aristi, Greece.
The two main challenges in predicting wind speed are its dependence on various atmospheric factors and on random variables. This paper explores the possibility of developing a wind speed prediction model for Coimbatore, Tamil Nadu, India using different Artificial Neural Networks (ANNs) and a Categorical Regression empirical model, implemented in SPSS software. The proposed neural network models are tested on real-time wind data and enhanced with statistical capabilities. The objective is to predict wind speed accurately, minimizing errors, using a Multi-Layer Perceptron Neural Network (MLPNN), a Radial Basis Function Neural Network (RBFNN), and Categorical Regression (CATREG). Results show good agreement between the estimated and measured values of wind speed.
Air quality challenges and business opportunities in China: Fusion of environ...
MMEA (The Measurement, Monitoring and Environmental Efficiency Assessment) research program final seminar presentation by Dr. Ari Karppinen, Finnish Meteorological Institute
Air pollution monitoring system using mobile GPRS sensors array (ppt)
This paper contains a brief introduction to vehicular pollution and the effects of its increase on the environment as well as on human health. To monitor this pollution, a wireless sensor network (WSN) system is proposed. The proposed system consists of a Mobile Data-Acquisition Unit (Mobile-DAQ) and a fixed Internet-Enabled Pollution Monitoring Server (Pollution-Server). The Mobile-DAQ unit integrates a single-chip microcontroller, an air pollution sensor array, a General Packet Radio Service modem (GPRS-Modem), and a Global Positioning System module (GPS-Module). The Pollution-Server is a high-end personal computer application server with Internet connectivity. The Mobile-DAQ unit gathers air pollutant levels (CO, NO2, and SO2) and packs them in a frame with the GPS physical location, time, and date. The frame is subsequently uploaded to the GPRS-Modem and transmitted to the Pollution-Server via the public mobile network. A database server is attached to the Pollution-Server for storing the pollutant levels for further use by various clients such as environmental protection agencies, vehicle registration authorities, and tourism and insurance companies.
AI Based PPT for Projects, Useful for Editing
Creating a comprehensive discussion on artificial intelligence (AI) that spans 3000 words would cover a vast array of topics, including its history, development, applications, ethical implications, and future prospects. To give you an idea of what such an extensive essay might entail, here's an outline highlighting key points that could be explored in each section:
1. **Introduction to Artificial Intelligence (500 words)**
- Definition of AI.
- Brief history and evolution of AI.
- Key milestones in AI development.
2. **Fundamental Concepts and Technologies in AI (500 words)**
- Machine Learning and Deep Learning.
- Neural Networks.
- Natural Language Processing (NLP).
- Computer Vision.
- Robotics and Automation.
3. **Applications of AI Across Various Sectors (500 words)**
- AI in Healthcare: diagnostics, treatment planning, drug discovery.
- AI in Business: customer service, data analysis, automation.
- AI in Transportation: autonomous vehicles, traffic management.
- AI in Education: personalized learning, grading systems.
- AI in Entertainment: gaming, content creation.
4. **Ethical Considerations and Challenges in AI (500 words)**
- Bias and fairness in AI algorithms.
- Privacy concerns with AI technologies.
- AI and job displacement.
- Ethical AI development and use.
5. **AI's Global Impact and Policy Implications (500 words)**
- AI's impact on global economies.
- International regulations and policies on AI.
- AI in global governance and security.
6. **The Future of AI and Emerging Trends (500 words)**
- Advancements in AI technologies.
- Potential future applications and innovations.
- The role of AI in shaping future societies.
In a detailed essay, each of these sections would delve into specific examples, case studies, and theoretical frameworks, providing a comprehensive understanding of AI. The essay would not only inform about the current state of AI but also provoke thought about its future implications and how society might adapt to and shape these emerging technologies.
Computer model simulations are widely used in the investigation of complex hydrological systems. In particular, hydrological models are tools that help both to better understand hydrological processes and to predict extreme events such as floods and droughts. Usually, model parameters need to be estimated through calibration, in order to constrain model outputs to observed variables.
The model parameters used for calibration are usually selected based on the expert knowledge of the modeller or by a local one-at-a-time (OAT) sensitivity analysis (SA). However, for complex models those approaches may not properly identify the most sensitive parameters. In particular, local OAT SA methods are only effective for assessing the relative importance of input factors when the model is linear, monotonic, and additive, which is rarely the case for complex environmental models. In contrast, Global Sensitivity Analysis (GSA) is a formal method for the statistical evaluation of parameters that contribute significantly to model performance. GSA techniques explore the entire feasible space of each model parameter, and they do not require any assumptions about the nature of the model (such as linearity or additivity).
In this work we apply GSA to LISFLOOD, a fully-distributed hydrological model used for flood forecasting at Pan-European scale within the European Flood Awareness System (EFAS). Two case studies are considered, a snowmelt-driven and an evapotranspiration-driven catchment, to identify sensitive parameters for both types of hydrological regime. Results of the GSA will then be used to select the parameters that need to be estimated during model calibration. Considering the large number of parameters of a fully-distributed model, a two-step GSA framework is applied. First, we implement the computationally efficient screening method of Morris. This method requires a limited number of simulations and produces a qualitative ranking and selection of important factors. As a second step, we apply the variance-based method of Sobol only to the subset of factors determined as important during the screening. The method of Sobol provides quantitative estimates of first-order and total-order sensitivity indices for the input factors.
The calibration results after the GSA are described for both case studies and compared against those obtained using only prior expert knowledge.
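The screening step above can be illustrated with a toy Morris-style elementary-effects sketch that ranks parameters by influence. The model function, bounds, and sample counts below are invented for illustration; a real study would wrap LISFLOOD runs and follow up with Sobol indices on the retained parameters.

```python
import random

def morris_screening(model, bounds, n_base=50, delta=0.05, seed=0):
    """Mean absolute elementary effect (mu*) per parameter.

    For each random base point, perturb one parameter at a time by a
    fraction `delta` of its range and record the scaled output change.
    """
    rng = random.Random(seed)
    k = len(bounds)
    mu_star = [0.0] * k
    for _ in range(n_base):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        y0 = model(x)
        for i, (lo, hi) in enumerate(bounds):
            xp = list(x)
            xp[i] = min(xp[i] + delta * (hi - lo), hi)  # stay within bounds
            mu_star[i] += abs((model(xp) - y0) / delta)
    return [m / n_base for m in mu_star]

# Hypothetical 3-parameter model: parameter 0 dominates, parameter 2 is inert.
effects = morris_screening(
    lambda p: 10 * p[0] + 1 * p[1] + 0 * p[2],
    bounds=[(0, 1)] * 3,
)
print(effects)  # parameter 0 ranks far above the others; parameter 2 scores 0
```

Only the parameters with large mu* would then be passed to the more expensive variance-based Sobol analysis.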
Description of the space debris activities associated with the ACCORD (Alignment of Capacity and Capability for the Objective of Reducing Debris) project.
This presentation, presented at the Clean Space Workshop held at Harwell Oxford Campus (29/10/2013) provides a summary of the research associated with the prototype ACCORD environmental impact rating for spacecraft, which aims to quantify the impact of a prospective spacecraft on the debris environment based on a number of factors, including debris mitigation capability.
Cities operate ambient air quality monitoring networks but often do not analyze and interpret the data; the data simply gets "stacked". Networks are often not configured correctly to capture data trends and meet monitoring objectives. This presentation provides guidance and uses Mumbai's ambient air quality data to illustrate its application.
A data science observatory based on RAMP - rapid analytics and model prototyping
RAMP approach to analytics: Rapid Analytics and Model Prototyping; collaborative data challenges with in-built data science process management tools and analytics; An observatory of data science and scientists. Presented at the Design Theory Special Interest Group of International Design Society. Mines ParisTech and Centre for Data Science.
Classification of Strokes in Table Tennis with a Three Stream Spatio-Temporal CNN for MediaEval 2020
Paper: http://ceur-ws.org/Vol-2882/paper62.pdf
YouTube: https://youtu.be/gV-rvV3iFDA
Pierre-Etienne Martin, Jenny Benois-Pineau, Boris Mansencal, Renaud Péteri and Julien Morlier : Classification of Strokes in Table Tennis with a Three Stream Spatio-Temporal CNN for MediaEval 2020. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This work presents a method for classifying table tennis strokes using spatio-temporal convolutional neural networks. The fine-grained classification is performed on trimmed video segments recorded at 120 fps with different players performing in natural conditions. From those segments, the frames are extracted, their optical flow is computed and the pose of the player is estimated. From the optical flow amplitude, a region of interest is inferred. A three stream spatio-temporal convolutional neural network using combination of those modalities and 3D attention mechanisms is presented in order to perform classification.
Presented by: Pierre-Etienne Martin
HCMUS at MediaEval 2020: Ensembles of Temporal Deep Neural Networks for Table Tennis Strokes Classification Task
Paper: http://ceur-ws.org/Vol-2882/paper50.pdf
Hai Nguyen-Truong, San Cao, N. A. Khoa Nguyen, Bang-Dang Pham, Hieu Dao, Minh-Quan Le, Hoang-Phuc Nguyen-Dinh, Hai-Dang Nguyen and Minh-Triet Tran : HCMUS at MediaEval 2020: Ensembles of Temporal Deep Neural Networks for Table Tennis Strokes Classification Task. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The Sports Video Classification task in the MediaEval 2020 challenge focuses on classifying different types of table tennis strokes in video segments. In this task, we - the HCMUS team - perform multiple experiments with a combination of models, including SlowFast, Optical Flow, DensePose, R(2+1)D, and Channel-Separated Convolutional Networks, to classify 21 types of table tennis strokes from video segments. In total, we submit eight runs corresponding to five different models with different sets of hyper-parameters. In addition, we apply pre-processing techniques to the dataset so that our models learn and classify more accurately. According to the evaluation results, one of our methods outperforms those of the other teams. In particular, our best run achieves 31.35% global accuracy, and all of our methods show promising results in terms of local and global accuracy for action recognition tasks.
Sports Video Classification: Classification of Strokes in Table Tennis for MediaEval 2020
Paper: http://ceur-ws.org/Vol-2882/paper2.pdf
YouTube: https://youtu.be/-bRL868b8ys
Pierre-Etienne Martin, Jenny Benois-Pineau, Boris Mansencal, Renaud Péteri, Laurent Mascarilla, Jordan Calandre and Julien Morlier : Sports Video Classification: Classification of Strokes in Table Tennis for MediaEval 2020. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Fine-grained action classification raises new challenges compared to classical action classification problems. Sport video analysis is a very popular research topic, due to the variety of application areas, ranging from multimedia intelligent devices with user-tailored digests up to the analysis of athletes' performances. Running since 2019 as part of MediaEval, we offer a task which consists of classifying table tennis strokes from videos recorded in natural conditions at the University of Bordeaux. The aim is to build tools for teachers, coaches and players to analyse table tennis games. Such tools could lead to automatic profiling of players and adaptation of their training to improve their sport skills more efficiently.
Presented by: Pierre-Etienne Martin
Predicting Media Memorability from a Multimodal Late Fusion of Self-Attention and LSTM Models
Paper: http://ceur-ws.org/Vol-2882/paper61.pdf
YouTube: https://youtu.be/brmI4g3jLS4
Ricardo Kleinlein, Cristina Luna-Jiménez, Fernando Fernández-Martínez and Zoraida Callejas : Predicting Media Memorability from a Multimodal Late Fusion of Self-Attention and LSTM Models. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper reports on the GTH-UPM team experience in the Predicting Media Memorability task at MediaEval 2020. Teams were requested to predict both short-term and long-term memorability scores, understanding such a score as a measure of whether a video persists in a viewer's memory or not. Our proposed system relies on a late fusion of the scores predicted by three sequential models, each trained over a different modality: video captions, aural embeddings and visual optical flow-based vectors. Whereas single-modality models show a low or zero Spearman correlation coefficient, their combination considerably boosts performance over development data, up to 0.2 in the short-term memorability prediction subtask and 0.19 in the long-term subtask. However, performance over test data drops to 0.016 and -0.041, respectively.
Presented by: Ricardo Kleinlein
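A late fusion of this kind can be illustrated with a stdlib-only sketch: each modality model produces its own score per video, and the fused score is their (here equal-weighted, which is an assumption) average, evaluated with Spearman correlation. All numbers are invented, and this is not the team's code:

```python
from math import sqrt

def late_fusion(scores_per_modality, weights):
    """Weighted average of per-modality predicted scores for each video."""
    n = len(scores_per_modality[0])
    total_w = sum(weights)
    return [sum(w * s[i] for w, s in zip(weights, scores_per_modality)) / total_w
            for i in range(n)]

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie handling -- enough for this illustration.)"""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy outputs of the caption, audio and optical-flow models for 4 videos.
captions = [0.9, 0.4, 0.7, 0.2]
audio    = [0.8, 0.5, 0.6, 0.3]
flow     = [0.7, 0.3, 0.8, 0.1]
fused = late_fusion([captions, audio, flow], weights=[1.0, 1.0, 1.0])
ground_truth = [0.95, 0.40, 0.75, 0.20]
print(round(spearman(fused, ground_truth), 3))
```

In practice the fusion weights would be tuned on development data, which is exactly where single-modality weaknesses can be compensated.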
Essex-NLIP at MediaEval Predicting Media Memorability 2020 Task
Paper: http://ceur-ws.org/Vol-2882/paper52.pdf
Janadhip Jacutprakart, Rukiye Savran Kiziltepe, John Q. Gan, Giorgos Papanastasiou and Alba G. Seco de Herrera : Essex-NLIP at MediaEval Predicting Media Memorability 2020 Task. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper, we present our approach and the main results from the Essex NLIP Team's participation in the MediaEval 2020 Predicting Media Memorability task. The task requires participants to build systems that can predict short-term and long-term memorability scores on the real-world video samples provided. The focus of our approach is on the use of colour-based visual features as well as the video annotation meta-data. In addition, hyper-parameter tuning was explored. Despite the simplicity of the methodology, our approach achieves competitive results. We investigated the use of different visual features and assessed memorability prediction through various regression models, with Random Forest regression as our final model for predicting the memorability of videos.
Overview of MediaEval 2020 Predicting Media Memorability task: What Makes a Video Memorable?
Paper: http://ceur-ws.org/Vol-2882/paper6.pdf
YouTube: https://youtu.be/ySGGu_4vaxs
Alba García Seco De Herrera, Rukiye Savran Kiziltepe, Jon Chamberlain, Mihai Gabriel Constantin, Claire-Hélène Demarty, Faiyaz Doctor, Bogdan Ionescu and Alan F. Smeaton : Overview of MediaEval 2020 Predicting Media Memorability task: What Makes a Video Memorable? Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper describes the MediaEval 2020 Predicting Media Memorability task. After first being proposed at MediaEval 2018, the Predicting Media Memorability task is in its 3rd edition this year, as the prediction of short-term and long-term video memorability (VM) remains a challenging task. In 2020, the format remained the same as in previous editions. This year the videos are a subset of the TRECVid 2019 Video to Text dataset, containing more action-rich video content compared with the 2019 task. This paper describes the main aspects of the task, including its main characteristics, the collection, the ground truth dataset, the evaluation metrics and the requirements for run submission.
Presented by: Rukiye Savran Kiziltepe
Fooling an Automatic Image Quality Estimator
Paper: http://ceur-ws.org/Vol-2882/paper45.pdf
Benoit Bonnet, Teddy Furon and Patrick Bas : Fooling an Automatic Image Quality Estimator. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper we present our work on the 2020 MediaEval task "Pixel Privacy: Quality Camouflage for Social Images". Blind Image Quality Assessment (BIQA) is a classifier that returns a quality score for any given image. Our task is to modify an image to decrease its BIQA score while maintaining a good perceived quality. Since BIQA is a deep neural network, we took an adversarial attack approach to the problem.
Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable Color Filter
Paper: http://ceur-ws.org/Vol-2882/paper16.pdf
YouTube: https://youtu.be/ix_b9K7j72w
Zhengyu Zhao : Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable Color Filter. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper presents the submission of our RU-DS team to the Pixel Privacy Task 2020. We propose to fool the blind image quality assessment model by transforming images based on optimizing a human-understandable color filter. In contrast to common approaches that rely on small, $L_p$-bounded additive pixel perturbations, our approach yields large yet smooth perturbations. Experimental results demonstrate that in the specific context of this task, our approach is able to achieve strong adversarial effects, but has to sacrifice image appeal.
Presented by: Zhengyu Zhao
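The contrast with per-pixel noise can be made concrete with a toy "human-understandable" filter: a smooth per-channel gamma curve. This is only a hedged sketch of such a transform with invented pixel values; the actual work optimizes the filter parameters against the BIQA model, which is not reproduced here:

```python
def apply_gamma_filter(pixels, gammas):
    """Apply a per-channel gamma curve -- a smooth, human-understandable
    color filter. Gamma < 1 brightens a channel, gamma > 1 darkens it.
    Inputs in [0, 1] stay in [0, 1], so the change is large but smooth."""
    return [tuple(channel ** g for channel, g in zip(px, gammas)) for px in pixels]

# Toy image: pixels as (r, g, b) tuples in [0, 1].
image = [(0.25, 0.5, 0.75), (0.1, 0.9, 0.4)]
warm = apply_gamma_filter(image, gammas=(0.8, 1.0, 1.3))
print(all(0.0 <= c <= 1.0 for px in warm for c in px))
```

Because the attack operates on a handful of curve parameters rather than on every pixel, the resulting perturbation is globally coherent, which is what keeps it looking like an ordinary photo filter.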
Pixel Privacy: Quality Camouflage for Social Images
Paper: http://ceur-ws.org/Vol-2882/paper77.pdf
YouTube: https://youtu.be/8Rr4KknGSac
Zhuoran Liu, Zhengyu Zhao, Martha Larson and Laurent Amsaleg : Pixel Privacy: Quality Camouflage for Social Images. Proc. of MediaEval 2020, 14-15 December 2020, Online.
High-quality social images shared online can be misappropriated for unauthorized goals, where the quality filtering step is commonly carried out by automatic Blind Image Quality Assessment (BIQA) algorithms. Pixel Privacy benchmarks privacy-protective approaches that protect privacy-sensitive images against unethical computer vision algorithms. In the 2020 task, participants are encouraged to develop camouflage methods that can effectively decrease the BIQA quality score of high-quality images while maintaining image appeal. The camouflage needs to be either imperceptible to the human eye or a visible enhancement.
Presented by: Zhuoran Liu
HCMUS at MediaEval 2020: Image-Text Fusion for Automatic News-Images Re-Matching
Paper: http://ceur-ws.org/Vol-2882/paper73.pdf
YouTube: https://youtu.be/TadJ6y7xZeA
Thuc Nguyen-Quang, Tuan-Duy Nguyen, Thang-Long Nguyen-Ho, Anh-Kiet Duong, Xuan-Nhat Hoang, Vinh-Thuyen Nguyen-Truong, Hai-Dang Nguyen and Minh-Triet Tran : HCMUS at MediaEval 2020: Image-Text Fusion for Automatic News-Images Re-Matching. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Matching text and images based on their semantics plays an important role in cross-media retrieval. However, text and images in articles have a complex connection. In the context of the MediaEval 2020 Challenge, we propose three multi-modal methods for mapping text and images of news articles to a shared space in order to perform efficient cross-retrieval. Our methods show systematic improvement and validate our hypotheses, while the best-performing method reaches a recall@100 score of 0.2064.
Presented by: Thuc Nguyen-Quang
Efficient Supervision Net: Polyp Segmentation using EfficientNet and Attention Unit
Paper: http://ceur-ws.org/Vol-2882/paper72.pdf
Sabarinathan D and Suganya Ramamoorthy : Efficient Supervision Net: Polyp Segmentation using EfficientNet and Attention Unit. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Colorectal cancer is the third most common cause of cancer worldwide, and identifying it in its early stages remains a challenging problem. Motivated by this, the main objective of this paper is to develop a multi-supervision net algorithm for segmenting polyps on a comprehensive dataset. The risk of colorectal cancer can be reduced by early diagnosis of polyps during a colonoscopy. The disease and its symptoms vary widely, doctors and medical analysts constantly need to update their knowledge, and a small variation in symptoms may indicate a much higher level of risk. We use the Medico polyp challenge dataset, which consists of 1000 segmented polyp images from the gastrointestinal tract. We use EfficientNet-B4 as a pre-trained backbone in the multi-supervision net, and the model is trained with multiple output layers. We present quantitative results on the colorectal dataset and achieve good results on all performance metrics. The experimental results show that the proposed model is robust and segments polyps with a good level of accuracy on a comprehensive dataset across metrics such as Dice coefficient, Recall, Precision and F2.
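The metrics recurring in these segmentation abstracts (Dice coefficient, Jaccard index, and the recall-weighted F2) all reduce to pixel-wise confusion counts. A minimal stdlib sketch with invented binary masks, not any team's evaluation code:

```python
def confusion(pred, truth):
    """Pixel-wise TP/FP/FN counts for flat binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def dice(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def jaccard(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return tp / (tp + fp + fn)

def f_beta(pred, truth, beta=2.0):
    """F2 weights recall above precision: missing a polyp costs more
    than over-segmenting one."""
    tp, fp, fn = confusion(pred, truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy flattened masks (1 = polyp pixel, 0 = background).
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3), round(jaccard(pred, truth), 3))
```

Note that Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why papers often report only one of them.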
HCMUS at Medico Automatic Polyp Segmentation Task 2020: PraNet and ResUnet++ for Polyps Segmentation
Paper: http://ceur-ws.org/Vol-2882/paper47.pdf
YouTube: https://youtu.be/vMsM4zg2-JY
Tien-Phat Nguyen, Tan-Cong Nguyen, Gia-Han Diep, Minh-Quan Le, Hoang-Phuc Nguyen-Dinh, Hai-Dang Nguyen and Minh-Triet Tran : HCMUS at Medico Automatic Polyp Segmentation Task 2020: PraNet and ResUnet++ for Polyps Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The Medico task, MediaEval 2020, explores the challenge of building accurate and high-performance algorithms to detect all types of polyps in endoscopic images. We proposed different approaches leveraging the advantages of either the ResUnet++ or the PraNet model to efficiently segment polyps in colonoscopy images, with modifications to the network structure, parameters, and training strategies to tackle various observed characteristics of the given dataset. Our methods outperform the other teams' methods in both accuracy and efficiency. After the evaluation, we rank second for task 1 (with a Jaccard index of 0.777 and the best Precision and Accuracy scores) and first for task 2 (with 67.52 FPS and a Jaccard index of 0.658).
Depth-wise Separable Atrous Convolution for Polyps Segmentation in Gastro-Intestinal Tract
Paper: http://ceur-ws.org/Vol-2882/paper31.pdf
Syed Muhammad Faraz Ali, Muhammad Taha Khan, Syed Unaiz Haider, Talha Ahmed, Zeshan Khan and Muhammad Atif Tahir : Depth-wise Separable Atrous Convolution for Polyps Segmentation in Gastro-Intestinal Tract. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Identification of polyps in endoscopic images is critical for the diagnosis of colon cancer. Finding the exact shape and size of polyps requires the segmentation of endoscopic images. This research explores the advantage of using depth-wise separable convolution in the atrous convolution of the ResUNet++ architecture. Deep atrous spatial pyramid pooling was also implemented on the ResUNet++ architecture. The results show that architecture with separable convolution has a smaller size and fewer GFLOPs without degrading the performance too much.
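The size reduction claimed above follows directly from the depthwise-separable factorization, and is easy to verify with a back-of-envelope parameter count. The channel and kernel sizes below are illustrative, not the paper's actual configuration:

```python
def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias ignored).
    Dilation (atrous rate) changes the receptive field, not the count."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k conv (one spatial filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 256 -> 256 channels, 3 x 3 kernel.
c_in, c_out, k = 256, 256, 3
std = standard_conv_params(c_in, c_out, k)
sep = separable_conv_params(c_in, c_out, k)
print(std, sep, round(std / sep, 1))
```

For a 3x3 kernel with many channels the saving approaches a factor of k² ≈ 9, and since GFLOPs scale with the same products, the compute drops by roughly the same factor.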
Deep Conditional Adversarial Learning for Polyp Segmentation
Paper: http://ceur-ws.org/Vol-2882/paper22.pdf
Debapriya Banik and Debotosh Bhattacharjee : Deep Conditional Adversarial learning for polyp Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This work addresses the Medico automatic polyp segmentation challenge, which is part of MediaEval 2020. We propose a deep conditional adversarial learning based network for the automatic polyp segmentation task. The network comprises two interdependent models, a generator and a discriminator. The generator is an FCN employed for predicting the polyp mask, while the discriminator enforces the segmentation to be as similar as possible to the real segmented mask (ground truth). Our proposed model achieved a competitive result on the test dataset provided by the organizers of the challenge.
A Temporal-Spatial Attention Model for Medical Image Detection
Paper: http://ceur-ws.org/Vol-2882/paper21.pdf
Hwang Maxwell, Wu Cai, Hwang Kao-Shing, Xu Yong Si and Wu Chien-Hsing : A Temporal-Spatial Attention Model for Medical Image Detection. Proc. of MediaEval 2020, 14-15 December 2020, Online.
A local region model with attentive temporal-spatial pathways is proposed for automatically learning various target structures. The attentive spatial pathway highlights the salient region to generate bounding boxes and ignores irrelevant regions in an input image. The proposed attention mechanism allows efficient object localization, and the overall predictive performance increases because there are fewer false positives in the object detection task for medical images with manual annotations. The experimental results show that the proposed models consistently increase the base architectures' predictive performance for different datasets and training sizes without undue computational cost.
HCMUS-Juniors 2020 at Medico Task in MediaEval 2020: Refined Deep Neural Network and UNet for Polyps Segmentation
Paper: http://ceur-ws.org/Vol-2882/paper20.pdf
YouTube: https://youtu.be/CVelQl5Luf0
Quoc-Huy Trinh, Minh-Van Nguyen, Thiet-Gia Huynh and Minh-Triet Tran : HCMUS-Juniors 2020 at Medico Task in MediaEval 2020: Refined Deep Neural Network and UNet for Polyps Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The Medico: Multimedia Task focuses on developing an efficient and accurate framework for computer-aided diagnosis systems for automatic polyp segmentation, detecting all types of polyps in endoscopic images of the gastrointestinal (GI) tract. We, the HCMUS team, propose a solution that combines Residual modules, Inception modules, and an adaptive convolutional neural network with the UNet model and PraNet to semantically segment all types of polyps in endoscopic images. We submit multiple runs with different architectures and parameters for our models. Our methods show promising results in accuracy and efficiency across multiple experiments.
Transfer of Knowledge: Fine-tuning for Polyp Segmentation with Attention
Paper: http://ceur-ws.org/Vol-2882/paper15.pdf
Rabindra Khadka : Transfer of Knowledge: Fine-tuning for Polyp Segmentation with Attention. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper describes how the transfer of prior knowledge can effectively take on segmentation tasks with the help of attention mechanisms. A UNet model pretrained on a brain MRI dataset was fine-tuned with the polyp dataset. An attention mechanism was integrated to focus on relevant regions in the input images. The implemented architecture is evaluated on 200 validation images using intersection over union and Dice score between the ground truth and the predicted region. The model demonstrates promising results with computational efficiency.
Bigger Networks are not Always Better: Deep Convolutional Neural Networks for Automated Polyp Segmentation
Paper: http://ceur-ws.org/Vol-2882/paper12.pdf
Adrian Krenzer and Frank Puppe : Bigger Networks are not Always Better: Deep Convolutional Neural Networks for Automated Polyp Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper presents our team's (AI-JMU) approach to the Medico automated polyp segmentation challenge. We consider deep convolutional neural networks to be well suited for this task. To determine the best architecture, we test and compare state-of-the-art backbones and two different heads. Finally, we achieve a Jaccard index of 73.74% on the challenge test set. We further demonstrate that bigger networks do not always perform better, while growing network size always increases computational complexity.
Ensemble based method for the classification of flooding event using social media data
Paper: http://ceur-ws.org/Vol-2882/paper37.pdf
YouTube: https://youtu.be/4ROoOzdQzEI
Muhammad Hanif, Huzaifa Joozer, Muhammad Atif Tahir and Muhammad Rafi : Ensemble based method for the classification of flooding event using social media data. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper presents the method proposed and implemented by team FAST-NU-DS in the Flood-related Multimedia Task at MediaEval 2020. The task provides tweets in the Italian language, extracted during floods between 2017 and 2019. The proposed method uses the text of each tweet and its associated image for binary classification: identifying whether or not a particular tweet is about a flood incident. We designed an ensemble-based method for classifying tweets on the basis of textual data, visual data, and their combination. For visual data, we used data augmentation to oversample the minority class, applied stratified random sampling to select the input, and employed a Visual Geometry Group (VGG16) convolutional neural network pretrained on ImageNet and Places365. For textual data, Term Frequency-Inverse Document Frequency (TF-IDF) is used for feature representation and a Multinomial Naive Bayes classifier for class prediction. The image and text predictions are combined to produce the prediction for each instance. Evaluation of the method yielded F1-scores of 36.31%, 20.76% and 27.86% for text, image, and the combination of both, respectively.
Presented by: Muhammad Hanif
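The text/image combination and the F1 evaluation used across these flood-detection abstracts can be sketched in a few lines of stdlib Python. The soft-vote averaging below and all probabilities are illustrative assumptions, not the team's exact combination rule:

```python
def f1_score(pred, truth, positive=1):
    """Binary F1: harmonic mean of precision and recall for the flood class."""
    tp = sum(1 for p, t in zip(pred, truth) if p == positive and t == positive)
    fp = sum(1 for p, t in zip(pred, truth) if p == positive and t != positive)
    fn = sum(1 for p, t in zip(pred, truth) if p != positive and t == positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def combine(text_probs, image_probs, threshold=0.5):
    """Soft-vote fusion: average the per-tweet flood probabilities from the
    text and image classifiers, then threshold into a binary label."""
    return [1 if (t + i) / 2 >= threshold else 0
            for t, i in zip(text_probs, image_probs)]

# Toy classifier outputs for four tweets and their true labels.
text_probs  = [0.9, 0.2, 0.6, 0.4]
image_probs = [0.7, 0.4, 0.3, 0.8]
truth       = [1, 0, 0, 1]
pred = combine(text_probs, image_probs)
print(pred, round(f1_score(pred, truth), 2))
```

A weighted average (trusting the stronger text modality more) would be the natural next step given the per-modality F1 gap reported above.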
Flood Detection via Twitter Streams using Textual and Visual Features
Paper: http://ceur-ws.org/Vol-2882/paper35.pdf
Firoj Alam, Zohaib Hassan, Kashif Ahmad, Asma Gul, Michael Reiglar, Nicola Conci and Ala Al-Fuqaha : Flood Detection via Twitter Streams using Textual and Visual Features. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The paper presents our proposed solutions for the MediaEval 2020 Flood-Related Multimedia Task, which aims to analyze and detect flooding events in multimedia content shared over Twitter. In total, we proposed four different solutions including a multi-modal solution combining textual and visual information for the mandatory run, and three single modal image and text-based solutions as optional runs. In the multi-modal method, we rely on a supervised multimodal bitransformer model that combines textual and visual features in an early fusion, achieving a micro F1-score of .859 on the development data set. For the text-based flood events detection, we use a transformer network (i.e., a pretrained Italian BERT model) achieving an F1-score of .853. For image-based solutions, we employed multiple deep models, pre-trained on both the ImageNet and Places data sets, individually and combined in an early fusion, achieving F1-scores of .816 and .805 on the development set, respectively.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adaptive Optics (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Seminar on U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4−0.9 µm) and novel JWST images with 14 filters spanning 0.8−5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3−31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5−15. These objects show compact half-light radii of R1/2 ∼ 50−200 pc, stellar masses of M⋆ ∼ 10⁷−10⁸ M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Insights for wellbeing: Predicting Personal Air Quality Index using Regression Approach
1. Insights for wellbeing: Predicting Personal Air Quality Index using Regression Approach
Prepared by:
Amel Ksibi1, Amina Salhi1, Ala Alluhaidan1, Sahar A. El-Rahman1,2
1 College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
2 Electrical Engineering Department, Faculty of Engineering-Shoubra, Benha University, Cairo, Egypt
3. MOTIVATION
• Providing air pollution information to individuals enables them to understand the air quality of their living environments.
• Open data, including weather data and air pollution data collected over the city, have been investigated widely for the general population.
• There is a shortage of work on determining the quality of air at a personal scale.
• The research question is whether we can use only data from open sources to predict personal air pollution data.
4. MEDIAEVAL 2020: INSIGHTS FOR WELLBEING CHALLENGE
Task 1: Personal Air Quality Prediction with public/open data
Predicting the value of personal air pollution data (PM2.5, O3, and NO2) using only:
• weather data (wind speed, wind direction, temperature, humidity)
• air pollution data (PM2.5, O3, and NO2)
from public/open data sources (e.g., stations, website).
5. PROPOSED SOLUTION
Data pre-processing:
• Solving missing data using the KNN imputation method
• Time features extraction
• Features selection
Learning a voting regression model:
• Gradient Boosting regressor
• Random Forest regressor
• Linear regressor
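The two steps on this slide can be sketched with stdlib-only Python. This is a hedged illustration, not the authors' code (they likely used library implementations such as scikit-learn's, which is an assumption): the distance function, the choice of k, and the plain averaging in the voting step are all simplifications.

```python
from math import sqrt, isnan, nan

def knn_impute(rows, k=2):
    """Fill each missing value (nan) with the mean of that feature over the
    k nearest rows; distance uses only features observed in both rows."""
    def distance(a, b):
        shared = [(x, y) for x, y in zip(a, b) if not isnan(x) and not isnan(y)]
        if not shared:
            return float("inf")
        return sqrt(sum((x - y) ** 2 for x, y in shared) / len(shared))

    filled = [row[:] for row in rows]
    for i, row in enumerate(rows):
        for j, value in enumerate(row):
            if isnan(value):
                donors = sorted(
                    (r for r in rows if r is not row and not isnan(r[j])),
                    key=lambda r: distance(row, r),
                )[:k]
                filled[i][j] = sum(r[j] for r in donors) / len(donors)
    return filled

def voting_predict(base_predictions):
    """Voting regression: average the per-sample predictions of the base
    regressors (e.g. gradient boosting, random forest, linear)."""
    return [sum(preds) / len(preds) for preds in zip(*base_predictions)]

# Toy sensor rows: [PM2.5, O3, NO2]; nan marks a missing reading.
data = [[10.0, 30.0, 20.0],
        [12.0, nan, 22.0],
        [11.0, 32.0, 21.0],
        [50.0, 80.0, 60.0]]
print(knn_impute(data, k=2)[1][1])
```

Here the missing O3 reading is filled from the two rows whose observed PM2.5 and NO2 values are closest, so a distant outlier row does not contaminate the estimate.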
6. EXPERIMENTAL RESULTS
Table 1: Official results of the submitted run

              PM2.5 RMSE   NO2 RMSE   O3 RMSE   AQI RMSE
AVG walkers   35.34        25.98      12.08     35.39
AVG car       40.93        25.02      35.98     51.16
• The performance of the learning model depends on the way the data are collected:
o Results from data collected by a walker outperform the results from data collected by a car.
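The RMSE values in Table 1 follow the standard definition; a minimal sketch with toy numbers, not the task data:

```python
from math import sqrt

def rmse(predicted, actual):
    """Root mean squared error between predicted and observed values."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Toy AQI predictions vs. observations.
print(rmse([30.0, 40.0, 50.0], [32.0, 38.0, 55.0]))
```

Because errors are squared before averaging, a few large misses dominate the score, which is one plausible reason the faster-moving car data scores worse than the walker data.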
7. CONCLUSION & FUTURE WORK
• This was our first participation in the MediaEval challenges.
• Our solution was based on the KNN imputation method and a voting regression approach.
• The features used are time and location features, since weather features did not improve the results on the training dataset.
• As future work, we will investigate further:
• time features (seasons, periods of day, …) to consolidate the performance of our regression model
• lifelog images, mainly for estimating the level of haze, smoke and greenness within the photos.
Editor's Notes
Good evening.
My name is Amel Ksibi, from PNU University.
I will present our topic, which is Insights for wellbeing: Predicting Personal Air Quality Index using Regression Approach.
Our presentation consists of motivation, the challenge, the proposed solution, experiments and results, and finally conclusion and future work.
Air pollution remains a crucial subject of study since it has a substantial impact on public health.
So, providing air pollution information to individuals enables them to understand the air quality of their living environments.
Over the past 40 years, air quality prediction has been of interest; however, all previous studies focused only on determining air pollutant values at the city scale for the general population.
So, there is a shortage of work on determining the quality of air at a personal scale. Thus, the research question is whether we can use only data from open sources to predict personal air pollution data.
This question has been proposed by the MediaEval 2020 Insights for Wellbeing: Multimodal personal health lifelog data analysis challenge.
We participate in the first task, which aims to predict the value of personal air pollution data using only weather data and air pollution data from public data sources.
Our proposed solution consists of two steps: data preprocessing and learning a voting regression model.
Concerning the first step, we applied the KNN imputation method to solve the problem of missing data, then performed time feature extraction to extract the month, day and hour features. After that, we tried different combinations of features to determine the impact of each feature type. We found that weather data did not improve the performance of the learning model.
Concerning the second step, we learn a voting regression model based on three base regressors: a gradient boosting regressor, a random forest regressor and a linear regressor.
The obtained results are shown in the table. As we can see, estimating O3 is the easiest task, while estimating PM2.5 is the hardest.
Also, we can note that the results from data collected by a walker outperform the results from data collected by a car, so the performance of the learning model depends on the way the data are collected.
As future work, we will investigate further:
time features (seasons, periods of day, …) to consolidate the performance of our regression model;
and lifelog images, mainly for estimating the level of haze, smoke and greenness within these photos.