Biosafety is defined as a set of preventive measures aimed at reducing the risk of infectious diseases spreading to crops and animals, for example through quarantine measures and pesticides. Prolonged and sustained overheating of the sea creates significant habitat losses, resulting in the proliferation and spread of invasive species, which move into foreign areas typically in search of cooler waters. This is one of the most important modern threats to marine biosafety. The research effort presented herein proposes an innovative approach to marine species identification that employs an advanced intelligent Machine Hearing Framework (MHF). The final target is the identification of invasive alien species (IAS) based on the sounds they produce. This classification can provide significant aid towards the protection of biodiversity and can help achieve overall regional biosecurity. Hearing recognition is performed using the Online Sequential Multilayer Graph Regularized Extreme Learning Machine Autoencoder (MIGRATE_ELM). The MIGRATE_ELM uses an innovative Deep Learning algorithm (DELE) that is applied for the first time for this purpose. The assignment of the corresponding class, ‘native’ or ‘invasive’, in its locality is carried out by an equally innovative approach entitled ‘Geo Location Country Based Service’, proposed by our research team.
Nano-approach towards Sustainable Agriculture and Precision Farming (IJAEMSJORNAL)
The document discusses various applications of nanotechnology in agriculture, including smart field systems that use nanosensors to detect issues and apply treatments precisely, smart dust technology that utilizes small sensor motes to monitor crop and soil conditions, and bioanalytical nanosensors that can detect pathogens and contaminants. It also explores how nanotechnology allows for more targeted delivery of fertilizers, pesticides, and herbicides as well as improvements to crop traits through techniques like nanoparticle-mediated gene therapy.
WI-ALERT: A WIRELESS SENSOR NETWORK BASED INTRUSION ALERT PROTOTYPE FOR HEC (ijdpsjournal)
A wired fence based intrusion detection and alerting mechanism for boundaries separating wildlife habitats and human settlements was implemented, particularly as a solution to the Human-Elephant Conflict (HEC). The objective of the research reported in this paper is to propose and verify an alternative technique to this wired fence based alerting mechanism, to overcome its limitations and improve its effectiveness.
This article presents a comprehensive study of alternative solutions with deliberate consideration of the practical constraints. Wi-Alert is a wireless sensor network based intrusion detection system proposed as the best alternative solution. This article reports the outcomes of the first two phases of ongoing developments of Wi-Alert.
The first phase of experiments was conducted to investigate the multi-path effect reduction techniques at one site. In the next phase, experiments were conducted to verify the ability to detect elephants. The results obtained via the candidate techniques are compared. Both experiments confirm the feasibility of the prototype as a non-invasive method to detect elephants.
Cybernetics is the interdisciplinary study of regulatory systems, their structures, constraints, and possibilities. Cybernetics was defined by Norbert Wiener in 1948 as “the scientific study of control and communication in the living organism and the machine”. The study of cybernetics includes, but is not limited to, artificial learning, adaptation, cognition, convergence, social control, efficacy, efficiency, connectivity, and communication [1]. Known from science fiction, technically modified organisms with exceptional abilities are called cyborgs, a term derived from “cybernetic organism”. As a matter of fact, cyborgs that combine technical systems with living organisms are already a reality. For instance, smart machines that adapt spontaneously to changing dynamic conditions, computer-supported design and fabrication based on magnetic tomography datasets, and surface modifications for enhanced tissue integration have allowed major developments in cybernetics technology [2,3].
This article discusses concerns around the environmental and health impacts of engineered nanotechnology (ENT) particles. It notes that while ENT has benefits, its effects are still not well understood as it is a new field. The article questions if enough is known about ENT's impacts to ensure safety. It examines contradictory research on ENT's environmental impacts and toxicity. It raises concerns that ENT particles could accumulate and have unknown long term effects on climate or enter the food chain. The article argues more research is needed on ENT's environmental and health impacts before widespread use to assure public safety.
A Study of Mixed Reality to Preserve a Malaysian Herb (Jazmi Jamal)
The document discusses a study on using mixed reality to digitally preserve a Malaysian herb. It aims to identify effective mixed reality delivery methods for teaching about a herb's physical characteristics, and determine design guidelines for presenting herb visuals. The study will focus on the Cat's Whiskers herb, reviewing literature on mixed reality in education, herb morphology, and design best practices to develop guidelines for an interactive mixed reality prototype.
IRJET-A Review on Fungus Mediated Nanoparticles in the Control of Dengue Vect... (IRJET Journal)
1. The document reviews the use of fungus-mediated nanoparticles for controlling the dengue vector Aedes aegypti.
2. Fungi such as Aspergillus species have been shown to extracellularly synthesize metallic nanoparticles like silver, gold, copper, and zinc through enzymatic reduction of metal ions.
3. The fungus Aspergillus niger in particular secretes reducing agents that can convert silver nitrate to silver nanoparticles, making it a potential source for eco-friendly larvicidal nanoparticles against Aedes aegypti.
1 Environmental Sustainability Ws Tony Vetter (guest17df6)
The document discusses how mobile applications can promote environmental sustainability. It describes several examples of applications that use mobile phones to track wildlife, engage citizens in environmental activism and monitoring, and enable participatory urban sensing projects. Some challenges to wider adoption are ensuring privacy, accuracy of data, and preventing misuse of sensor systems. Mobile technologies are a promising tool to address sustainability issues through observation, public engagement, and coordinated action on environmental threats.
Malaria is a killer disease. Since 2000 there has been a massive expenditure on reducing the prevalence of malaria, but the results achieved have been quite modest relative to the cost. Using an integrated approach would be more cost effective. This slideset describes the various components that go into an integrated approach to mosquito and malaria control.
The document discusses the Environmental Information System (ENVIS) in India. It begins by acknowledging the support of an environmental science teacher. It then provides an overview of ENVIS, including its objectives to build an environmental information repository and strengthen information collection, processing and dissemination capabilities.
ENVIS operates as a decentralized network of specialized centers covering different environmental subject areas. The key roles of ENVIS centers and nodes are to establish information sources, create databases on assigned topics, identify information gaps, publish newsletters and bulletins, and serve as an interface for users.
Specific details are provided about the focal point located in the Ministry of Environment and Forests, which works to build the information base and respond to queries.
Machine learning applications to non-destructive defect detection in horticul... (DASHARATHMABRUKAR)
Machine learning methods have become useful tools for the non-destructive detection of defects in horticultural products when used in conjunction with sensing devices. This review examines the recent advances in using machine learning with various sensing techniques to detect defects in fruits and vegetables. Specifically, it discusses the role machine learning has played in addressing issues like achieving fast, early, and quantitative defect assessments. However, limitations still exist and future opportunities are needed to further improve machine learning's ability to help solve technical challenges in non-destructive defect detection for horticultural products.
Animal Repellent System for Smart Farming Using AI and Deep Learning (IRJET Journal)
This document summarizes a research paper on developing an animal repellent system for smart farming using artificial intelligence and deep learning. The system uses a camera to collect animal data, which is then classified using a deep convolutional neural network model trained on the data. It can identify different animal species in real-time. When an animal is detected, it sends an alert message and produces the appropriate ultrasonic frequency to repel that species. Testing on an animal dataset showed the CNN achieved over 98% accuracy in identifying animals. The system provides a real-time monitoring solution using AI to help farmers prevent crop damage from animals.
The document discusses several topics related to using geospatial data and modeling for agricultural research and development in Africa. It describes index-based livestock insurance (IBLI) being piloted in Northern Kenya to protect pastoralists from drought losses. It discusses how normalized difference vegetation index (NDVI) data is used to develop a predictive livestock mortality index for IBLI. It also discusses downscaling global climate models to generate high-resolution climate projections and weather data to assess local impacts of climate change on agriculture. Finally, it outlines how ILRI is targeting its work, taking a systems approach, and aiming to have a forward-looking perspective.
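As background to the mortality index mentioned above, NDVI is a standard vegetation index computed from red and near-infrared reflectance, NDVI = (NIR − Red) / (NIR + Red); a minimal sketch with made-up reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Made-up reflectance values: dense vegetation reflects strongly in NIR
dense_veg = ndvi(nir=0.50, red=0.08)   # high NDVI
bare_soil = ndvi(nir=0.30, red=0.25)   # low NDVI
```

NDVI ranges from −1 to 1, with healthy vegetation typically well above bare soil or water, which is why NDVI time series can proxy for forage availability in an index like IBLI's.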
This document discusses the role of information technology in environmental and human health. It describes how biological databases like NCBI store genomic data for research. Biometrics uses human characteristics for identification. Bioinformatics applies computer science to biological problems. Telemedicine uses technology to provide healthcare from a distance. Other areas discussed include biomechanics, spreading health awareness through media, using programs to control biological equipment, and environmental applications like remote sensing, GIS, and databases to study and monitor the environment.
Global biodiversity data is critical for conservation, policymaking, and scientific research. However, most data is held by small, isolated publishers and is difficult to access. The Global Biodiversity Information Facility (GBIF) aims to mobilize this "small data" by creating common data standards and tools to publish data through its Integrated Publishing Toolkit. This allows data to be discovered through GBIF's portal and used for applications like predicting climate change impacts and invasive species spread. GBIF calls on all data holders to publish their data openly through its framework to build a comprehensive global resource for biodiversity data.
Drought is possibly the most complex and least understood of natural hazards. The effects of drought accumulate slowly and linger for years. It is estimated that 380 million people, 38% of the world’s rural poor, live in the arid and semi-arid tropics (SAT). Of those who are vulnerable to drought, more than 90% are either smallholder farmers or landless laborers. The Committee on Science and Technology for the United Nations Convention to Combat Desertification, in its fifth session last year, issued a note on strategies for communicating relevant information on combating the effects of drought.
BIOINFORMATICS AND ITS APPLICATIONS IN ENVIRONMENTAL SCIENCE AND HEALTH AND I... (Dr Varruchi Sharma)
Bioinformatics, in integration with computational biology, is a novel field that applies computing to biology, enabling biologists to make detailed use of biological data for its advancement. In bioinformatics, computers are used for the storage, processing, analysis, and retrieval of large amounts of biological and genomic data. In recent years, the field of bioinformatics has been gaining more interest. Previously, the methodology adopted by researchers to generate, collect, and analyze various types of scientific data was time-consuming and quite expensive. With the help of computational tools and techniques, software, and databases, one can now process a large amount of biological data in a short span, as in computer-aided drug designing (CADD). Protecting the environment is among the most challenging problems in today's world, and the problems associated with its protection and planning can be addressed with the best of information technology.
Predicting the misconception of dengue disease based on the awareness survey (journalBEEI)
Mosquito-borne diseases such as dengue fever are a pervasive public health problem around the world, and further investigation is needed to rectify misunderstandings of the disease among communities. This requires personalized information delivery, which can effectively fix the problem. The process of personalizing information requires several major steps: (i) determine the attributes that will be used to interpret a person, (ii) select an algorithm that will accurately and efficiently classify the person according to the retrieved background information, and (iii) recommend the correct information to rectify the particular misunderstanding. This research paper considers the first two steps. First, data regarding knowledge, attitudes, and prevention practices are drawn from the established literature, where some variables have a significant impact on the predictive model. In the second step, five machine learning algorithms were tested for the classification task. The results indicate that the Support Vector Machine and Decision Tree algorithms provide the best performance in classifying a person's understanding of dengue fever.
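The survey features and dataset are not reproduced here; as a generic, hypothetical sketch of the classification step described (SVM and Decision Tree via scikit-learn, on synthetic stand-in data rather than the actual knowledge/attitudes/practices survey responses):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for survey responses (the real features are KAP survey items)
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                    # Support Vector Machine
tree = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)  # Decision Tree

svm_acc = svm.score(X_te, y_te)
tree_acc = tree.score(X_te, y_te)
```

In practice the comparison would be run on the held-out survey data, with the better-scoring of the candidate classifiers chosen for the recommendation step.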
This document summarizes the transition from clinical information systems to health grids and the future of health research infrastructure. It discusses trends like rising populations in Asia, increasing resource scarcity, and the need for multidisciplinary and open collaboration. Health grids are presented as enabling virtual collaborations across institutions. Key areas like medical imaging, computational models, and genomic medicine are highlighted. Adoption challenges and requirements like reliable, usable infrastructure are also summarized.
Artificial Intelligence And Water Cycle Management (Jennifer Daniel)
This document discusses how artificial intelligence can help manage water resources more effectively by processing large amounts of data. AI applications have potential in areas like monitoring water quality and quantity, detecting illegal dumping or changes in water bodies, and improving the efficiency of water treatment plants. By using sensors and data from smart homes, AI systems can also help optimize water distribution networks and consumption patterns. Overall, AI can support more sustainable water management through integrated analysis of environmental information across different sectors.
Report-Fog Based Emergency System For Smart Enhanced Living Environment (KEERTHANA M)
An ambient assisted-living emergency system exploits cloud and fog computing, an outdoor positioning mechanism, and emergency and communication protocols to locate activity-challenged individuals.
IT infrastructure includes computers, networks, operating systems, applications, databases, storage, servers, and telecommunications technologies that support an organization. IT has significantly improved environmental education, human health, business, economics, culture, and politics by facilitating the collection and management of interrelated data on various subjects in computerized databases. This data is easily retrievable and has had a stronger influence on the environment and health through analysis software, geographic information systems, and data transmitted via satellites. The Ministry of Environment and Forests in India maintains conservation, wildlife, and forest cover databases, as well as databases for diseases like HIV/AIDS, malaria, and fluorosis.
The document presents a novel bio-computational model called "Sequence Miner" to analyze and classify dengue virus gene sequences. The model uses periodic association rules to identify co-occurrence patterns in dengue gene sequences and visualize classification results through an interactive tool. When tested on over 10,000 dengue virus sequences, the model accurately classified 96.74% of the sequences.
This document summarizes a student project on developing a bio-cryptographic approach to improve the security of electronic health records (EHRs). The project aims to use biometrics like iris scans combined with cryptography to encrypt EHRs. Students developed a prototype system using iris scans to encrypt EHR data, which is then stored in a fuzzy vault. The system was tested to register and authenticate users via iris scans to access encrypted health records. The study concludes that applying bio-cryptography can significantly improve EHR security and benefit the healthcare sector by preventing unauthorized access to sensitive patient information.
Digital transformation in plant protection leads to
o Increased efficiency: Reduced manual labour and operational costs, improved resource allocation, and optimised workflows.
o Data-driven decision making: Farmers can make more informed choices based on data-driven insights, leading to better pest and disease management strategies.
o Automation and predictive analytics: Automation of tasks like pesticide application has reduced human error and resource waste. Predictive analytics models optimise preventive measures.
o Monitoring: Digital solutions enable real-time monitoring by using cell phones.
o Knowledge sharing and innovation: Rapid sharing of knowledge, best practices, and information among farmers, researchers, and stakeholders is possible.
Also, digital transformation opens up avenues for communication among farmers, scientists, and government bodies, resulting in a multitude of indirect benefits: scientists gain better data access, governments improve their policy-making processes, and farmers attain increased crop productivity.
Commentary: Aedes albopictus and Aedes japonicus—two invasive mosquito specie... (Konstantinos Demertzis)
In this interesting and original study, the authors present an ensemble Machine Learning (ML) model for predicting habitat suitability, which is affected by the complex interactions between living conditions and the climate factors governing survival and spread. The research focuses on two of the most dangerous invasive mosquito species in Europe, identifying their requirements in terms of temperature and rainfall conditions. Though it is an interesting approach, the ensemble ML model is not presented and discussed in sufficient detail, and thus its performance and value as a tool for modeling the distribution of invasive species cannot be adequately evaluated.
A Dynamic Intelligent Policies Analysis Mechanism for Personal Data Processin... (Konstantinos Demertzis)
The document describes an Intelligent Policies Analysis Mechanism (IPAM) that is part of the ADVOCATE framework. IPAM uses machine learning methods like Fuzzy Cognitive Maps and Extreme Learning Machines to identify potentially conflicting rules or consents from a user that could lead to personal data collection and profiling without consent. The framework aims to help users maintain control over their personal data as required by GDPR regulations. IPAM simulates how smart devices collect personal data and identifies rules that may enable profiling, training on example data to learn how to detect such instances.
Similar to “Extreme Deep Learning in Biosecurity: The Case of Machine Hearing for Marine Species Identification”
Report-Fog Based Emergency System For Smart Enhanced Living EnvironmentKEERTHANA M
Report-An ambient assisted-living emergency system exploits cloud and fog computing, an outdoor positioning mechanism, and emergency and communication protocols to locate activity-challenged individuals.
IT infrastructure includes computers, networks, operating systems, applications, databases, storage, servers, and telecommunications technologies that support an organization. IT has significantly improved environmental education, human health, business, economics, culture, and politics by facilitating the collection and management of interrelated data on various subjects in computerized databases. This data is easily retrievable and has produced higher influence on the environment and health through software for analysis, geographic information systems, and data transmitted via satellites. The Ministry of Environment and Forests in India maintains conservation, wildlife, and forest cover databases, as well as databases for diseases like HIV/AIDS, malaria, and fluorosis.
The document presents a novel bio-computational model called "Sequence Miner" to analyze and classify dengue virus gene sequences. The model uses periodic association rules to identify co-occurrence patterns in dengue gene sequences and visualize classification results through an interactive tool. When tested on over 10,000 dengue virus sequences, the model accurately classified 96.74% of the sequences.
This document summarizes a student project on developing a bio-cryptographic approach to improve the security of electronic health records (EHRs). The project aims to use biometrics like iris scans combined with cryptography to encrypt EHRs. Students developed a prototype system using iris scans to encrypt EHR data, which is then stored in a fuzzy vault. The system was tested to register and authenticate users via iris scans to access encrypted health records. The study concludes that applying bio-cryptography can significantly improve EHR security and benefit the healthcare sector by preventing unauthorized access to sensitive patient information.
Digital transformation in plant protection leads to
o Increased efficiency: Reduced manual labour, operational costs, improved resource allocation, and optimised workflows.
o Data driven decision making: Farmers can make more informed choices based on data-driven insights, leading to better pest and disease management strategies.
o Automation and predictive analytics: Automation of tasks like pesticide application has reduced human error and resource waste. Predictive analytics models optimise preventive measures.
o Monitoring: Digital solutions enable real-time monitoring by using cell phones.
o Knowledge sharing and innovation: Rapid sharing of knowledge, best practices, and information among farmers, researchers, and stakeholders is possible.
Also, digital transformation opens up avenues for communication among farmers, scientists, and government bodies, resulting in a multitude of indirect benefits: scientists gain better data access, governments improve their policy-making processes, and farmers attain increased crop productivity.
directly related to sustainability in areas such as agriculture and food safety, and it refers to the protection of the environment, focusing mainly on biodiversity. The threats to biosecurity are related to small-scale risks that emerge rapidly, making the application of an effective policy a very challenging task. This is due to limitations of time and resources, which make the analysis and assessment of threat likelihood a tedious task.
IAS are a result of generalized climate change, and they constitute a serious and rapidly worsening threat to natural biodiversity in Europe. They enter new foreign habitats, where they can stifle the native flora or fauna, causing serious harm to the environment (Demertzis & Iliadis, 2017a). The impacts are socio-economic, with negative consequences for public health, fisheries, agriculture and, more generally, food production. The invasion of these species is the second biggest threat to local biodiversity worldwide and is called ‘bio-pollution’. The impact of IAS and their rapid expansion into new seas have direct economic consequences in various areas: the risk of indigenous species’ extinction entails costs to restore the natural balance, while the expansion of organisms harmful to human health might lead, in the mid-term, to a reduction in tourist development. Ecological impacts such as food-web disruption are very important, and we cannot ignore the risk of introducing new diseases that can destroy sensitive indigenous species. The changes in biodiversity entail a change in the relative abundance of species. Moreover, the risk to public health should not be overlooked, as these species may be toxic; the ‘Lagocephalus’ fish, for example, contains ‘Tetrodotoxin’, a very dangerous substance capable of causing serious health problems, even death (Demertzis & Iliadis, 2015, 2017b). The European Union spends at least 12 billion Euros per year trying to control IAS and to overcome their consequences.
1.2. Our research approach
Computational Intelligence (COIN) introduces modern IT technologies and innovations. These automated high-tech solutions create the preconditions for designing proper biosecurity and biodiversity protection methods that can evolve and reform the existing framework. They literally reinforce and simplify IAS detection processes with Machine Learning (ML) identification systems. They allow fast and easy data collection, and thus the logging of indigenous and non-resident populations in an area, through an automated process with low financial requirements. Moreover, they create proper conditions for studying the behaviours of different species and their seasonal variation; in other words, they help to accurately map the overall invasions. In this way, they contribute substantially to slowing the uncontrolled expansion of IAS. The significance of these intelligent models for the identification and classification of IAS has been supported by several recent studies (Demertzis & Iliadis, 2015; Demertzis & Iliadis, 2017b, 2017c; Demertzis, Iliadis, & Anezakis, 2017).
This study proposes a new identification method for IAS, which is presented for the first time in the international literature. It is an advanced intelligent Deep Extreme Learning Machine framework (DELM) for Marine Species Identification aided by Machine Hearing (MAHE). The proposed framework uses the Online Sequential Multilayer algorithm (OSML) plus a Graph Regularized Extreme Learning Machine Autoencoder (GRELMA). It is a highly efficient and original DELE architecture that uses a Multilayer Extreme Learning Machine (MELM) with online learning classification capabilities.
JOURNAL OF INFORMATION AND TELECOMMUNICATION 493
Checking whether the recognized item is native or not in its locality is carried out by an innovative Geo Location Country Based Services (GCBS) approach (Demertzis et al., 2017).
The implementation of the proposed framework was based on the DELE philosophy. An important aspect of the framework is the use of the ELM, which has proven capable of solving a multidimensional and complex IT problem. The Deep ELM simulates the functioning of biological brain cells in a most realistic manner. This creates the potential for a fully automated configuration of the model with high classification accuracy. An innovative aspect of this work is the development of this model using the Online Sequential Multilayer algorithm plus a Graph Regularized Extreme Learning Machine Autoencoder, which aims to optimize the choice of the input-layer weights and biases in order to achieve a higher level of generalization. This method combines two highly effective machine learning algorithms to solve a multidimensional and complex machine hearing problem. Another interesting point is the performance of feature extraction using an intelligent method of audio signal analysis. The proposed method supports the autonomous operation of the system, comprising the geolocation capability (using global positioning systems) of the identified species and the native ones. This system infuses artificial intelligence into digital sensors that can easily (quickly and at low cost) identify invasive or rare species on the basis of their phenotypes, which would strengthen biosecurity programmes.
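To make the autoencoder idea concrete, the following is a minimal, hypothetical Python/NumPy sketch of a plain ELM autoencoder layer. It is not the authors’ OSML or GRELMA implementation (in particular, it omits the graph-regularization and online-sequential components) and only illustrates how such a layer learns its reconstruction weights analytically and then reuses them to encode features for the next layer of a deep ELM:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_ae_layer(X, L, C=1e3):
    """One plain ELM-autoencoder layer: reconstruct X, return encoded features."""
    d = X.shape[1]
    W = rng.normal(size=(d, L))      # random, untuned input weights
    b = rng.normal(size=L)           # random hidden biases
    H = np.tanh(X @ W + b)           # random hidden-layer mapping
    # ridge-regularized analytic solution of H @ beta ≈ X
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ X)
    return X @ beta.T                # encode inputs with the learned weights

X = rng.normal(size=(100, 8))
features = elm_ae_layer(X, L=16)     # a deep ELM would stack such layers
```

Stacking several such layers, each encoding the previous layer’s output, gives the multilayer structure referred to above; each layer is still solved analytically rather than by back-propagation.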
1.3. Literature review
Through a series of new learning architectures and algorithms, the field has been transformed; DELE methods are now the state of the art in object, speech and audio recognition. Deng and Yu (2013) proposed methods and applications of DELE. A large deep convolutional neural network was trained (Krizhevsky, Sutskever, & Hinton, 2012) to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes. In recent years, Convolutional Neural Networks (CNNs) have become very popular and have achieved great success in many computer vision tasks, particularly in object recognition. Cellular Simultaneous Recurrent Networks (CSRNs) were applied (Alom, Alam, Taha, & Iftekharuddin, 2017) to generate the initial filters of CNNs for feature extraction, with Regularized Extreme Learning Machines (RELM) used for classification. Experiments were conducted on three popular object recognition data sets (face, pedestrian, and car) to evaluate the performance of the proposed system.
Zhang, Lee, Zhang, Tippetts, and Lillywhite (2016) presented an object recognition algorithm for fish species classification that constructs efficient features automatically, without relying on human experts to design them. Experimental results showed that the proposed method obtained an average classification accuracy of 98.9%, with a standard deviation of 0.96%, on a data set comprising 8 fish species and a total of 1049 images. Moreover, DELE has been the driving force behind large leaps in accuracy and model robustness in audio-related domains, like speech recognition. Hinton et al. (2012) presented four successful DELE research approaches to acoustic modelling in sound recognition, namely: Bing voice-search speech recognition, Switchboard speech recognition, Google voice-input speech recognition and YouTube speech recognition.
494 K. DEMERTZIS ET AL.
Moreover, Hinton et al. (2012) proposed the employment of Deep Neural Networks (DNNs) to extract high-level features from raw data and showed that they are effective for speech and emotion recognition. They first produced an emotion-state probability distribution for each speech segment using DNNs. They then constructed utterance-level features from the segment-level probability distributions. These features were fed into an Extreme Learning Machine (ELM), a particularly simple and efficient single-hidden-layer neural network, to identify utterance-level emotions. Automatic Sound Event Classification (SEC) has attracted growing attention in recent years. Feature extraction is a critical factor in an SEC system, and DNNs have actually achieved state-of-the-art performance for SEC. The Extreme Learning Machine Autoencoder (ELMAE) is a new DELE algorithm that combines excellent representation performance with a very fast training procedure. A bilinear multi-column ELMAE algorithm was proposed by Zhang, Yin, Zhang, Shi, and Li (2017) in order to improve the robustness, stability, and feature representation of the original approach. This method was applied to the feature representation of sound signals. Moreover, a similar ELMAE model, combined with a two-stage ensemble learning and classification framework, was developed to perform robust and effective automatic SEC.
Additionally, many studies on automatic audio classification and segmentation use several features and techniques. Zhao et al. (2017) proposed a new method for automated field-recording analysis with improved segmentation and robust bird species classification. They used a Gaussian Mixture Model (GMM) with an event-energy-based sifting procedure that selected representative acoustic events, and a Support Vector Machine (SVM) for classification.
1.4. Paper outline
The remainder of this paper is organized as follows: a general description of the theoretical background is presented in Section 2. The theoretical concepts of the proposed system are discussed in Section 3. The data sets used and the feature extraction method are described in Section 4. The marine species identification framework and the experimental setup used in this research are discussed in detail in Section 5. Section 6 presents the obtained results, which are compared to existing approaches. The conclusions are summarized and further discussed in Section 7, whereas future research is outlined in Section 8.
2. Theoretical background
2.1. Machine hearing
Machine Hearing (Lyon, 2017) is a scientific field of artificial intelligence that attempts to reproduce the sense of hearing algorithmically. Audio signal analysis is the general procedure that refers to the extraction of knowledge based on the correlation between the content and the nature of audio signals.
In general, the process involves the extraction of certain features whose values differentiate according to the content and the structure of the corresponding signals. Depending on the application, the final algorithmic step is classification, segmentation, automatic retrieval or synthesis.
2.2. Intelligent methods of audio signal analysis
A MAHE data collection can be so large, or so heterogeneous and complex in its structure, that traditional data-processing software cannot manage it. Solving a MAHE problem requires high availability of resources. Critical factors are the time complexity of the employed algorithms, memory availability as a function of the input data, and the individual requirements for other resources, such as the number of parallel processors required to solve the problem.
Therefore, MAHE cases require resolution with modern computational intelligence methods. The analysis of data extracted from audio fragments requires not only large storage space (Ramírez-Gallego, Fernández, García, Chen, & Herrera, 2018) but also the coordination of complex analysis processes that must be solved in polynomial time.
3. Theoretical background of the proposed system
3.1. A brief reference to the extreme learning machines
Before introducing our proposed algorithm, it is essential to discuss the existing basic theoretical framework. The question of whether a small Neural Network architecture can learn a lot, even from huge training data sets, was answered in the affirmative by the ELM. An ELM (Cambria & Guang-Bin, 2013) is a Single-Hidden Layer Feed Forward Neural Network (SLFFNN) with N hidden neurons, randomly selected input weights and random bias values in the hidden layer, while the weights at its output are calculated with a single multiplication of vector matrices. SLFFNNs are used in ELMs because of their ability to approximate any continuous function and to classify any disjoint regions. An ELM can accurately learn N samples, and its learning speed can be even thousands of times greater than that of conventional Back Propagation Feed Forward Neural Networks (BP_FFNN).
ELMs use the SLFFNN's general methodology, with the specificity that the Hidden Layer (feature mapping) is not required to be tuned. All hidden-layer parameters are independent from the activation functions and from the training data. ELMs can randomly create hidden nodes or hidden-layer parameters before seeing the training data, while it is remarkable that they can handle non-differentiable activation functions and do not suffer from well-known NN problems such as stopping criteria, learning rate and learning epochs (Cambria & Guang-Bin, 2013; Huang, 2014, 2015).
A mathematical basis has been provided for the understanding of ELM (Equations (1)–(9)) (Cambria & Guang-Bin, 2013; Huang, 2014, 2015).
For an ELM using an SLFFNN and a random representation of hidden neurons, the input data is mapped to a random L-dimensional space, given a discrete training set of N samples (x_i, t_i), i ∈ [1, N], with x_i ∈ R^d and t_i ∈ R^c. The output of the network is the following:

f_L(x) = Σ_{i=1}^{L} β_i h_i(x) = h(x)β,  i ∈ [1, N].  (1)
496 K. DEMERTZIS ET AL.
Vector matrix β = [β_1, …, β_L]^T is the output weight vector matrix connecting hidden and output nodes. On the other hand, h(x) = [g_1(x), …, g_L(x)] is the output of the hidden nodes for input x, and g_i(x) is the output of the ith neuron. Based on a training set {(x_i, t_i)}_{i=1}^{N}, an ELM can solve the learning problem Hβ = T, where the target labels (target outputs) are T = [t_1, …, t_N]^T and the output vector matrix of the Hidden Layer H is the following:
H(v_j, b_j, x_i) = ⎡ g(v_1·x_1 + b_1)  ⋯  g(v_l·x_1 + b_l) ⎤
                   ⎢        ⋮          ⋱         ⋮         ⎥
                   ⎣ g(v_1·x_N + b_1)  ⋯  g(v_l·x_N + b_l) ⎦ (N×l).  (2)
The input weight vector matrix of the hidden layer ω (before training) and the bias vectors b are created randomly in the interval [−1, 1], with v_j = [v_{j1}, v_{j2}, …, v_{jm}]^T and b_j = [b_{j1}, b_{j2}, …, b_{jm}]^T. The output vector matrix of the hidden layer H is calculated by applying the activation function to the training data set, based on the following function:

H = g(vx + b).  (3)
The output weights β can be estimated by using function (4):

β = (I/C + H^T H)^{−1} H^T X,  (4)

where H = [h_1, …, h_N] is the output vector matrix of the hidden layer and X = [x_1, …, x_N] is the input vector matrix of the hidden layer. Indeed, β can also be calculated by the following general relation (5):

β = H^+ T,  (5)

where H^+ is the Moore–Penrose generalized inverse of matrix H.
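The closed-form training described by Equations (1)–(5) can be condensed into a minimal NumPy sketch: a random hidden layer followed by a single Moore–Penrose pseudo-inverse. The toy data, layer width and tanh activation below are illustrative assumptions, not the configuration used in this research:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, L=20):
    """Train an ELM: random hidden layer, closed-form output weights (Eq. 5)."""
    d = X.shape[1]
    V = rng.uniform(-1, 1, (d, L))   # random input weights (never trained)
    b = rng.uniform(-1, 1, L)        # random hidden-layer biases
    H = np.tanh(X @ V + b)           # hidden-layer output matrix (Eq. 3)
    beta = np.linalg.pinv(H) @ T     # Moore-Penrose solution of H beta = T (Eq. 5)
    return V, b, beta

def elm_predict(X, V, b, beta):
    return np.tanh(X @ V + b) @ beta  # f_L(x) = h(x) beta (Eq. 1)

# Toy two-class problem: label is the sign of the sum of the inputs
X = rng.uniform(-1, 1, (200, 2))
T = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)
V, b, beta = elm_train(X, T)
acc = ((elm_predict(X, V, b, beta) > 0.5).astype(float) == T).mean()
```

Note that no iterative back-propagation step is involved; the only "training" is one linear solve, which is the source of the speed advantage discussed above.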
3.1.1. Description of the ELM model employed in our approach
This approach employs an ELM with a Gaussian Radial Basis Function (RBF) kernel K(u, v) = exp(−γ||u − v||²). The number of hidden neurons is k = 20. Subsequently, the assigned random input weights are w_i and the biases are denoted as b_i, i = 1, …, N.
To calculate the hidden layer output matrix H, we have used the following function (6):

H = ⎡ h(x_1) ⎤   ⎡ h_1(x_1)  ⋯  h_L(x_1) ⎤
    ⎢   ⋮    ⎥ = ⎢    ⋮      ⋱     ⋮     ⎥,  (6)
    ⎣ h(x_N) ⎦   ⎣ h_1(x_N)  ⋯  h_L(x_N) ⎦
where h(x) = [h_1(x), …, h_L(x)] is the output (row) vector of the hidden layer with respect to the input x. Moreover, h(x) maps the data from the D-dimensional input space to the L-dimensional hidden-layer feature space (ELM feature space) H. Thus, h(x) is indeed a feature mapping. The aim of the ELM is to minimize the training error as well as the norm of the output weights:

Minimize: ||Hβ − T||² and ||β||,  (7)

where H is the hidden-layer output matrix of function (6).
Minimization of the norm of the output weights ||β|| is actually achieved by maximizing the distance 2/||β|| separating the margins of the two different classes in the ELM feature space.
We used the following function (8) to calculate the output weights β:

β = (I/C + H^T H)^{−1} H^T T,  (8)

where the value of C (a positive constant) and the value of T are obtained from the function approximation of SLFFNNs with additive neurons:

t_i = [t_{i1}, t_{i2}, …, t_{im}]^T ∈ R^m  and  T = [t_1^T, …, t_N^T]^T.  (9)
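A sketch of the regularized solution of Equation (8) with an RBF hidden layer of k = 20 nodes, as in the described model; the centres, the γ and C values and the toy data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: label marks points inside the unit ball (illustrative only)
X = rng.uniform(-1, 1, (100, 3))
T = (np.linalg.norm(X, axis=1) < 1.0).astype(float).reshape(-1, 1)

k, gamma, C = 20, 1.0, 100.0
centres = X[rng.choice(len(X), k, replace=False)]  # assumed RBF centres

def rbf_hidden(X):
    # K(u, v) = exp(-gamma * ||u - v||^2) for each sample/centre pair
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

H = rbf_hidden(X)
# Eq. (8): beta = (I/C + H^T H)^(-1) H^T T, solved as a linear system
beta = np.linalg.solve(np.eye(k) / C + H.T @ H, H.T @ T)
pred = rbf_hidden(X) @ beta
```

The I/C term makes the system well-conditioned even when HᵀH is nearly singular, which is why the regularized form is preferred over the plain pseudo-inverse here.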
3.2. Online sequential extreme learning machines
The Online Sequential ELM (OSELM) (Huang, Liang, Rong, Saratchandran, & Sundararajan, 2005; Liang, Huang, Saratchandran, & Sundararajan, 2006) is an alternative technique for large scale computing and ML, which is employed when data becomes available in a sequential manner, in order to determine the mapping from the data set to the corresponding labels. The main difference between the Online Learning (ONL) and the Batch Learning (BL) techniques is that in ONL the mapping is updated after the arrival of every new data point in a scalable fashion, whereas BL techniques are used when one has access to the entire training data set at once. It is a versatile sequential Learning Algorithm (LA), because training observations are introduced sequentially one-by-one or chunk-by-chunk, with varying or fixed chunk length. At any moment, only the newly arrived single record or chunk is used and learned. A single record or a chunk of training observations is discarded as soon as the learning procedure for that particular observation(s) is completed. The LA has no prior knowledge as to how many training observations will be presented. Unlike other sequential learning algorithms, which have many control parameters to be tuned, the OSELM only requires the number of hidden nodes to be specified (Huang et al., 2005; Liang et al., 2006).
The OSELM consists of two main phases, namely the Boosting Phase (BPh) and the Sequential Learning Phase (SLPh). The BPh trains the SLFFNN using the primitive ELM method with a small batch of training data in the initialization stage. This set of training data is discarded as soon as the boosting phase is completed. The required batch of training data is very small; it can be equal to the number of hidden neurons (Huang et al., 2005; Liang et al., 2006).
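The two phases can be sketched in NumPy: a small initial batch solves the primitive ELM problem, after which each new observation updates the output weights through the standard recursive least-squares (RLS) update and is then discarded. The hidden-layer size, activation and toy data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 10                               # number of hidden nodes (the only OSELM parameter)
V = rng.uniform(-1, 1, (2, L))
b = rng.uniform(-1, 1, L)
hidden = lambda X: np.tanh(X @ V + b)

def target(X):
    return (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Boosting phase: initial batch of size equal to the number of hidden nodes
X0 = rng.uniform(-1, 1, (L, 2))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(L))  # small ridge for stability (assumption)
beta = P @ H0.T @ target(X0)

# Sequential phase: one-by-one RLS updates; each sample is discarded after use
for _ in range(500):
    x = rng.uniform(-1, 1, (1, 2))
    h = hidden(x).T                  # hidden-layer output as a column vector (L, 1)
    t = target(x)
    P = P - (P @ h @ h.T @ P) / (1.0 + (h.T @ P @ h).item())
    beta = beta + P @ h @ (t - h.T @ beta)
```

Because P and beta are updated in place, memory use stays constant no matter how many observations arrive, which is the point of the sequential phase.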
3.3. The innovative OSML-GRELMA proposed algorithm
Considering and combining the features of the ELM (presented above), we introduce and propose a new Deep architecture by creating an Online Sequential Multilayer Graph Regularized Extreme Learning Machine Auto-Encoder (OSML-GRELMA).
This is a multi-layered neural network model that receives successive online data streams and uses the unsupervised GRELMA algorithm as a basic building block, in which the outputs of each level are used as inputs to the next one (Sun, Zhang, Zhang, & Hu, 2017).
An autoencoder is an ANN used for the unsupervised learning of efficient codings. The aim of an autoencoder is to learn a representation (encoding) for a set of data, with the output layer having the same number of nodes as the input layer, and with the purpose of reconstructing its own inputs (instead of predicting a target value Y given inputs X). The algorithm is described below (Sun et al., 2017):
Algorithm 1. GRELMA Algorithm for Clustering (Sun et al., 2017)
Input: Data X = {x_i}_{i=1}^{N}, the number of hidden neurons n_h, the penalty coefficients κ and λ
Output: The cluster results.
Step 1: Initialize an ELM of n_h hidden neurons with random input weights and biases.
Step 2: If n_h ≤ N, compute the output weights β by employing the following equation:
          β* = (I_{n_h} + H^T C H + λ H^T L H)^{−1} H^T C X
        Else, compute the output weights β with the following equation:
          β* = H^T (I_N + C H H^T + λ L H H^T)^{−1} C X
Step 3: X_new = X β^T
Step 4: Treat each row of X_new as a point and cluster the N points into K clusters using the k-means algorithm.
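A simplified NumPy sketch of Algorithm 1, taking the penalty matrix C as the identity and building the graph Laplacian L from a dense Gaussian-similarity graph; the data, n_h and λ values are illustrative assumptions, and the k-means step is a minimal two-cluster loop rather than a library call:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated blobs (illustrative data only)
X = np.vstack([rng.normal(-2, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
N, nh, lam = len(X), 8, 0.1

# Graph Laplacian L = D - W from a Gaussian-similarity graph (assumption)
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2)
Lap = np.diag(W.sum(1)) - W

# Step 1: random ELM hidden layer
V = rng.uniform(-1, 1, (2, nh))
b = rng.uniform(-1, 1, nh)
H = np.tanh(X @ V + b)

# Step 2 (n_h <= N branch), with C taken as the identity:
# beta* = (I + H^T C H + lam * H^T L H)^(-1) H^T C X
beta = np.linalg.solve(np.eye(nh) + H.T @ H + lam * H.T @ Lap @ H, H.T @ X)

# Step 3: embed each point, Step 4: cluster with a minimal 2-means loop
Xnew = X @ beta.T
cent = Xnew[[0, -1]]                 # crude initial centroids
for _ in range(10):
    lab = np.argmin(((Xnew[:, None] - cent[None]) ** 2).sum(-1), axis=1)
    cent = np.array([Xnew[lab == k].mean(0) for k in [0, 1]])
```

The graph term λHᵀLH pulls the embeddings of similar inputs together, so the k-means step operates on a representation that already respects the data's neighbourhood structure.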
The overall function of the OSML-GRELMA is presented in Algorithm 2 below.
Algorithm 2. OSML-GRELMA Algorithm for Classification
Input: A small initial training set N = {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, …, Ñ}
  The model depth: m;
  The number of hidden nodes in each GRELMA: n_h1, n_h2, …, n_hm;
  The new activation function: h_new.
Output: The classification results of the M data.
Phase 1 (BPh) (Huang et al., 2005; Liang et al., 2006)
  Initialize X_1 = X_train
  For i = 1 : m
    Assign arbitrary input weights w_i and biases b_i of the ith layer GRELMA by using random numbers;
    Calculate the initial hidden layer output matrix H_0 = [h_1, …, h_Ñ]^T, where h_i = [h_new(w_1·x_i + b_1), …, h_new(w_Ñ·x_i + b_Ñ)]^T, i = 1, …, Ñ, and h_new is the activation function.
    Train the output weights β_i of the ith layer GRELMA;
    Estimate the initial output weight β^(0) = M_0 H_0^T T_0, where M_0 = (H_0^T H_0)^{−1} and T_0 = [t_1, …, t_Ñ]^T.
    Set k = 0.
    Compute the outputs X_{i+1} = h_new(X_i β_i^T).
Phase 2 (SLPh) (Huang et al., 2005; Liang et al., 2006)
  The essential step of this phase, for each further incoming observation (x_i, t_i), where x_i ∈ R^n, t_i ∈ R^m and i = Ñ+1, Ñ+2, Ñ+3, …, is described as follows:
    Calculate the hidden layer output vector h^(k+1) = [h_new(w_1·x_i + b_1), …, h_new(w_Ñ·x_i + b_Ñ)]^T
    Calculate the latest output weight β^(k+1) by using β̂ = (H^T H)^{−1} H^T T, which is known as the Recursive Least-Squares (RLS) algorithm.
    Set k = k + 1
  End For
Map X_{m+1}, the output of the mth layer, to the output layer.
Compute the classification results by using the above trained OSML-GRELMA model.
The main objective and training success of the proposed OSML-GRELMA approach is based on the evolutionary identification of the underlying structure of the input data flows, in order to produce the final model. It basically uses the knowledge of labelled data to investigate the distribution of the input data, aiming at enhancing the outcome of the learning process using an adaptive scheme. In this sense, it includes procedures that approach unsupervised learning, where the inputs come from the same marginal distribution or follow a common cluster structure.
3.4. The innovative proposed geolocation country-based services algorithm
The proposed GCBS algorithm performs the precise determination of the country to which the coordinates (obtained by a GPS device) refer each time. It was introduced by Demertzis et al. (2017). According to this method, the world borders of the states were originally taken as they appear in the shapefile available at the following URL: http://thematicmapping.org/downloads/world_borders.php.
The following example is a Python script that uses this shapefile to check that the coordinates 39.35230, 24.41232 belong to Greece1:
import countries
cc = countries.CountryChecker('TM_WORLD_BORDERS-0.3.shp')
print(cc.getCountry(countries.Point(39.35230, 24.41232)).iso)
# prints the ISO code corresponding to Greece
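Internally, such a country lookup reduces to a point-in-polygon test of the GPS coordinates against each country's border polygon from the shapefile. A minimal ray-casting sketch, where the rectangular "country" is invented for illustration and is not real border data:

```python
def point_in_polygon(lat, lon, poly):
    """Ray casting: count how many polygon edges a ray running east
    from the point crosses; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        (lat1, lon1), (lat2, lon2) = poly[i], poly[(i + 1) % n]
        if (lat1 > lat) != (lat2 > lat):          # edge straddles the point's latitude
            lon_cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < lon_cross:
                inside = not inside
    return inside

# A toy rectangular "country" roughly around the Aegean (illustrative only)
aegean_box = [(35.0, 22.0), (41.0, 22.0), (41.0, 27.0), (35.0, 27.0)]
print(point_in_polygon(39.35230, 24.41232, aegean_box))   # True
```

Real border polygons have thousands of vertices, so libraries such as the one above accelerate the test with bounding-box pre-checks before running the full crossing count.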
Experiments were performed on a PC with an i7 CPU at 3.6 GHz and 16 GB RAM. The feature extraction and classification processes ran under a Linux Ubuntu 16.04 LTS Operating System (OS), with PyLab (NumPy, SciPy, Matplotlib and IPython).
4. Data set used in this research
Bioacoustics is an interdisciplinary branch that combines biology with acoustics to study the production of sound, its propagation through elastic media, and its reception by animals. It deals with objective electrophysiological measurements performed on animals for the study of the auditory system, with emphasis on bioelectric potentials. The Sea Audio Dataset used in this study was created as an extension of a previous research effort of our team (Demertzis et al., 2017). More details are presented in the next paragraphs.
4.1. Underwater sounds
Four main categories of sounds have been determined in order to create highly complex scenarios. They can potentially include the most likely cases to be detected underwater. They are suitable for the training phase of the proposed framework. Graphs 1–3 are used to depict three of them below.
The first graph presents the four categories of underwater sounds.
The categories of the underwater sounds are presented below:
. Fish: Several species of fish produce sounds with various mechanisms such as the teeth, the pharynx, the fins, and the swim bladder. 1076 sounds belonging to 10 fish species have been included in this category (e.g. Bidyanus bidyanus, Epinephelus adscensionis, Cynoscion regalis, Carassius auratus, Cyprinus carpio, Rutilus rutilus, Salmo trutta, Oreochromis mossambicus, Micropterus salmoides, Oncorhynchus mykiss).
The second graph shows the categorization of the 1076 sounds belonging to 10 fish species.
. Mammals: Marine mammals produce and use sounds to orientate and communicate with each other. In total, 836 sounds belonging to 8 species of mammals are included in this category (e.g. Delphinus delphis, Erignathus barbatus, Balaena mysticetus, Phocoena phocoena, Neophocaena phocaenoides, Trichechus, Tursiops truncatus, Phoca hispida).
The third graph indicates the categorization of the 836 sounds belonging to 8 species of mammals.
Graph 1. The four categories of underwater sounds.
Graph 2. The categorization of 1076 sounds belonging to 10 fish species.
. Anthropogenic Sounds: This category comprises 684 sounds belonging to nine classes (Ship, Sonar, Zodiac, Torpedo, Wind Turbine, Scuba Noise, Bubble Curtain, Personal Water Craft, Airgun).
. Natural Sounds: This category includes a total of 477 sounds, which are classified in six clusters (Earthquake, Hydrothermal Vents, Ice Cracking, Rainfall, Lightning, Waves).
4.2. Feature extraction
The Feature Extraction process (Giannakopoulos, 2015) enables capturing the characteristics that precisely determine the uniqueness of each sound and helps to distinguish between acoustic categories. The distinction between categories is based on 34 characteristics related to statistical measurements obtained from the signal frequency information, while in general two stages are followed in the feature extraction methodology (Giannakopoulos, 2015):
. Short-term feature extraction: This method separates the input signal into short-term windows (frames) and calculates a series of attributes for each frame, thus producing the sequence of short-term feature vectors for the signal.
. Mid-term feature extraction: In many cases, the signal is represented by the relative statistics of the features extracted by the short-term operation described above. For this reason, the mid-term feature extraction function extracts a series of statistics (e.g. mean and standard deviation) over each short-term feature sequence.
In this research effort, we have extracted the short-term feature sequences for each audio signal, using a frame size of 50 msec and a frame step of 25 msec (50% overlapping). All sounds have a sampling rate of 44.1 kHz and 16-bit stereo resolution, while their average duration is 10.3 sec.
Graph 3. The categorization of 836 sounds belonging to 8 species of mammals.
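The two-stage scheme can be sketched in NumPy on a synthetic one-second tone; the zero-crossing rate and energy below stand in for the full 34-feature set computed by pyAudioAnalysis:

```python
import numpy as np

fs = 44100
win, step = int(0.050 * fs), int(0.025 * fs)   # 50 ms frames, 25 ms step (50% overlap)
t = np.arange(fs) / fs                          # 1 s synthetic test signal
signal = np.sin(2 * np.pi * 440 * t)

# Short-term stage: frame the signal and compute per-frame features
frames = np.lib.stride_tricks.sliding_window_view(signal, win)[::step]
energy = (frames ** 2).mean(axis=1)
zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
short_term = np.stack([energy, zcr])            # (n_features, n_frames)

# Mid-term stage: statistics over the short-term feature sequences
mid_term = np.stack([short_term.mean(axis=1), short_term.std(axis=1)])
```

Each mid-term statistic summarizes one short-term feature sequence, so the final representation has a fixed size regardless of the recording's duration.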
5. The marine species identification framework
The first step of the proposed algorithmic approach includes the process of audio feature extraction, obtaining the proper features related to each sound of the Sea Audio Dataset.
In the second step, these attributes are introduced to the proposed DELMF model and the classification process is performed, in order to determine whether the detected sound comes from an IAS.
If the sound is identified as noise that comes from a human sea-related activity, it is rejected and no further processing takes place.
If the sound comes from a species of fish or mammal, then once this species has been identified, the coordinates are taken from a GPS device and assigned to the country they belong to through the GCBS. A check is then made as to whether the identified species is native to that country; otherwise it is recorded as an IAS.
Lists with indigenous and invasive species were extracted from the Invasive Species Compendium,2 the most valid and comprehensive database worldwide. Algorithm 3 was developed in an earlier research effort of our team in order to classify a recognized species as native or invasive (Demertzis et al., 2017). It is presented below:
Algorithm 3. Algorithm for classifying IAS as native or invasive species
Input:
Recognized_Species;
Country;
Country_Native_Species;
1: Read Recognized_Species, Country, Country_Native_Species;
2: Species_Identity = Invasive_Species
3: for i = 1 to Country_Native_Species[max] do
4:   if Country_Native_Species[i] = Recognized_Species then
5:     Species_Identity = Native_Species
6:     break
7:   end if
8: end for
Output:
Species_Identity;
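Algorithm 3 translates directly into a list-membership test; the species lists below are illustrative stand-ins for the data extracted from the Invasive Species Compendium:

```python
def classify_species(recognized_species, country_native_species):
    """Return 'Native_Species' if the recognized species is on the
    country's native list, otherwise 'Invasive_Species' (Algorithm 3)."""
    for native in country_native_species:
        if native == recognized_species:
            return "Native_Species"
    return "Invasive_Species"

# Illustrative lists only -- not taken from the Compendium
greece_native = ["Salmo trutta", "Rutilus rutilus"]
print(classify_species("Salmo trutta", greece_native))             # Native_Species
print(classify_species("Lagocephalus sceleratus", greece_native))  # Invasive_Species
```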
The overall algorithmic procedure is presented in Figure 1.
6. Results and comparative analysis
The evaluation of the classification is mainly based on the Confusion Matrix (CM), where the main diagonal values (top left corner to bottom right) correspond to the correct classifications and the rest of the numbers correspond to the very few cases that were misclassified. The misclassification frequencies are related to the False Positive (FP) and False Negative (FN) indices appearing in the Confusion Matrix. An FP is the number of cases where a positive result is wrongfully obtained, and the FN is exactly the opposite. On the other hand, the True Positive (TP) is the number of records where a positive result is correctly obtained. The True Negative (TN) is defined respectively. The True Positive Rate (TPR), known as Sensitivity, the True Negative Rate (TNR), also known as Specificity, and the Total Accuracy (TA) are defined by using Equations (10), (11), and (12), respectively (Fawcett, 2006; Mao, Jain, & Duin, 2000):
TPR = TP / (TP + FN)  (10)

TNR = TN / (TN + FP)  (11)

TA = (TP + TN) / (P + N)  (12)

In the total accuracy, P represents the number of true and false positives, while N represents the number of true and false negatives (the total testing set).
The Precision (PRE), the Recall (REC) and the F-Score indices are defined as in Equations (13), (14) and (15), respectively (Fawcett, 2006; Mao et al., 2000):

PRE = TP / (TP + FP)  (13)

REC = TP / (TP + FN)  (14)

F-Score = 2 · (PRE · REC) / (PRE + REC)  (15)

Figure 1. The flowchart of the proposed algorithmic procedure.
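For a binary decision, Equations (10)–(15) reduce to a few arithmetic operations on the four confusion-matrix counts; the counts below are invented for illustration:

```python
# Invented counts for a binary 'native' vs 'invasive' decision
TP, FN, FP, TN = 90, 10, 5, 95

TPR = TP / (TP + FN)                    # Sensitivity, Eq. (10)
TNR = TN / (TN + FP)                    # Specificity, Eq. (11)
TA = (TP + TN) / (TP + FN + FP + TN)    # Total Accuracy, Eq. (12)
PRE = TP / (TP + FP)                    # Precision, Eq. (13)
REC = TP / (TP + FN)                    # Recall, Eq. (14)
F1 = 2 * PRE * REC / (PRE + REC)        # F-Score, Eq. (15)
```

With these counts, sensitivity is 0.9, specificity 0.95 and total accuracy 0.925, illustrating how the F-Score balances precision against recall in a single index.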
The 10-fold cross validation (10_FCV) approach is employed at this stage, in order to obtain the performance indices. Analytical values of the predictive capacity of the DELMF algorithm are presented in the following Tables 1–3, where a comparison is made with the Spiking Convolutional Neural Network (SCNN) algorithm, which was the subject of an earlier research effort by our team (Demertzis et al., 2017). The Receiver Operating Characteristic (ROC) curves, graphical plots that illustrate the diagnostic ability of the classification system, are presented in Graphs 4–6. The authors have designed 3 ROC curves. The first ROC curve (Graph 4) represents the underwater sounds, the second curve (Graph 5) the fish, and the third curve (Graph 6) the mammals. In all simulations below, the testing hardware and software characteristics are as follows: Laptop with Intel i7 2.4 GHz CPU, 16 GB DDR3 RAM, Windows 10, MATLAB R2015a.
In general, the proposed system has achieved equal and in many cases higher accuracy than the comparable alternatives. The most important fact to be highlighted in this respect, which justifies this research, is the significant reduction of DELMF's implementation time, which was reduced by up to 23%, even when compared with the extremely fast SCNN algorithm. This achievement is very important for scientific research in Machine Hearing, which, as has been said, is an extremely complex field with specific requirements.
The performance results in Tables 1–3 are related to the testing process. The validity of the proposed approach has been proven by using all known indices, namely: Accuracy,
Table 1. Performance metrics of categories_dataset.
Classification Accuracy (ACC) & Performance Metrics in the Test_Dataset
Classifier  ACC     RMSE    Precision  Recall  F-Score  ROC    Time
DELMF       96.08%  0.1376  0.960      0.960   0.960    0.970  189 sec
SCNN        96.25%  0.1368  0.963      0.963   0.963    0.975  248 sec

Table 2. Performance metrics of mammals_dataset.
Classification Accuracy (ACC) & Performance Metrics in the Test_Dataset
Classifier  ACC     RMSE    Precision  Recall  F-Score  ROC    Time
DELMF       92.18%  0.1571  0.922      0.922   0.922    0.955  186 sec
SCNN        92.10%  0.1577  0.921      0.921   0.921    0.952  259 sec

Table 3. Performance metrics of fishes_dataset.
Classification Accuracy (ACC) & Performance Metrics in the Test_Dataset
Classifier  ACC     RMSE    Precision  Recall  F-Score  ROC    Time
DELMF       87.91%  0.1512  0.879      0.879   0.879    0.920  214 sec
SCNN        89.03%  0.1481  0.892      0.890   0.890    0.939  277 sec
RMSE, Precision, Recall, F-Score and ROC. Thus, the results strongly support the reliability of the proposed approach.
Moreover, Graph 7 presents the accuracy for the data sets that have been considered in the testing process.
7. Discussion and conclusions
An innovative, reliable, low-demand and effective computer-based hearing system, based on sophisticated computational intelligence, was presented in this work. The described application significantly reduces the running time, a fact that is coupled with the highly optimistic obtained results. It is a credible and innovative proposal for the standardization and design of biosecurity and biodiversity protection methods.
Graph 4. The ROC curve of the underwater sounds.
Graph 5. The ROC curve of the fishes.
As has been shown, ELMs are an important approach for handling and analysing Big Data, as they require the minimum training time relative to the corresponding machine learning algorithms. Moreover, ELMs do not require fine-tuning to determine their operating parameters and, finally, they can determine the appropriate output weights towards the most effective resolution of a problem. Most importantly, they have the potential to generalize, in contrast to corresponding methods which adjust their performance based solely on their training data set. It is obvious that the emerging use of ELMs in Big Data analysis, as well as of DELE, creates serious prerequisites for the development of complex systems by low-cost machines.
Graph 6. The ROC curve of the mammals.
Graph 7. The accuracy for the datasets that have been considered in the testing process.
8. Future research
Proposals for the development and future improvement of this system should focus on further optimizing the parameters of the DELMF algorithm, in order to achieve an even more efficient, accurate, and faster categorization process. It would be very important to select a heuristic optimization method to search for the optimal system parameters and possibly to automatically calculate the contribution of the independent variables and assign them as entry variables to the system (Demertzis & Iliadis, 2017c). This would be a meta-learning approach and it would lead to a totally automated and self-adaptive system.
Notes
1. https://github.com/che0/countries
2. http://www.cabi.org/isc/
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Konstantinos Demertzis is an officer of the Hellenic Army, serving in Research and Informatics Corps.
He holds a PhD in Environmental Informatics and Computational Intelligence from the Democritus
University of Thrace and an MSc in the Communication & Computer Networking Technologies, from
the University of the Aegean. He is a Part-time Lecturer at the Computer and Informatics Engineering
department of the Eastern Macedonia & Thrace Institute of Technology, Research Associate at the
National and Kapodistrian University of Athens and Research Assistant at the University of Pelopon-
nese. His research interests include Computational Intelligence (Big Data Analytics, Machine Learn-
ing), Environmental Informatics (Invasive Species, Climate Change) and Cyber Security (Critical
Infrastructure Protection, Malware Analysis). He has published 10 research papers in international
scientific journals and 14 in Proceedings of international conferences.
Lazaros S. Iliadis (BSc in Mathematics AUTh, MSc Computer Science, Wales UK, PhD Expert systems,
AUTh) is Professor of Applied Informatics in the School of Engineering, Department of Civil Engineering, Lab of Mathematics and Informatics of the Democritus University of Thrace. He has authored or co-authored 60 publications in international scientific journals and more than 100 publications in Proceedings of international conferences, corresponding to more than 1000 citations, and he has been the
organizer/chair of more than 18 scientific conferences. He is author and co-author of 2 scientific
books and a member of ACM and IEEE. He has supervised 3 PhD dissertations and 24 MSc ones.
He has lectured as a visiting Professor in many Greek and foreign Universities (UOM, UTh, UOL,
UEL, CRAN, COV).
Vardis-Dimitris Anezakis is a PhD candidate (Forest Informatics) in the Department of Forestry and
Management of the Environment and Natural Resources of the Democritus University of Thrace,
Greece. His research interests include Hybrid Computational Intelligence modelling of environ-
mental risks and threats. More specifically his PhD research is related to intelligent modelling of
the impacts of atmospheric conditions-pollution on public health, considering the climate change
contribution. He has published 7 research papers in international scientific journals and 7 in Proceed-
ings of international conferences.
ORCID
Vardis-Dimitris Anezakis http://orcid.org/0000-0002-4012-0651
References
Alom, M. Z., Alam, M., Taha, T. M., & Iftekharuddin, K. M. (2017, May). Object recognition using cellular
simultaneous recurrent networks and convolutional neural network. Proceedings of the
International Joint Conference on Neural Networks (IJCNN2017) (pp. 2873–2880). Anchorage.
doi:10.1109/IJCNN.2017.7966211
Cambria, E., & Guang-Bin, H. (2013, November–December). Extreme learning machines. IEEE
Intelligent Systems, 28(6), 30–59. doi:10.1109/MIS.2013.140
Demertzis, K., & Iliadis, L. (2015, September). Intelligent Bio-inspired detection of food borne patho-
gen by DNA barcodes: The case of invasive fish species lagocephalus sceleratus. In L. Iliadis & C.
Jayne (Eds.), Engineering applications of neural networks EANN2015. Communications in computer
and information science, Vol. 517 (pp. 89–99). Rhodes: Springer Verlag, Cham. doi:10.1007/978-3-
319-23983-5_9
Demertzis, K., & Iliadis, L. (2017a). The impact of climate change on biodiversity: The ecological con-
sequences of introduced species in Greece. Handbook of Climate Change Communication, Vol. 1,
Theory of Climate Change Communication, Chapter 2, In: W. Leal Filho, E. Manolas, A.M. Azul, U.M.
Azeiteiro, & H. McGhie (Eds.), Springer. doi:10.1007/978-3-319-69838-0_2
Demertzis, K., & Iliadis, L. (2017b, June). Detecting invasive species with a bio-inspired semi-super-
vised neurocomputing approach: The case of lagocephalus sceleratus. Neural Computing and
Applications, 28(6), 1225–1234. London: Springer. doi:10.1007/s00521-016-2591-2
Demertzis, K., & Iliadis, L. (2017c, October). Adaptive elitist differential evolution extreme learning
machines on Big Data: Intelligent recognition of invasive species. In P. Angelov, Y.
Manolopoulos Y, L. Iliadis, A. Roy, & M. Vellasco (Eds.), INNS 2016. Advances in intelligent systems
and computing, Vol.529 (pp. 333–345). Thessaloniki: Springer Verlag, Cham. doi:10.1007/978-3-
319-47898-2_34
Demertzis, K., Iliadis, L., & Anezakis, V. D. (2017, July). A deep spiking machine-hearing system for the
case of invasive fish species. Proceedings of 2017 IEEE International Conference on Innovations in
Intelligent Systems and Applications (INISTA) (pp. 23–28). Gdynia, Poland. IEEE. doi:10.1109/
INISTA.2017.8001126
Deng, L., & Yu, D. (2013). Deep learning: Methods and applications. Foundations and Trends® in Signal
Processing, 7(3–4), 197–387. doi:10.1561/2000000039
Fawcett, T. (2006, June). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861–874.
Elsevier Science Inc. doi:10.1016/j.patrec.2005.10.010
Giannakopoulos, T. (2015). Pyaudioanalysis: An open-source python library for audio signal analysis.
PLoS ONE, 10(12), e0144610. doi:10.1371/journal.pone.0144610
Hinton, G., Deng, L., Yu, D., Dahl, G., Rahman Mohamed, A., Jaitly, N., … Kingsbury, B. (2012,
November). Deep neural networks for acoustic modeling in speech recognition: The shared
views of four research groups. IEEE Signal Processing Magazine, 29(6), 82–97. doi:10.1109/MSP.
2012.2205597
Huang, G.-B. (2014). An insight into extreme learning machines: Random neurons, random features
and kernels. Cognitive Computation, 6(3), 376–390. Springer New York LLC. doi:10.1007/s12559-
014-9255-2
Huang, G.-B. (2015). What are extreme learning machines? Filling the Gap between Frank
Rosenblatt’s dream and John von Neumann’s Puzzle. Cognitive Computation, 7(3), 263–278.
Springer New York LLC. doi:10.1007/s12559-015-9333-0
Huang, G.-B., Liang, N.-Y., Rong, H.-J., Saratchandran, P., & Sundararajan, N. (2005, July). On-line
sequential extreme learning machine. In M. H. Hamza (Eds.), CI 2005. Proceedings of the IASTED
International Conference on Computational Intelligence (pp. 232–237). Calgary, Alberta, Canada.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012, December). ImageNet Classification with Deep
Convolutional Neural Networks. NIPS 2012. Advances in Neural Information Processing Systems
Vol.2 (pp. 1097–1105). Lake Tahoe, NV.
Liang, N.-Y., Huang, G.-B., Saratchandran, P., & Sundararajan, N. (2006). A fast and accurate On-line
sequential learning algorithm for feedforward networks. IEEE Transactions on Neural Networks,
17(6), 1411–1423. doi:10.1109/TNN.2006.880583
Lyon, R. (2017, April). Human and machine hearing: Extracting meaning from sound. Cambridge:
Cambridge University Press. doi:10.1017/9781139051699
Mao, J., Jain, A. K., & Duin, P. W. (2000). Statistical pattern recognition: A review. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 22(1), 4–37. doi:10.1109/34.824819
Ramírez-Gallego, S., Fernández, A., García, S., Chen, M., & Herrera, F. (2018). Big data: Tutorial and
guidelines on information and process fusion for analytics algorithms with MapReduce.
Information Fusion, 42, 51–61. doi:10.1016/j.inffus.2017.10.001
Sun, K., Zhang, J., Zhang, C., & Hu, J. (2017). Generalized extreme learning machine autoencoder and
a new deep neural network. Neurocomputing, 230, 374–381. ISSN 0925-2312, doi:10.1016/j.
neucom.2016.12.027.
Zhang, D., Lee, D.-J., Zhang, M., Tippetts, B. J., & Lillywhite, K. D. (2016). Object recognition algorithm
for the automatic identification and removal of invasive fish. Biosystems Engineering, 145, 65–75.
doi:10.1016/j.biosystemseng
Zhang, J., Yin, J., Zhang, Q., Shi, J., & Li, Y. (2017). Robust sound event classification with bilinear multi-
column ELM-AE and two-stage ensemble learning. Eurasip Journal on Audio, Speech, and Music
Processing, (1). art. No. 11. doi:10.1186/s13636-017-0109-1
Zhao, Z., Zhang, S.-H., Xu, Z.-Y., Bellisario, K., Dai, N.-H., Omrani, H., & Pijanowski, B. C. (2017).
Automated bird acoustic event detection and robust species classification. Ecological
Informatics, 39, 99–108. doi:10.1016/j.ecoinf.2017.04.003