The document describes an AI-driven Occupational Skills Generator (AIOSG) that aims to automate the process of creating occupational skills reference documents. The AIOSG utilizes an intelligent web crawler, natural language processing, neural networks, and a blockchain to gather data on occupational skills from various sources, analyze the data, and generate standardized skills reference documents. It is intended to reduce the time and resources required to manually produce these documents while ensuring more comprehensive and up-to-date skills information. The AIOSG system architecture and its use of analytics, artificial intelligence, and blockchain technologies are explained in detail.
This document summarizes a hybrid dental charting system called HyDeCS that allows for electronic dental records to be accessed both online and offline. HyDeCS was created to address limitations of existing dental systems that only work when connected to the internet. It incorporates modules for patient information, treatment, appointments, dental diseases and charting. HyDeCS can be used in clinics both online and offline, as well as during outreach activities outside of clinics without an internet connection. Data entered offline is synced once the system reconnects online through database replication technologies. The system aims to improve dental care access and management through integrated electronic records.
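The summary says offline-entered data is synced through database replication once HyDeCS reconnects, without detailing the merge policy. A minimal sketch of one plausible policy (last-write-wins by timestamp) — the record fields and conflict rule here are illustrative assumptions, not the system's actual replication mechanism:

```python
from datetime import datetime

def sync_records(server, local):
    """Merge offline-entered records into the server copy.

    Last-write-wins by timestamp: a simplified stand-in for the
    database replication HyDeCS is described as using; the real
    replication technology is not specified in the summary.
    """
    merged = dict(server)
    for record_id, (payload, updated_at) in local.items():
        # Keep the local copy only if the server has no record,
        # or the offline edit is newer than the server's version.
        if record_id not in merged or updated_at > merged[record_id][1]:
            merged[record_id] = (payload, updated_at)
    return merged

server = {"p1": ("cleaning", datetime(2023, 1, 1))}
offline = {
    "p1": ("filling", datetime(2023, 1, 2)),     # newer offline edit wins
    "p2": ("extraction", datetime(2023, 1, 2)),  # record created offline
}
merged = sync_records(server, offline)
```

Real replication layers also handle deletions and concurrent edits to the same field, which this sketch deliberately omits.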
Benchmarking supervised learning models for sentiment analysisConference Papers
This document summarizes a study that benchmarks the performance of two models - BiLSTM and FastText - on three sentiment analysis datasets. BiLSTM performed better in all tasks but was 15,000% slower to train than FastText. The datasets included an Amazon product reviews dataset, a Twitter sentiment dataset, and a financial domain-specific dataset collected by the authors. The study aimed to determine the most suitable algorithm while considering factors like speed, hardware usage, and resources.
Evaluating the impact of removing less important terms on sentiment analysisConference Papers
This document summarizes a research paper that evaluated the impact of removing less important terms on sentiment analysis. It discusses how sentiment analysis is an important natural language processing task but is complex due to linguistic challenges. Supervised machine learning is commonly used but requires high-quality training data. The paper experiments with identifying and removing less important words like stopwords and supporting part-of-speech tags from training data to see if it improves the precision of a sentiment classification model. The results showed removal of some unimportant words improved precision on a large generic dataset but not a smaller context-specific one.
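The preprocessing idea the paper tests — dropping stopwords and other low-information tokens before training — can be sketched in a few lines. The stopword list and the extra "less important" tokens below are illustrative; the paper's actual lists and part-of-speech filters are not reproduced here:

```python
# Illustrative stopword list; the paper's actual lists and
# POS-tag-based filters are richer than this.
STOPWORDS = {"the", "a", "an", "is", "was", "to", "of"}

def prune(tokens, extra_unimportant=()):
    """Remove stopwords and other designated low-information tokens
    from a tokenized training example before it reaches the classifier."""
    drop = STOPWORDS | {t.lower() for t in extra_unimportant}
    return [t for t in tokens if t.lower() not in drop]

tokens = ["The", "battery", "is", "terrible", "to", "use"]
pruned = prune(tokens)
# Sentiment-bearing content words survive; function words are gone.
```

Whether such pruning helps is exactly what the paper measures — it improved precision on the large generic dataset but not the smaller context-specific one, which suggests "unimportant" is corpus-dependent.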
Novel character segmentation reconstruction approach for license plate recogn...Conference Papers
The document proposes a new approach called partial character reconstruction for segmenting characters in license plate images to improve license plate recognition performance. It introduces using angular information and stroke width properties in different domains to segment characters and then reconstruct their complete shapes for recognition. Experimental results on several benchmark license plate databases and video databases show the technique is effective in handling images affected by multiple challenges.
Fakebuster fake news detection system using logistic regression technique i...Conference Papers
The document describes a fake news detection system called "FAKEBUSTER" that was developed using logistic regression in machine learning. It analyzed past research that found logistic regression achieved 79-89% accuracy in detecting fake news. The system was trained on a dataset of news articles labeled as real or fake. It uses TF-IDF to convert text to numerical features for the logistic regression model. The model was integrated into a web application called "FAKEBUSTER" that allows users to input a news article or URL to check if it is real or fake. Evaluation found the stance detection approach improved the model's accuracy for fake news classification.
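The feature step the summary names — converting article text to TF-IDF weights before logistic regression — can be illustrated with a toy stdlib implementation. This is a sketch of the standard TF-IDF formula, not the FAKEBUSTER pipeline itself, whose exact vectorizer settings are not given:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a small corpus of tokenized documents:
    weight(t, d) = (count of t in d / len(d)) * log(N / docfreq(t))."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

docs = [["fake", "news", "alert"], ["real", "news", "today"]]
w = tfidf(docs)
# "news" appears in every document, so log(N/df) = log(1) = 0
# and its weight vanishes; rarer terms get positive weight.
```

These weight vectors would then be the numerical features fed to the logistic regression classifier the paper describes.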
Unified theory of acceptance and use of technology of e government services i...Conference Papers
This document describes a study that developed and validated a survey instrument to understand technology acceptance of an e-Government system called MYGOVSVC among Malaysian government employees. A literature review was conducted on previous studies applying the Unified Theory of Acceptance and Use of Technology (UTAUT) model to e-Government systems. A 21-item survey was developed containing questions on performance expectancy, effort expectancy, hedonic motivation, and facilitating conditions. The survey was translated to Malay and validated with stakeholders. It was administered to 419 government employees and results found the survey to be reliable in measuring acceptance of the MYGOVSVC system. The validated survey can be used to help improve e-Government services for Malaysian citizens.
This research assesses the blockchain technology readiness level of the banking industry. First, blockchain technology is defined and its specifications are highlighted. Second, the study investigates how the domains of information systems integration relate to the areas of blockchain technology adoption.
IMMERSIVE TECHNOLOGIES IN 5G-ENABLED APPLICATIONS: SOME TECHNICAL CHALLENGES ...ijcsit
The 5G next-generation networking paradigm, with its envisioned capacity, coverage, and data transfer rates, provides fertile ground for novel application scenarios. Virtual, Mixed, and Augmented Reality will play a key role as visualization, interaction, and information delivery platforms. Recent hardware and software developments in immersive technologies (AR, VR, and MR) — notably the commercial availability of advanced headsets equipped with XR-accelerated processing units and Software Development Kits (SDKs) — are significantly increasing the penetration of such devices for entertainment, corporate, and industrial use. This trend creates next-generation usage models that raise serious technical challenges at all networking and software architecture levels to support the immersive digital transformation. The focus of this paper is to identify, discuss, and propose system development approaches and architectures for the successful integration of immersive technologies into future information and communication concepts such as the Tactile Internet and the Internet of Skills.
This second machine age has seen the rise of artificial intelligence (AI), or "intelligence" that is not the result of human cogitation. It is now ubiquitous in many commercial products, from search engines to virtual assistants. AI is the result of exponential growth in computing power, memory capacity, cloud computing, distributed and parallel processing, open-source solutions, and global connectivity of both people and machines. The massive volume and speed at which structured and unstructured (e.g., text, audio, video, sensor) data is being generated have made it a necessity to process that data quickly and generate meaningful, actionable insights from it.
The Internet of Things, cognitive systems, and blockchain technology are three fields that have each driven major changes in software development, and combining them promises a new field of high potential. In this paper, we therefore propose a framework for the Internet of Things based on cognitive systems and blockchain technology; to the best of our knowledge, no such framework exists. To study the applicability of the proposed framework, a novel recommender system built on it is suggested and compared with existing recommender systems. The results show that the suggested recommender system offers several benefits not available in existing recommender systems.
KM - Cognitive Computing overview by Ken Martin 13Apr2016HCL Technologies
This document provides an introduction to cognitive computing and how it relates to knowledge management strategies. It begins with an overview of Ken Martin's background and the agenda. It then defines key cognitive computing concepts and technologies like natural language processing, machine learning, and pattern recognition. The document contrasts traditional and cognitive systems, noting cognitive systems are interactive, self-learning, and expand conversations. It maps cognitive capabilities to the KM lifecycle, showing how capabilities like natural language processing, text mining, and social network analysis can enhance each stage.
Ai - Artificial Intelligence predictions-2018-report - PWCRick Bouter
Here’s some actionable advice on artificial intelligence (AI) that you can use today: if someone says they know exactly what AI will look like and do in 10 years, smile politely, then change the subject or walk away.
Named Data Networking (NDN) is a recently designed Internet architecture that uses data names instead of locations, making an essential change in the abstraction of network services from "delivering packets to specific destinations" to "retrieving data with specific names". This fundamental change creates new opportunities and intellectual challenges in all areas, especially network routing and communication, communication security, and privacy. The focus of this dissertation is on the forwarding plane introduced by NDN. Communication in NDN is done by exchanging Interest and Data packets.
Blockchain enabled task and time sheet management for accounting services pro...Conference Papers
This document describes a blockchain-enabled timesheet management system for accounting firms. It aims to improve on traditional centralized timesheet databases which are vulnerable to tampering. The proposed system uses blockchain to immutably store task and timesheet data, including check-in/out times. This ensures accuracy and avoids issues like overclaiming hours. The document outlines the system architecture, which features a frontend app and blockchain backend on Hyperledger Fabric. Timesheet records are added to the blockchain using smart contracts. Preliminary results found blockchain improved aspects like organizational management, cost savings, transparency and data security compared to traditional methods.
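The tamper-resistance the summary attributes to the blockchain backend comes from hash-linking: each timesheet record commits to the hash of the one before it. A toy hash-chained log makes the property concrete — this is a stand-in for what the Hyperledger Fabric ledger provides, not Fabric chaincode, and the record fields below are assumptions:

```python
import hashlib
import json

def add_entry(chain, entry):
    """Append a timesheet entry to a hash-linked log.
    Each block commits to the previous block's hash, so editing
    any earlier entry invalidates every later hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; return False at the first inconsistency."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps(block["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
add_entry(chain, {"staff": "A01", "check_in": "09:00", "check_out": "17:30"})
add_entry(chain, {"staff": "A02", "check_in": "09:15", "check_out": "18:00"})
ok_before = verify(chain)
chain[0]["entry"]["check_out"] = "19:30"   # attempt to overclaim hours
ok_after = verify(chain)
```

In the actual system, smart contracts and Fabric's distributed consensus enforce this integrity across organizations rather than a single verifier function.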
How cognitive computing is transforming HR and the employee experienceRichard McColl
As a co-sponsor of a recently published IBM Institute for Business Value (IBV) study, I was pleased to see the insights support our own IBM HR strategy of aligning cognitive solutions with cloud platforms to create fantastic experiences for our IBMers.
Artificial intelligence has been a buzzword impacting every industry in the world. With the rise of such advanced technology, there will always be questions regarding its impact on our social life, environment, and economy, and thus on all efforts toward sustainable development. In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only large, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle these datasets and extract value and knowledge from them for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users, and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios in which AI and big data can be applied, as well as the opportunities such applications provide in various business operations and crisis management domains.
An overview of information extraction techniques for legal document analysis ...IJECEIAES
In an Indian law system, different courts publish their legal proceedings every month for future reference of legal experts and common people. Extensive manual labor and time are required to analyze and process the information stored in these lengthy complex legal documents. Automatic legal document processing is the solution to overcome drawbacks of manual processing and will be very helpful to the common man for a better understanding of a legal domain. In this paper, we are exploring the recent advances in the field of legal text processing and provide a comparative analysis of approaches used for it. In this work, we have divided the approaches into three classes NLP based, deep learning-based and, KBP based approaches. We have put special emphasis on the KBP approach as we strongly believe that this approach can handle the complexities of the legal domain well. We finally discuss some of the possible future research directions for legal document analysis and processing.
Implementing data-driven decision support system based on independent educati...IJECEIAES
Decision makers in the educational field always seek new technologies and tools, which provide solid, fast answers that can support decision-making process. They need a platform that utilize the students’ academic data and turn them into knowledge to make the right strategic decisions. In this paper, a roadmap for implementing a data driven decision support system (DSS) is presented based on an educational data mart. The independent data mart is implemented on the students’ degrees in 8 subjects in a private school (AlIskandaria Primary School in Basrah province, Iraq). The DSS implementation roadmap is started from pre-processing paper-based data source and ended with providing three categories of online analytical processing (OLAP) queries (multidimensional OLAP, desktop OLAP and web OLAP). Key performance indicator (KPI) is implemented as an essential part of educational DSS to measure school performance. The static evaluation method shows that the proposed DSS follows the privacy, security and performance aspects with no errors after inspecting the DSS knowledge base. The evaluation shows that the data driven DSS based on independent data mart with KPI, OLAP is one of the best platforms to support short-tolong term academic decisions.
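A KPI over a student-marks data mart, as the paper describes, reduces to an aggregate plus a target comparison. The toy rows and the 70-point threshold below are illustrative assumptions; the study's actual mart covers eight subjects at AlIskandaria Primary School, and its KPI definition is not reproduced here:

```python
from statistics import mean

# Toy stand-in for the students' marks data mart.
marks = [
    {"student": "s1", "subject": "math", "score": 78},
    {"student": "s2", "subject": "math", "score": 55},
    {"student": "s1", "subject": "science", "score": 90},
]

def subject_kpi(rows, subject, target=70):
    """Aggregate one OLAP-style slice (all scores for a subject)
    into a KPI: average score versus an assumed target threshold."""
    scores = [r["score"] for r in rows if r["subject"] == subject]
    avg = mean(scores)
    return {"subject": subject, "average": avg,
            "meets_target": avg >= target}

kpi = subject_kpi(marks, "math")
# → average 66.5, below the assumed 70-point target
```

An OLAP engine generalizes this one slice-and-aggregate into arbitrary dimensions (subject, term, class) precomputed for fast querying, which is what the multidimensional, desktop, and web OLAP categories in the paper provide.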
Machine learning and ai in a brave new cloud worldUlf Mattsson
Machine learning platforms are one of the fastest growing services of the public cloud. ML, an approach and set of technologies that use Artificial Intelligence (AI) concepts, is directly related to pattern recognition and computational learning. Early adopters of AI have now rolled out cloud-based services that are bringing AI to the masses.
How are AI, deep learning, machine learning, big data, and cloud related? Can machine learning algorithms enable the use of an individual’s comprehensive biological information to predict or diagnose diseases, and to find or develop the best therapy for that individual? How is Quantum Computing in the Cloud related to the use of AI and Cybersecurity?
Join this webinar to learn more about:
- Machine Learning, Data Discovery and Cloud
- Cloud-Based ML Applications and ML services from AWS and Google Cloud
- How to Automate Machine Learning
This document provides an overview of a presentation on deep learning given by Melanie Swan. The key points are:
1) Melanie Swan is a technology theorist who gave a presentation on deep learning and smart networks at a conference in Indianapolis.
2) She discussed the definition and technical details of deep learning, including how it is inspired by concepts from statistical mechanics and physics. Deep learning uses neural networks of processing units to model high-level abstractions in data.
3) Deep learning has many applications including image recognition, speech recognition, and question answering. It is seen as important due to the large worldwide spending on AI and the growth of data science jobs.
The internet of things is an emerging technology that is currently present in most processes and devices, helping to improve people's quality of life and facilitating access to specific information and services. The main purpose of the present article is to offer a general overview of the internet of things, based on an analysis of recently published work. The added value of this article lies in the analysis of the main recent publications and the diversity of applications of internet of things technology. The analysis of the current literature shows that internet of things technology stands out as a facilitator of business and industrial performance, but above all of improved quality of life. In conclusion, the internet of things is a technology that can overcome its challenges in terms of security, processing capacity, and data mobility, as long as the development of related technologies follows its expected course.
The document discusses a research project that aims to map values in AI governance by studying how value attributions take form in human and computational ecologies. It proposes moving beyond focusing on ideal norms and values or trying to directly understand legal and computational commands, and instead "encircling" the topic by analyzing mundane practices. The researchers argue this assemblage perspective is needed to understand the interactions that constitute systems' viability and better inform academics, practitioners, regulators and judges.
The recent series of innovations in deep learning have shown enormous potential to impact individuals and society, both positively and negatively. The deep learning models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interactions. However, the Black-Box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the system’s interpretability and explainability. Furthermore, deep learning methods have not yet been proven in their ability to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect is missing in early data-focused approaches and necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into the deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains.
Blockchain distributed ledger technology is evolving from the hype phase into one of greater maturity and long-term value creation. This graduate course overview examines how blockchains, networks, and social interaction patterns are related.
P14 towards using blockchain technology for e healthdevid8
This document proposes using blockchain technology to address challenges with managing electronic health data access and exchange. It summarizes previous related work applying blockchain to eHealth. The key challenges are ensuring data privacy and scalability. The proposed model uses a public blockchain to securely transmit data pointers and notifications, while storing actual health data off-chain in an InterPlanetary File System database. Smart contracts would manage access permissions. This would provide security while addressing blockchain's limitations for large data storage. The model was implemented using Ethereum, IPFS and smart contracts to test the feasibility of the blockchain-based eHealth data management approach.
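The core pattern the paper proposes — health data off-chain in IPFS, only hash pointers on-chain — can be sketched with content addressing alone. The dict and list below stand in for IPFS and the public blockchain respectively, and the smart-contract permission layer the model includes is omitted:

```python
import hashlib

off_chain_store = {}   # content-addressed storage (stands in for IPFS)
on_chain_log = []      # the public chain carries pointers, never data

def store_record(record_bytes):
    """Put the health record off-chain under its content hash and
    publish only that pointer on-chain."""
    pointer = hashlib.sha256(record_bytes).hexdigest()
    off_chain_store[pointer] = record_bytes
    on_chain_log.append(pointer)          # notify/share via the chain
    return pointer

def fetch_and_verify(pointer):
    """Retrieve the off-chain record and check it still matches the
    on-chain pointer, so tampering with stored data is detectable."""
    data = off_chain_store[pointer]
    assert hashlib.sha256(data).hexdigest() == pointer
    return data

ptr = store_record(b"patient: X; allergy: penicillin")
data = fetch_and_verify(ptr)
```

This is why the design scales: the chain stays small regardless of record size, while the content hash preserves integrity guarantees for the bulky off-chain data.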
Finely Chair talk: Every company is an AI company - and why Universities sho...Amit Sheth
Video: https://youtu.be/ZS8rGSzb_9I
The context of this talk is this statement from the host institution's provost: "We are trying to mobilize our campus activities around AI.” I connect academic initiatives in Interdisciplinary AI with industry needs.
--- Original abstract -----
Every company now is an AI company: Now, Near Future, or Distant Future?
Amit Sheth, AI Institute, University of South Carolina
“Every company now is an AI company. The industrial companies are changing, the supply chain…every single sector, it’s not only tech.” said Steven Pagliuca, CEO of Bain Capital, at the 2019 World Economic Forum. With this statement as the context, I will provide an overview of the AI landscape -- what AI capabilities are for real, what is being oversold, what is nonexistent, and what is unlikely in our lifetime. I will also provide an anecdote-supported review of a broad variety of current and imminent applications of AI that rely on some of the well-developed and emerging AI capabilities. The objective is to help those considering AI applications start thinking of new business opportunities, new products and services, and new revenue/business models in the context of the rapid penetration of AI technologies everywhere. I will seek to answer: Is AI just hype, or something already happening? If it has not happened in your industry, is it impending? Do the bad impacts of AI outweigh the good?
For this project, we had to conduct research on a topic that was seen as a relevant area of study in Enterprise Systems and how it will be applicable in the future.
We chose to study the effects artificial intelligence will have on CRM systems. To view our findings, you can view the video here - https://www.youtube.com/watch?v=Fe55c60QPwY&t=9s
Jurnal industry 4.0 implication on human capatialSoni Riharsono
This document summarizes a review of the implications of Industry 4.0 on human capital. Industry 4.0 relies on emerging technologies like cyber-physical systems, cloud manufacturing, and big data analytics. While this may reduce some manual jobs, it is also expected to create new opportunities that demand highly skilled workers. However, current workers may lack the new skills required. The review finds that technical, methodological, social and personal competencies will be needed for employees to remain competitive. While some unskilled jobs may be lost, human abilities like creativity cannot be replaced, so employees need to develop new skills to adapt to changes from Industry 4.0.
How Can AI and IoT Power the Chemical Industry?Xiaonan Wang
AI, IoT and Blockchain tech briefing to the industry to showcase our research at NUS.
by Dr. Xiaonan Wang
Assistant Professor
NUS Department of Chemical & Biomolecular Engineering
This second machine age has seen the rise of artificial intelligence (AI), or “intelligence” that is not the result of
human cogitation. It is now ubiquitous in many commercial products, from search engines to virtual assistants. aI is the result of exponential growth in computing power, memory capacity, cloud computing, distributed and parallel processing, open-source solutions, and global connectivity of both people
and machines. The massive amounts and the speed at which structured and unstructured (e.g., text, audio, video, sensor) data is being generated has made a necessity of speedily processing and generating meaningful, actionable insights from it.
Internet of Things, cognitive systems, and blockchain technology are three fields which have created numerous revolutions in software development. It seems that a combination among these fields may results in emerging a high potential and interesting field. Therefore, in this paper, we propose a framework for Internet of Things based on cognitive systems and blockchain technology. To the best of our knowledge, there is no framework for Internet of Things based on cognitive systems and blockchain. In order to study the applicability of the proposed framework, a recommender system based on the proposed framework is suggested. Since the proposed framework is novel, the suggested recommender system is novel. The suggested recommender system is compared with the existing recommender systems. The results show that the suggested recommender system has several benefits which are not available in the existing recommender systems.
KM - Cognitive Computing overview by Ken Martin 13Apr2016HCL Technologies
This document provides an introduction to cognitive computing and how it relates to knowledge management strategies. It begins with an overview of Ken Martin's background and the agenda. It then defines key cognitive computing concepts and technologies like natural language processing, machine learning, and pattern recognition. The document contrasts traditional and cognitive systems, noting cognitive systems are interactive, self-learning, and expand conversations. It maps cognitive capabilities to the KM lifecycle, showing how capabilities like natural language processing, text mining, and social network analysis can enhance each stage.
Ai - Artificial Intelligence predictions-2018-report - PWCRick Bouter
Here’s some actionable advice on artificial intelligence (AI), that you can
use today: If someone says they know exactly what AI will look like and
do in 10 years, smile politely, then change the subject or walk away.
Named Data Networking (NDN) is a recently designed Internet architecture that uses data names instead of locations, fundamentally changing the abstraction of network services from "delivering packets to specific destinations" to "retrieving data with specific names." This fundamental change creates new opportunities and intellectual challenges in all areas, especially network routing and forwarding, communication security, and privacy. The focus of this dissertation is the forwarding plane introduced by NDN. Communication in NDN is done by exchanging Interest and Data packets.
Blockchain enabled task and time sheet management for accounting services pro...Conference Papers
This document describes a blockchain-enabled timesheet management system for accounting firms. It aims to improve on traditional centralized timesheet databases which are vulnerable to tampering. The proposed system uses blockchain to immutably store task and timesheet data, including check-in/out times. This ensures accuracy and avoids issues like overclaiming hours. The document outlines the system architecture, which features a frontend app and blockchain backend on Hyperledger Fabric. Timesheet records are added to the blockchain using smart contracts. Preliminary results found blockchain improved aspects like organizational management, cost savings, transparency and data security compared to traditional methods.
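The immutability property described above can be illustrated with a minimal hash-chain sketch. This is a simplification of what a Hyperledger Fabric ledger provides, not the paper's actual implementation; the record fields and class names are illustrative assumptions:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a timesheet record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class TimesheetLedger:
    """Append-only ledger: each entry is chained to its predecessor,
    so tampering with any past record invalidates all later hashes."""
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def add_entry(self, employee: str, check_in: str, check_out: str):
        record = {"employee": employee, "in": check_in, "out": check_out}
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((record, record_hash(record, prev)))

    def verify(self) -> bool:
        prev = "genesis"
        for record, h in self.chain:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = TimesheetLedger()
ledger.add_entry("alice", "2023-05-01T09:00", "2023-05-01T17:30")
ledger.add_entry("bob", "2023-05-01T08:45", "2023-05-01T18:00")
print(ledger.verify())  # True
# Tampering with a past record (e.g. overclaiming hours) is detected:
ledger.chain[0][0]["out"] = "2023-05-01T23:00"
print(ledger.verify())  # False
```

In the real system this chaining is enforced by the blockchain itself and records are appended via smart contracts rather than a local class.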
How cognitive computing is transforming HR and the employee experienceRichard McColl
As a co-sponsor of a recently published IBM Institute for Business Value (IBV) study, I was pleased to see the insights support our own IBM HR strategy of aligning cognitive solutions to cloud platforms to create fantastic experiences for our IBMers.
Artificial intelligence has been a buzzword impacting every industry in the world. With the rise of such advanced technology, there will always be questions regarding its impact on our social life, environment, and economy, and thus on all efforts toward sustainable development. In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the methods and scenarios in which AI and big data can be applied, as well as the opportunities these applications provide in various business operations and crisis-management domains.
An overview of information extraction techniques for legal document analysis ...IJECEIAES
In the Indian legal system, different courts publish their legal proceedings every month for future reference by legal experts and the general public. Extensive manual labor and time are required to analyze and process the information stored in these lengthy, complex legal documents. Automatic legal document processing overcomes the drawbacks of manual processing and can help the common person better understand the legal domain. In this paper, we explore recent advances in the field of legal text processing and provide a comparative analysis of the approaches used. We divide the approaches into three classes: NLP-based, deep learning-based, and KBP-based. We put special emphasis on the KBP approach, as we strongly believe it can handle the complexities of the legal domain well. We finally discuss some possible future research directions for legal document analysis and processing.
Implementing data-driven decision support system based on independent educati...IJECEIAES
Decision makers in the educational field always seek new technologies and tools that provide solid, fast answers to support the decision-making process. They need a platform that utilizes students’ academic data and turns it into knowledge for making the right strategic decisions. In this paper, a roadmap for implementing a data-driven decision support system (DSS) based on an educational data mart is presented. The independent data mart is implemented on students’ grades in 8 subjects at a private school (AlIskandaria Primary School in Basrah province, Iraq). The DSS implementation roadmap starts with pre-processing the paper-based data source and ends with providing three categories of online analytical processing (OLAP) queries (multidimensional OLAP, desktop OLAP, and web OLAP). A key performance indicator (KPI) is implemented as an essential part of the educational DSS to measure school performance. The static evaluation method shows that the proposed DSS satisfies the privacy, security, and performance aspects, with no errors found after inspecting the DSS knowledge base. The evaluation shows that a data-driven DSS based on an independent data mart with KPIs and OLAP is one of the best platforms to support short- to long-term academic decisions.
Machine learning and ai in a brave new cloud worldUlf Mattsson
Machine learning platforms are one of the fastest growing services of the public cloud. ML, an approach and set of technologies that use Artificial Intelligence (AI) concepts, is directly related to pattern recognition and computational learning. Early adopters of AI have now rolled out cloud-based services that are bringing AI to the masses.
How are AI, deep learning, machine learning, big data, and cloud related? Can machine learning algorithms enable the use of an individual’s comprehensive biological information to predict or diagnose diseases, and to find or develop the best therapy for that individual? How is Quantum Computing in the Cloud related to the use of AI and Cybersecurity?
Join this webinar to learn more about:
- Machine Learning, Data Discovery and Cloud
- Cloud-Based ML Applications and ML services from AWS and Google Cloud
- How to Automate Machine Learning
This document provides an overview of a presentation on deep learning given by Melanie Swan. The key points are:
1) Melanie Swan is a technology theorist who gave a presentation on deep learning and smart networks at a conference in Indianapolis.
2) She discussed the definition and technical details of deep learning, including how it is inspired by concepts from statistical mechanics and physics. Deep learning uses neural networks of processing units to model high-level abstractions in data.
3) Deep learning has many applications including image recognition, speech recognition, and question answering. It is seen as important due to the large worldwide spending on AI and the growth of data science jobs.
The internet of things is an emerging technology that is currently present in most processes and devices, helping to improve people's quality of life and facilitating access to specific information and services. The main purpose of the present article is to offer a general overview of the internet of things, based on an analysis of recently published work. The added value of this article lies in its analysis of the main recent publications and the diversity of applications of internet of things technology. As a result of this analysis of the current literature, internet of things technology stands out as a facilitator of business and industrial performance, but above all of improved quality of life. In conclusion, the internet of things is a technology that can overcome its challenges in security, processing capacity, and data mobility, as long as development in related technologies follows its expected course.
The document discusses a research project that aims to map values in AI governance by studying how value attributions take form in human and computational ecologies. It proposes moving beyond focusing on ideal norms and values or trying to directly understand legal and computational commands, and instead "encircling" the topic by analyzing mundane practices. The researchers argue this assemblage perspective is needed to understand the interactions that constitute systems' viability and better inform academics, practitioners, regulators and judges.
The recent series of innovations in deep learning have shown enormous potential to impact individuals and society, both positively and negatively. The deep learning models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interactions. However, the Black-Box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the system’s interpretability and explainability. Furthermore, deep learning methods have not yet been proven in their ability to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect is missing in early data-focused approaches and necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into the deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains.
Blockchain distributed ledger technology is evolving from the hype phase into one of greater maturity and long-term value creation. This graduate course overview examines how blockchains, networks, and social interaction patterns are related.
P14 towards using blockchain technology for e healthdevid8
This document proposes using blockchain technology to address challenges with managing electronic health data access and exchange. It summarizes previous related work applying blockchain to eHealth. The key challenges are ensuring data privacy and scalability. The proposed model uses a public blockchain to securely transmit data pointers and notifications, while storing actual health data off-chain in an InterPlanetary File System database. Smart contracts would manage access permissions. This would provide security while addressing blockchain's limitations for large data storage. The model was implemented using Ethereum, IPFS and smart contracts to test the feasibility of the blockchain-based eHealth data management approach.
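The split between on-chain pointers and off-chain data described above can be sketched in a few lines. This is a toy simulation under stated assumptions: the dictionaries stand in for IPFS and for smart-contract state, and the function names are hypothetical, not the paper's API:

```python
import hashlib

# Simulated off-chain store (stands in for IPFS): content-addressed by hash,
# so the pointer stored "on-chain" also authenticates the data it points to.
off_chain_store = {}

def store_off_chain(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()  # content identifier
    off_chain_store[cid] = data
    return cid

# Simulated on-chain state kept by a smart contract: only the small
# pointer and an access-permission set go on-chain, never the health data.
on_chain_records = {}

def publish_record(patient: str, data: bytes, allowed: set):
    on_chain_records[patient] = {"cid": store_off_chain(data), "allowed": allowed}

def fetch_record(patient: str, requester: str) -> bytes:
    entry = on_chain_records[patient]
    if requester not in entry["allowed"]:
        raise PermissionError("access not granted by smart contract")
    data = off_chain_store[entry["cid"]]
    # Integrity check: recomputing the hash must reproduce the on-chain pointer.
    assert hashlib.sha256(data).hexdigest() == entry["cid"]
    return data

publish_record("patient-42", b"blood panel 2023-05-01", {"dr_lee"})
print(fetch_record("patient-42", "dr_lee"))  # b'blood panel 2023-05-01'
```

Content addressing is what makes this split safe: even though the bulk data lives off-chain, any modification to it changes its hash and no longer matches the immutable on-chain pointer.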
Finely Chair talk: Every company is an AI company - and why Universities sho...Amit Sheth
Video: https://youtu.be/ZS8rGSzb_9I
The context of this talk is this statement from the host institution's provost: "We are trying to mobilize our campus activities around AI.” I connect academic initiatives in Interdisciplinary AI with industry needs.
--- Original abstract -----
Every company now is an AI company: Now, Near Future, or Distant Future?
Amit Sheth, AI Institute, University of South Carolina
“Every company now is an AI company. The industrial companies are changing, the supply chain…every single sector, it’s not only tech,” said Steven Pagliuca of Bain Capital at the 2019 World Economic Forum. With this statement as the context, I will provide an overview of the AI landscape -- which AI capabilities are real, what is being oversold, what is nonexistent, and what is unlikely in our lifetime. I will also provide an anecdote-supported review of a broad variety of current and imminent applications of AI that rely on well-developed and emerging AI capabilities. The objective is to help those considering AI applications start thinking of new business opportunities, new products and services, and new revenue/business models in the context of the rapid penetration of AI technologies everywhere. I will seek to answer: Is AI just hype, or is it already happening? If it has not happened in your industry, is it impending? Do the bad impacts of AI outweigh the good?
For this project, we had to conduct research on a topic that was seen as a relevant area of study in Enterprise Systems and how it will be applicable in the future.
We chose to study the effects artificial intelligence will have on CRM systems. To view our findings, you can view the video here - https://www.youtube.com/watch?v=Fe55c60QPwY&t=9s
Jurnal industry 4.0 implication on human capatialSoni Riharsono
This document summarizes a review of the implications of Industry 4.0 on human capital. Industry 4.0 relies on emerging technologies like cyber-physical systems, cloud manufacturing, and big data analytics. While this may reduce some manual jobs, it is also expected to create new opportunities that demand highly skilled workers. However, current workers may lack the new skills required. The review finds that technical, methodological, social and personal competencies will be needed for employees to remain competitive. While some unskilled jobs may be lost, human abilities like creativity cannot be replaced, so employees need to develop new skills to adapt to changes from Industry 4.0.
How Can AI and IoT Power the Chemical Industry?Xiaonan Wang
AI, IoT and Blockchain tech briefing to the industry to showcase our research at NUS.
by Dr. Xiaonan Wang
Assistant Professor
NUS Department of Chemical & Biomolecular Engineering
A Study Of Artificial Intelligence And Machine Learning In Power SectorJasmine Dixon
This document discusses the application of artificial intelligence and machine learning in the power sector. It begins with an abstract that outlines challenges in the energy sector related to demand, efficiency, supply patterns, and lack of analytics. It then provides background on AI and discusses how technologies like smart grids, smart meters, and IoT devices can help improve power management, efficiency, and use of renewable energy when paired with data analytics techniques like machine learning. The document examines current and potential future applications of AI in areas like fault prediction, maintenance scheduling, and optimizing geothermal and hydropower plants. It concludes by stating that while advanced economies currently lead in AI applications for power, the technologies have potential to address energy access challenges in emerging markets as well.
Reliability based analytical engine as a service for industrial applications...Mayur Dvivedi
1. The document proposes developing a Reliability Based Analytical Engine (RBAE) as a service for industrial applications to improve asset reliability.
2. The RBAE would utilize condition monitoring data, historical databases, and machine learning to provide diagnostic and prognostic outputs. It would be trained in collaboration with human experts.
3. The goal of the RBAE is to enhance reliability, uptime, and life cycle value of physical assets through optimized maintenance and utilization based on its automated reliability analysis.
The Next Step For Aritificial Intelligence in Financial ServicesAccenture Insurance
As financial services firms strive to transform their businesses for a digital world, realize efficiencies, improve the customer experience and revitalize their growth, they increasingly see artificial intelligence-based (AI) technologies as key. For firms, the next wave of AI innovation is artificial neural networks.
4th International Conference on Data Science and Machine Learning (DSML 2023)gerogepatton
The 4th International Conference on Data Science and Machine Learning (DSML 2023) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Data Science and Machine Learning. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancements in the area of Data Science and Machine Learning.
Authors are solicited to contribute to the Conference by submitting articles that illustrate research results, projects, surveying works, and industrial experiences that describe significant advances in Data Science and Machine Learning.
This workshop presentation from Enterprise Knowledge team members Joe Hilger, Founder and COO, and Sara Nash, Technical Analyst, was delivered on June 8, 2020 as part of the Data Summit 2020 virtual conference. The 3-hour workshop provided an interdisciplinary group of participants with a definition of what a knowledge graph is, how it is implemented, and how it can be used to increase the value of your organization’s data. This slide deck gives an overview of the KM concepts that are necessary for the implementation of knowledge graphs as a foundation for Enterprise Artificial Intelligence (AI). Hilger and Nash also outlined four use cases for knowledge graphs, including recommendation engines and natural language query on structured data.
1. Industry 4.0 refers to the current trend of automation and data exchange in manufacturing technologies. It includes cyber-physical systems, IoT, cloud computing and cognitive computing.
2. Industry 4.0 is expected to significantly impact the job market. Many existing jobs will be automated, especially those involving physical labor, data collection and processing. However, it will also create demand for new skills like data analytics and machine learning.
3. For workers to adapt to Industry 4.0, new skill sets need to be developed, such as change management, emotional intelligence, process awareness, data science, IT proficiency, domain expertise, and big-picture awareness. The skills that are uniquely human, like creativity and complex problem-solving, cannot be automated.
Machine learning applications used in accounting and auditsvivatechijri
AI is an area of computer science that learns from large amounts of data, recognizes patterns, and makes predictions about future events. In the accounting and auditing professions, machine learning has been used increasingly over the last few years. This study therefore surveys current machine learning applications in accounting and auditing, with a focus on the Big Four organizations. The AI tools and platforms developed by the Big Four are examined through a content analysis. It was identified that the Big Four have developed several machine learning tools used for coordinating and managing predictable audits and for fully automated audits. Accounting processes such as accounts receivable and accounts payable management, preparation of expense reports, and risk assessment can easily be automated by AI. For instance, machine learning algorithms can match a received invoice, decide the right business ledger for recognition, and place it in a payment pool where a human specialist can inspect and submit the payment request to the payment queue.
STOCKSENTIX: A MACHINE LEARNING APPROACH TO STOCKMARKETIRJET Journal
This paper presents an approach for analyzing stock market news articles using web scraping, natural language processing, machine learning, and data visualization techniques. Key aspects of the approach include:
1) Web scraping is used to collect stock market news articles from various online sources.
2) Data preprocessing with Pandas cleans and structures the data. Sentiment analysis then categorizes the sentiment of each article as positive, negative, or neutral.
3) Matplotlib and other tools are used to visualize sentiment trends in an easily interpretable way to help identify patterns and aid decision making.
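The sentiment-categorization step of the pipeline can be sketched with a minimal lexicon-based classifier. This stands in for the paper's NLP stage purely for illustration; the word lists and scoring rule are assumptions, not the actual model:

```python
# Tiny illustrative sentiment lexicons (assumed, not from the paper).
POSITIVE = {"surge", "gain", "beat", "growth", "record", "rally"}
NEGATIVE = {"drop", "loss", "miss", "decline", "lawsuit", "crash"}

def classify(headline: str) -> str:
    """Label a headline positive, negative, or neutral by lexicon hits."""
    words = headline.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

headlines = [
    "Shares surge after earnings beat forecasts",
    "Profit decline triggers stock drop",
    "Company announces quarterly results",
]
for h in headlines:
    print(h, "->", classify(h))
```

In the described system this step would be preceded by web scraping and pandas-based cleaning, and its per-article labels would feed the Matplotlib trend visualizations.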
Data science is transitioning as technologies like big data, AI, and cloud computing become mainstream across businesses. This will lead to two major changes: increased automation of data processes and instantaneous delivery of analytics solutions. Future data scientists will need strong domain knowledge in addition to data skills to develop specialized, industry-focused predictive models. Many data science tasks will become automated, with AI and machine learning powering automated solutions that deliver faster, better results over time.
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSISpijans
This document discusses techniques for identifying fake news using social network analysis. It first reviews literature on existing fake news identification methods that use feature extraction from news content and social context. Deep learning models are then proposed to classify news as real or fake using datasets of news and social network information. The implementation achieves 99% accuracy on binary classification of news. Social network analysis factors like bot accounts, echo chambers, and information spread are discussed as enabling the spread of fake news online.
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSISpijans
The quick access to information on social media networks, together with its exponential rise, has made it difficult to distinguish between fake news and real facts. Rapid dissemination through sharing has amplified falsification exponentially. Preventing the spread of fake facts is also essential for the credibility of social media networks. It is therefore an emerging research task to automatically check statements for misinformation via their source, content, or author, and to prevent unauthenticated sources from spreading rumours. This paper demonstrates an artificial intelligence-based approach for identifying fake statements made by social network entities. Variants of deep neural networks are applied to evaluate datasets and check for the presence of fake news. The implementation produced up to 99% classification accuracy when the dataset was tested for binary (real or fake) labelling over multiple epochs.
A Study on the Applications and Impact of Artificial Intelligence in E Commer...ijtsrd
This document discusses the applications and impact of artificial intelligence in the e-commerce industry. It outlines several ways AI is used in e-commerce, including chatbots, handling customer data, making recommendations, and managing inventory. The impact of AI on e-commerce is also examined, such as enabling smart customer relationship management, improving operational efficiency, and becoming more customer-centric. The conclusion is that augmenting human experts with AI systems can scale operations and support the growth of the e-commerce industry.
Artificial Intelligence Applications and Its Impact on Library Management SystemIRJET Journal
This document discusses artificial intelligence applications and their impact on library management systems. It begins with an abstract that outlines how AI is being used more pervasively in libraries, including expert systems for reference services, book sorting robots, and virtual reality. The document then provides details on the foundations of AI, including representation, search, reasoning, and learning. It explains how each of these foundations are core requirements for any AI system. Finally, the document discusses several applications of AI in libraries, such as natural language processing for reference systems, expert systems, pattern recognition, and robotics to automate tasks like shelf reading and free up librarians for other duties.
Brno-IESS 20240206 v10 service science ai.pptxISSIP
It is my pleasure to be with you all today – thanks to my host for the opportunity to speak with you all today.
Host: Leonard Walletzky <qwalletz@fi.muni.cz> (https://www.linkedin.com/in/leonardwalletzky/) +420 549 49 7690
Google Scholar: https://scholar.google.com/citations?user=aUvbsmwAAAAJ&hl=cs
Katrina Motkova (https://www.linkedin.com/in/kateřina-moťková-mba-a964a3175/en/?originalSubdomain=cz)
Speaker: Jim Spohrer <spohrer@gmail.com> (https://www.linkedin.com/in/spohrer/) +1-408-829-3112
Presentation made for the event "Digital transformation in France and Germany: Consequences for industry, society & higher education" organized by the French-German University in cooperation with Institut Mines-Télécom https://www.dfh-ufa.org/fr/digital-transformation-in-france-and-germany/
RefugeeDo is a decentralized job marketplace that uses blockchain and artificial intelligence to connect refugees to jobs worldwide. It aims to address the high unemployment rates among refugees by providing a platform where refugees can create profiles, search and apply for jobs, and where employers can post open positions. The platform uses a mixed architecture where users interact both directly with smart contracts on the blockchain as well as through a backend server. It stores job postings on IPFS and user data such as profiles, applications, and reviews on Ethereum smart contracts to ensure security and transparency. The goal is to leverage new technologies like blockchain and AI to help tackle the global refugee crisis and enable refugees to contribute economically to their host countries.
World of Watson 2016 - Artificial Intelligence ResearchKeith Redman
Have you ever noticed that all the movies made about the topic of Artificial Intelligence portray the doom of humankind and the hero or heroine’s success at averting it? Hopefully we never truly get to that point. However, if the inner geek in you is interested in checking out what IBM Research is working on today, check out these sessions.
The document describes an AI-driven Occupational Skills Generator (AIOSG) that aims to automate the process of creating occupational skills reference documents. The AIOSG utilizes an intelligent web crawler, natural language processing, neural networks, and a blockchain to gather data on occupational skills from various sources, analyze the data, and generate standardized skills reference documents. It is meant to make the document creation process more efficient, data-driven, and able to incorporate rapidly changing skills demands compared to the traditional manual process. The system architecture and key components of data collection, analysis, skills ontology construction, and reference document generation are outlined.
Advanced resource allocation and service level monitoring for container orche...Conference Papers
This document proposes an architecture for advanced resource allocation and service level monitoring for container orchestration platforms. It begins with background on containerization and different container orchestration platforms like Docker Swarm, Kubernetes, and Mesos. It then discusses the need for resource-aware container placement and SLA-based monitoring to minimize container migration and ensure performance. The proposed architecture consists of different components like a request manager, information collector, policy manager, and resource manager to enable advanced scheduling and monitoring of containers on Kubernetes. The proposed solution aims to analyze future resource utilization to improve placement decisions and reduce issues after deployment.
Adaptive authentication to determine login attempt penalty from multiple inpu...Conference Papers
This document proposes an adaptive authentication solution that determines login penalties based on multiple input sources. It describes adding an IP address checker module to the existing Trust Engine component of the Mi-UAP authentication platform. The IP address checker would identify the source type of a user's IP address and apply the appropriate penalty, such as requiring additional authentication methods or blocking the user, depending on factors like whether the IP is on a blacklist database. The document outlines the process flow and provides examples of how penalties would be applied based on the identified source type.
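The penalty-selection logic of the IP address checker can be sketched as a small rule table. The source types, addresses, and penalties below are illustrative assumptions, not the actual Mi-UAP policy:

```python
# Hypothetical penalty table keyed by identified IP source type.
PENALTIES = {
    "blacklisted": "block",
    "new_location": "require_additional_factor",
    "known_corporate": "none",
}

BLACKLIST = {"203.0.113.7"}          # example documentation-range address
CORPORATE_RANGES = ("198.51.100.",)  # example trusted prefix

def classify_source(ip: str, seen_before: bool) -> str:
    """Identify the source type of a login attempt's IP address."""
    if ip in BLACKLIST:
        return "blacklisted"
    if ip.startswith(CORPORATE_RANGES) or seen_before:
        return "known_corporate"
    return "new_location"

def login_penalty(ip: str, seen_before: bool) -> str:
    """Map the identified source type to the penalty applied at login."""
    return PENALTIES[classify_source(ip, seen_before)]

print(login_penalty("203.0.113.7", seen_before=True))    # block
print(login_penalty("198.51.100.20", seen_before=False)) # none
print(login_penalty("192.0.2.99", seen_before=False))    # require_additional_factor
```

In the proposed system this check runs inside the Trust Engine, and the blacklist lookup would query an external reputation database rather than a local set.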
Absorption spectrum analysis of dentine sialophosphoprotein (dspp) in orthodo...Conference Papers
- The document analyzes the absorption spectrum of dentine sialophosphoprotein (DSPP) in gingival crevicular fluid (GCF) samples from orthodontic patients to develop a model for detecting orthodontic-induced inflammatory root resorption (OIIRR).
- GCF samples were collected from orthodontic patients at different treatment periods (3, 6, 12 months) and from non-orthodontic patients. Absorption spectroscopy found DSPP absorbance spectra increased with longer treatment duration, indicating more DSPP released due to more OIIRR.
- A qualitative model using SIMCA analysis accurately classified GCF samples into orthodontic and non-orthodontic groups.
A deployment scenario a taxonomy mapping and keyword searching for the appl...Conference Papers
This document discusses developing a taxonomy to map relationships between applications, virtual machines, hosts, and clients when performing upgrades and patches. It proposes creating a taxonomy based on analyzing errors that occur during application execution to understand dependencies. The methodology involves backing up configurations, testing connectivity between virtual networks and clusters before and after upgrades, and analyzing issues that arise. The goal is to establish structures for troubleshooting by classifying relationships between applications, libraries, operating systems, and browsers involved. This may improve determining the root cause of errors during upgrades involving virtualization.
Automated snomed ct mapping of clinical discharge summary data for cardiology...Conference Papers
The document discusses an approach to automatically map clinical terms in clinical discharge summary data from Malaysian hospitals to SNOMED CT terminology in order to improve the accuracy of queries for cardiology-related cases. Natural language processing techniques are used to preprocess the free-text discharge notes by removing formatting tags and identifying clinical terms, which are then mapped to SNOMED CT concepts using techniques like synonym matching, subsumption relationships, and identifying and excluding negative statements. The goal is to enrich the query results by standardizing the clinical terms to SNOMED CT and taking relationships like synonyms, subsumption, and negation into account to provide more accurate analytic results for monitoring and planning related to heart disease in Malaysia.
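The synonym-matching and negation-handling steps described above can be sketched as follows. This is a toy illustration of the mapping idea only: the concept identifiers are placeholders, not real SNOMED CT codes, and the cue list is a simplified assumption:

```python
import re

# Toy synonym table mapping clinical surface forms to concepts;
# the concept IDs below are placeholders, not real SNOMED CT codes.
SYNONYMS = {
    "heart attack": "CONCEPT:myocardial_infarction",
    "myocardial infarction": "CONCEPT:myocardial_infarction",
    "high blood pressure": "CONCEPT:hypertension",
    "hypertension": "CONCEPT:hypertension",
}

# Simplified negation cues; real systems use algorithms such as NegEx.
NEGATION_CUES = re.compile(r"\b(no|denies|without|negative for)\b")

def map_terms(note: str):
    """Map terms in a discharge note to concepts, excluding negated mentions."""
    results = []
    for sentence in note.lower().split("."):
        negated = bool(NEGATION_CUES.search(sentence))
        for phrase, concept in SYNONYMS.items():
            if phrase in sentence and not negated:
                results.append(concept)
    return sorted(set(results))

note = "Patient has a history of heart attack. Denies high blood pressure."
print(map_terms(note))  # ['CONCEPT:myocardial_infarction']
```

Mapping both "heart attack" and "myocardial infarction" to one concept is what standardizes free-text notes for querying, and excluding negated statements keeps a denied condition from counting as a cardiology case.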
Automated login method selection in a multi modal authentication - login meth...Conference Papers
The document proposes an intelligent model to automatically select the login authentication method in a multi-modal authentication system based on user behavior profiling. It analyzes user behavior data from login sessions to minimize real-time processing and prevent untrusted attempts, while facilitating a frictionless user experience. The system determines the user, retrieves their behavioral historical data, matches the user profile based on data retrieval, and selects the authentication method based on evaluating the user profile and environmental parameters. It then updates the user profile with new successful login session data for future evaluations.
Atomization of reduced graphene oxide ultra thin film for transparent electro...Conference Papers
This document summarizes research on using an atomization process to deposit reduced graphene oxide (rGO) thin films for use as transparent conductive electrodes. Key points:
- Graphene oxide was spray coated onto silicon wafers and glass slides using an ultrasonic atomizer. Thermal reduction processes were then used to make the films electrically conductive while maintaining optical transparency.
- Thinner films with 1-2 spray coats had higher transparency (>90%) but higher resistivity, while thicker 3-4 coat films had lower transparency (77.1%) but lower resistivity (5.3 kΩ/sq).
- Rapid thermal processing was more effective than plasma processing at reducing resistivity. Sheet resistance decreased
An enhanced wireless presentation system for large-scale content distribution (Conference Paper)
An enhanced wireless presentation system (eWPS) was developed to distribute presentation content to larger audiences over WiFi networks. The eWPS uses multiple access points connected via a high-speed Ethernet switch to provide WiFi coverage to audiences. It captures screenshots of presentations and stores them on an external web server for access by audience devices through a web browser. Testing showed the eWPS could serve over 125 audience devices with an average delay of 1.74ms per page load. System resources on the web server remained mostly idle, indicating it could potentially serve a much larger audience size.
An analysis of a large-scale wireless image distribution system deployment (Conference Paper)
This document describes two setups of a wireless image distribution system:
1. A setup using commercial network equipment like access points and an access controller, which supported over 125 connected devices and provided sufficient bandwidth for the system load in an auditorium with 159 seats.
2. A setup using a wireless mesh network of three NerveNet nodes, which provided a quick and easy setup without wired connections but needs further performance improvements. Results from tests of both setups were analyzed to evaluate the network technologies for smart community applications.
Validation of early testing method for e-government projects by requirement ... (Conference Paper)
The document describes a validation study of an Early Requirement Testing Method (ERTM) for e-government projects. Test engineers used the ERTM, which involves reviewing requirements documents and providing feedback, on six e-government projects. The number of defects found before and after applying the ERTM and providing interventions was compared using a statistical test. The results showed that overall, there was a statistically significant reduction in the number of defects found after applying the ERTM, suggesting it is useful for improving requirements documentation. However, one project saw an increase in defects due to additional requirements added later in the project.
The design and implementation of trade finance application based on Hyperledg... (Conference Paper)
This document describes the design and implementation of a trade finance application built on the Hyperledger Fabric permissioned blockchain platform. It discusses the architecture of blockchain-based applications in general and this trade finance application specifically. Key aspects covered include identifying different types of software connectors (linkage, arbitrator, event, adaptor) that are important building blocks in the architecture. The trade finance application uses connectors like the blockchain facade connector and block/transaction event connector to interface between layers and handle asynchronous event propagation. Overall the document aims to provide insights into architectural considerations and best practices for developing blockchain-based applications.
Towards predictive maintenance for marine sector in Malaysia (Conference Paper)
This research uses machine learning on sensor data from ships to predict failures of components and their remaining useful life. Interviews with marine experts identified significant maintenance items to prioritize for ship supply chains. The results were analyzed to provide recommendations to a government company on implementing predictive analytics and supply chain strategies for ship maintenance in Malaysia.
The new lead (II) ion selective electrode on free plasticizer film of pTHFA ... (Conference Paper)
This document describes the development of a lead ion-selective electrode (Pb2+-ISE) sensor based on a poly-tetrahydrofurfuryl acrylate (pTHFA) membrane without plasticizers using photo-polymerization. The sensor demonstrated a linear range of 0.1–10⁻⁵ M, a Nernstian slope of 26.5–29.8 mV/decade, a limit of detection of 3.24–3.98 × 10⁻⁶ M, and good selectivity against interfering ions. Sensor characterization showed comparable results to measurements using atomic absorption spectroscopy on artificial and real samples. Optimization of the lipophilic salt potassium tetrakis(4-chlorophenyl)borate and lead ionophore
This document summarizes security definitions for searchable symmetric encryption (SSE) schemes. It reviews the indistinguishability and semantic security game definitions, noting that attacks have succeeded against published schemes. It then proposes a new security game definition against distribution-based query recovery attacks, to better capture practical adversary capabilities. The goal is to define security in a way that implies the current indistinguishability and semantic security definitions.
This document discusses the implementation challenges of autonomous things and proposes a high-level architecture for a cloud robotics infrastructure to address these challenges. It explores existing platforms for autonomous things and identifies three main areas of complexity: development, execution, and operation. A proposed architecture is presented using the TOGAF framework, with core services for integrated development/testing/simulation and operation/monitoring/maintenance, and application services and technologies to realize these, including cloud, edge and robotics computing with virtualization and ROS. The architecture aims to ease autonomous things implementation through a super-converged system.
Study on performance of capacitor-less LDO with different types of resistor (Conference Paper)
The document summarizes a study on the performance of a capacitor-less low dropout (LDO) voltage regulator using different types of resistors. A 1.8V LDO voltage regulator was designed and simulated using five different resistor types in Cadence. The performance metrics compared included output voltage accuracy, phase margin, unity gain bandwidth, and power supply rejection ratio. Simulation results showed differences in LDO performance depending on the resistor type. The LDO with hpoly resistor had the best stability performance, while the LDO with pdiffb resistor produced the highest power supply rejection ratio. In conclusion, the type of resistor used can significantly impact key performance characteristics of a capacitor-less LDO regulator.
STIL test pattern generation enhancement in mixed signal design (Conference Paper)
This document describes a process for generating STIL test patterns from mixed signal design simulations in order to test digital blocks on an SoC. It involves simulating the mixed signal design, sampling the waveforms to generate test vectors, and converting those vectors into an ATPG-compliant STIL format using an automation program. This was implemented successfully at MIMOS Berhad, generating STIL test patterns that passed 100% of stuck-at tests.
The document discusses the implementation of an on-premise AI platform at MIMOS Berhad, a Malaysian research institute. The platform makes use of existing on-premise services such as a private cloud, distributed storage, and authentication platform. It provides an AI training facility using containers on VMs, with distributed training and GPU/CPU support. A version management system stores AI models and applications in Docker images. Deployment is supported on the private cloud and edge devices using containers. The goal is to enable internal development and hosting of AI projects in a secure, customizable manner.
Review of big data analytics (BDA) architecture trends and analysis (Conference Paper)
This document reviews big data analytics (BDA) architecture trends and analysis. It discusses the evolution of data analytics from ancient times to modern technologies like Hadoop and Spark. It describes key features of BDA like flexibility, scalability, and fault tolerance. Common BDA architectures like lambda and kappa architectures are summarized. The lambda architecture uses batch, speed, and serving layers to handle both real-time and batch processing. The kappa architecture simplifies this by removing the batch layer and handling all processing through streaming. Overall, the document provides a high-level overview of BDA architectures and technologies.
AI-Driven Occupational Skills Generator
E-PROCEEDING OF THE 8TH INTERNATIONAL CONFERENCE ON SOCIAL SCIENCES RESEARCH (ICSSR 2019)
(e-ISBN 978-967-0792-36-1). 18-19 November 2019, Imperial Heritage Hotel, Melaka, Malaysia.
Organised by https://worldconferences.net
AI-DRIVEN OCCUPATIONAL SKILLS GENERATOR (AIOSG)
Goon Wooi Kin (wk.goon@mimos.my), Kee Kok Yew (ky.kee@mimos.my), Nazarudin Mashudi
(nazarudin.mashudi@mimos.my), Amru Yusrin Amruddin (yusrin@mimos.my),
Ganesha Muthkumaran (ganesha.muthukumaran@mimos.my)
Enterprise Government Solutions Lab, Big Data Analytics Lab,
Artificial Intelligence Lab & Blockchain Lab,
Corporate Technology Division, MIMOS Berhad
ABSTRACT
The revolution of artificial intelligence (AI) is reshaping the world as we know it in terms of job
inequality and automation. Jobs are changing where the focus is moving more towards skillset rather
than academic qualifications as systems become more intelligent. Therefore, the need for every country
to align the skills development of their workforce towards the progression of technology is of paramount
importance. In this study, the authors present a novel idea and practical methods to capture and process
knowledge and experience in skillsets to generate occupational skillset guidelines. This platform, from
here onward referred to as the AI-driven Occupational Skills Generator (AIOSG), was developed at Malaysia's
applied research and development center, MIMOS Berhad. AIOSG captures information on occupational
structure, occupational area, competency levels, competency profile, competency-based curriculum, and
guidelines for assessment and training to create an occupational skills reference document. This
information is captured from market analysis, human resource departments (Government and private),
industry experts, and Internet literature. At present, such a reference document is produced manually
through conducting workshops involving industry experts. Hence, the document may not include
sufficient inputs from all active practitioners of an occupation, and it is often produced with some level
of obsolescence, in that it lags behind current technology and processes by the time it is published. AIOSG
leverages AI's strengths in Natural Language Processing (NLP) and Neural Networks, housed in a
web portal built on analytics to capture information from various contributing stakeholders on
occupational skills. The platform then processes relevant information and draws related information in
the creation of ontologies. All relevant information pertaining to a particular occupation is then
structured into a reference document which allows further review and inputs from experts. The platform
finally publishes the final, reviewed version of the document upon approval by the decision-making
authority, and records of the data are stored on a blockchain. AIOSG cuts down the time and effort needed
while increasing the accuracy of information in a reference occupational skills document that training
institutes, employers, and employees can use to close the gap between industry and the skilled
workforce.
Field of Research: Artificial Intelligence, Occupational Skills, Skillset, Skills Development,
Competency, Training, Natural Language Processing, Neural Networks, Workforce, Human Resources,
Blockchain, Hyperledger, Web Crawler, Analytics, Labour Market
---------------------------------------------------------------------------------------------------------------------------
1 Introduction
An occupational skills reference (OSR) document is a document that provides guidelines on the skill
sets necessary to perform tasks in a particular occupation. The document also provides a description of
the occupation, which is framed from the tasks and skill sets needed. Preparation of the OSR document
requires the collection and review of information regarding the skillsets required to complete the
tasks under the occupation. The information sources include, but are not limited to: interviews and workshops
with industry experts, labour market reports, projection reports of labour market demand
and supply, and other literature. With the current pace of technological advancement and automation
shortening the timeframe of skills relevance and driving demand for new and enhanced skills, the OSR for
various occupational sectors needs to be updated regularly.
However, the current process of production of an occupational skills reference (OSR) document is
resource-intensive, in terms of both time and cost. A lack of individuals with sufficient skillsets, and
of well-developed industries in which to use and enhance those skills, leads to less relevant skills
information being captured and utilised in the creation of the OSR document. Conducting a
literature review of the data collected against existing market literature, reports and other pertinent
documents can be costly in both time and human resources. The limited ability of humans to forecast
the skills that may be in demand in the near future is a further shortcoming, which may render the document
obsolete within a relatively short period after its publication. To address these shortcomings of the
current labour-intensive process, process engineering and artificial intelligence are employed to reduce
the time-to-market and financial cost of OSR documents.
2 Methodology
This section first introduces the AIOSG and then explains the system and components of the AI-driven
Occupational Skills Generator.
The AIOSG is composed of three components:
1. Analytics in the form of an intelligent web crawler that crawls and analyses the web and specific
government/private databases (which include information from market analysis, human
resource departments and industry experts) and literature repositories for:
a. information on the skills and competencies needed for a particular job title,
b. market demand for skillsets (existing, developing, non-existent in the current market),
and
c. regular feedback from industry experts on skills needed, both current and future, as
well as skills expected to become redundant.
2. An AI-based service that processes the information obtained from the crawler. The AIOSG
ontology construction comprises curriculum resource acquisition, domain concept extraction,
ontology relation mining, ontology description, and ontology updating, and leverages Natural
Language Processing (NLP) and Neural Networks.
3. A blockchain backbone based on Hyperledger that stores the records of the occupational skills,
traces the changes and updates to the records through digital footprints, and prevents the
records from being tampered with.
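The three components above can be read as a single pipeline: crawl, extract, record. A minimal sketch follows, under loudly stated assumptions: every function name and body here is a hypothetical stand-in (a toy crawler, a naive keyword extractor, and a hash-chained list in place of the Hyperledger backbone), illustrating the flow only, not the MIMOS implementation.

```python
# Hypothetical sketch of the AIOSG three-component pipeline.
# All names and bodies are illustrative stand-ins, not the real system.
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class SkillRecord:
    occupation: str
    skills: list = field(default_factory=list)

def crawl_sources(occupation: str) -> list:
    # Stand-in for component 1, the intelligent web crawler.
    return [f"{occupation} requires Python", f"{occupation} requires SQL"]

def extract_skills(documents: list) -> list:
    # Stand-in for component 2, the NLP/ontology service:
    # here, naive extraction of whatever follows "requires".
    skills = []
    for doc in documents:
        if "requires" in doc:
            skills.append(doc.split("requires", 1)[1].strip())
    return skills

def record_on_ledger(record: SkillRecord, ledger: list) -> str:
    # Stand-in for component 3, the Hyperledger backbone: an append-only
    # list where each entry chains to the previous entry's hash, giving
    # the tamper-evidence the paper describes.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"occupation": record.occupation,
                          "skills": record.skills,
                          "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    ledger.append({"payload": payload, "hash": digest})
    return digest

ledger = []
docs = crawl_sources("data engineer")
rec = SkillRecord("data engineer", extract_skills(docs))
record_on_ledger(rec, ledger)
```

The hash chaining is only a gesture at what a permissioned blockchain provides; the real system would submit transactions to Hyperledger peers rather than append to a local list.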
All data is finally displayed on a visual dashboard that gives a bird's-eye view of the occupational skills
reference. This view is used for decision making, publishing (generating the document format), and future
planning by the governing authority in collaboration with industry lead bodies, ensuring that relevant and
applicable occupational skills references are applied and can be utilised by the industry.
The overall system for AIOSG is as follows:
Figure 1: AIOSG system architecture
3 AIOSG Analytics (Data Crawler & Analysis)
3.1 Topics Related to Skillset
Topics related to the search term (in this case, skillsets) contain semantically related topics that can be
used to narrow down the search result by adding the topic to the initial search term. Figure 2 shows a
screenshot of the topics-related information in the Keyword Cloud.
[Figure 1 layers: Presentation Layer (AIOSG UI); Integration Layer (Developer API, REST); Analytics Layer (Data Crawler, Filtering Engine, Relational Engine, Language Detector); Artificial Intelligence Layer (ANN (SOM), NLP, Text Extract); Blockchain/Data Layer (Blockchain Distributed Records)]
Figure 2: Topics related to search term
3.2 Categories Related to Skillset
Categories related to the search term contain semantically related categories that can be used to narrow
down the search result by adding the category to the initial search term. Figure 3 shows a screenshot of
the categories-related information in the Keyword Cloud.
Figure 3: Other categories related to search term
3.3 Related Skillset Keyword Method
Figure 4 shows the process and flow of generating related keywords from the user's input search term.
Figure 4: Process and flow of generating related keywords from the user's input search term
3.3.1 Language Detector
Figure 5 details how the system performs language detection on the search term:
1. The user inserts a search term in the search form. Example: Computer
2. The language detector detects the language of the inserted search term. Example: Computer
(English), Komputer (Malay).
3. The keyword is tagged with the identified language.
[Figure 5: Language detector flow — search term inserted → search term tokenized → each token compared with a language detection service (example: Google language detection) → each keyword tagged with its language]
[Figure 4: Input: search term → Language Detector (3.3.1) → Keyword Generator using ontology, wiki, and lexical sources (3.3.2) → search WWW and compare contents of search results with related keywords (3.3.3) → assign frequency to related keywords (3.3.4). Outputs: 1) WWW links (web, wiki, news, books, blog, location, video, images); 2) Keyword Cloud; 3) topics related to keyword; 4) categories related to keyword]
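The 3.3.1 flow can be sketched as follows. The tiny wordlist-based `detect_language` is a hypothetical stand-in for the external language-detection service named above (e.g. Google language detection); only the tokenize-then-tag shape of the flow is taken from the paper.

```python
# Sketch of 3.3.1: tokenize the search term and tag each token with a
# detected language. The wordlist detector is a hypothetical stand-in
# for the external language-detection service.
ENGLISH = {"computer", "software", "developer"}
MALAY = {"komputer", "perisian", "pembangun"}

def detect_language(token: str) -> str:
    # Stand-in for the language detection service call.
    word = token.lower()
    if word in MALAY:
        return "ms"
    if word in ENGLISH:
        return "en"
    return "unknown"

def tag_search_term(search_term: str) -> list:
    """Tokenize the search term and tag each token with its language."""
    return [(tok, detect_language(tok)) for tok in search_term.split()]

print(tag_search_term("Computer Komputer"))
# → [('Computer', 'en'), ('Komputer', 'ms')]
```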
3.3.2 Get Related Keyword
Each keyword then goes through the process shown in Figure 6 to get any possible related keywords
from Wikipedia, a Machine Readable Dictionary (MRD), or WordNet, based on the tagged language:
• Each tokenized keyword queries the Wikipedia page for its detected language; all hyperlinked words
are grabbed and stored in the keywords repository database.
• Each tokenized keyword queries the MRD thesaurus; every synonym in the thesaurus is stored in the
keywords repository.
• Each tokenized keyword queries WordNet; every synonym in WordNet is stored in the keywords
repository.
Figure 6: Get related keyword flow
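The 3.3.2 flow can be sketched as the aggregation of three lookups into one repository. The three lookup functions below, and their canned demo data, are hypothetical stand-ins for the Wikipedia, MRD-thesaurus, and WordNet queries; only the union into the keywords repository is the point of the sketch.

```python
# Sketch of 3.3.2: gather related keywords from three sources into the
# keywords repository. All three lookups are hypothetical stand-ins
# with canned demo data.
def wikipedia_links(keyword: str, lang: str) -> set:
    # Stand-in: would return hyperlinked words from the Wikipedia page.
    demo = {("computer", "en"): {"hardware", "software"}}
    return demo.get((keyword, lang), set())

def mrd_synonyms(keyword: str, lang: str) -> set:
    # Stand-in: would return synonyms from the MRD thesaurus.
    demo = {("computer", "en"): {"machine"}}
    return demo.get((keyword, lang), set())

def wordnet_synonyms(keyword: str, lang: str) -> set:
    # Stand-in: would return synonyms from WordNet.
    demo = {("computer", "en"): {"computing machine"}}
    return demo.get((keyword, lang), set())

def related_keywords(keyword: str, lang: str) -> set:
    """Aggregate related words from all three sources (the repository)."""
    repo = set()
    repo |= wikipedia_links(keyword, lang)
    repo |= mrd_synonyms(keyword, lang)
    repo |= wordnet_synonyms(keyword, lang)
    return repo
```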
3.3.3 Search Result Content Compared to Related Keywords
All search results (web content, news, blogs, video descriptions, and image descriptions) returned by the
Internet search engine for the search term inserted by the user are stored in a temporary database. The
semantic similarity algorithm, influenced by Noah et al. (2007) and Li et al. (2006), then aggregates all
stored search results using a two-tier aggregation process, as shown in Figure 7:
a. Semantic similarity on WWW title:
[Figure 7: Semantic similarity algorithm — the search term and each WWW title form a joint word set; raw semantic vectors 1 and 2 are derived with the aid of WordNet and the MRD, refined into semantic vectors 1 and 2, and combined into a semantic similarity index]
• Each web search title is compared with the search term inserted by the user, using the method
shown in the flow chart in Figure 7.
• The joint word set is:
S = S1 ∪ S2 = {w1, w2, …, wn}; the wi are distinct.
Example:
S1: software developer Malaysia (search term by user)
S2: ethical hacker (WWW search title)
S = {Malaysia, software, developer, ethical, hacker}.
S is the set of distinct words generated from the combination of S1 and S2.
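For illustration, the joint word set above can be computed directly (a one-liner, shown only to pin down the definition):

```python
# The joint word set S = S1 ∪ S2 of distinct words from both sentences.
def joint_word_set(s1: str, s2: str) -> set:
    """Distinct words from the combination of S1 and S2."""
    return set(s1.split()) | set(s2.split())

print(joint_word_set("software developer Malaysia", "ethical hacker"))
```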
• Each word in S1 and S2 is compared with each word in S. Writing S1 = (w1, w2, …, wm) and
S2 = (q1, q2, …, qn), the comparisons form a similarity matrix X = [xi,j], with i = 1, …, m and
j = 1, …, n, where entry xi,j holds the comparison between word wi and word qj.
• Each element xi,j in the matrix is computed with the word-similarity measure
sim(w1, w2) = r(C, Mw1) × r(C, Mw2)
• where C is the set of unique overlapping words found in the meanings of w1 and w2, and M refers to the meanings of the respective words in WordNet. Therefore, r(C, Mw1) refers to the ratio between the count of meanings that contain any of the words in C and all the meanings associated with w1.
• For the calculation of the semantic vector Si, each entry is
si = s̃i · I(wi) · I(w̃i)
where s̃i is the word-similarity value and w̃i is the word in the joint word set most similar to wi. The value of I(w) is calculated by referring to the MRD dictionary, using the following formula:
I(w) = 1 − log(n + 1) / log(N + 1)
where n is the frequency of word w in the corpus and N is the total number of words in the corpus.
• Then, the semantic similarity between the two compared sentences is simply the cosine coefficient between the two semantic vectors:
Ss = cos(s1, s2) = (s1 · s2) / (‖s1‖ ‖s2‖)
• Ss (the semantic similarity between the search term and each WWW title) will be stored in the database. Ss = 1.0 means the search term is semantically identical to the WWW title, while Ss = 0 means no semantic similarity.
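The final cosine step can be sketched in Go (a minimal illustration; the example vectors are invented, whereas in the system the semantic vectors come from the WordNet/MRD computation above):

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine coefficient between two semantic vectors,
// i.e. Ss = (s1 · s2) / (‖s1‖ ‖s2‖).
func cosine(s1, s2 []float64) float64 {
	var dot, n1, n2 float64
	for i := range s1 {
		dot += s1[i] * s2[i]
		n1 += s1[i] * s1[i]
		n2 += s2[i] * s2[i]
	}
	if n1 == 0 || n2 == 0 {
		return 0
	}
	return dot / (math.Sqrt(n1) * math.Sqrt(n2))
}

func main() {
	a := []float64{0.9, 0.4, 0.0, 0.7}
	b := []float64{0.9, 0.4, 0.0, 0.7}
	c := []float64{0.0, 0.0, 1.0, 0.0}
	fmt.Printf("Ss(identical) = %.1f\n", cosine(a, b)) // 1.0: semantically the same
	fmt.Printf("Ss(disjoint)  = %.1f\n", cosine(a, c)) // 0.0: no overlap
}
```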
b. Content categorisation:
• The content of each selected WWW title will be categorised using the Naive Bayes formula (“Naïve Bayes classifier,” n.d.); the probability that a given document D contains all of the words wi, given a class C, is (Strickland, 2014, p. 75)
p(D | C) = ∏i p(wi | C)
The set of words is drawn from the keywords repository produced by the previous process, as shown in Figure 7.
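The Naive Bayes likelihood can be sketched as a product of per-word class probabilities (a toy illustration: the probabilities below are invented for the example, whereas the real module would estimate them from the keywords repository):

```go
package main

import "fmt"

// docGivenClass computes p(D|C) = Π p(w_i|C) under the Naive Bayes
// independence assumption, over the words of document D.
func docGivenClass(words []string, pWordGivenClass map[string]float64) float64 {
	p := 1.0
	for _, w := range words {
		p *= pWordGivenClass[w] // unseen words give 0; real systems apply smoothing
	}
	return p
}

func main() {
	// Hypothetical per-word probabilities for the class "software developer"
	pSoftwareDev := map[string]float64{
		"java": 0.2, "network": 0.1, "scalpel": 0.0001,
	}
	doc := []string{"java", "network"}
	fmt.Printf("p(D|C) = %.3f\n", docGivenClass(doc, pSoftwareDev)) // 0.2 × 0.1
}
```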
3.3.4 Assign Frequency to Related Keywords
The top 10 WWW titles with the highest semantic similarity and categorisation scores will be selected.
Figure 8 shows the process used to assign frequency to the related keywords:
All words in the content of the top 10 selected WWW titles will be tokenized.
Each tokenized word will go through the tag cloud creation module.
Higher-frequency words will be represented as keywords with a bigger font in the Keyword Cloud.
Finally, the related keywords are created.
Figure 8: Assign frequency to related keywords
Finally, the related keywords for the user's search term are created by the method explained above and listed in the Keyword Cloud section of the AIOSG system. A keyword in the Keyword Cloud can either be used to initiate a new search or to refine the search by adding the keyword to the initial search term. This returns a narrowed-down result, making the required information easier to obtain.
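The frequency assignment that drives the Keyword Cloud font size can be sketched as a simple token count (an illustration only; the real module would also filter stop words and render the cloud):

```go
package main

import (
	"fmt"
	"strings"
)

// wordFrequencies tokenizes the selected WWW title contents and counts
// how often each word occurs; higher counts mean a bigger cloud font.
func wordFrequencies(contents []string) map[string]int {
	freq := make(map[string]int)
	for _, c := range contents {
		for _, w := range strings.Fields(strings.ToLower(c)) {
			freq[w]++
		}
	}
	return freq
}

func main() {
	contents := []string{
		"software developer jobs in malaysia",
		"hire a software developer",
	}
	freq := wordFrequencies(contents)
	// "software" and "developer" appear twice, so they would be rendered larger
	fmt.Println(freq["software"], freq["developer"], freq["malaysia"]) // prints: 2 2 1
}
```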
4 AIOSG Artificial Intelligence
In the context of Artificial Intelligence (A.I.), the ontology matching method is essential because the model can be grounded on element, structure, instance or multiple strategies (Hu et al., 2008; Pirro and Talia, 2010; Belhadef, 2011; Liu et al., 2012). As described in Zhu, Zhang, He, Wen, and Li (2018), the multi-strategy method works best because the conceptual semantics and the hierarchy between concepts are weighted and integrated, whereas other strategies either lack a description function for attributes and relations or have no intersection between the instance sets of the two ontologies.
The AIOSG ontology is automated using a web crawler, text mining and association rule mining. Following this method, the ontology construction can be divided into five phases: curriculum resource acquisition, domain concept extraction, ontology relation mining, ontology description and ontology updating (Figure 9, adapted from Zhu et al. (2018)).
Figure 9: Flow chart of the curriculum ontology construction
Figure 9: Flow chart of the curriculum ontology construction
4.1 Natural Language Processing (NLP)
In order to keep the curriculum current for the variety of jobs and new skills in the market, the ontology extension and the update of recent skills and job libraries have to happen automatically, an area where ontology-matching AI works best. Among the machine learning approaches, there are two applicable AI approaches, Natural Language Processing (NLP) and Neural Networks, which are discussed in this paper.
Figure 10: Extracting Information from text and NLP Process
Figure 10 above shows how a web crawler extracts information from text, which also illustrates how NLP and ontological processing work. Firstly, an ontology can be used directly when building the lexicon, defining the terms (concepts and relations) for content words. Secondly, an ontology is a knowledge base, expressed in a formal language, and therefore provides knowledge for more complex language processing.
4.2 Neural Networks for Ontology Matching
In the framework for ontology matching, extended model of AIOSG represents the unsupervised neural
network based learning which is suitable to the knowledge structure because it fits the concepts and
relations for content words i.e. taxonomy. This is based on the applicability of a self-organising
map(SOM) or self –organising feature map(SOFM) as a type of artificial neural network(ANN) that is
trained to produce a low dimensional, discretised representation of the input space of the training
samples which is called a map, another method used in dimensionality reduction. The unique approach
in SOM as competitive learning is applied as opposed to error correction learning such as
backpropagation with gradient descent. Based on weight, the neurons are initialised either to small
random values or sampled evenly from the subspace spanned by the two largest principal component
eigenvectors. When a training example is used, its Euclidean distance to all weight vectors is computed,
of which the most similar weights to the inputs will be called best matching unit(BMU). The update
formula for a neuron v with weight vector Wᵥ(s) is
Where s is the step index, t an index into the training sample, u is the index of the BMU for the input
vector D(t), α(s) is a monotonically decreasing learning coefficient; θ(u,v,s) is the neighbourhood
function which gives the distance between the neuron u and the neuron v in steps.
Variables
These are the variables needed, with vectors in bold:
s is the current iteration
λ is the iteration limit
t is the index of the target input data vector in the input data set D
D(t) is a target input data vector
u is the index of the best matching unit (BMU) in the map
θ(u, v, s) is a restraint due to distance from the BMU, usually called the neighborhood function
α(s) is a learning restraint due to iteration progress
Algorithm
1. Randomise the node weight vectors in a map
2. Randomly pick an input vector D(t)
3. Traverse each node in the map
Use the Euclidean distance formula to find the similarity between the input vector and
the map’s node’s weight vector
Track the node that produces the smallest distance (node is BMU)
4. Update the weight vectors of the nodes in the neighborhood of the BMU by pulling them closer
to the input vector
5. Increase s and repeat from step 2 while s< λ
5 AIOSG Blockchain (Data Storage)
The data from the Artificial Intelligence block will then be stored in a blockchain, Hyperledger Fabric (Hyperledger, 2019). By the nature of blockchain, the data is distributed among nodes and immutable, meaning it cannot be changed. Data retrieved from the blockchain therefore reflects a more accurate outcome. Data has to be agreed upon by consensus before it is stored in the blockchain, which creates data trusted by the experts/AI. Even if a third party wants to change the data, the change will occur only in their own node. This action, however, will fail to add or modify the information in the blockchain, because the blockchain technology always cross-references with the other nodes to check whether the data is the same, and checks whether the block hash matches the previous block, thus demonstrating immutability.
Figure 11: Representation of a blockchain network
Figure 12: Block containing transactions
Referring to Figure 12, blockchain creates blocks in an append-only structure: existing blocks cannot be deleted or updated. When the data or transactions inside need updating, the update is added as a new block. Each block has its own hash value before it is appended to the chain of blocks.
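The append-only, hash-linked behaviour can be sketched with SHA-256 (a simplified model for illustration, not Hyperledger Fabric's actual block format):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Block links to its predecessor by storing the previous block's hash.
type Block struct {
	Data     string
	PrevHash [32]byte
}

// hash derives the block's own hash from its data and its link, so
// changing either one breaks every later link in the chain.
func (b Block) hash() [32]byte {
	return sha256.Sum256(append([]byte(b.Data), b.PrevHash[:]...))
}

func main() {
	genesis := Block{Data: "genesis"}
	second := Block{Data: "occupation: penetration tester", PrevHash: genesis.hash()}

	// Verification: recompute the predecessor's hash and compare
	fmt.Println("chain valid:", second.PrevHash == genesis.hash()) // true

	// Tampering with the genesis data changes its hash, so the link breaks
	genesis.Data = "tampered"
	fmt.Println("after tamper:", second.PrevHash == genesis.hash()) // false
}
```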
Figure 13: Links of blocks
Referring to Figure 13, the data hash links each block to the next; the link is created when a block is added upon an agreed consensus. Due to this property of blockchain, if a person tries to edit their data, the block's hash will change and it will no longer link to the previous hash. All nodes will then cross-reference each other and check whether the data is legitimate. If the data entered does not concur with the data found in the other nodes, the blockchain will pull the latest correct block from the other nodes.
5.1 Processed Data Stored in Blockchain
The processed skills and competencies data will be stored in the blockchain to ensure the integrity and
security of the original data. Figure 14 shows the process flow of how data is stored in the blockchain.
Figure 14: Data submission to blockchain flow
(Figure 14 flow: processed data captured using AI → data submitted via an invoke call to the blockchain API → transaction proposal submitted for consensus approval → approval acquired from consensus and peers → transaction signed by peers and submitted for block creation → block containing the processed skills and competencies data created and appended to the chain of blocks.)
The processed skills and competencies data will be sent to the blockchain API, which then submits a transaction proposal to the peers in the blockchain network. The peers endorse the transaction proposal and return the simulated transactions along with the endorsing peers' signatures. The application waits until it receives enough endorsed transaction proposals, then sends the endorsed transaction to the ordering service to create a new block and update the ledger.
A smart contract, or chaincode, is required by the blockchain to store data. The following chaincode shows how an occupation is added to the blockchain.
func (m *AIOSGChaincode) addOccupation(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) != 7 {
		return shim.Error("Incorrect number of arguments. Expecting 7 args")
	}
	// Input sanitation
	fmt.Println("- Start Submit Occupation -")
	if len(args[0]) <= 0 { // User
		return shim.Error("User Required!")
	}
	if len(args[1]) <= 0 { // Occupation
		return shim.Error("Occupation Required!")
	}
	user := args[0]
	occp := args[1]
	// Scan for the first unused occupation ID, then store the record under it
	for tsid := 1; ; tsid++ {
		tid := strconv.Itoa(tsid)
		idAsBytes, err := stub.GetState(tid)
		if err != nil {
			return shim.Error("Failed to get state for " + tid)
		}
		if len(idAsBytes) != 0 {
			continue // ID already taken; try the next one
		}
		fmt.Println("No record found for " + tid + " ! Safe to add !")
		occupation := &Occupation{OccID: tid, OccName: occp}
		tsJSONasBytes, err := json.Marshal(occupation)
		if err != nil {
			return shim.Error(err.Error())
		}
		// Store the occupation under its ID (the key that was checked above)
		if err = stub.PutState(tid, tsJSONasBytes); err != nil {
			return shim.Error(err.Error())
		}
		// Create a composite index key so the record can be queried later
		tsKey, err := getTSKey(stub, occupation.OccID, occupation.OccName)
		if err != nil {
			return shim.Error("Error getting Occupation key: " + err.Error())
		}
		fmt.Println(tsKey)
		if err = stub.PutState(tsKey, []byte{0x00}); err != nil {
			return shim.Error(err.Error())
		}
		// indexed and saved; log the submitting user
		fmt.Println("--- end submit Occupation successfully ---", user)
		break // record stored; stop scanning for IDs
	}
	return shim.Success(nil)
}
Figure 15: Smart contract to add occupation
Figure 15: Smart contract to add occupation
The data is added into a struct. A struct defines the structure in which the captured data is stored. Figure 16 shows the structure of the data stored in the blockchain.
type Occupation struct {
OccID string `json:"occID"`
OccName string `json:"occupation"`
Keywords []string `json:"keywords"`
Skillset []string `json:"skillset"`
}
Figure 16: Data structure stored in blockchain
5.2 Digital Footprint Using Blockchain
The processed skills and competencies data stored in the blockchain is immutable. Any illegal update will be disregarded, as the blockchain cross-references with the other nodes to check whether the data is the same. If the data is changed, the data hash will change too, disconnecting the block from the original chain of blocks. This ensures the integrity of the data.
A digital footprint is established using blockchain. Blockchain traces records from the beginning to the
end (Blockchain Network, 2019). This is enabled using the block hash which are chained together.
Using this feature, any data created or updated in the blockchain leaves a trace of who has invoked the
call as it stores the identity of user too. Not only does the blockchain trace the state of the data, it also
traces the person who invokes it.
Figure 17: Digital footprint in blockchain
Figure 17 shows how the blocks are chained together. Each update or addition of data creates a data hash and stores the identity of the person who invoked the function. This ensures a digital footprint for each processed skills and competencies record.
5.3 Processed Data Retrieved from Blockchain for Comparison
The processed skills and competencies data stored in the blockchain will be retrieved by the system and compared with the skills and competencies database of the organisation. Since the integrity of the processed data is maintained, it can be reliably compared with that of the organisation. Figure 18 shows how the data is retrieved from the blockchain for comparison.
Figure 18: Process flow of data retrieval from blockchain flow
A data query call is sent to the blockchain API requesting the specific data. This request is then routed to any of the peers in the blockchain network, which searches the blockchain for the requested data. If it exists, the verified data is returned to the application for its use.
A function in the smart contract allows data retrieval using an occupation keyword; it returns the latest data related to the occupation. Figure 19 shows the smart contract used to retrieve data related to the occupation:
(Figure 18 flow: data is retrieved using a query call to the blockchain API → the query call is directed to a peer → the data is verified against signed blocks → the processed data, if it exists, is returned to the application.)
func (m *AIOSGChaincode) searchKeyword(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) != 1 {
		return shim.Error("Incorrect number of arguments. Expecting a keyword")
	}
	// CouchDB rich-query selector: match records whose keywords array
	// contains the supplied keyword
	queryString := fmt.Sprintf(`{"selector":{"keywords":{"$elemMatch":{"$eq":"%s"}}}}`, args[0])
	queryResults, err := getQueryResultForQueryString(stub, queryString)
	if err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(queryResults)
}
Figure 19: Function to retrieve data from blockchain
6 Generated Occupational Skills Reference
A sample table of related knowledge and skills generated by the AIOSG system for a particular OSR is
given below. It shall include occupational structure, occupational area, competency levels, competency
profile, competency based curriculum, and guidelines for assessment and training. With more sources
of information from the Internet in the form of unstructured data and Government and private sector
databases in the form of structured data as well as verification and subject matter knowledge from
industry experts and lead bodies, the final output will be generated quickly and more accurately reflect
the actual real-world knowledge and skills.
OSR TITLE: Cybersecurity Penetration Tester

REQUIRED ACTIVITY: Manage IP Network

RELATED KNOWLEDGE
1.1 Network documentation and change management
1.2 IP protocol stack layers, including: role of a layered protocol stack; key functions of each layer of the IP stack
1.3 Ethernet operation and addressing structure, including: Ethernet operating principles; Ethernet frame structure and frame fields; MAC address structure
1.4 Transport layer protocols, including: TCP and UDP; TCP flow control
1.5 Network devices, including: router; switch; networking interface
1.6 IP network management tools and software: protocol analyser; command line

RELATED SKILLS
1.1 Configuring and operating between layers of the IP protocol stack
1.2 Interpreting Ethernet features and operations, configuration and troubleshooting
1.3 Interpreting key transport layer protocol operations and acting on them
1.4 Identifying key network devices and their attributes
1.5 Connecting to key network devices
1.6 Utilisation of suitable tools and software for managing IP networks
7 Conclusion
This paper presents a system to address the gaps in generating occupational skills reference (OSR)
documents for the workforce. The system crawls for data from the Internet and other relevant databases
in addition to taking account of inputs from industry experts. This is to form a complete picture in terms
of inputs toward a particular job sector. The inputs are then processed in terms of the relative nature of
the skills to a particular job by way of Artificial Intelligence. The processed data is then verified by the
industry expert and lead bodies’ panel to generate the final output. The final OSR is stored in the
blockchain network to ensure traceability in terms of updates of such records and be put up for review
by a panel of experts for decision-making, publishing and future workforce planning. Malicious parties
cannot easily tamper with the records of the documents and associated materials, due to the immutable nature
of blockchain. Through the enablement of such a system, relevant and applicable occupational skills
can be utilised by various industries.
8 References
1. Noah, S.A., Amruddin, A.Y., & Omar, N. (2007). Semantic Similarity Measures for Malay
Sentences. [Ebrary version]. Retrieved from
http://books.google.com.my/books?id=8pxUkfwqt_0C&pg=PA117&dq=amru+semantic+mal
ay&source=gbs_toc_r&cad=4
2. Li, Y., Mclean, D., Bandar, Z.A., O'Shea, J.D., & Crockett, K. (2006). Sentence similarity
based on semantic nets and corpus statistics. [Ebrary version]. Retrieved from
http://ants.iis.sinica.edu.tw/3BkMJ9lTeWXTSrrvNoKNFDxRm3zFwRR/55/Sentence%20Si
milarity%20Based%20on%20Semantic%20Nets%20and%20corpus%20statistics.pdf
3. Strickland, J. (2014). Predictive Analytics Using R. Colorado Springs, Simulation Educators.
4. Zhu, Y. C., Zhang, W., He, Y., Wen, J. B., & Li, M. Y. (2018). Design and implementation of
curriculum knowledge Ontology – Driven SPOC Flipped Classroom Teaching Model.
Educational Sciences: Theory & Practice, 18(5), 1351-1374.
5. Hyperledger (2019). Introduction to Hyperledger Fabric. Retrieved from https://hyperledger-
fabric.readthedocs.io/en/release-1.4/blockchain.html
6. Blockchain Network (2019). Retrieved November 2019, from https://hyperledger-
fabric.readthedocs.io/en/release-1.4/network/network.html.