The semantic MDM system is an innovation in consolidating reference data, unifying the services that process it, consolidating knowledge in semantic models, and standardizing data exchange processes.
This paper presents a semantic model which delivers personalized audio information. The personalization process is automated and decentralized. The metadata supporting personalization are separated into two categories: metadata describing user preferences, stored at each user, and resource adaptation metadata, stored at the server. The multimedia models MPEG-21 and MPEG-7 are used to describe the metadata. The Web Ontology Language (OWL) is used to produce and manipulate the corresponding ontological descriptions.
Re-Engineering Databases using Meta-Programming Technology (Gihan Wikramanayake)
G. N. Wikramanayake (1997) "Re-engineering Databases using Meta-Programming Technology". In: R. Ganepola et al. (eds), 16th National Information Technology Conference on Information Technology for Better Quality of Life, pp. 1-14. Colombo: Computer Society of Sri Lanka (CSSL), Jul 11-13. ISBN 955-9155-05-9.
A survey of models for computer networks management (IJCNC Journal)
The virtualization concept, along with its underlying technologies, has been warmly adopted in many fields of computer science. In this direction, network virtualization research has presented considerable results. In a parallel development, the convergence of two distinct worlds, communications and computing, has increased the use of computing server resources (virtual machines and hypervisors acting as active network elements) in network implementations. As a result, the level of detail and complexity in such architectures has increased, and new challenges need to be taken into account for effective network management. Information and data models facilitate infrastructure representation and management and have been used extensively in that direction. In this paper we survey available modelling approaches and discuss how these can be used in the virtual machine (host) based computer network landscape; we present a qualitative analysis of the current state of the art and offer a set of recommendations on adopting any particular method.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
A study secure multi authentication based data classification model in cloud ... (IJAAS Team)
Abstract: Cloud computing is among the most popular terms in enterprises and the news. The concept has become reality thanks to fast Internet bandwidth and advanced cooperation technology. Resources on the cloud can be accessed through the Internet without self-built infrastructure. Effectively managing security in cloud applications is essential. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in a dataset; these tools include mathematical algorithms, statistical models, and Machine Learning (ML) algorithms. In this paper the authors use an improved Bayesian technique to classify the data and encrypt the sensitive data using hybrid steganography. The encrypted and non-encrypted sensitive data is sent to the cloud environment, and the parameters are evaluated with different encryption algorithms.
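As a rough illustration of the classification step the abstract mentions (a plain categorical naive Bayes, not the authors' improved Bayesian technique), the following minimal Python sketch classifies toy records; the feature names and labels are invented for the example:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal categorical naive Bayes with Laplace smoothing."""
    def fit(self, rows, labels):
        self.classes = Counter(labels)
        self.n = len(labels)
        # counts[class][feature_index][value] = occurrences
        self.counts = defaultdict(lambda: defaultdict(Counter))
        self.values = defaultdict(set)  # distinct values seen per feature
        for row, y in zip(rows, labels):
            for i, v in enumerate(row):
                self.counts[y][i][v] += 1
                self.values[i].add(v)
        return self

    def predict(self, row):
        best, best_score = None, float("-inf")
        for y, cy in self.classes.items():
            score = math.log(cy / self.n)  # log prior
            for i, v in enumerate(row):
                num = self.counts[y][i][v] + 1          # Laplace smoothing
                den = cy + len(self.values[i])
                score += math.log(num / den)
            if score > best_score:
                best, best_score = y, score
        return best

# Toy data (hypothetical): (bandwidth, payload type) -> sensitivity class
rows   = [("high", "text"), ("high", "image"), ("low", "text"), ("low", "image")]
labels = ["public", "sensitive", "public", "sensitive"]
clf = NaiveBayes().fit(rows, labels)
print(clf.predict(("high", "image")))  # image payloads were labeled sensitive
```

In a pipeline like the one described, records predicted as sensitive would then be routed to the encryption step before upload.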
CARARE 2.0: Metadata schema for 3D Cultural Objects (3D ICONS Project)
D’Andrea, A., Niccolucci, F. and Fernie, K., 'CARARE 2.0: a metadata schema for 3D Cultural Objects'. Digital Heritage 2013, International Congress, forthcoming IEEE Proceedings.
The document discusses database management systems (DBMS) and their advantages over traditional file-oriented data storage. It describes the key components of a DBMS, including the data definition language (DDL) used to define the database schema, the data manipulation language (DML) used to query and manipulate data, and database models like relational, hierarchical and network models. The document provides examples of how a sample education database could be structured in a relational model using tables, attributes, and relations.
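The DDL/DML split described above can be sketched with Python's built-in sqlite3 module; the small education-database schema below is illustrative, not the document's actual example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# DDL: define the schema of a small education database
cur.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE course  (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE enrolment (
    student_id INTEGER REFERENCES student(id),
    course_id  INTEGER REFERENCES course(id),
    PRIMARY KEY (student_id, course_id)
);
""")

# DML: populate and query the tables
cur.execute("INSERT INTO student VALUES (1, 'Amal')")
cur.execute("INSERT INTO course  VALUES (10, 'Databases')")
cur.execute("INSERT INTO enrolment VALUES (1, 10)")
rows = cur.execute("""
    SELECT s.name, c.title
    FROM student s JOIN enrolment e ON s.id = e.student_id
                   JOIN course c    ON c.id = e.course_id
""").fetchall()
print(rows)  # [('Amal', 'Databases')]
```

The join illustrates the relational model's appeal: relationships live in data (the enrolment table), not in application code.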
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel... (IJERD Editor)
This document summarizes a study that aimed to identify value co-creation attributes that influence the UTM Institutional Repository (UTM IR), an e-service application. The study used interviews with UTM IR providers and users to collect data on the e-service based on the DART model of value co-creation. The DART model examines dialogue, access, risk, and transparency between customers and providers. Interview responses were coded according to the DART building blocks. A gap analysis of the coded provider and user responses identified attributes influencing the UTM IR from a value co-creation perspective. The findings aimed to help evaluate the UTM IR e-service based on customer and provider value co-creation.
Data Exchange Design with SDMX Format for Interoperability Statistical Data (Nooria Sukmaningtyas)
Today’s concept of Open Government Data (OGD), promoting openness, transparency, and ease of access to data owned by government agencies, is becoming increasingly important. The initiative emerges from users' demand for the data held by government agencies. Data services providing easy, cheap, fast, and interoperable access are needed by users and have become an important performance indicator for the respective agencies. Statistical Data and Metadata Exchange (SDMX) is a new standard format for data dissemination, particularly for exchanging statistical data and metadata via the Internet; in this respect, SDMX supports the implementation of OGD projects. This paper covers the technical design, development, and implementation of a data and metadata exchange service for statistical data using the SDMX format to support data interoperability through web services. Three results are proposed: (i) a framework for standardizing the structure of the statistical publications data model with SDMX; (ii) a design architecture of the data-sharing model; and (iii) a web service implementation of the data and metadata exchange service using the Service Oriented Analysis and Design (SOAD) method. Implementation at Statistics Indonesia (BPS) is chosen as a case study to prove the design concept. Quantitative assessment and black-box testing show that the design achieves its objective.
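A minimal sketch of what serializing one statistical series in an SDMX-like layout could look like, using Python's standard xml.etree module; the element and attribute names below follow the SDMX generic-data style but are simplified placeholders, not the full SDMX-ML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical annual observations for one indicator: (time period, value)
observations = [("2020", 270.2), ("2021", 272.7)]

# One Series element with dimension attributes; each observation becomes
# an Obs child carrying its period and value
series = ET.Element("Series", {"FREQ": "A", "REF_AREA": "ID"})
for period, value in observations:
    ET.SubElement(series, "Obs", {"TIME_PERIOD": period, "OBS_VALUE": str(value)})

xml_bytes = ET.tostring(series)
print(xml_bytes.decode())
```

A web service endpoint would return such a document over HTTP, letting any SDMX-aware consumer parse the structure without custom integration code.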
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science, and Technology, covering new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Why MarkLogic: Addressing the Challenges of Unstructured Information (DomSpitz)
This document discusses the challenges of storing and managing unstructured information like documents, reports, emails, and media files. It describes how unstructured information is heterogeneous, taking many different formats, hierarchical with nested relationships, and changing unpredictably over time. It introduces MarkLogic as a database designed specifically for unstructured information that can address these challenges.
Information Architecture System Design (IA) (Billy Choi)
The document discusses information architecture system design and related topics. It includes sections on definition, methodology, career, challenges, and references. The methodology section states that deeply understanding people is important for designing systems for people. The challenges section notes some difficulties in information architecture work. The references section lists various sources on information architecture.
This document discusses perspectives on big data applications for database engineers and IT students. It summarizes key concepts of big data and MongoDB, a popular NoSQL database for managing big data. It then demonstrates practical learning activities using MongoDB, such as installation, terminology, and basic syntax. The document concludes by emphasizing the importance of skills in big data and cloud computing for IT professionals and recommends further research on MongoDB security.
Why MarkLogic: Addressing the Challenges of Unstructured Information with Pur... (MarkLogic Corporation)
This paper describes why MarkLogic Server helps organizations leverage unstructured information more effectively. It is intended for technology executives and project leaders who recognize the potential value possible by taking better advantage of the estimated 70 to 90 percent of today's "unstructured" information. For commercial organizations, this may lead to competitive advantage or new business opportunities, while government agencies may obtain greater mission advantage.
This paper is also relevant for readers who recognize the most commonly used tools today are not optimized to leverage unstructured information since most were designed for structured data. With the right tools, there is an opportunity for significant improvement with regard to agility, efficiency, flexibility, performance, and scalability.
Aspect-Oriented Programming (AOP) provides new constructs and concepts for handling secondary requirements in applications. The secondary requirements, i.e. crosscutting concerns, of Internet of Things (IoT) applications stem from the complexity of interactions and from implementing crosscutting concerns over the core IoT architecture. Realizing the full potential of IoT applications requires a new abstraction design technique. This paper proposes an abstract class element as a design approach that provides better separation of concerns. The proposed approach is accompanied by gathering relevant contextual properties pertaining to the environment of IoT interactions. A new architectural aspect-aware definition is proposed for tracking the logic of interaction characteristics in the IoT components being designed.
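The separation-of-concerns idea behind AOP can be illustrated, independently of the paper's IoT specifics, with a Python decorator acting as a simple "aspect" that weaves logging advice around a core function; all names here are hypothetical:

```python
import functools

calls = []  # record of advice executions, for demonstration

def logging_aspect(func):
    """Crosscutting concern (logging) woven around core logic via a decorator."""
    @functools.wraps(func)
    def advice(*args, **kwargs):
        calls.append(f"before {func.__name__}")   # before-advice
        result = func(*args, **kwargs)
        calls.append(f"after {func.__name__}")    # after-advice
        return result
    return advice

@logging_aspect
def read_sensor(sensor_id):
    # Core concern: the business logic stays free of logging code
    return {"sensor": sensor_id, "value": 21.5}

reading = read_sensor("t1")
print(calls)  # ['before read_sensor', 'after read_sensor']
```

Full AOP languages such as AspectJ add pointcut expressions to select join points declaratively; the decorator shows only the weaving idea.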
http://www.embarcadero.com
Data yields information when its definition is understood or readily available and it is presented in a meaningful context. Yet even the information that may be gleaned from data is incomplete because data is created to drive applications, not to inform users. Metadata is the data that holds application
data definitions as well as their operational and business context, and so plays a critical role in data and application design and development, as well as in providing an intelligent operational environment that's driven by business meaning.
A Mobile Audio Server enhanced with Semantic Personalization Capabilities (University of Piraeus)
The document describes a mobile audio server enhanced with semantic personalization capabilities. The server and client are implemented on the Android platform to provide mobility. User preferences metadata is stored locally on each client to minimize central storage requirements, while audio resources and adaptation metadata are stored on the server. MPEG-21, MPEG-7 and OWL are used to semantically describe metadata about users, audio resources, and their relationships. The server uses the metadata to promote personalized audio tracks to clients based on their preferences.
Requirements Variability Specification for Data Intensive Software (ijseajournal)
Nowadays, the use of the feature modeling technique in software requirements specification has increased variability support in Data Intensive Software Product Lines (DISPLs) requirements modeling. It is considered the easiest and most efficient way to express commonalities and variability among different products' requirements. Several recent works on DISPL requirements handled data variability with models that are far from real-world concepts, which led to difficulties in analyzing, designing, implementing, and maintaining this variability. This work therefore proposes a software requirements specification methodology based on concepts closer to nature, inspired by genetics. This bio-inspiration has produced important results in DISPL requirements variability specification with feature modeling that were not achieved by conventional approaches. The feature model was enriched with features and relations that facilitate requirements variation management and are not yet considered in the current relevant works. The use of a genetics-based methodology seems promising for data-intensive software requirements variability specification.
A NOVEL SERVICE ORIENTED ARCHITECTURE (SOA) TO SECURE SERVICES IN E-CITY (ijsptm)
Many cities in the world have moved toward becoming e-cities using IT; some have implemented it, and others are seeking to make it operational. However, experience shows that implementation of an e-city faces challenges, of which effectiveness, improvement of the provided services, and security are the most important. Today, realizing a complete e-city urgently requires overcoming these challenges, security issues in particular. Since an e-city consists of multiple information systems, the most important challenge is to integrate these systems and secure them; service-oriented architecture, as a computational model and an approach to data integration, can largely overcome these problems. In this paper, by studying the challenges of information systems in e-city layers and concentrating on the advantages of service-oriented architecture, a new architecture is proposed to improve the security of e-city systems and their services and to overcome the challenges of the information systems.
A database is an organized collection of data stored and accessed electronically. A database management system (DBMS) is software that allows users to define, create, query, update, and administer a database. Well-known DBMSs include MySQL, PostgreSQL, SQLite, Microsoft SQL Server, Oracle, and IBM DB2. A DBMS manages access to the database, maintains its organization and security, and recovers information if the system fails.
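One DBMS guarantee mentioned above, recovering cleanly when something fails mid-operation, can be demonstrated with Python's sqlite3 transaction handling; this sketch simulates a crash halfway through a transfer and shows the automatic rollback (table and names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # the DBMS groups these statements into one atomic transaction
        conn.execute("UPDATE account SET balance = balance - 60 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transfer")
        conn.execute("UPDATE account SET balance = balance + 60 WHERE name = 'bob'")
except RuntimeError:
    pass  # the partial update was rolled back automatically

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'alice': 100, 'bob': 0}
```

Without transactional rollback, the debit would persist while the credit never ran, which is exactly the inconsistency file-oriented storage struggles to prevent.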
Agent based frameworks for distributed association rule mining: an analysis (ijfcstjournal)
Distributed Association Rule Mining (DARM) is the task for generating the globally strong association
rules from the global frequent itemsets in a distributed environment. The intelligent agent based model, to
address scalable mining over large scale distributed data, is a popular approach to constructing
Distributed Data Mining (DDM) systems and is characterized by a variety of agents coordinating and
communicating with each other to perform the various tasks of the data mining process. This study
performs the comparative analysis of the existing agent based frameworks for mining the association rules
from the distributed data sources.
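The core DARM idea of deriving globally frequent itemsets by merging local counts can be sketched in a few lines of Python; the agent coordination layer is omitted, and the data and support threshold are toy values:

```python
from collections import Counter
from itertools import combinations

def local_counts(transactions):
    """Each site/agent counts 1- and 2-itemsets over its own partition."""
    c = Counter()
    for t in transactions:
        for k in (1, 2):
            for itemset in combinations(sorted(t), k):
                c[itemset] += 1
    return c

# Two sites hold disjoint partitions of the global transaction database
site_a = [{"bread", "milk"}, {"bread", "butter"}, {"milk"}]
site_b = [{"bread", "milk"}, {"butter"}]

merged = local_counts(site_a) + local_counts(site_b)  # coordinator merges counts
n = len(site_a) + len(site_b)
min_support = 0.4
globally_frequent = {i for i, c in merged.items() if c / n >= min_support}
print(sorted(globally_frequent))
```

Real DARM frameworks add pruning and messaging protocols so that sites exchange far less than their full count tables; the sketch shows only the merge-and-threshold step.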
Video Data Visualization System: Semantic Classification and Personalization (ijcga)
We present in this paper an intelligent video data visualization tool, based on semantic classification, for
retrieving and exploring a large scale corpus of videos. Our work is based on semantic classification
resulting from semantic analysis of video. The obtained classes will be projected in the visualization space.
The graph is represented by nodes and edges: the nodes are the keyframes of video documents, and the edges are the relations between documents and their classes. Finally, we construct the user's profile, based on interaction with the system, to better adapt the system to the user's preferences.
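The node/edge structure described above can be sketched as a simple class-to-keyframe adjacency map in Python; the keyframe and class names are hypothetical:

```python
from collections import defaultdict

# Edges link each video's keyframe node to the semantic class of its document
edges = [
    ("keyframe_v1", "sports"),
    ("keyframe_v2", "sports"),
    ("keyframe_v3", "news"),
]

# Adjacency map from class to keyframes, the grouping a visualization
# layout would use to place class clusters
by_class = defaultdict(list)
for keyframe, cls in edges:
    by_class[cls].append(keyframe)

print(dict(by_class))
```

Projecting classes into the visualization space then reduces to laying out each class node with its attached keyframe nodes around it.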
Presenting an Executable Model of Enterprise Architecture for Evaluation of R... (Editor IJCATR)
The document presents a method for creating an executable model of enterprise architecture diagrams to evaluate reliability. It transforms UML collaboration diagrams into colored Petri nets using an algorithm. This allows simulation of the diagrams to identify potential reliability issues early in the planning process. It aims to avoid high costs of implementation by improving architectural artifacts. The key steps are:
1) Using C4ISR framework and UML diagrams to describe enterprise architecture.
2) Transforming collaboration diagrams to colored Petri nets using an algorithm that represents messages as transitions and senders/receivers as places.
3) Annotating the Petri net model with reliability data to enable simulation and evaluation of reliability.
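Step 2's mapping can be sketched as a plain data transformation in Python; the messages and names are hypothetical, and token colors and firing semantics are omitted:

```python
# Hypothetical messages from a UML collaboration diagram: (sender, message, receiver)
messages = [
    ("Client", "request", "Server"),
    ("Server", "response", "Client"),
]

# Following the described mapping: senders/receivers become places,
# messages become transitions connecting an input place to an output place
places = sorted({p for s, _, r in messages for p in (s, r)})
transitions = [{"name": m, "input": s, "output": r} for s, m, r in messages]

print(places)          # ['Client', 'Server']
print(transitions[0])  # {'name': 'request', 'input': 'Client', 'output': 'Server'}
```

Annotating each transition with a failure probability (step 3) would then let a Petri net simulator estimate end-to-end reliability of the modeled interaction.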
This document provides tips for effective PowerPoint presentations. It recommends starting with a clear goal and action plan, writing out a draft first, and using an interesting opening. Key tips include keeping presentations simple, using bullet points instead of full paragraphs, following the 6 by 6 rule for bullets, using large font sizes for titles and smaller sizes for text, ensuring good color contrast, limiting transitions, only including necessary graphics, and practicing extensively before presenting. The document emphasizes keeping presentations concise and engaging for audiences.
The student used various new media technologies throughout the construction, research, and evaluation stages of their A-level media studies project. During research, they used YouTube to find inspiration and filmed interviews using a Canon camera. They researched album designs on iTunes and artists' websites on Google. For planning and communication, they used WhatsApp and Facebook. They took photos on an iPhone and edited them on an iPhoto app. They created a website using Wix and uploaded audio using SoundCloud. Video editing was done using Adobe Premiere Pro on school computers. Throughout the process, they documented their work by taking screenshots and posting to a blog created using various programs.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document summarizes a study that aimed to identify value co-creation attributes that influence the UTM Institutional Repository (UTM IR), an e-service application. The study used interviews with UTM IR providers and users to collect data on the e-service based on the DART model of value co-creation. The DART model examines dialogue, access, risk, and transparency between customers and providers. Interview responses were coded according to the DART building blocks. A gap analysis of the coded provider and user responses identified attributes influencing the UTM IR from a value co-creation perspective. The findings aimed to help evaluate the UTM IR e-service based on customer and provider value co-creation.
Data Exchange Design with SDMX Format for Interoperability Statistical DataNooria Sukmaningtyas
Today’s concept of Open Government Data (OGD) for openness, transparency and ease of
access of data owned by government agencies becomes increasingly important. This initiative emerges
from the demand of data usersforthe data belongs to the government agencies. The data services
providing an easy access, cheap, fast, and interoperability are needed by the users and becomes
important indicator performance for respective government agencies. Statistical Data and Metadata
Exchange (SDMX) is a new standard format in the data dissemination activities particularly in the
exchange of statistical data and metadata via Internet. In this respect SDMX support the implementation of
OGD project. This paper is on the technical design, development and implementation of data and
metadata exchange service of statistical data using SDMX format to support interoperability data through
web services. Three results are proposed: (i) framework for standardization of structure of statistical
publications data model with SDMX; (ii) design architecture of data sharing model; and (iii) web service
implementation of data and metadata exchange service using Service Oriented Analysis and Design
(SOAD) method. Implementation at Statistics Indonesia (BPS) is chosen as a case study to prove the
design concept. It is shown through quantitative assessment and black box testing that the design
achieves its objective.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online
Why Mark Logic Addressing The Challenges Of Unstructured InformationDomSpitz
This document discusses the challenges of storing and managing unstructured information like documents, reports, emails, and media files. It describes how unstructured information is heterogeneous, taking many different formats, hierarchical with nested relationships, and changing unpredictably over time. It introduces MarkLogic as a database designed specifically for unstructured information that can address these challenges.
Information Architecture System Design (IA)Billy Choi
The document discusses information architecture system design and related topics. It includes sections on definition, methodology, career, challenges, and references. The methodology section states that deeply understanding people is important for designing systems for people. The challenges section notes some difficulties in information architecture work. The references section lists various sources on information architecture.
This document discusses perspectives on big data applications for database engineers and IT students. It summarizes key concepts of big data and MongoDB, a popular NoSQL database for managing big data. It then demonstrates practical learning activities using MongoDB, such as installation, terminology, and basic syntax. The document concludes by emphasizing the importance of skills in big data and cloud computing for IT professionals and recommends further research on MongoDB security.
Why MarkLogic: Addressing the Challenges of Unstructured Information with Pur...MarkLogic Corporation
This paper describes why MarkLogic Server helps organizations leverage unstructured information more effectively. It is intended for technology executives and project leaders who recognize the potential value possible by taking better advantage of the estimated 70 to 90 percent of today's "unstructured" information. For commercial organizations, this may lead to competitive advantage or new business opportunities, while government agencies may obtain greater mission advantage.
This paper is also relevant for readers who recognize the most commonly used tools today are not optimized to leverage unstructured information since most were designed for structured data. With the right tools, there is an opportunity for significant improvement with regard to agility, efficiency, flexibility, performance, and scalability.
Aspect-Oriented Programming (AOP) provides new constructs and concepts to handle secondary requirements in applications. Secondary requirements, i.e. crosscutting concerns, of the Internet of things (IoT) applications is inherited from the nature of the complexity of interactions, and implementation crosscutting concerns over core IoT architecture. Realizing the full potential of the IoT application requires a new abstraction design technique. This paper proposes an abstract class element toward a design approach to providing better means better separation of concerns. The proposed approach is accompanied by gathering relevant contextual properties pertaining to the environment of IoT interactions. A new architectural aspect-aware definition is proposed for tracking the logic of interaction characteristics on the IoT components being designed.
http://www.embarcadero.com
Data yields information when its definition is understood or readily available and it is presented in a meaningful context. Yet even the information that may be gleaned from data is incomplete, because data is created to drive applications, not to inform users. Metadata is the data that holds application data definitions as well as their operational and business context, and so plays a critical role in data and application design and development, as well as in providing an intelligent operational environment that is driven by business meaning.
A Mobile Audio Server enhanced with Semantic Personalization Capabilities — University of Piraeus
The document describes a mobile audio server enhanced with semantic personalization capabilities. The server and client are implemented on the Android platform to provide mobility. User preferences metadata is stored locally on each client to minimize central storage requirements, while audio resources and adaptation metadata are stored on the server. MPEG-21, MPEG-7 and OWL are used to semantically describe metadata about users, audio resources, and their relationships. The server uses the metadata to promote personalized audio tracks to clients based on their preferences.
Requirements Variability Specification for Data Intensive Software ijseajournal
Nowadays, the use of feature modeling in software requirements specification has increased variability support in Data Intensive Software Product Lines (DISPLs) requirements modeling. Feature modeling is considered the easiest and most efficient way to express commonalities and variability among different products' requirements. Several recent works on DISPL requirements handled data variability with models that are far from real-world concepts, which led to difficulties in analyzing, designing, implementing, and maintaining this variability. This work instead proposes a requirements specification methodology based on concepts closer to nature, inspired by genetics. This bio-inspiration has produced important results in DISPL requirements variability specification with feature modeling that were not achieved by conventional approaches: the feature model is enriched with features and relations that facilitate requirements variation management, not yet considered in the current relevant works. The genetics-based methodology appears promising for data-intensive software requirements variability specification.
A NOVEL SERVICE ORIENTED ARCHITECTURE (SOA) TO SECURE SERVICES IN E-CITY — ijsptm
Many cities around the world have moved toward being e-cities using IT, and others are seeking to make e-city services operational. However, experience shows that e-city implementations face challenges, of which effectiveness, improvement of the provided services, and security are the most important. Today, realizing a complete e-city urgently requires overcoming these challenges, security issues in particular. Since an e-city consists of multiple information systems, the central challenge is to integrate these systems and secure them; service-oriented architecture, as a computational model and an approach to data integration, can largely overcome these problems. In this paper, by studying the challenges of information systems in the e-city layers and concentrating on the advantages of service-oriented architecture, a new architecture is proposed that improves the security of e-city systems and their services while overcoming the information-system challenges.
A database is an organized collection of data stored and accessed electronically. A database management system (DBMS) is software that allows users to define, create, query, update, and administer a database. Well-known DBMSs include MySQL, PostgreSQL, SQLite, Microsoft SQL Server, Oracle, and IBM DB2. A DBMS manages access to the database, maintains its organization and security, and recovers information if the system fails.
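As a minimal, hedged illustration of those operations (define, create, insert, update, query), the sketch below uses SQLite, the lightweight relational DBMS bundled with Python; the table and rows are invented for the example.

```python
import sqlite3

# In-memory database: nothing touches disk, handy for a quick sketch.
conn = sqlite3.connect(":memory:")

# Define a schema (DDL), then create, update, and query rows (DML).
conn.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO staff (name, role) VALUES (?, ?)", ("Ada", "engineer"))
conn.execute("UPDATE staff SET role = ? WHERE name = ?", ("architect", "Ada"))

row = conn.execute("SELECT name, role FROM staff").fetchone()
print(row)  # ('Ada', 'architect')
conn.close()
```

The parameterized `?` placeholders delegate quoting to the DBMS, which is also the standard guard against SQL injection.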
Agent based frameworks for distributed association rule mining an analysis ijfcstjournal
Distributed Association Rule Mining (DARM) is the task for generating the globally strong association
rules from the global frequent itemsets in a distributed environment. The intelligent agent based model, to
address scalable mining over large scale distributed data, is a popular approach to constructing
Distributed Data Mining (DDM) systems and is characterized by a variety of agents coordinating and
communicating with each other to perform the various tasks of the data mining process. This study
performs the comparative analysis of the existing agent based frameworks for mining the association rules
from the distributed data sources.
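As a simplified sketch of the "global" step described above (real DARM algorithms such as count-distribution add pruning and extra communication rounds, omitted here), each site could report local itemset counts that a coordinator sums against a global support threshold; the data and threshold below are invented.

```python
from collections import Counter

# Local support counts reported by each site (itemsets as tuples).
site_counts = [
    Counter({("bread",): 4, ("bread", "milk"): 3}),
    Counter({("bread",): 2, ("milk",): 5}),
]
min_support = 5  # global absolute support threshold

# Coordinator merges the local counts and keeps globally frequent itemsets.
global_counts = sum(site_counts, Counter())
frequent = {itemset for itemset, n in global_counts.items() if n >= min_support}
print(sorted(frequent))  # [('bread',), ('milk',)]
```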
Video Data Visualization System : Semantic Classification and Personalization ijcga
We present in this paper an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large-scale corpus of videos. Our work builds on the semantic classes obtained from semantic analysis of the videos, which are projected into the visualization space. The visualization is a graph of nodes and edges: the nodes are keyframes of video documents, and the edges are the relations between documents and document classes. Finally, we construct a user profile, based on interaction with the system, to adapt the system to the user's preferences.
Presenting an Executable Model of Enterprise Architecture for Evaluation of R... — Editor IJCATR
The document presents a method for creating an executable model of enterprise architecture diagrams to evaluate reliability. It transforms UML collaboration diagrams into colored Petri nets using an algorithm. This allows simulation of the diagrams to identify potential reliability issues early in the planning process. It aims to avoid high costs of implementation by improving architectural artifacts. The key steps are:
1) Using C4ISR framework and UML diagrams to describe enterprise architecture.
2) Transforming collaboration diagrams into colored Petri nets using an algorithm that represents messages as transitions and senders/receivers as places.
3) Annotating the Petri net model with reliability data to enable simulation and evaluation of reliability.
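The message-to-transition mapping in step 2 can be sketched as a plain (uncolored) token game; colored Petri nets additionally attach typed data to the tokens. All place and transition names below are invented for illustration, not taken from the paper's algorithm.

```python
# Places hold tokens; a transition fires when every input place has a token.
places = {"sender_ready": 1, "receiver_ready": 1, "msg_delivered": 0}

def fire(inputs, outputs):
    """Fire a transition: consume one token per input, produce one per output."""
    if all(places[p] > 0 for p in inputs):
        for p in inputs:
            places[p] -= 1
        for p in outputs:
            places[p] += 1
        return True
    return False

# A collaboration message m1 becomes a transition from the sender/receiver
# places (the lifelines) to a delivery place.
print(fire(["sender_ready", "receiver_ready"], ["msg_delivered"]))  # True
print(places["msg_delivered"])  # 1
```

Simulation then amounts to repeatedly firing enabled transitions and checking which markings are reachable.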
This document provides tips for effective PowerPoint presentations. It recommends starting with a clear goal and action plan, writing out a draft first, and using an interesting opening. Key tips include keeping presentations simple, using bullet points instead of full paragraphs, following the 6 by 6 rule for bullets, using large font sizes for titles and smaller sizes for text, ensuring good color contrast, limiting transitions, only including necessary graphics, and practicing extensively before presenting. The document emphasizes keeping presentations concise and engaging for audiences.
The student used various new media technologies throughout the construction, research, and evaluation stages of their A-level media studies project. During research, they used YouTube to find inspiration and filmed interviews using a Canon camera. They researched album designs on iTunes and artists' websites on Google. For planning and communication, they used WhatsApp and Facebook. They took photos on an iPhone and edited them on an iPhoto app. They created a website using Wix and uploaded audio using SoundCloud. Video editing was done using Adobe Premiere Pro on school computers. Throughout the process, they documented their work by taking screenshots and posting to a blog created using various programs.
BadgerLink is a suite of databases funded by the Wisconsin Department of Public Instruction that provides Wisconsin residents access to a variety of academic, health, business, and general interest databases. It includes resources like newspaper archives, encyclopedias, streaming videos, genealogy records, literature references, test preparation materials, and more. BadgerLink can be accessed through Wisconsin public libraries, schools, and directly from home for Wisconsin residents. Similar programs exist in other states to provide statewide access to online resources.
Wisconsin was first explored by the French in 1634 and a trading post was established in 1660. Britain gained control after the French and Indian War, then the U.S. after the Revolutionary War, though Britain retained actual control until after the War of 1812. Wisconsin became a separate territory in 1836. Famous natives include Harry Houdini and Robert La Follette. Matt Koehl was a leader of the American Nazi Party born in Milwaukee in 1935. Herbert Simon was an American economist and political scientist who won the 1978 Nobel Prize in Economics. Madison is the capital city and had a population of over 233,000 in 2010.
This document discusses model-driven architecture (MDA), an approach to system specification and interoperability based on the use of formal models. MDA uses platform-independent models that are translated to platform-specific models using formal rules. Core MDA standards like UML, MOF, XMI, and CWM define the infrastructure. The vision is for nearly seamless interoperability based on shared metadata and formal model translations, with a long-term goal of adaptive object models that can dynamically interpret models at runtime.
Journal of Physics Conference Series PAPER • OPEN ACCESS.docx — LaticiaGrissomzz
Journal of Physics: Conference Series (PAPER • OPEN ACCESS)
The methodology of database design in organization management systems
To cite this article: I L Chudinov et al 2017 J. Phys.: Conf. Ser. 803 012030
https://doi.org/10.1088/1742-6596/803/1/012030
I L Chudinov, V V Osipova, Y V Bobrova
Tomsk Polytechnic University, 30, Lenina ave., Tomsk, 634050, Russia
E-mail: [email protected]
Abstract. The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model of the domain area is the most important and labor-intensive stage of database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are given and users' information needs are taken into account. In this methodology, the design of the conceptual information model comprises three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied and gives the rationale for the use of classifiers.
1. Introduction
Management information systems are among the most important components of the information technologies (IT) used in a company. They are usually classified by function into Manufacturing Execution Systems (MES), Human Resource Management (HRM), Enterprise Content Management (ECM), Customer Relationship Management (CRM), and other systems [1]. Such systems use a specially structured database and require reengineering of the whole enterprise management system, while the need for integration makes them difficult to adopt. These systems are expensive enough and particularly devel.
WHAT IS A DBMS? EXPLAIN DIFFERENT MYSQL COMMANDS AND CONSTRAINTS OF THE SAME. — Shweta Bhavsar
This document discusses database management systems (DBMS) and MySQL commands and constraints. It begins by defining a DBMS and describing their components and characteristics, including data models, query languages, and advantages like data integrity and sharing. It then explains common MySQL commands to create and manage databases, tables, and insert values. Constraints are also discussed as ways to define data types and validate values in tables.
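A hedged sketch of such constraints follows. The entry concerns MySQL, but SQLite (bundled with Python) accepts the same standard constraint clauses, so it is used here to keep the example runnable; the table and values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        id    INTEGER PRIMARY KEY,     -- unique row identifier
        name  TEXT NOT NULL UNIQUE,    -- required and distinct
        price REAL CHECK (price >= 0)  -- value validated on every write
    )
""")
conn.execute("INSERT INTO product (name, price) VALUES ('pen', 1.50)")

# A write that violates the CHECK constraint is rejected by the DBMS itself,
# not by application code.
try:
    conn.execute("INSERT INTO product (name, price) VALUES ('ink', -3.0)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
conn.close()
```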
The IT-GRC platform is a solution based on the paradigm of distributed systems, specifically on multi-agent systems (MAS), in its different parts: the user interface, the static and dynamic configuration of organization management profiles, the choice of the best repository, and the handling of processes. It takes advantage of the autonomy and learning capabilities of these agents as well as their high-level communication and coordination. However, these technological components are difficult to manipulate, or users lack the skills needed to use them correctly. In this situation, a communication architecture must be modeled in order to adapt the platform's functionality to users' needs. To achieve these goals, it is necessary to develop a functional and intelligent communication architecture, adaptable and able to provide a supporting framework, allowing access to system functionality regardless of physical and time constraints.
Data Integration in Multi-sources Information Systems — ijceronline
This document proposes a smart semantic middleware for the Internet of Things. The middleware would allow for self-managed complex systems consisting of distributed and heterogeneous components. Each component would be represented by an autonomous software agent that monitors and controls the component. Semantic technologies and ontologies would be used to enable discovery and interoperability between heterogeneous components. The proposed middleware aims to support self-configuration, optimization, protection and healing of complex systems.
A CASE STUDY OF INNOVATION OF AN INFORMATION COMMUNICATION SYSTEM AND UPGRADE... — ijaia
In this paper, a case study is analyzed concerning the upgrade of an industry communication system developed by following the Frascati research guidelines. The knowledge base (KB) of the industry is gathered by different tools that feed data and information of varying formats and structures into a unique bus system connected to a big data store. The initial part of the research focuses on implementing strategic tools able to upgrade the KB. The second part of the study concerns innovative algorithms based on a KNIME (Konstanz Information Miner) Gradient Boosted Trees workflow, processing the communication-system data that travels over an Enterprise Service Bus (ESB) infrastructure. The goal of the paper is to prove that all the new KB collected in a Cassandra big data system can be processed through the ESB by predictive algorithms, resolving possible conflicts between hardware and software; such conflicts are due to the integration of different database technologies and data structures. To check the outputs of the Gradient Boosted Trees algorithm, an experimental dataset suitable for machine-learning testing was used. The test was performed on a prototype network modeling part of the whole communication system. The paper shows how to validate industrial research by following the complete design and development of a communication system network, improving business intelligence (BI).
Model-Driven Architecture for Cloud Applications Development, A survey — Editor IJCATR
Model Driven Architecture and cloud computing are among the most important paradigms in software service engineering nowadays. As cloud computing continues to gain adoption, its dynamic usage introduces new issues and challenges for many systems. The Model Driven Architecture (MDA) approach to development and maintenance becomes an evident choice for ensuring software solutions that are robust, flexible, and agile.
This paper surveys and analyzes the research issues and challenges emerging in cloud computing applications, with a focus on MDA-based development. We discuss the open research issues and highlight future research problems.
The technology of object-oriented databases was introduced to system developers in the late 1980s. Object DBMSs add database functionality to object programming languages. A major benefit of this approach is the unification of application and database development into a seamless data model and language environment. As a result, applications require less code, use more natural data modeling, and their code bases are easier to maintain.
DESIGN PATTERNS IN THE WORKFLOW IMPLEMENTATION OF MARINE RESEARCH GENERAL INF... — AM Publications
This paper proposes the use of design patterns in a marine research general information platform, whose development involves designing a complicated system architecture. Creating and executing the research workflow nodes and designing a visualization library suited to marine users play an important role in the overall software architecture. The paper studies the requirements characteristic of marine research fields and implements a series of frameworks to address them, based on object-oriented and design-pattern techniques. These frameworks clarify the relationships between the modules and layers of the software, which communicate through unified abstract interfaces, reducing coupling between modules and layers. Building these frameworks significantly advances the reusability of the software and strengthens the extensibility and maintainability of the system.
A relational model of data for large shared data banks — Sammy Alvarez
This document introduces the relational model of data organization for large shared databases. It discusses inadequacies of existing tree-structured and network models, including ordering, indexing, and access path dependencies that impair data independence. The relational model represents data as mathematical n-ary relations and relationships between domains, providing independence from representation changes. It allows a clearer evaluation of existing systems and competing internal representations. The relational view forms a basis for treating issues like derivability, redundancy, and consistency in a sound way.
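The core idea can be sketched in a few lines: a relation is just a set of n-tuples over domains, and queries are set expressions with no ordering, indexing, or access-path assumptions. The relations and attributes below are invented for illustration.

```python
# Two relations represented as plain sets of tuples.
dept     = {("d1", "Sales"), ("d2", "R&D")}            # (dept_id, dept_name)
employee = {("e1", "Ada", "d2"), ("e2", "Bob", "d1")}  # (emp_id, name, dept_id)

# Natural join on dept_id, then projection onto (name, dept_name).
joined = {
    (name, dname)
    for (_eid, name, edept) in employee
    for (did, dname) in dept
    if edept == did
}
print(sorted(joined))  # [('Ada', 'R&D'), ('Bob', 'Sales')]
```

Because the result depends only on tuple values, the physical representation can change without breaking the query, which is the data-independence point the paper makes.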
Personalized Multimedia Web Services in Peer to Peer Networks Using MPEG-7 an... — University of Piraeus
Multimedia information has increased in recent years, while new content delivery services enhanced with personalization functionality are provided to users. Several standards have been proposed for the representation and retrieval of multimedia content; this paper gives an overview of the available standards and technologies. Furthermore, a prototype semantic P2P architecture is presented which delivers personalized audio information. The metadata that support personalization are separated into two categories: metadata describing user preferences, stored at each user, and resource adaptation metadata, stored at the P2P network's web services. The multimedia models MPEG-21 and MPEG-7 are used to describe metadata information, and the Web Ontology Language (OWL) is used to produce and manipulate ontological descriptions. SPARQL is used for querying the OWL ontologies. The MPEG Query Format (MPQF) is also used, providing a well-known framework for applying queries to the metadata and to the ontologies.
IRJET - Computational model for the processing of documents and support to the ... — IRJET Journal
This document proposes a computational model for processing documents and supporting decision making in information retrieval systems. The model includes five main components: 1) a tracking and indexing component to crawl the web and store document metadata, 2) an information processing component to categorize documents and define user profiles, 3) a decision support component to analyze stored information and generate statistical reports, 4) a display component to provide search interfaces and visualization tools, and 5) specialized roles to administer the system. The goal of the model is to provide a framework for developing large-scale search engines.
TECHNIQUES FOR COMPONENT REUSABLE APPROACH — cscpconf
This document discusses techniques for component reuse using a component retrieval approach. It proposes using UML models stored in MDL file format to retrieve relevant software components based on structural information like class names and relationships. A tool called a "smart environment" is described that can search a repository of MDL files and source code based on class diagrams or use case diagrams to find the best matching components for reuse. Weights are assigned to different model elements to return search results in order of closest match. The approach aims to improve on keyword-based searching by matching design specifications.
The document discusses how to build web 3.0 applications for healthcare by combining web-oriented architecture and semantic interoperability standards. It aims to help business, clinical and IT decision-makers create approaches to developing healthcare core systems using modern architectural patterns and focusing on semantic interoperability. The key is a standardised clinical document accessible through a standardised external interface for other applications like patient medical records. The tutorial covers modeling the solution architecture as a core framework implementing healthcare functionality and integrating with light, front-end web applications. It presents interoperability instruments and code systems, and includes practical exercises demonstrating simple EHR and PMR implementations based on open source solutions.
Generic Algorithm based Data Retrieval Technique in Data Mining — AM Publications, India
This system, a hybrid robust-model extraction approach based on a genetic algorithm (GA), is a dynamic XAML-based mechanism for the adaptive management and reuse of e-learning resources in a distributed environment such as the Web. The proposed system argues that to achieve on-demand, semantics-based resource management for Web-based e-learning, one should go beyond using domain ontologies statically. The proposed XAML-based matching process therefore performs semantic mapping on both open and closed datasets to integrate e-learning databases using ontology semantics. It defines context-specific portions of the whole ontology as optimized data and proposes a XAML-based resource reuse approach using an evolutionary algorithm; the context-aware evolutionary algorithm for dynamic e-learning resource reuse is explained in detail. A simulation experiment is conducted to evaluate the proposed approach on a XAML-based e-learning scenario. The proposed matching process over web-clustered databases from different database servers can be easily integrated, although highly dimensional e-learning resource management and reuse is far from mature. E-learning remains a widely open research area with much room for improvement, and this research includes 1) improving the proposed evolutionary approach by applying and comparing different evolutionary algorithms, 2) applying the approach to support more applications, and 3) extending it to settings with multiple e-learning systems or services.
The document discusses automatic data unit annotation in search results. It proposes a method that clusters data units on result pages into groups containing semantically similar units. Then, multiple annotators are used to predict annotation labels for each group based on features of the units. An annotation wrapper is constructed for each website to annotate new result pages from that site. The method aims to improve search response by providing meaningful annotations of data units within results. It is evaluated based on precision and recall for the alignment of data units and text nodes during the annotation process.
Read More - https://bit.ly/3VKly70
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
AppSec PNW: Android and iOS Application Security with MobSF
Semantic MDM systems design concept
“Evolution of society is connected in particular with the development of its members' means of communication, especially tools for building and using their collective memory.”
Stanislav Yankovskiy
“A properly organized Semantic Network can facilitate the evolution of all human knowledge at large.”
Sir Tim Berners-Lee
Andrey Andrichenko,
Ph.D. in Technical Sciences,
CEO SDI Research
andrichenko@sdi-solution.ru
www.sdi-solution.ru
Design automation systems have drawn close to a threshold beyond which the use of semantic technologies will grow rapidly. Interest in these technologies appears wherever complex data structures exist and hard-to-formalize decision-making procedures operate, based on empirical knowledge about the behavior and interaction of objects. The use of semantic data models in CAD, CAM and CAPP will allow the creation of a new class of intelligent systems with powerful decision-making capabilities.
All production objects (materials, component parts, equipment, jigs, fixtures and tools) are in continuous interaction. The characteristics of these objects are stored in separate databases, while the rules of their behavior and compatibility are embedded in the algorithms of various applications. By uniting data and knowledge into a single semantic model of the application domain, an intelligent enterprise information system can be built that serves as the basis for sound decisions in design, production and management.
By definition, a semantic network is an “information model of the application domain in the form of a directed graph, whose vertices correspond to the objects of the application domain and whose directed edges represent the relations between them” (fig. 1).
Fig. 1. Fragment of a semantic network
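The directed-graph definition above can be sketched in code. Below is a minimal illustration, with invented object and relation names, of a semantic network stored as subject-relation-object triples and a simple query over its labeled edges:

```python
# A semantic network as a set of (subject, relation, object) triples.
# The vertices are domain objects; the labeled edges are relations.
TRIPLES = {
    ("end_mill_D10", "made_of", "carbide"),
    ("end_mill_D10", "mounted_in", "collet_chuck_ER25"),
    ("collet_chuck_ER25", "fits_spindle", "machine_VMC_850"),
    ("steel_45", "machinable_by", "end_mill_D10"),
}

def related(subject, relation):
    """Return all objects reachable from `subject` via `relation`."""
    return {o for (s, r, o) in TRIPLES if s == subject and r == relation}
```

Calling `related("end_mill_D10", "mounted_in")` returns `{"collet_chuck_ER25"}`; a production system would back such queries with a graph database or an RDF store rather than an in-memory set.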
The evolutionary development of software consists in the gradual unification of system-wide components. Over the next 5 years, a shift of emphasis from software development towards the generation of application-oriented semantic data models is inevitable. The standardization and unification of the terms, concepts and relations used in these models will become a key factor in the development of any information system. The replacement of the object paradigm by the semantic one, together with the unification of data models, is the mainstream that will raise the level of automation of decision-making and standardize the information exchange protocols between different applications.
Fig. 2. Unification of system-wide components in the course of the evolution of software tools
Historically, the appearance of a new class of systems designed for the implementation of semantic models of the data domain is inevitable. Applications of the Master Data Management (MDM) class, which consolidate all of an enterprise's non-transactional reference data, can serve as a favorable environment for building these models.
In this direction, the problems of duplication and synchronization of reference data are eliminated. A single classification and coding system is introduced. A centralized system for storing, managing and accessing reference data is implemented, and prospects for standardizing data presentation and exchange appear. A “venue” is opened for deploying knowledge-based mechanisms.
MDM methodology treats the reference data circulating in an enterprise as a single communication language for its corporate information systems. It is understood that product information can be shared and exchanged only if the same reference data is used by both the sender and the recipient.
Thus, we are dealing with innovations in the consolidation of reference data, the unification of the services that process it, the consolidation of knowledge in semantic models, and the standardization of data exchange formats. The development perspective of MDM systems is to absorb these innovations and, together with DBMS-class applications, to become system-wide IT infrastructure components of any enterprise.
Let us examine the basic principles of building semantic MDM systems.
Data consolidation
The reference data repository shall be the only place where data is added, changed or deleted (fig. 3). MDM is an independent class of systems and should not occupy a subordinate position in relation to any application system, such as ERP or PDM.
Fig. 3. Reference data consolidation
Consolidation of knowledge
Moving decision-making rules to the data model level makes them available to all corporate applications. The orientation towards building semantic models of the data domain provides the maximum level of automation: isolated solutions, once entered into the semantic reference database, are duly formalized and reused many times in various application systems (fig. 4).
Fig. 4. Consolidation of information
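One way to picture the shift of decision rules to the data model level is that a rule lives in the repository as data and every application evaluates it through one shared interface. A hypothetical sketch (the rule content is invented for illustration):

```python
# Decision rules stored as data alongside the model, not hard-coded
# in each application. Every corporate application evaluates the same
# shared rule set. The example rule is invented for illustration.
RULES = [
    # (rule name, predicate over an object's attribute dict)
    ("hardened_steel_needs_carbide",
     lambda obj: obj.get("material_hardness_HRC", 0) < 45
                 or obj.get("tool_material") == "carbide"),
]

def check(obj):
    """Return the names of the rules that the object violates."""
    return [name for name, pred in RULES if not pred(obj)]
```

Because the rule set is data, adding or correcting a rule in the repository immediately affects every application that calls `check`, with no application code changed.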
Common information space
The semantic MDM system represents a consolidated reference data space. Information is gathered from the primary systems and integrated into a common permanent storage area. Removing part of the references beyond its limits breaks the connections between objects, which violates the integrity of the knowledge system and considerably limits the possibility of building a semantic network (fig. 5).
Fig. 5. Common information space of reference data
Universality and expandability
The data domain model is regularly corrected and improved: new objects are created, and their rules of behavior and their relations change. The semantic MDM system must be capable of adapting to these changes, i.e. in essence it must be an execution environment for the data domain model, independent of its specific content.
Context-sensitive data display
The MDM system shall provide the opportunity to “see” objects from different points of view. For example, in a metal-cutting machine tool the technologist must see the workpiece and the cutting-tool displacement mechanism, while the mechanical engineer must see the units and parts subject to maintenance inspection (fig. 6).
Fig. 6. Contextual point of view on a reference data object
The contextual point of view on an object is not limited to the user's role; it also changes over time, more precisely with the stages of the object's life cycle and the set of its functions (its intended use). Tangible objects have two main properties: structure and activity. The contextual representation of an object's internal structure changes dynamically depending on the processes in which it takes part. It can be said that objects are defined by the candidate actions that can be performed with them.
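The role-dependent views described above can be sketched as projections of one consolidated object description; the attribute sets per role are invented for the example:

```python
# One consolidated description of a machine tool; each context "sees"
# only the slice of the structure relevant to its role. The attribute
# names and role assignments here are invented for illustration.
MACHINE = {
    "workpiece_fixture": "vise, 100 mm jaws",
    "tool_displacement_mechanism": "ball screw, X/Y/Z",
    "spindle_bearing_unit": "angular contact pair",
    "lubrication_points": "8 grease fittings",
}

VIEWS = {
    "technologist": {"workpiece_fixture", "tool_displacement_mechanism"},
    "mechanical_engineer": {"spindle_bearing_unit", "lubrication_points"},
}

def view(obj, role):
    """Project the object onto the attributes visible to `role`."""
    return {k: v for k, v in obj.items() if k in VIEWS[role]}
```

The same mechanism extends beyond roles: keying `VIEWS` by life-cycle stage or by process would give the time- and function-dependent views the text describes.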
Standardization of data exchange formats
The topic of data synchronization and unification extends far beyond the boundaries of individual enterprises. In accordance with the requirements of international standards, product suppliers shall provide the buyer, in electronic format, with the technical information about the goods required for cataloguing. Integrating goods from different manufacturers into electronic catalogs implies that the same dictionary terms and notations are used when describing the goods.
As of today, there are two alternative approaches to standardizing data exchange formats. The first is implemented by the ISO 22745 standard, which assumes the use of the open technical dictionary of the Electronic Commerce Code Management Association (ECCMA eOTD).
eOTD dictionaries are developed with the purpose of connecting terms and definitions with similar semantic content. They allow a globally unique identifier to be assigned to any term, property or class. On the basis of these identifiers, the descriptions of material and technical objects in various automated systems can be reconciled (fig. 7).
Fig. 7. ECCMA Open Technical Dictionary (eOTD)
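The role of such identifiers can be illustrated as follows; the terms and identifier strings below are invented placeholders, not real eOTD codes:

```python
# Two suppliers use different local terms for the same concept; a shared
# dictionary assigns one worldwide identifier per concept so that their
# descriptions can be reconciled. Identifier strings are hypothetical.
GLOBAL_ID = {
    "hex bolt": "CONCEPT-000001",           # supplier A's term
    "hexagon head bolt": "CONCEPT-000001",  # supplier B's term
    "washer": "CONCEPT-000002",
}

def same_concept(term_a, term_b):
    """Two local terms denote the same concept iff both map to one ID."""
    id_a, id_b = GLOBAL_ID.get(term_a), GLOBAL_ID.get(term_b)
    return id_a is not None and id_a == id_b
```

Matching on identifiers rather than on term strings is what lets an electronic catalog merge descriptions from suppliers who never agreed on terminology.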
In conformity with Rostechregulirovaniye order No. 1921 of July 19, 2006, a Russian version of the ECCMA open technical dictionary eOTD has been created; it is designed to reconcile product information from different suppliers with the purpose of reducing the cost of developing electronic product catalogs.
The second alternative is realized by the ISO 15926 standard, which, in contrast to ISO 22745, is ontological, as it standardizes the structure of objects. It specifies a data model that defines the meaning of life-cycle information in a single context, supporting all the groups of descriptions that process engineers, equipment engineers, operators, maintenance engineers and other specialists may hold in relation to the equipment (ISO 15926, part 1).
The standard data model, against which synchronization with application data models is proposed, is implemented in ISO 15926 through Reference Data Libraries (RDL).
Fig. 8. The standard model formalizes data exchange
The integration of a new application into the single information space of an enterprise shall begin with aligning the classes and attributes of that application's model with the corresponding definitions of the standard model, which serves as the corporate communication language of the enterprise's different automated systems (fig. 8).
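The alignment step can be pictured as a mapping table from each application's local classes to the definitions of the standard model; the class names are invented for this sketch:

```python
# Each application declares how its local classes map onto the
# enterprise standard model; translation between applications then
# goes through the standard class, never pairwise. Names are invented.
APP_TO_STANDARD = {
    "erp": {"MAT": "Material", "EQP": "Equipment"},
    "capp": {"Stock": "Material", "MachineTool": "Equipment"},
}

def translate(cls, src, dst):
    """Translate a class name from application `src` to application
    `dst` via the shared standard model."""
    standard = APP_TO_STANDARD[src][cls]
    inverse = {v: k for k, v in APP_TO_STANDARD[dst].items()}
    return inverse[standard]
```

Because every application maps to the standard model once, adding an N+1-th application requires one new mapping rather than N pairwise ones, which is the practical payoff of a corporate communication language.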
Real work on applying ISO 15926 is being actively conducted by the State Corporation Rosatom and FGUP Sudoexport. On December 16, 2008, order No. 710 was issued, prescribing: “State Corporation ‘Rosatom’ and its organizations, when forming and using production information models at all stages of the NPP life cycle and fuel production, and when managing information for the purposes of data integration, shall be governed by the provisions of the international standard ISO 15926, for which the corresponding corporate standards shall be developed”.
Semantic technologies in CAPP
Computer-aided process planning (CAPP) systems operating at machine-building enterprises are the principal consumers of reference information. They require data about material and technical objects, viz. equipment, materials and machine tool attachments, in maximum detail. Of interest to CAPP are not only the technical parameters of objects but also the relations between them in the context of the production process. The capabilities of semantic MDM systems allow CAPP applications to realize an “intelligent” search of the reference database, in which both the parameters of the sought object and the rules of its interaction with other objects take part.
Thus, for example, when searching for a cutting tool, one can specify as criteria not only its characteristics but also any other object interconnected with it: the material of the machined part, the machining scheme, the fixtures, or the metal-cutting machine tool. The system then selects the required tool, compatible with the instances of the associated objects (fig. 9).
Fig. 9. Narrowing of the search area in the semantic network of interconnected objects
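The narrowing of the search area can be sketched as set intersection over the semantic network: each related object named in the query restricts the candidate tool set (all names are invented):

```python
# "Intelligent" search: the query names related objects, not only the
# tool's own parameters; each related object narrows the candidate set.
# Object names and compatibility sets are invented for illustration.
COMPATIBLE = {
    "steel_45": {"end_mill_A", "end_mill_B", "drill_C"},
    "machine_VMC_850": {"end_mill_B", "drill_C"},
    "fixture_vise_100": {"end_mill_B"},
}

def find_tools(*related_objects):
    """Intersect the tool sets compatible with every given object."""
    sets = [COMPATIBLE[o] for o in related_objects]
    return set.intersection(*sets) if sets else set()
```

With the material alone the search returns three candidates; adding the machine tool and the fixture narrows the result to the single compatible tool, which is exactly the narrowing fig. 9 depicts.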
Semantic search is a key element of consumer value, capable of giving CAPP a competitive advantage by raising the level of automation of decision-making during the design process.
This approach is the basis of Semantic Web technologies. Semantic technologies have already passed the initial development stage and are seriously considered by leading analysts as a real force: “During the next 10 years, Web-based technologies will improve the ability to embed semantic structures in documents, and create structured vocabularies and ontologies to define terms, concepts and relationships…” (analytical report “Finding and Exploiting Value in Semantic Technologies on the Web”, Gartner, 2007).
As defined by Thomas Gruber, an ontology is a specification of some data domain that describes a multitude of terms, concepts and classes of objects, and the interactions between them. An ontology is designed to provide a coordinated, unified dictionary of terms for the interaction of different corporate information systems.
The simplest example of building an ontology is to separate the connecting part and the cutting part in the structure of a rotary cutting tool as independently classified objects, which allows them to be reused when building descriptions of similar tools such as drills, broaches, reamers, end milling cutters, etc. (fig. 10).
Fig. 10. Allocation of the component parts of a rotary cutting tool
Without building an ontological model of an object, it is impossible to formalize its interconnections with other entities, since the compatibility rules of two objects are defined by the combined compatibility of their component parts (fig. 11).
Fig. 11. The compatibility of objects is defined by the combined compatibility of their component parts
Integrating information on unified descriptions of application domain objects into a common library, and providing access to it from different applications, resolves the issue of data exchange format standardization. Placing such a library on the global network resolves the problems of data integration at the industry, state and inter-state levels.
Under the European project JORD (Joint Operational Reference Data), started in 2008, a library of ontological data models has been created based on the open international standard ISO 15926. Any volunteer has the opportunity to place their own ontological data models in this library. An annual subscription to this library on the Internet costs EUR 25,000.
Semantic, corporate reference data management system
SDI Solution announces the release of Semantic, a new corporate reference data management system. This software solution offers the advanced functionality of an information retrieval system and simultaneously serves as the reference data provider for CAPP, PLM and ERP.
Fig. 12. Semantic, a corporate reference data management system
The Semantic system supports corporate reference data management business processes: data input, updating, access and control, including maintaining the history of changes and of data use. It implements multi-criteria parametric and semantic search of objects. It can store data in various DBMSs: Oracle, MS SQL Server, InterBase. A detailed description of the Semantic system's functionality will be presented in the next issue of “CAD and Graphics”.