LINQ (Language Integrated Query) provides a unified way to query different data sources such as XML documents, relational databases, and in-memory objects. LINQ to XML allows XML documents to be queried using LINQ syntax, while LINQ to SQL allows relational databases to be queried with results returned as XML. The paper explores using LINQ to query XML, transform XML structures, and query a database while returning XML results. It discusses how LINQ bridges the gap between object-oriented languages and the relational and XML data models through object-relational and object-XML mapping.
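LINQ itself is a C#/Visual Basic feature, but the declarative select/where style it brings to XML querying can be sketched in Python with the standard library's ElementTree. The document and element names below are invented for illustration; this is an analogy to the LINQ to XML pattern, not the LINQ API itself.

```python
# A rough Python analogy to a LINQ to XML query: select book titles
# priced above a threshold from an in-memory XML document.
# (Illustrative sketch only; LINQ itself is a C#/VB language feature.)
import xml.etree.ElementTree as ET

catalog = ET.fromstring("""
<catalog>
  <book><title>LINQ Basics</title><price>30</price></book>
  <book><title>Advanced XML</title><price>55</price></book>
</catalog>
""")

# Comparable in spirit to:
#   from b in catalog.Elements("book")
#   where (decimal)b.Element("price") > 40
#   select (string)b.Element("title")
titles = [b.findtext("title")
          for b in catalog.iter("book")
          if float(b.findtext("price")) > 40]
print(titles)  # ['Advanced XML']
```

The comprehension plays the role of the query expression: the `if` clause is the `where`, and the expression before `for` is the `select`.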
Semantic RDF based integration framework for heterogeneous XML data sources (Deniz Kılınç)
The document describes a semantic RDF-based framework for integrating heterogeneous XML data sources. It proposes using regular expressions to define the structure of XML data semantically rather than through XPath, addressing issues with previous approaches. The framework includes a Regular Expression Generator Tool to produce regular expressions from local schemas and an Integrator Tool Box to combine the expressions into a global data source.
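The core idea of using regular expressions instead of XPath to describe XML structure can be illustrated with a small Python sketch. The element names and pattern below are invented for the example; the paper's actual Regular Expression Generator Tool derives such patterns from local schemas.

```python
# Toy illustration of describing (and extracting from) a flat XML
# fragment with a regular expression rather than an XPath query.
# Element names are invented for this example.
import re

xml = "<person><name>Ada</name><city>London</city></person>"

# One named group per local-schema field; the group names carry the
# "semantics" (field name -> captured value).
pattern = re.compile(
    r"<name>(?P<name>[^<]*)</name>\s*<city>(?P<city>[^<]*)</city>")

record = pattern.search(xml).groupdict()
print(record)  # {'name': 'Ada', 'city': 'London'}
```

A generator tool would emit one such pattern per local schema, and an integrator could then merge the named-group vocabularies into a global view.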
The document discusses the complexity of software and limitations of traditional development tools and methods. It proposes using an evolutionary approach inspired by biology to support rapid software evolution. Key elements of the proposed platform include live programming, multi-paradigm development, decentralized networking, and knowledge representation to support continuous adaptation and improvement of software systems.
Semi Automatic to Improve Ontology Mapping Process in Semantic Web Data Analysis (IRJET Journal)
This document summarizes a research paper about developing a semi-automatic ontology mapping system to improve integration of data from different ontologies on the semantic web. It discusses how the system uses techniques from computational linguistics, information retrieval, and machine learning to map ontologies in an iterative process. The system performs various natural language processing tasks and leverages external resources like domain thesauri and WordNet to strengthen matches during the mapping process. Preliminary case studies show promising results for the semi-automatic ontology mapping system.
Data Integration Solutions Created by Koneksys (Koneksys)
This document summarizes data integration solutions created by Koneksys including OSLC adapters and clients, data management apps, specifications, and community efforts. It also describes other solutions such as model-based systems engineering, linked data research, blockchain, web applications, engineering and analysis, and network security and database work done by Koneksys. Open source projects for many of these solutions are listed.
The ADO.NET Entity Framework is part of Microsoft’s next generation of .NET technologies. It is intended to make it easier and more effective for object-oriented applications to work with data.
This document summarizes a webinar about Open Services for Lifecycle Collaboration (OSLC) and data integration. It introduces the presenter Axel Reichwein and his company Koneksys, which helps organizations create data integration solutions. It discusses challenges of distributed engineering data from different sources and the benefits of data integration. Key concepts discussed include using URLs, HTTP, and RDF to create a web of linked data. OSLC standards provide APIs to access and link data from different sources. This allows building mashup applications to search, visualize, and link engineering information across distributed systems.
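The "web of linked data" idea from the webinar can be sketched with plain Python data structures: every resource is named by a URL, and facts are (subject, predicate, object) triples that link resources across tools. The URLs below are placeholders invented for illustration; a real OSLC setup would fetch such triples over HTTP rather than hard-code them.

```python
# Minimal sketch of linked engineering data: resources identified by
# URLs, facts stored as RDF-style (subject, predicate, object) triples.
# All URLs here are invented examples.
triples = {
    ("http://example.org/req/1",
     "http://purl.org/dc/terms/title", "Max speed requirement"),
    ("http://example.org/req/1",
     "http://open-services.net/ns/rm#validatedBy",
     "http://example.org/test/7"),
    ("http://example.org/test/7",
     "http://purl.org/dc/terms/title", "Speed test"),
}

# Follow a link across tools: which test validates requirement 1?
links = [o for (s, p, o) in triples
         if s == "http://example.org/req/1" and p.endswith("#validatedBy")]
print(links)  # ['http://example.org/test/7']
```

Because both the requirement and the test are URLs, a mashup application can traverse the link and fetch either resource from its home system.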
Koneksys - Offering Services to Connect Data using the Data Web (Koneksys)
Koneksys provides consulting and software services to connect data silos using the Data Web (Linked Data on the World Wide Web). They create open-source software, promote data integration standards like OSLC, and help clients integrate their data from different sources and systems for improved traceability, transparency, collaboration and analytics. Connecting data using web technologies avoids vendor lock-in and proprietary solutions, allowing organizations to establish relationships between related data to facilitate sharing and decision making across silos.
What Factors Influence the Design of a Linked Data Generation Algorithm? (andimou)
Generating Linked Data remains a complicated and intensive engineering process. While different factors determine how a Linked Data generation algorithm is designed, potential alternatives for each factor are currently not considered when designing the tools’ underlying algorithms. Certain design patterns are frequently applied across different tools, covering certain alternatives of a few of these factors, whereas other alternatives are never explored. Consequently, there are no adequate tools for Linked Data generation for certain occasions, or tools with inadequate and inefficient algorithms are chosen. In this position paper, we determine such factors, based on our experiences, and present a preliminary list. These factors could be considered when a Linked Data generation algorithm is designed or a tool is chosen. We investigated which factors are covered by widely known Linked Data generation tools and concluded that only certain design patterns are frequently encountered. By these means, we aim to point out that Linked Data generation is above and beyond bare implementations, and algorithms need to be thoroughly and systematically studied and exploited.
Chen Wang is seeking a software engineering position as a developer with skills in Java, Python, and web programming. He has a Master's degree in Computer Science from UT Dallas and a Bachelor's degree in Electrical Engineering from Beihang University in Beijing, China. His skills include programming languages like Java, Python, JavaScript, PHP, and databases like MySQL. He has experience with academic projects involving cloud monitoring systems, natural language processing, a library management system, and a stock market web application. Previously he worked as an undergraduate research assistant on projects involving text mining, virtualization platform development, and service management.
The Web has until now mostly been considered a Web of documents, more specifically a Web of HTML pages. However, Tim Berners-Lee, the inventor of the Web, considers that the Web has not yet reached its full potential. The Data Web and Linked Data will enable more precise search services, transforming the Web into a smarter and richer Web. Google, for example, uses Linked Data concepts to realize its own Knowledge Graph and to process voice commands and voice queries for users. Linked Data concepts are not limited to the public Web: they can also be used to capture private knowledge in private company Webs, making them potentially applicable as the backbone for future PLM solutions.
This document summarizes three popular Java frameworks for working with RDF and SPARQL: Jena, Sesame, and JRDF. It describes how each framework represents RDF data using a graph model with subjects, predicates, and objects. It also discusses how each framework supports querying RDF data using SPARQL or alternative query languages, and persisting RDF graphs to databases.
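The graph model shared by Jena, Sesame, and JRDF can be sketched in a few lines of Python: a graph is a set of (subject, predicate, object) statements, and the most basic query is a triple pattern with wildcards, analogous to Jena's `Model.listStatements(s, p, o)`. The prefixes and data below are invented for illustration.

```python
# Sketch of the RDF graph model used by Jena, Sesame, and JRDF:
# statements are (subject, predicate, object) triples, and a basic
# query is a pattern where None acts as a wildcard.
graph = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:bob",   "foaf:name", "Bob"),
]

def match(graph, s=None, p=None, o=None):
    """Return all statements matching the pattern; None matches anything."""
    return [(gs, gp, go) for (gs, gp, go) in graph
            if (s is None or gs == s)
            and (p is None or gp == p)
            and (o is None or go == o)]

# Roughly: SELECT ?name WHERE { ex:alice foaf:name ?name }
names = [o for (_, _, o) in match(graph, s="ex:alice", p="foaf:name")]
print(names)  # ['Alice']
```

SPARQL engines generalize this single-pattern match to joins over many patterns, but the underlying statement model is the same in all three frameworks.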
This document discusses algorithms for transforming queries between different query languages. It focuses on transformations between Prolog, SPARQL, and λ-DCS queries. The document provides background on these query languages and explains why query transformations are useful, such as for linking natural language to database queries. It then describes two algorithms: one for transforming Prolog to SPARQL queries, and one for transforming SPARQL to λ-DCS queries. The algorithms are tested on a small geography database and the results are analyzed to evaluate the algorithms' performance and limitations.
WebSpa is a tool that allows the quick, intuitive (and even fun) interrogation of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of pre-defined SPARQL endpoints and allows the addition of new ones. A user account gives the possibility of saving both the interrogation and its results on the local computer, as well as further editing of the queries. The application is written in both Java and Flex. It uses the Jena and ARQ application programming interfaces to perform the queries, and the results are processed and displayed using Flex.
The following was presented at the Semantic Technology conference in March of 2006 in San Jose, California. This case study examines the extension of the National Information Exchange Model (NIEM) to include K-12 education metadata. NIEM’s compliance with ISO/IEC 11179 metadata standards was found to be critical for cost-effective system interoperability. This study indicates that extending the NIEM can be compatible with newer RDF and OWL metadata standards. We discuss how this strategy will dramatically lower data integration costs and make longitudinal data analysis more cost-effective. We make recommendations for state education agencies, federal policy makers, and metadata standards organizations. The conclusion discusses the possible impacts of recent innovations in collaborative metadata standards efforts.
This document discusses object-relational mapping (ORM) frameworks that allow data from relational databases to be accessed via object-oriented programming languages. It notes that most business applications are data-centric and use databases for data storage and retrieval. ORM frameworks provide benefits like productivity, abstraction from database technologies, and consistency. They work by mapping database tables and columns to domain objects and properties. Some popular ORM frameworks mentioned are Hibernate, Entity Framework, and NHibernate. The document also lists some challenges of using ORM like learning curves and performance tuning needs.
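The core mapping the document describes, table rows to domain objects and back, can be hand-rolled in a few lines with Python's built-in sqlite3. The table and class names are invented for the example; real frameworks such as Hibernate, Entity Framework, and NHibernate add identity maps, change tracking, lazy loading, and query translation on top of this basic pattern.

```python
# Minimal hand-rolled illustration of the core ORM idea: mapping
# table rows to domain objects and back. Names are invented examples.
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:          # the domain object
    id: int
    name: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

def save(c: Customer) -> None:      # object -> row
    conn.execute("INSERT INTO customer VALUES (?, ?)", (c.id, c.name))

def load(cid: int) -> Customer:     # row -> object
    row = conn.execute("SELECT id, name FROM customer WHERE id = ?",
                       (cid,)).fetchone()
    return Customer(*row)

save(Customer(1, "Acme Corp"))
print(load(1))  # Customer(id=1, name='Acme Corp')
```

Everything an ORM framework automates, generating the SQL in `save` and `load` from the class definition, is exactly what delivers the productivity and abstraction benefits (and the learning-curve and tuning costs) the document lists.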
High quality Linked Data generation for librarians (andimou)
This document discusses generating high quality linked data from heterogeneous data sources. It describes how linked data is derived from different data structures and formats and needs to be consistent. It presents challenges in linked data generation including data and semantic heterogeneity. It proposes using the RML mapping language to reduce heterogeneity and facilitate uniform linked data generation. The RML mapper tool is presented for executing RML rules to generate linked data.
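The mapping idea behind RML, a declarative rule that turns each record of a non-RDF source into RDF triples, can be sketched with a toy rule in Python. Real RML rules are written in Turtle (triples maps with subject and predicate-object maps) and executed by the RML mapper; this sketch only mirrors the template shape, and the URLs and column names are invented.

```python
# Toy version of an RML-style mapping: a declarative rule that turns
# each CSV record into RDF triples. Not real RML syntax; real rules
# are Turtle documents executed by the RML mapper.
import csv, io

rule = {
    "subject": "http://example.org/book/{id}",                   # subject template
    "predicates": {"http://purl.org/dc/terms/title": "title"},   # predicate -> column
}

source = io.StringIO("id,title\n1,Linked Data Basics\n")

triples = []
for row in csv.DictReader(source):
    s = rule["subject"].format(**row)
    for pred, col in rule["predicates"].items():
        triples.append((s, pred, row[col]))

print(triples)
```

Because the rule, not the code, carries the source-specific knowledge, the same executor can be pointed at CSV, JSON, or XML inputs, which is precisely how RML reduces heterogeneity in the generation step.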
A SEMANTIC BASED APPROACH FOR INFORMATION RETRIEVAL FROM HTML DOCUMENTS USING... (cscpconf)
Most internet applications are built using web technologies like HTML. Web pages are designed so that they either display data records from underlying databases or display text in an unstructured format using some fixed template. Summarizing data that is dispersed across different web pages is tedious and consumes considerable time and manual effort. A supervised learning technique called wrapper induction can be used across web pages to learn data extraction rules; applying these learnt rules to web pages makes information extraction much easier. This paper focuses on developing a tool for information extraction from unstructured data. The use of semantic web technologies greatly simplifies the process. The tool enables querying data scattered over multiple web pages in flexible ways. This is accomplished by the following steps: extracting the data from multiple web pages, storing it in the form of RDF triples, integrating multiple RDF files using an ontology, generating a SPARQL query based on the user query, and generating a report in the form of tables or charts from the results of the SPARQL query. The relationships between related web pages are identified using the ontology and used to query in better ways, thus enhancing search efficacy.
A semantic based approach for information retrieval from html documents using... (csandit)
This document describes a semantic-based approach for extracting structured information from HTML documents and generating reports by:
1) Extracting data from multiple HTML pages using wrapper induction techniques and storing it in RDF format.
2) Creating an ontology to establish relationships between RDF files.
3) Generating SPARQL queries to retrieve relevant data and generate reports in various formats like tables and charts.
4) The approach was tested on a college website, extracting faculty publication details and generating reports with improved precision, faster than manual methods.
Open Services for Lifecycle Collaboration (OSLC) - Extending REST APIs to Con... (Axel Reichwein)
Presentation on Open Services for Lifecycle Collaboration (OSLC) at the International Semantic Web Conference (ISWC) 2019 in Auckland, New Zealand.
Engineers need graphs for traceability. Problem: it is currently not possible to build engineering graphs at scale due to data and API heterogeneity. This problem can be solved by standardizing APIs of data sources. OSLC defines a standard API by combining concepts of REST and Linked Data. OSLC has been adopted by vendors of engineering software but more adoption is needed to increase the network effect.
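The "REST plus Linked Data" combination OSLC standardizes boils down to fetching resources over plain HTTP with content negotiation for an RDF serialization. The sketch below builds such a request with Python's standard urllib without actually sending it; the URL is a placeholder invented for illustration.

```python
# Sketch of the OSLC access pattern: a resource is fetched over plain
# HTTP, with content negotiation asking for RDF rather than HTML.
# The URL is a placeholder; no request is actually sent here.
import urllib.request

req = urllib.request.Request(
    "http://example.org/oslc/requirements/42",
    headers={
        "Accept": "text/turtle",        # ask for an RDF serialization
        "OSLC-Core-Version": "2.0",     # OSLC Core version header
    },
)

print(req.get_header("Accept"))  # text/turtle
```

Because every tool exposes the same request shape, a traceability graph can be assembled by crawling links between such resources, which is the standardization payoff the talk argues for.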
iLastic: Linked Data Generation Workflow and User Interface for iMinds Schola... (andimou)
Enriching scholarly data with metadata enhances the publications’ meaning. Unfortunately, different publishers of overlapping or complementary scholarly data neglect general-purpose solutions for metadata and instead use their own ad-hoc solutions. This leads to duplicate efforts and entails non-negligible implementation and maintenance costs. In this paper, we propose a reusable Linked Data publishing workflow that can be easily adjusted by different data owners to (i) generate and publish Linked Data, and (ii) align scholarly data repositories with enrichments over the publications’ content. As a proof-of-concept, the proposed workflow was applied to the iMinds research institute data warehouse, which was aligned with publications’ content derived from Ghent University’s digital repository. Moreover, we developed a user interface to help lay users with the exploration of the iLastic Linked Data set. Our proposed approach relies on a general-purpose workflow. This way, we manage to reduce the development and maintenance costs and increase the quality of the resulting Linked Data.
Standard Web APIs for Multidisciplinary Collaboration (Axel Reichwein)
- The document discusses the need for standard web APIs and connected data across disciplines like engineering to enable digital thread and multidisciplinary collaboration.
- It outlines challenges of current disconnected "data silos" and vendor lock-in. Lessons can be learned from earlier standardization efforts like the World Wide Web.
- Open standards like the Open Services for Lifecycle Collaboration (OSLC) specification aim to specify standard APIs and use of common data models like Resource Description Framework (RDF) to connect data across systems.
- Using standard APIs and treating data as a universal asset with open standards could help achieve full connectivity across the product lifecycle from requirements to design to manufacturing.
eNanoMapper database, search tools and templates (Nina Jeliazkova)
A webinar given at the NCIP Hub https://nciphub.org/resources/1925
Nanomaterial safety assessment has become an important task following the growth in production of engineered nanomaterials (ENMs) and the increased interest in ENMs from various academic, industry, and regulatory parties. A number of challenges exist in nanomaterials data representation and integration, mainly due to the complexity of the data and the origination of ENM information from diverse sources. We have recently described the eNanoMapper database [1] as part of the computational infrastructure for toxicological data management of engineered materials, developed within the eNanoMapper project [2].
The eNanoMapper prototype database is publicly available at http://data.enanomapper.net, demonstrating the integration of data from multiple sources using the common data model and Application Programming Interface. The supported import formats are IUCLID5 files (OECD HT), the semantic format (RDF), and custom spreadsheet templates. The latter accommodates the preferred approach for data gathering in the majority of the NanoSafety Cluster projects and is enabled by a configurable parser that maps the custom spreadsheet organization into the internal eNanoMapper storage components through an external configuration file. Import of spreadsheet data and other data formats generated by a number of NanoSafety Cluster projects is currently ongoing. The export formats have been extended with the new ISA JSON format, following the most recent ISA specification.
Defining templates for data gathering is a common activity for most of the NanoSafety Cluster projects, usually resulting in modified Excel spreadsheets. To help avoid incompatibility issues, we present a tool for template generation, based on templates released under an open license by the JRC within the framework of the NANoREG project [3]. A number of physchem, in-vitro and in-vivo assays are supported, and using feedback from users we have added and extended information about different aspects of nanosafety, e.g. environmental exposure, cell culture assays, cellular and animal models, nanomaterial production features, and nanomaterial ageing.
Finally, the data can be accessed programmatically via the application programming interface, as well as via a user-friendly search interface at https://search.data.enanomapper.net. The search application is powered by a free-text search engine and the eNanoMapper ontology, and has been improved over the last year based on user feedback. The search function now allows multiple filters for information; it is possible to stack filters for e.g. nanomaterial type, cell model, and assay.
eNanoMapper is supported by European Commission 7th Framework Programme for Research and Technological Development Grant (Grant agreement no: 604134).
The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in "Generation Zero" of the Semantic Web, since there are few real-world compelling applications. The heterogeneity, the volume of data and the lack of standards are problems that could be addressed through some nature inspired methods. The paper presents the most important aspects of the Semantic Web, as well as its biggest issues; it then describes some methods inspired from nature - genetic algorithms, artificial neural networks, swarm intelligence, and the way these techniques can be used to deal with Semantic Web problems.
The document discusses automatic data unit annotation in search results. It proposes a method that clusters data units on result pages into groups containing semantically similar units. Then, multiple annotators are used to predict annotation labels for each group based on features of the units. An annotation wrapper is constructed for each website to annotate new result pages from that site. The method aims to improve search response by providing meaningful annotations of data units within results. It is evaluated based on precision and recall for the alignment of data units and text nodes during the annotation process.
IRJET- An Efficient Way to Querying XML Database using Natural Language (IRJET Journal)
This document discusses an efficient way to query XML databases using natural language. It proposes a framework that can accept English language queries and translate them into XQuery or SQL expressions to retrieve data from an XML database. The system performs linguistic processing to map tokens in the natural language query to XQuery fragments, then executes the translated query against the database. Existing approaches are discussed that typically use semantic and syntactic analysis to represent the query logically before translation, but have limitations in handling ambiguity. The proposed system aims to improve query translation accuracy by leveraging token relationships and classifications determined from natural language parsing.
LINQ
The acronym LINQ stands for Language Integrated Query. Microsoft’s query language is fully integrated and offers easy data access from in-memory objects, databases, XML documents, and more. Through a set of language extensions, LINQ seamlessly integrates queries into C# and Visual Basic. This tutorial offers a complete insight into LINQ with ample examples and code. The tutorial is divided into topics and subtopics so that a beginner can move gradually toward the more complex aspects of LINQ.
Chen Wang is seeking a software engineering position as a developer with skills in Java, Python, and web programming. He has a Master's degree in Computer Science from UT Dallas and a Bachelor's degree in Electrical Engineering from Beihang University in Beijing, China. His skills include programming languages like Java, Python, JavaScript, PHP, and databases like MySQL. He has experience with academic projects involving cloud monitoring systems, natural language processing, a library management system, and a stock market web application. Previously he worked as an undergraduate research assistant on projects involving text mining, virtualization platform development, and service management.
The Web is until now mostly considered to be a Web of documents, more specifically a Web of HTML pages. However, the inventor of the Web Tim Berners Lee considers the Web not to have reached its fullest potential. The Data Web and Linked Data will enable more precise search services transforming the Web into a smarter and richer Web. Google for example uses Linked Data concepts to realize its own knowledge graph to process voice commands and voice queries for users. Linked Data concepts are not limited to the public Web. They can also be used to capture private knowledge in private company Webs making them potentially applicable as the backbone for future PLM solutions.
This document summarizes three popular Java frameworks for working with RDF and SPARQL: Jena, Sesame, and JRDF. It describes how each framework represents RDF data using a graph model with subjects, predicates, and objects. It also discusses how each framework supports querying RDF data using SPARQL or alternative query languages, and persisting RDF graphs to databases.
IEEE 2014 DOTNET DATA MINING PROJECTS A novel model for mining association ru...IEEEMEMTECHSTUDENTPROJECTS
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09666155510, 09849539085 or mail us - ieeefinalsemprojects@gmail.com-Visit Our Website: www.finalyearprojects.org
This document discusses algorithms for transforming queries between different query languages. It focuses on transformations between Prolog, SPARQL, and λ-DCS queries. The document provides background on these query languages and explains why query transformations are useful, such as for linking natural language to database queries. It then describes two algorithms: one for transforming Prolog to SPARQL queries, and one for transforming SPARQL to λ-DCS queries. The algorithms are tested on a small geography database and the results are analyzed to evaluate the algorithms' performance and limitations.
WebSpa is a tool that allows the quick, intuitive (and even fun) interrogation of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of pre-defined SPARQL endpoints and allows the addition of new ones. An user account gives the possibility of saving both the interrogation and its results on the local computer, as well as further editing of the queries. The application is written in both Java and Flex. It uses Jena and ARQ application programming interface in order to perform the queries, and the results are processed and displayed using Flex.
The following was presented at the Semantic Technology conference in March of 2006 in San Jose California. This case study examines the extension of the National
Information Exchange Model NIEM to include K-12
education metadata. NIEM’s compliance with ISO/IEC
11179 metadata standards was found to be critical for
cost-effective system interoperability. This study indicates
that extending the NIEM can be compatible with newer
RDF and OWL metadata standards. We discuss how this
strategy will dramatically lower data integration costs and
make longitudinal data analysis more cost-effective. We
make recommendations for state education agencies,
federal policy makers, and metadata standards
organizations. The conclusion discusses the possible
impacts of recent innovations in collaborative metadata
standards efforts.
This document discusses object-relational mapping (ORM) frameworks that allow data from relational databases to be accessed via object-oriented programming languages. It notes that most business applications are data-centric and use databases for data storage and retrieval. ORM frameworks provide benefits like productivity, abstraction from database technologies, and consistency. They work by mapping database tables and columns to domain objects and properties. Some popular ORM frameworks mentioned are Hibernate, Entity Framework, and NHibernate. The document also lists some challenges of using ORM like learning curves and performance tuning needs.
High quality Linked Data generation for librariansandimou
This document discusses generating high quality linked data from heterogeneous data sources. It describes how linked data is derived from different data structures and formats and needs to be consistent. It presents challenges in linked data generation including data and semantic heterogeneity. It proposes using the RML mapping language to reduce heterogeneity and facilitate uniform linked data generation. The RML mapper tool is presented for executing RML rules to generate linked data.
A SEMANTIC BASED APPROACH FOR INFORMATION RETRIEVAL FROM HTML DOCUMENTS USING...cscpconf
Most internet applications are built using web technologies like HTML. Web pages are designed either to display records from underlying databases or to present text in an unstructured format within some fixed template. Summarizing data dispersed across different web pages is tedious and consumes considerable time and manual effort. A supervised learning technique called wrapper induction can be used to learn data extraction rules from web pages; applying these learnt rules makes information extraction a much easier process. This paper focuses on developing a tool for information extraction from unstructured data. The use of semantic web technologies greatly simplifies the process. The tool enables querying data scattered over multiple web pages in distinguished ways. This is accomplished in the following steps: extracting the data from multiple web pages, storing it in the form of RDF triples, integrating multiple RDF files using an ontology, generating a SPARQL query based on the user query, and generating a report in the form of tables or charts from the results of the SPARQL query. The relationships between related web pages are identified using the ontology and exploited to query in better ways, enhancing search efficacy.
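The RDF-and-SPARQL stages of that pipeline can be mimicked with an in-memory triple list and a one-pattern matcher; a real system would use an RDF library and a SPARQL engine, and the `fac:`/`ex:` identifiers below are invented:

```python
# Tiny in-memory triple store with a single-pattern query, standing in for
# the RDF + SPARQL stages of the pipeline (None acts as a query variable).
triples = [
    ("fac:smith", "ex:teaches", "course:db"),
    ("fac:smith", "ex:published", "paper:p1"),
    ("fac:jones", "ex:published", "paper:p2"),
]

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern with None wildcards."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which papers did fac:smith publish?" - analogue of a SPARQL triple pattern
print(match(("fac:smith", "ex:published", None), triples))
```

Integration across pages then amounts to concatenating triple lists extracted from each page before querying.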
A semantic based approach for information retrieval from html documents using... (csandit)
This document describes a semantic-based approach for extracting structured information from HTML documents and generating reports by:
1) Extracting data from multiple HTML pages using wrapper induction techniques and storing it in RDF format.
2) Creating an ontology to establish relationships between RDF files.
3) Generating SPARQL queries to retrieve relevant data and generate reports in various formats like tables and charts.
4) The approach was tested on a college website, extracting faculty publication details and generating precision-improved reports faster than manual methods.
Open Services for Lifecycle Collaboration (OSLC) - Extending REST APIs to Con... (Axel Reichwein)
Presentation on Open Services for Lifecycle Collaboration (OSLC) at the International Semantic Web Conference (ISWC) 2019 in Auckland, New Zealand.
Engineers need graphs for traceability. Problem: it is currently not possible to build engineering graphs at scale due to data and API heterogeneity. This problem can be solved by standardizing APIs of data sources. OSLC defines a standard API by combining concepts of REST and Linked Data. OSLC has been adopted by vendors of engineering software but more adoption is needed to increase the network effect.
iLastic: Linked Data Generation Workflow and User Interface for iMinds Schola... (andimou)
Enriching scholarly data with metadata enhances the publications’ meaning. Unfortunately, different publishers of overlapping or complementary scholarly data neglect general-purpose solutions for metadata and instead use their own ad-hoc solutions. This leads to duplicate efforts and entails non-negligible implementation and maintenance costs. In this paper, we propose a reusable Linked Data publishing workflow that can be easily adjusted by different data owners to (i) generate and publish Linked Data, and (ii) align scholarly data repositories with enrichments over the publications’ content. As a proof-of-concept, the proposed workflow was applied to the iMinds research institute data warehouse, which was aligned with publications’ content derived from Ghent University’s digital repository. Moreover, we developed a user interface to help lay users with the exploration of the iLastic Linked Data set. Our proposed approach relies on a general-purpose workflow. This way, we manage to reduce the development and maintenance costs and increase the quality of the resulting Linked Data.
Standard Web APIs for Multidisciplinary Collaboration (Axel Reichwein)
- The document discusses the need for standard web APIs and connected data across disciplines like engineering to enable digital thread and multidisciplinary collaboration.
- It outlines challenges of current disconnected "data silos" and vendor lock-in. Lessons can be learned from earlier standardization efforts like the World Wide Web.
- Open standards like the Open Services for Lifecycle Collaboration (OSLC) specification aim to specify standard APIs and use of common data models like Resource Description Framework (RDF) to connect data across systems.
- Using standard APIs and treating data as a universal asset with open standards could help achieve full connectivity across the product lifecycle from requirements to design to manufacturing.
eNanoMapper database, search tools and templates (Nina Jeliazkova)
A webinar given at the NCIP Hub https://nciphub.org/resources/1925
Nanomaterial safety assessment has become an important task following the production growth of engineered nanomaterials (ENMs) and the increased interest in ENMs from various academic, industry and regulatory parties. A number of challenges exist in nanomaterials data representation and integration, mainly due to the complexity of the data and the origination of ENM information from diverse sources. We have recently described the eNanoMapper database [1] as part of the computational infrastructure for toxicological data management of engineered materials, developed within the eNanoMapper project [2].
The eNanoMapper prototype database is publicly available at http://data.enanomapper.net, demonstrating the integration of data from multiple sources using a common data model and Application Programming Interface. The supported import formats are IUCLID5 files (OECD HT), the semantic format (RDF) and custom spreadsheet templates. The latter accommodates the preferred approach to data gathering for the majority of the NanoSafety Cluster projects and is enabled by a configurable parser that maps the custom spreadsheet organization onto the internal eNanoMapper storage components through an external configuration file. Import of spreadsheet data and other data formats generated by a number of NanoSafety Cluster projects is currently ongoing. The export formats have been extended with the new ISA JSON format, following the most recent ISA specification.
Defining templates for data gathering is a common activity for most of the NanoSafety Cluster projects, usually resulting in modified Excel spreadsheets. To help avoid incompatibility issues, we present a tool for template generation based on templates released under an open license by the JRC within the framework of the NANoREG project [3]. A number of physchem, in-vitro and in-vivo assays are supported, and using feedback from users we have added and extended information about different aspects of nanosafety, e.g. environmental exposure, cell culture assays, cellular and animal models, nanomaterial production features, and nanomaterial ageing.
Finally, the data can be accessed programmatically via the application programming interface as well as via a user-friendly search interface at https://search.data.enanomapper.net. The search application is powered by a free-text search engine and the eNanoMapper ontology and was improved over the last year based on user feedback. The search function now allows multiple filters for information; it is possible to stack filters for, e.g., nanomaterial type, cell model and assay.
eNanoMapper is supported by European Commission 7th Framework Programme for Research and Technological Development Grant (Grant agreement no: 604134).
The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in "Generation Zero" of the Semantic Web, since there are few compelling real-world applications. The heterogeneity, the volume of data and the lack of standards are problems that could be addressed through some nature-inspired methods. The paper presents the most important aspects of the Semantic Web as well as its biggest issues; it then describes some methods inspired by nature - genetic algorithms, artificial neural networks, swarm intelligence - and the way these techniques can be used to deal with Semantic Web problems.
The document discusses automatic data unit annotation in search results. It proposes a method that clusters data units on result pages into groups containing semantically similar units. Then, multiple annotators are used to predict annotation labels for each group based on features of the units. An annotation wrapper is constructed for each website to annotate new result pages from that site. The method aims to improve search response by providing meaningful annotations of data units within results. It is evaluated based on precision and recall for the alignment of data units and text nodes during the annotation process.
IRJET- An Efficient Way to Querying XML Database using Natural Language (IRJET Journal)
This document discusses an efficient way to query XML databases using natural language. It proposes a framework that can accept English language queries and translate them into XQuery or SQL expressions to retrieve data from an XML database. The system performs linguistic processing to map tokens in the natural language query to XQuery fragments, then executes the translated query against the database. Existing approaches are discussed that typically use semantic and syntactic analysis to represent the query logically before translation, but have limitations in handling ambiguity. The proposed system aims to improve query translation accuracy by leveraging token relationships and classifications determined from natural language parsing.
LINQ
The acronym LINQ stands for Language Integrated Query. Microsoft’s query language is fully integrated and offers easy data access from in-memory objects, databases, XML documents, and more. Through a set of language extensions, LINQ integrates queries into C# and Visual Basic. This tutorial offers a complete insight into LINQ with ample examples and code. The tutorial is divided into topics and subtopics so that a beginner can move gradually toward the more complex topics of LINQ.
LINQ (Language Integrated Query) allows .NET languages like C# and VB.NET to query different data sources like object collections, ADO.NET datasets, XML documents, and SQL databases using a standardized query syntax. It simplifies working with collections by replacing traditional loops and delegates with a more concise query syntax that is type checked at compile time. LINQ can work with various data sources and developers don't need to learn different query languages for each data source. It provides a powerful tool for data manipulation in .NET applications.
LINQ (Language-Integrated Query) allows .NET languages to perform data querying directly in code. It was introduced in .NET Framework 3.5 and adds native querying capabilities to languages like C# and VB.NET. LINQ can query different data sources, including objects, XML, ADO.NET, and SQL databases. It uses a SQL-like syntax that is translated into the appropriate data language. LINQ provides many benefits like maintaining business logic and queries together in one project and generating optimized SQL.
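LINQ itself is a C#/VB feature, but the shape of a query over an in-memory collection can be suggested with a Python analogue; the comprehension below plays the role of a hypothetical C# query `from n in numbers where n % 2 == 0 orderby n descending select n * n`:

```python
# Rough Python analogue of a LINQ query over an in-memory collection:
# filter (where), project (select), and order the results, declaratively,
# instead of writing an explicit loop with accumulator variables.
numbers = [5, 2, 8, 3, 6]
result = sorted((n * n for n in numbers if n % 2 == 0), reverse=True)
print(result)  # squares of the even numbers, largest first: [64, 36, 4]
```

What the analogue cannot show is the compile-time type checking the summaries emphasize; that is precisely what integrating the query into the host language buys C# and VB.NET developers.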
Pattern based approach for Natural Language Interface to Database (IJERA Editor)
Natural Language Interface to Database (NLIDB) is an interesting and widely applicable research field. As the name suggests, an NLIDB allows a naive user to query a database in natural language. This paper presents an NLIDB, namely the Pattern based Natural Language Interface to Database (PBNLIDB), in which patterns for simple queries, aggregate functions, relational operators, short-circuit logical operators and joins are defined. The patterns are categorized into valid and invalid. Valid patterns are directly used to translate a natural language query into a Structured Query Language (SQL) query, whereas an invalid pattern assists the query authoring service in generating options for the user so that the query can be framed correctly. The system takes an English language query as input, recognizes the pattern in the query, selects one of the aforementioned SQL features based on the pattern, prepares an SQL statement, fires it on the database and displays the result.
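A single pattern in the spirit of that approach might be sketched as follows; the regex, table and column names are invented for the illustration and are not taken from the paper:

```python
import re

# One illustrative "simple query" pattern: a fixed English phrasing whose
# named groups map directly onto the slots of a SELECT statement.
PATTERN = re.compile(
    r"show (?P<col>\w+) of (?P<ent>\w+) whose (?P<key>\w+) is (?P<val>\w+)",
    re.IGNORECASE)

def to_sql(question):
    m = PATTERN.match(question)
    if m is None:
        return None   # invalid pattern -> would trigger the authoring service
    return (f"SELECT {m['col']} FROM {m['ent']} "
            f"WHERE {m['key']} = '{m['val']}'")

print(to_sql("show salary of employees whose department is sales"))
```

A full system would hold a catalogue of such patterns (aggregates, joins, logical operators) and try each in turn, falling back to interactive query authoring when none matches.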
Improved Presentation and Facade Layer Operations for Software Engineering Pr... (Dr. Amarjeet Singh)
Nowadays, one of the most challenging situations for software developers is the mismatch between relational database systems and programming code; in the literature, this problem is known as the "impedance mismatch". This study develops a framework built on innovations to the existing Object Relational Mapping technique to solve these problems. In the study, users can perform operations against three different database systems: MsSQL, MySql, and Oracle. These operations can be carried out from both the C# and Java programming languages. In this framework, developers can define database tables in the interface automatically and create relations between tables by defining foreign keys. When the system performs these operations, it creates tables, views, and stored procedures automatically. In addition, entity classes in C# and Java for tables and views, and operation classes for stored procedures, are created automatically. A summary of the transactions can be exported as a PDF file by the framework. The project can also automatically create Windows Communication Foundation classes to facilitate handling of the created database elements and the interfacing operations. This framework, which supports distributed systems, can be downloaded at this link.
This document discusses object-relational mapping (ORM) tools and LINQ (Language Integrated Query). It begins by explaining what an ORM is and why developers may or may not need one. It then introduces LINQ, describing how it allows queries against data sources to be written as first-class language constructs in C# using familiar operators and syntax. The document outlines the different types of data sources LINQ can query, including SQL databases, XML, and .NET collections. It provides examples of using LINQ to SQL and LINQ to XML. The document also discusses the differences between code-first and database-first approaches in Entity Framework and how to configure domain classes. Finally, it covers query syntax versus method syntax in LINQ.
Building N Tier Applications With Entity Framework Services 2010 (David McCarter)
Learn how to build real-world nTier applications with the new Entity Framework and related services introduced in .NET 3.5 SP1. With this new technology built into .NET, you can easily wrap an object model around your database and have all the data access generated automatically, or use your own stored procedures and views. Then learn how to easily and securely expose your object model over WCF with just a few lines of code using ADO.NET Data Services. The session demonstrates how to create and consume these new technologies from the ground up. Lots of code!
Database Integrated Analytics using R: Initial Experiences with SQL-Server + R (OllieShoresna)
Database Integrated Analytics using R: Initial Experiences with SQL-Server + R
Josep Ll. Berral and Nicolas Poggi
Barcelona Supercomputing Center (BSC)
Universitat Politècnica de Catalunya (BarcelonaTech)
Barcelona, Spain
Abstract—Most data scientists nowadays use functional or semi-functional languages like SQL, Scala or R to treat data obtained directly from databases. Such a process requires fetching the data, processing it, and storing it again, and it tends to be done outside the DB, in often complex data-flows. Recently, database service providers have decided to integrate “R-as-a-Service” in their DB solutions. The analytics engine is called directly from the SQL query tree, and results are returned as part of the same query. Here we show a first taste of such technology by testing the portability of our ALOJA-ML analytics framework, coded in R, to Microsoft SQL-Server 2016, one of the SQL+R solutions released recently. In this work we discuss some data-flow schemes for porting a local DB + analytics engine architecture towards Big Data, focusing specially on the new DB Integrated Analytics approach, and commenting on the first experiences in usability and performance obtained from such new services and capabilities.
I. INTRODUCTION
Current data mining methodologies, techniques and algorithms are based on heavy data browsing, slicing and processing. For data scientists, also users of analytics, the capability of defining the data to be retrieved and the operations to be applied over this data in an easy way is essential. This is the reason why functional languages like SQL, Scala or R are so popular in such fields: although these languages allow high-level programming, they free the user from programming the infrastructure for accessing and browsing data.
The usual trend when processing data is to fetch the data from the source or storage (file system or relational database), bring it into a local environment (memory, distributed workers, ...), treat it, and then store back the results. In such a schema, functional language applications are used to retrieve and slice the data, while imperative language applications are used to process the data and manage the data-flow between systems. In most languages and frameworks, database connection protocols like ODBC or JDBC are available to enhance this data-flow, allowing applications to directly retrieve data from DBs. And although most SQL-based DB services allow user-written procedures and functions, these do not include a high variety of primitive functions or operators.
The arrival of Big Data favored distributed frameworks like Apache Hadoop and Apache Spark, where the data is distributed “in the Cloud” and the data processing can also be distributed to where the data is placed; results are then joined and aggregated. Such technologies have the advantage of distributed computing, but when the schema for accessing data and using it is still the same, ...
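The fetch-process-store round trip described in the introduction can be made concrete with sqlite3 standing in for the database and a Python function for the analytics step; the in-DB R integration the paper tests would replace this round trip with a call inside the query itself. The `runs` table and its values are invented:

```python
import sqlite3, statistics

# The classic out-of-DB data flow: fetch, process in the local environment,
# store the result back (the pattern the paper contrasts with in-DB R).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (config TEXT, seconds REAL)")
conn.executemany("INSERT INTO runs VALUES (?, ?)",
                 [("a", 10.0), ("a", 14.0), ("b", 7.0)])

# 1. fetch
rows = conn.execute("SELECT seconds FROM runs WHERE config = 'a'").fetchall()
# 2. process locally (the analytics step that R-as-a-Service moves in-DB)
mean_a = statistics.mean(r[0] for r in rows)
# 3. store back
conn.execute("CREATE TABLE summary (config TEXT, mean_seconds REAL)")
conn.execute("INSERT INTO summary VALUES ('a', ?)", (mean_a,))
print(conn.execute("SELECT mean_seconds FROM summary").fetchone()[0])
```

Every byte of step 1 and step 3 crosses the DB boundary; the paper's point is that calling the analytics engine from inside the query tree removes that traffic.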
LINQ (Language Integrated Query) adds querying capabilities to .NET languages. It defines standard query operators and translation rules for querying data sources like arrays, XML documents, and databases. LINQ to XML represents XML as XElement objects that can be queried using LINQ. The System.Xml.Linq namespace contains classes like XDocument and XElement for constructing XML documents programmatically. XML can be loaded from files, traversed, and have elements inserted, deleted, and updated using LINQ to XML.
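What LINQ to XML does with XElement (load, query, insert) can be approximated in Python with the standard xml.etree.ElementTree module; the sample book catalogue below is invented:

```python
import xml.etree.ElementTree as ET

# LINQ-to-XML-style tasks (load, query, insert) done with ElementTree.
doc = ET.fromstring(
    "<books><book price='30'><title>DB</title></book>"
    "<book price='55'><title>XML</title></book></books>")

# Query: titles of books costing more than 40 (analogue of where/select)
titles = [b.findtext("title") for b in doc.iter("book")
          if float(b.get("price")) > 40]
print(titles)

# Insert: append a new element, much as XElement construction would
new = ET.SubElement(doc, "book", price="20")
ET.SubElement(new, "title").text = "LINQ"
print(len(doc.findall("book")))
```

The difference in C# is that the query is written in LINQ syntax and type-checked at compile time, rather than as a runtime comprehension.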
IRJET- Natural Language Query Processing (IRJET Journal)
The document discusses the development of a natural language query processing system that allows users to retrieve data from a database using simple English statements rather than SQL queries. It proposes a system that takes an English query as input, analyzes it to extract keywords, uses those keywords to generate an equivalent SQL query, executes the SQL query on the database, and returns the results to the user. The system is meant to make accessing database information easier for non-technical users by allowing them to use natural language instead of SQL.
LINQ 2 SQL Presentation To Palmchip And Trg, Technology Resource Group (Shahzad)
LINQ 2 SQL provides strongly-typed queries and results for relational databases. It consists of LINQ to Objects syntax to query data and tools to map classes and database tables. LINQ to SQL is an object-relational mapper that allows querying over SQL Server using LINQ. It provides an intuitive API and compile-time checking compared to other ORMs. Performance is improved by caching and optimized translation of LINQ queries to SQL.
AUTOMATIC TRANSFER OF DATA USING SERVICE-ORIENTED ARCHITECTURE TO NoSQL DATAB... (IRJET Journal)
This document summarizes an academic paper that proposes a model for automatically migrating data from relational databases to NoSQL databases using service-oriented architecture. The model encapsulates popular NoSQL databases like MongoDB, Cassandra, and Neo4j as web services. This allows data to be efficiently migrated from a relational database like Apache Derby to a NoSQL database with minimal knowledge of how each database works. The document details the proposed migration model and discusses its implementation and the successful testing of data migration from Derby to the NoSQL databases.
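The row-to-document step at the heart of such a migration can be sketched as follows; sqlite3 stands in for Apache Derby, a plain dict stands in for the NoSQL target, and the `customers` table is invented for the example:

```python
import sqlite3, json

# Toy version of the row-to-document step of a relational -> NoSQL
# migration; a real service would write to MongoDB/Cassandra/Neo4j.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Derby')")

cur = conn.execute("SELECT * FROM customers")
cols = [d[0] for d in cur.description]   # column names from the schema
document_store = {}                      # stand-in for the NoSQL target
for row in cur:
    doc = dict(zip(cols, row))           # each row becomes one JSON document
    document_store[doc["id"]] = json.dumps(doc)

print(document_store[1])
```

Wrapping each target database behind the same service interface, as the paper proposes, means only this inner conversion changes per backend.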
The document describes an Automatic Database Schema Generator tool that can generate a database schema from natural language textual requirements. It takes textual requirements as input, analyzes the text using natural language processing techniques like tokenization and part-of-speech tagging. It also parses a domain ontology related to the problem domain to help identify entities and attributes. The tool then extracts entities, attributes, and identifies primary and foreign keys to generate a relational database schema that can be used to develop the application database. The tool aims to automate the manual and iterative process of database schema design from requirements.
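The extraction step such a tool automates can be caricatured with a single regex grammar; real systems use proper tokenization, part-of-speech tagging and a domain ontology, and the `<Entity> has <attributes>.` grammar below is invented for the sketch:

```python
import re

# Crude entity/attribute extraction standing in for the tool's NLP stage:
# requirements phrased as "<Entity> has <a>, <b> and <c>." become tables.
REQ = re.compile(r"(?P<entity>\w+) has (?P<attrs>[\w, ]+?)\.")

def to_schema(text):
    stmts = []
    for m in REQ.finditer(text):
        attrs = re.split(r",\s*|\s+and\s+", m["attrs"])
        cols = ", ".join(f"{a.strip()} TEXT" for a in attrs)
        stmts.append(
            f"CREATE TABLE {m['entity']} (id INTEGER PRIMARY KEY, {cols})")
    return stmts

reqs = "Student has name, email and grade. Course has title and credits."
for s in to_schema(reqs):
    print(s)
```

A surrogate `id` key is generated per entity here; the real tool instead identifies primary and foreign keys from the text and the domain ontology.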
IRJET- Hosting NLP based Chatbot on AWS Cloud using Docker (IRJET Journal)
This document discusses hosting an NLP-based chatbot on AWS using Docker. It describes developing a chatbot that answers user questions by searching text data indexed in Elasticsearch. The chatbot is containerized using Docker and deployed on AWS Elastic Container Service (ECS) to improve availability and performance. Key components include natural language processing, Elasticsearch for searching, an API for querying data, and an Angular UI. Docker Compose is used to launch multiple containers for the Elasticsearch, API, UI and other services.
The document discusses Microsoft's Entity Framework ORM technology. It provides an overview of Entity Framework and how it compares to other ORM technologies like LINQ to SQL. It also outlines Microsoft's strategy of promoting Entity Framework as the preferred .NET ORM going forward.
The technology of object oriented databases was introduced to system developers in the late 1980s. Object DBMSs add database functionality to object programming languages. A major benefit of this approach is the unification of application and database development into a seamless data model and language environment. As a result, applications require less code, use more natural data modeling, and code bases are easier to maintain.
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE... (IRJET Journal)
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE (IRJET Journal)
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence the structural response of RC framed buildings.
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A... (IRJET Journal)
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil Characteristics (IRJET Journal)
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos... (IRJET Journal)
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-... (IRJET Journal)
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape... (IRJET Journal)
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg... (IRJET Journal)
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,... (IRJET Journal)
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with maximum temperatures decreasing at a rate of -0.0341 and minimum temperatures at -0.0152.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend that development planners account for rising summer precipitation and declining temperatures in the district.
P.E.B. Framed Structure Design and Analysis Using STAAD Pro (IRJET Journal)
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre... (IRJET Journal)
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare System (IRJET Journal)
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridges (IRJET Journal)
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web application (IRJET Journal)
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ... (IRJET Journal)
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE. (IRJET Journal)
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi... (IRJET Journal)
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic Design (IRJET Journal)
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India, according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and the response spectrum method, and its responses in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. The software STAAD Pro was used for the analysis and design.
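The equivalent static method of IS 1893 (Part 1):2002 reduces seismic demand to a single design base shear computed from the zone factor Z, importance factor I, response reduction factor R, and spectral acceleration ratio Sa/g. The sketch below shows that calculation; the numeric inputs are illustrative placeholders, not values from the paper.

```python
def design_base_shear(Z, I, R, Sa_g, W):
    """Equivalent static base shear per IS 1893 (Part 1):2002:
    Ah = (Z/2) * (I/R) * (Sa/g), Vb = Ah * W."""
    Ah = (Z / 2.0) * (I / R) * Sa_g  # horizontal seismic coefficient
    return Ah * W                    # base shear, same units as W

# Illustrative values: seismic zone IV (Z = 0.24), I = 1.0,
# steel moment frame (R = 5), Sa/g = 2.5 (short-period plateau),
# seismic weight W = 5000 kN.
Vb = design_base_shear(0.24, 1.0, 5.0, 2.5, 5000.0)
print(round(Vb, 1))  # 300.0 (kN)
```

The base shear is then distributed over the storeys in proportion to W_i * h_i^2, which is what a tool like STAAD Pro automates internally.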
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr... (IRJET Journal)
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 (Sinan KOZAK)
Sinan, from the Delivery Hero mobile infrastructure engineering team, shares a deep dive into performance acceleration through Gradle build-cache optimization. The talk traces the team's journey in solving complex build-cache problems that affect Gradle builds, aiming to demonstrate what faster builds make possible. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up to numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers valuable lessons on maintaining cache integrity without sacrificing functionality.
Batteries: Introduction; types of batteries; discharging and charging of a battery; characteristics of a battery; battery rating; various tests on a battery. Primary battery: silver button cell. Secondary battery: Ni-Cd battery. Modern battery: lithium-ion battery. Maintenance of batteries; choice of batteries for electric vehicle applications.
Fuel Cells: Introduction; importance and classification of fuel cells; description, principle, components, and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cell.
Literature Review Basics and Understanding Reference Management.pptx (Dr Ramhari Poudyal)
A three-day training on academic research, focusing on analytical tools, held at United Technical College with support from the University Grants Commission, Nepal, 24-26 May 2024.
Comparative analysis between traditional aquaponics and reconstructed aquapon... (bijceesjournal)
The aquaponic system of planting is a method that does not require soil. It needs only water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. They not only enable planting in small spaces but also reduce artificial chemical use and minimize excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for propagating tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between the traditional and reconstructed aquaponics systems when propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system's higher growth yield results in a much more nourished crop than the traditional aquaponics system: it is superior in number of fruits and in height, weight, and girth measurements. Moreover, the reconstructed aquaponics system is shown to eliminate the hindrances present in the traditional aquaponics system, namely overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS (IJNSA Journal)
The smart irrigation system represents an innovative approach to optimizing water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threats and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective, secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Based on the security analysis conducted, the paper recommends countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system.
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
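Risk assessments of this kind commonly score each asset with the qualitative formula risk = likelihood x impact. A minimal sketch of that ranking step follows; the asset names and 1-5 scores are hypothetical illustrations, not values from the paper.

```python
# Hypothetical (likelihood, impact) scores on a 1-5 scale for
# typical smart-irrigation assets.
assets = {
    "soil moisture sensor": (4, 3),
    "actuator/valve controller": (3, 5),
    "wireless gateway": (4, 4),
    "cloud dashboard": (2, 5),
}

def risk_score(likelihood, impact):
    """Classic qualitative risk formula: risk = likelihood x impact."""
    return likelihood * impact

# Rank assets so treatment effort goes to the highest risks first.
ranked = sorted(assets.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (l, i) in ranked:
    print(f"{name}: risk {risk_score(l, i)}")
```

With these sample scores the wireless gateway ranks highest (4 x 4 = 16), which is where risk treatment (e.g. link encryption) would be applied first.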
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM (HODECEDSIET)
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
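The mux/demux round trip described in steps 1-4 can be sketched in a few lines. This is a toy model, not a transmission-level implementation: each frame simply takes one sample from every stream in a fixed slot order, and the receiver recovers a stream by reading its fixed slot in every frame.

```python
def tdm_multiplex(streams):
    """Synchronous TDM: build frames by taking one sample from each
    stream per frame, in a fixed slot order."""
    line = []
    for frame in zip(*streams):  # one time slot per stream, per frame
        line.extend(frame)
    return line

def tdm_demultiplex(line, n_streams):
    """Recover each stream by reading its fixed slot in every frame."""
    return [line[i::n_streams] for i in range(n_streams)]

# Four signals, as in the four-slot example above.
signals = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"], ["d1", "d2"]]
line = tdm_multiplex(signals)
print(line)  # ['a1', 'b1', 'c1', 'd1', 'a2', 'b2', 'c2', 'd2']
assert tdm_demultiplex(line, 4) == signals
```

Note that the receiver needs only the frame length (here, 4 slots) and frame alignment to separate the signals, which is exactly why synchronization is critical.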
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
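The difference between the two schemes shows up directly in slot usage. In the hedged sketch below (a simplified model, not a wire format), statistical TDM sends a slot only when a stream has data, so each slot must carry an address tag identifying its source stream:

```python
def stat_tdm_multiplex(streams):
    """Statistical TDM: emit a slot only when a stream has data in the
    current cycle; tag each slot with the source stream's address."""
    slots = []
    for t in range(max(len(s) for s in streams)):
        for sid, stream in enumerate(streams):
            if t < len(stream) and stream[t] is not None:
                slots.append((sid, stream[t]))  # (address, payload)
    return slots

# Streams 1 and 2 are partly idle (None = nothing to send this cycle).
streams = [["a1", "a2", "a3"], [None, None, "b3"], ["c1", None, "c3"]]
slots = stat_tdm_multiplex(streams)
print(slots)
# [(0, 'a1'), (2, 'c1'), (0, 'a2'), (0, 'a3'), (1, 'b3'), (2, 'c3')]
```

Here 6 slots are sent instead of the 9 a synchronous scheme would use, at the cost of the per-slot address overhead.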
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM allows multiple signals to share a single transmission channel, making efficient use of the available bandwidth.
Introduction: e-waste definition; sources of e-waste; hazardous substances in e-waste; effects of e-waste on the environment and human health; need for e-waste management; e-waste handling rules; waste minimization techniques for managing e-waste; recycling of e-waste; disposal and treatment methods of e-waste; mechanism of extraction of precious metals from leaching solution; global scenario of e-waste; e-waste in India; case studies.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... (University of Maribor)
Slides from the talk:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
A review on techniques and modelling methodologies used for checking electrom... (nooriasukmaningtyas)
The proper functioning of the integrated circuit (IC) in a hostile electromagnetic environment has been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today's integrated-circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confronts design issues such as susceptibility to electromagnetic interference (EMI). Electronic control devices compute incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in automobiles. In this paper, the authors non-exhaustively review research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
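The IoU figures reported in segmentation studies like this one average a per-class intersection-over-union between predicted and ground-truth masks. A minimal sketch of the per-class computation follows, using toy 4x4 binary masks that are purely illustrative, not data from the study.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for one class of a binary segmentation mask."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Toy masks standing in for a tumor-class prediction and its ground truth.
pred   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(iou(pred, target), 3))  # 0.8  (4 overlapping pixels / 5 in union)
```

A mean IoU averages this score over all classes (tumor and background), while a weighted IoU weights each class by its pixel frequency, which is why the two reported values differ so sharply when one class dominates the image.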