This document provides a review of spatial-temporal databases and their models. It discusses the key components and characteristics of spatial databases, temporal databases, and spatial-temporal databases. Some of the main models of spatial-temporal data modeling that are described include the snapshot model, space-time composite data model, simple time-stamping models, event-oriented models, three-domain model, and history graph model. The review examines how these different models approach representing and querying spatial and temporal data.
Neural Models for Information Retrieval, by Bhaskar Mitra
In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing (NLP) tasks, such as language modelling and machine translation. This suggests that neural models may also yield significant performance improvements on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using semantic rather than lexical matching. IR tasks, however, are fundamentally different from NLP tasks, leading to new challenges and opportunities for existing neural representation learning approaches for text.
In this talk, I will present my recent work on neural IR models. We begin with a discussion on learning good representations of text for retrieval. I will present visual intuitions about how different embedding spaces capture different relationships between items, and their usefulness to different types of IR tasks. The second part of this talk is focused on the applications of deep neural architectures to the document ranking task.
A Geographic Information System (GIS) is a computer system for capturing, storing, analyzing, and managing data and associated attributes which are spatially referenced to Earth. GIS integrates common database operations with tools for visualizing and analyzing geographic data. Key components of a GIS include hardware, software, data, people, and methods. GIS draws upon techniques from fields such as cartography, remote sensing, photogrammetry, surveying, and statistics. Spatial data in GIS can be represented using vector or raster data models. Vector models represent geographic features as points, lines, and polygons, while raster models divide space into a grid of cells. GIS performs functions such as inputting data, map making, data manipulation, file management, and querying.
Clustering: Large Databases in Data Mining, by ZHAO Sam
The document discusses different approaches for clustering large databases, including divide-and-conquer, incremental, and parallel clustering. It describes three major scalable clustering algorithms: BIRCH, which incrementally clusters incoming records and organizes clusters in a tree structure; CURE, which uses a divide-and-conquer approach to partition data and cluster subsets independently; and DBSCAN, a density-based algorithm that groups together densely populated areas of points.
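The density-based idea behind DBSCAN can be sketched compactly: points with at least `min_pts` neighbours within radius `eps` are core points, and clusters grow outward from them. This is a minimal pure-Python illustration (the point data and parameter names are made up for the example; a production implementation would use spatial indexing to avoid the quadratic neighbour search):

```python
def dbscan(points, eps, min_pts):
    """Label each 2-D point with a cluster id, or -1 for noise."""
    labels = {}
    cluster = 0

    def neighbors(i):
        # All points within eps of point i (including i itself).
        return [j for j in range(len(points))
                if ((points[i][0] - points[j][0]) ** 2 +
                    (points[i][1] - points[j][1]) ** 2) ** 0.5 <= eps]

    for i in range(len(points)):
        if i in labels:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cluster         # i is a core point: start a new cluster
        seeds = list(nbrs)
        while seeds:                # expand the cluster from core points
            j = seeds.pop()
            if labels.get(j) == -1:
                labels[j] = cluster  # former noise becomes a border point
            if j in labels:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # j is also core: keep expanding
        cluster += 1
    return [labels[i] for i in range(len(points))]
```

With two dense groups and one distant outlier, the two groups receive distinct cluster ids and the outlier is labelled noise.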
GeoServer, an Introduction for Beginners, by GeoSolutions
This presentation will provide an introduction to the GeoServer project and its ability to publish data with a mix of well-known OGC protocols and other popular protocols and data formats, including:
* Setting up vector and raster data from the GeoServer administration console
* Publishing data via WMS, WFS and WCS
* Styling layers using desktop tools, with a carousel of GeoServer mapping abilities
* Tile caching with WMTS
* Moving to data processing with WPS
* Brief introduction to security
This document discusses spatial databases and spatial data mining. It introduces spatial databases as databases that store large amounts of space-related data with special data types for spatial information. Spatial data mining extracts patterns and relationships from spatial data. The document also discusses spatial data warehousing with dimensions and measures for spatial and non-spatial data, mining spatial association patterns from spatial databases, techniques for spatial clustering, classification, and trend analysis.
The document summarizes a seminar presentation on information retrieval (IR) given by Hadi Mohammadzadeh. It defines IR and discusses basic assumptions of IR systems. It also describes common search methods for finding documents, including the grep method, term-document incidence matrices, inverted indexes with and without skip pointers, and positional indexes. The construction of inverted indexes is also outlined.
Broad introduction to information retrieval and web search, used for teaching at the Yahoo Bangalore Summer School 2013. Slides are a mash-up from my own and other people's presentations.
The document discusses various information retrieval models, including:
1) Classic models like Boolean and vector space models that use index terms to represent documents and queries.
2) Probabilistic models that view IR as estimating the probability of relevance between documents and queries.
3) Structured models that incorporate document structure, including models based on non-overlapping text regions and hierarchical document structure.
4) Browsing models like flat, structure-guided, and hypertext models for navigating document collections.
Natural language processing (NLP) is introduced, including its definition, common steps like morphological analysis and syntactic analysis, and applications like information extraction and machine translation. Statistical NLP aims to perform statistical inference for NLP tasks. Real-world applications of NLP are discussed, such as automatic summarization, information retrieval, question answering and speech recognition. A demo of a free NLP application is presented at the end.
Using PostGIS To Add Some Spatial Flavor To Your Application, by Steven Pousty
- PostGIS adds spatial capabilities to PostgreSQL: geometry types like points, lines, and polygons, and functions like area and distance. It allows spatial queries and analysis.
- To install PostGIS, you need PostgreSQL and libraries like Proj and GEOS. Packages are available for many platforms.
- With PostGIS, you can import spatial data like shapefiles, perform queries using spatial filters and functions, simplify geometries, and more to build mapping and location-based applications.
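PostGIS exposes its geometry functions (such as `ST_Area` and `ST_Distance`) in SQL; the math behind the simplest planar cases can be sketched in plain Python. This is illustrative only: real PostGIS handles SRIDs, projections, and curved geometries that this sketch ignores.

```python
import math

def polygon_area(ring):
    """Shoelace formula for a simple planar polygon (list of (x, y) vertices)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def point_distance(p, q):
    """Planar Euclidean distance, the analogue of ST_Distance on projected points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

A unit square has area 1.0, and the distance from (0, 0) to (3, 4) is 5.0, matching what the corresponding PostGIS calls would return on projected coordinates.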
The document provides information on geographic information systems (GIS) databases. It defines key database concepts like entities, attributes, and relationships. It explains relational databases and how GIS data is structured using entities, attributes, and relationships. It also summarizes common data models for storing GIS data like coverages, shapefiles, and geodatabases. The document focuses on relational database structures and how they can represent spatial data.
This document discusses metadata, which is structured data that describes and helps manage information resources. There are different types of metadata including descriptive, structural, and administrative. Metadata serves important functions like allowing resources to be discovered and organized. Several metadata standards are discussed, including Dublin Core, METS, MODS, EAD, and LOM. The document also covers metadata creation, quality issues, and ways metadata can be improved.
A study on the factors considered when choosing an appropriate data mining a..., by JYOTIR MOY
The document discusses factors to consider when selecting a data mining algorithm. It outlines the data mining process and notes that choosing the appropriate algorithm is important for obtaining useful results. The document then proposes several key factors for analysts to consider, including the goal of the problem, data structure, expected results, how the information will be used, familiarity with algorithms, and configuration parameters. It emphasizes that carefully considering these factors can help analysts select an algorithm that accurately extracts useful knowledge to inform business decisions.
The document discusses information retrieval models. It describes the Boolean retrieval model, which represents documents and queries as sets of terms combined with Boolean operators. Documents are retrieved if they satisfy the Boolean query, but there is no ranking of results. The Boolean model has limitations including difficulty expressing complex queries, controlling result size, and ranking results. It works best for simple, precise queries when users know exactly what they are searching for.
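The Boolean model maps directly onto set operations: documents become term sets, and AND/OR/NOT become set intersection, union, and difference. A minimal sketch (the document texts are made up for the example):

```python
docs = {
    1: "spatial database stores spatial data",
    2: "temporal database stores time data",
    3: "information retrieval ranks documents",
}
# Set-of-terms representation: term order and frequency are discarded.
index = {doc_id: set(text.split()) for doc_id, text in docs.items()}

def boolean_and(*terms):
    """Documents containing every query term."""
    return {d for d, doc_terms in index.items()
            if all(t in doc_terms for t in terms)}

def boolean_or(*terms):
    """Documents containing at least one query term."""
    return {d for d, doc_terms in index.items()
            if any(t in doc_terms for t in terms)}
```

Note that the results are unordered sets, which illustrates the model's main limitation: a document either matches or it does not, with no ranking.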
The document summarizes some of the key potential problems with distributed database management systems (DDBMS), including:
1) Distributed database design issues around how to partition and replicate the database across sites.
2) Distributed directory management challenges in maintaining consistency across global or local directories.
3) Distributed query processing difficulties in determining optimal strategies for executing queries across network locations.
4) Distributed concurrency control complications in synchronizing access to multiple copies of the database across sites while maintaining consistency.
Boolean, Vector Space Retrieval Models, by Primya Tamil
The document discusses various information retrieval models including Boolean, vector space, and probabilistic models. It provides details on how documents and queries are represented and compared in the vector space model. Specifically, it explains that in this model, documents and queries are represented as vectors of term weights in a multi-dimensional space. The similarity between a document and query vector is calculated using measures like the inner product or cosine similarity to retrieve and rank documents.
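The vector-space comparison described above can be sketched in a few lines: documents and the query become term-weight vectors, here using raw term frequency for brevity (real systems would use tf-idf weights), compared by cosine similarity.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    """Return docs sorted by descending cosine similarity to the query."""
    q = Counter(query.split())
    scored = [(cosine(q, Counter(d.split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)]
```

Ranking three toy documents against the query "spatial data" puts the document sharing both terms first and the document sharing none last.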
This document provides an overview of information retrieval systems, including their definition, objectives, and key functional processes. An information retrieval system aims to minimize the time and effort users spend locating needed information by supporting search generation, presenting relevant results, and allowing iterative refinement of searches. The major functional processes involve normalizing input items, selectively disseminating new items to users, searching archived documents and user-created indexes. Information retrieval systems differ from database management systems in their handling of unstructured text-based information rather than strictly structured data.
Information retrieval (IR) is the process of searching for and retrieving relevant documents from a large collection based on a user's query. Key aspects of IR include:
- Representing documents and queries in a way that allows measuring their similarity, such as the vector space model.
- Ranking retrieved documents by relevance to the query using factors like term frequency and inverse document frequency.
- Allowing for similarity-based retrieval where documents similar to a given document are retrieved.
This document provides an introduction and overview of document clustering techniques in information retrieval. It discusses motivations for clustering documents, such as improving search recall and organizing search results. It covers common clustering algorithms like K-means and hierarchical clustering, how they work, and considerations like choosing the number of clusters. The document uses examples and diagrams to illustrate clustering concepts and algorithms.
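The K-means algorithm mentioned above alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A compact sketch in pure Python; initial centroids are passed in explicitly so the run is deterministic, whereas real implementations seed randomly (e.g. k-means++) and restart several times:

```python
def kmeans(points, centroids, iters=10):
    """Lloyd's algorithm on tuples of coordinates; returns (centroids, clusters)."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each centroid moves to its cluster's mean
        # (an empty cluster keeps its old centroid).
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cluster)) if cluster else c
            for cluster, c in zip(clusters, centroids)
        ]
    return centroids, clusters
```

On two well-separated groups of points, the centroids converge to the group means within a couple of iterations.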
This document provides an overview and introduction to analyzing spatial data using Python. It discusses what spatial data is, popular Python libraries for working with spatial data like Fiona, Shapely, GeoPy, and Mapnik, and how to perform spatial analysis tasks in Python such as geocoding, data conversion and visualization. Jupyter notebooks are presented as an interactive environment for exploring spatial data and libraries like Geopandas and PySAL are introduced for performing spatial analysis. Examples analyze Colombian location and point of interest data.
This document discusses information retrieval techniques. It begins by defining information retrieval as selecting the most relevant documents from a large collection based on a query. It then discusses some key aspects of information retrieval including document representation, indexing, query representation, and ranking models. The document also covers specific techniques used in information retrieval systems like parsing documents, tokenization, removing stop words, normalization, stemming, and lemmatization.
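The preprocessing steps listed above (tokenization, stop-word removal, stemming) chain together naturally. A toy pipeline; the stop-word list is illustrative, and the suffix-stripping stemmer is deliberately crude compared with a proper stemmer such as Porter's:

```python
import re

STOP_WORDS = {"the", "a", "of", "and", "is", "are", "in", "to"}

def preprocess(text):
    # Tokenize and case-normalize in one pass.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    # Remove stop words.
    tokens = [t for t in tokens if t not in STOP_WORDS]

    def stem(t):
        # Crude suffix stripping, not a real stemmer.
        for suffix in ("ing", "ies", "ed", "es", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                return t[: -len(suffix)]
        return t

    return [stem(t) for t in tokens]
```

For example, "The documents are indexed" reduces to the index terms `["document", "index"]`.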
The basic intention of this presentation is to help beginners in GIS understand what GIS is. It is a simple, introductory presentation about GIS. Hope anyone finds it useful.
Mobile GIS allows geographic information systems tools and data to be accessed on mobile devices through wireless networks. It has applications in fields like public safety, utilities management, and land surveying by enabling workers to view maps and collect geospatial data in the field. The key components of a mobile GIS include positioning systems, mobile GIS software, data synchronization capabilities, and geospatial data servers. A case study demonstrates how a university integrated a mobile GIS platform using ArcPad software on PocketPC devices to help campus security and emergency response teams respond quickly to incidents.
The document discusses signature files, which are used for document retrieval. A signature file creates a compressed representation or "signature" for each document in a database. These signatures are stored in hash tables to allow easy retrieval of matching documents for user queries. Signatures can represent words using triplets of characters and a hash function, or entire documents through concatenation of word signatures or superimposed coding. Signature files provide a quick link between queries and documents but have lower accuracy than inverted files, which are generally better for information retrieval applications.
An inverted file indexes a text collection to speed up searching. It contains a vocabulary of distinct words and occurrences lists with information on where each word appears. For each term in the vocabulary, it stores a list of pointers to occurrences called an inverted list. Coarser granularity indexes use less storage but require more processing, while word-level indexes enable proximity searches but use more space. The document describes how inverted files are structured and constructed from text and discusses techniques like block addressing that reduce their space requirements.
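A word-level inverted file, as described above, can be built in a few lines: for each term, a posting list of (doc_id, position) pairs, where keeping the positions is what enables proximity search.

```python
def build_inverted_index(docs):
    """Map each term to a posting list of (doc_id, position) pairs."""
    index = {}
    for doc_id, text in docs.items():
        for pos, term in enumerate(text.lower().split()):
            index.setdefault(term, []).append((doc_id, pos))
    return index

def find(index, term):
    """Return the posting list for a term, empty if the term is unknown."""
    return index.get(term.lower(), [])
```

For a two-document collection, looking up a shared term returns one posting per occurrence, with both the document and the word offset inside it.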
INTRODUCTION TO INFORMATION RETRIEVAL
This lecture will introduce the information retrieval problem, introduce the terminology related to IR, and provide a history of IR. In particular, the history of the web and its impact on IR will be discussed. Special attention and emphasis will be given to the concept of relevance in IR and the critical role it has played in the development of the subject. The lecture will end with a conceptual explanation of the IR process, and its relationships with other domains as well as current research developments.
INFORMATION RETRIEVAL MODELS
This lecture will present the models that have been used to rank documents according to their estimated relevance to user-given queries, where the most relevant documents are shown ahead of those less relevant. These models form the basis for the ranking algorithms used in many past and present search applications. The lecture will describe models of IR such as Boolean retrieval, vector space, probabilistic retrieval, language models, and logical models. Relevance feedback, a technique that either implicitly or explicitly modifies user queries in light of their interaction with retrieval results, will also be discussed, as it is particularly relevant to web search and personalization.
This document provides an overview of text mining and web mining. It defines data mining and describes the common data mining tasks of classification, clustering, association rule mining and sequential pattern mining. It then discusses text mining, defining it as the process of analyzing unstructured text data to extract meaningful information and structure. The document outlines the seven practice areas of text mining as search/information retrieval, document clustering, document classification, web mining, information extraction, natural language processing, and concept extraction. It provides brief descriptions of the problems addressed within each practice area.
This document provides a review of using refrigerant blends in existing refrigerator and air conditioning systems. It discusses the history of refrigerants used from early toxic and hazardous natural refrigerants to safer chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs). However, CFCs and HCFCs were found to deplete the ozone layer. Alternative refrigerants considered include hydrofluorocarbons (HFCs), hydrocarbons (HCs), and refrigerant blends. Refrigerant blends are mixtures of two or more refrigerants that can provide desired characteristics while being safer for the environment than existing refrigerants. The document examines properties and examples of a
Study of Boron Based Superconductivity and Effect of High Temperature Cuprate..., by IOSR Journals
This paper illustrates the main normal-state and superconducting-state properties of magnesium diboride, a substance known since the early 1950s but only recently found to be superconducting at a remarkably high critical temperature Tc = 40 K for a binary compound. What makes MgB2 so special? Its high Tc, simple crystal structure, large coherence lengths, high critical current densities and fields, and transparency of grain boundaries to current all promise that MgB2 will be a good material for both large-scale applications and electronic devices. Over the last seven months, MgB2 has been fabricated in various forms: bulk, single crystals, thin films, ribbons, and wires. The largest critical current densities (>10 MA/cm2) and critical fields (40 T) are achieved for thin films. The anisotropy ratio inferred from upper critical field measurements is still to be resolved, a wide range of values being reported, γ = 1.2 ÷ 9. Also, there is no consensus about the existence of a single anisotropic or a double energy gap. One central issue is whether or not MgB2 represents a new class of superconductors, the tip of an iceberg waiting to be discovered. Until now, MgB2 holds the record for the highest Tc among simple binary compounds. However, the discovery of superconductivity in MgB2 revived interest in non-oxides and initiated a search for superconductivity in related materials, several compounds having already been reported to become superconducting: TaB2, BeB2.75, C-S composites, and elemental B under pressure.
Natural language processing (NLP) is introduced, including its definition, common steps like morphological analysis and syntactic analysis, and applications like information extraction and machine translation. Statistical NLP aims to perform statistical inference for NLP tasks. Real-world applications of NLP are discussed, such as automatic summarization, information retrieval, question answering and speech recognition. A demo of a free NLP application is presented at the end.
Using PostGIS To Add Some Spatial Flavor To Your ApplicationSteven Pousty
- PostGIS adds spatial capabilities like points, lines, polygons, and functions like area, distance to PostgreSQL. It allows spatial queries and analysis.
- To install PostGIS, you need PostgreSQL and libraries like Proj and GEOS. Packages are available for many platforms.
- With PostGIS, you can import spatial data like shapefiles, perform queries using spatial filters and functions, simplify geometries, and more to build mapping and location-based applications.
The document provides information on geographic information systems (GIS) databases. It defines key database concepts like entities, attributes, and relationships. It explains relational databases and how GIS data is structured using entities, attributes, and relationships. It also summarizes common data models for storing GIS data like coverages, shapefiles, and geodatabases. The document focuses on relational database structures and how they can represent spatial data.
This document discusses metadata, which is structured data that describes and helps manage information resources. There are different types of metadata including descriptive, structural, and administrative. Metadata serves important functions like allowing resources to be discovered and organized. Several metadata standards are discussed, including Dublin Core, METS, MODS, EAD, and LOM. The document also covers metadata creation, quality issues, and ways metadata can be improved.
A study on the factors considered when choosing an appropriate data mining a...JYOTIR MOY
The document discusses factors to consider when selecting a data mining algorithm. It outlines the data mining process and notes that choosing the appropriate algorithm is important for obtaining useful results. The document then proposes several key factors for analysts to consider, including the goal of the problem, data structure, expected results, how the information will be used, familiarity with algorithms, and configuration parameters. It emphasizes that carefully considering these factors can help analysts select an algorithm that accurately extracts useful knowledge to inform business decisions.
The document discusses information retrieval models. It describes the Boolean retrieval model, which represents documents and queries as sets of terms combined with Boolean operators. Documents are retrieved if they satisfy the Boolean query, but there is no ranking of results. The Boolean model has limitations including difficulty expressing complex queries, controlling result size, and ranking results. It works best for simple, precise queries when users know exactly what they are searching for.
The document summarizes some of the key potential problems with distributed database management systems (DDBMS), including:
1) Distributed database design issues around how to partition and replicate the database across sites.
2) Distributed directory management challenges in maintaining consistency across global or local directories.
3) Distributed query processing difficulties in determining optimal strategies for executing queries across network locations.
4) Distributed concurrency control complications in synchronizing access to multiple copies of the database across sites while maintaining consistency.
Boolean,vector space retrieval Models Primya Tamil
The document discusses various information retrieval models including Boolean, vector space, and probabilistic models. It provides details on how documents and queries are represented and compared in the vector space model. Specifically, it explains that in this model, documents and queries are represented as vectors of term weights in a multi-dimensional space. The similarity between a document and query vector is calculated using measures like the inner product or cosine similarity to retrieve and rank documents.
This document provides an overview of information retrieval systems, including their definition, objectives, and key functional processes. An information retrieval system aims to minimize the time and effort users spend locating needed information by supporting search generation, presenting relevant results, and allowing iterative refinement of searches. The major functional processes involve normalizing input items, selectively disseminating new items to users, searching archived documents and user-created indexes. Information retrieval systems differ from database management systems in their handling of unstructured text-based information rather than strictly structured data.
Information retrieval (IR) is the process of searching for and retrieving relevant documents from a large collection based on a user's query. Key aspects of IR include:
- Representing documents and queries in a way that allows measuring their similarity, such as the vector space model.
- Ranking retrieved documents by relevance to the query using factors like term frequency and inverse document frequency.
- Allowing for similarity-based retrieval where documents similar to a given document are retrieved.
This document provides an introduction and overview of document clustering techniques in information retrieval. It discusses motivations for clustering documents, such as improving search recall and organizing search results. It covers common clustering algorithms like K-means and hierarchical clustering, how they work, and considerations like choosing the number of clusters. The document uses examples and diagrams to illustrate clustering concepts and algorithms.
This document provides an overview and introduction to analyzing spatial data using Python. It discusses what spatial data is, popular Python libraries for working with spatial data like Fiona, Shapely, GeoPy, and Mapnik, and how to perform spatial analysis tasks in Python such as geocoding, data conversion and visualization. Jupyter notebooks are presented as an interactive environment for exploring spatial data and libraries like Geopandas and PySAL are introduced for performing spatial analysis. Examples analyze Colombian location and point of interest data.
This document discusses information retrieval techniques. It begins by defining information retrieval as selecting the most relevant documents from a large collection based on a query. It then discusses some key aspects of information retrieval including document representation, indexing, query representation, and ranking models. The document also covers specific techniques used in information retrieval systems like parsing documents, tokenization, removing stop words, normalization, stemming, and lemmatization.
The basic intention of this presentation is to help the beginners in GIS to understand what GIS is? It is a simple presentation about GIS, i mean an introductory one. Hope anyone finds it useful.
Mobile GIS allows geographic information systems tools and data to be accessed on mobile devices through wireless networks. It has applications in fields like public safety, utilities management, and land surveying by enabling workers to view maps and collect geospatial data in the field. The key components of a mobile GIS include positioning systems, mobile GIS software, data synchronization capabilities, and geospatial data servers. A case study demonstrates how a university integrated a mobile GIS platform using ArcPad software on PocketPC devices to help campus security and emergency response teams respond quickly to incidents.
The document discusses signature files, which are used for document retrieval. A signature file creates a compressed representation or "signature" for each document in a database. These signatures are stored in hash tables to allow easy retrieval of matching documents for user queries. Signatures can represent words using triplets of characters and a hash function, or entire documents through concatenation of word signatures or superimposed coding. Signature files provide a quick link between queries and documents but have lower accuracy than inverted files, which are generally better for information retrieval applications.
An inverted file indexes a text collection to speed up searching. It contains a vocabulary of distinct words and occurrences lists with information on where each word appears. For each term in the vocabulary, it stores a list of pointers to occurrences called an inverted list. Coarser granularity indexes use less storage but require more processing, while word-level indexes enable proximity searches but use more space. The document describes how inverted files are structured and constructed from text and discusses techniques like block addressing that reduce their space requirements.
INTRODUCTION TO INFORMATION RETRIEVAL
This lecture will introduce the information retrieval problem, introduce the terminology related to IR, and provide a history of IR. In particular, the history of the web and its impact on IR will be discussed. Special attention and emphasis will be given to the concept of relevance in IR and the critical role it has played in the development of the subject. The lecture will end with a conceptual explanation of the IR process, and its relationships with other domains as well as current research developments.
INFORMATION RETRIEVAL MODELS
This lecture will present the models that have been used to rank documents according to their estimated relevance to user-given queries, where the most relevant documents are shown ahead of those less relevant. These models form the basis of the ranking algorithms used in many past and present search applications. The lecture will describe IR models such as Boolean retrieval, the vector space model, probabilistic retrieval, language models, and logical models. Relevance feedback, a technique that implicitly or explicitly modifies user queries in light of their interaction with retrieval results, will also be discussed, as it is particularly relevant to web search and personalization.
This document provides an overview of text mining and web mining. It defines data mining and describes the common data mining tasks of classification, clustering, association rule mining and sequential pattern mining. It then discusses text mining, defining it as the process of analyzing unstructured text data to extract meaningful information and structure. The document outlines the seven practice areas of text mining as search/information retrieval, document clustering, document classification, web mining, information extraction, natural language processing, and concept extraction. It provides brief descriptions of the problems addressed within each practice area.
This document provides a review of using refrigerant blends in existing refrigerator and air conditioning systems. It discusses the history of refrigerants, from early toxic and hazardous natural refrigerants to safer chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs). However, CFCs and HCFCs were found to deplete the ozone layer. Alternative refrigerants considered include hydrofluorocarbons (HFCs), hydrocarbons (HCs), and refrigerant blends. Refrigerant blends are mixtures of two or more refrigerants that can provide desired characteristics while being safer for the environment than existing refrigerants. The document examines the properties of such refrigerant blends and gives examples.
Study of Boron Based Superconductivity and Effect of High Temperature Cuprate...IOSR Journals
This paper reviews the main normal-state and superconducting-state properties of magnesium diboride, a compound known since the early 1950s but only recently found to be superconducting at a remarkably high critical temperature, Tc = 40 K, for a binary compound. What makes MgB2 so special? Its high Tc, simple crystal structure, large coherence lengths, high critical current densities and fields, and the transparency of grain boundaries to current promise that MgB2 will be a good material for both large-scale applications and electronic devices. Over the last seven months, MgB2 has been fabricated in various forms: bulk, single crystals, thin films, ribbons, and wires. The largest critical current densities (>10 MA/cm2) and critical fields (40 T) are achieved in thin films. The anisotropy inferred from upper-critical-field measurements is still to be resolved, with a wide range of values reported, γ = 1.2 ÷ 9. There is also no consensus about the existence of a single anisotropic gap or a double energy gap. One central issue is whether or not MgB2 represents a new class of superconductors, the tip of an iceberg waiting to be discovered. Until now MgB2 holds the record for the highest Tc among simple binary compounds. The discovery of superconductivity in MgB2 has, however, revived interest in non-oxides and initiated a search for superconductivity in related materials, several of which have already been reported to become superconducting: TaB2, BeB2.75, C-S composites, and elemental B under pressure.
Improving Irrigation Water Management in Delta of EgyptIOSR Journals
This document discusses improving irrigation water management in the Delta region of Egypt. It does this in two stages: 1) irrigation scheduling through calculating evapotranspiration rates and studying the effects of irrigation depth on crop yield and revenue, and 2) examining the effects of mixing fresh and saline irrigation waters on crop productivity when applied through different irrigation systems. The key crops grown in the Delta region are discussed, along with water sources, costs, soil types and other relevant factors. Two models are presented for irrigation scheduling: one relating crop yield to evapotranspiration, and another identifying optimal crop rotations. The effects of saline water on crop yields when mixed with fresh water are also studied.
This document presents a hybrid approach for color image segmentation that integrates color edge information and seeded region growing. It uses color edge detection in CIE L*a*b color space to select initial seed regions and guide region growth. Seeded region growing is performed based on color similarity between pixels. The edge map and region map are fused to produce homogeneous regions with closed boundaries. Small regions are then merged. The approach is tested on images from the Berkeley segmentation dataset and produces reasonably good segmentation results by combining color and edge information.
This document summarizes a research paper that models the performance of different types of Dynamic Voltage Restorers (DVRs) in mitigating balanced and unbalanced voltage sags on distribution systems. The paper presents modeling aspects of several DVR configurations and analyzes their effectiveness in compensating for various voltage sag scenarios through detailed simulation results. It also discusses the capability of DVRs to regulate voltage quality at load terminals during power quality issues like sags, swells and harmonics.
This document summarizes a research paper on using Markov Decision Processes (MDPs) to improve inpatient hospital care. It discusses challenges in the current healthcare system and how machine learning and artificial intelligence could help address issues like overtreatment, inconsistent care quality, and high costs. The paper proposes using MDPs and other algorithms to analyze patient electronic health record data, detect abnormal care patterns, and make real-time predictions to optimize treatment and resource allocation. A web application with modules for patients, doctors and administrators is designed to facilitate this approach. Simulation results suggest it could increase care efficiency by better connecting patients and doctors. Future work may expand this to personalized treatment planning, diagnostic testing optimization and knowledge discovery from medical literature.
The document proposes a new method for efficient high resolution image reconstruction based on compressed sensing and the Modified Frame Reconstruction Iterative Thresholding Algorithm (MFR ITA). The method involves three phases: 1) input images are processed using multilook processing and discrete wavelet transform to minimize noise, 2) measurements are obtained from sparse coefficients using a proposed fusion method, and 3) a fast compressed sensing method based on MFR ITA is used to reconstruct the high resolution image using total variation. Simulation results show the proposed method achieves better PSNR and SSIM values compared to other traditional methods, and validates its effectiveness in reconstructing images in the presence of noise.
This document compares different window functions used for finite impulse response (FIR) filter design, including Hann, Hamming, Blackman, and Bartlett windows. It analyzes the performance of lowpass, highpass, bandpass, and bandstop filters designed with each window function. Hann window provides the narrowest main lobe width but Hamming window results in more side lobes. Blackman window achieves the highest side lobe attenuation of -70dB but with a wider main lobe. Bartlett window has the widest main lobe and highest side lobes. In conclusion, the appropriate window function depends on the specific filter design goals and tradeoffs between main lobe width and side lobe suppression.
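The window functions compared above have simple closed forms, and the standard way to use them in FIR design is to shape a truncated ideal (sinc) impulse response. The following is an illustrative sketch; the tap count and cutoff in the lowpass designer are assumptions, not values from the document:

```python
import math

def hann(N):
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def hamming(N):
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def blackman(N):
    return [0.42 - 0.5 * math.cos(2 * math.pi * n / (N - 1))
            + 0.08 * math.cos(4 * math.pi * n / (N - 1)) for n in range(N)]

def bartlett(N):
    return [1 - abs(2 * n / (N - 1) - 1) for n in range(N)]

def fir_lowpass(N, fc, window=hamming):
    """Windowed-sinc lowpass FIR design: truncate the ideal impulse
    response and taper it with the chosen window. fc is the
    normalized cutoff frequency (0 < fc < 0.5)."""
    mid = (N - 1) / 2
    w = window(N)
    taps = []
    for n in range(N):
        x = n - mid
        ideal = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        taps.append(ideal * w[n])
    return taps

lp = fir_lowpass(51, 0.2)   # 51-tap Hamming-windowed lowpass
```

Swapping the `window` argument trades main-lobe width against side-lobe suppression exactly as the comparison above describes: Blackman tapers hardest (lowest side lobes, widest transition), Bartlett the least.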
The document summarizes the key steps in an optical character recognition (OCR) system for recognizing printed text:
1. Image acquisition involves obtaining the image, which can be done using scanners or digital cameras.
2. Pre-processing prepares the image for recognition through techniques like converting to grayscale, skew correction, binarization, noise reduction, and thinning.
3. Segmentation separates the image into lines and individual characters.
4. Recognition identifies the characters by comparing features or templates to stored models.
The paper then discusses specific algorithms that could implement grayscale conversion, skew correction, and other steps in the OCR system.
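The grayscale-conversion and binarization steps listed above can be sketched minimally as follows. This is an illustration, not the paper's algorithms: the BT.601 luminance weights and the fixed global threshold are assumptions.

```python
def to_grayscale(rgb_pixels):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_pixels]

def binarize(gray, threshold=128):
    """Global thresholding: 1 marks ink (dark pixels), 0 background."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

# A 2x2 test image: light pixels become background, dark pixels ink.
image = [[(255, 255, 255), (10, 10, 10)],
         [(200, 200, 200), (0, 0, 0)]]
binary = binarize(to_grayscale(image))   # -> [[0, 1], [0, 1]]
```

Real OCR pre-processing would typically use an adaptive threshold (e.g. Otsu's method) rather than a fixed one, since scan illumination is rarely uniform.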
This document summarizes a research paper that proposes two methods for diagnosing stator short circuit faults in brushless DC motors: Adaptive Neuro-fuzzy Inference System (ANFIS) and wavelet-based fault diagnosis. ANFIS is a neural network combined with fuzzy logic that can model nonlinear functions. It is applied to diagnose faults based on motor parameters. Wavelet transforms are also used to analyze motor current signals and detect characteristic fault frequencies indicating shorts. The paper presents the modeling of a BLDC motor and discusses common motor faults before detailing the proposed ANFIS and wavelet approaches.
This document describes an experimental study on optimizing cutting parameters for milling aluminum alloy 6351. Experiments were conducted using an orthogonal array design with four cutting parameters (speed, feed, depth of cut, tool diameter) each at three levels. The objectives were to minimize surface roughness and maximize material removal rate. Gray relational analysis was used to determine the optimal cutting conditions for the combined objectives. The results showed that feed had the greatest influence on surface roughness, while speed had the least effect on material removal rate. This analysis method provides a means to optimize multiple performance characteristics simultaneously in milling.
This document presents a study that aims to develop correlations between uniaxial compressive strength (UCS) and point load index (I50) for single and double jointed rocks. Over 180 plaster samples were prepared with different joint conditions like orientation, roughness, and number of joints. Samples were tested for UCS and I50. Statistical analysis identified two groups of jointed rocks that showed different trends between UCS and I50. Multiple linear regression was used to develop new correlation equations for each group to predict UCS from I50 for jointed rocks. The proposed equations were compared to previous studies and may be applied to actual rocks like weathered limestone.
The document discusses the effects of adding HHO gas produced through water electrolysis on the performance of a single cylinder, four stroke spark ignition engine. Three key findings are presented:
1) The addition of 2.57-2.74% HHO gas to the intake air decreased fuel consumption by 1.95-3.58% compared to petrol alone, with greater decreases at higher compression ratios and higher percentages of HHO gas.
2) Brake thermal efficiency increased by 0.34-0.74% with the addition of HHO gas at compression ratios of 7-9, indicating improved engine performance.
3) Mechanical efficiency increased with both higher compression ratios and higher percentages of added HHO gas.
This document summarizes research comparing the design of isolated footings in cohesive soils versus non-cohesive soils. Standard penetration tests were conducted in both soil types to determine their bearing capacities. Footings were then designed for a sample load of 500 kN applied to a column. For cohesive soil the designed footing was 2.5m x 2.5m x 0.205m, while for non-cohesive soil it was 1.9m x 1.9m x 0.201m. The volume and cost of the footing are 56.25% higher for cohesive soil; therefore, for the same applied load, isolated footings are more economical in non-cohesive soils.
This document provides a critical review of using recycled coarse aggregate in self-compacting concrete. It summarizes several previous studies that investigated replacing natural coarse aggregate with recycled coarse aggregate from 0-100% in self-compacting concrete mixes. The key findings from the reviewed studies are that compressive strength and other mechanical properties generally decrease as the replacement ratio of recycled coarse aggregate increases, but self-compacting concrete can still meet design requirements with up to 30% replacement. Permeability and durability properties are also mostly unaffected by use of recycled coarse aggregate. Using recycled aggregate in concrete production helps reduce construction waste and demand for natural resources.
This document summarizes a study that analyzed the chemical composition of wastewater generated from olive oil production and evaluated its potential use as fertilizer on agricultural land. The study found that the wastewater was acidic but rich in organic matter and nutrients like potassium. Soil analysis before and after application of the wastewater showed increases in organic matter and nutrients. Over 280 days, biochemical and chemical oxygen demand of the wastewater decreased by around 50%, indicating its characteristics were modifying. The high potassium and organic content suggests the wastewater could improve soil quality and be a lower-cost fertilizer, though long term effects require more research.
Thorny Issues of Stakeholder Identification and Prioritization in Requirement...IOSR Journals
Abstract: Identifying stakeholders is one of the critical issues in the requirement engineering process, and it plays a remarkable part in successful project completion. A software project depends largely on several stakeholders, yet stakeholder identification and prioritization remain a challenging part of the software development life cycle. Most of the time, stakeholders are treated with little importance during software deployment, and development teams make too little effort to identify the right project stakeholders. In most cases the stakeholder identification technique is performed incorrectly, and little attempt is made to rank stakeholders by priority. Besides, the existing processes for identifying stakeholders and setting their priority have many limitations. These limitations have a negative impact on the development of software projects and deserve close attention. We aim to focus on this problem so that we can characterize the actual issues and survey current work on identifying stakeholders and setting their priority.
Keywords: Stakeholders, Stakeholder Identification, Stakeholder Selection, Stakeholder
Prioritization, Stakeholder Value, Software Development
The document discusses factors that cause road damage in Palangka Raya City, Indonesia and analyzes the relationship between these factors and their effect on road damage. It finds that water infiltration, traffic load, climate, construction materials, subgrade conditions, and compaction processes significantly impact road damage on peatland roads. A regression equation is presented relating road damage to these six independent factors.
1. The document proposes using magnetic lifts to safely evacuate people from high-rise buildings during fires.
2. It suggests installing ferro-magnetic lifts on the top floors that can drop and be slowed under controlled magnetic fields, reducing evacuation time by 75% and saving a similar number of additional lives.
3. The system would use a solenoid made of current-carrying wire wound around the bottom of the lift shaft to generate a magnetic field strong enough to gently stop a falling lift, keeping deceleration below safe limits.
This document describes an image denoising technique called the TWIST (Transform With Iterative Sampling and Thresholding) method. It begins with background on common types of image noise like Gaussian, salt-and-pepper, and quantization noise. It then discusses related work using eigendecomposition and the Nystrom extension for denoising. The proposed TWIST method uses the Nystrom extension to approximate the filter matrix with a low-rank matrix, allowing efficient processing of the entire image. It performs eigendecomposition on sample pixels to estimate eigenvalues and eigenvectors, then iterates this process with thresholding to denoise the image while preserving edges.
This document summarizes a study that compares the performance of time series databases using real-world datasets versus synthetic datasets. The study measures three key performance metrics - data loading throughput, storage space usage, and query latency - for different time series databases when ingesting and querying both real and synthetic time series data. The results show significant differences in performance between real and synthetic datasets for data injection throughput and query execution times. Specifically, databases perform differently when handling real-world versus synthetic datasets, indicating that benchmarks using only synthetic data may not accurately represent real-world database performance for time series applications.
The document provides an introduction to database management systems (DBMS). It can be summarized as follows:
1. A DBMS allows for the storage and retrieval of large amounts of related data in an organized manner. It removes data redundancy and allows for fast retrieval of data.
2. Key components of a DBMS include the database engine, data definition subsystem, data manipulation subsystem, application generation subsystem, and data administration subsystem.
3. A DBMS uses a data model to represent the organization of data in a database. Common data models include the entity-relationship model, object-oriented model, and relational model.
11.challenging issues of spatio temporal data miningAlexander Decker
This document discusses the challenging issues of spatio-temporal data mining. It begins with an introduction to spatio-temporal databases and how they differ from traditional databases by managing moving objects and their locations over time. It then provides an overview of spatial data mining and temporal data mining before focusing on spatio-temporal data mining, which aims to analyze large databases containing both spatial and temporal information. The document outlines some of the key challenges in applying traditional data mining techniques to spatio-temporal data due to its continuous and correlated nature.
SPATIO-TEMPORAL QUERIES FOR MOVING OBJECTS DATA WAREHOUSINGijdms
In the last decade, Moving Object Databases (MODs) have attracted a lot of attention from researchers.
Several research works were conducted to extend traditional database techniques to accommodate the new
requirements imposed by the continuous change in location information of moving objects. Managing,
querying, storing, and mining moving objects were the key research directions. This extensive interest in
moving objects is a natural consequence of the recent ubiquitous location-aware devices, such as PDAs,
mobile phones, etc., as well as the variety of information that can be extracted from such new databases. In
this paper we propose a Spatio-Temporal data warehousing (STDW) for efficiently querying location
information of moving objects. The proposed schema introduces new measures like direction majority and
other direction-based measures that enhance the decision making based on location information
BI-TEMPORAL IMPLEMENTATION IN RELATIONAL DATABASE MANAGEMENT SYSTEMS: MS SQ...lyn kurian
Traditional database management systems (DBMS) are the computational storage and reservoir of large amounts of information. The data accumulated by these database systems is the information valid at the present time: data that is true at the present moment. Past data is information that was kept in the database at an earlier time, data that held in the past and was valid at some point before now. Future data is information expected to be valid at a future time instance, data that will become true and be valid at some point after now. The commercial DBMSs used today by organizations and individuals, such as MS SQL Server, Oracle, DB2, Sybase, and Postgres, do not provide models to support and process (retrieve, modify, insert, and remove) past and future data.
The implementation of bi-temporal modelling in Microsoft SQL Server shows how a relational database management system can handle data with the bi-temporal property. In a bi-temporal database, saved data is never deleted; additional values are always appended. The paper therefore explores one way to build bi-temporal handling of data. It aims to lay out the core concepts of bi-temporal data storage and the querying techniques used in a bi-temporal relational DBMS, from data structures to normalized storage and on to the extraction or slicing of data.
The unlimited growth of data makes relational data complicated to manage and store. Developers working on commercial and industrial applications should therefore know how bi-temporal concepts apply to relational databases, especially given the increased flexibility they bring to bi-temporal storage and to analyzing data. The paper accordingly demonstrates how bi-temporal data structures and their operations are applied in a relational database management system.
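The append-only bi-temporal idea described above, with a valid-time and a transaction-time interval per row, can be sketched in memory. This is a hypothetical Python illustration, not the paper's SQL Server implementation; all names are invented for the sketch.

```python
import datetime as dt

INF = dt.date.max   # open-ended interval marker

class BitemporalTable:
    """Append-only bi-temporal store: each row carries a valid-time
    interval [valid_from, valid_to) and a transaction-time interval
    [tx_from, tx_to). Rows are never deleted; a correction closes the
    old row's transaction interval and appends a new row."""
    def __init__(self):
        self.rows = []

    def insert(self, key, value, valid_from, valid_to=INF, *, tx_time):
        self.rows.append({"key": key, "value": value,
                          "valid_from": valid_from, "valid_to": valid_to,
                          "tx_from": tx_time, "tx_to": INF})

    def correct(self, key, value, valid_from, valid_to=INF, *, tx_time):
        # Close the current transaction interval of superseded rows...
        for row in self.rows:
            if row["key"] == key and row["tx_to"] == INF:
                row["tx_to"] = tx_time
        # ...then append the corrected version.
        self.insert(key, value, valid_from, valid_to, tx_time=tx_time)

    def as_of(self, key, valid_at, tx_at):
        """What did we believe at tx_at about the value valid at valid_at?"""
        for row in self.rows:
            if (row["key"] == key
                    and row["tx_from"] <= tx_at < row["tx_to"]
                    and row["valid_from"] <= valid_at < row["valid_to"]):
                return row["value"]
        return None

t = BitemporalTable()
t.insert("salary", 100, dt.date(2020, 1, 1), tx_time=dt.date(2020, 1, 1))
t.correct("salary", 120, dt.date(2020, 1, 1), tx_time=dt.date(2020, 6, 1))
```

Querying the same valid date at two transaction times returns different answers: what we believed then versus what we believe now, which is the slicing the paper refers to.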
A database is generally used for storing related, structured data...angelfashions02
A database is generally used for storing related, structured data, with well defined data formats,
in an efficient manner for insert, update and/or retrieval (depending on application).
On the other hand, a file system is a more unstructured data store for storing arbitrary, probably
unrelated data. The file system is more general, and databases are built on top of the general data
storage services provided by file systems.
A Data Base Management System is a system software for easy, efficient and reliable data
processing and management. It can be used for:
Creation of a database.
Retrieval of information from the database.
Updating the database.
Managing a database.
It provides us with the many functionalities and is more advantageous than the traditional file
system in many ways listed below:
1) Processing Queries and Object Management:
In traditional file systems, we cannot store data in the form of objects. In real-world applications, data is stored as objects, not files, so in a file system some application software must map the data stored in files to objects before it can be used further.
We can directly store data in the form of objects in a database management system. Application
level code needs to be written to handle, store and scan through the data in a file system whereas
a DBMS gives us the ability to query the database.
2) Controlling redundancy and inconsistency:
Redundancy refers to repeated instances of the same data. A database system provides
redundancy control whereas in a file system, same data may be stored multiple times. For
example, if a student is studying two different educational programs in the same college, say, Engineering and History, then his information, such as phone number and address, may be stored multiple times: once in the Engineering department and again in the History department. This increases the time taken to access and store data and may also lead to inconsistent data states between the two places. A DBMS uses data normalization to avoid redundancy and duplicates.
3) Efficient memory management and indexing:
DBMS makes complex memory management easy to handle. In file systems, files are indexed in
place of objects so query operations require entire file scans whereas in a DBMS , object
indexing takes place efficiently through database schema based on any attribute of the data or a
data-property. This helps in fast retrieval of data based on the indexed attribute.
4) Concurrency control and transaction management:
Several applications allow user to simultaneously access data. This may lead to inconsistency in
data in case files are used. Consider two withdrawal transactions X and Y in which an amount of
100 and 200 is withdrawn from an account A initially containing 1000. Now since these
transactions are taking place simultaneously, different transactions may update the account
differently. X reads 1000, debits 100, updates the account A to 900, whereas X also reads 1000,
debits 200, updates A to 800. In bot.
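The interleaving in that example is the classic lost update, and it can be replayed directly in code. This is a toy illustration with the same numbers; the `Account` class and lock-based fix are invented for the sketch.

```python
import threading

class Account:
    """Toy account for illustrating the lost-update anomaly."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw_safe(self, amount):
        # The lock serializes the read-modify-write, playing the role
        # a DBMS's concurrency control plays for transactions.
        with self.lock:
            self.balance -= amount

# Replay the unsafe interleaving: X and Y both read the balance
# before either writes its update back.
a = Account(1000)
x_read = a.balance            # X reads 1000
y_read = a.balance            # Y reads 1000 (X has not written yet)
a.balance = x_read - 100      # X writes 900
a.balance = y_read - 200      # Y writes 800: X's debit of 100 is lost

# With serialized withdrawals the final balance is correct (700).
b = Account(1000)
b.withdraw_safe(100)
b.withdraw_safe(200)
```

The unsafe replay ends at 800 instead of 700 because Y's write is based on a stale read; serializing the two read-modify-write sequences restores the correct result.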
Query Optimization Techniques in Graph Databasesijdms
Graph databases (GDB) have recently arisen to overcome the limits of traditional databases for storing and managing data with graph-like structure. Today they are a requirement for many applications that manage graph-like data, such as social networks. Most of the techniques applied to optimize queries in graph databases have been used in traditional databases or distributed systems, or are inspired by graph theory. However, their reuse in graph databases should take account of the main characteristics of graph databases, such as dynamic structure, highly interconnected data, and the ability to efficiently access data relationships. In this paper we survey the query optimization techniques in graph databases; in particular, we focus on the features they have in common.
Organizing Data in a Traditional File Environment
File organization Term and Concepts
Computer system organizes data in a hierarchy
Bit: Smallest unit of data; binary digit (0,1)
Byte: Group of bits that represents a single character
Field: Group of characters as word(s) or number
Record: Group of related fields
File: Group of records of same type
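The hierarchy above maps naturally onto simple data structures; an illustrative Python sketch (names are hypothetical):

```python
# Bit and byte: one character occupies a byte, a group of 8 bits.
char = "A"
byte = ord(char)              # 65
bits = format(byte, "08b")    # "01000001"

# Field: a group of characters forming a word or number.
name_field = "Ada"

# Record: a group of related fields.
record = {"name": "Ada", "dept": "CS", "gpa": 3.9}

# File: a group of records of the same type.
file_of_records = [
    {"name": "Ada", "dept": "CS", "gpa": 3.9},
    {"name": "Alan", "dept": "Math", "gpa": 3.7},
]
```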
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international, English-language, monthly online journal. The journal publishes original research that contributes significantly to scientific knowledge in engineering and technology.
Power Management in Micro grid Using Hybrid Energy Storage Systemijcnes
This paper proposes power management in a microgrid using a hybrid distributed generator based on photovoltaic, wind-driven PMDC, and energy storage systems. In this generator, the sources are connected together to the grid through an interleaved boost converter followed by an inverter; thus, compared to earlier schemes, the proposed scheme has fewer power converters. Fuzzy-based MPPT controllers are also proposed for the new hybrid scheme to separately trigger the interleaved DC-DC converter and the inverter, tracking the maximum power from both sources. The integrated operation of both proposed controllers under different conditions is demonstrated through simulation in MATLAB.
The document provides an overview of database management systems and related concepts. It discusses database components like the data dictionary and data repository. It also covers different data models including hierarchical, network, and relational models. Key concepts covered include entities, attributes, relationships, schemas, and data abstraction which allows users to interact with data without knowing details of how it is structured and stored.
Elimination of data redundancy before persisting into dbms using svm classifi...nalini manogaran
Database management systems are one of the growing fields in the computing world. Grid computing, internet sharing, distributed computing, parallel processing, and the cloud all store huge amounts of data in a DBMS to maintain the structure of the data. Memory management is a major part of a DBMS because of the edit, delete, recover, and commit operations applied to records. To utilize memory efficiently, redundant data should be eliminated accurately. In this paper, redundant data is detected by the Quick Search Bad Character (QSBC) function, and the DB admin is notified to remove the redundancy. The QSBC function compares the data against patterns taken from an index table, created for all data persisted in the DBMS, to ease the comparison of redundant (duplicate) data in the database. The experiment is carried out in SQL Server on a university student database of 15,000 students involved in various activities, and performance is evaluated in terms of time and accuracy.
Keywords—Data redundancy, Data Base Management System,
Support Vector Machine, Data Duplicate.
I. INTRODUCTION
The growing mass of information present in digital media has become a pressing problem for data administrators. Data repositories such as those used by digital libraries and e-commerce agents are usually built from data gathered from distinct origins, with disparate schemata and structures. Problems regarding low response time, availability, security, and quality assurance also become more troublesome to manage as the amount of data grows larger. It is fair to say that the quality of the data an organization uses in its systems determines its efficiency in offering beneficial services to its users. In this environment, the question of maintaining repositories with "dirty" data (i.e., with replicas, identification errors, duplicate patterns, etc.) goes well beyond technical concerns such as the overall speed or performance of data administration systems.
Authors: Nalini M. (nalini.tptwin@gmail.com), Anbu S.
Tags: anomaly detection, data mining, big data, DBMS, intrusion detection, duplicate detection, data cleaning, data redundancy, data replication, redundancy removal, QSBC, error correction, de-duplication
This document summarizes a seminar on temporal databases. It discusses the key topics covered in the seminar including an introduction to temporal databases and their features like valid time and transaction time. It also covers the problems of schema versioning that temporal databases address. The advantages include support for declarative queries and solving problems in temporal data models. Applications mentioned include financial, medical, and scheduling systems. Current research is focused on improving spatiotemporal database management systems. The conclusion is that temporal databases are an emerging concept for storing data in a time-sensitive manner and further efforts are needed to generalize databases as structures change over time.
1. Database management systems (DBMS) allow users to define, create, query, update, and administer databases.
2. A DBMS interacts with users, applications, and the database itself to capture and analyze data stored in the database.
3. Well-known DBMSs include MySQL, Oracle, SQL Server, and PostgreSQL.
QUERY OPTIMIZATION IN OODBMS: IDENTIFYING SUBQUERY FOR COMPLEX QUERY MANAGEMENTcsandit
This document discusses query optimization in object-oriented database management systems (OODBMS) using query decomposition and caching. It proposes an approach that decomposes complex queries into smaller subqueries for faster retrieval of cached results. The approach aims to reuse parts of cached results to answer wider queries by combining multiple cached queries. Experiments showed this approach improved query optimization performance especially when data manipulation rates were low compared to data retrieval rates. Key aspects included decomposing queries, caching subquery results, and reusing cached results to answer other queries.
The document discusses database system architecture and data models. It introduces the three schema architecture which separates the conceptual, logical and internal schemas. This provides logical data independence where the conceptual schema can change without affecting external schemas or applications. It also discusses various data models like hierarchical, network, relational and object-oriented models. Key aspects of each model like structure, relationships and operations are summarized.
The document discusses key concepts related to databases including:
- A database is an organized collection of data stored electronically and accessed via a DBMS.
- Data is logically organized into records, tables, and databases for meaningful representation to users.
- Databases offer advantages like reduced data redundancy, improved data integrity, and easier data sharing.
- Database subsystems include the database engine, data definition language, and data administration.
The document then covers database types, uses, issues, and security concepts.
Hi! Take a look at this awesome computer science dissertation literature review example. If you want to see more visit https://www.literaturereviewwritingservice.com/how-to-conduct-a-computer-science-literature-review/
See an example on writing a computer science dissertation literature review and get more information at https://www.literaturereviewwritingservice.com/
The technology of object oriented databases was introduced to system developers in
the late 1980’s. Object DBMSs add database functionality to object programming languages. A
major benefit of this approach is the unification of the application and database development into
a seamless data model and language environment. As a result, applications require less code, use
more natural data modeling, and code bases are easier to maintain.
Similar to Spatio-Temporal Database and Its Models: A Review (20)
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 11, Issue 2 (May-Jun. 2013), PP 91-100
www.iosrjournals.org
www.iosrjournals.org 91 | Page
Spatio-Temporal Database and Its Models: A Review
Rainu Nandal
University Institute of Engineering and Technology, M.D. University, India
Abstract: This paper studies spatio-temporal databases, their models, and the types of applications in which dynamic modeling of spatio-temporal data can be used; this is an emerging field that contributes substantially to DBMS support for aspects of the real world. Spatio-temporal data models are the heart of a Spatio-Temporal Information System (STIS); they describe object data types, relationships, operations and rules to maintain database integrity. A rigorous data model must anticipate the spatio-temporal queries and analytical methods to be performed in the STIS. Spatio-temporal database models are proposed to deal with real-world applications, where spatial changes occur over the time line. A serious weakness of existing models is that each of them handles only the few common characteristics found across a number of specific applications. Thus the applicability of a model to different cases fails on spatio-temporal behaviors not anticipated by the application used for the initial model development.
Keywords– Land Information System (LIS), Geographical Information System (GIS), Spatial-temporal
databases, STIS.
I. Introduction
Traditional databases are organized by fields, records, and files. A field is a single piece of information; a
record is one complete set of fields; and a file is a collection of records. For example, a telephone book is
equivalent to a file. It contains a list of records, each of which consists of three fields: name, address, and
telephone number. A database is a collection of information that is organized so that it can easily be accessed,
managed, and updated. In one view, databases can be classified according to types of content: bibliographic,
full-text, numeric, and images. In computing, databases are sometimes classified according to their
organizational approach. [4]
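As a concrete illustration of the fields, records, and files described above, the telephone book can be sketched in a few lines of Python; the names and numbers are invented:

```python
from collections import namedtuple

# A "record" is one complete set of fields; a "file" is a collection of records.
# Each telephone-book record has the three fields named in the text.
Entry = namedtuple("Entry", ["name", "address", "phone"])

phone_book = [  # the "file"
    Entry("Alice", "12 Oak St", "555-0101"),
    Entry("Bob", "9 Elm Ave", "555-0102"),
]

def lookup(book, name):
    """Scan the file for the first record whose name field matches."""
    for record in book:
        if record.name == name:
            return record
    return None

print(lookup(phone_book, "Bob").phone)  # 555-0102
```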
The type of database system that you require depends on a number of factors, such as:
- the complexity of the data involved, e.g. plain text, images, sound files;
- the quantity of data to be stored and processed;
- whether the data needs to be accessed and amended by more than one person simultaneously;
- whether data needs to be imported from, or exported to, other IT systems. [7]
Database software commonly implements one of four common types: hierarchical databases, network databases, relational databases or object-oriented databases; two further types, spatial databases and temporal databases, are also distinguished.
Fig 1: Types of databases
1.1 Hierarchical database
The hierarchical database is one of the oldest types of database management systems. It is most
commonly used on mainframe computers. The database creator pre-defines the relationships between each
record and its data. The structure requires a root record, or parent, from which the database designer creates a
parent-child relationship for each bit of data that goes into the database. [7]
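The parent-child structure just described can be sketched as a small tree of records. This is an illustrative toy in Python, not how a mainframe hierarchical DBMS is actually implemented; all keys and data are invented:

```python
# Hierarchical model sketch: every record except the root has exactly one
# parent, so the database forms a tree and lookups walk parent-to-child.
class Record:
    def __init__(self, key, data=None):
        self.key = key
        self.data = data
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

    def find(self, key):
        """Depth-first search starting from this record."""
        if self.key == key:
            return self
        for child in self.children:
            hit = child.find(key)
            if hit:
                return hit
        return None

root = Record("company")                      # the required root record
dept = root.add_child(Record("sales"))        # parent-child link
dept.add_child(Record("emp-17", {"name": "Ana"}))
print(root.find("emp-17").data["name"])  # Ana
```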
1.2 Network database
A network database also organizes data by using defined parent-child relationships. Like a real family,
the network database structure allows a piece of data classified as a child to have more than one parent. This is
an improvement over hierarchical types of database management systems. It allows users to connect information
in one database to another set of data through the parent record and the child record. [7]
1.3 Relational database
The relational database management system has increased in popularity because of its flexibility and
ease of use. It allows the database designer to use individual pieces of information to create relationships
between separate databases without the restriction of parent or owner relationships. The information in one
database that ties it to data in a different one is a unique identifier, such as an employee identification number.
[7]
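The unique-identifier linkage described above can be demonstrated with SQLite through Python's standard sqlite3 module; the table layout, employee numbers, and figures are invented for illustration:

```python
import sqlite3

# Two separate tables tied together by a unique identifier
# (an employee identification number), as described in the text.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE salaries  (emp_id INTEGER, amount REAL);
    INSERT INTO employees VALUES (101, 'Ravi'), (102, 'Meera');
    INSERT INTO salaries  VALUES (101, 52000.0), (102, 61000.0);
""")

# The join uses only the shared identifier -- no parent/owner relationship.
rows = con.execute("""
    SELECT e.name, s.amount
    FROM employees e JOIN salaries s ON e.emp_id = s.emp_id
    ORDER BY e.name
""").fetchall()
print(rows)  # [('Meera', 61000.0), ('Ravi', 52000.0)]
```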
1.4 Object-oriented database
Object-oriented types of database management systems provide a way to organize data other than
numbers and text. Designers use them to accommodate multimedia items such as photos, music and videos. This
database management system uses two identifiers for each item. The first is a descriptive object name, and the
second is a miniature program with instructions or methods that the computer runs during storage and retrieval.
The two parts become an object that the database users can organize like they can with text or numbers. [7]
1.5 Spatial database
A spatial database is a database that is optimized to store and query data that is related to objects in
space, including points, lines and polygons. It offers spatial data types and stores information relating to
geometric or geographical space. The spatial database stores a collection of space-related data. [15]
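One spatial predicate such a database must support, testing whether a point lies inside a polygon, can be sketched with the classic ray-casting algorithm; a production spatial DBMS would use optimized, indexed geometry routines instead:

```python
# Ray casting: count how many polygon edges a horizontal ray from the point
# crosses; an odd count means the point is inside.
def point_in_polygon(pt, polygon):
    """Return True if pt=(x, y) lies inside polygon (a list of vertices)."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```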
1.6 Temporal database
A temporal database stores data relating to time instants. It offers temporal data types and stores information relating to past, present and future, e.g. the history of the stock market or the movement of an employee within an organization. Thus a temporal database stores a collection of time-related data. [4] Of the six database models listed above, two, the temporal database and the spatial database, are used in the present work. These two models are described below.
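The idea of keeping time-related data, such as the employee-movement example above, can be sketched as follows; the table layout and names are illustrative:

```python
from datetime import date

# Each fact carries a validity period, so past states are kept rather than
# overwritten: here, the history of an employee's department.
history = [
    # (employee, department, valid_from, valid_to); valid_to=None means "now"
    ("E7", "Sales",     date(2019, 1, 1), date(2021, 6, 30)),
    ("E7", "Marketing", date(2021, 7, 1), None),
]

def department_on(emp, day):
    """Query the employee's department as of a given day."""
    for e, dept, start, end in history:
        if e == emp and start <= day and (end is None or day <= end):
            return dept
    return None

print(department_on("E7", date(2020, 3, 15)))  # Sales
print(department_on("E7", date(2023, 1, 1)))   # Marketing
```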
II. Spatial Database
A spatial database is a database that is optimized to store and query data that is related to objects in
space, including points, lines and polygons. While typical databases can understand various numeric and
character types of data, additional functionality needs to be added for databases to process spatial data types.
These are typically called geometry or feature types. Spatial database management systems aim at the effective and efficient management of data related to: [1][18]
- a space such as the physical world (geography, urban planning, astronomy);
- parts of living organisms (the anatomy of the human body);
- engineering designs (very large scale integrated circuits, the design of an automobile, or the molecular structure of a pharmaceutical drug); and
- conceptual information spaces (a multidimensional decision support system, fluid flow, or an electromagnetic field). [24]
Commercial examples of spatial database management include Informix’s spatial data-blades (i.e., 2D, 3D,
Geodetic), Oracle’s Universal server with either Spatial Data Option or Spatial Data Cartridge and ESRI’s
Spatial Data Engine (SDE). Research prototype examples of spatial database management systems include
spatial data blades with Postgres, Predator, and Paradise. The functionalities provided by these systems include
a set of spatial datatypes such as a point, line-segment and polygon, and a set of spatial operations such as
inside, intersection, and distance. The spatial types and operations may be made part of a query language such as
SQL, which allows spatial querying when combined with an object-relational database management system. The
performance enhancement provided by these systems includes a multidimensional spatial index and algorithms
for spatial access methods, spatial range queries, and spatial joins. Spatial indexing with concurrency control
may be implemented in the object-relational server for performance reasons. Existing and emerging applications
require new functionalities including the modeling of network spaces and continuous fields. The performance
needs of emerging applications require not only the management of large data sets, but also new processing
strategies for spatial set operations, field operations (e.g., slope), and network analysis (e.g., shortest-path, route-
evaluation).[18]
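Two of the spatial operations named above, a range query and a distance predicate, can be sketched over plain point data; a real spatial DBMS would answer these through a multidimensional index rather than the linear scan used here, and the sample points are invented:

```python
from math import hypot

points = [("a", 1.0, 1.0), ("b", 3.0, 4.0), ("c", 6.0, 2.0)]  # (id, x, y)

def range_query(pts, xmin, ymin, xmax, ymax):
    """Spatial range query: all points inside an axis-aligned rectangle."""
    return [pid for pid, x, y in pts
            if xmin <= x <= xmax and ymin <= y <= ymax]

def within_distance(pts, cx, cy, r):
    """Distance predicate: all points within radius r of (cx, cy)."""
    return [pid for pid, x, y in pts if hypot(x - cx, y - cy) <= r]

print(range_query(points, 0, 0, 4, 4))    # ['a', 'b']
print(within_distance(points, 0, 0, 5.0)) # ['a', 'b']
```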
III. Temporal Database
Commercial database management systems (DBMS) such as Oracle, Sybase, Informix and O2 allow the storage of huge amounts of data. This data is usually considered to be valid now; past or future data is not stored. Past data refers to data which was stored in the database at an earlier time instant and which might have been modified or deleted in the meantime; it is usually overwritten with new (updated) data. Future data refers to data which is considered to be valid at a future time instant (but not now). [25] Temporal data stored in a temporal database differs from data stored in a non-temporal database in that a time period attached to the data expresses when it was valid or stored in the database. As mentioned above, conventional databases consider the data stored in them to be valid at the time instant now; they do not keep track of past or future database states. By attaching a time period to the data, it becomes possible to store different database states. A first step towards a temporal database is thus to timestamp the data. This allows the distinction of different database states. One approach is for a temporal database to timestamp entities with time periods. Another approach is the timestamping of the property values of the entities. In the relational data model, tuples are timestamped, whereas in object-oriented data models, objects and/or attribute values may be timestamped. [2]
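The timestamping of attribute values, the object-oriented approach mentioned above, can be sketched as follows; the class and field names are invented for illustration:

```python
from datetime import date

# Each property of an entity keeps its own list of timestamped versions,
# stored as (start, end, value); end=None marks the currently valid value.
class TemporalAttribute:
    def __init__(self):
        self.versions = []

    def set(self, start, value):
        if self.versions:  # close the previous version's period
            s, _, v = self.versions[-1]
            self.versions[-1] = (s, start, v)
        self.versions.append((start, None, value))

    def at(self, day):
        """Return the value that was valid on the given day (half-open periods)."""
        for start, end, value in self.versions:
            if start <= day and (end is None or day < end):
                return value
        return None

salary = TemporalAttribute()
salary.set(date(2020, 1, 1), 40000)
salary.set(date(2022, 1, 1), 46000)
print(salary.at(date(2021, 6, 1)))  # 40000
print(salary.at(date(2022, 6, 1)))  # 46000
```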
What time period do we store in these timestamps? There are two main notions of time relevant to temporal databases: valid time and transaction time. Valid time denotes the time period during which a fact is true with respect to the real world. Transaction time is the time period during which a fact is stored in the database. Note that these two time periods do not have to be the same for a single fact. [3][4] To categorize temporal data, one can adopt different criteria. Below we list several commonly used categorizations of temporal data: [16][8]
Partially temporal vs. fully temporal: a temporal dataset is partially temporal if it contains data items whose temporal relationships, such as before and after, are undecidable. For instance, a web log is partially temporal, as it is often impossible to decide the exact access time to the same web page from different web sessions. In contrast, in a fully temporal dataset, the temporal relationship between every pair of data items is decidable.
Regularly timestamped vs. irregularly timestamped: in regularly timestamped data, measurements are recorded at equally spaced time points; otherwise, the data is irregularly timestamped.
Univariate vs. multivariate: univariate temporal data describes the temporal behavior of one variable, whereas multivariate temporal data describes the temporal behavior of more than one variable.
Uni-subject vs. multi-subject: uni-subject temporal data involves only one subject, whereas multi-subject temporal data involves more than one subject.
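The regular-versus-irregular distinction above amounts to a simple check on the gaps between consecutive timestamps; a minimal sketch:

```python
# A series is regularly timestamped when all consecutive measurements are
# equally spaced in time; any differing gap makes it irregular.
def is_regularly_timestamped(timestamps):
    if len(timestamps) < 3:
        return True  # fewer than two gaps: trivially regular
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(g == gaps[0] for g in gaps)

print(is_regularly_timestamped([0, 10, 20, 30]))  # True
print(is_regularly_timestamped([0, 10, 25, 30]))  # False
```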
IV. Spatio-Temporal Database
Spatio-temporal databases deal with applications whose data types are characterized by both spatial and temporal semantics. Spatio-temporal data handling is not a straightforward task, owing to the complexity of the data structures: structuring the dimensions requires careful analysis, as do the representation and manipulation of the data involved. Research on space-time representation has focused on a number of specific
areas, including: (a) the ontology of space and time and the development of efficient and robust space-time
database models and languages; (b) inexactness and scaling issues; (c) graphical user interfaces and query
optimization; (d) indexing techniques for space-time databases.[5][24]
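A data object combining both semantics, e.g. a moving point stored as a timestamped trajectory, can be sketched as follows; the sample data and function names are illustrative:

```python
from math import hypot

# A spatio-temporal object: one vehicle's location as a function of time,
# stored as (t, x, y) samples.
trajectory = [
    (0, 0.0, 0.0),
    (10, 3.0, 4.0),
    (20, 6.0, 8.0),
]

def location_at(traj, t):
    """Linearly interpolate the position between the two nearest samples."""
    for (t1, x1, y1), (t2, x2, y2) in zip(traj, traj[1:]):
        if t1 <= t <= t2:
            f = (t - t1) / (t2 - t1)
            return (x1 + f * (x2 - x1), y1 + f * (y2 - y1))
    return None  # t outside the recorded history

def distance_travelled(traj):
    """Sum the straight-line distances between consecutive samples."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(traj, traj[1:]))

print(location_at(trajectory, 5))      # (1.5, 2.0)
print(distance_travelled(trajectory))  # 10.0
```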
V. Spatio-Temporal Data Modeling
The ideas behind the spatio-temporal modeling can be broadly cross-classified according to: (a) their
motivation, (b) their underlying objectives and (c) the scale of data. Under (a) the motivations for models can be
classified into four classes: (i) extensions of time series methods to space (ii) extension of random field and
imaging techniques to time (iii) interaction of time and space methods and (iv) physical models. Under (b) the
main objectives can be viewed as either data reduction or prediction. Finally, under (c) the available data might
be sparse or dense in time or space respectively, and the modeling approach often takes this scale of data into
account.[6] Spatio-Temporal data models are the core of a Spatio-Temporal Information System (STIS); they
define object data types, relationships, operations and rules to maintain database integrity. A rigorous data
model must anticipate spatio-temporal queries and analytical methods to be performed in the STIS. Spatio-
temporal database models are intended to deal with real world applications, where spatial changes occur over
the time line. A serious weakness of existing models is that each deals with only the few common characteristics found across a number of specific applications; the applicability of a model to other cases therefore fails on spatio-temporal behaviors not anticipated by the application used for its initial development.[24] Several different forms of spatio-temporal data types are available in real applications.
4. Spatial-Temporal Database and its Models: A Review
www.iosrjournals.org 94 | Page
While they all share the availability of some kind of spatial and temporal aspect, the extent of such information and the way the two are related can combine into several different kinds of data objects. Figure 5.1 visually depicts a possible classification of such data types, based on two dimensions:[20][23]
Fig 5.1: Spatio-temporal data types dimensions[19]
- The temporal dimension describes to what extent the evolution of the object is captured by the data. The most basic case consists of objects that do not evolve at all, so that only a static snapshot view of each object is available. In slightly more complex contexts, each object can change its status, yet only its most recent value (i.e., an updated snapshot) is known, without any knowledge of its past history. Finally, in the extreme case the full history of the object is kept, forming a time series of the states it traversed.
- The spatial dimension describes whether the objects considered are associated with a fixed location (e.g., the information collected by sensors fixed to the ground) or can move, i.e., their location is dynamic and can change over time.
In addition to these two dimensions, a third, auxiliary one appears in the classification, related to the spatial extension of the objects involved. The simplest case, which is also the most common in real-world case studies, considers point-wise objects, while more complex cases take into consideration objects with an extension, such as lines and areas.
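These classification dimensions can be captured in a small type sketch. The following Python is purely illustrative: the enum and field names are hypothetical, not taken from any model in the survey.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the two classification dimensions, plus the
# auxiliary spatial-extension dimension; all names are illustrative.
class TemporalExtent(Enum):
    STATIC_SNAPSHOT = "static snapshot"     # object never evolves
    UPDATED_SNAPSHOT = "updated snapshot"   # only the latest value is kept
    FULL_HISTORY = "full history"           # complete time series is kept

class SpatialMobility(Enum):
    FIXED = "fixed location"                # e.g. sensors fixed to the ground
    MOVING = "moving"                       # location changes over time

class SpatialExtension(Enum):
    POINT = "point"
    LINE = "line"
    AREA = "area"

@dataclass
class SpatioTemporalType:
    temporal: TemporalExtent
    spatial: SpatialMobility
    extension: SpatialExtension = SpatialExtension.POINT  # most common case

# A GPS-tracked vehicle: a moving point whose full history is kept.
trajectory_type = SpatioTemporalType(TemporalExtent.FULL_HISTORY,
                                     SpatialMobility.MOVING)
```

Under this sketch, a trajectory combines the most demanding value on both dimensions while staying point-wise on the auxiliary one.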
VI. Spatio-Temporal Data Models
Throughout the relatively young history of research on spatio-temporal modeling, a substantial number of models has been presented. Spatio-temporal data models can be classified into the following ten categories:[12][14][20][10]
6.1 The Snapshot Model
One of the simplest spatio-temporal data models is the snapshot model. Temporal information has been
incorporated into this spatial data model by time-stamping layers. In this model, every layer is a collection of
temporally homogeneous units of one theme. It shows the states of a geographic distribution at different times
without explicit temporal relations among layers.
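As a rough sketch, and with entirely invented layer data, the snapshot model amounts to a set of whole layers keyed by timestamp; because there are no explicit temporal relations among layers, change can only be recovered by comparing snapshots:

```python
# Minimal sketch of the snapshot model: each layer is a full, time-stamped
# copy of one theme. Parcel names and land-use values are illustrative.
landuse_snapshots = {
    1990: {"parcel_1": "forest",   "parcel_2": "forest"},
    2000: {"parcel_1": "farmland", "parcel_2": "forest"},
    2010: {"parcel_1": "farmland", "parcel_2": "urban"},
}

def state_at(snapshots, year):
    """Return the most recent layer at or before `year`, or None."""
    valid = [t for t in snapshots if t <= year]
    return snapshots[max(valid)] if valid else None

# Changes between epochs must be computed by diffing whole layers, which
# is the model's main drawback: redundant storage, implicit change.
assert state_at(landuse_snapshots, 2005)["parcel_2"] == "forest"
```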
6.2 The Space-Time Composite (STC) Data Model
The method was put forward by Chrisman in 1983 as an extension of the vector model. It is based on the principle that every line in space and time is projected down to the spatial plane and intersected with the others, creating a polygon mesh. Each polygon in this mesh has its own attribute history associated with it. Each new amendment is intersected with the already existing lines, and new polygons are formed with individual histories. Its biggest weaknesses are the progressive fragmentation of polygons and an excessive dependence on the associated attribute database.
6.3 Data Models based on Simple Time-Stamping
In this model, every object is tagged with a pair of timestamps: one for the time of creation and one for the time of cessation. Current objects have their cessation time given by a special value such as "NOW", "CURRENT", or "NULL". The model is based on the linear, discrete, absolute time model. Only valid time is supported, although the model supports multiple granularities. Time is represented as an attribute of the object, and a vector structure of space is assumed.
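A minimal sketch of simple time-stamping, assuming integer times under some coarse granularity and using None as the "NOW" sentinel (the class and field names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

NOW = None  # sentinel cessation time for still-current objects ("NOW"/"NULL")

# Illustrative sketch: every object carries a (creation, cessation) pair
# on the linear, discrete, absolute time axis.
@dataclass
class TimeStampedObject:
    oid: str
    created: int                 # e.g. a year, under a coarse granularity
    ceased: Optional[int] = NOW

    def valid_at(self, t: int) -> bool:
        """True if the object existed at time t (valid time only)."""
        return self.created <= t and (self.ceased is NOW or t < self.ceased)

parcel = TimeStampedObject("parcel_7", created=1995, ceased=2008)
road = TimeStampedObject("road_3", created=2001)   # still current

assert parcel.valid_at(2000) and not parcel.valid_at(2008)
assert road.valid_at(2024)
```

A time-slice query over such a table is simply a filter on `valid_at`, which is why this model stays close to a conventional relational design.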
6.4 Event-Oriented Models
In event-based spatio-temporal models, state changes of spatio-temporal objects are triggered by geographic events. An event table records, for each event, the attribute or spatial changes it causes, described in temporal sequence. This links object states to the events that produced them and provides the foundation for higher-level temporal operations. Event-based models are well suited to queries of the form "what happened at some time in some area", offer good data consistency, and keep data redundancy low.
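A hedged sketch of the event-oriented idea: only change records are stored, so a "what happened in some area during some interval" query is a direct filter over the event table. The schema and data below are invented for illustration.

```python
# Illustrative event table: each record stamps a change with its time,
# area, affected object, and the kind of change.
events = [
    {"time": 3, "area": "north", "object": "parcel_1", "change": "split"},
    {"time": 5, "area": "south", "object": "road_2",   "change": "widened"},
    {"time": 7, "area": "north", "object": "parcel_4", "change": "merged"},
]

def what_happened(events, area, t_start, t_end):
    """The query this model is best at: events in an area over an interval."""
    return [e for e in events
            if e["area"] == area and t_start <= e["time"] <= t_end]

hits = what_happened(events, "north", 0, 6)
assert [e["object"] for e in hits] == ["parcel_1"]
```

Because only deltas are stored, redundancy stays low, but reconstructing a full state requires replaying events from some known baseline.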
6.5 The Three-Domain Model
This model represents semantics, space and time separately and provides links between them to
describe geographic processes and phenomena. The semantic domain holds uniquely identifiable objects that
correspond to human concepts independent of their spatial and temporal location. It identifies a semantic domain, a spatial domain, and a temporal domain for spatio-temporal data. The links between space and time are described through the different semantics.
Fig-6.1: The three-domain model (semantic, spatial, and temporal domains)
6.6 The History Graph Model
The history graph model is to identify all types of temporal behavior and to manage both objects and
events. The intention of the history graph notation is to visualize the temporal element of geographical and other
information. It is based on the simple idea that an object may either be in a static, a changing or a ceased state.
In the history graph notation, the static states called object versions are shown with rectangular boxes, while the
changing states called transitions between versions are shown with round ended boxes (or circles in case of
sudden changes).
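The three states and their transitions can be sketched as a small state machine. The transition rules below are one plausible reading of the notation, not a formal definition from the model:

```python
# One reading of history graph states: a static version may start changing
# or cease; a transition ends in a new static version; a ceased object
# accepts no further changes.
LEGAL = {
    "static":   {"changing", "ceased"},
    "changing": {"static"},
    "ceased":   set(),
}

def advance(state, new_state):
    if new_state not in LEGAL[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "static"
s = advance(s, "changing")   # a version begins to change
s = advance(s, "static")     # the change yields a new object version
s = advance(s, "ceased")     # the object's history ends
assert s == "ceased"
```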
6.7 The Spatio-Temporal Entity-Relationship (STER) Model
The careful analysis of spatio-temporal applications and behavior of spatial and temporal entities
suggested that entity sets with their attributes and relationships could capture the dynamic nature of spatio-
temporal databases. In STER model, three types of time aspects can be defined: (i) valid time, (ii) transaction
time, and (iii) existence time. The valid time of a fact is the time when the fact is true in the modeled reality.
The transaction time of an element is the time when the element is part of the current state of the geo database; it applies not only to facts but to any element that may be stored in the database. The existence time refers to the time when the object exists.
6.8 Object-Relationship (O-R) Model
The implementations of object-relationship models describe "processes, which act on the geometric attributes of an entity" and illustrate the importance of capturing the processes that cause change in connection with space and time.
6.9 Spatio-Temporal Object-Oriented (O-O) Data Models
This model organizes geographic space-time using object-oriented ideas: an object is an independent package, a conceptual entity with a unique identifier. Each geographic spatio-temporal object encapsulates its temporal characteristics, spatial characteristics, attribute characteristics, associated behavior (operations), and its relations with other objects.
The model introduces the concept of version management in order to integrate object and event elements. Two main versioning levels can be distinguished: object version and object configuration. Four basic premises underlie the proposed model at the object version level: a) every object must have an initial version, b)
a hierarchical structure is imposed on the versions of an object, c) different versions of an object denote different
object instances, d) among versions, a current version is always distinguished.
6.10 Moving Object Data Models
In the moving object data model, spatio-temporal data is abstracted as a collection of moving objects, including moving points and moving regions. The model treats time as an integral part of spatial entities and captures both changes and movements.
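A common concrete realization, sketched here under the assumption of linearly interpolated sample points, is a moving point whose position is defined at every instant of its lifespan:

```python
from bisect import bisect_right

def position_at(traj, t):
    """traj: list of (t, x, y) fixes sorted by t; position at time t is
    obtained by linear interpolation between the enclosing samples."""
    if not traj[0][0] <= t <= traj[-1][0]:
        raise ValueError("t outside the trajectory's lifespan")
    i = bisect_right([p[0] for p in traj], t)
    if i == len(traj):                 # t equals the final sample time
        return traj[-1][1:]
    t0, x0, y0 = traj[i - 1]
    t1, x1, y1 = traj[i]
    f = (t - t0) / (t1 - t0)
    return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

# Invented trajectory: east for 10 time units, then north for 10.
traj = [(0, 0.0, 0.0), (10, 10.0, 0.0), (20, 10.0, 10.0)]
assert position_at(traj, 5) == (5.0, 0.0)
assert position_at(traj, 15) == (10.0, 5.0)
```

Richer operations of moving-object algebras (speed, trajectory intersection, range queries over time) build on exactly this kind of time-parameterized geometry.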
VII. Features Of Spatio-Temporal Data Model
The spatio-temporal database models above vary in completeness, formalization, and implementation. They are described by the following features in Table 7.1[14]:
- Formalisation: whether the model has been formally defined.
- Implementation: whether the model has been implemented.
- Tool: if the model has been implemented, the name of the tool developed.
- Application: the case study used to analyse and develop the model.
- Spatial model: the spatial model used as the basis for the development of the corresponding spatio-temporal model.
- Temporal model: the temporal model used to define and develop the spatio-temporal model.
Table7.1: Comparing existing spatio-temporal data models
VIII. Practical Project Applications For Spatio-Temporal Modeling Of Dynamic Phenomena In GIS
A temporal GIS has three general components (temporal database, temporal visualization, and temporal analysis), which lead to three research domains in temporal GIS. Animation, an efficient approach for representing temporal changes, is implemented first; optimum path analysis is then extended so that it can handle time-dependent graphical and attribute information on the basis of the animation representation method.
Spatio-temporal data model | Formalisation | Implementation | Tool | Application | Spatial model | Temporal model
Snapshot | No | No | None | LIS | GIS | N/A
STC | No | No | None | LIS | GIS | N/A
Simple Time Stamping | No | No | None | Historical cadastral database | US spatial data transfer standard | N/A
Event Oriented | Yes | Yes | TEMPEST | LIS | GIS with event dates | Ordered time models
3-Domain | Yes | No | None | LIS | Relational spatial database | Relational version tables
History Graph | No | No | None | LIS | N/A | Graphs
STER | Yes | No | None | Cadastral application | Spatial indicator | Temporal indicators
O-R | Yes | Yes | MADS | Rural urban land use application | MEOSIG | MODUL-R, POLLEN
O-O | Yes | Yes | Geo-OM | LIS | ERT model | Temporal base model
STUML | Yes | No | None | Regional health care example | Spatial indicator | Temporal indicators
Moving Objects | Yes | Yes | SECONDO module | Multimedia scenario, forest fire control management | Abstraction of spatial data types & Oracle Spatial | Abstraction of temporal data types & TAU types
Integrating temporal path analysis with animation can be useful in many cases: more reliable decision making, and faster and better access to emergency services in critical conditions, become possible if traffic volume and road network conditions are imported into GIS databases dynamically.[28] With this integration of animation and optimum path analysis, the system can obtain the real optimum path taking into account the momentary traffic volume of road networks and other temporal parameters that affect the selection of the optimum path. If this system were integrated with a GIS-GPS system, it could handle the real-time position of vehicles in order to evaluate the real optimum path for each moving vehicle.[17][11] The major applications of spatio-temporal modeling of dynamic phenomena in GIS are given below:[21][22][9]
8.1 Roads traffic volume simulation
In practice, road traffic volumes have to be collected and transmitted to the GIS database by specific sensors. These sensors monitor and record traffic volume changes and then apply the changes to the database. Using such data, simulations can be performed to analyze traffic volumes.
8.2 Vehicle movement simulation
Vehicle movements in a GIS can be considered as changes of the planimetric position attribute in the database. These position attributes (X, Y or Phi, Lambda coordinates) can be obtained from GPS or other positioning systems and transmitted to the database. Vehicle movements with respect to road networks can then be simulated, and the resulting attributes applied to the database.
8.3 Graphical changes modeling
In order to represent and model graphical changes related to road networks in a cadastral map at a
scale, for example 1:500, the snapshot model can be used. For this purpose, every change is stored in a separate layer (further changes can be added to the model by storing new snapshots in the specified path on the computer), and the implemented system then represents these snapshots sequentially with the animation method.
8.4 Extended optimum path analysis
In common cases this analysis requires only one start point, one end point, some intermediate points, and one attribute field, such as traffic volume, that carries cost information for each segment. In a temporal GIS, however, each of these elements can change over time (movement of the start, end, and intermediate points, changes in the attribute field, or any combination of them). In addition, the graphical information can undergo structural changes (such as the construction of new roads). Therefore, optimum path analysis must be extended to handle these temporal changes. Moreover, the representation of the temporal optimum path must be coordinated with the animation picture rate.
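One plausible way to extend the classical search is a Dijkstra variant whose edge costs are functions of departure time rather than constants. The sketch below assumes FIFO travel-time functions; the network and cost functions are invented for illustration.

```python
import heapq

def rush_hour(dep):           # illustrative: slower between t=7 and t=9
    return 4 if 7 <= dep < 9 else 1

graph = {                     # node -> [(neighbor, travel_time_fn)]
    "A": [("B", rush_hour), ("C", lambda dep: 2)],
    "B": [("D", lambda dep: 1)],
    "C": [("D", lambda dep: 2)],
    "D": [],
}

def earliest_arrival(graph, source, target, t0):
    """Time-dependent Dijkstra: relax each edge at its departure time."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue                       # stale queue entry
        for v, travel in graph[u]:
            arr = t + travel(t)            # cost evaluated at departure time
            if arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return None

# Off-peak, A-B-D (1+1) beats A-C-D (2+2); at rush hour the choice flips.
assert earliest_arrival(graph, "A", "D", t0=0) == 2
assert earliest_arrival(graph, "A", "D", t0=7) == 11
```

The same label-setting search applies when intermediate points or the network structure also vary, provided edge availability is folded into the travel-time functions.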
8.5 Animation
Graphical image formats such as GIF, which are frequently used to implement animation in web applications, store a sequence of images in one file and show them one by one. In this approach there are no tools for users to control the picture rate or to perform analysis on the images. The animation intended in GIS, however, must preserve the ability to use all GIS analysis, querying, and other static GIS capabilities. When animation is used in GIS, an important concept is the picture rate, which relates the representation period to the real period of the occurring events. For example, an annual cycle of sea level changes may be represented in a few minutes, or the moment of an accident over a longer duration. Thus every GIS that uses animation must also provide the ability to control the picture rate.
The system implemented in this research provides facilities to represent graphical changes in addition to representing database changes with animation. In this case, the user must first store the new snapshots in a path on the hard disk; this path, the number of snapshots, and the number of loops required are then introduced to the system. Figure 8.1 represents a schema of the implemented system.[29]
Fig. 8.1- A schema of implemented system interface[13]
IX. Research Issues In Spatio-Temporal Database
The following points consider the spatio-temporal research agenda that is important in the framework of computing science, in terms of the necessities that determine the priorities of modeling spatio-temporal applications, and highlight the key issues over those of lesser significance:
Spatio-temporal databases typically deal with large, complex bodies of spatially referenced data, which must be readily available. Indexing techniques for space-time databases, and more specifically for real-time applications that describe continuously evolving spatial entities, are therefore still an important and open research area.[30]
A sizeable proportion of the data is either regularly or irregularly updated from external data sources, or needs to be continuously updated due to the evolution of natural processes.[35]
Some of the data are noisy, conflicting and incomplete. More analytically, a major problem with spatial
data is the control of error propagation under spatial operations. Further research is needed on finite
precision geometry and multiple resolution techniques.[31]
Complex functions and calculations involving operators, relationships, and objects, for the prediction of their future motion, need to be designed. Research also needs to be carried out on applications of newer computational paradigms, such as constraint-based approaches, fuzzy sets, and rough sets.[33]
A growing number of researchers in both the DBMS and GIS communities have come to the realization that
a general, application-independent solution that allows an optimal combination of simplicity, flexibility and
efficiency will require rethinking at an abstract level and new types of implementation data models and
associated query languages. This solution needs to be based upon a uniform ontological framework and
requires a multi-representational approach.[32][34]
X. Conclusion
The spatial and temporal dimensions should be considered separately and incorporated into the database design. Modeling temporal and spatial characteristics captures the corresponding aspects of the real world. The thematic characteristics of spatio-temporal objects should be identified and modeled together with the spatial and temporal characteristics. Studying real-world applications brings out new directions and requirements for further developments. The spatio-temporal models surveyed here are mostly concerned with the theoretical notation of spatio-temporal data. The next step in spatio-temporal database development is therefore the testing stage, where the proposed models are implemented on different applications to identify further requirements and research directions.
REFERENCES
[1]. A. Renolen, ―Temporal Maps and Temporal Geographical Information Systems (Review of Research)‖, Department of Surveying
and Mapping, The Norwegian Institute of Technology, 1997.
[2]. A.G. Cohn and S.M. Hazarika. ―Qualitative spatial representation and reasoning: An overview‖, Fundamental Informatics, 46(1-
2):1–29, 2001
[3]. Auroop R Ganguly & Karsten Steinhaeuser, (2008) ―Data Mining for Climate Change and Impacts―, IEEE International conference
on Data mining workshop, ICDMW, 385-394, 15-19 Dec, Italy.
[4]. Bitner T, (2000) ―Rough sets in Spatiotemporal data mining‖, Proceedings of International workshop on Temporal, Spatial and
Spatiotemporal Data Mining, Lyon, France.
[5]. Bogorny V & Shekhar S, (2010) "Spatial and Spatio-Temporal Data Mining", IEEE 10th International Conference on Data Mining (ICDM), Sydney, NSW.
[6]. Bruno De C. Leal et.al, (2011) ― From Conceptual Modeling to Logical Representation of Trajectories in SGBDOR and DW
Systems‖, Journal of Information and Data Management, Vol 2, No 3.
[7]. C. Armenakis, ―Estimation and Organization of Spatio-Temporal Data‖, Proceedings of the Canadian Conference on GIS92,
Ottawa, Canada, 1992.
[8]. C. Bettini, X. S. Wang, and S. Jajodia, "A general framework and reasoning models for time granularity", Proceedings of Third
International Workshop on Temporal Representation and reasoning (TIME '96), pp. 104-11,1996.
[9]. C.S. Jensen, J. Clifford, R. Elmasri, S.K. Gadia, P. Hayes and S. Jajodia, eds., ―A Glossary of Temporal Database Concepts‖ ACM
SIGMOD Record, vol. 23, no. 1, pp. 52-64, Mar. 1994.
[10]. Changbin Wu, (2011) ―Detecting Spatio-Temporal Topological relationships between boundary lines of parcel‖, International
Conference on Remote sensing, Environment and Transportation Engineering, Nanjing.
[11]. D. Peuquet, ―Making Space for Time: Issues in Spase-Time Data Representation‖ GeoInformatica, 5: 11-32, 2001.
[12]. Derya Birant & Alp Kut, (2007) ―ST-DBSCAN: An algorithm for clustering spatio-temporal data―, Data & Knowledge
Engineering, Volume 60, Issue 1, January, Pages 208-221.
[13]. F. Wang, G.B. Hall, and Subaryono, ―Fuzzy Information Representation and Processing in Conventional GIS Software: Database
Design and Application‖, International Journal of Geographical Information Systems, Vol. 4: 261-283, 1990.
[14]. Florian Verhein & Sanjay Chawla, (2005) ―Mining Spatio-Temporal Association Ruls, Sources, Sinks, Stationary Regions and
Thoroughfares in Object Mobility Databases―, Technical Report Number 574, The University of Sydney.
[15]. Gabriel Pestana etal, (2005) ―Multidimensional Modeling based on spatial,Temporal and Spatio-Temporal Stereotypes―, ESRI
International User Conference, July, Sandiego, Califonia.
[16]. Gabriel Pestana, Miguel Mira da Silva,‖Multidimensional Modeling based on Spatial, Temporal and Spatio-Temporal
Stereotypes‖,ESRI International User Conference July 25–29, 2005
[17]. H. Yang, K. Marsolo, S. Parthasarathy, and S. Mehta. ―Discovering spatial relationships between approximately equivalent
patterns‖. In BIOKDD04: Workshop on Data Mining in Bioinfomatics (with SIGKDD04 Conf.), August 2004 .
[18]. H.F. Korth and A. Silberschatz, Database System Concepts.McGraw-Hill Advanced Computer Science Series. McGraw-HillBook
Co., 1986
[19]. J M Kang, S Shekar, M Henjum, P Novak & W Arnold, (2009) ― Discovering Teleconnected Flow Anomalies : A Relationship
Analysis of Spatiotemporal Dynamic Neighborhoods‖, In Symposium of Spatial and Temporal Databases SSTD’09, July 8-10,
Aalborg, Denmark.
[20]. James F Allen, "Time and Time Again: The Many Ways to Represent Time" Journal of Intelligent Systems, vol. 6, pp. 341-355,
July 1991
[21]. Jia-Dong Ren, Jie Bao & Hui-Yu Huang, (2003) ―Research on Spatio-Temporal Data Model and Related Mining―, Proceeding of
the Second International Conference on Machine Learning and Cybernetics, Xi’an, 2-5 November.
[22]. Jiawei Han, (2003) ―Mining Spatiotemporal Knowledge: Methodologies and Research Issues―, A position paper, KDV workshop.
[23]. K.Venkateswara Rao, Dr. A.Govardhan & Dr.K.V.Chalapati Rao, (2011) ―Discovering Spatiotemporal Topological Relationships‖,
The Second international workshop on Database Management Systems, DMS-2011, July, Chennai, India, Springer Proceedings
LNCS-CCIS 198.
[24]. K.Venkateswara Rao, Dr. A.Govardhan & Dr.K.V.Chalapati Rao, ―Mining Topological Relationship Patterns from spatiotemporal
Databases―, International Journal of Data Mining and Knowledge Management Process, IJDKP, Accepted.
[25]. M Ekram Azim et.al, (2011) ― Detection of the Spatiotemporal Trends of Mercury in Lake Erie Fish Communities: A Bayesian
Approach‖, ACS Environmental Science & Technology, 45(6).
[26]. M. Erwig and M. Schneider, ―Developments in Spatio-Temporal Query Languages‖, In IEEE Int.Workshop on Spatio-Temporal
Data Models and Languages (STDML), pages 441-449, Florence,Italy, 1999.
[27]. M. Erwig and M. Schneider, ―Spatio-Temporal Predicates‖, Technical Report 261, Fern Universitat Hagen, 1999. To appear in
IEEE Transactions on Knowledge and Data Engineering
[28]. M. J. Egenhofer and J. R. Herring. ―Spatial and Temporal Reasoning in Geographic Information Systems‖,Oxford University Press
,New York,1998.
[29]. M. Koubarakis, T. Sellis et al. (eds.), Spatio-Temporal Databases: The Chorochronos Approach, 2003. Springer-Verlag LNCS
2520.
[30]. M. Koubarakis, T.K. Sellis et al.(eds.),‖Spatio-Temporal Databases: The Chorochronos Approach‖ Springer-Verlag LNCS 2520,
2003.
[31]. M. Yuan, "Temporal GIS and Spatio-Temporal Modeling " in Proceedings of Third International Conference on Integrating GIS
and Environmental Modeling, Santa Fe, New Mexico, USA, 1996.
[32]. Manso J A, Times V C, Oliveira G, Alvares L & Bogorny V, (2010) ―DB-SMoT: A Direction based Spatio-Temporal Clustering
Method‖, Fifth IEEE International Conference on intelligent Systems IEEE IS 2010.
[33]. Martin Erwig and Markus Schneider,‖ Spatio-Temporal Predicates‖ IEEE TRANSACTIONS ON KNOWLEDGE AND DATA
ENGINEERING, VOL. 14, NO. 4, JULY/AUGUST 2002 881
[34]. Max J Egenhofer ,‖ what is special about spatial database requirements for vehicles navigation in geographic space ‖, SIGMOD
Rec.,vol.22,no. 2, pp.398—402,june 1993.
[35]. Michael Mcguire, Vandana J & Aryya Gangopadhyay (2010) ― Spatiotemporal Neighborhood discovery for sensor data‖, In
knowledge discovery from sensor data, Vol 5840.
[36]. N. Pelekis, B. Theodoulidis, I. Kopanakis, and Y. Theodoridis. ―Literature review of spatio-temporal database models‖. Technical
report, Center of Research in Information Management (CRIM), Department of Computation, UMIST; Department of Informatics,
University of Piraeus, 2005.
[37]. N. W. J. Hazelton, ―Developments in Spatio-Temporal GIS‖, Proceedings of the First Regional Conference on GIS Research in
Victoria and Tasmania, 1992.
[38]. O. Ahlqvist, J. Keukelaar and K. Oukbir, ―Rough Classification and Accuracy Assessment‖, International Journal of Geographical
Information Science, Vol. 14: 475-496, 2000.
[39]. O. Guenther and A. Buchmann. ``Research Issues in Spatial Databases'', SIGMOD Record, Vol. 19:61-68,1990.
[40]. O. Wolfson, B. Xu, S. Chamberlain, L. Jiang, ―Moving Objects Databases: Issues and Solutions‖, Proceedings of the 10th Int.
Conference on Scientific and Statistical Database Management, pages111-122, Capri, Italy, 1998.
[41]. P. J. Brockwell and R. A. Davis. ―Introduction to Time Series and Forecasting‖.Springer, ISBN: 0-387- 95351-5, 2003.
[42]. R.H. Guting, ―An Introduction to Spatial Database Systems‖, VLDB Journal 4 (1994), 357-399.
[43]. Roberto Trasarti, Fabio Pinelli & Mirco Nanni, (2011) ―Mining Mobility User Profiles for Car Pooling―, KDD’ 11, August 21-24,
San Diego, California, USA.
[44]. S. Grumbach, P. Rigaux and L. Segoufin, ―Spatio-Temporal Data Handling with Constraints‖, GeoInformatica, 5: 95-115, 2001.
[45]. S. Nadi and R.D. Mahmoud, "Spatio-Temporal Modeling of Dynamic Phenomena in GIS," in ScanGIS'2003 - The 9th
Scandinavian Research Conference on GIS, 4-6 June 2003- Proceedings, Espoo, Finland , pp. 215- 225,2003.
[46]. S. Shekhar, S. Chawla, S. Ravada,A. Fetterer, X. Liu, C.T. Lu,‖Spatial Databases: Accomplishments and Research Needs‖
[47]. S.Kisilevich,F.Mansmann, M.Nanni, and S.Rinzivillo, "Spatio-temporal clustering," pp. 855-874, 2010
[48]. SUJIT K. SAHU, KANTI V. MARDIA,‖Recent Trends in Modeling Spatio-temporal Data‖
[49]. T. Abraham and J. F. Roddick. ―Survey of spatio-temporal databases‖. Geoinformatica, 3(1):61–99, 1999.
[50]. Taher Omran & Maryvonne, (2005) ―Multidimensional Structures Dedicated to Continuous Spatiotemporal Phenomena‖, BNCOD
2005, LNCS 3567, pp 29-40.
[51]. Vieira M R, Frias Martinez V, Oliver N & Frias Martinez E, (2010) ―Characterizing Dense Urban Areas from Mobile Phone-call
Data: Discovery and Social Dynamics‖, IEEE Second International Conference on Social Computing, Minneapolis, MN.
[52]. Viveca Asproth, Anita H.kansson, and Peter Rèvay, "Dynamic information in GIS systems," vol. 19, pp. 107 - 115, 1995
[53]. Xiaoyu Wang, Xiaofang Zhou and Sanglu Lu,‖Spatio-temporal Data Modeling and management: A Survey‖
[54]. Yan Huang, Cai Chen & Pinliang Dong, (2008) "Modeling Herds and Their Evolvements from Trajectory Data", International Conference on Geographic Information Science, GIScience.