HP India Sales Pvt Ltd has proposed a Digital Repository Solution for an institution using the DSpace open-source digital library system. The proposal outlines the benefits of digital repositories for preservation, access and discovery of digital materials. It describes key aspects of implementing DSpace, such as its architecture with separate tiers for presentation, application processing and data storage. HP would deliver the installation, configuration and population of DSpace to create a digital repository for the institution's intellectual assets and research outputs.
DSpace Proposal Tvm 1407
Proposal for HP – DSpace (Digital Repository Solution) from HP India Sales Pvt Ltd
May 11, 2009
HP India Sales Pvt Ltd
No. 2 Harrington Road, HP Towers
North Tower, Chetpet
Chennai 600031
www.hp.com

Satyanarayana Venkatarao
Tel: +91 44 28365566
kantipudi.satya@hp.com

May 11, 2009
Attn: Customer Details
Dear Chairman,
HP India Sales Pvt Ltd is pleased to offer a proposal for a Digital Library Solution based on leading-edge information technology products and services: a broad portfolio of market-leading products that offer flexibility, investment protection and superior performance.

Benefits of the Digital Repository solution for the Institution:
• Preservation and conservation: an exact copy of the original can be made any number of times without degradation.
• Round-the-clock availability of data.
• Digitally store TV programs, newspaper articles, photos, audio lectures, video lectures and sound.
• Fast and easy information retrieval, whereby the user is able to use any search term, word, phrase, title, name or subject to search the entire collection.
• Very user-friendly interfaces, giving clickable access to its resources.

The Institution can have confidence in the proposed approach because HP India Sales Pvt Ltd has assisted customers worldwide with the successful deployment of similar business solutions.

HP India Sales Pvt Ltd is committed to the Institution's success and is confident that the solution addresses all critical requirements. We look forward to meeting with you to review our capabilities, to discuss the benefits of our proposed solution, and to explore the next steps in forging a strong and mutually beneficial relationship.
Sincerely,
HP India Sales Pvt Ltd
Satyanarayana Venkatarao
Digital Repository Solution Framework

Background
Digital repositories, also known as digital libraries, are important for organizations in helping to manage and capture intellectual assets as part of their information strategy. A digital repository can hold a wide range of materials for a variety of purposes and users. It can support research, learning, and administrative processes. Broadly speaking, a digital repository is a library in which collections are stored in digital formats (as opposed to print, microform, or other media) and accessible by computers. The digital content may be stored locally, or accessed remotely via computer networks. A digital library is a type of information retrieval system.

Digital repositories can immediately adopt innovations in technology, providing users with improvements in electronic and audio book technology as well as presenting new forms of communication such as wikis and blogs.

No physical boundary. The user of a digital repository/library need not look for physical paper files; people across locations can gain access to the same information, as long as an Internet connection is available.

Round the clock availability. A major advantage of digital libraries/repositories is that people can gain access to the information at any time, night or day.

Multiple access. The same resources can be used at the same time by a number of users.

Structured approach. Digital libraries/repositories provide access to much richer content in a more structured manner, i.e. one can easily move from the catalog to a particular book, then to a particular chapter, and so on.

Information retrieval. The user is able to use any search term (word, phrase, title, name, subject) to search the entire collection. Digital libraries/repositories can provide very user-friendly interfaces, giving clickable access to their resources.

Preservation and conservation. An exact copy of the original can be made any number of times without any degradation in quality.

Networking. A particular digital library can provide a link to any other resources of other digital libraries very easily; thus seamlessly integrated resource sharing can be achieved.
Digital Repository Architecture

The Digital Repository Solution is based on a multi-tier architecture, categorized broadly into three tiers:

1. Presentation Tier (Web Server): The topmost level of the library solution is the user interface. It comprises a web server, acting as an interface with library user(s).

2. Logic Tier (Application Server): This layer processes commands, retrieves data and sends it to the presentation layer to be shown to users. Application servers like Tomcat are typically part of this layer.

3. Data Tier (Database & Storage Server): This layer stores library information for retrieval by the application server, which processes it and sends it to the web server for display to users. It consists of a database server, and storage and backup devices for data integrity.
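As an illustration of how the logic tier talks to the data tier, the following is a minimal Java (JDBC) sketch of querying a PostgreSQL database server. The connection URL, credentials, and the "item" table with its "title" column are hypothetical examples, not part of this proposal.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: a logic-tier component fetching records from the data tier.
// The connection details and the "item" table are illustrative only.
public class LogicTierExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/dspace"; // hypothetical host/database
        try (Connection conn = DriverManager.getConnection(url, "dspace", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT title FROM item WHERE title ILIKE ?")) {
            ps.setString(1, "%repository%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // In a deployment, this result would be handed back to the
                    // presentation tier (web server) for display to the user.
                    System.out.println(rs.getString("title"));
                }
            }
        }
    }
}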
Objective of the Proposal

This proposal is for the supply and deployment of the DSpace Digital Repository software for the Institution.

This proposal facilitates the creation of a platform for a digital, web-based knowledge repository accessible by the community. The aim is to capture, store, index, preserve and redistribute the knowledge assets of the Institution for the overall development of the community on a long-term basis.

The implementation of this proposal will bring years of joint research between the Massachusetts Institute of Technology (MIT) and HP in developing DSpace to bear, assisting the Institution in creating a world-class knowledge repository for the benefit of the community.

Benefits to the Institution
• Implementation of the Digital Repository provides an opportunity for the Institution to achieve and sustain a leadership position in education, consultancy, and research and development.
• Implementation of the Digital Repository would help preserve knowledge generated within the institution and make it accessible to all for the overall development of the community.
• Implementation of the Digital Repository provides an opportunity to update the skills and knowledge of the community.
• Implementation of the Digital Repository provides an opportunity for the Institution to provide its community with the latest technologies as per industry requirements.
• Implementation of the Digital Repository will facilitate collaboration and sharing of knowledge amongst the community on a long-term basis.

Specifically, the Digital Repository provides the following benefits:
• Provides the capability to publish research results quickly.
• Helps in reaching the community through exposure to the omnipresent Internet.
• Archiving and distributing material currently stored in the library.
• Keeping track of publications/bibliography.
• Having a persistent network identifier for your work.
1. The DSpace Digital Repository

A digital institutional repository is a set of services that an institute offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members. It is, most essentially, an organizational commitment to the stewardship of these digital materials, including long-term preservation where appropriate, as well as organization and access or distribution.

Collecting, distributing, and preserving research materials in increasingly complex digital formats is a time-consuming and expensive chore for individual faculty and their departments, labs, and centers to manage themselves. The DSpace system provides a way to manage these research materials and publications in a professionally maintained repository to give them greater visibility and accessibility over time.

DSpace is a groundbreaking digital library system to capture, store, index, preserve, and redistribute all scholarly research material in digital formats. One can share research findings quickly with a worldwide audience and preserve materials in perpetuity.

DSpace captures data in any format – text, video, audio, and data. It distributes it over the web. It indexes the work, so users can search and retrieve items. It preserves digital work over the long term.

DSpace provides a way to manage research materials and publications in a professionally maintained repository to give them greater visibility and accessibility over time.

Adding content in DSpace
DSpace is easy to use. You use your web browser to submit content and to search or browse its collections.
To submit content, you upload the file(s) and add descriptive information including title, author, publication information, and keywords. This descriptive data is known as metadata.

To add your content, though, you must belong to a DSpace community. Speak with your library's staff to learn more about DSpace communities.

Licensing and copyright issues
To add content to DSpace, one must hold the copyright to the material, or have permission to submit work for which one does not hold copyright. One should be willing and able to grant the institute library the right to preserve and distribute the work in DSpace.

Many publishers offer a "self-archiving" clause in publication contracts, which allows one to archive a copy of one's work. If the publisher doesn't offer such a clause, one can negotiate to include one.

Each institute sets its own licensing requirements for DSpace.

Preserving data for grants
DSpace provides a means to preserve and distribute data and research, as is required by many grants.

Reference Sites
• Central Plantation Crops Research Institute, Kasargod
• GB Pant University of Agriculture & Technology, Pant Nagar
• Indira Gandhi Institute for Development Research, Mumbai
• INFLIBNET, Ahmedabad
• Indian Institute of Astrophysics, Bangalore
• Indian Institute of Management, Kozhikode
• Indian Institute of Science, Bangalore
• Indian Institute of Technology, Bombay
• Indian Institute of Technology, New Delhi
• Indian Institute of Technology, Kharagpur
• Indian National Science Academy, New Delhi
• Indian Statistical Institute, Bangalore
• LDL: Librarians' Digital Library, DRTC
• National Centre for Radio Astrophysics, Pune
• National Chemical Laboratory (NCL), Pune
• National Institute of Oceanography, Goa
• National Institute of Technology, Rourkela
• Raman Research Institute, Bangalore
• Sri Venkateswara University, Tirupati
• University of Hyderabad, Hyderabad
DSpace Features
The major features of DSpace are:

Institutional Repository
• DSpace is a digital library system to capture, store, index, preserve, and redistribute the intellectual output of a university's research faculty in digital formats.
• DSpace is organized to accommodate the multidisciplinary and organizational needs of a large institution.
• DSpace provides access to the digital work of the whole institution through one interface.
• DSpace is organized into Communities and Collections, each of which retains its identity within the repository.
• Customization for DSpace communities and collections allows for flexibility in determining policies and workflow.

Supported Formats and Content Types
DSpace accepts any type of digital content, including:
• Text
• Images
• Audio
• Video

Some examples of items that DSpace can accommodate are:
• Documents such as articles, preprints, working papers, technical reports, conference papers
• Books
• Theses
• Data sets
• Computer programs
• Visual simulations and models

Each institution that implements DSpace can determine its own list of supported formats and content types, based on its needs and resources.
Digital Preservation
One of the primary goals of DSpace is to preserve digital information.
• DSpace provides long-term physical storage and management of digital items in a secure, professionally managed repository, including standard operating procedures such as backup, mirroring, refreshing media, and disaster recovery.
• DSpace assigns a persistent identifier to each contributed item to ensure its retrievability far into the future.
• DSpace provides a mechanism for advising content contributors of the preservation support levels they can expect for the files they submit.

Access Control
DSpace allows contributors to limit access to items in DSpace, at both the collection and the individual item level.

Versioning
New versions of previously submitted DSpace items can be added and linked to each other, with or without withdrawal of the older item. Multiple formats of the same content item can be submitted to DSpace, for example, a TIFF file and a GIF file of the same image.
Search and Retrieval
The DSpace submission process allows for the description of each item using a qualified version of the Dublin Core metadata schema. These descriptions are entered into a relational database, which is used by the search engine to retrieve items.
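To make "qualified Dublin Core" concrete: each descriptive field is addressed as schema.element.qualifier, for example dc.title, dc.contributor.author, or dc.date.issued. The sketch below dumps an item's Dublin Core record. It follows the DSpace 1.x-era Java API; class and method names vary between DSpace releases and should be treated as assumptions, and the item ID is hypothetical.

import org.dspace.content.DCValue;
import org.dspace.content.Item;
import org.dspace.core.Context;

// Illustrative sketch only: print the qualified Dublin Core record of one item.
// API names follow the DSpace 1.x Java API and may differ in other releases.
public class DumpMetadata {
    public static void main(String[] args) throws Exception {
        Context context = new Context();
        Item item = Item.find(context, 42); // 42 is a hypothetical internal item ID
        // Item.ANY wildcards match every element, qualifier and language.
        for (DCValue dc : item.getMetadata("dc", Item.ANY, Item.ANY, Item.ANY)) {
            String field = "dc." + dc.element
                    + (dc.qualifier != null ? "." + dc.qualifier : "");
            System.out.println(field + " = " + dc.value);
        }
        context.complete();
    }
}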
Architecture Overview
The DSpace system is organized into three layers, each of which consists of a number of components.
DSpace System Architecture

The storage layer is responsible for physical storage of metadata and content. The business logic layer deals with managing the content of the archive, users of the archive (e-people), authorization, and workflow. The application layer contains components that communicate with the world outside of the individual DSpace installation, for example the web user interface and the Open Archives Initiative protocol for metadata harvesting service.

Each layer only invokes the layer below it; the application layer may not use the storage layer directly, for example. Each component in the storage and business logic layers has a defined public API. The union of the APIs of those components is referred to as the Storage API (in the case of the storage layer) and the DSpace Public API (in the case of the business logic layer). These APIs are in-process Java classes, objects and methods.

It is important to note that each layer is trusted. Although the logic for authorizing actions is in the business logic layer, the system relies on individual applications in the application layer to correctly and securely authenticate e-people. If a 'hostile' or insecure application were allowed to invoke the Public API directly, it could very easily perform actions as any e-person in the system. The reason for this design choice is that authentication methods vary widely between different applications, so it makes sense to leave the logic and responsibility for that in these applications.

The source code is organized to adhere very strictly to this three-layer architecture. Also, only methods in a component's public API are given the public access level. This means that the Java compiler helps ensure that the source code conforms to the architecture.
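A minimal sketch of the trust boundary just described, using DSpace 1.x-era API names (an assumption, since they change between releases): the application layer authenticates the user by whatever means it chooses, then simply asserts the resulting e-person to the business logic layer through the Context object. This is exactly why a hostile application with direct Public API access could act as anyone.

import org.dspace.core.Context;
import org.dspace.eperson.EPerson;

// Sketch of how an application-layer component asserts an authenticated user
// to the business logic layer. API names follow DSpace 1.x and are assumptions.
public class TrustedApplicationExample {
    public static void main(String[] args) throws Exception {
        Context context = new Context();
        // The application layer is trusted to have authenticated this e-person
        // (password check, LDAP, X.509, ...) before making this call; nothing
        // in the Public API re-verifies it.
        EPerson user = EPerson.findByEmail(context, "librarian@example.edu"); // hypothetical account
        context.setCurrentUser(user);
        // From here on, every Public API call is authorized as this e-person.
        context.complete();
    }
}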
Creating Content with DSpace
Adding content to DSpace is very easy to do. This section illustrates the basic steps; a sketch of the equivalent API calls follows the list:
• Choose a collection
• Describe your content item by adding metadata and keywords
• Upload the file(s)
• Verify the submitted item
• Accept the DSpace license
• Find your submitted items in a workflow
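The steps above describe the web interface; for illustration, the same submission can also be driven through the Java API. The sketch below uses DSpace 1.x-era names (WorkspaceItem, InstallItem, addMetadata), which should be treated as assumptions since they differ between DSpace versions; the collection ID, file name and metadata values are hypothetical, and the review workflow is skipped.

import java.io.FileInputStream;

import org.dspace.content.Collection;
import org.dspace.content.InstallItem;
import org.dspace.content.Item;
import org.dspace.content.WorkspaceItem;
import org.dspace.core.Context;

// Illustrative sketch of the submission steps driven through the Java API
// instead of the web UI. Names follow the DSpace 1.x API (an assumption).
public class SubmitItemExample {
    public static void main(String[] args) throws Exception {
        Context context = new Context();
        // Step 1: choose a collection (the internal ID 1 is hypothetical).
        Collection collection = Collection.find(context, 1);
        WorkspaceItem wsi = WorkspaceItem.create(context, collection, false);
        Item item = wsi.getItem();
        // Step 2: describe the item with qualified Dublin Core metadata.
        item.addMetadata("dc", "title", null, "en", "Sample working paper");
        item.addMetadata("dc", "contributor", "author", "en", "Doe, Jane");
        // Step 3: upload the file as a bitstream.
        item.createSingleBitstream(new FileInputStream("paper.pdf"));
        item.update();
        // Steps 4-6: verify, accept the license and archive the item
        // (the review workflow is bypassed in this sketch).
        InstallItem.installItem(context, wsi);
        context.complete();
    }
}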
Deliverables
HP deliverables are as detailed below:

1. Installation of OS and layered components: Installation of the operating system and layered products like Apache web servers, the Sendmail system, Samba services, and infrastructure services like DNS client services.
2. Prerequisite software installation: Installation of prerequisite software for DSpace, which includes various Java patches and layered Java products, like Java activation services, mail API layers etc.
3. Installation of Apache: Installation and configuration of secured (SSL) Apache for use with DSpace. Involves configuration of OpenSSL and Apache with dynamically loadable modules like APR, mod_php and mod_ldap.
4. Installation of Tomcat: Installation and configuration of Tomcat for use with DSpace. Involves integration of Tomcat with Apache using the Apache Portable Runtime and the mod_jk2 connector (a connector which serves servlet and JSP requests to Apache from the backend and lets Apache serve the web requests).
5. Creating sample users and setting permissions: Create sample users with varying access rights for participation in the DSpace digital library.
6. Install DSpace: Install and configure DSpace. This is the base configuration.
7. Configure PostgreSQL: Configure PostgreSQL and initialize persistent objects for DSpace.
8. Configure DSpace: Configure DSpace for usage by the user community.
9. Initialize (run scripts): Automate the startup and shutdown procedure of DSpace.
10. Scan the sample articles provided by the Institution and upload them along with the metadata into DSpace.
11. Training on administration and usage: Administration of DSpace. Demonstrate key concepts in DSpace to user groups.

Duration
The services are expected to be completed in 6 months.
3. Commercials

Hardware and Software

Sl No   Description                       Value
1       DSpace Hardware and Software      9,99,999/-
2       Implementation and Support

Assumptions
1. Additional servers, OS and networking equipment may be required if not already available.
2. LAN and networking equipment such as switches and cabling for the LAN are assumed to be available.
3. Internet connectivity is assumed to be available.
4. An adequate-capacity UPS for the infrastructure is assumed to be available.
5. An LCD projector for training is assumed to be available.
6. A high-speed scanner is available with the colleges.

Terms and Conditions
1. All prices are inclusive of taxes. Prevailing rates of taxes are 2% CST, 4% VAT and 12.5% service tax.
2. The infrastructure suggested is based on certain assumptions listed herein.
3. Books are assumed to be in bound condition. It is assumed that they can be unbound, scanned, rebound and returned.
4. It is assumed that the college will provide the necessary resources for scanning, such as content and manpower.
5. Purchase order may be placed with our Business Partner (Name and Address).