This is the slide deck of the presentation given to the RRAC national group meeting on 10-20-2010. It is a summary of the research efforts in Digital Preservation at ODU.
The document discusses Memento, a framework that integrates archived ("past") web resources with the current web. It proposes using HTTP headers and URIs to enable time-based content negotiation, allowing users to access prior representations of resources via the same URI. This bridges the divide between the current and past web by making navigation to archived versions a normal part of HTTP interactions.
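The time-based negotiation described above happens with ordinary HTTP headers. Below is a minimal sketch, assuming the public Time Travel TimeGate aggregator at timetravel.mementoweb.org (a real third-party service, not part of this deck) is reachable:

```python
# Sketch of Memento datetime negotiation: ask a TimeGate for the
# representation of a resource closest to a desired datetime (RFC 7089).
# Endpoint availability is an assumption; any Memento TimeGate would work.
import requests

TIMEGATE = "http://timetravel.mementoweb.org/timegate/"
original = "http://www.cnn.com/"

resp = requests.get(
    TIMEGATE + original,
    headers={"Accept-Datetime": "Wed, 20 Oct 2010 12:00:00 GMT"},
)

# The TimeGate redirects to the selected memento; requests follows the
# redirect, so the final response is the archived representation.
print(resp.url)                              # URI of the chosen memento
print(resp.headers.get("Memento-Datetime"))  # when it was archived
print(resp.headers.get("Link"))              # rel="original", "timemap", ...
```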
The document discusses the core concepts of the web and semantic web. It explains how linked data uses semantic web standards to identify things with URIs and describe them using attributes. An example is given describing a spacecraft using its URI and attributes like name, mass, launch date, site, and coordinates. Links connect related concepts like the spacecraft and its launch.
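As a rough illustration of that pattern, the sketch below uses Python's rdflib to describe a spacecraft with a URI, attributes, and a link to a related concept; every URI, property name, and value here is invented for the example:

```python
# Illustrative linked-data sketch with rdflib: identify a thing with a URI,
# describe it with attributes, and link it to a related concept.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/schema/")          # hypothetical vocabulary
craft = URIRef("http://example.org/spacecraft/voyager-1")

g = Graph()
g.add((craft, EX.name, Literal("Voyager 1")))
g.add((craft, EX.massKg, Literal(825.5)))
g.add((craft, EX.launchDate, Literal("1977-09-05")))
# A link connects the spacecraft to a related concept: its launch.
g.add((craft, EX.launch, URIRef("http://example.org/launch/titan-iiie-1977")))

print(g.serialize(format="turtle"))
```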
Richard Wallis is a technology evangelist at OCLC. The document discusses how Richard Wallis advocates for linked data and semantic web technologies. It provides examples of how identifiers and URIs can be used to identify things and link their various attributes on the semantic web.
The document discusses key concepts related to the internet and web technologies. It defines common terms like internet, IP address, domain name system, web browser, uniform resource locator. It explains important protocols for transmission and communication like TCP/IP, HTTP, FTP. It also discusses various internet applications and services like search engines, email, social networking sites, blogs, microblogs, real-time communication tools, and malware threats.
Memento: TimeGates, TimeBundles, and TimeMaps (Michael Nelson)
Memento introduces a time dimension to the web by allowing archived ("memento") versions of resources to be accessed via content negotiation on the original resource URI or a "timegate" URI. It defines mechanisms for transparently negotiating the datetime dimension of a resource and discovering all archived versions ("timemap" and "timebundle" URIs) of a given original resource across multiple archives.
The Memento protocol is a lightweight HTTP framework and standard (IETF RFC 7089) for navigating Web clients to past representations of a resource. It has use cases in Web archiving, URI persistence, and Linked Data publishing.
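The TimeMap mentioned above is itself an ordinary HTTP resource. A hedged sketch of fetching and parsing one, using the Wayback Machine's public TimeMap endpoint (a real service; the parsing here is deliberately rough):

```python
# Sketch: fetch a TimeMap (served as application/link-format per RFC 7089)
# listing every capture an archive holds for one original URI.
import re
import requests

timemap = requests.get(
    "http://web.archive.org/web/timemap/link/http://cnn.com/",
    timeout=30,
).text

# Each memento entry looks like: <memento-uri>; rel="memento"; datetime="..."
for line in timemap.splitlines():
    if 'rel="memento"' in line:
        uri = re.search(r"<([^>]+)>", line).group(1)
        dt = re.search(r'datetime="([^"]+)"', line)
        print(dt.group(1) if dt else "?", uri)
```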
Summarize Your Archival Holdings With MementoMap (Sawood Alam)
This document summarizes a presentation about using MementoMaps to efficiently route memento lookup requests to appropriate web archives. MementoMaps provide concise summaries of what URIs are held by each archive in order to avoid broadcasting requests to all archives. They can be generated from archive indexes, compacted for size, and published for discovery. Adopting MementoMaps could significantly reduce wasted lookup requests across archives.
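To make the routing idea concrete, here is an illustrative sketch; it mimics MementoMap-style prefix matching over SURT-like URI keys but does not reproduce the published file format, and the archive names and entries are hypothetical:

```python
# Illustrative sketch of MementoMap-style routing: send a lookup only to
# archives whose summarized holdings could match the URI.
from urllib.parse import urlparse

def surt_key(uri: str) -> str:
    """Reduce a URI to a SURT-like key, e.g. 'com,cnn)/world/story'."""
    p = urlparse(uri)
    host = ",".join(reversed(p.netloc.split(".")))
    return f"{host}){p.path or '/'}"

# Hypothetical per-archive summaries: URI-key prefix -> approximate count.
memento_maps = {
    "archive-a": {"com,cnn)/": 54000, "gov,loc)/": 1200},
    "archive-b": {"uk,ac,": 90000},
}

def candidate_archives(uri: str):
    key = surt_key(uri)
    return [name for name, entries in memento_maps.items()
            if any(key.startswith(prefix) for prefix in entries)]

print(candidate_archives("http://cnn.com/world/story"))  # ['archive-a']
```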
The document provides an overview of browser-based digital preservation including:
- The current state of digital preservation which relies on web crawlers and archives like the Internet Archive. However, this approach is insufficient for preserving pages that are not popular, behind authentication, or use complex JavaScript.
- The requirements for new software to directly capture and preserve web pages from within the browser in order to address the limitations of current archival approaches.
- A proposed system called "WARCreate" that would leverage the Chrome extension API to capture web pages and resources and generate WARC files for preservation while maintaining the original browsing context.
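WARCreate itself is browser-side JavaScript built on the Chrome extension API; purely to illustrate the kind of WARC record such a tool emits, here is a minimal sketch using the (separate, real) Python warcio library:

```python
# Sketch: write a single captured page as a WARC response record.
from io import BytesIO
from warcio.warcwriter import WARCWriter
from warcio.statusandheaders import StatusAndHeaders

with open("capture.warc.gz", "wb") as out:
    writer = WARCWriter(out, gzip=True)
    http_headers = StatusAndHeaders(
        "200 OK", [("Content-Type", "text/html")], protocol="HTTP/1.0")
    record = writer.create_warc_record(
        "http://example.com/",            # URI as seen in the browser
        "response",
        payload=BytesIO(b"<html>...captured page...</html>"),
        http_headers=http_headers)
    writer.write_record(record)
```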
The document introduces Memento, a framework that integrates past and present versions of web resources to enable time travel on the web. Memento achieves this through linking between original resources and their archived past versions, as well as between versions. This allows navigation between versions through standard HTTP requests. The framework has tools for both clients and servers and aims to preserve important parts of our cultural record that would otherwise be lost as web pages change and disappear over time.
Delivered by Richard Wincewicz at Open Repositories OR2015, Indianapolis, IN, USA, June 2015.
An introduction to "Reference or Link Rot", the evidence for the extent of the problem, and remedies proposed by the Hiberlink project.
This presentation provides an overview of the Memento "Time Travel for the Web" framework that is aligned with the stable version of the Memento protocol, specified in RFC 7089.
Introduction to Web Programming - first course (Vlad Posea)
The document provides an introduction to a web programming course, outlining its objectives, what students will learn, and how they will be evaluated. Key points covered include:
- Students will understand web applications and develop basic skills in HTML, CSS, JavaScript.
- Evaluation will be based on exam scores, lab work, and individual study demonstrating understanding and skills.
- The course will cover the history of the web, how the HTTP protocol works, and core frontend technologies.
Introducing Web Archiving and WSDL Research Group (Sawood Alam)
My talk introducing Web Archiving and the Web Science and Digital Libraries Research Group to invited students from India for a summer workshop at Old Dominion University, Norfolk, VA.
Presented by Michele C. Weigle, June 4, 2015
Columbia University Web Archiving Collaboration: New Tools and Models
Work by Yasmin AlNoamany, Michele C. Weigle, and Michael L. Nelson
Avoiding Spoilers On MediaWiki Fan Sites Using Memento (Shawn Jones)
A variety of fan-based wikis about episodic fiction (e.g., television shows, novels, movies) exist on the World Wide Web. These wikis provide a wealth of information about complex stories, but readers who are behind in their viewing run the risk of encountering "spoilers" -- information that gives away key plot points before the time intended by the show's writers. Enterprising readers might browse the wiki in a web archive so as to view the page as it existed prior to a specific episode's date and thereby avoid spoilers. Unfortunately, due to how web archives choose the "best" page, it is still possible to see spoilers (especially in sparse archives).
In this presentation we highlight the issues with avoiding spoilers using Memento. We show that for a sample of fan wiki pages there is as much as a 66% chance of encountering a spoiler. We also find, using logs from the Internet Archive, that 19% of actual requests to the Wayback Machine for wikia.com end in spoilers. We suggest a different heuristic for use with wikis and unveil the Memento MediaWiki Extension as a solution.
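A minimal sketch of the contrast between the usual "nearest memento" selection and the wiki-friendly heuristic suggested above (the TimeMap data is made up):

```python
# Given a TimeMap as (datetime, uri) pairs, prefer the latest memento at or
# BEFORE the requested date, rather than the temporally nearest one, which
# may land after the episode aired and reveal a spoiler.
from datetime import datetime

def nearest(timemap, target):            # typical archive behavior
    return min(timemap, key=lambda m: abs(m[0] - target))

def latest_before(timemap, target):      # spoiler-safe behavior for wikis
    earlier = [m for m in timemap if m[0] <= target]
    return max(earlier, key=lambda m: m[0]) if earlier else None

timemap = [
    (datetime(2012, 5, 1), "memento-before-episode"),
    (datetime(2012, 6, 4), "memento-after-episode"),
]
target = datetime(2012, 6, 1)
print(nearest(timemap, target))        # may return the post-episode page
print(latest_before(timemap, target))  # returns the pre-episode page
```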
This document discusses various strategies and resources for archiving internet content for research purposes. It describes several existing large-scale web archives like the Internet Archive and Common Crawl, as well as national and institutional archives. It also outlines how researchers can collect targeted web archives using open-source tools or subscription-based services.
This document summarizes research into discovering lost web pages using techniques from digital preservation and information retrieval. Key points include:
- Web pages are frequently lost due to broken links or content being moved/removed, but copies may still exist in search engine caches or archives.
- Techniques like lexical signatures (representing a page's content in a few keywords) and analyzing page titles, tags and link neighborhoods can help characterize lost pages and find similar replacement content.
- Experiments showed that lexical signatures degrade over time but page titles are more stable, and combining techniques improves performance in locating replacement content. The goal is to develop a browser extension to help users find lost web pages.
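The lexical-signature technique in the list above can be sketched briefly; this hedged version uses scikit-learn's TF-IDF as a convenience (the original research predates that library):

```python
# Sketch of a lexical signature: the top-k TF-IDF terms of a page, usable
# as a search-engine query to find copies or replacement content.
from sklearn.feature_extraction.text import TfidfVectorizer

def lexical_signature(page_text: str, corpus: list[str], k: int = 7) -> list[str]:
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(corpus + [page_text])
    row = tfidf[len(corpus)].toarray()[0]      # scores for page_text
    terms = vec.get_feature_names_out()
    top = row.argsort()[::-1][:k]              # highest-weighted terms first
    return [terms[i] for i in top if row[i] > 0]
```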
Synchronicity: Just-In-Time Discovery of Lost Web Pages (Michael Nelson)
The document discusses techniques for discovering lost web pages using lexical signatures. It finds that lexical signatures generated from page titles and content evolve over time, with terms dropping out. Signatures perform best with 5-7 terms. Combining titles with signatures provides better discovery results than either alone. Future work includes predicting "good" titles and augmenting signatures with tags and link neighborhoods.
The Memento Protocol and Research Issues With Web Archiving (Michael Nelson)
Michael L. Nelson
Old Dominion University
Web Science & Digital Libraries Research Group
www.cs.odu.edu/~mln/
University of Virginia Colloquium
2016-09-12
This document discusses web decay, which refers to web sources disappearing over time. It provides statistics on the percentage of decayed web citations in various studies. While web sources are commonly cited in scholarly works, their URLs can become invalid if the sites change or are removed. The Wayback Machine and other archives can help recover some decayed sources. When citing web sources, authors should check URLs work and consider archiving pages themselves to mitigate decay issues.
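The advice above can be partially automated. A hedged sketch that checks whether a cited URL still resolves and, failing that, queries the Internet Archive's public availability API (a real endpoint) for a close snapshot:

```python
# Triage a cited URL: try the live web first, then fall back to the
# Internet Archive's availability API for the closest capture.
import requests

def check_citation(url: str, timestamp: str = "20101020"):
    try:
        live = requests.head(url, allow_redirects=True, timeout=10)
        if live.ok:
            return ("live", url)
    except requests.RequestException:
        pass
    r = requests.get("https://archive.org/wayback/available",
                     params={"url": url, "timestamp": timestamp}, timeout=10)
    closest = r.json().get("archived_snapshots", {}).get("closest")
    return ("archived", closest["url"]) if closest else ("lost", None)
```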
Filling in the Blanks: Capturing Dynamically Generated Content (Justin Brunelle)
JCDL 2012 Doctoral Consortium presentation by Justin F. Brunelle. Covers the problem Web 2.0 creates for preservation, and proposes a solution for client-side capture of content.
Scripts in a Frame: A Two-Tiered Crawling Approach to Archiving Deferred Representations (Justin Brunelle)
This document summarizes Justin Brunelle's dissertation defense on archiving deferred web representations using a two-tiered crawling approach. It discusses how current archival tools are unable to fully capture dynamic and interactive web pages that use JavaScript to modify page content after load. The dissertation measures the impact of missing JavaScript resources on memento quality and proposes crawling pages using PhantomJS to execute scripts and archive the complete representation as seen by users. Future work is needed to scale this approach for archiving large portions of the deferred web at risk of being lost to history.
iPRES2015: Archiving Deferred Representations Using a Two-Tiered Crawling Approach (Justin Brunelle)
This document proposes a two-tiered crawling approach using PhantomJS and Heritrix to better archive deferred representations, which are web pages that require JavaScript execution or user interaction to fully render. The approach uses PhantomJS to execute JavaScript and interact with deferred representations, while Heritrix crawls non-deferred representations for better performance. Test results found the PhantomJS frontier was 1.5 times larger but crawling was 10.5 times slower. The approach provides a better method for archiving deferred representations compared to the current workflow.
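PhantomJS is no longer maintained; as a rough modern stand-in for the same first tier, this sketch uses Selenium with headless Chrome to execute a page's JavaScript and collect the post-load (deferred) representation plus the resource URIs a classic crawler like Heritrix would miss:

```python
# Render a page with a headless browser so client-side scripts run, then
# capture the resulting DOM and the URIs of resources those scripts loaded.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opts = Options()
opts.add_argument("--headless=new")
driver = webdriver.Chrome(options=opts)
driver.get("http://example.com/")

rendered_html = driver.page_source          # DOM after JavaScript ran
# Resource URIs gathered via the browser's Performance API.
resource_uris = driver.execute_script(
    "return performance.getEntriesByType('resource').map(e => e.name);")
driver.quit()
```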
Not All Mementos Are Created Equal: Measuring The Impact Of Missing Mementos (Justin Brunelle)
This document discusses measuring the quality of archived webpages or "mementos" when some embedded resources like images or CSS are missing. It found that a "damage rating" approach, which considers factors like the size, position and importance of missing resources, was a better indicator of memento quality than just the percentage of missing resources. A study with Amazon Mechanical Turk users found they agreed more with damage ratings than percentage missing when comparing mementos. The quality of mementos in the Internet Archive was also found to generally improve over time despite more resources being missing.
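A hedged sketch of a damage-rating style measure, weighting each missing embedded resource by illustrative importance factors rather than counting misses uniformly (the weights are invented, not the paper's calibrated values):

```python
# Weight each resource by size, MIME type, and visibility; the damage
# rating is the weighted share of resources that failed to archive.
def _weight(r: dict) -> float:
    type_w = {"css": 3.0, "js": 2.0, "image": 1.0}   # illustrative weights
    fold_w = 2.0 if r["above_fold"] else 1.0          # visible content matters more
    return r["size"] * type_w.get(r["type"], 1.0) * fold_w

def damage_rating(resources: list[dict]) -> float:
    """resources: dicts with 'missing', 'size', 'above_fold', 'type'."""
    total = sum(_weight(r) for r in resources)
    missed = sum(_weight(r) for r in resources if r["missing"])
    return missed / total if total else 0.0
```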
- The document discusses how the author spent a summer vacation studying the archivability of web resources by capturing 1,000 URIs each from Twitter and Archive-It.
- The author analyzes the captured data to measure how completely the resources were archived, finding that only 4.2% of the Twitter resources and 34.2% of the Archive-It resources were perfectly archived, with missing embedded resources, stylesheets, and functionality reducing archivability.
- The author plans future work evaluating the importance of missing elements, measuring damage to archives, and developing methods to predict and potentially improve archivability.
An Evaluation of Caching Policies for Memento TimeMaps (Justin Brunelle)
This document evaluates caching policies for Memento TimeMaps. The authors observed changes to over 4,000 TimeMaps for 92 days and analyzed caching policies based on the changes. They found that TimeMaps either remained unchanged (77.4%) or increased in size through new archives or mementos (22.6%). An optimal TimeMap cache Time-To-Live of 15 days balances freshness with reduced load on archives by caching incrementally improving TimeMaps.
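The policy implied by that finding is easy to sketch; fetch_timemap() below is a hypothetical stand-in for the network call to an archive:

```python
# Serve a cached TimeMap until a Time-To-Live near the study's ~15-day
# optimum expires, then refetch; this trades a little freshness for much
# less load on the archive.
import time

TTL_SECONDS = 15 * 24 * 3600             # ~15 days
_cache: dict = {}                         # uri -> (fetched_at, timemap)

def fetch_timemap(uri: str) -> list:      # hypothetical network fetch
    return []

def get_timemap(uri: str) -> list:
    hit = _cache.get(uri)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]                     # still fresh; spare the archive
    timemap = fetch_timemap(uri)
    _cache[uri] = (time.time(), timemap)
    return timemap
```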
Justin F. Brunelle is a computer scientist who works at The MITRE Corporation and received his BS and MS in computer science from Old Dominion University. He is currently pursuing his PhD in digital preservation from ODU under Dr. Nelson, focusing on ensuring web pages are archived over time. Previously he conducted research in serious games and intelligent tutoring systems.
The document discusses how to find archived web pages from the past using two different services - http://web.archive.org and Memento. It explains that web.archive.org allows searching for and accessing cached pages of a website like CNN from specific past dates, while Memento aims to make navigating archived past web pages easier through its framework and development community.
1. Digital Preservation Research at Old Dominion University
Justin F. Brunelle
The MITRE Corporation
Old Dominion University
(And hopefully MITRE, soon)
2. Why are we listening?
• Overview of the problem
• BRIEF introduction to ODU WSDL group research
• Memento
• I’ll be skipping around, so don’t hesitate to interrupt me
3. Digital Preservation
• Using the past Web
– Focus of our research
• Temporal Browsing
– Sessions in the past
• Recovering Lost Pages
– Is it really gone?
• 404s
– How to fix broken links?
4. Change on the Web
1. The same URI maps to the same or very similar content at a later time.
2. The same URI maps to different content at a later time.
3. A different URI maps to the same or very similar content at the same or a later time.
4. The content cannot be found at any URI.
[Diagram: each case compares URI U1 serving content C1 at time A with what is available at a later time B: (1) U1 still serves C1; (2) U1 serves new content C2; (3) a different URI U2 serves C1; (4) U1 returns 404.]
5. Time to Talk About Saving Everything?
Dinner for one or two costs more than a 1TB disk. Wikis have popularized versioning.
Cool URIs (http://www.w3.org/Provider/Style/URI.html) are widely adopted, e.g.:
http://news.yahoo.com/s/ap/20100920/ap_on_el_se/us_alaska_senate
http://d.yimg.com/a/p/ap/20100918/capt.67567dbc0a874b689f0b4a5c392f379c-67567dbc0a874b689f0b4a5c392f379c-0.jpg
http://d.yimg.com/a/p/afp/20100918/thumb.photo_1284846332993-1-0.jpg
Also related projects with a cool URI / permalink focus:
http://www.citability.org/
http://data.gov/
http://data.gov.uk/
6. Fortress Model
• Get a lot of money
• Buy lots of storage
• Hire lots of people
• “Look upon my archive, ye Mighty, and despair!”
7. Alternate Methods
• Lazy Preservation (McCown)
– “How much preservation do I get if I do absolutely nothing?”
• Just-In-Time Preservation (Klein)
– Wait for it to disappear, then find a “good ‘nuff” version
• Shared Infrastructure Preservation
– Push content to sites that might preserve it
• arXiv.org, IA, WebCite…
• Server Enhanced Preservation
– Create archival-ready resources
8. And Soon…
• Social Preservation
– Preserving resources using 3rd-party Web Services
– Repository for OAI-ORE ReMs
– Social network feel
– Lazy-esque, server-side reconstruction
9. But I digress…
• Few years away…
• Preliminary research
• And now back to the prior research…
24. Finding Archived Resources
Go to http://www.archive.org/ and search for http://cnn.com.
On http://web.archive.org/web/*/http://cnn.com, select the desired datetime.
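The Wayback Machine URI convention shown above can also be composed directly; a 14-digit YYYYMMDDhhmmss timestamp between /web/ and the original URI requests the capture closest to that datetime:

```python
# Build a Wayback Machine URI for the capture nearest a given timestamp.
def wayback_uri(original: str, timestamp: str = "20101020000000") -> str:
    return f"http://web.archive.org/web/{timestamp}/{original}"

print(wayback_uri("http://cnn.com"))
# http://web.archive.org/web/20101020000000/http://cnn.com
```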
27. Current and Past Web are Not Integrated
• The Current and Past Web are based on the same technology.
• But going from the Current to the Past Web is a matter of (manual) discovery.
• Memento wants to make going from the Current to the Past Web an (HTTP) protocol matter.
• Memento wants to integrate the Current and Past Web.
43. What does it all mean?
• Cutting edge technology
• Existing Infrastructure
• Redefining Web surfing
• MAJOR “real world” implications
44. Closing Thoughts
• Preservation is not for a privileged priesthood:
http://doi.acm.org/10.1145/1592761.1592794
http://booktwo.org/notebook/wikipedia-historiography/
• No more hoary stories about format obsolescence:
http://blog.dshr.org/2010/09/reinforcing-my-point.html
• Don’t desiccate resources; leave them on the web.
• Endless metadata is not preservation…
• Archiving as a branded service, not infrastructure:
http://blog.dshr.org/2010/06/jcdl-2010-keynote.html
45. Acknowledgements
• Slides borrowed from:
• Dr. Michael L. Nelson:
– http://www.slideshare.net/phonedude/my-point-of-view-michael-l-nelson-web-archiving-cooperative
– http://www.slideshare.net/phonedude/review-of-web-archiving
– http://www.slideshare.net/phonedude/memento-time-travel-for-the-web
• Martin Klein:
– http://www.slideshare.net/phonedude/synchronicity-justintime-discovery-of-lost-web-pages