Assignment 11 – Quality Control & Delivery
Besser
The Besser reading gave me a good understanding of how
digital (image) files are routinely cross-checked and surveyed.
It seems unfortunate that with big collections, only a percentage
can be worked on at a time. I had no idea that, over time, files
can become altered and corrupted regardless of their usage. Although
I knew files could become corrupt, I never realized that they
could change as well. It was beneficial to learn more
about content delivery, since there are many elements
to consider. Surely, since the time of Besser's writing, there
have been new, all-in-one delivery solutions produced,
especially with so many more cultural heritage institutions now
making their collections available online.
Topic 1: Checksums
Source:
Tikhonov, A. (2019, April). Preservation of Digital Images:
Question of Fixity. Heritage, 2(2), pp. 1160-1165. Retrieved
from https://www.mdpi.com/2571-9408/2/2/75/htm
Abstract:
This article examines the challenges in the approaches used
to maintain the “fixity” of digital images during digital
preservation. A basic requirement in preserving digital
images is to maintain the fixity of each file’s contents. Fixity refers
to the unchanged integrity and authenticity of the original data
as it existed before the file was stored or digitally
preserved. According to the article, the most common way to
maintain fixity is currently through data and file checksums
and/or cryptographic hashes. However, long-term preservation
planning must also account for migrating data to new formats to
avoid obsolescence and maintain availability and sustainability,
and this calls for additional tools to ensure the fixity of
digital images. One issue with digital images is that they do not
exist as tangible objects. A digital image file is a numeric
(bitstream) representation of the image; it is the raw data that
the digital object is made of. In order to discern the image,
users need access to software that will render the actual image
for viewing, along with a monitor, printer or some other device
that displays it appropriately. So, even if the fixity of an
original file is maintained, users cannot overlook the issue of
maintaining the various parts of the infrastructure needed to
present the actual images. The article sheds light on improving
the “relevancy of metrics” used to validate digital images in
long-term preservation by focusing on the data in the files
(rather than the files themselves) to analyze the images. Because
digital objects tend to be fragmented, i.e., raw data is stored
in one place and metadata in another, one solution the article
mentions is to preserve digital images using a “smart archival
package” that will “know” how to represent digital images and
intuitively help maintain their fixity.
Author Credentials:
Alexey Tikhonov is a lead analyst of Yandex.Zen in Yandex,
Inc. (the “Google of Russia”). He has also worked as a programmer,
system architect, e-zine columnist, and tech writer, among other
roles. His interests lie in neural networks on discrete domains,
text parsing, distributed computation, visualization, applied
natural language processing and artificial intelligence (AI).
Intended Reader:
Heritage is an international peer-reviewed open-access journal
of cultural and natural heritage science published quarterly. It is
intended for scientists, cultural heritage professionals, IT
professionals and any other professional involved with
architectural technologies, innovative solutions for natural
heritage protection, research in conservation and recovery of
archaeological heritage, geoscience and earth observation
technologies, etc.
What I learned:
I learned about the importance of checksums and cryptographic
hashes used to ensure digital image fixity in the long-term
preservation process. Fixity refers to the unchanged integrity
and authenticity of the original data that was added before the
file was stored or digitally preserved. I learned that because
a digital file is not tangible, but only a numeric representation
of an object, the image itself cannot be observed directly.
So, in order to perceive the digital image, one needs the
appropriate software and hardware to view it. I learned that
instead of looking at the image itself, it is more useful to
observe the data in the files so that the digital image can be
properly validated while it’s digitally preserved.
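To make fixity concrete for myself, here is a minimal Python sketch (my own, not from the article; the filename is hypothetical) that computes a SHA-256 checksum for an image file. Recording this value at ingest and recomputing it later is the basic mechanism behind the fixity checks the article describes.

```python
import hashlib

def file_checksum(path, algorithm="sha256", chunk_size=1024 * 1024):
    """Compute a checksum for a file, reading it in chunks so that large
    image files never have to be loaded into memory all at once."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value when the file is first ingested; recompute and compare
# it later to confirm the bitstream has not silently changed.
print(file_checksum("scan_0001.tif"))  # hypothetical filename
```

Reading the file in chunks keeps memory use low even for very large master scans.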
Topic 2: Checksums
Source: Digital Preservation Coalition. (2020). Digital
Preservation Manual. DPC. Retrieved from
https://www.dpconline.org/handbook/technical-solutions-
and-tools/fixity-and-checksums
Abstract:
This Digital Preservation Manual describes how checksums
work. According to the Manual, a checksum is a ‘digital
fingerprint’ of a file: even the smallest change to the file
causes the checksum itself to change completely. However, the
checksum does not necessarily reveal where in the file the
change has taken place. Checksums are created using cryptographic
techniques and can be generated with an array of open source tools.
The Manual reveals that checksums have three main uses. They
include: 1) to confirm that a file has been correctly received
from a content owner or source and successfully transferred to
preservation storage; 2) to confirm that a file’s fixity has been
maintained while the file is in storage; 3) to be given to future
users of the file so they know it has been correctly retrieved
from storage and delivered to them. When checksums are applied to
digital preservation, they can be used to monitor the fixity of
each copy of a file, and if a copy has changed, one of the other
copies can be used to create a replacement. A file in which such
a deviation is found is considered corrupt and must be replaced
with a good copy; this process is called “data scrubbing.” Another
reason digital files may change is because they have been
intentionally migrated (to another file format). Since this causes
the checksum to change as well, a new checksum will need to be
put into place once a migration has been implemented. It now
becomes the new checksum that detects file changes (or errors)
moving forward. According to the Digital Preservation Manual,
checksums should ideally be verified regularly, at least once a
year, depending on an institution’s needs. Obviously, the more
often files are checked, the sooner problems can be addressed and
remedied. Checksums are stored in databases, in a PREMIS record,
or in ‘manifests’ that accompany files in storage systems, and
they are often integrated into digital preservation tools. The
Manual also mentions that checksums can be computed with various
algorithms, some of which are ‘stronger’ and better at detecting
file changes.
Author Credentials:
The Digital Preservation Coalition is a UK-based non-profit
limited company that seeks to secure the preservation of digital
resources in the UK and internationally to secure the global
digital memory and knowledge base. The DPC is a consortium
of organizations interested in the preservation of digital
information.
Intended Reader:
The Digital Preservation Manual is intended for those interested
in digital preservation of information. They include commercial,
cultural heritage, educational, governmental, and research
bodies.
What I learned:
I gained a much better understanding of what checksums are and
what they do to ensure that digital files don’t become corrupt.
Basically, checksums detect errors or changes in files that may
have occurred when the file was transferred or stored in digital
preservation. When checksums are applied to digital
preservation, they can be used to monitor the fixity of each
digital copy, and if the file has changed, then one of the other
file copies can be used to create a replacement. I also learned
that digital files change when they are migrated to a different
format. This causes the checksum to change as well, which means
a new checksum must be generated to detect future changes in the
migrated files. I learned that
checksums are stored in databases and other digital preservation
tools and that the algorithms used in checksums can directly
affect how well they work.
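As a rough illustration of the “data scrubbing” workflow the Manual describes, the sketch below (my own; the manifest name and its filename,checksum row format are assumptions) recomputes checksums for stored files and compares them against a previously recorded manifest, flagging any copy that would need to be replaced from a good duplicate.

```python
import csv
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Recompute the SHA-256 checksum of a file in storage."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(manifest_csv: str) -> list[str]:
    """Compare current checksums against a manifest with rows of the
    form: filename,expected_sha256 (one row per preserved file)."""
    corrupted = []
    with open(manifest_csv, newline="") as f:
        for filename, expected in csv.reader(f):
            if sha256(Path(filename)) != expected:
                # Fixity lost: this copy should be replaced from another
                # good copy and its checksum re-recorded.
                corrupted.append(filename)
    return corrupted

print(audit("manifest.csv"))  # hypothetical manifest name
```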
Topic 3: Common Gateway Interface (CGI)
Source:
Both, D. (2017). How to generate webpages using CGI scripts.
OpenSource. Retrieved from
https://opensource.com/article/17/12/cgi-scripts
Abstract:
This article explains how Common Gateway Interface (CGI)
scripts produce dynamic websites. This means that the HTML
(Hypertext Markup Language) used to produce the web page
in a browser can change every time the page is accessed,
producing different content. CGI scripts run on the web server,
between the browser's request and the server's response, and
generate the HTML the browser renders; HTML is the language used
to create web pages and allows content to be presented in
visually engaging ways. The author looks back to the "old days"
of the Internet, when many websites were static and unchanging.
Today, CGI allows web content to be either simple or extremely
complex. The content can be influenced by calculations, user
input, and even the current conditions on the server. CGI scripts
can be written in different languages such as Python, Perl, PHP
and Bash, to name a few. The author provides sample code to
experiment with and concludes that creating CGI programs is
simple and that they can be used to generate a vast array of
dynamic web pages.
Author Credentials:
David Both is an Open Source Software and GNU/Linux
advocate, trainer, writer, and speaker. He is a strong proponent
of “Linux Philosophy.” Both has been in the IT industry for
almost 50 years and has worked for Cisco, MCI Worldcom and
the State of North Carolina. He has taught RHCE classes for
Red Hat and has worked with Linux and open source software for
the last twenty years.
Intended Reader:
OpenSource is owned by Red Hat, a multinational software
company providing open source software products to the
enterprise community. Intended readers and users include
computer programmers, web developers, and IT professionals in
education, government, law, business, health care, and other fields.
What I learned:
I learned what the Common Gateway Interface (CGI) is and how
it helps to make webpages more dynamic. While HTML is used
to write web pages, CGI scripts run on the server, between the
browser and the content, and generate the HTML that is sent back.
This is what produces “dynamic” content that is not static but
changes with each request and can include links, text and images.
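Both's article includes its own sample scripts; the snippet below is just my minimal Python version of the same idea, not his code. A web server configured to execute CGI scripts runs the program on every request, so the timestamp changes each time the page is loaded, which is all “dynamic” really means here.

```python
#!/usr/bin/env python3
# Minimal CGI script: a web server configured for CGI runs this program on
# every request and returns whatever it prints to the browser.
import datetime

# CGI output is HTTP headers, a blank line, then the response body.
print("Content-Type: text/html")
print()
print("<html><body>")
print("<h1>Hello from a CGI script</h1>")
print(f"<p>This page was generated at {datetime.datetime.now()}.</p>")
print("</body></html>")
```

Dropped into a server's cgi-bin directory and made executable, the script regenerates the HTML on every page load.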
Topic 4: Common Gateway Interface (CGI)
Source:
Spector, P. (2003). Introduction to CGI. UC Berkeley. Retrieved
from
https://www.stat.berkeley.edu/~spector/extension/python/notes/
node94.html
Abstract:
This source introduces the concept of the Common Gateway
Interface (CGI) as a “mechanism used to transmit information to
and from your web browser and a web site’s computer.” It
explains that every time a user enters a web address or clicks on
a URL link, the request is sent to an internet computer that then
sends the contents to the web browser. The browser then
renders the HTML (used to write web pages) into the content
expressed as links, text, images, animations and any other
content the developer implemented on the site. So basically,
CGI provides the mechanism by which this information is
exchanged: it connects the web server to an external program or
database and passes information between the two.
Author Credentials:
Phil Spector used to be the Applications Manager and Software
Consultant for the Statistical Computing Facility in the
Department of Statistics at University of California at Berkeley.
He was also an Adjunct Professor in the Statistics department
where he taught Statistical Computing.
Intended Reader:
This resource is intended for UC Berkeley students who are
interested in computer programming, web development, IT,
computer languages and other (computer) technologies.
What I learned:
I learned that the Common Gateway Interface (CGI) is
essentially the link between the web server and other programs
on the computer. It passes requests and data back and forth
between the two to determine what content we see on a webpage.
The HTML that a CGI program generates is what produces dynamic
webpage content such as images, text, links and animations.
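To illustrate the “transmit information to and from” part of Spector's definition, here is a small sketch (mine, not from his notes) of how data a user submits in a URL such as /cgi-bin/greet.py?name=Ada would reach the script through the QUERY_STRING environment variable that the web server sets.

```python
#!/usr/bin/env python3
# Sketch of the request side of CGI: the web server passes the query string
# (everything after "?" in the URL) to the script in an environment variable.
import html
import os
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = html.escape(params.get("name", ["visitor"])[0])

print("Content-Type: text/html")
print()
print(f"<html><body><p>Hello, {name}!</p></body></html>")
```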
UNT Digital Projects Unit
I was impressed with the wide range of projects UNT is
working on. The TDNP (Texas Digital Newspaper Program) project
sounds like a great idea. I really
feel (old) newspapers should be digitized and preserved because
they are part of history, and chances are, their only record is the
newspaper since computers weren’t used to create articles back
in the day. I also like that they list their partnership with the
Portal to Texas History because it links a lot of cultural
institutions and information organizations (such as libraries)
together by allowing them to share and access each other’s
collections and/or resources. I also believe UNT’s open
access repository of Scholarly Works is a brilliant service that
will benefit students all over the world who can access the UNT
(Digital) Library remotely. The Web Archiving project is also a
great idea. I think this is very important as well because it is
capturing American history, even if it is via old, expired
government agency websites. Preserving these old sites and
maintaining them as archives will show future generations a
glimpse of what certain U.S. government organizations were
doing back in the day through their websites. All these projects
have been or are being digitally preserved, which has much to
do with what we’ve covered in this class so far. I think one
additional topic I would add to the “Technology” section would
be storage information.
Standards – The Standards section is very explicit. UNT makes
sure to list each object type and how it is handled in the
scanning process. I think it’s ideal that they include information
about how certain items “should be scanned and digitally
preserved,” and whether that follows a national standard such as
the Library of Congress’s. The Standards section also
provides various examples of how specific objects are captured
including the file formats used, resolution and bit depth to name
a few. All this information helps those who are researching or
those who are interested in this kind of work. We’ve covered
standards in this class, and I like that we are able to explore
how a particular institution utilizes the different standards for
different objects in the preservation process.
Metadata - I really like how UNT explains that they use the
Dublin Core Metadata Schema. I think it’s great that it is fully
explained and includes examples of each element. This is very
beneficial to researchers who don’t understand the schema or
have never conducted a search in the UNT archives. It surely
also benefits individuals like my fellow students and me who are
interested in learning more about metadata and how it is used in
libraries, archives and other cultural institutions or information
organizations to catalog digital collections.
Scanners/Equipment – UNT does a great job of listing all the
scanners used to capture objects for digital preservation. I love
that they explain “how they use” the equipment and their
specific features as well. It is very helpful that they provide
examples for each scanner to give users and researchers an idea
of how they preserve items in the digitization process. I like
that they use a BetterLight Scanning system and after studying
the BetterLight website, I see that UNT spares no expense in
using one of the best scanning systems in the world. We
recently covered scanners and again, it is helpful to see just
how an institution such as UNT makes use of the various types
of scanners, systems and equipment for capturing objects for
digital preservation.
Software/Hardware – Although UNT lists the software they use
including imaging software, they do not list the hardware used
to house the applications. I would presume they use PCs, but
since they likely opt for the best in graphics, resolution and
quality, I would think that they would use Apple hardware. I do
not see any information regarding hardware, though I did see
brief information regarding the “Coda repository system” and
the “Aubrey system”, which apparently, provide access to
digital resources. UNT also lists a Software Development Unit
on staff which “develops and maintains the infrastructure” and
“specialized library applications” (but they do not necessarily
divulge what the infrastructure is). This is also quite
advantageous, because staff can create bespoke applications that
are uniquely suited to various projects.
Quality Control – The only bit of information I found regarding
any form of quality control was in the Digital Curation Unit
area in the About section. It briefly states that they “enhance
discovery and ensure long-term access” but it also states
towards the end that the UNT Digital Curation Unit “generates
tools, procedures, and documents necessary for effective digital
lifecycle management,” which I would assume includes quality
control. Perhaps this is another piece of information they can
make researchers aware of somewhere on the UNT Digital
Projects Unit website.
File Types/Formats – In the Standards section, UNT lists the
file formats they use in the digitization process. Not
surprisingly, it appears that they use TIFF image files, which
are one of the highest quality formats and, as we learned earlier
in the semester, quite versatile. It’s interesting that they use
TIFF for all objects captured such as text, maps and other
documentation. Providing this kind of information helps
researchers and those in the field who may need to know this
for downloading, interoperability, compatibility issues, etc.
Delivery - Under the “About” section, there is a “Display
Information Toolkit” that lists how the collections are described
in order to “attract” users to the “most interesting and important
materials”. The content is delivered with the appropriate
descriptions and accompanying “representative image or
collection icon” as listed under the “Collection” area. UNT lists
that its documentation is delivered and/or displayed in either
PDF or Word file formats. I think overall, UNT does an
exemplary job in delivering its collections as the content is
clear and user-friendly.
The Portal to Texas History
The Portal to Texas History website is simple and
straightforward. It lists most of the information we learned in
this class in the “Technology” section under “Digitization
Practices and Tools”. That section lists most of the tools and
practices used to preserve the collections and provide
accessibility and availability online. I think I would have added
storage information, as I did not see this listed in the Portal
(or UNT’s site). All the links listed in the Portal lead back to
UNT’s Digital Projects Unit site, which demonstrates their combined
efforts to digitize and make Texas history available to the
public.
Metadata – The Portal lists the “Metadata Guidelines” used
which follow the Dublin Core Metadata Schema. Clicking the
“Guidelines” link takes users back to the UNT Libraries
Digital Projects – Metadata section. Here again, UNT provides
all the relevant information on how the Dublin Core is used to
describe items while also using examples of each element.
Scanning Standards – Clicking on the title link takes users back
to the UNT site, which lists the “scanning standards by type of
material”. It provides examples of objects and how they are
captured, including the bit depth, color, resolution, scale and
file format used.
Delivery – Once again, the “Display Information Toolkit” link
in the Texas Portal takes users back to the UNT Digital
Libraries site, where UNT explains how the collection is
delivered and displayed for access. The content is delivered
with relevant descriptions and matching image icons for easy
browsing.
Equipment – This link takes the user to the UNT site section
regarding “Scanners and Scanning Systems”. Again, UNT does
a fine job of explaining the scanners, their features and how
they are used to capture different objects for digitization.
Software – Once more, this link takes the user back to the UNT
Digital Projects Unit website where it lists all the different
image software applications and OCR, or optical character
recognition software used in the digitization process.
Quality Control? – This section and link also lead back to the
UNT Projects website, which explains the “auditing and
methodologies” used in the preservation of and access to UNT’s
content. Perhaps this is part of the quality control process as
individuals “audit” and examine how the digitization and
preservation processes, documentation, access to content and
infrastructure systems are functioning.
Online Archive of California https://oac.cdlib.org/
The Online Archive of California (OAC) is a massive repository
that contains more than 250,000 digital images and documents
about California’s history. The website states that it provides
free access to detailed descriptions of resource collections
maintained by more than 200 contributing institutions including
libraries, special collections, archives, historical societies, and
museums throughout California. The collections are maintained
by the 10 University of California (UC) campuses. The OAC’s
website is quite extensive and jam-packed with information
and links that take users to whichever library or database they
wish to search in California. The site is fairly simple
to navigate, but some users may get overwhelmed by the
plethora of information to sort through. I had to really dig
through countless sections in order to find most of the
information listed below. I did not see a specific section about
the scanning process as the OAC outsources their projects. I
managed to find most of the topics we covered in class, but I
think I would have added a specific section to the website
that contains all the “technology” information used in the
digitization process like UNT does. This would help users grasp
how the digitization process works.
Standards – The OAC has three major content type standards for
digital content including the EAD (Encoded Archival
Description), MARC (Machine Readable Cataloguing) and
METS (Metadata Encoding and Transmission Standard) for
digital objects.
Delivery – OAC’s delivery platform is a CDL-developed,
XML- and XSLT-based system packaged as the eXtensible Text
Framework (XTF). The XTF system contains tools that permit users
to perform Web-based searching and retrieval of electronic
documents. Digital objects are subsequently harvested, published
and delivered in Calisphere, OAC’s companion website.
Metadata – The metadata used for all objects in the repository –
regardless of format – are mapped to the Dublin Core element
set for generalizability and to support cross-collection
discovery. They also use MODS (Metadata Object Description
Schema).
File formats – The OAC’s preferred format choices for images
include TIFF, JPEG, JPEG 2000 and PNG. For texts they use
HTML, XML, PDF/A, UTF-8 and ASCII. For audio preservation
they use AIFF and WAVE file formats. For containers they use
GZIP and ZIP.
Quality Control – The OAC uses MD5, SHA-1 or CRC32 checksums
together with byte-size values in the METS <file> element to
support the orderly transmission and ingest of digital objects
(a small sketch of this kind of check follows this list).
Storage – All digital content is written to remote external
storage for preservation, either to Amazon S3 (at the San Diego
Supercomputer Center) or to DataONE at the University of
New Mexico.
Scanning Systems – It’s unfortunate that the OAC does not
seem to list what scanners or scanning systems they use to
capture materials. It appears that they outsource their scanning
projects to the Southern Regional Library Facility (SRLF) in
Los Angeles. When looking on the SRLF website, I was able to
find some information regarding the digitization process. To
capture photographic negatives, prints and microfilm, they use
state-of-the-art medium-format cameras. They use scanners to
produce high-resolution images intended for long-term digital
preservation. They do not, however, list the kind of scanners
they use. The OAC follows the FADGI guidelines for their
digitization and preservation projects.
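As noted in the Quality Control item above, here is a small sketch (my own; the filename and expected values are hypothetical) of the kind of check OAC's ingest process implies: recomputing a file's MD5 checksum and byte size and comparing them with the values recorded in the METS <file> element.

```python
import hashlib
import os

def verify_ingest(path, expected_md5, expected_size):
    """Return True only if the file's MD5 checksum and byte size both match
    the values recorded for it (e.g., in a METS <file> element)."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return (digest.hexdigest() == expected_md5
            and os.path.getsize(path) == expected_size)

# Hypothetical values, for illustration only.
ok = verify_ingest("object_001.tif", "9e107d9d372bb6826bd81d3542a419d6", 10485760)
print("ingest OK" if ok else "checksum or size mismatch - flag for retransfer")
```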
Week 15
The Final Portfolio Project is a comprehensive assessment of
what you have learned during this course.
The Final Project has two parts: Limitations of Blockchain and
Emerging Concepts.
Blockchain continues to be deployed into various businesses
and industries. However, Blockchain is not without
its problems. Several challenges have already been associated
with the use of this technology. Identify at least 5 key
challenges to Blockchain. Additionally, discuss potential
solutions to these challenges. Lastly, please discuss whether the
limitations of blockchain will be reduced or mitigated in the
future.
There are several emerging concepts that are using Big Data
and Blockchain Technology. Please search the internet and
highlight 5 emerging concepts that are exploring the use
of Blockchain and Big Data and how they are being used.
Conclude your paper with a detailed conclusion section which
discusses both limitations and emerging concepts.
The paper needs to be approximately 6-8 pages long, including
both a title page and a references page (for a total of 8-10
pages). Be sure to use proper APA formatting and citations to
avoid plagiarism.
Your paper should meet the following requirements:
• Be approximately 6-8 pages in length, not including the
required cover page and reference page.
• Follow APA7 guidelines. Your paper should include an
introduction, a body with fully developed content, and a
conclusion.
• Support your answers with the course readings, the course
textbook, and at least four scholarly journal articles from the
UC Library to back up your positions, claims, and observations.
The UC Library is a great place to find resources.
• Be clearly written, concise, and logical, using excellent
grammar and style. You are being graded in part on the quality
of your writing.
1. Read 2 chapters in the Besser text: Quality Control and
Delivery. The reference to CD-ROM error checking (at the end of
the Quality Control chapter) is essentially obsolete. In the
chapter on Delivery, the paragraph on web delivery is definitely
not complete, but this link will give you a quick idea of the
proliferation of scripting languages. Additionally, the brief
discussion about how Google searches websites is definitely NOT
current.
Write about what you learned from these chapters.
Choose 1 topic from each of those readings (2 topics
total). Find 2 authoritative and useful resources that further
your understanding of each topic (4 total). Describe each
source as in previous assignments.
2. Explore the website of the UNT Digital Projects
Unit. Keeping in mind what you have read in the
Besser text, explore this site thoroughly. Summarize
in *detail* what you found regarding metadata, quality control,
hardware and any other topic you have learned about in this
class.
3. Then spend time using the Portal to Texas History and
analyze this image database. List the
attributes (metadata, image standards, etc.) that you looked at
and discuss what you discovered about these attributes.
4. Find one more image database that you consider
exemplary. As in Assignment 5 (#3), critically examine the
database you choose, drawing on the topics we've covered since
then and all you have learned during that time. Write a
description of the site and describe the attributes you looked for
(again, keep in mind what you have learned so far in this class).
Summary of Assignment:
1. Read the assigned chapters Quality Control, and Delivery.
Write about, as usual.
2. Find 4 sources and describe as usual.
3. Explore the Digital Projects Lab site and discuss what you
discover.
4. Explore the Portal to Texas History and analyze the site.
5. Explore an additional image database and analyze the site.

More Related Content

More from bartholomeocoombs

CompetenciesEvaluate the challenges and benefits of employ.docx
CompetenciesEvaluate the challenges and benefits of employ.docxCompetenciesEvaluate the challenges and benefits of employ.docx
CompetenciesEvaluate the challenges and benefits of employ.docx
bartholomeocoombs
 
CompetenciesABCDF1.1 Create oral, written, or visual .docx
CompetenciesABCDF1.1 Create oral, written, or visual .docxCompetenciesABCDF1.1 Create oral, written, or visual .docx
CompetenciesABCDF1.1 Create oral, written, or visual .docx
bartholomeocoombs
 
COMPETENCIES734.3.4 Healthcare Utilization and Finance.docx
COMPETENCIES734.3.4  Healthcare Utilization and Finance.docxCOMPETENCIES734.3.4  Healthcare Utilization and Finance.docx
COMPETENCIES734.3.4 Healthcare Utilization and Finance.docx
bartholomeocoombs
 
Competences, Learning Theories and MOOCsRecent Developments.docx
Competences, Learning Theories and MOOCsRecent Developments.docxCompetences, Learning Theories and MOOCsRecent Developments.docx
Competences, Learning Theories and MOOCsRecent Developments.docx
bartholomeocoombs
 
Compensation, Benefits, Reward & Recognition Plan for V..docx
Compensation, Benefits, Reward & Recognition Plan for V..docxCompensation, Benefits, Reward & Recognition Plan for V..docx
Compensation, Benefits, Reward & Recognition Plan for V..docx
bartholomeocoombs
 
Compensation Strategy for Knowledge WorkersTo prepare for this a.docx
Compensation Strategy for Knowledge WorkersTo prepare for this a.docxCompensation Strategy for Knowledge WorkersTo prepare for this a.docx
Compensation Strategy for Knowledge WorkersTo prepare for this a.docx
bartholomeocoombs
 
Compensation PhilosophyEvaluate the current compensation phi.docx
Compensation PhilosophyEvaluate the current compensation phi.docxCompensation PhilosophyEvaluate the current compensation phi.docx
Compensation PhilosophyEvaluate the current compensation phi.docx
bartholomeocoombs
 
Compensation Evaluation Grading GuideHRM324 Version 42.docx
Compensation Evaluation Grading GuideHRM324 Version 42.docxCompensation Evaluation Grading GuideHRM324 Version 42.docx
Compensation Evaluation Grading GuideHRM324 Version 42.docx
bartholomeocoombs
 
Comparison Paragraph InstructionsTopics Choose two item.docx
Comparison Paragraph InstructionsTopics Choose two item.docxComparison Paragraph InstructionsTopics Choose two item.docx
Comparison Paragraph InstructionsTopics Choose two item.docx
bartholomeocoombs
 

More from bartholomeocoombs (20)

CompetenciesEvaluate the challenges and benefits of employ.docx
CompetenciesEvaluate the challenges and benefits of employ.docxCompetenciesEvaluate the challenges and benefits of employ.docx
CompetenciesEvaluate the challenges and benefits of employ.docx
 
CompetenciesDescribe the supply chain management principle.docx
CompetenciesDescribe the supply chain management principle.docxCompetenciesDescribe the supply chain management principle.docx
CompetenciesDescribe the supply chain management principle.docx
 
CompetenciesABCDF1.1 Create oral, written, or visual .docx
CompetenciesABCDF1.1 Create oral, written, or visual .docxCompetenciesABCDF1.1 Create oral, written, or visual .docx
CompetenciesABCDF1.1 Create oral, written, or visual .docx
 
COMPETENCIES734.3.4 Healthcare Utilization and Finance.docx
COMPETENCIES734.3.4  Healthcare Utilization and Finance.docxCOMPETENCIES734.3.4  Healthcare Utilization and Finance.docx
COMPETENCIES734.3.4 Healthcare Utilization and Finance.docx
 
Competencies and KnowledgeWhat competencies were you able to dev.docx
Competencies and KnowledgeWhat competencies were you able to dev.docxCompetencies and KnowledgeWhat competencies were you able to dev.docx
Competencies and KnowledgeWhat competencies were you able to dev.docx
 
Competencies and KnowledgeThis assignment has 2 parts.docx
Competencies and KnowledgeThis assignment has 2 parts.docxCompetencies and KnowledgeThis assignment has 2 parts.docx
Competencies and KnowledgeThis assignment has 2 parts.docx
 
Competencies and KnowledgeThis assignment has 2 partsWhat.docx
Competencies and KnowledgeThis assignment has 2 partsWhat.docxCompetencies and KnowledgeThis assignment has 2 partsWhat.docx
Competencies and KnowledgeThis assignment has 2 partsWhat.docx
 
Competences, Learning Theories and MOOCsRecent Developments.docx
Competences, Learning Theories and MOOCsRecent Developments.docxCompetences, Learning Theories and MOOCsRecent Developments.docx
Competences, Learning Theories and MOOCsRecent Developments.docx
 
Compensation  & Benefits Class 700 words with referencesA stra.docx
Compensation  & Benefits Class 700 words with referencesA stra.docxCompensation  & Benefits Class 700 words with referencesA stra.docx
Compensation  & Benefits Class 700 words with referencesA stra.docx
 
Compensation, Benefits, Reward & Recognition Plan for V..docx
Compensation, Benefits, Reward & Recognition Plan for V..docxCompensation, Benefits, Reward & Recognition Plan for V..docx
Compensation, Benefits, Reward & Recognition Plan for V..docx
 
Compete the following tablesTheoryKey figuresKey concepts o.docx
Compete the following tablesTheoryKey figuresKey concepts o.docxCompete the following tablesTheoryKey figuresKey concepts o.docx
Compete the following tablesTheoryKey figuresKey concepts o.docx
 
Compensation Strategy for Knowledge WorkersTo prepare for this a.docx
Compensation Strategy for Knowledge WorkersTo prepare for this a.docxCompensation Strategy for Knowledge WorkersTo prepare for this a.docx
Compensation Strategy for Knowledge WorkersTo prepare for this a.docx
 
Compensation PhilosophyEvaluate the current compensation phi.docx
Compensation PhilosophyEvaluate the current compensation phi.docxCompensation PhilosophyEvaluate the current compensation phi.docx
Compensation PhilosophyEvaluate the current compensation phi.docx
 
Compensation Involves designing and implementing compensation, bene.docx
Compensation Involves designing and implementing compensation, bene.docxCompensation Involves designing and implementing compensation, bene.docx
Compensation Involves designing and implementing compensation, bene.docx
 
Compensation Evaluation Grading GuideHRM324 Version 42.docx
Compensation Evaluation Grading GuideHRM324 Version 42.docxCompensation Evaluation Grading GuideHRM324 Version 42.docx
Compensation Evaluation Grading GuideHRM324 Version 42.docx
 
Comparisons of fire prevention programs in effect on this continent .docx
Comparisons of fire prevention programs in effect on this continent .docxComparisons of fire prevention programs in effect on this continent .docx
Comparisons of fire prevention programs in effect on this continent .docx
 
Comparisons of artworks are important in Art History.  They allow us.docx
Comparisons of artworks are important in Art History.  They allow us.docxComparisons of artworks are important in Art History.  They allow us.docx
Comparisons of artworks are important in Art History.  They allow us.docx
 
Comparison or Contrast Paragraph.approximately 200 words,.docx
Comparison or Contrast Paragraph.approximately 200 words,.docxComparison or Contrast Paragraph.approximately 200 words,.docx
Comparison or Contrast Paragraph.approximately 200 words,.docx
 
Comparison Paragraph InstructionsTopics Choose two item.docx
Comparison Paragraph InstructionsTopics Choose two item.docxComparison Paragraph InstructionsTopics Choose two item.docx
Comparison Paragraph InstructionsTopics Choose two item.docx
 
Comparison of Three SculpturesResource Podcast David vs. .docx
Comparison of Three SculpturesResource Podcast David vs. .docxComparison of Three SculpturesResource Podcast David vs. .docx
Comparison of Three SculpturesResource Podcast David vs. .docx
 

2Assignment 11 – Quality Control & Delivery BesserThe

  • 1. 2 Assignment 11 – Quality Control & Delivery Besser The Besser reading gave me a good understanding of how digital (image) files are routinely cross-checked and surveyed. It seems unfortunate with big collections, that only a percentage can be worked on at a time. I had no idea that (over time) files can become altered and corrupted despite their usage. Although, I know files can become corrupt, I never realized that they could change as well. It was beneficial to obtain more information about content delivery as there are many elements to consider. Surely, since the time of the Besser writing, there have been new, all-in-one delivery solutions produced especially, with so many more cultural heritage institutions now making their collections available online. Topic 1: Checksums Source: Tikhonov, A. (2019, April). Preservation of Digital Images: Question of Fixity. Heritage, 2(2), pp. 1160-1165. Retrieved from https://www.mdpi.com/2571-9408/2/2/75/htm Abstract: This article explains the challenges seen in the approaches used to maintain the “fixity” of digital images in the digital preservation process. A basic requirement in preserving digital images is to maintain each file’s contents fixity. Fixity refers to the unchanged integrity and authenticity of the original data that was once administered prior to storing, or digitally preserving the file. Currently, the most common manner to
  • 2. implement fixity maintenance techniques is through data and file checksums and/or cryptographic hashes according to the article. However, to ensure up-to-date formats and to avoid obsolescence, when planning for long-term preservation, the need to migrate data to new formats to maintain availability and sustainability must be taken into account. This calls for additional tools to ensure the fixity of digital images. One issue with digital images is that they do not actually exist. A digital image file is a (bitstream) numeric representation of the image, it is the raw data that the digital object is made of. In order to discern the image, users will need access to the kind of software that will generate the actual image to be viewed by the naked eye. This will also call for a monitor, printer or some other device that helps us to appropriately see the image. So, even if the fixity of an original file in maintained, users cannot overlook the issue of maintaining the various parts in the infrastructure needed to present actual images. The article shines light on improving the “relevancy of metrics” used to validate digital images in long-term preservation by focusing on the data in the files (rather than the files) to analyze the images. Because digital objects tend to be fragmented, i.e., raw data is stored in one place and metadata in another, one solution the article mentions, is to preserve digital images using a “smart archival package”. This will “know” how to represent digital images and intuitively help to maintain their fixity as well. Author Credentials: Alexey Tikhonov is a lead analyst of Yandex.Zen in Yandex, Inc. (the “Google of Russia”). He was also a programmer, system architect, an e-zine columnist, a tech writer among other tittles. His interests lie in neural networks on discrete domains, text parsing, distributed computation, visualization, applied natural language processing and artificial intelligence (AI). Intended Reader: Heritage is an international peer-reviewed open-access journal
  • 3. of cultural and natural heritage science published quarterly. It is intended for scientists, cultural heritage professionals, IT professionals and any other professional involved with architectural technologies, innovative solutions for natural heritage protection, research in conservation and recovery of archaeological heritage, geoscience and earth observation technologies, etc. What I learned: I learned about the importance of checksums and cryptographic hashes used to ensure digital image fixity in the long-term preservation process. Fixity refers to the unchanged integrity and authenticity of the original data that was added before the file was stored or digitally preserved. I learned that because digital files are not tangible, or non-existent really, it is difficult to observe because it is a representation of an object. So, in order to perceive the digital image, one would need to have the basic software and hardware to view it. I learned that instead of looking at the image itself, it is more useful to observe the data in the files so that the digital image can be properly validated while it’s digitally preserved. Topic 2: Checksums Source: Digital Preservation Coalition. (2020). Digital Preservation Manual. DPC. Retrieved fromhttps://www.dpconline.org/handbook/technical-solutions- and-tools/fixity-and-checksums Abstract: This Digital Preservation Manual describes how checksums work. According to the Manual, a checksum is a ‘digital fingerprint’ on a file that detects even the smallest change, causing the checksum itself to completely change. However, the checksum does not necessarily discern where in the file the change has taken place. The way checksums are created are by
  • 4. cryptographic techniques that are generated using an array of open source tools. The Manual reveals that checksums have three main uses. They include: 1)To know that a file has been correctly received from a content owner or source and it successful transfer to preservation storage; 2) To know that a file fixity has been maintained when that file is being stored; 3) To be given to users of the file in the future so they know that the file has been correctly retrieved from storage and delivered to them. When checksums are applied to digital preservation, they can be used to monitor the fixity of each copy (of a file), and if a file has changed, then one of the other file copies can be used to create a replacement. Such a deviation found in a file is known to be a corrupt file which will need to be replaced with a non-corrupt, good file. The process is called “data scrubbing.” Another reason digital files may change is because they have been intentionally migrated (to another file format). Since this causes the checksum to change as well, a new checksum will need to be put into place once a migration has been implemented. It now becomes the new checksum that detects file changes (or errors) moving forward. Depending on an institution’s needs, checksums ideally, should be done regularly at least once a year according to the Digital Preservation Manual. Obviously, the more often files are checked the sooner problems can be addressed and remedied. Checksums are stored in databases, a PREMIS record or in ‘manifests’ that go with files in storage systems. They are often integrated into digital preservation tools. The Manual also mentions that checksums work using various algorithms, making checksums ‘stronger’ and better at detecting file changes. Author Credentials: The Digital Preservation Coalition is a UK-based non-profit limited company that seeks to secure the preservation of digital resources in the UK and internationally to secure the global digital memory and knowledge base. The DPC is a consortium
  • 5. of organizations interested in the preservation of digital information. Intended Reader: The Digital Preservation Manual is intended for those interested in digital preservation of information. They include commercial, cultural heritage, educational, governmental, and research bodies. What I learned: I gained a much better understanding of what checksums are and what they do to ensure that digital files don’t become corrupt. Basically, checksums detect errors or changes in files that may have occurred when the file was transferred or stored in digital preservation. When checksums are applied to digital preservation, they can be used to monitor the fixity of each digital copy, and if the file has changed, then one of the other file copies can be used to create a replacement. I also learned that digital files can also change when the file has been migrated to a different format. This causes the checksum to change which results in having to put another checksum into place to detect new changes in the migrated files. I learned that checksums are stored in databases and other digital preservation tools and that the algorithms used in checksums can directly affect how well they work. Topic 3: Common Gateway Interface (CGI) Source: Both, D. (2017). How to generate webpages using CGI scripts. OpenSource. Retrieved from https://opensource.com/article/17/12/cgi-scripts Abstract: This article explains how Common Gateway Interface (CGI)
  • 6. codes produce dynamic websites. This means that the HTML (Hyper Text Markup Language) used to produce the web page on a browser changes every time the page is accessed, producing different forms of content. CGIs translate the HTML language implemented between a browser and a device. HTML is the language used to create web pages. This allows content to present itself in visually stimulating ways. The author reverts to the “old days” of the Internet when many websites were static and unchanging. Today, CGIs allow web content to be either simple or extremely complex. The content can be influenced by certain calculations, input and even the current conditions in the server. CGIs scripts use different languages such as Python, Perl, PHP and Bash to name a few. The author provides language codes to experiment with, and concludes by stating that creating CGI programs are simple and can be used to generate a vast array of dynamic web pages. Author Credentials: David Both is an Open Source Software and GNU/Linux advocate, trainer, writer, and speaker. He is a strong proponent of “Linux Philosophy.” Both has been in the IT industry for almost 50 years and has worked for Cisco, MCI Worldcom and the State of North Carolina. He has taught RHCE classes for RedHat and for the last twenty years, worked for Linux and Open Source. Intended Reader: OpenSource is owned by Red Hat, a multinational software company providing open source software products to the enterprise community. Intended readers and users include computer programmers, web developers, and IT professionals in education, government, law, businesses, health and life. What I learned: I learned what the Common Gateway Interface (CGI) is and how it helps to make webpages more dynamic. While HTML is used
  • 7. to write web pages, CGIs work between the browser and a computer to translate the language into content. This interpretation is what produces “dynamic” content that is not static but more interactive with links, texts and images. Topic 4: Common Gateway Interface (CGI) Source: Spector, P. (2003). Introduction to CGI. UC Berkeley. Retrieved from https://www.stat.berkeley.edu/~spector/extension/python/notes/ node94.html Abstract: This source introduces the concept of the Common Gateway Interface (CGI) as a “mechanism used to transmit information to and from your web browser and a web site’s computer.” It explains that every time a user enters a web address or clicks on a URL link, the request is sent to an internet computer that then sends the contents to the web browser. The browser then translates the HTML (used to write web pages,) into the content expressed as links, text, images, animations and any other content the developer implemented on the site. So basically, CGI provides the manner in which this information is retrieved. It is what connects the web (server) to an external database (computer) and sends information between the two. Author Credentials: Phil Spector used to be the Applications Manager and Software Consultant for the Statistical Computing Facility in the Department of Statistics at University of California at Berkeley. He was also an Adjunct Professor in the Statistics department where he taught Statistical Computing. Intended Reader: This resource is intended for UC Berkeley students who are
  • 8. interested in computer programming, web development, IT, computer languages and other (computer) technologies. What I learned: I learned that the Common Gateway Interface (CGI) is essentially the link between the web and a computer. It communicates HTML language back and forth between the two to determine how to provide the content we see on a webpage. CGIs translate HTML to produce dynamic webpage content such as images, texts, links and animations. UNT Digital Projects Unit I was impressed with the uniquely different projects UNT is working on. The TDNP project sounds like a great idea. I really feel (old) newspapers should be digitized and preserved because they are part of history, and chances are, their only record is the newspaper since computers weren’t used to create articles back in the day. I also like that they list their partnership with the Portal to Texas History because it links a lot of cultural institutions and information organizations (such as libraries) together by allowing them to share and access each other’s collections and/or resources. I also believe the UNT’s open access repository of Scholarly Works is a brilliant service that will benefit students all over the world who can access the UNT (Digital) Library remotely. The Web Archiving project is also a great idea. I think this is very important as well because it is capturing American history, even if it is via old, expired government agency websites. Preserving these old sites and maintaining them as archives will show future generations a glimpse of what certain U.S. government organizations were doing back in the day through their websites. All these projects have been or are being digitally preserved, which has much to do with what we’ve covered in this class so far. I think one additional topic I would add to the “Technology” section would be storage information.
  • 9. Standards – The Standards section is very explicit. UNT makes sure to list each object type and how it is handled in the scanning process. I think it’s ideal that they add information about how certain items “should be scanned and digitally preserved”, whether it’s following a national standard like the Library of Congress’ standard. The Standards section also provides various examples of how specific objects are captured including the file formats used, resolution and bit depth to name a few. All this information helps those who are researching or those who are interested in this kind of work. We’ve covered standards in this class, and I like that we are able to explore how a particular institution utilizes the different standards for different objects in the preservation process. Metadata - I really like how UNT explains that they use the Dublin Core Metadata Schema. I think it’s great that it is fully explained and includes examples of each element. This is very beneficial to researchers who don’t understand the schema or have never conducted a search in the UNT archives. It surely also benefits individuals like my fellow students and I who are interested in learning more about metadata and how it is used in libraries, archives and other cultural institutions or information organizations to catalog digital collections. Scanners/Equipment – UNT does a great job of listing all the scanners used to capture objects for digital preservation. I love that they explain “how they use” the equipment and their specific features as well. It is very helpful that they provide examples for each scanner to give users and researchers an idea of how they preserve items in the digitization process. I like that they use a BetterLight Scanning system and after studying the BetterLight website, I see that UNT spares no expense in using one of the best scanning systems in the world. We recently covered scanners and again, it is fortunate to see just how an institution such as UNT makes use of the various types of scanners, systems and equipment for capturing objects for digital preservation. Software/Hardware – Although UNT lists the software they use
  • 10. including imaging software, they do not list the hardware used to house the applications. I would presume they use PCs, but since they likely opt for the best in graphics, resolution and quality, I would think that they would use Apple hardware. I do not see any information regarding hardware, though I did see brief information regarding the “Coda repository system” and the “Aubrey system”, which apparently, provide access to digital resources. UNT also lists a Software Development Unit on staff which “develops and maintains the infrastructure” and “specialized library applications” (but they do not necessarily divulge what the infrastructure is). This is also quite conducive to have because staff can create bespoke applications that are uniquely suited for various projects. Quality Control – The only bit of information I found regarding any form of quality control was in the Digital Curation Unit area in the About section. It briefly states that they “enhance discovery and ensure long-term access” but it also states towards the end that the UNT Digital Curation Unit “generates tools, procedures, and documents necessary for effective digital lifecycle management,” which I would assume includes quality control. Perhaps, this is another piece of information they can make researchers aware of somewhere on the UNT Digital Projects Unit website. File Types/Formats – In the Standards section, UNT lists the file formats they use in the digitization process. Not surprising, it appears that they use TIFF image files which are one of the highest quality formats, not to mention, quite versatile as we learned earlier in the semester. It’s interesting that they use TIFF for all objects captured such as text, maps and other documentation. Providing this kind of information helps researchers and those in the field who may need to know this for downloading, interoperability, compatibility issues, etc. Delivery - Under the “About” section, there is a “Display Information Toolkit” that lists how the collections are described in order to “attract” users to the “most interesting and important materials”. The content is delivered with the appropriate
  • 11. descriptions and accompanying “representative image or collection icon” as listed under the “Collection” area. UNT lists that its documentation is delivered and/or displayed in either PDF or Word file formats. I think overall, UNT does an exemplary job in delivering its collections as the content is clear and user-friendly. The Portal to Texas History The Portal to Texas History website is simple and straightforward. It lists most of the information we learned in this class in the “Technology” section under “Digitization Practices and Tools”. The following lists most of the tools and practices used to preserve and provide accessibility and availability online. I think I would have added storage information as I did not see this listed in the Portal (or UNT’s site). All the links listed in the Portal revert back to UNT’s Digital Projects Unit site as they demonstrate their combined efforts to digitize and make Texas history available to the public. Metadata – The Portal lists the “Metadata Guidelines” used which are the Dublin Core Metadata Schema. When you click on the “Guidelines” link it takes users back to the UNT Libraries Digital Projects – Metadata section. Here again, UNT provides all the relevant information on how the Dublin Core is used to describe items while also using examples of each element. Scanning Standards – Upon clicking on the title link, it takes users back to the UNT site where it lists the “scanning standards by type of material”. It provides examples of objects and how they are captured, what bit depth, color, resolution, scale and file format used. Delivery – Once again, the “Display Information Toolkit” link in the Texas Portal reverts users back to the UNT Digital Libraries site where UNT explains how the collection is delivered and displayed for access. The content is delivered with relevant descriptions and matching image icons for easy browsing.
  • 12. Equipment – This link takes the user to the UNT site section regarding “Scanners and Scanning Systems”. Again, UNT does a fine job of explaining the scanners, their features and how they are used to capture different objects for digitization. Software – Once more, this link takes the user back to the UNT Digital Projects Unit website where it lists all the different image software applications and OCR, or optical character recognition software used in the digitization process. Quality Control? – This section and link also revert to the UNT Projects website where it explains the “auditing and methodologies” used in the preservation of and access to UNT’s content. Perhaps this is part of the quality control process as individuals “audit” and examine how the digitization and preservation processes, documentation, access to content and infrastructure systems are functioning. Online Archive of California https://oac.cdlib.org/ The Online Archive of California (OAC) is a massive repository that contains more than 250,000 digital images and documents about California’s history. The website states that it provides free access to detailed descriptions of resource collections maintained by more than 200 contributing institutions including libraries, special collections, archives, historical societies, and museums throughout California. The collections are maintained by the 10 University of California (UC) campuses. The OAC’a website is quite extensive and jammed pack with information and links that take users to whichever library or database they wish to search in California. The OAC’s website is fairly simple to navigate, but some users may get overwhelmed with the plethora of information to sort through. I had to really dig through countless of sections in order to find most of the information listed below. I did not see a specific section about the scanning process as the OAC outsources their projects. I managed to find most of the topics we covered in class, but I think I would have had a specific section listed on the website that contains all the “technology” information used in the
Online Archive of California
https://oac.cdlib.org/

The Online Archive of California (OAC) is a massive repository that contains more than 250,000 digital images and documents about California’s history. The website states that it provides free access to detailed descriptions of resource collections maintained by more than 200 contributing institutions, including libraries, special collections, archives, historical societies, and museums throughout California. The collections are maintained by the 10 University of California (UC) campuses. The OAC’s website is quite extensive and jam-packed with information and links that take users to whichever California library or database they wish to search. The site is fairly simple to navigate, but some users may feel overwhelmed by the amount of information to sort through. I had to dig through countless sections to find most of the information listed below. I did not see a specific section about the scanning process, as the OAC outsources its scanning projects. I managed to find most of the topics we covered in class, but I would have added a dedicated section to the website containing all of the “technology” information used in the digitization process, as UNT does. This would help users grasp how the digitization process works.

Standards – The OAC supports three major content standards for digital content: EAD (Encoded Archival Description), MARC (Machine-Readable Cataloging), and METS (Metadata Encoding and Transmission Standard) for digital objects.

Delivery – The OAC’s delivery platform is a CDL-developed, XML- and XSLT-based system packaged as the eXtensible Text Framework (XTF). XTF contains tools that let users perform web-based searching and retrieval of electronic documents. Digital objects are subsequently harvested, published, and delivered in Calisphere, the OAC’s companion website.

Metadata – The metadata for all objects in the repository, regardless of format, is mapped to the Dublin Core element set for generalizability and to support cross-collection discovery. The OAC also uses MODS (Metadata Object Description Schema).

File formats – The OAC’s preferred formats for images are TIFF, JPEG, JPEG 2000, and PNG. For text it uses HTML, XML, PDF/A, UTF-8, and ASCII. For audio preservation it uses AIFF and WAVE. For containers it uses GZIP and ZIP.

Quality Control – The OAC records MD5, SHA-1, or CRC32 checksums and byte-size values in the METS file element for the orderly transmission and ingest of digital objects (a minimal fixity-check sketch follows below).

Storage – All digital content is written to remote external storage for preservation, either to Amazon S3 (at the San Diego Supercomputer Center) or to DataONE at the University of New Mexico.
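The Quality Control item above, checksums plus byte size recorded in the METS file element, maps directly onto a simple fixity check of the kind discussed earlier in this course. Here is a minimal sketch, not the OAC’s actual ingest code; the file name is hypothetical, and the computed values would be compared to whatever was recorded before transfer.

import hashlib
import os
import zlib

def fixity_report(path):
    """Recompute MD5, SHA-1, CRC32, and byte size for one file."""
    md5, sha1, crc = hashlib.md5(), hashlib.sha1(), 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
            sha1.update(chunk)
            crc = zlib.crc32(chunk, crc)
    return {
        "MD5": md5.hexdigest(),
        "SHA-1": sha1.hexdigest(),
        "CRC32": format(crc & 0xFFFFFFFF, "08x"),
        "SIZE": os.path.getsize(path),
    }

# Compare against the values recorded before transfer (e.g., in a METS file element).
print(fixity_report("master_image.tiff"))     # hypothetical file name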
Scanning Systems – Unfortunately, the OAC does not seem to list which scanners or scanning systems it uses to capture materials. It appears that it outsources its scanning projects to the Southern Regional Library Facility (SRLF) in Los Angeles. Looking at the SRLF website, I was able to find some information about the digitization process. To capture photographic negatives, prints, and microfilm, the SRLF uses state-of-the-art medium-format cameras, and it uses scanners to produce high-resolution images intended for long-term digital preservation. It does not, however, list the specific scanners it uses. The OAC follows the FADGI guidelines for its digitization and preservation projects.

Week 15

The Final Portfolio Project is a comprehensive assessment of what you have learned during this course. The Final Project has two parts: Limitations of Blockchain and Emerging Concepts.

Blockchain continues to be deployed in various businesses and industries. However, Blockchain is not without its problems; several challenges have already been associated with the use of this technology. Identify at least 5 key challenges to Blockchain. Additionally, discuss potential solutions to these challenges. Lastly, discuss whether the limitations of blockchain will be reduced or mitigated in the future.

There are several emerging concepts that use Big Data and Blockchain Technology. Search the internet and highlight 5 emerging concepts that explore the use of Blockchain and Big Data and how they are being used.

Conclude your paper with a detailed conclusion section that discusses both the limitations and the emerging concepts. The paper needs to be approximately 6-8 pages long, plus a title page and a references page (for a total of 8-10 pages). Be sure to use proper APA formatting and citations to avoid plagiarism.
Your paper should meet the following requirements:
• Be approximately 6-8 pages in length, not including the required cover page and reference page.
• Follow APA7 guidelines. Your paper should include an introduction, a body with fully developed content, and a conclusion.
• Support your answers with the readings from the course, the course textbook, and at least four scholarly journal articles from the UC Library to support your positions, claims, and observations. The UC Library is a great place to find resources.
• Be clear, well-written, concise, and logical, using excellent grammar and style techniques. You are being graded in part on the quality of your writing.

1. Read 2 chapters in the Besser text: Quality Control and Delivery. The reference to CD-ROM error checking (at the end of the Quality Control chapter) is essentially obsolete. In the chapter on Delivery, the paragraph on web delivery is definitely not complete, but this link will give you a quick idea of the proliferation of scripting languages. Additionally, the brief discussion of how Google searches websites is definitely NOT current. Write about what you learned from these chapters. Choose 1 topic from each of those readings (2 topics total). Find 2 authoritative and useful resources that further your understanding of each topic (4 total). Describe each source as in previous assignments.

2. Explore the website of the UNT Digital Projects Unit. Keeping in mind what you have read in the Besser text, explore this site thoroughly. Summarize in *detail* what you found regarding metadata, quality control, hardware, and any other topic you have learned about in this class.
3. Then spend time using the Portal to Texas History and analyze this image database. List the attributes (metadata, image standards, etc.) that you looked at and discuss what you discovered about them.

4. Find one more image database that you consider exemplary. As in Assignment 5 (#3), critically examine the database you choose, drawing on the topics we've covered since then and everything you have learned during that time. Write a description of the site and describe the attributes you looked for (again, keeping in mind what you have learned so far in this class).

Summary of Assignment:
1. Read the assigned chapters Quality Control and Delivery. Write about them, as usual.
2. Find 4 sources and describe them as usual.
3. Explore the Digital Projects Lab site and discuss what you discover.
4. Explore the Portal to Texas History and analyze the site.
5. Explore an additional image database and analyze the site.