KM tools can be categorized into the following types: groupware systems & KM 2.0, intranets & extranets, data warehousing/data mining/OLAP, decision support systems, content management systems, and document management systems. These tools support knowledge discovery, organization, sharing, and decision making by providing functions such as communication, collaboration, data analysis, and content/document publishing and retrieval. Selecting the right KM tools is an important step in implementing a successful KM strategy.
KM Tools PPT
1. KM Tools
The scope of this section is to give the reader an overview of the types of KM tools available on the market today and an understanding of their role in the KM process. This is the most important step, since there are literally thousands of options to choose from. However, in the future, I intend to also take a look at some actual KM tools and present a few reviews.
In this section, I present an overview of the IT-based tools and systems that can help knowledge management (KM) fulfill its goals.
To recap, I have dealt with KM tools throughout the section on tactical management initiatives, outlining their role in knowledge discovery, organization, sharing, and so on. In the section on knowledge management strategy, I presented an article on knowledge management systems implementation, where I stated that IT-based tools, for the most part, fall into one of the following categories:
Groupware systems & KM 2.0
The intranet and extranet
Data warehousing, data mining, & OLAP
Decision support systems
Content management systems
Document management systems
2. Groupware Systems & KM 2.0
Groupware is a term that refers to technology designed to help people collaborate
and includes a wide range of applications. Wikipedia defines three handy
categories for groupware:
Communication tools: Tools for sending messages and files, including email, web
publishing, wikis, file sharing, etc.
Conferencing tools: e.g. video/audio conferencing, chat, forums, etc.
Collaborative management tools: Tools for managing group activities, e.g. project
management systems, workflow systems, information management systems, etc.
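The communication and collaboration categories above can be illustrated with a small sketch: a minimal in-memory wiki page that keeps a revision history, so several collaborators' edits are preserved. The `WikiPage` class and its methods are illustrative inventions for this sketch, not any particular groupware product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WikiPage:
    """A toy wiki page: every edit is stored as a new revision."""
    title: str
    revisions: list = field(default_factory=list)  # (timestamp, author, text)

    def edit(self, author, text):
        # Append rather than overwrite, so earlier contributions stay visible.
        self.revisions.append((datetime.now(timezone.utc), author, text))

    def current(self):
        # The latest revision is the page's current content.
        return self.revisions[-1][2] if self.revisions else ""

    def history(self):
        # Who changed the page, and when -- the audit trail groupware provides.
        return [(ts.isoformat(), author) for ts, author, _ in self.revisions]

# Two collaborators editing the same page
page = WikiPage("KM Tools")
page.edit("alice", "Groupware supports collaboration.")
page.edit("bob", "Groupware supports collaboration across teams.")
print(page.current())  # the latest revision wins; history preserves both
```

The essential groupware idea here is that knowledge is captured as a shared, versioned artifact rather than lost in private files.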
3. The Intranet & Extranet
The intranet is essentially a small-scale version of the internet, operating with similar functionality but existing solely within the firm. Like the internet, the intranet uses network technologies such as Transmission Control Protocol/Internet Protocol (TCP/IP). It allows for the creation of internal networks running common internet applications, so that machines with different operating systems can communicate.
Although it need not be, the intranet is usually linked to the internet, where broader searches are implemented. However, outsiders are excluded through security measures such as firewalls.
Extranet
The extranet is an extension of the intranet to the firm's external network,
including partners, suppliers and so on. The term is sometimes used to refer to a
supplementary system working alongside the intranet or to a part of the intranet
that is made available to certain external users.
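The TCP/IP foundation mentioned above can be sketched in a few lines: a toy client and server exchanging one message over the loopback interface, standing in for an internal intranet service. The host, the one-shot server design, and the `ACK:` reply convention are assumptions made for this example only.

```python
import socket
import threading

def serve_once(host="127.0.0.1"):
    """A one-shot TCP server standing in for an intranet service."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))   # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(b"ACK:" + data)  # acknowledge and echo the request
        conn.close()
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

# A client elsewhere on the internal network talks to the service over TCP/IP
port = serve_once()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello intranet")
reply = cli.recv(1024)
cli.close()
print(reply.decode())  # ACK:hello intranet
```

The same socket mechanics work regardless of each machine's operating system, which is the point made above: TCP/IP is the common language of the internal network.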
4. Warehousing Data: The Data Warehouse, Data Mining, and OLAP
Warehousing data is based on the premise that the quality of a manager's decisions depends, at least in part, on the quality of his information. The goal of storing data in a centralized system is thus to provide managers with the right building blocks for sound information and knowledge. Data warehouses contain information ranging from measurements of performance to competitive intelligence (Tanler, 1997).
5. Warehousing Data: Design and Implementation
anler (1997) identifies three stages in the design and implementation of the data
warehouse. The first stage is largely concerned with identifying the critical success
factors of the enterprise, so as to determine the focus of the systems applied to the
warehouse. The next step is to identify the information needs of the decision makers.
This involves specifying current information gaps and the stages of the
decision-making process (i.e. the time taken to analyze data and arrive at a
decision). Finally, warehousing data should be implemented in a way that ensures
that users understand the benefit early on. The size of the database and the
complexity of the analytical requirements must be determined. Deployment issues,
such as how users will receive the information, how routine decisions must be
automated, and how users with varying technical skills can access the data, must be
addressed.
According to Frank (2002), the success of the implementation of the data warehouse
depends on:
Accurately specifying user information needs
Implementing metadata: Metadata is essentially data about data. This is regarded as
a particularly crucial step. Parankusham & Madupu (2006) outline the different roles
of metadata as including: data characterization and indexing, the facilitation or
restriction of data access, and the determination of the source and currency of data.
6. OLAP
OLAP allows three functions to be carried out.
Query and reporting: Ability to formulate queries without having to use the database programming language.
Multidimensional analysis: The ability to carry out analyses from multiple perspectives. Tanler (1997) provides an
example of a product analysis that can be then repeated for each market segment. This allows for quick
comparison of data relationships from different areas (e.g. by location, time, etc.). This analysis can include
customers, markets, products, and so on.
Statistical analysis: This function attempts to reduce the large quantities of data into formulas that capture the
answer to the query.
OLAP is basically responsible for telling the user what happened to the organization. It thus enhances
understanding reactively, using summarization of data and information.
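The multidimensional analysis described above — the same summary repeated from different perspectives — can be sketched in a few lines. The sales records, dimension names, and figures below are purely hypothetical illustrations, not data from the text:

```python
from collections import defaultdict

# Hypothetical fact records: each row carries several dimensions
# (product, region, quarter) and one measure (revenue).
sales = [
    {"product": "A", "region": "North", "quarter": "Q1", "revenue": 100},
    {"product": "A", "region": "South", "quarter": "Q1", "revenue": 150},
    {"product": "B", "region": "North", "quarter": "Q1", "revenue": 200},
    {"product": "B", "region": "South", "quarter": "Q2", "revenue": 250},
]

def roll_up(rows, dimension, measure="revenue"):
    """Summarize the measure along one dimension (an OLAP roll-up)."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[dimension]] += row[measure]
    return dict(totals)

# The same analysis repeated for different perspectives, as in
# Tanler's product-then-market-segment example:
by_product = roll_up(sales, "product")  # {'A': 250, 'B': 450}
by_region = roll_up(sales, "region")    # {'North': 300, 'South': 400}
```

A real OLAP engine would precompute such aggregates across many dimensions at once; the sketch only shows the idea of slicing the same data by different dimensions.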
What is Data Mining?
This is another process used to try to create useable knowledge or information from data warehousing. Data
mining, unlike statistical analysis, does not start with a preconceived hypothesis about the data, and the
technique is more suited for heterogeneous databases and data sets (Bali et al 2009). Karahoca and Ponce
(2009) describe data mining as "an important tool for the mission critical applications to minimize, filter, extract or
transform large databases or datasets into summarized information and exploring hidden patterns in knowledge
discovery (KD)." The knowledge discovery aspect is emphasized by Bali et al (2009), since the management of this
new knowledge falls within the KM discipline.
It is beyond the scope of this site to offer an in-depth look at the data mining process. Instead, I will present a
very brief overview, and point readers that are interested in the technical aspects towards free sources of
information.
Very briefly, data mining employs a wide range of tools and systems, including symbolic methods and statistical
analysis. According to Botha et al (2008), symbolic methods look for pattern primitives by using pattern description
languages so as to find structure. Statistical methods on the other hand measure and plot important
characteristics, which are then divided into classes and clusters.
Data mining is a very complex process with different process models.
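To make the idea of statistical clustering concrete — finding groups in data without a preconceived hypothesis — here is a deliberately tiny one-dimensional k-means sketch. The transaction amounts and starting centroids are invented for illustration; real data mining tools work on far larger, multidimensional data sets:

```python
def kmeans_1d(values, c1, c2, iterations=10):
    """Minimal k-means sketch: split numeric values into two clusters
    by repeatedly assigning each value to its nearest centroid and
    recomputing the centroids as cluster means."""
    a, b = [], []
    for _ in range(iterations):
        a = [v for v in values if abs(v - c1) <= abs(v - c2)]
        b = [v for v in values if abs(v - c1) > abs(v - c2)]
        if a:
            c1 = sum(a) / len(a)
        if b:
            c2 = sum(b) / len(b)
    return sorted(a), sorted(b)

# Hypothetical transaction amounts; the two natural groups emerge
# from the data itself rather than from a prior hypothesis.
amounts = [12, 14, 13, 95, 102, 99]
low, high = kmeans_1d(amounts, c1=min(amounts), c2=max(amounts))
# low  -> [12, 13, 14]
# high -> [95, 99, 102]
```

This captures the "classes and clusters" step that Botha et al (2008) attribute to statistical methods; pattern-primitive (symbolic) methods work quite differently.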
7. Decision Support Systems
There are several kinds of such systems; however, in this subsection I
will look only at data-driven decision support systems (from now
on referred to solely as decision support systems). The role of these
systems is to access and manipulate data. They usually work with a
data warehouse, use an online analytical processing system (OLAP),
and employ data mining techniques. The goal is to enhance
decision-making and solve problems by working with the manager
rather than replacing him.
An effective decision support system thus requires that the
organization:
Investigates the decisions made within their firm
Compares these decisions with KM activities
Evaluates any current decision support system in light of this
Modifies said system if necessary
8. Content Management Systems
Content management systems are very relevant to knowledge management
(KM) since they are responsible for the creation, management, and distribution
of content on the intranet, extranet, or a website. Content management is a
discipline in itself, so this section will be relatively brief, only outlining the basic
considerations.
A content management system may have the following functions:
Provide templates for publishing: Making publishing easier and more consistent
with existing structure/design.
Tag content with metadata: i.e. allowing the input of data that classifies
content (e.g. keywords) so that it can be searched for and retrieved.
Make it easy to edit content
Version control: Tracking changes to pages and, if necessary, allowing previous
versions to be accessed
Allow for collaborative work on content
Integrated document management systems
Workflow management: Allowing for parallel content development
Provide extensions and plug-ins for increased functionality
Etc.
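One of the functions listed above, version control, can be sketched very simply: every edit is kept, and previous versions remain accessible. The class and its page text below are hypothetical illustrations, not part of any particular CMS product:

```python
class VersionedContent:
    """Sketch of the version-control function of a content
    management system: edits are appended, never overwritten."""

    def __init__(self, initial_text):
        self.versions = [initial_text]

    def edit(self, new_text):
        """Record a new version of the content."""
        self.versions.append(new_text)

    def current(self):
        """Return the latest version."""
        return self.versions[-1]

    def revert(self, version_number):
        """Restore an earlier version (1-based) as a new version,
        so the history itself is never rewritten."""
        self.versions.append(self.versions[version_number - 1])

page = VersionedContent("Welcome to the intranet.")
page.edit("Welcome to the company intranet.")
page.revert(1)  # the original wording becomes current again
```

Appending the reverted text as a new version, rather than deleting history, mirrors how most content management systems keep an auditable trail of changes.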
9. Document Management Systems
Document management systems, as the name implies, are systems
that aid in the publishing, storage, indexing, and retrieval of
documents. Although such systems deal almost exclusively with
explicit knowledge, the sheer volume of documents that an
organization has to deal with makes them useful and in some cases
even mandatory. Often they are a part of content management
systems.
Usually, a document management system will include the following
functions:
Capturing: For paper documents to be usable by the
document management system, they must be scanned in. For
companies with numerous paper documents, this process may be
time consuming and expensive.
Classification using metadata: Metadata (data about data) is used
to identify the document so that it can be retrieved later. It can
include keywords, date, author, etc. The user is often asked to input
this metadata or the system may extract it from the document.
Optical character recognition may be used to identify text on
scanned images.
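The classification-and-retrieval cycle described above can be sketched as a small metadata index. The document ids, keywords, authors, and dates are invented examples; a real system would also handle access rights, full-text search, and OCR output:

```python
from datetime import date

documents = {}

def classify(doc_id, text, keywords, author, created):
    """Attach metadata (keywords, author, date) to a stored document
    so that it can be retrieved later."""
    documents[doc_id] = {
        "text": text,
        "keywords": set(keywords),
        "author": author,
        "created": created,
    }

def retrieve(keyword):
    """Return the ids of all documents tagged with the keyword."""
    return sorted(
        doc_id for doc_id, meta in documents.items()
        if keyword in meta["keywords"]
    )

classify("doc1", "Q3 budget report", {"budget", "finance"},
         "Lee", date(2009, 10, 1))
classify("doc2", "Hiring plan", {"hr", "budget"},
         "Ng", date(2009, 11, 5))
# retrieve("budget") -> ["doc1", "doc2"]
```

As the text notes, this metadata may be typed in by the user or extracted automatically, e.g. via optical character recognition on scanned images.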