1. This document discusses quantifying information and knowledge using information theory and informative matrices.
2. It begins by introducing information theory as a way to quantify abstract concepts like information and knowledge. It then discusses defining data, information, knowledge, and informative systems using concepts from information theory.
3. The key points covered include defining the informative weight, content, and quantity of data and information; how these concepts combine in informative systems; and equations for calculating informative weight, content, and quantity based on the weights of individual data.
This document discusses different theoretical frameworks for understanding the concept of "information". It summarizes Buckland's matrix that categorizes information into tangible/intangible and entity/process. The document then examines various perspectives on understanding information as a thing, process, and knowledge. It outlines quantitative and qualitative models of information as process, and discusses views of information as knowledge.
This document discusses the concept of a "knowbula", which is defined as a 3D envelope that represents all human activities and knowledge in a given field. It proposes that knowbula can be visualized with objects along the x-axis, verb functions along the y-axis, and time along the z-axis. As human activity takes place, it causes incremental changes to the shape and volume of the knowbula as knowledge is gained. The document also discusses how machines can help track changes to objects over time and potentially extrapolate how changes could make objects more desirable.
Chapter 29, The Mind Machine (2017) - Syed V. Ahamed
In this chapter, we take a bold step and propose the unthinkable: the genesis of a Customizable Mind Machine. Thought that stems from the mind is deeply seated in a biological framework of neurons. The biological origin lies in the marvel of evolution over the eons, refined ever faster now than in prior centuries. Three triadic sets of objects (a, b, and c) are ceaselessly at work. At a personal level, (a) mind, knowledge, and machines have been intertwined like inspiration, words, and language since the dawn of human evolution; more recently, (b) technology, manufacturing, and economics have formed a web for (c) wealth, global marketing, and the insatiable needs of humans and civilization. These triadic cycles of nine essential objects of human existence spin quicker and quicker every year. The Internet offers the mind no choice but to leap and soar over history and over the globe; alternatively, the human mind can sink deeper and deeper into ignorance and oblivion. More recently, the artificial intelligence at work in the Internet has challenged the natural intelligence at the cognizance level of the mind to find its way to breakthroughs and innovations.
We integrate functions of the mind with the processing of knowledge in the hardware of machines by freely
traversing the neural, mental, physical, psychological, social, knowledge, and computational spaces. The laws of
neural biology and mind, the laws of knowledge and the social sciences, and finally the laws of physics and mechanics are unique to each space and are executed by distinctive processors. Much as mind rules over matter, the triad of mind, space, and time creates a human-space that rules over the relativistic space of matter, space, and time.
Keywords—Mind, Knowledge, Machines, Technology, Human Needs, Knowledge Windows, Perceptual Spaces
Dodig-Crnkovic - Information and Computation (José Nafría)
This document discusses open system thinking and natural computation from an info-computationalism perspective. It provides background on the author and their research interests in computing paradigms, natural/unconventional computing, information dynamics, and computational aspects of science. Key concepts covered include complexity, emergence, self-organization, generative models, agent-based models, and viewing information and computation as the primary stuff and dynamics of the universe respectively. Examples are given of complexity arising from simplicity and adaptive complex systems.
In this paper we define the notion of the Hybrid Social Learning Network (HSLN). We propose mechanisms for interlinking and enhancing both the practice of professional learning and theories of informal learning. Our approach shows how we employ empirical and design work and a participatory pattern workshop to move from (kernel) theories via Design Principles and prototypes to social machines articulating the notion of an HSLN. We illustrate this approach with the example of Help Seeking for healthcare professionals.
This paper analyzes the concept of "ba", which refers to an enabling context or shared space for knowledge creation. The paper conducted a literature review of 135 papers, 4 dissertations, and 4 books on the topic of ba and related concepts since it was first introduced by Nonaka in 1998. The results identified four major groups of enabling conditions for knowledge processes: social/behavioral, cognitive/epistemic, informational, and business/managerial. These conditions can support different types of ba and knowledge processes at the individual, group, organizational, and inter-organizational levels. The paper proposes a framework using these conditions to design enabling contexts for knowledge organizations.
The document discusses several topics related to the roles of information professionals in a digital world, including:
1) Information professionals need to reexamine their purpose and roles given changes in technology and society, and may need to take on new functions like curating digital collections and providing digital reference services.
2) There is debate around what defines a "digital librarian" and whether these roles are simply extensions of existing librarian work or require new skills like web publishing and multimedia indexing.
3) Accessing information through physical documents is no longer adequate; information professionals must focus on developing skills and knowledge to help others make sense of information in a digital environment.
Indiopedia: a System to Acquire Legal Knowledge from the Crowd Using Games Te... (Instituto i3G)
This document describes a system called Indiopedia that uses crowdsourcing games to acquire legal knowledge from non-experts to expand lightweight ontologies in specific domains. Indiopedia builds on an existing legal knowledge system called Ontojuris that uses ontologies to improve legal search, but ontology building is time-consuming. Indiopedia addresses this by developing crowdsourcing games that turn the ontology expansion process into fun games for non-experts to play and contribute knowledge. The games aim to efficiently acquire semantic structures from many users to develop the legal ontologies in a scalable way. Initial experiments involved users playing games to help expand ontologies related to indigenous legislation.
Is supply chain management important to implement? (Alexander Decker)
This document discusses the importance of supply chain management (SCM) in manufacturing industries in Saudi Arabia. It begins with an introduction to SCM and its importance as a competitive strategy. It then reviews literature that has examined SCM tools and frameworks, core functions, strategies, and factors that affect SCM implementation. The document aims to study the existing SCM in Saudi Arabian industries, identify problematic areas, and propose performance measurement methods to monitor progress and enhance SCM.
Intellectual property rights and economic growth (Alexander Decker)
The document analyzes the relationship between intellectual property rights (IPR) protection and economic growth in developing countries. It finds that stronger IPR protection is positively correlated with economic growth, based on empirical analysis of a cross-country data set from 1995-2011 that controls for factors like investment, political stability, and openness. Previous studies on this topic are reviewed, with mixed findings reported on the impact of IPR on innovation and growth depending on a country's level of development.
This document compares two methods for measuring the tip radius of an atomic force microscope in situ: critical amplitude analysis and capacitance reconstruction. The critical amplitude method analyzes the bi-stability of tip oscillation at different amplitudes to determine tip radius, while capacitance reconstruction decouples short-range and electrostatic forces to calculate tip radius. Results show the critical amplitude method provided more consistent tip radius measurements across five probe tips and was more efficient for very small tip sizes compared to the more time-consuming capacitance method.
Internal attitude survey and workers' commitment in the Nigerian banking industry (Alexander Decker)
This document summarizes a study that examined the relationship between internal attitude surveys and workers' commitment in the Nigerian banking industry. The study found that internal attitude surveys had a significant positive association with workers' continuance commitment and normative commitment, but no significant association with affective commitment. This suggests that when management is aware of employee attitudes and addresses issues, employees will feel obligated to remain with the organization and be less likely to leave due to sunk costs. The document provides background on internal attitude surveys, the three components of workers' commitment, and the potential relationship between surveys and commitment.
This document discusses the need for societies and economies to embrace environmental ethics as a driver for stable, just, and self-sustaining communities worldwide. It notes that current societies face challenges like climate change and ecosystem degradation. The paper recommends adopting ethical duties and virtues focused on positive environmental outcomes. Embracing environmental ethics could help address issues and create more humane and sustainable living conditions for future generations.
Troy Botkins has over 15 years of experience in injection molding, including process engineering, validation, continuous improvement, and automation. He is a Six Sigma Greenbelt trained in Kaizen and 5S methodology. Botkins has held roles at Beach Mold & Tool, Johnson Controls Interiors Manufacturing, and Southern Indiana Plastics, where he launched new tools, maintained process integrity, and led product launches and improvements. He holds an associate's degree from Ivy Tech Community College and a high school diploma.
Social Networks and Well-Being in Democracy in the Age of Digital Capitalism (AJHSSR Journal)
ABSTRACT: The objective of this work is, on the one hand, to study the new competitive forms that correspond to the development of the different markets linked to electronic platforms and social networks on the Internet, and on the other hand, to develop a proposal for social welfare addressing the positive and negative impacts produced by the development of these markets. In the first part, the main social and economic changes inherent to political and social evolution are addressed. The main trends of the market are presented with respect to production and the modalities of information appropriation, in particular the new forms of information asymmetries in the electronic market.
KEYWORDS: Information imperfections; Network Economy; Social Welfare; Democracy; Digital Capitalism.
What Data Can Do: A Typology of Mechanisms
Angèle Christin
International Journal of Communication, Vol. 14 (2020), by Angèle Christin of the Department of Communication, Stanford University, USA, titled "What Data Can Do: A Typology of Mechanisms". Among other things, she is the author of the book "Metrics at Work".
The document discusses information retrieval and summarizes key points about data, information, knowledge, and informatics. It provides definitions of information retrieval and discusses challenges in retrieving information from large, unstructured collections. It also summarizes the scope of informatics as focusing on representing, processing, and communicating information in natural and artificial systems.
The document discusses the hierarchical relationship between data, information, and knowledge. It defines each term and provides examples to illustrate the differences. Data refers to raw unorganized facts, while information is data that has been organized and given context or meaning. Knowledge builds upon information by applying it or putting it to practical use through experiences. Wisdom involves understanding fundamental principles at a deeper systemic level. The key relationship is that data becomes information when given context, and information becomes knowledge when applied or used.
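The hierarchy described above can be illustrated with a minimal, hypothetical sketch; the patients and temperature readings below are invented purely for illustration, not taken from the document.

```python
# A hypothetical sketch of the data -> information -> knowledge progression.

# Data: raw, unorganized facts.
data = [("A", 38.5), ("B", 37.0), ("C", 39.1)]

# Information: the same facts organized and given context
# (who was measured, and in what unit).
information = [{"patient": p, "temperature_c": t} for p, t in data]

# Knowledge: information put to practical use through an
# experience-based rule (readings above 38.0 C warrant follow-up).
knowledge = [r["patient"] for r in information if r["temperature_c"] > 38.0]

print(knowledge)  # patients flagged for follow-up: ['A', 'C']
```

The same facts pass through each stage unchanged; what is added at each step is context, then application.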
This document discusses the information professions and how they are affected by the information society. It addresses commonalities between information professions and examines how they may evolve. Specifically, it explores how information work can be better explained as a discipline through developing a theoretical framework describing its knowledge domain. This would help establish a metacommunity of information professionals with conceptual clarity around their social purpose and responsibilities. The document argues that a profession requires both a disciplinary theoretical base and a clear social role to distinguish it from other occupations.
Presentation given at the HEA Social Sciences learning and teaching summit 'Exploring the implications of ‘the era of big data’ for learning and teaching'.
A blog post outlining the issues discussed at the summit is available via: http://bit.ly/1lCBUIB
Searching for patterns in crowdsourced information (Silvia Puglisi)
This document introduces crowdsourcing and discusses discovering patterns in crowdsourced data. It discusses defining the context of volunteered information on the internet in order to understand relationships between data. A network model is proposed where different types of context define nodes and relationships between context determine edges. Properties of small world networks are discussed including how they could be used to model relationships between crowdsourced data and evaluate data quality. Finally, applications to search ranking, privacy and security are briefly mentioned.
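A small-world network of the kind mentioned above can be sketched with a Watts-Strogatz-style construction: a ring lattice whose edges are occasionally rewired into long-range shortcuts. This is a generic illustration of small-world structure, not the paper's actual model; the reading of nodes as "contexts" and edges as "relationships between contexts" is an assumption.

```python
import random

def small_world_graph(n, k, p, seed=42):
    """Build a Watts-Strogatz-style small-world graph.

    Hypothetically, nodes stand in for pieces of crowdsourced context
    and edges for relationships between contexts. Each of the n nodes
    starts linked to its k nearest ring neighbours; each edge is then
    rewired with probability p, creating the long-range shortcuts that
    give small-world graphs their short average path lengths.
    """
    rng = random.Random(seed)
    # Ring lattice: node i links to i+1 .. i+k//2 (mod n).
    edges = {frozenset((i, (i + j) % n))
             for i in range(n) for j in range(1, k // 2 + 1)}
    rewired = set()
    for a, b in sorted(tuple(sorted(e)) for e in edges):
        if rng.random() < p:
            b = rng.randrange(n)  # reconnect to a random node
        if a != b:                # drop accidental self-loops
            rewired.add(frozenset((a, b)))
    return rewired

g = small_world_graph(n=20, k=4, p=0.1)
print(len(g), "edges")
```

In such a model, data quality could be evaluated by how well a contributed item's context connects to the dense local clusters of the graph.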
1. Contemporary social theories describe how digital technologies have shifted communication away from traditional face-to-face and print media towards functions like social media that follow an escalating logic and erase humanism.
2. The rise of social software, open access, and just-in-time knowledge enabled by the internet challenges traditional structures of gathering and sharing knowledge by allowing information to be more freely distributed and accessed.
3. However, merely distributing information does not necessarily lead to growth of knowledge, as knowledge requires information to be meaningfully modeled and analyzed rather than just believed. The internet allows "downloadable beliefs" but risks knowledge being reduced to beliefs without justification.
Artificial intelligence, machine learning, and databases will play an increasingly important role in the future as vast amounts of data are created every day. To make sense of this data, artificial intelligence systems rely on machine learning algorithms to detect patterns in data stored in databases. Different types of data structures like arrays, lists, stacks, queues and trees are used to organize data in databases to allow for efficient querying and retrieval of information. As more data is generated from various sources around the world, powerful tools will be needed to analyze complex data sets and discover new patterns.
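The data structures named above can be illustrated briefly; the record names and keys below are invented examples, and the sorted-list-plus-binary-search index is a stand-in for the tree-based indexes the paragraph alludes to.

```python
# A hypothetical illustration of stacks, queues, and index lookup.
from collections import deque
import bisect

# Stack (LIFO): the most recently added record comes out first.
stack = ["rec1", "rec2", "rec3"]
last_in = stack.pop()           # removes and returns "rec3"

# Queue (FIFO): records are processed in arrival order.
queue = deque(["rec1", "rec2", "rec3"])
first_in = queue.popleft()      # removes and returns "rec1"

# Sorted index: binary search locates a key in O(log n) comparisons,
# the core idea behind tree-based database indexes.
keys = [3, 8, 15, 21, 42]
pos = bisect.bisect_left(keys, 21)
found = keys[pos]               # 21

print(last_in, first_in, found)
```

The efficiency gap matters at scale: a linear scan over a billion rows touches every row, while a balanced tree or sorted index needs only around thirty comparisons.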
Privacy and related notions such as personally identifiable information (PII) are central concepts in computer science and other fields. The theoretical, technological, and application aspects of PII require a framework that provides an overview and systematic structure for the discipline's topics. This paper develops a foundation for representing information privacy. It introduces a coherent conceptualization of the senses of privacy built upon diagrammatic representation. A new framework is presented based on a flow-based model that includes generic operations performed on PII.
The Origins of Information Science and the International Institute of Bibliog... (Charlley Luz)
The Origins of Information Science and the International Institute of Bibliography/International Federation
for Information and Documentation (FID)
W. Boyd Rayward
1997 John Wiley & Sons, Inc.
This document discusses the field of information science. It defines information science as an interdisciplinary field concerned with analyzing, collecting, storing, retrieving, and disseminating information. Information science incorporates aspects of computer science, as well as fields like library science, communication, management, and social science. The document traces the evolution of information science from focusing on applying computer technology to documents in the 1960s to becoming a broader field that studies the nature, collection, and management of information. Information science is described as an intersection of various disciplines and as being interdisciplinary in nature.
The document discusses the limitations of traditional information and communication theories based solely on data or information flow. It proposes a new field called "Cybersemiotics" that integrates Niklas Luhmann's communication theory with C.S. Peirce's semiotics to provide a trans-disciplinary approach to information, cognition, and communication studies. This new approach aims to address shortcomings in current models by incorporating phenomenological and social aspects of meaning, cognition, and language.
The Origins of Information Science and the International Institute of Bibliography/International Federation
for Information and Documentation (FID)
W. Boyd Rayward
1997 John Wiley & Sons, Inc.
This document discusses the field of information science. It defines information science as an interdisciplinary field concerned with analyzing, collecting, storing, retrieving, and disseminating information. Information science incorporates aspects of computer science, as well as fields like library science, communication, management, and social science. The document traces the evolution of information science from focusing on applying computer technology to documents in the 1960s to becoming a broader field that studies the nature, collection, and management of information. Information science is described as an intersection of various disciplines and as being interdisciplinary in nature.
This document discusses the field of information science. It defines information science as an interdisciplinary field concerned with analyzing, collecting, storing, retrieving, and disseminating information. Information science incorporates aspects of computer science, as well as fields like library science, communication, management, and social science. The document traces the evolution of information science from focusing on applying computer technology to documents in the 1960s to becoming a broader field that studies the nature, collection, and management of information. Information science is described as an intersection of various disciplines and as being interdisciplinary in nature.
The document discusses the limitations of traditional information and communication theories based solely on data or information flow. It proposes a new field called "Cybersemiotics" that integrates Niklas Luhmann's communication theory with C.S. Peirce's semiotics to provide a trans-disciplinary approach to information, cognition, and communication studies. This new approach aims to address shortcomings in current models by incorporating phenomenological and social aspects of meaning, cognition, and language.
The document discusses data, information, and information systems. It defines data as raw facts without meaning on their own, while information is data that has been given meaning through relationships. An information system takes in data as input, processes it, and produces useful outputs along with feedback. The key components of an information system are input, processing, output, and feedback. Manual and computerized information systems are also compared.
Data Science is an interdisciplinary approach that combines computational science, statistics, and domain knowledge to extract meaningful insights from large and complex data. It aims to address challenges posed by the data revolution characterized by big data from diverse sources. There is no single agreed upon definition, but most definitions emphasize applying techniques from computer science, statistics, and the relevant domain area to discover patterns, make predictions, and support decision making from data. Key aspects include developing appropriate methodologies for knowledge discovery, forecasting and decisions using large and diverse data from sources like surveys, social media, sensors and more. The integration of domain knowledge representation with computational and statistical tools is seen as an important novelty that can enhance data analysis and interpretation.
‘Personal data literacies’: A critical literacies approach to enhancing under...eraser Juan José Calderón
‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. Luci Pangrazio
Deakin University, Australia
Neil Selwyn
Monash University, Australia
Abstract
The capacity to understand and control one’s personal data is now a crucial part of living in contemporary society. In this sense, traditional concerns over supporting the development of ‘digital literacy’ are now being usurped by concerns over citizens’ ‘data literacies’. In contrast to recent data safety and data science approaches, this article argues for a more critical form of ‘personal data literacies’ where digital data are understood as socially situated and context dependent. Drawing on the critical literacies tradition, the article outlines a range of salient socio-technical understandings of personal data generation and processing. Specifically, the article proposes a framework of ‘Personal Data Literacies’ that distinguishes five significant domains: (1) Data Identification, (2)
Data Understandings, (3) Data Reflexivity, (4) Data Uses, and (5) Data Tactics. The
article concludes by outlining the implications of this framework for future education and research around the area of individuals’ understandings of personal data.
1. DNA contains complex specified information that is organized in a digital code with syntax, semantics, and pragmatics. This information is used to build proteins and control processes in cells.
2. Modern science has not found a naturalistic explanation for how large amounts of specified digital information could arise. Intelligent design provides a better explanation than evolution by natural selection.
3. The intelligent designer behind DNA would need to be extremely intelligent to create such a complex system that exceeds human engineering abilities, and also have a sense of aesthetics to make nature delightful.
Similar to Information and knowledge valuation using the information theory and informative matrices (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Information and knowledge valuation using the information theory and informative matrices
Information and Knowledge Management
ISSN 2224-5758 (Paper) ISSN 2224-896X (Online)
Vol.3, No.11, 2013
www.iiste.org
Information and Knowledge Valuation using the Information
Theory and Informative Matrices
Juan Carlos Piedra-Calderón 1*, Rubén González Crespo 2, J. Javier Rainer 2
1. Pontifical University of Salamanca, Av. de la Merced 108, 37005, Salamanca, Spain.
2. Pontifical University of Salamanca, Paseo Juan XXIII, 3, 28040, Madrid, Spain.
*E-mail of the corresponding author: juankpc@gmail.com
Acknowledgement: This research is supported by the Society and Information Technology Research Group
(GISTI), Pontifical University of Salamanca.
Abstract
For a more concrete analysis of the quantification of knowledge, the quantification of data and information must first be normalized. Without entering into excessive detail, one must resort to information theory, recognizing three features that data, information and knowledge have in common: weight, content and quantity. The following article is an introduction to, and conceptualization of, the quantification of information and knowledge in the light of the Information Theory (IT), by means of informative matrices and a normalization valid for every IT organization.
Key words: Data, information, knowledge, informative quantity, informative weight, informative content,
informative matrix, information theory, generate.
Starting With The Information Theory
To quantify abstract concepts such as information and knowledge, one must begin with concrete models, such as the Information Theory (Textos Científicos, 2008), which is directly related to these concepts. Nevertheless, in order to monitor this development completely, it is necessary to perform a standardization adapted to this work that will allow a better understanding of the results. This standardization will develop an analogy to find a way to measure the information received from any system, not necessarily a computer-based one (López, 1998).
Since the 1940s (note 1) there have been many normalization proposals, some more complex than others, but most of them take information as a set of mediated communication transmitted by technologies involving several elements, such as receiver, transmitter, channel and interference (Hency, 2011). It is not easy, however, to find a proposal that takes into account only the quantity of information and knowledge gathered from a set of specific data. From a more concrete measurement, one could reach a more definitive conclusion about the knowledge a person can obtain from a processed and given context. For example, if one studies an orange, all that matters is what the orange provides, without taking into account that a tree was previously planted and watered until it grew and bore fruit. This means that what ultimately matters is the information as the unique actor, not as part of a set of actors framed in communication. It also implies that any difference between the original and the received message does not matter, because only the final information that has been received will be processed.
Data
There are many ways to define data, although it is often treated as a synonym of information. According to the Royal Spanish Academy Dictionary, data can be described as follows:
• "Data is a necessary premise to reach the exact knowledge of something" (Diccionario, 1986). Note that in this concept there are three important aspects to underline: history, process (implicit) and knowledge.
• "Data is a knowledge that has no meaning by itself when it is out of context. It is something incomplete that needs a complement, such as other data or an elaboration process that will give it more sense" (De Pablo, 1989).
• "Data are numeric variables, qualitative values, phrases, words, symbols, etc., that bring a real knowledge of the system or the fact that is being studied" (Minguet & Read, 2008).
From these concepts one can underline items such as: data is a premise, it has no meaning by itself, it must be accompanied by other information to put it into context, and it is anything from which someone can get
knowledge. And so one can find plenty of definitions that always mingle the three interlinked concepts, data, information and knowledge, without effectively focusing on what data is.
Any datum by itself can be considered a source of information, but in order to have meaning it must necessarily be associated with a system that contextualizes it. That means it is essential to have another datum and to compare the two, in order to reach a conclusion that gives meaning to that information and produces knowledge.
Data is a basic entity which carries information, but it depends on a feature of contextualization. This means that data as an entity has a value, which corresponds to the informative weight (TorredeBabel, 2012). This weight revolves around its entity, which corresponds to the informative content, and at the same time gives it a context, which corresponds to the informative quantity. From this ontological conception, a definition can be reached that better fits what data really is (TorredeBabel, 2012a):
"Data is an entity whose characteristics are to have informative weight, informative content, and informative quantity, but which by itself has no value or meaning. Because it has not been contextualized, its contribution makes no sense or is null. The combination of data forms an informative system."
While speaking of data, the word information is introduced, referring not to the semantic meaning of the word but rather to the substance that data has by itself, that is to say, its content. One must take into account, though, that both information and knowledge are concepts that have an individual entity, and therefore can be considered as data at a given time.
Data Type
There are two types of data, simple and complex:
• Simple: their characteristics are those of the element itself and do not depend on any other complement.
• Complex: their characteristics are given by the addition of the characteristics of other simple or complex data that complement each other.
Informative Systems
The most basic systems (note 2) always have two data, for example: on and off (appliance), to ring or not to ring (doorbell), dot and dash (Morse), one and zero (binary), to beat or not to beat (heart). If one of these data units happens to be missing, the system is meaningless, because each datum needs a partner to make sense (Singh, 1982).
Another example: given the datum "15", it is easy to think that it is a number that could represent a quantity, but it could be several things because it has not been contextualized. If another data unit is added, "years old", and then a third one, "girl", all these pieces of data together start to form a system that begins to make sense. Therefore, the operation of informative systems is as follows: various data are received which together provide a contextualization, from which one can obtain information and contribute knowledge.
Table 1: Informative weight, informative content and informative quantity of d1, d2 and d3
datum1    datum2       datum3   Information           Knowledge
“15”      “years old”  “girl”   “15 years old girl”   “age”
Informative Weight, Informative Content And Informative Quantity Of A Datum
Assume a datum d1, which contributes a weight, a content and a quantity of information. To contextualize d1, assume it is an orange. This orange has a weight (50 g), a content (the pulp, which consists of several drops of liquid) and a quantity (the juice that would be left after squeezing it).
In the following table it is possible to see the information values that a datum has by itself. There are no units, because the datum does not belong to any system:
Table 2: Informative weight, informative content and informative quantity of a datum.
Datum   Relative individual weight (dp)   Relative individual content (dc)   Relative individual quantity (dt)
d1      1                                 ¿1?                                ¿1?
Information
Following the same procedure as when defining "data", our first resource is the dictionary of the Royal Academy of the Spanish language, from which several concepts of information are analyzed to reach a better understanding of this entity:
• Information is a "communication or acquisition of knowledge that allows one to expand or make more precise the knowledge on a particular subject" (Diccionario, 1986).
• "Information is an elaborated set of data placed in a context so that it has a meaning to someone in a particular time and place" (De Pablo, 1989).
• "Information is a concept by which men represent events and facts" (Minguet & Read, 2008).
From these concepts one can take the following: information is exchanged, in other words, it has to be interpreted in order to reach an understanding. At the same time, this means it would be pointless to have unused information, because there would be no feedback. Additionally, information is a data set under the same context, from which one can extract some knowledge. Out of all this one can say that:
Information is an entity whose characteristics are to have informative weight, informative content and informative quantity within a given context. It is the gathering of several data existing in the same context and forming an informative group.
Information is the key element of knowledge, with the characteristic that it can be grouped with more informative systems within the same context. The orderly combination of informative systems with the same entity features allows it to be part of a larger system, where it behaves as complex data. Information by itself is a simple system, but when it is combined with other information it becomes a complex system.
Informative Weight, Informative Content And Informative Quantity Calculation
Assume two data d1 and d2 which form an informative system and which, for instance, in the computer world could represent a "0" or a "1" (Minguet & Read, 2008); each will have a weight, a content and a quantity of information. Contextualized with two data (e.g., a flour and water mixture): the weight is the addition of the individual weights of each element (e.g., wheat or corn flour; fresh water or sea water), which gives as a result the total system weight; the content is what the combination holds (e.g., the dough); and finally the quantity is what we make of it (e.g., bread).
3.1.1. Informative Weight
The informative weight is the global maximum value of the single or complex data that can by itself form an informative system. To calculate the weight of the data, it is necessary to start from the premise that each one contributes its total weight, that is:
d1 = 1   and   d2 = 1
Keep in mind that although they have the same weight, these represent two completely different data belonging to the same system; therefore d1 will always be different from d2. By associating these two data in a system, the informative weight of this association, called "Ip", can be obtained. The result comes from the addition of the individual weight values of d1 and d2, that is:
Ip = d1 + d2
In general, one obtains the following equation to find the weight of the information of any system:
Ip = ∑_{i=1}^{m} di    (1)
Substituting the values of their weights:
Ip = 1 + 1
The result of the informative weight would be:
Ip = 2 units of informative weight
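As a sketch, equation (1) is simply a sum; the following minimal Python illustration reproduces the example above (the function name is ours, not the paper's):

```python
# Minimal sketch of equation (1): the informative weight Ip of a system
# is the sum of the individual weights of its data.
def informative_weight(data_weights):
    return sum(data_weights)

# The paper's example: two data with d1 = d2 = 1.
print(informative_weight([1, 1]))  # 2 units of informative weight
```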
3.1.2. Informative Content
The informative content is the maximum quantity of information that each permutation series of an informative system can contribute. To find the value for d1 and d2, the following possibilities should be considered:
• The informative content "Ic" can be obtained through the addition of the weights of the data:
Ic = d1 + d2
If the weights of information are the same, it can be simplified to:
Ic = n · Ip    (2)
which in this case, after substituting the values of their weights in (2), leaves:
Ic = 1 + 1 = 2   or   Ic = 2 · (1) = 2
Therefore, the result of the informative content would be:
Ic = 2 units of informative content
• The informative content "Ic" can also be obtained by multiplying the weights of the data, as follows:
Ic = Ip1 · Ip2
If the weights of the information received are the same, it can be simplified to:
Ic = (Ip)^n    (3)
Substituting the values of their weights in (3):
Ic = 1 · 1 = 1   or   Ic = (1)^2 = 1
The result of the informative content would be:
Ic = 1 unit of informative content
As it is not possible that an operation with the same number of data with constant weights gives as a result two different informative contents, it is necessary to find another mathematical form that contains the properties of both addition and multiplication.
There is a property of logarithms which says that "the logarithm of a multiplication is equal to the addition of the logarithms of the factors" (Profesoenlinea, 2011), that is:
log (A · B) = log A + log B
Substituting the values of the example, the general equation to find the informative content would be the following:
Ic = loga (Ip1 · Ip2) = loga Ip1 + loga Ip2
In general, when the informative weights "Ip" are the same, the equation becomes:
Ic = loga (Ip)^n    (4)
Ic = n · loga Ip    (4')
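The logarithm property above, together with the equivalence of (4) and (4'), can be checked numerically; this is a small verification sketch of ours, not part of the paper:

```python
import math

# log(A · B) = log A + log B: the property that reconciles the
# additive and multiplicative forms of the informative content.
A, B = 3.0, 5.0
assert math.isclose(math.log(A * B), math.log(A) + math.log(B))

# Equations (4) and (4'): log_a(Ip^n) = n · log_a(Ip).
n, Ip, a = 2, 2.0, 2.0
assert math.isclose(math.log(Ip ** n, a), n * math.log(Ip, a))
print("checks pass")
```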
Before replacing the weight values of d1 and d2, it is important to know which base "a" of the logarithm is to be used in the operation. To determine it, it is necessary to make an analogy:
• Given that a datum in itself has an informative content equal to 1, and it has been shown that a logarithmic function is necessary to obtain the informative content, then "dc" has to comply with the following:
dc = loga dp
Resorting to the concept of logarithm, "the logarithm of a number to a given base is the exponent to which the base must be raised to obtain the number" (Profesoenlinea, 2011). Solving the equation, the base "a" satisfies:
a^dc = dp
Knowing that the minimum unit that a datum contributes is dc = 1, the final answer for the logarithm base is:
a = dp
This means that the logarithmic base used to obtain the informative quantity given by the information of the received data will depend on the system upon which they are contextualized; that is to say, for a system with two data the base shall be two, and so on (Arrendondo, 2011).
Therefore, to find the value of the informative content of the data pair d1 and d2, the value "Ip" should be replaced; in this case the base of the logarithm is "2" because there are two data:
Ic = log2 (Ip)    (5)
where
Ic = log2 (d1 + d2)
Substituting the values of their weights in (5):
Ic = log2 (1 + 1) = log2 2
The final result is:
Ic = 1 unit of informative content
3.1.3. Informative Quantity
Until now all seems to be going fine under the Information Theory, but in the next step the first change shows up, with the introduction of the new concept of "informative quantity", which is not directly related to the weight of the system but to the informative content (Singh, 1982).
In order to obtain the informative quantity of the data pair, it is necessary to consider the probability of occurrence "P" that the data have. The informative quantity "dt" is obtained by multiplying the probability of occurrence "Pp" of each value by the informative content:
dt = Pp · loga (Ic)    (6)
According to Table 2, the value of "dt" would have to be "1", which is consistent with the law of probabilities that says that "the probability of a certain event is equal to 1". In this case, the datum is an entity that exists by itself as a whole; therefore its probability of being one datum is 100%, or "1":
dt = 1 · (log2 1)
Therefore the result is:
dt = 0
This indicates that Table 2 has to be modified in the part that corresponds to the informative quantity, whose value had been assumed at the beginning for one individual datum, as follows:
Table 3: Informative weight, informative content and informative quantity of a datum.
Datum   Relative individual weight (dp)   Relative individual content (dc)   Relative individual quantity (dt)
d1      1                                 1                                  0
As an example, if this minimum unit of informative quantity "dt" is concretized and contextualized in the computer world, it is known as a "bit".
Now, returning to a minimum system with a set of data d1 and d2: in order to obtain the informative quantity "It", it must be related to the probability of occurrence of the data, as well as to the informative content. Therefore the informative quantity "It" will be the addition of the products of the probabilities "Pc" multiplied by the informative content of the corresponding data, as follows:
It = Pc · (Ic)
Replacing "Ic":
It = Pc · loga (Ip)
Replacing "Ip":
It = Pc · [ loga (d1 + d2) ]
Making a general rule for "It", keeping in mind the possibility that the probabilities of occurrence of the data are different, with logarithmic base "a", the formula is:
It = Pci · loga ( ∑_{i=1}^{m} dpi )    (7)
Since d1 and d2 have the same probability of being received, and cannot be received separately (that is to say, 100% or 1 for each one, given that the contextualization is very wide), replacing their numerical values in (7):
It = 1 · log2 (1 + 1)
The result of the informative quantity would be:
It = 1 unit of informative quantity
This leads to the measuring unit of the informative quantity of systems in general. Contextualized in the world of computers, and grouping them into sets of eight bits, the result is the "byte". That is also known as the unit of information, and it shows that contextualizing data can provide a first sense of it and provide some information (Singh, 1982).
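The one-bit result for the pair of unit-weight data can be reproduced with a short sketch. The base-equals-number-of-data rule follows the text; the helper functions themselves are ours:

```python
import math

# Sketch of equations (5) and (7) for a system of unit-weight data.
# Following the text, the logarithm base equals the number of data.
def informative_content(weights):
    base = len(weights)
    return math.log(sum(weights), base)

def informative_quantity(weights, probability=1.0):
    # Equation (7) with a single, certain probability of occurrence.
    return probability * informative_content(weights)

print(informative_content([1, 1]))   # 1.0 unit of informative content
print(informative_quantity([1, 1]))  # 1.0 unit: one "bit" in the computer world
```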
Finally, an informative system consists in the grouping of permutations of the received data. As a first analysis, it gives as a result an informative weight, an informative content and an informative quantity of the informative system.
3.2. Informative Weight, Informative Content And Informative Quantity Of A Simple System
As previously seen, a pair of data gives limited information; therefore it is necessary to think of a more complex system. Taking the same data d1 and d2 and making permutations with them to form new groups (pairs, in this case), some additional information is obtained, as shown in the following table:
Table 4: Permutations of data d1 and d2.
Permutation “n”   Permutations of data d1 and d2
I                 d1   d1
II                d1   d2
III               d2   d1
IV                d2   d2
The table above shows that with two data it is possible to obtain four data combinations, that is, four pairs of data. This means having more information from the whole set. The next step is to perform operations between the weights of the data, their contents, their quantities and their respective probabilities of occurrence.
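The four pairs of Table 4 are simply the ordered pairs (permutations with repetition) of the two data, which can be generated directly:

```python
from itertools import product

# Table 4 as code: the ordered pairs that two data can form.
data = ["d1", "d2"]
pairs = list(product(data, repeat=2))
for pair in pairs:
    print(pair)

# Four pairs in total: 2^2 = 4 for two data taken in ordered pairs.
assert len(pairs) == len(data) ** 2
```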
3.3. Probabilities And Relative Weights Of The Data In The Permutations Of A System
The values of the probability of occurrence result from the calculation of the relative weights of the individual data. They are used to display the occurrence that a datum has with respect to the others in the same group or permutation where it is located. That is, if two different data are expected but a pair with the same datum is received, the probability of that datum will be 100%. The following table presents the probability of occurrence of each datum, regardless of the data that are expected.
Table 5: Relative probability of data d1 and d2.
Permutation “n”   Individual probability to be only d1   Individual probability to be only d2   Individual relative probability of the permutation
I                 100.00%                                0.00%                                  100.00%
II                50.00%                                 50.00%                                 100.00%
III               50.00%                                 50.00%                                 100.00%
IV                0.00%                                  100.00%                                100.00%
Considering that the addition of the weights of a pair of data is one unit, and that the individual probability that a pair of data is received is 100%, the relative weights of the data are as follows:
Table 6: Relative weight of a pair of data d1 and d2 in their permutation.
Permutation “n”   Relative weights of the pair of data d1 and d2   Weight of the permutation
I                 0.5   0.5                                        1
II                0.5   0.5                                        1
III               0.5   0.5                                        1
IV                0.5   0.5                                        1
Total weight = 4
From the table above it can be deduced that the values of the relative weights will also be equal to the values of their relative probabilities, because if a datum X is expected and a datum Y is received, the relative probability that either one or the other is received is reduced to half of what it would be if only datum X were expected. That is, if d2 is expected and d2 is received, the probability is 100%, but if d2 is expected and d1 is received, then the probability of receiving d2 has been reduced to 50%. In larger systems, the probability of the permutation will be reduced according to the number of data that the group has. This is shown in the following table:
Table 7: Relative probability of a pair of data d1 and d2 in their permutation.
Permutation “n”   Relative probabilities of the pair of data d1 and d2   Permutation probability
I                 50.00%   50.00%                                        100.00%
II                50.00%   50.00%                                        100.00%
III               50.00%   50.00%                                        100.00%
IV                50.00%   50.00%                                        100.00%
3.4. Probability And Absolute Weight Of Data In Permutations In A System
Given that the analysis of relative weights and relative probabilities does not comply with the total weight of the system, which is one unit, and the total probability of the system, which is 100%, it is necessary to find the absolute values of probability and weight for the system. The following premises should be taken into account:
• All permutations form a system.
• The addition of the probabilities of each permutation should result in the probability of the whole system, which is always 100%.
• Having four permutations, the absolute probability of each permutation is the result of dividing the total probability of the system by the number of permutations that the system has.
• The individual absolute probabilities of each datum in the pair are obtained by dividing the probability of the permutation by the number of data in the series, considering whether the datum arrives or not.
To fill the following table, the initial probability of each permutation is first calculated by dividing the total of the system (which is one unit of weight with 100% probability) by the number of permutations (four of them in this example).
Once the value of the probability of each permutation has been obtained, this becomes a new total that must be divided by the number of data the permutation has, taking into account the location where each datum is expected. The probability of obtaining an unexpected datum is zero, which leaves the table as follows:
Table 8: Individual probability of a pair of data d1 and d2.
Permutation “n”   Absolute individual probability of d1   Absolute individual probability of d2   Absolute probability of the permutation
I                 25.00%                                  0.00%                                   25.00%
II                12.50%                                  12.50%                                  25.00%
III               12.50%                                  12.50%                                  25.00%
IV                0.00%                                   25.00%                                  25.00%
Relative probability of the data:   only d1 = 50.00%   only d2 = 50.00%   System probability = 100.00%
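The equal-share split described above can be sketched in a few lines: each of the four permutations takes 25% of the system, divided between the data it contains. The variable names are ours:

```python
from itertools import product

# Sketch of Table 8: each permutation gets an equal share (25%) of the
# system's 100%, split between the data present in the pair.
data = ["d1", "d2"]
pairs = list(product(data, repeat=2))
share = 1.0 / len(pairs)                 # 0.25 per permutation

totals = {d: 0.0 for d in data}
for a, b in pairs:
    if a == b:                           # the pair holds a single datum,
        totals[a] += share               # which takes the whole 25%
    else:                                # the pair holds both data,
        totals[a] += share / 2           # 12.5% each
        totals[b] += share / 2

print(totals)   # the relative probabilities: 0.5 for d1 and 0.5 for d2
```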
For the absolute weights that a specific datum has with regard to another, the same analogy of additions and divisions should be followed. Once the table of probabilities has been obtained, it is only necessary to extrapolate it directly to the concept of weight, as they have a one-to-one ratio (Arrendondo, 2011), obtaining the following table (Quintanilla, 2012):
Table 9: Individual weight of a pair of data d1 and d2.
Permutation “n”   Absolute individual weight of d1   Absolute individual weight of d2   Absolute weight of the permutation
I                 0.25                               0                                  0.25
II                0.125                              0.125                              0.25
III               0.125                              0.125                              0.25
IV                0                                  0.25                               0.25
Relative weight of the data:   relative weight of d1 = 0.5   relative weight of d2 = 0.5   System weight = 1
Finally, to obtain the real absolute values of the entire system, one should take into consideration that the system is always formed by a certain series of permutations, that is, a group of expected information that is more likely than the other permutations. This means that once the division has been made to get the weight values of the permutations, as well as the division for the weight values of each datum, the following must be considered:
• In the box of datum d1, if that same datum is obtained, the value obtained is placed in that box.
• In the box of datum d2, if another datum is obtained, a zero is placed in the box and its value is added to datum d1, which is found in the same position and permutation as d1.
• The value is null when d2 is received in the d1 box, and vice versa (if they are in the same permutation), and the value of this permutation is added to the permutation value of the expected series; in that permutation the individual calculation is repeated.
• For a system of "n" data, the same analogy is followed.
Table 10: Absolute individual weight / probability of data d1 and d2 in the permutation, for the expected series (d1, d2).
Permutation “n”   Expected d1   Expected d2   Absolute weight / probability of d1   Absolute weight / probability of d2   Absolute weight / probability of the permutation
I                 d1            d1            0.250                                 0.00                                  0.250
II                d1            d2            0.250                                 0.250                                 0.500
III               d2            d1            0.00                                  0.00                                  0.00
IV                d2            d2            0.00                                  0.250                                 0.250
System weight / probability:                  0.50                                  0.50                                  1
3.5. Informative System Weight, Calculation
As seen previously, within a given system one can find two kinds of weight with which to calculate the system, absolute and relative:
• Absolute → the real weight of a datum within the data series. The absolute weight of the permutation is the addition of the absolute weights of the data in the series. The addition of the absolute weights results in the absolute weight of the whole.
• Relative → the addition of the absolute weights of the same datum in the same data line where it is expected to be received, as shown in the next table. The addition of the relative weights also results in the absolute weight of the whole.
To differentiate the weight of the system from the rest of the weights within the system, it is represented as "D", and its result is the addition of all the individual weights of each group of system data.
D = ∑_{i=1}^{m} Ip = ∑_{i=1}^{m} dp    (8)
Table 11: Absolute and relative weights of data d1 and d2.
Permutation “n”   Absolute individual weights of data d1 and d2 in the pair   Absolute weight of the information Ip
I                 d1   d1                                                     d1 + d1
II                d1   d2                                                     d1 + d2
III               d2   d1                                                     d2 + d1
IV                d2   d2                                                     d2 + d2
Relative weight of d1 in the system = dp1   Relative weight of d2 in the system = dp2   System weight = D
3.6. Informative System Content, Calculation
To get the absolute value of the informative content of the system, the logarithmic function is applied to each permutation value, as indicated above, and these values are then added to obtain the total result for the system. To differentiate the informative system content from other informative contents, it is represented as "A", and its result is the addition, over all permutations, of the logarithm with base "a" (which depends on the system, but in our case, since the calculations use decimal values, base 10) of the weight of the information of each permutation.
A = ∑_{i=1}^{m} loga Ipi    (9)
Table 12: Informative system content of data d1 and d2.
Permutation “n”   Absolute individual weights of data d1 and d2 in the pair   Absolute informative weight “Ip”   Absolute informative content “Ic”
I                 d1   d1                                                     d1 + d1                            loga Ip1
II                d1   d2                                                     d1 + d2                            loga Ip2
III               d2   d1                                                     d2 + d1                            loga Ip3
IV                d2   d2                                                     d2 + d2                            loga Ip4
Relative system weight of d1 = dp1   Relative system weight of d2 = dp2   System weight = D   System content = A
3.7. Informative System Quantity, Calculation
To obtain the informative quantity of the system, the following values are considered:
• The informative quantity by partial series of the system is represented as "I''", and it is the result of the addition of all the partial informative quantities. This result represents the minimum informative quantity that can be obtained from the system.
I'' = ∑_{i=1}^{m} P · It    (10)
• The informative quantity by partial content of the system is represented as "I'", and it is the result of the multiplication of the probability "P'" by the total informative partial content "A'" of the entire system. This result represents the value of the informative quantity that can be obtained from the system.
I' = P' · A'    (11)
• The informative quantity of the system is represented with the letter "I", and it is the result of the multiplication of the probability "P" of the system by the total informative content "A" of the entire system. This result represents the maximum informative quantity that can be obtained from the system.
I = P · A    (12)
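Equations (8), (9) and (12) can be sketched together for a small system. This is a hedged illustration of ours, following Tables 12 and 13: each permutation i has an absolute weight Ip_i and content Ic_i = log_a(Ip_i); base 10 is used as the text chooses for the system calculation, and the probability P of a certain system is taken as 1:

```python
import math

# Sketch of equations (8), (9) and (12): system weight D, system
# content A, and system quantity I, from per-permutation weights.
def system_totals(permutation_weights, a=10, P=1.0):
    D = sum(permutation_weights)                             # equation (8)
    A = sum(math.log(Ip, a) for Ip in permutation_weights)   # equation (9)
    I = P * A                                                # equation (12)
    return D, A, I

# Four permutations of two unit-weight data, each with Ip = 1 + 1 = 2:
D, A, I = system_totals([2, 2, 2, 2])
print(D)  # 8
```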
Table 13: System informative quantity of data d1 and d2.
Permutation “n”   Absolute individual weights of data d1 and d2 in the pair   Absolute informative weight “Ip”   Absolute informative content “Ic”   Absolute informative quantity “It”
I                 d1   d1                                                     d1 + d1                            loga Ip1                            p · loga Ic1
II                d1   d2                                                     d1 + d2                            loga Ip2                            p · loga Ic2
III               d2   d1                                                     d2 + d1                            loga Ip3                            p · loga Ic3
IV                d2   d2                                                     d2 + d2                            loga Ip4                            p · loga Ic4
Relative system weight of d1 = dp1   Relative system weight of d2 = dp2   System weight = D
System content by partial series = A’   Quantity by partial series = I’’   System content = A   Quantity by partial content = I’   System quantity = I
3.8. Permutation Matrix Types That Can Be Obtained In A System
One thing to keep in mind is that three types of permutation matrices can be found (two of which belong to permutation series and one to a complex series) for the system calculations for "n" data:
• Permutation matrices with absolute data values.
• Permutation matrices with absolute and relative data values.
• Permutation matrices with relative data values.
3.9. Permutation Matrices To Find D, A, I For A System With Absolute Data
For a system with "n" data, D, A and I can be calculated by taking the absolute values of the data; that is, if a system of "n" data is expected, all "n" data are always received, without considering permutations where any data are missing. The number of series of permutations that can be found for this calculation is given by "n · (n – 1)!" permutations, up to (n – 1) = 2.
For the following table, it is considered that the system receives three data (d1, d2 and d3), and so its permutations would be:
n · (n – 1)!    (13)
3 · (2) = 6 permutations
3.10. Permutation Matrices To Find D, A, I For A System With Absolute And Relative Data
For a system with "n" data, D, A and I can be calculated by taking the relative values of the data. If a system of "n" data is expected, it always receives the "n" data, but considering the possibility of combinations in which some of the data do not appear. For this case, the calculation is given by "n · (n – 1)! + nⁿ" permutations.
On the following table, a three-data system is considered (d1, d2 and d3); thus its permutations would be:

n · (n – 1)! + nⁿ    (14)

3 · 2! = 6 full permutations, plus the 21 additional permutations in which some data are repeated or missing: 6 + 21 = 27 permutations in total (3³ = 27)
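Under the reading that the first term counts the full orderings (n · (n – 1)! = n!) and that nⁿ series of length n over the data cover the cases with repeated or missing data, the counts 6 and 27 for three data can be reproduced in a short sketch:

```python
from itertools import permutations, product
from math import factorial

data = ["d1", "d2", "d3"]
n = len(data)

# Full permutations: every datum present exactly once, n * (n - 1)! = n!
full = list(permutations(data))
assert len(full) == n * factorial(n - 1)

# All series of length n over the data, allowing repeats (and hence
# permutations in which some data are missing): n**n in total.
all_series = list(product(data, repeat=n))

print(len(full), len(all_series))   # 6 27
```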
3.11. Permutation Matrices To Find D, A, I For A System With Relative Data
It is also possible to work with data whose certainty is unknown. In this case, all the data worked with are relative, because they do not belong to a formal informative system; it could be said that the starting point is a set of assumptions from which some knowledge is to be obtained. A particularity arises here: since the data are not part of a certain and formal system, the system weight is not the unit, because of the uncertainty of the data. This concept is better explained in the following table:

Table 15: Matrix example with the relative values of d1, d2 and d3

Permutation "n" | d1 | d2 | d3
I | 0.33 | 0.08 | 0.1
II | 0 | 0.7 | 0
III | 0 | 0.21 | 0.17
IV | 0.62 | 0 | 0
V | 0.02 | 0 | 0.5
VI | 0 | 0.04 | 0

3.12. Informative Weight, Informative Content And Informative Quantity, For A Complex System
Certainly, work with simple systems is very limited because of the small amount of information received. However, they are the basis for the calculations of complex systems. For more practical work, a more precise solution is the use of complex system calculations. An example is shown in the following table:
Table 16: matrix of complex systems

"n" Series | 1 | 2 | 3 | 4 | 5
I | John | is | tall | – | –
II | Beer | is | always | a | liquid
III | Lisa | and | Anna | are | friends
IV | He | is | talking | with | others
V | Our | voices | are | listened | –
It is observed that, as in the matrix with relative values, there is no need to make permutations, because these come given by series, each of a different value. These values can be obtained in two ways: using simple systems or adapted systems.
• Through simple systems → values that fit any system can be obtained; that is, the value for each word can be calculated according to its position in the system, and also according to the probability of appearance of each letter compared with the other letters of the alphabet, which would be very complicated, tedious and of little use.
For the following example, it has been taken into account that information, once analyzed, can be transformed again into data; therefore its informative quantity becomes the informative weight to be analyzed by the new system. For this example, the minimum amount "I" will always be taken.
Every word in the following table then represents a simple system that has to be analyzed by performing permutations to obtain the absolute values of the system for Ic, Ip, It, C, A', A and finally I', I'' and I, in order to obtain new knowledge. As follows:
Table 17: example of a complex system matrix based on simple systems

John = sist.1 | is = sist.2 | tall = sist.3 | – | –
Beer = sist.4 | is = sist.2 | always = sist.5 | a = sist.6 | liquid = sist.7
Lisa = sist.8 | and = sist.9 | Anna = sist.10 | are = sist.11 | friends = sist.12
He = sist.13 | is = sist.2 | talking = sist.14 | with = sist.15 | others = sist.16
Our = sist.17 | voices = sist.18 | are = sist.11 | listened = sist.19 | –

• Through customized systems → a more practical solution, adjusted to any need that a complex informative system analysis might have. For this case, if the information is known, values can be assigned to the data for the analysis.
The values used here are not random, as they come from a previous analysis that highlights the needs. This example highlights the grammatical role played by each word in the sentences; that is, a value is given to each grammatical figure, as follows:
Table 18: weights for a complex system

Subject = 5 units | Article = 2 units
Verb = 10 units | Preposition/Adverb = 2 units
Predicate = 7 units | Noun = 3 units
One could drill down to obtain the best fit to the needs of each analysis. In this case, if one word in the sentence performs the function of subject or predicate and at the same time is a noun, then the noun value is divided by the number of words that share that feature and added to the value of the proper figure. For example: "John" is a subject but also a noun, so its value is 5 + 1.5 (because there are two nouns in the sentence); "is" performs the function of the verb and has a value of 10; and "tall" is the predicate and a noun, and thus its value is 7 + 1.5.
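Under the Table 18 weights, the per-word values of the "John is tall" example can be sketched as follows (the helper function and its name are illustrative, not from the article):

```python
# Weights per grammatical figure, from Table 18.
FIGURE_WEIGHTS = {"subject": 5, "article": 2, "verb": 10,
                  "preposition/adverb": 2, "predicate": 7, "noun": 3}

def word_value(figures, shared_nouns=1):
    """Sum a word's figure weights; the noun weight is split among the
    words sharing the noun role in the sentence."""
    total = 0.0
    for fig in figures:
        w = FIGURE_WEIGHTS[fig]
        if fig == "noun":
            w /= shared_nouns
        total += w
    return total

# "John is tall": "John" and "tall" share the noun weight 3 -> 1.5 each.
print(word_value(["subject", "noun"], shared_nouns=2))    # 6.5
print(word_value(["verb"]))                               # 10.0
print(word_value(["predicate", "noun"], shared_nouns=2))  # 8.5
```

These three values match the 5 + 1.5, 10 and 7 + 1.5 entries for row I of Table 19.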
Table 19: example of a complex system matrix based on a particular analysis

Series "n" | 1 | 2 | 3 | 4 | 5
I | John = 5 + 1.5 | is = 10 | tall = 7 + 1.5 | – | –
II | Beer = 5 + 1.5 | is = 10 | always = 2 | a = 2 + 3.5 | liquid = 3.5 + 1.5
III | Lisa = 1.67 + 1 | and = 1.67 | Anna = 1.67 + 1 | are = 10 | friends = 7 + 1
IV | He = 5 + 1.5 | is = 5 | talking = 5 | with = 3.5 + 2 | others = 3.5 + 1.5
V | Our = 2.5 | voices = 2.5 + 3 | are = 10 | listened = 7 | –

3.13. Complex System Matrix To Obtain D, A, I For A Complex Data System
In general the analysis for a complex system is the same as for simple systems, with the following particularities:
• The informative weight is obtained by adding the values of each series to obtain the weight of the series, and then adding those values to obtain the minimum system weight.
• The informative content is obtained with the same operation as for simple systems, extended with the concept of informative content by partial series, which comes from the addition of the partial informative contents. The total informative content is the logarithm base "a" of the total informative system weight obtained (A = loga D).
• The informative quantity is obtained with the same operation as for simple systems, adding the concept of informative quantity of the content by partial series, which results from multiplying the probability of the informative content by the logarithm of the value of the partial informative content. In the case of complex systems, three concepts have to be pointed out: informative quantity by partial series, informative quantity by informative content, and the informative quantity of the system.
It is necessary to keep in mind that although these matrices seem similar to those formed with relative data, in this case they are not formed by permutations, but from received data series.
Table 20: Summary of all concepts for a complex system

Series "n" | Absolute weight for each element in the series | Absolute informative weight "Ip" | Absolute informative content "Ic" | Absolute informative quantity "It"
I | Serie 1 | Ip1 = ∑ dI | Ic1 = loga Ip1 | It1 = p · loga Ic1
II | Serie 2 | Ip2 = ∑ dII | Ic2 = loga Ip2 | It2 = p · loga Ic2
III | Serie 3 | Ip3 = ∑ dIII | Ic3 = loga Ip3 | It3 = p · loga Ic3
IV | Serie 4 | Ip4 = ∑ dIV | Ic4 = loga Ip4 | It4 = p · loga Ic4
... | ... | ... | ... | ...
n | Serie n | Ipn = ∑ dn | Icn = loga Ipn | Itn = p · loga Icn

System weight: D = ∑ (i = 1 … m) Ipi, with the relative system weight of each element obtained as in Table 13
Content by partial series: A' = ∑ (i = 1 … m) Ici
Quantity by partial series: I'' = ∑ (i = 1 … m) Iti
System content: A = loga D
Quantity by partial content: I' = P' · loga A'
System quantity: I = P · loga A
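The Table 20 aggregates can be sketched in a few lines of code (a minimal sketch: base-10 logarithms, equal per-series probabilities and P = P' = 1 are assumed, and each series weight Ip must exceed 1 so all logarithms stay defined):

```python
import math

def complex_system(series_weights, P=1.0, Pp=1.0, a=10):
    """Compute the Table 20 summary values for a complex system.
    series_weights: one list of element weights per series."""
    Ip = [sum(s) for s in series_weights]   # absolute informative weight
    Ic = [math.log(w, a) for w in Ip]       # absolute informative content
    p = 1 / len(series_weights)             # equal series probability (assumed)
    It = [p * math.log(c, a) for c in Ic]   # absolute informative quantity
    D = sum(Ip)                             # system weight
    A = math.log(D, a)                      # system content, A = loga D
    A_p = sum(Ic)                           # content by partial series, A'
    I_pp = sum(It)                          # quantity by partial series, I''
    I = P * math.log(A, a)                  # system quantity, I = P · loga A
    I_p = Pp * math.log(A_p, a)             # quantity by partial content, I'
    return {"D": D, "A": A, "A'": A_p, "I": I, "I'": I_p, "I''": I_pp}

result = complex_system([[3, 4], [5, 5]])   # two illustrative series
```

With the row sums of Table 21 as series weights, this same routine reproduces D = 59, A ≈ 1.771 and A' ≈ 6.255.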
4. The Knowledge
To speak about knowledge is to refer to a system much more complex than that of information, through which a person acquires an added value for his knowledge. It allows a person to acquire two elements of human reasoning: experience, as a set of skills acquired through the process of gaining knowledge, and wisdom, as automatic knowledge (Miranda, 2008).
Knowledge is the product of two systems: biological and social. It can be studied from many perspectives based on multiple empirical sciences. One of these perspectives is given through scientific knowledge. However, in spite of many attempts, there is no specific theory of knowledge, only philosophical preconceptions, drawn from cognitive processes, that explain what knowledge could be but do not quantify it (Quintanilla, 2012).
Two dictionary definitions are:
• Knowledge comprises: "Understanding, intelligence, natural reason and each of the sensory faculties of man in so far as they are active" (Diccionario, 1986).
• "It is the conscious perception of the meaning and significance of the information" (Miranda, 2008).
From these concepts one can take very few things that indicate a path toward the quantification of knowledge; just ideas that bring together various subjects, such as science, sociology, and psychology.
An important factor to consider in the formation of knowledge is experience, which is directly responsible for its growth in any subject. The great philosophers affirm that the accumulation of knowledge on the same topic generates experience. That is, tacit, explicit and implicit knowledge give true value to global knowledge. And experience is the factor through which one can pass from one knowledge to another, acting directly on the empowerment of information, thereby generating an exponential growth of knowledge.
For good transmission of knowledge by technological means, ICT are the most useful tools, especially for transmitting explicit knowledge; but there are also others that help in the transmission of tacit and implicit knowledge, such as video conferencing, webinars, and others (Contreras, 2010).
Finally, based on these previous explanations, and relating all the concepts already seen about data and information from a more technological approach, one can determine an approximation to a concept that better describes what knowledge is:
Knowledge is an entity that results from the interaction of cognitive processes, skills and self-experiences, acquired by transmission and shared to give a bigger value, able to transform any information or knowledge shared through a particular and global vision of it, while at the same time enhancing its value.
4.1. Knowledge Quantification
It is difficult to theoretically quantify the amount of knowledge of a person, because the result would serve exclusively for that person. But generalizations and deductions from common factors can give a better idea of quantification. In fact, the idea of quantifying knowledge comes from economic studies (Miranda, 2008). This is because the current growth of a country is measured by technological resources, as well as information, knowledge, apprenticeship and, more recently, the use of networks and collaboration between organizations, as elements pointing directly to an improvement in the production of a company (Argüelles & Benavides, 2010). This capitalist vision of the knowledge society puts knowledge as a determinant factor in economic growth, and thus it is often called the "knowledge economy".
The quantification of knowledge is attainable if studied in general terms, because in particular form it presents itself as a non-measurable item. This is due to the various structured cognitive processes through which knowledge develops in each brain. Evidence shows a need to qualify and quantify the knowledge of a person. As a matter of fact, an approximation is made indirectly in job interviews, where the results yield a value for the knowledge and skills that a person has (from a more psychological rather than technical point of view). For example, the psychometric tests used to interview staff employ a range of situations to which a score has been assigned that helps to quantify some skill or experience (Contreras, 2010). These ranges are made largely to follow market criteria and the needs of the company.
4.2. Calculating Knowledge Quantification
It would be possible to obtain a value for knowledge from the evaluation of data and information, or from retrofitted knowledge (which works as new information) and generates knowledge value. Therefore, within the concept of resolution of complex matrices of information, a development of a concept for the quantification of knowledge can be found.
In order to obtain a quantification of knowledge, an important key is the concept of "information processing", which, associated with different concepts of knowledge, provides learning, and through it, experience.
Going back to the data system example, there were the data "girl" and "15", complemented with the datum "years old". If another data system is added, consisting of "girl", "the party" and "enjoy", these two systems provide the following information: on one hand, "girl is 15 years old", and on the other hand, "girl enjoys the parties". From the processing and analysis of these two systems, a piece of knowledge can be extracted: "girls who are 15 years old enjoy the parties".
Of course, knowledge is not about taking parts of sentences and summing them up in another sentence, but this simple example shows briefly how the extraction of knowledge takes place through a simple processing of information. All this goes to show that knowledge can represent concepts generated by the processing of informative systems that are received, understood and become facts. Furthermore, the more particular knowledge becomes, the more accurate and more complex it will be, due to the processing of concrete data as well as specific information that generate specific knowledge.
The following considerations should be taken into account:
• The first information to be drawn from the data gives a first knowledge, also known as unitary knowledge and represented by C' (note 3). This is derived from the contextualization and assessment of the information contained within the first primary data, that is:

C' = I    (15)

• The knowledge obtained from this information does not remain static, but serves to generate a global knowledge "C''", which is generated through a cognitive (information) and experimental (experience) process that adds a new value of knowledge. C'' is the addition of tacit knowledge and explicit knowledge, that is:

C'' = Ct + Ce    (16)
• The value of tacit knowledge "Ct" is the result of dividing the years of specific experience "e" in a subject by the years of specific formation "f" in that subject, that is:

Ct = e / f    (17)

• The value of explicit knowledge "Ce" results from raising the years of specific formation "f" to the years of specific experience "e" in a subject, that is:

Ce = f^e    (18)

Then, following the concept of knowledge and its subsequent analysis, the following equation can be deduced:

C = information + tacit knowledge + explicit knowledge

The result will be:

C = I + e/f + f^e

Substituting the value of information "I", it would be:

C = P · loga A + e/f + f^e    (19)
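As a numerical sketch of eq. (19), with illustrative inputs (P = 1, A = 1.771 as in Table 21, and assumed values of 2 years of experience and 4 years of formation):

```python
import math

def knowledge(P, A, e, f, a=10):
    """Eq. (19): C = P * log_a(A) + e/f + f**e, where e is years of
    specific experience and f years of specific formation."""
    information = P * math.log(A, a)   # I, from eq. (12)
    tacit = e / f                      # Ct, eq. (17)
    explicit = f ** e                  # Ce, eq. (18)
    return information + tacit + explicit

print(round(knowledge(1.0, 1.771, 2, 4), 3))   # 16.748
```

The f**e term dominates quickly; whether that growth is intended or the exponent should be damped is a question the article leaves open.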
Conclusions
The usefulness of information generated from patterns that add bigger value to organizations can be seen as a competitive factor. This is because the results obtained from all the calculations done on the series of information received generate knowledge useful for any organization.
For example, it can be of great help in the selection of the best person to lead and manage a project. Taking the information from the candidates and making an individual analysis of each one using informative matrices gives a comparison of values of skills, abilities, knowledge and experience, which will give a clue as to who will perform best at the job.
An example could be to search for the best combination of knowledge, abilities and experience among the professionals inside an enterprise, in order to select the best director or manager for the next project. As seen in the following example:
Table 21: Knowledge – Abilities – Experience Valuation (values min 0 to max 10)

Partial Experience by field (max 40) | Programmer | Analyst | Project Architect | Project Manager | D | A/A' | I/I'/I''
Java | 7 | 3 | 0 | 1 | 11 | 1.041 | 0.018
Html | 9 | 1 | 1 | 2 | 13 | 1.114 | 0.052
Uml | 5 | 2 | 0 | 0 | 7 | 0.845 | -0.062*
Xml | 2 | 2 | 0 | 0 | 4 | 0.602 | -0.133*
BD | 4 | 1 | 0 | 0 | 5 | 0.699 | -0.109*
CSS | 6 | 2 | 0 | 1 | 9 | 0.954 | -0.020*
Content management | 4 | 3 | 0 | 3 | 10 | 1 | 0
Partial Experience (max. 70) | 37 | 14 | 1 | 7 | D = 59 | A' = 6.255, A = 1.771 | I'' = 0.394, I' = 4.980, I = 0.440

* Every negative value must be changed to positive, because any information gives some amount of knowledge.
Table 22: Percentage of answer obtained

 | Max | Answer | %
D | 280.00 | 59 | 21.071
A | 2.447 | 1.771 | 72.374
A' | 11.214 | 6.255 | 55.778
I | 0.964 | 0.440 | 45.643
I' | 11.772 | 4.980 | 42.304
I'' | 2.295 | 0.394 | 17.168
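The percentages in Table 22 are each answer's share of its theoretical maximum; a small sketch with the values copied from the table:

```python
# (maximum, answer) pairs from Table 22.
rows = {"D": (280.00, 59), "A": (2.447, 1.771), "A'": (11.214, 6.255),
        "I": (0.964, 0.440), "I'": (11.772, 4.980), "I''": (2.295, 0.394)}

for name, (maximum, answer) in rows.items():
    print(f"{name}: {100 * answer / maximum:.3f} %")
```

Running this reproduces the table's % column to three decimals.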
When these results are compared with the evaluations of other candidates, it is possible to obtain a comparative answer for any search that has used this method.
From this first analysis, one can further investigate to find a more useful and better approach to the quantification of information and knowledge in any system.
Notes
Note 1. The year when Shannon and Weaver published the "Information Theory".
Note 2. In this case a system is conceived as an association of data, information or knowledge, always associated with those of the same context.
Note 3. C' → first knowledge of the first information obtained from the primary data.
References
Argüelles V, M. & Benavides G, C. (2008). Conocimiento y Crecimiento Económico: Una Estrategia Para Los Países En Vías De Desarrollo. Revista de Economía Mundial, Nº 18. ISSN: 1576-0162. pp. 65–77. Retrieved February 09, 2012, from http://rabida.uhu.es/dspace/bitstream/handle/10272/553/b1513600.pdf?sequence=1
Arredondo, T. (2011). Introducción A La Teoría De La Información. Universidad Técnica Federico Santa
María. Chile. pp. 5 – 29. Retrieved October 10, 2012, from http://profesores.elo.utfsm.cl/~tarredondo/info/softcomp/Introduccion%20a%20la%20Teoria%20de%20la%20Informacion.pdf
Contreras, E. (2010). Gestión del Conocimiento: Del tácito al explícito 20 años después. Trend Management, Special Edition, May 2010, pp. 2–3. Retrieved February 09, 2012, from http://www.dii.uchile.cl/wpcontent/uploads/2011/06/UCH-Contreras.pdf
De Pablo L, I. (1989), El Reto Informático: La gestión de la Información en la Empresa. Editorial Pirámide.
España. ISBN: 84-368-0464-3.
Diccionario de la Lengua Española, (1986), Diccionario de la Lengua Española, Distribuciones Mateos.
Editorial Antalbe, S.A. Madrid. pp. 144, 181, 182, 248, 396
Hercy, (2011). Teorías De La Información Y Comunicación. Taken from the book Manual Técnico de Alfabetización Digital 2011. Editorial del Libro LANIA, México, p. 1. Retrieved October 31, 2012, from http://www.slideshare.net/hercyb/teorias-de-la-informacin-y-comunicacin
López R, (1998). Crítica A La Teoría De La Información. Chile University. Retrieved October 31, 2012, from
http://www.periodismo.uchile.cl/cursos/psicologia/criticainformacion.pdf.
Minguet, J M., & Read, T., (2008). Informática Fundamental. Editorial Universitaria Ramón Areces S.A.
Madrid. Primera reimpresión 2008. pp. 52 - 61, 66 – 69.
Miranda, E. (2008). Epistemología Y Gnoseología. Teoría del Conocimiento. Retrieved December 24, 2012, from http://www.slideshare.net/guest975e56/epistemologa-y-gnoseologia-ok
Profesorenlinea, (2011). Logaritmos. Web of Profesor en Línea. Retrieved October 12, 2012, from
http://www.10.cl/matematica/logaritmo.html
Quintanilla, M A. (2012). Teoría Del Conocimiento. Fundación Gustavo Bueno. Proyecto de Filosofía en Español. Retrieved December 24, 2012, from http://www.filosofia.org/enc/dfc/conocimi.htm
Singh, J., (1982). Teoría De La Información, Del Lenguaje Y De La Cibernética. Editorial Alianza Universidad.
Madrid. pp. 24 – 33.
Textos Científicos, (2008), Teoría de la Información. Web Textos Científicos. Retrieved October 25, 2012, from
http://www.textoscientificos.com/informacion/teoria
Torredebabel. (2012). Definición De Ente. Web Definición.de. Retrieved November 02, 2012, from
http://www.e-torredebabel.com/Psicologia/Vocabulario/Ontologia.htm
Torredebabel. (2012a). Ontología. Web Definición.de. Retrieved November 02, 2012, from http://www.etorredebabel.com/Psicologia/Vocabulario/Ontologia.htm
Juan Carlos Piedra Calderón received his Advanced Studies Diploma (DEA), part of a doctorate program, and received his Master degree in Business Process Management and Master in Information and Knowledge Management from Universidad Pontificia de Salamanca (UPSAM – Spain). He is currently working full time on his doctoral thesis for a PhD degree in Information and Knowledge Society (UPSAM). He has been working as a Knowledge Management Expert on several projects.
Rubén González Crespo received a PhD in Software Engineering from Universidad Pontificia de Salamanca (UPSAM – Spain). He works in the department of Languages and Computer Systems, is Director of the Research Group of Society and Information Technologies, Deputy Director of the engineering area at Universidad de La Rioja, and Technical Adviser to the Panama Government.
J. Javier Rainer received a PhD in Robotics and Automation from Universidad Politécnica de Madrid, UPM (Spain). He is Director of Research and Director of the Engineering Area at Bureau Veritas Business School, and a researcher at the Intelligent Control Group of the UPM. He received the Best Paper Award at IARIA Cognitive 2010.
This academic article was published by The International Institute for Science, Technology and Education (IISTE), an open-access publisher based in the U.S. and Europe (http://www.iiste.org).