Disciplinary and institutional perspectives on digital curation (Michael Day)
Slides from a presentation jointly given by Alexander Ball and Michael Day of UKOLN in a panel session on Scientific Data Curation at the DigCCurr 2009 Conference, Chapel Hill, NC, USA, 2 April 2009
Fuzzy networks using a multidimensional view of data have become very popular in both business and science in recent years. Fuzzy networks for domains such as medicine and biochemistry pose several great challenges to existing fuzzy network technology. Fuzzy networks usually use pre-aggregated data to ensure fast query response. However, pre-aggregation cannot be used in practice if the dimension structures or the relationships between facts and dimensions are irregular. A technique for overcoming this limitation and some experimental results are presented. Queries over fuzzy networks often need to reference data that is external to the network, e.g., data that is too complex to be handled by current fuzzy network technology, data that is "owned" by other organizations, or data that is updated frequently. This paper presents a federation architecture that allows the integration of multidimensional warehouse data with complex external data.
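The pre-aggregation idea this abstract turns on is easy to illustrate. Below is a minimal Python sketch, with invented fact rows and column names, of rolling a fact table up to a coarser dimension level once so that later queries read the small aggregate instead of scanning the raw facts; the abstract's point is that such roll-ups break down when dimension hierarchies are irregular.

```python
from collections import defaultdict

# Hypothetical fact rows: (product, city, sales). Names are illustrative only.
facts = [
    ("aspirin", "Aalborg", 10),
    ("aspirin", "Aarhus", 5),
    ("insulin", "Aalborg", 7),
    ("insulin", "Aarhus", 3),
]

# Pre-aggregate once: roll sales up from the city level to the product level.
pre_agg = defaultdict(int)
for product, _city, sales in facts:
    pre_agg[product] += sales

# A query for total sales per product now reads the small aggregate
# instead of scanning the raw fact table on every request.
print(pre_agg["aspirin"])  # 15
```

With an irregular hierarchy (say, a city belonging to two regions), such a roll-up would double-count or miss rows, which is the limitation the abstract's technique addresses.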
Clinical Decision Support Systems (CDSS) were explicitly introduced in the 1990s with the aim of providing knowledge to clinicians in order to influence their decisions and thereby improve patients' health care. There are different architectural approaches to implementing CDSS. Some of these approaches are based on cloud computing, which provides on-demand computing resources over the internet. The goal of this paper is to determine and discuss key issues and approaches in the architectural design of a CDSS implemented with cloud computing. To this end, we performed a standard Systematic Literature Review (SLR) of primary studies showing the use of cloud computing in CDSS implementations. Twenty-one primary studies were reviewed. We found that CDSS architectural components are similar in most of the studies. Cloud-based CDSS are most used in home healthcare and emergency medical systems, and alerts/reminders and knowledge services are the most common implementations. Major challenges concern security, performance, and compatibility. We conclude that implementing a cloud-based CDSS is beneficial, since it allows cost-efficient, ubiquitous, and elastic computing resources. We also highlight that some studies show weaknesses in their conceptualization of a cloud-based computing approach and lack a formal methodology in the architectural design process.
Curation and Preservation of Crystallography Data (Manjula Patel)
A presentation given by Manjula Patel (UKOLN) at "Chemistry in the Digital Age: A Workshop connecting research and education", June 11-12th 2009, Penn State University,
http://www.chem.psu.edu/cyberworkshop09
Functional and Architectural Requirements for Metadata: Supporting Discovery... (Jian Qin)
The tremendous growth in digital data has led to an increase in metadata initiatives for different types of scientific data, as evident in Ball's survey (2009). Although individual communities have specific needs, there are shared goals that need to be recognized if systems are to effectively support data sharing within and across all domains. This paper considers this need and explores the systems requirements essential for metadata supporting the discovery and management of scientific data. The paper begins with an introduction and a review of selected research specific to metadata modeling in the sciences. Next, the paper's goals are stated, followed by the presentation of key systems requirements. The results include a base model with three chief principles: the principle of least effort, infrastructure service, and portability. The principles are intended to support "data user" tasks. Results also include a set of defined user tasks and functions, and application scenarios.
Metadata for digital long-term preservation (Michael Day)
Presentation given at the Max Planck Gesellschaft eScience Seminar 2008: Aspects of long-term archiving, hosted by the Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG), Göttingen, Germany, 19-20 June 2008
Getaneh will talk about state-of-the-art metadata standards and how metadata can help ensure the integrity, identity and authenticity of digital documents. An overview of the various metadata initiatives and standards (OAIS, CEDARS, NEDLIB, LMER, PREMIS, and METS) will be provided along with information on how each one supports digital preservation.
Integrated research data management in the Structural Sciences (Manjula Patel)
A presentation given by Manjula Patel (UKOLN, University of Bath) at the I2S2 workshop "Scaling Up to Integrated Research Data Management", IDCC 2010, 6th December 2010, Chicago.
http://www.ukoln.ac.uk/projects/I2S2/events/IDCC-2010-ScalingUp-Wksp/
Presentation slides from a lecture given at the University of the West of England (UWE) as part of the MSc in Library and Library Management, University of the West of England, Frenchay Campus, Bristol, March 24, 2009
Slide deck from presentation on Oct 8, 2015 at Johns Hopkins University. Topic is Digital Curation in Art Museums: Technology, People, Process. #jhudigcur
Presentation given at second running of Digital Curation 101, London, 12 March 2009. The course was organised by the Digital Curation Centre (DCC), and ran from 10-12 March 2009
Presentation given at: Digital Curation 101, National eScience Centre (NeSC), Edinburgh, 8 October 2008. The course was organised by the Digital Curation Centre (DCC), and ran from 6-9 October 2008
AHM 2014: Enterprise Architecture for Transformative Research and Collaborati... (EarthCube)
Ilya Zaslavsky, David Valentine, Amarnath Gupta, Stephen Richard, Tanu Malik
Presentation given in the afternoon Architecture Forum Session on Day 1, June 24 at the EarthCube All-Hands Meeting
A Survey of Agent Based Pre-Processing and Knowledge Retrieval (IOSR Journals)
Abstract: Information retrieval is a major task today, as the quantity of data is increasing at tremendous speed. Managing and mining knowledge for different users according to their interests is the goal of every organization, whether it works with grid computing, business intelligence, distributed databases, or anything else. To achieve this goal of extracting quality information from large databases, software agents have proved to be a strong pillar. Over the decades, researchers have applied the concept of multi-agent systems to the data mining process by focusing on its various steps, among which data pre-processing is the most sensitive and crucial, since the quality of the retrieved knowledge depends entirely on the quality of the raw data. Many methods and tools are available to pre-process data in an automated fashion using intelligent (self-learning) mobile agents, in distributed as well as centralized databases, but various quality factors still need attention to improve the quality of the retrieved knowledge. This article reviews the integration of two emerging fields, software agents and knowledge retrieval, with a focus on the data pre-processing step.
Keywords: Data Mining, Multi Agents, Mobile Agents, Preprocessing, Software Agents
The International Journal of Database Management Systems (IJDMS) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of database management systems and their applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding modern developments in this field and on establishing new collaborations in these areas.
Meeting the NSF DMP Requirement: June 13, 2012 (IUPUI)
June 13 version of the IUPUI workshop Meeting the NSF Data Management Plan Requirement: What you need to know. This workshop is co-sponsored by the Office of the Vice Chancellor for Research and the University Library.
Supplementary presentation slides from a lecture on digital preservation given at the University of the West of England (UWE) as part of the MSc in Library and Library Management, University of the West of England, Frenchay Campus, Bristol, March 10, 2010
Meeting the NSF DMP Requirement: March 7, 2012 (IUPUI)
March 7 version of the IUPUI workshop Meeting the NSF Data Management Plan Requirement: What you need to know. This workshop is co-sponsored by the Office of the Vice Chancellor for Research and the University Library.
An Empirical Study of the Applications of Classification Techniques in Studen... (IJERA Editor)
University servers and databases store a huge amount of data, including personal details, registration details, evaluation assessments, and performance profiles, for students and lecturers alike. The main problem facing system administrators and users is that this data grows every second and is stored on the servers in different types and formats, which makes learning about students from it difficult. Extracting graduation and academic information, and maintaining the structure and content of courses according to students' previous results, are becoming important. The objectives of this paper are to extract knowledge from incompletely structured data and to identify the data mining method or technique best suited to extracting knowledge about students from a huge amount of data, so that the administration can use technology to make quick decisions. Data mining aims to discover useful information or knowledge using one of its techniques; this paper uses the classification technique to discover knowledge from the student database server, where all student information is registered and stored. The classification task uses the C4.5 decision tree classifier to predict students' final academic results (grades), on data covering the four-year period 2006-2009. The experimental results show that the classification process succeeded on the training set: the predicted instances are similar to the training set, which supports the suggested classification model. The efficiency and effectiveness of the C4.5 algorithm in predicting academic results is very good, and the model can also improve the efficiency of retrieving academic results and evidently promote retrieval precision.
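As a rough illustration of the kind of grade-prediction pipeline the abstract describes, here is a minimal Python sketch. The features, data, and split are invented for illustration, and scikit-learn's DecisionTreeClassifier (a CART implementation, configured here with the entropy criterion that C4.5 also uses) stands in for C4.5, which scikit-learn does not ship.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical student records: [attendance %, coursework avg, midterm score]
X = [[95, 80, 75], [60, 55, 40], [88, 90, 85], [45, 50, 35],
     [70, 65, 60], [92, 85, 88], [50, 45, 55], [80, 75, 70]]
y = ["B", "F", "A", "F", "C", "A", "D", "B"]  # final grade labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Entropy-based splitting, as in C4.5; CART is the stand-in implementation.
clf = DecisionTreeClassifier(criterion="entropy")
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```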
Next-Generation Search Engines for Information Retrieval (Waqas Tariq)
In recent years, there have been significant advancements in scientific data management and retrieval techniques, particularly in standards and protocols for archiving data and metadata. Scientific data is generally rich, not easy to understand, and spread across different places. In order to integrate these pieces together, a data archive and associated metadata should be generated. This data should be stored in a format that is locatable, retrievable, and understandable; more importantly, it should be in a form that will continue to be accessible as technology changes, such as XML. New search technologies are being implemented around these protocols, which makes searching easy, fast, and yet robust. One such system is Mercury, a metadata harvesting, data discovery, and access system built for researchers to search, share, and obtain spatiotemporal data used across a range of climate and ecological sciences.
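To make the point about durable, self-describing formats concrete, here is a small Python sketch of a spatiotemporal metadata record round-tripped through plain XML; the element and attribute names are invented for illustration and are not the actual Mercury schema.

```python
import xml.etree.ElementTree as ET

# Build a small, hypothetical spatiotemporal metadata record and show
# that it survives a round trip through plain XML text.
record = ET.Element("metadata")
ET.SubElement(record, "title").text = "Soil moisture, site A"
ET.SubElement(record, "coverage", start="2001-01-01", end="2001-12-31")
bbox = ET.SubElement(record, "boundingBox")
bbox.set("west", "-106.6"); bbox.set("east", "-106.0")
bbox.set("south", "34.9"); bbox.set("north", "35.3")

xml_text = ET.tostring(record, encoding="unicode")
parsed = ET.fromstring(xml_text)
print(parsed.find("title").text)           # Soil moisture, site A
print(parsed.find("coverage").get("end"))  # 2001-12-31
```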
Bridging the missing middle (Debbie Holley)
Presentation to ALT-C 2014
Taking innovation from concept through to scalable delivery is a complex, contested, and under-theorised process. This report aims to capture the current major themes underpinning scaling and to apply them to the context of the Learning Layers project. An external review of our early 'Design Research framework for scaling' highlighted that the approach was too linear and may have relied too heavily on the diffusion-of-innovation paradigm originally proposed by Everett Rogers in the 1960s, which is less appropriate for scaling innovations in our project. Rather, we start from design-based research principles, where co-design with users produces both theories and practical educational interventions as outcomes of the process. This is a robust approach suitable for addressing complex problems in educational practice for which no clear guidelines or solutions are available, and we suggest it is therefore also appropriate for multi-faceted and complex research projects such as Learning Layers.
The Survey of Data Mining Applications and Feature Scope (IJCSEIT Journal)
In this paper we survey a variety of techniques, approaches, and research areas that are helpful to, and mark out, the important field of data mining technologies. Many multinational companies and large organizations operate in different places in different countries, and each place of operation may generate large volumes of data. Corporate decision makers require access to all such sources to take strategic decisions. The data warehouse delivers significant business value by improving the effectiveness of managerial decision-making. In an uncertain and highly competitive business environment, the value of strategic information systems such as these is easily recognized; however, in today's business environment, efficiency or speed is not the only key to competitiveness. Huge amounts of data, on the order of terabytes to petabytes, have drastically changed the areas of science and engineering. To analyze, manage, and make decisions over such huge amounts of data we need data mining techniques, which are transforming many fields. This paper presents a number of applications of data mining and also focuses on its scope, which will be helpful in further research.
Big data is a prominent term characterizing the growth and availability of data in all three formats: structured, unstructured, and semi-structured. Structured data is located in fixed fields of a record or file and is found in relational databases and spreadsheets, whereas unstructured data includes text and multimedia content. The primary objective of the big data concept is to describe extreme volumes of data sets, both structured and unstructured. It is further defined along three "V" dimensions, namely Volume, Velocity, and Variety, with two more "V"s added: Value and Veracity. Volume denotes the size of the data; Velocity concerns the speed of data processing; Variety describes the types of data; Value derives the business value; and Veracity describes the quality and understandability of the data. Nowadays, big data has become a distinct and preferred research area in computer science. Many open research problems exist in big data, and good solutions have been proposed by researchers, even though there is still a need to develop many new techniques and algorithms for big data analysis in order to obtain optimal solutions. In this paper, a detailed study of big data is presented: its basic concepts, history, applications, techniques, research issues, and tools.
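As a toy illustration of the five V's just defined, the Python sketch below profiles a hypothetical dataset against them; the field names and thresholds are invented assumptions, not any standard.

```python
# Hypothetical dataset description; every field and threshold is illustrative.
dataset = {
    "size_tb": 120,            # Volume: size of the data
    "arrival_mb_per_s": 40,    # Velocity: speed of data arrival/processing
    "formats": {"csv", "video", "tweets"},  # Variety: types of data
    "business_use": "demand forecasting",   # Value: derived business value
    "missing_ratio": 0.07,     # Veracity: quality and understandability
}

# Check each "V" against an arbitrary cutoff to build a simple profile.
profile = {
    "Volume": dataset["size_tb"] > 1,
    "Velocity": dataset["arrival_mb_per_s"] > 10,
    "Variety": len(dataset["formats"]) > 1,   # mixed structured/unstructured
    "Value": bool(dataset["business_use"]),
    "Veracity": dataset["missing_ratio"] < 0.10,
}
print(profile)  # all five dimensions satisfied for this toy example
```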
(May 29th, 2024) Advancements in Intravital Microscopy: Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been achieved using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space as they occur in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization, and tumor metastasis in exceptional detail. This webinar also gives an overview of IVM as used in drug development, offering a view into the intricate interactions between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io's surface using adaptive optics at visible wavelengths.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
Cancer Cell Metabolism: Special Reference to Lactate Pathway (AADYARAJPANDEY1)
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose, and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to "burn" the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, Krebs cycle, oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELLS:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis, and frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per glucose molecule instead of the 36 or so ATP healthy cells gain. As a result, cancer cells need to use many more sugar molecules to get enough energy to survive.
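The bookkeeping above is easy to check with the approximate textbook yields of 2 and 36 ATP per glucose; this short Python sketch works out how much more glucose a glycolysis-only cell needs.

```python
# Approximate textbook yields of ATP per glucose molecule.
ATP_GLYCOLYSIS_ONLY = 2    # glycolysis alone (the cancer-cell pattern)
ATP_FULL_RESPIRATION = 36  # glycolysis + Krebs cycle + oxidative phosphorylation

# Glucose needed to match the ATP a healthy cell gets from one glucose:
glucose_needed = ATP_FULL_RESPIRATION / ATP_GLYCOLYSIS_ONLY
print(glucose_needed)  # 18.0 -> a glycolysis-only cell needs ~18x more glucose
```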
Introduction to the WARBURG PHENOMENON:
WARBURG EFFECT: Usually, cancer cells are highly glycolytic ("glucose addiction") and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his "discovery of the nature and mode of action of the respiratory enzyme."
The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg observed that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Multi-source connectivity as the driver of solar wind variability in the heli... (Sérgio Sacani)
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community

Josh Young (1), W. Christopher Lenhardt (2), Mark Parsons (3), Karl Benedict (4)
1. University Corporation for Atmospheric Research (UCAR) Unidata Program Center
2. Renaissance Computing Institute (RENCI), University of North Carolina at Chapel Hill
3. Institute for Data Exploration and Applications, Rensselaer Polytechnic Institute
4. University of New Mexico

I. Summary
This poster seeks to frame a dialogue on the concept and implementation of data lifecycles. These thoughts are informed by the adoption of agile practices within software development, the review of policy and technique lifespans within the field of organizational studies, and a consideration of community-building and capacity.

II. Background
Data management is a challenge for any resource-constrained research project (i.e. all), and especially for those that lack data management expertise and capacity. These projects are the source of much of the so-called 'dark data' or 'long-tail data' (Heidorn, 2008), and this systematic effort seeks to increase the application of data management principles and to reduce 'dark data.' We seek a greater alignment of methodologies across research, software, and stewardship.

Much effort has been expended developing numerous specialized data management models and cataloging the various existing data lifecycles (CEOS, 2011). Figures 1, 2, and 3 provide examples of existing data lifecycles as described in CEOS (2011).

[Figure 1: NDIIP Lifecycle from CEOS 2011]
[Figure 2: OAIS Lifecycle from CEOS 2011]
[Figure 3: example data lifecycle from CEOS 2011]

The term Agile Curation is proposed as the name for an approach that seeks to provide the benefits of data management curation while incorporating the flexibility and the optimization for resource-constrained teams associated with agile methods. Both 'agile' and 'curation' have specific definitions in the academic literature:

"The word 'agile' by itself means that something is flexible and responsive, so agile methods imply [the ability] to survive in an atmosphere of constant change and emerge with success" (Anderson, 2003).

"Curation embraces and goes beyond that of enhanced present-day re-use and of archival responsibility, to embrace stewardship that adds value through the provision of context and linkage, placing emphasis on publishing data in ways that ease re-use and promoting accountability and integration." (Rusbridge et al., 2005)

III. Assumptions Underlying Agile Curation
[based on the agile underlying assumptions found in Turk et al. (2002)]
1) Access to data is the first goal.
2) Generative value is supported (Zittrain, 2006).
3) Researcher involvement occurs through a participatory framework that aligns data management with scientific research processes (Yarmey and Baker, 2013).
4) Projects will utilize free open-source resources to the greatest extent practical.
5) Community participation increases project capacity.
6) Data management requirements and practices evolve as the research project proceeds.
7) Bright and dedicated individuals can learn appropriate skills and respond to the demands of their particular project as they proceed.
8) Approaches apply across scales.
9) Consider technical debt (Santos et al., 2013).
10) Data evaluation can be conducted through use and feedback.

IV. References
Anderson, D. J. (2003). Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results. Prentice Hall Professional.
CEOS.WGISS.DSIG (2011). Data Life Cycle Models and Concepts, Version 1. TNO1, Issue 1. http://wgiss.ceos.org/dsig/whitepapers/Data%20Lifecycle%20Models%20and%20Concepts%20v8.docx
Heidorn, P. B. (2008). Shedding Light on the Dark Data in the Long Tail of Science. Library Trends, 57(2), 280-299.
Rusbridge, C., Burnhill, P., Ross, S., Buneman, P., Giaretta, D., and Atkinson, M. (2005). The Digital Curation Centre: A Vision for Digital Curation. In Proceedings of Global Data Interoperability: Challenges and Technologies, Mass Storage and Systems Technology Committee of the IEEE Computer Society, June 20-24, 2005, Sardinia, Italy. Retrieved November 13, 2014 from http://eprints.erpanet.org/82/
Santos, P. S. M., Varella, A., Dantas, C. R., and Borges, D. B. (2013). Visualizing and Managing Technical Debt in Agile Development: An Experience Report. In H. Baumeister and B. Weber (Eds.), XP 2013, LNBIP 149, pp. 121-134.
Turk, D., France, R., and Rumpe, B. (2002). Limitations of Agile Software Processes. Third International Conference on eXtreme Programming and Agile Processes in Software Engineering, Cambridge University Press.
Yarmey, L. and Baker, K. S. (2013). Towards Standardization: A Participatory Framework for Scientific Standard-Making. International Journal of Digital Curation, 8(1), 157-172.
Zittrain, J. (2006). The Generative Internet. 119 Harvard Law Review 1974. doi:10.1145/1435417.1435426. Retrieved December 3, 2014 from http://nrs.harvard.edu/urn-3:HUL.InstRepos:9385626

Acknowledgements
This work was partially funded by National Science Foundation (NSF) Grant NSF-1344155 and the NSF EPSCoR Program (Track 1 awards 0447691, 0814449, 1301346; Track 2 awards 0918635, 1329470).