This document discusses how to present negative results in research in a positive manner. It notes that while the goal of research is to find positive answers, negative results are also useful for understanding what does not work and setting limits. The document provides examples of reframing negative results positively, such as precisely defining where a procedure fails instead of just saying it failed, or stating positive theorems about the limitations of methods. It also addresses how to present inherent negative results like impossibility theorems and how to address "dead ends" in research.
In this talk, logically distributive categories are introduced to provide a sound and complete semantics to multi-sorted, first-order, intuitionistic-based logical theories. The peculiar aspect is that no universe is required to interpret terms, making the semantics really point-free.
CORCON2014: Does programming really need data structures? (Marco Benini)
This talk suggests how computer programming can be conceptually simplified by using abstract mathematics, in particular categorical semantics, so as to achieve the 'correctness by construction' paradigm while paying no price in terms of efficiency.
It also introduces an alternative view of what a program is and of how data structures can be conceived, namely as computable morphisms between models of a logical theory.
Intuitionistic First-Order Logic: Categorical semantics via the Curry-Howard ... (Marco Benini)
A novel approach to giving an interpretation of logic inside category theory. This work has been developed as part of my sabbatical Marie Curie fellowship in Leeds.
Presented at the Logic Seminar, School of Mathematics, University of Leeds (2012).
Islam arose in the Arabian Peninsula in the seventh century. Muhammad founded Islam after receiving divine revelations and wrote down the Quran, the Muslim holy book. Muslims follow the Five Pillars of Faith, including prayer, fasting, almsgiving, pilgrimage, and the profession of faith in the one God.
L'occhio del biologo: elementi di fotografia (Marco Benini)
The slides of the course "L'occhio del biologo", Alta Formazione, Università degli Studi dell'Insubria.
It is a short course on the fundamentals of photography, oriented toward scientific photography in a biological laboratory.
This document describes Costa Rica's protected wilderness areas. It explains that they cover 25% of the national territory and protect flora and fauna through categories such as forest reserves, biological reserves, cultural monuments, and national parks. It also highlights the natural, cultural, and historical importance of these areas, and the challenges posed by a lack of economic resources, awareness, and personnel for their conservation.
A description of the formal model behind Constructive Adpositional Grammars.
Presented at Proof Theory and Constructive Mathematics Seminar, School of Mathematics, University of Leeds (2011).
The document discusses variations on Higman's Lemma, which states that the set of finite sequences over an ordered set, ordered by embedding, forms a well quasi-order if and only if the underlying ordered set does. The document defines well quasi-orders and related concepts. It then examines properties of products, coproducts, equalizers, coequalizers, and exponentiation in categories of well quasi-orders and related categories. It proves Higman's Lemma using categorical concepts such as equalizers and coequalizers, and examines how categories of descending chains of a quasi-order relate to properties of the original quasi-order.
Is it important to explain a theorem? A case study in UML and ALCQI (Alexandre Rademaker)
The document discusses conceptual modeling from a logical point of view. It outlines the main steps of conceptual modeling as observing the world, determining relevance, choosing terminology, writing axioms, and verifying correctness. It notes that steps 1-2 can use informal notations like UML but are essentially an "art". Step 5 of verification demands significant knowledge of the model. The document also discusses using logic to explain theorems proven from an ontology, providing examples of proofs using tableaux and sequent calculus that the ontology implies a disjunction.
1. The document discusses machine learning and provides an overview of key concepts like inductive reasoning, learning from examples, and the constituents of machine learning problems.
2. It explains that machine learning problems involve an example set, background concepts, background axioms, and potential errors in data. Common machine learning tasks are categorization and prediction.
3. The document also outlines the constituents of machine learning methods, including representation schemes, search methods, and approaches for selecting hypotheses when multiple solutions are produced.
Introduction to logic and prolog - Part 1 (Sabu Francis)
The document provides an introduction to logic and Prolog programming. It discusses:
1) Alan Turing's invention of the modern computer to solve complex problems like decoding encrypted messages. This established the concept of algorithms being carried out through linear instruction processing.
2) Prolog programming focuses solely on logic and removes concerns about procedural elements like instruction pointers. It allows programmers to focus only on the problem's logic.
3) Logic is a tool for reasoning that uses concepts like true, false, if-then statements, and, or, etc. It helps clarify reasoning but cannot validate conclusions on its own if premises are flawed.
Statistics is used to interpret data and draw conclusions about populations based on sample data. Hypothesis testing involves evaluating two statements (the null and alternative hypotheses) about a population using sample data. A hypothesis test determines which statement is best supported.
The key steps in hypothesis testing are to formulate the hypotheses, select an appropriate statistical test, choose a significance level, collect and analyze sample data to calculate a test statistic, determine the probability or critical value associated with the test statistic, and make a decision to reject or fail to reject the null hypothesis based on comparing the probability or test statistic to the significance level and critical value.
An example tests whether the proportion of internet users who shop online is greater than 40%.
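The example's actual data are not included in the summary, but a one-proportion z-test of H0: p = 0.40 against Ha: p > 0.40 can be sketched with hypothetical numbers (the sample size and count below are invented for illustration):

```python
from math import sqrt, erf

# Hypothetical sample (not from the slides): 500 users surveyed, 220 shop online.
n, successes = 500, 220
p0 = 0.40                      # proportion under the null hypothesis
p_hat = successes / n          # sample proportion, 0.44

# One-proportion z-test, right-tailed: H0: p = 0.40 vs Ha: p > 0.40
se = sqrt(p0 * (1 - p0) / n)   # standard error under H0
z = (p_hat - p0) / se          # about 1.83

# Right-tail p-value from the standard normal CDF
p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))

alpha = 0.05
print(f"z = {z:.2f}, p = {p_value:.4f}, reject H0: {p_value < alpha}")
```

With these numbers the p-value falls below 0.05, so the null hypothesis would be rejected at the conventional significance level.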
1. The document discusses mathematical logic and proofs. It introduces logic operators such as NOT, AND and OR and how they are used to construct truth tables and logical formulas.
2. Conditional statements like "if P then Q" are explained along with their contrapositive and negation. Logical equivalences between statements are important.
3. The concept of an argument is introduced, where valid arguments are those where the conclusion follows logically from the assumptions. Specific argument forms like modus ponens and modus tollens are discussed.
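The truth-table ideas above can be checked mechanically by enumerating all truth assignments; a small sketch verifying the contrapositive equivalence and the validity of modus ponens:

```python
from itertools import product

implies = lambda p, q: (not p) or q

# An implication is logically equivalent to its contrapositive:
# (P -> Q)  <=>  (not Q -> not P), checked over all truth assignments.
contrapositive_ok = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)

# Modus ponens is valid: in every row where P and (P -> Q) hold, Q holds.
modus_ponens_ok = all(
    q
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q)
)

print(contrapositive_ok, modus_ponens_ok)  # True True
```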
This document section discusses conditional probability and independence. It defines conditional probability as the probability of one event occurring given that another event is already known to have occurred. The general multiplication rule for calculating probabilities involving two or more events is introduced, as well as using tree diagrams to model chance processes. Events are defined as independent if knowing one event occurs does not impact the probability of the other occurring. The multiplication rule is simplified for independent events. Examples are provided to demonstrate these concepts.
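The definitions above can be checked numerically; the joint probabilities below are invented so that the two events come out independent:

```python
# Hypothetical joint distribution over two events A and B (four atomic outcomes).
p_ab    = 0.12  # P(A and B)
p_a_nb  = 0.28  # P(A and not B)
p_na_b  = 0.18  # P(not A and B)
p_na_nb = 0.42  # P(not A and not B)

p_a = p_ab + p_a_nb      # P(A) = 0.40
p_b = p_ab + p_na_b      # P(B) = 0.30

# Conditional probability: P(A | B) = P(A and B) / P(B) = 0.40
p_a_given_b = p_ab / p_b

# Independence: P(A | B) equals P(A), so the general multiplication rule
# P(A and B) = P(B) * P(A | B) collapses to P(A and B) = P(A) * P(B).
independent = abs(p_a_given_b - p_a) < 1e-9
print(f"P(A|B) = {p_a_given_b:.2f}, independent: {independent}")
```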
The document discusses multiple statistical comparisons and techniques for controlling error rates when performing multiple hypothesis tests on data. It introduces the concepts of family-wise error rate (FWER) and false discovery rate (FDR), and methods like the Sidak correction, Bonferroni correction, and Benjamini-Hochberg procedure for controlling FWER and FDR. It also discusses how p-value distributions can be used to estimate FDR and calculate q-values. Interactive demonstrations are provided to help illustrate key concepts like Type I and Type II errors.
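The Benjamini-Hochberg step-up procedure mentioned above is short enough to sketch directly; the p-values below are invented for illustration (the slides' interactive demonstrations are not reproduced here):

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Return the indices of hypotheses rejected at FDR level q
    by the Benjamini-Hochberg step-up procedure."""
    m = len(pvalues)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k (1-based) with p_(k) <= (k/m) * q;
    # all hypotheses with smaller or equal rank are rejected.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical p-values from six independent tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1]
```

Note the contrast with the Bonferroni correction, which would compare every p-value against 0.05/6 ≈ 0.0083 and therefore controls the stricter family-wise error rate.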
The Reproducibility Crisis in Psychological Science: One Year Later (JimGrange)
The document discusses the reproducibility crisis in psychological science. It notes several cases from 2011 that called into question research practices. An open science collaboration attempted to replicate 100 psychology studies and found that only 36% replicated. The document recommends promoting open science practices like replication, transparent statistics and data sharing, and teaching/rewarding rigor over quantity or novelty. It argues the field needs to change incentives to prioritize accuracy over publication. Overall, the document analyzes issues compromising reproducibility and proposes open science solutions to improve research integrity in psychology.
The document contains several groups of three or four related quotes on different topics. Some of the groupings include:
- Three quotes on ideas and uncertainty from Émile-Auguste Chartier, Kevlin Henney, and Virginia Satir.
- Three programming principles from Andrew Hunt and David Thomas: test early, test often, test automatically.
- Three types of programs from Meir M Lehman: S-programs, P-programs, and E-programs.
- Four stages of the PDSA cycle from Deming: plan, do, study, act.
Jarrar: First Order Logic - Inference Methods (Mustafa Jarrar)
Lecture slides by Mustafa Jarrar at Birzeit University, Palestine.
See the course webpage at: http://jarrar-courses.blogspot.com/2011/11/artificial-intelligence-fall-2011.html and http://www.jarrar.info
and on YouTube: http://www.youtube.com/watch?v=v92oPUYxCQQ&list=PLCC05105BA39E9BC0
The document provides an overview of key concepts for the BIOL209: Fundamentals course, including:
- Instructor-provided slides had no impact on attendance but adversely affected exam performance.
- Note-taking and self-testing improves learning. Some students may experience math anxiety or stereotype threat.
- The scientific method involves forming hypotheses and testing them through experimentation and analysis of data.
- Understanding statistical and experimental design principles is important for reproducing and interpreting results. Randomization, replication, and controlling for confounding variables strengthen experimental conclusions.
Recently, the machine learning community has expressed strong interest in applying latent variable modeling strategies to causal inference problems with unobserved confounding. Here, I discuss one of the big debates that occurred over the past year, and how we can move forward. I will focus specifically on the failure of point identification in this setting, and discuss how this can be used to design flexible sensitivity analyses that cleanly separate identified and unidentified components of the causal model.
This document contains information about global warming and arguments related to climate change. It begins with a video link to "An Inconvenient Truth" and a link to the Climate Crisis website. It then provides several images from globalwarmingart.com showing projections and impacts of global warming, such as rising sea levels and shrinking sea ice. The document concludes with information on the greenhouse effect and a module on arguments, including the differences between deductive and inductive reasoning.
Methodological Mistakes and Econometric Consequences (Asad Zaman)
Logical Positivism rests on a mistaken understanding of science: the idea that science is about looking for patterns in the data. This misunderstanding was adopted as the basis for data analysis, leading to a fundamentally flawed methodology for econometrics. The mistake is explained and illustrated, and remedies are suggested.
This document discusses rule-based systems and logic programming. It introduces rule-based systems that represent knowledge as IF-THEN rules and facts. Forward chaining applies rules to derive new facts from the initial facts, while backward chaining starts from a goal and looks for rules that prove it. The document explains Horn clause logic and how Prolog implements backward chaining over Horn clauses, and discusses how forward chaining can be used to dynamically add facts to a knowledge base and derive further facts from them.
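Forward chaining as described above fits in a few lines; the following propositional sketch uses invented facts and rules, not examples from the slides:

```python
def forward_chain(facts, rules):
    """Repeatedly fire IF-THEN rules until no new facts can be derived.
    Each rule is a pair (premises, conclusion) over propositional symbols."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if all premises hold and the conclusion is new.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base (invented for the example):
rules = [
    ({"croaks", "eats_flies"}, "frog"),
    ({"frog"}, "green"),
]
derived = forward_chain({"croaks", "eats_flies"}, rules)
print(sorted(derived))  # ['croaks', 'eats_flies', 'frog', 'green']
```

Backward chaining, as in Prolog, would instead start from the goal `green` and search for a chain of rules whose premises bottom out in the known facts.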
This document provides an introduction to logic, including propositional logic and predicate calculus. It defines key concepts such as logical values, propositions, operators, truth tables, logical expressions, worlds, models, inference rules, quantification, and definitions. Propositional logic manipulates true and false values using operators like AND and OR. Predicate calculus extends this with predicates, constants, functions, and quantification over variables. Inference involves applying rules to derive new statements, but the search space quickly grows too large to explore by hand.
This document discusses key concepts related to data analysis and hypothesis testing. It explains that theories are composed of testable propositions and hypotheses. Hypotheses are evaluated through empirical research designed to test specific predictions. Researchers specify a null hypothesis and an alternate hypothesis, collect data, and use statistical tests to either reject or fail to reject the null hypothesis. The goal is to minimize Type I and Type II errors by selecting an appropriate significance level and sample size. Traditional hypothesis testing uses an alpha level of 0.05. The document provides examples of how to state hypotheses, select a significance level, calculate test statistics, and make conclusions based on comparing p-values to the alpha level.
The Graph Minor Theorem: a walk on the wild side of graphs (Marco Benini)
The document summarizes the Graph Minor Theorem, which states that the set of all finite graphs forms a well-quasi ordering under the graph minor relation. It discusses Robertson and Seymour's 500-page proof of this theorem. It then outlines Nash-Williams' technique for proving that a quasi-order is a well-quasi ordering and describes an attempted proof of the Graph Minor Theorem using this technique. However, the attempt fails due to issues with the "coherence" of embeddings between decomposed components of graphs. Maintaining coherence of embeddings under the graph minor relation is identified as the key challenge in finding a simpler proof of the Graph Minor Theorem.
The famous Kruskal's tree theorem states that the collection of finite trees labelled over a well quasi-order and ordered by homeomorphic embedding forms a well quasi-order. Its intended mathematical meaning is that the collection of finite, connected, acyclic graphs labelled over a well quasi-order is a well quasi-order when ordered by the graph minor relation.
By contrast, the standard proofs show the property holds for trees in the Computer Science sense, together with an ad-hoc, inductive notion of embedding; the mathematical result then follows as a consequence, in a somewhat unsatisfactory way.
In this talk, a variant of the standard proof is illustrated, explaining how the Computer Science and graph-theoretical statements are strictly coupled, and thus why the double statement is justified and necessary.
The Graph Minor Theorem: a walk on the wild side of graphs (Marco Benini)
The Graph Minor Theorem says that the collection of finite graphs ordered by the minor relation is a well quasi-order. This apparently innocent statement hides a monstrous proof: the original result by Robertson and Seymour runs to about 500 pages across twenty articles, in which a new and deep branch of Graph Theory was developed.
The theorem is famous and full of consequences, both on the theoretical side of Mathematics and in applications, e.g., to Computer Science. But no concise proof is available, although many attempts have been made.
In this talk, arising from one such failed attempt, an analysis of the Graph Minor Theorem is presented. Why is it so hard? Assuming the by-now standard Nash-Williams approach to proving it, we illustrate a number of methods that solve or circumvent some of the difficulties. Finally, we show that the core of this line of thought lies in a coherence question common to many parts of Mathematics: elsewhere it has been solved, although we were unable to adapt those solutions to the present framework. So there is hope for a short proof of the Graph Minor Theorem, but it will not be elementary.
Proof-Theoretic Semantics: Point-free meaning of first-order systems (Marco Benini)
This document summarizes a talk on providing a semantics for first-order logical theories using logical categories. The semantics interprets formulae as objects in a category and proofs as morphisms, without assuming that elements exist. Quantifiers are interpreted using stars and costars. A logical category is a prelogical category in which stars and costars exist to interpret all formulae. The semantics is sound and complete: a formula is true if and only if a proof morphism exists. It can interpret many other approaches, and inconsistent theories have "trivial" models.
A talk I gave at Yonsei University, Seoul, on July 21st, 2015.
The aim was to show my background contribution to the CORCON (Correctness by Construction) research project.
I have to thank Prof. Byunghan Kim and Dr Gyesik Lee for their kind hospitality.
Numerical Analysis and Epistemology of Information (Marco Benini)
The slides of my presentation at the workshop "Philosophical Aspects of Computer Science", European Centre for Living Technology, University “Ca’ Foscari”, Venice, March 2015.
Marie Skłodowska Curie Intra-European Fellowship (Marco Benini)
A brief report on my experience as a Marie Curie Research Fellow in Leeds, illustrating to my colleagues what it means to participate in such a program.
I have to acknowledge the kind invitation of the Research Office of the Università degli Studi dell'Insubria and the Rector delegate to research, Prof. Umberto Piarulli.
This document discusses representing data types and programming in an abstract way that hides the concrete representation of data. It presents an approach where programs operate on abstract representations of data rather than concrete implementations, allowing computations to be performed without inspecting the output. As an example, it shows an abstract implementation of list concatenation that computes correctly without knowing the concrete list representation. This approach ensures correctness while preventing inspection of results.
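The talk's formal development is not reproduced in the summary, but the flavour of "computing without inspecting the representation" can be sketched with a fold (Church-style) encoding of lists, where a list is identified with its own fold function. The Python below is an illustrative analogue of the idea, not the talk's actual construction:

```python
# Church-style (fold) encoding of lists: a list is represented by the
# function that folds over it, so no concrete constructor is ever exposed.
nil = lambda cons, empty: empty

def prepend(head, tail):
    # The new list, when folded, applies cons to the head and the folded tail.
    return lambda cons, empty: cons(head, tail(cons, empty))

def concat(xs, ys):
    # Concatenation without inspecting either list's representation:
    # fold xs, using the fold of ys as the base case.
    return lambda cons, empty: xs(cons, ys(cons, empty))

# Only at the very end is a concrete representation chosen
# (here: an ordinary Python list) by supplying cons and empty.
to_python = lambda xs: xs(lambda h, t: [h] + t, [])

a = prepend(1, prepend(2, nil))
b = prepend(3, nil)
print(to_python(concat(a, b)))  # [1, 2, 3]
```

Note that `concat` is defined and could be computed with, and its correctness argued, without ever supplying `cons` and `empty`, which is the point the summary makes about computing without inspecting the output.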
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the 'gradient of explanations'. Analogously to the gradient of abstractions, a gradient of explanations is a sequence of discrete levels of explanation, each one refining the previous, varying formalisation, and thus providing progressive evidence for hidden information. Because of this sequential and coherent uncovering of the information that explains a level of abstraction (the heapsort algorithm in our guiding example), the notion of gradient of explanations allows one to precisely classify purposes in writing software according to the informal criterion of 'depth', and to give a precise meaning to the notion of 'concreteness'.
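Since heapsort is the guiding example, a compact version of the algorithm may help readers who do not have it in mind; this is the standard textbook formulation, not the paper's own presentation:

```python
def heapsort(a):
    """In-place heapsort: build a max-heap, then repeatedly move the
    root (current maximum) to the end and restore the heap property."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):     # heapify the array
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):             # extract maxima one by one
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Each of the two phases (heapify, then repeated extraction) naturally lives at its own level of abstraction, which is what makes heapsort a convenient running example for the method.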
This talk aims at introducing, through a very simple example, a way to represent data types in the λ-calculus, and thus, in functional programming languages, so that the structure of the data types itself becomes a parameter.
This very simple technical trick allows us to reconsider programming as a way to express morphisms between models of a logical theory. As an application, it makes it possible to perform anonymous computations.
From a philosophical point of view, the presented approach shows how it is possible to conceive a real programming system where properties like correctness of programs can be proved, but data cannot be inspected, not even in principle.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Nucleophilic Addition of carbonyl compounds.pptxSSR02
Nucleophilic addition is the most important reaction of carbonyls. Not just aldehydes and ketones, but also carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptxRASHMI M G
Abnormal or anomalous secondary growth in plants. It defines secondary growth as an increase in plant girth due to vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
8.Isolation of pure cultures and preservation of cultures.pdf
Dealing with negative results
1. Dealing with negative results
Research methods and proposal writing
Dr M Benini
Università degli Studi dell’Insubria
kindly hosted by Dr L Longo
marco.benini@uninsubria.it
March 1st, 2016
2. Science is positive
The vast majority of scientific results are stated as answers to
questions of the kind
why
how
The first question asks for an explanation of some phenomenon;
the second asks for a justified method to achieve some result
of interest
( 2 of 19 )
3. Science is positive
Consider the negated questions:
why not
how not
The first question asks for a way to falsify an explanation; the
second asks to exhibit a faulty way to achieve a result, and to
make its failure evident
As a matter of fact, we have a huge number of negative answers
in our experience, and we need them!
( 3 of 19 )
5. Science is positive
When we do research, we collect many negative answers:
when an experiment fails, we have to understand why it did
not succeed
when a (mathematical) proof fails to be found, we have to ask
ourselves what prevents it, possibly searching for a
counterexample
when a procedure fails to achieve the desired result, we have
to understand what went wrong, and why
( 4 of 19 )
9. Science is positive
When we reach the final stage of our research efforts, publishing
the results, we focus on the positive achievements, and we
discard the negative experience we have accumulated
An experienced researcher, instead, should be more effective,
trying to use the negative results to maximise the outcomes
of their efforts
( 5 of 19 )
11. Negative content, positively presented
Although some results we have found are negative, they can
often be positively presented
when a procedure fails, precisely delimit the domain of
applicability, excluding the cases we know are bad: the limit
becomes a necessary condition, and the failure, a
counterexample to a direct generalisation
when a mathematical proof fails, find a counterexample, and
use it to add a condition that prevents its applicability: the
additional hypothesis becomes necessary because of the
counterexample
( 6 of 19 )
14. Negative content, positively presented
When stating a result, use a positive language
Theorem [Gödel] Peano Arithmetic is not complete.
Theorem [Gödel] There is a sentence G in Peano Arithmetic
such that neither G nor ¬G is provable.
Theorem [Gödel] There is a sentence G in Peano Arithmetic
such that G is true on natural numbers, but G is not provable.
( 7 of 19 )
18. Negative content, positively presented
When writing a statement, provide a positive content
Theorem An arbitrary list cannot be sorted in linear time by
comparing elements.
Theorem An arbitrary list can be sorted in O(n log n) steps.
Theorem The heap sort algorithm sorts any list in at most
O(n log n) steps.
( 8 of 19 )
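The O(n log n) bound in the last statement can be illustrated with a minimal sketch in Python, using the standard heapq module (the slide does not prescribe any implementation; this is only one way to realise heap sort):

```python
import heapq

def heap_sort(xs):
    """Sort a list in O(n log n): heapify is O(n), each of the n pops is O(log n)."""
    heap = list(xs)
    heapq.heapify(heap)  # build a min-heap in O(n)
    return [heapq.heappop(heap) for _ in range(len(xs))]

print(heap_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```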
22. Negative content, positively presented
When stating a counterexample, be positive
The real-valued function f(x) = |x| does not admit a derivative in
x = 0.
The real-valued function f(x) = |x| admits f′(x) = −1 as
derivative on the interval (−∞, 0), and f′(x) = 1 on the interval
(0, +∞). Thus, f′(0) is not defined.
( 9 of 19 )
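The two one-sided slopes can also be observed numerically; a minimal Python sketch (the step size 1e-8 is an arbitrary choice for illustration):

```python
def diff_quotient(f, x, h):
    """One-sided difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Approaching 0 from the left and from the right gives different slopes,
# so the derivative of |x| at 0 does not exist.
left = diff_quotient(abs, 0.0, -1e-8)
right = diff_quotient(abs, 0.0, 1e-8)
print(left, right)  # -1.0 1.0
```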
25. Negative content, positively presented
When stating a limit to a procedure, be positive
Using a 12,800 ISO sensitivity in a digital camera prevents
getting a noise-free photograph when shooting in a dark
environment.
Using a 12,800 ISO sensitivity in a digital camera provides a
noisy photograph when shooting in a dark environment, which
can be useful to calculate the correct exposure at a lower
sensitivity, and get a noise-free image.
( 10 of 19 )
29. Impossibility results
Some results are inherently negative. If they are also significant,
it is worth stating them in a negative way
Theorem [Turing] Deciding whether a program terminates on
all inputs is not a computable problem.
Theorem [Turing] There is no computer program that, taking
as input a computer program, decides whether it terminates on
all inputs.
( 11 of 19 )
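The diagonal construction behind Turing's proof can be sketched in Python (a hedged illustration: `halts` is a hypothetical decider, which the theorem says cannot exist):

```python
def contrarian(halts):
    """Given a purported halting decider, build a program it misjudges."""
    def g():
        if halts(g):       # if the decider claims g halts...
            while True:    # ...then g loops forever;
                pass
        return "halted"    # ...otherwise g halts. Either way the decider is wrong on g.
    return g

# Any decider answering "does not halt" on its contrarian is refuted: g halts.
g = contrarian(lambda program: False)
print(g())  # halted
```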
32. Dead ends
When we do research, experiments fail, proofs do not come
through, and procedures give crazy results
We use these failures to better understand our problems, and
eventually to fix what did not work properly
Sometimes, we get into a situation where there seems to be no
way out, in which we are unable to fix the incorrect behaviours
we encounter. We call these situations dead ends
( 12 of 19 )
34. Dead ends
When you get into a dead end, you suffer from a lack of
alternatives. There are many ways to try to get out
Write down what you have got as clearly as possible. The effort
of organising the knowledge collected so far often reveals
alternatives
Show your attempts to a colleague or to your supervisor. A
fresh look often provides a new point of view
Leave the problem aside for a while. When we do not think
consciously about a problem, our mind follows different paths,
and we leave room for new intuitions
( 13 of 19 )
38. Dead ends
It is tempting to get some result out of a long period of hard
work. But, when the result is a dead end, do not publish it!
A dead end leads nowhere, by definition. You have little chance
of motivating your partial achievements
You will spend time writing, submitting the paper, and receiving
comments. You know in advance that the results are not
complete, and that the direction leads nowhere: honestly, any
reviewer will recognise these weaknesses, lowering your chances.
Are you sure you are not wasting even more time?
( 14 of 19 )
41. Publishing or discarding?
A basic rule in economics says to buy when the market is low,
and to sell when the market is high
Consider pursuing a scientific result as an investment. Publishing
is the act which provides you with a return. It makes no sense
to submit an article when you know in advance that it will not
get published: you spend time and effort with no return
( 15 of 19 )
42. Publishing or discarding?
A positive result has a better chance of getting published than a
negative one. But negative results have some chance, too
Try to present your negative outcomes positively
Strong impossibility results are landmarks. Prepare yourself to
defend them, but be sure they will be important in the long
term
Weak impossibility results are interesting only to specialists.
You should carefully evaluate whether your audience is wide
enough to justify the time and effort of publishing such a
negative achievement
( 16 of 19 )
46. Publishing or discarding?
Dead ends are the result of our efforts. What matters in science
is your findings, not the effort you made
You should carefully judge which parts of your efforts are
significant, and which are irrelevant to other people, with no
regard for the time you invested
Do not try to publish what you think is irrelevant! If you think
so, how could you possibly convince a reader? You have to
believe in the value of your results before trying to explain them
( 17 of 19 )
47. Experience
Get into the habit of writing down, as clearly as possible,
everything you find during your research
you can reuse these pieces when writing the final paper
you can use them to help a new collaborator join your
team
you have evidence of your work
you can use them to illustrate the point you have reached to,
e.g., a colleague or to your supervisor
you keep track of what you have understood. In the future,
you may want to reuse these partial results elsewhere, and you
do not want to rework all the details from scratch
( 18 of 19 )
53. Experience
In the long term, negative results will form a fundamental part
of your experience
you will know which directions and tools have little hope of
leading to a result
you will have a record of open problems, not-yet-attempted
alternatives, and variants to explore
well-documented negative results can speed up your
achievements when you hear about a new result that
overcomes a difficulty which blocked you in the past
All these aspects of experience qualify you as an expert
( 19 of 19 )