Many of us data science and business analytics practitioners perform research and analysis for decision makers on a regular basis. The deliverable is often a PowerPoint presentation and/or a model that needs to be productionized. The code used to produce the analysis should also be considered a deliverable.
Many of us perform analysis without reproducibility in mind. With the increasing democratization of data, it is becoming more and more important for people who may not have scientific training to create analyses that someone else can pick up and reproduce. That, and reproducible research is simply solid science.
We are going to spend an evening walking through the various tools available for creating reproducible research on Big Data. You will get an introduction to the Tidyverse of R packages and how to use them. We will discuss the ins and outs of various notebook technologies such as Jupyter and Zeppelin. You will have an opportunity to learn how to get up and running with R and Spark, and the various options you have for learning on real clusters instead of just your local environment. There will also be a quick introduction to source control and the various options you have around using Git.
The theme of the evening will be “getting started”. We will go over various training resources and show you the optimal path from zero to master. Some commentary will be provided on the current state of the job market, along with intel from the front lines of the data science language wars. This is a large topic, and the evening will be fairly dynamic and responsive to the needs of the audience.
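A minimal sketch (in Python as a stand-in, since the session itself centers on R) of two reproducibility habits the evening covers: fix the random seed and record the environment, so another analyst can rerun the analysis and get the same numbers. The function and values below are illustrative, not taken from the talk.

```python
import platform
import random
import sys

def run_analysis(seed=42):
    """A stand-in analysis: deterministic because the seed is fixed."""
    random.seed(seed)
    sample = [random.random() for _ in range(5)]
    return sum(sample) / len(sample)

# Record provenance alongside the result so the run can be reproduced later.
provenance = {
    "python": platform.python_version(),
    "platform": sys.platform,
    "seed": 42,
}

result_a = run_analysis(seed=42)
result_b = run_analysis(seed=42)
print(result_a == result_b)  # True: same seed, same result
```

The same idea carries over directly to R (`set.seed()` plus `sessionInfo()`).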
Bob Wakefield has spent the better part of 16 years building data systems for many organizations across various industries. He has been running Hadoop in a lab environment for 3 years. He is the principal of Mass Street Analytics, LLC, a boutique data consultancy. Mass Street is a Hortonworks Consultant Partner and Confluent Partner.
In his spare time, he likes to work on an equity investment application that combines various sources of information to automatically arrive at investing decisions. When he is not doing that, you’ll find him flying his A-10 simulator. Full CV can be found here: https://www.linkedin.com/in/bobwakefieldmba/
Capacity Building: Data Science in the University At Rensselaer Polytechnic ... - James Hendler
In this short talk, presented at the ITU's Capacity Building Symposium, I review some of the pedagogical innovation in data science happening at Rensselaer (RPI) and some aspects of teaching data science that are crucial to larger success.
Data Science Provenance: From Drug Discovery to Fake Fans - Jameel Syed
Knowledge work adds value to raw data; how this activity is performed is critical for how reliably results can be reproduced and scrutinized. With a brief diversion into epistemology, the presentation will outline the challenges for practitioners and consumers of Big Data analysis, and demonstrate how these were tackled at Inforsense (life sciences workflow analytics platform) and Musicmetric (social media analytics for music).
The talk covers the following issues with concrete examples:
- Representations of provenance
- Considerations to allow analysis computation to be recreated
- Reliable collection of noisy data from the internet
- Archiving of data and accommodating retrospective changes
- Using linked data to direct Big Data analytics
Data for Science: How Elsevier is using data science to empower researchers - Paul Groth
Each month 12 million people use Elsevier’s ScienceDirect platform. The Mendeley social network has 4.6 million registered users. 3500 institutions make use of ClinicalKey to bring the latest in medical research to doctors and nurses. How can we help these users be more effective? In this talk, I give an overview of how Elsevier is employing data science to improve its services, from recommendation systems to natural language processing and analytics. While data science is changing how Elsevier serves researchers, it’s also changing research practice itself. In that context, I discuss the impact that large amounts of open research data are having and the challenges researchers face in making use of it, in particular in terms of data integration and reuse. We are just beginning to see how technology and data are changing science, and correspondingly how this affects the best ways to empower those who practice it.
This talk takes as inspiration Prof. Carole Goble's notion that research communication should be more like software development practices. It looks at some of the state of the art and how it fits into that framework. It argues that we are moving towards that vision and discusses some of the norms that need to be accepted in this new world. Presented at http://www.dagstuhl.de/15302
Top 10 Myths Regarding Data Scientists Roles in India | Edureka - Edureka!
YouTube Link: https://youtu.be/-4XGrD7JPG0
** Data Science Master Program: https://www.edureka.co/masters-program/data-scientist-certification **
This PPT debunks myths about data scientist roles in India and abroad. Data science, being a new field, has gained good momentum, and there are a few misconceptions about data scientists that this PPT clears up.
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
Presentation at "International knowledge graph workshop" at KDD 2020. The short overview talk shows how we have moved from Semantic Web to Linked Data to Knowledge Graphs. We argue that the same "a little semantics goes a long way" principle from the early days of the Semantic Web still is needed today -- some lessons learned and steps ahead are outlined.
RDAP 15: “This is just for me”: Researchers on their data documentation pract... - ASIS&T
Research Data Access and Preservation Summit, 2015
Minneapolis, MN
April 22-23, 2015
Part of "Beyond metadata: Supporting non-standardized documentation to facilitate data reuse"
Sara Mannheimer, Data Management Librarian, Montana State University
Data and Donuts: How to write a data management plan - C. Tobin Magle
This presentation describes best practices for how to write a data management plan for your research data. Additionally, it provides information about finding funder requirements, metadata standards, and repositories.
RDAP 15: Beyond Metadata: Leveraging the “README” to support disciplinary Doc... - ASIS&T
Research Data Access and Preservation Summit, 2015
Minneapolis, MN
April 22-23, 2015
Part of “Beyond metadata: Supporting non-standardized documentation to facilitate data reuse”
Fortune Teller API - Doing Data Science with Apache Spark - Bas Geerdink
This presentation from the Endpoint 2015 conference gives an overview of a short data science project: predicting the future happiness of a person, as if he or she walks into a circus tent! First, the domain problem is analyzed. Then, the data is gathered and analyzed. Finally, a linear regression model is created and the app is published in the form of a REST API. The demo uses Apache Spark and Zeppelin, and can be found on GitHub: https://github.com/geerdink/FortuneTellerApi
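A minimal sketch (plain Python, with hypothetical data) of the kind of model the Fortune Teller demo fits with Spark: ordinary least squares on a single feature. The feature, target, and values below are illustrative only.

```python
def fit_line(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical training data: (age, happiness score).
ages = [20, 30, 40, 50, 60]
happiness = [6.0, 6.5, 7.0, 7.5, 8.0]
a, b = fit_line(ages, happiness)
print(round(a + b * 35, 2))  # predicted happiness at age 35: 6.75
```

In the actual project this role is played by Spark's MLlib regression, exposed behind a REST endpoint.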
RDAP13 Elizabeth Moss: The impact of data reuse - ASIS&T
Kathleen Fear, ICPSR, University of Michigan
“The impact of data reuse: a pilot study of 5 measures”
Panel: Data citation and altmetrics
Research Data Access & Preservation Summit 2013
Baltimore, MD April 4, 2013 #rdap13
Data discovery and metadata - Natasha Simons
Research Data Management workshop at the iSchools Data Science Winter Institute, 7-9 December 2017, University of Hong Kong
What's wrong with our scholarly infrastructure? - Björn Brembs
First of a two-part series on the issues scientists face with their expensive, antiquated infrastructure and how to overcome these problems. First part on problems, second part (upcoming) on solutions.
In 2014, librarians at Washington University in St. Louis developed an annual research conference for advanced graduate students in the Humanities. The conference was inspired by the desire to connect with graduate students at the dissertation stage, as librarians had observed a gap in librarian-graduate student interactions between the first years of graduate school and the point when students embark on their own dissertation research. Librarians discovered that graduate students often struggle in isolation with similar research questions as well as project management and dissertation writing; thus, we aptly entitled the conference “You’re in Good Company: A Mini-Conference for Advanced Graduate Students in the Humanities.” We will share the make-up of the conference, gathering input on session offerings, funding considerations, marketing, assessment, and administrative needs.
Our presentation will focus in part on the variety of sessions we have been able to offer and our collaborations with faculty and other campus partners. Sessions included not only advanced research skills but also hands-on workshops for technologies such as Zotero, Scrivener, and mobile apps. Faculty presented sessions about dissertation writing, time management strategies, tips for getting published and funded, as well as their own personal experiences.
The conference has demonstrated the value of the library to the university community: You’re in Good Company will be in its third year and appears to be filling a void by furthering research skills, discovery of Humanities resources, and awareness of new technologies. We will also share our developing body of conference video and audio recordings. Finally, we will present recommendations to assist other librarians interested in developing a similar conference.
KEYNOTE: Erin McKiernan, My pledge to be open (Yeah, how’s that going?) - Right to Research
As given at OpenCon 2015 Brussels. Marking the official launch of http://whyopenresearch.org/
Understanding the Depth of Google Scholar and its Implication for Webometrics... - Idowu Adegbilero-Iwari
A presentation on Google Scholar, webometrics ranking of higher institutions and Open Access to research publications. The presentation details the parameters Google scholar uses for indexing research publications and the implication of that for the visibility of scholars, their institutions and their webometrics rank.
MyScienceWork—A Global Platform for Researchers, Institutions, and Publishers
When a scientific social network is associated with an institutional archive with perfect usage and access rights for article sharing, scholarly communications increase dramatically! The global platform www.mysciencework.com (MSW) serves the needs of individual researchers eager to access and share research. With a global community of 500,000 researchers, MyScienceWork’s goal is to democratize science and make research more open and discoverable. For several years, the MSW team has been working with publishers and institutions to map, structure, manage, measure impact, and promote research content. Their work has been based on 70 million scientific publications and 12 million patents from thousands of institutional repositories and publisher databases.
The social network aspect is a critical part of MyScienceWork. It is the first scholarly social network to scrupulously respect the copyrights of publications and give publishers the opportunity to increase their content visibility and website traffic.
To increase international visibility of scholarly works, MSW not only includes user uploads to the service but also indexes content from a wide variety of sources, representing millions of articles and patents covering a wide breadth of scientific disciplines from all countries. The MSW database averages over one million visitors per month, which increases the reach of any content provided by a repository or publisher indexed in MyScienceWork by an average of 60%.
Featured Presenters:
- Virginie Simon, CEO & Co-Founder, MyScienceWork
- Yann Mahé, Sales and Marketing Director, MyScienceWork
- Dr. Marc Diederich, Head of the Laboratoire de Biologie Moléculaire et Cellulaire du Cancer (LBMCC)
- Darrell Gunter, Director North America, International Association of STM Publishers
Presented by David Smith at The Data Science Summit, Chicago, April 20 2017.
The ability to independently reproduce results is a critical issue within the scientific community today, and is equally important for collaboration and compliance in business. In this talk, I'll introduce several features available in R that help you make reproducibility a standard part of your data science workflow. The talk will include tips on working with data and files, combining code and output, and managing R's changing package ecosystem.
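One of the workflow habits the talk covers, working reproducibly with data and files, can be sketched in a few lines (Python here as a stand-in for the R tooling the talk actually presents): fingerprint each input file so a later rerun can verify it is analyzing exactly the same bytes. The file contents below are a hypothetical stand-in for a real dataset.

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a temporary "dataset"; in practice this would be your raw input file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".csv") as f:
    f.write(b"id,value\n1,3.14\n")
    path = f.name
digest = file_sha256(path)
os.remove(path)
print(len(digest))  # 64 hex characters
```

Storing this digest alongside the analysis output makes silent changes to input data detectable on the next run.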
Springer LOD conference portal. Demo paper - screenshots - Aliaksandr Birukou
This is a slide deck with the main features I have used as a backup for the demo at The 16th International Semantic Web Conference – ISWC2017 in Vienna next week. Many thanks to Volha Bryl and Andrey Gromyko from Net Wise for helping me to prepare the demo, as well as Alfred Hofmann (Lecture Notes in Computer Science (LNCS)) and Henning Schoenenberger (Knowledge Graph (SN SciGraph)) for continuous support. Of course, this is also based on the earlier work of Markus Kaindl and Kai Eckert from Stuttgart Media University.
If you want to read the original paper - here it is: http://birukou.eu/publications/papers/201710Birukou-ISWC2017-springer-lod.pdf
Research Data (and Software) Management at Imperial: (Everything you need to ... - Sarah Anna Stewart
A presentation on research data management tools, workflows and best practices at Imperial College London with a focus on software management. Presented at the 2017 session of the HPC Summer School (Dept. of Computing).
MediaEval 2018: NewsREEL Multimedia at MediaEval 2018: News Recommendation wi... - multimediaeval
Paper: http://ceur-ws.org/Vol-2283/MediaEval_18_paper_5.pdf
Youtube: https://youtu.be/tgO8k3mNH4g
Andreas Lommatzsch, Benjamin Kille, Martha Larson, Frank Hopfgartner and Leif Ramming, NewsREEL Multimedia at MediaEval 2018: News Recommendation with Image and Text Content. Proc. of MediaEval 2018, 29-31 October 2018, Sophia Antipolis, France.
Abstract: NewsREEL Multimedia premieres in 2018 as part of the MediaEval Benchmarking Initiative. The NewsREEL task combines recommendation algorithms with image and text analysis. Participants must predict engagement with news items based on text snippets and annotated images. Several major German news portals have supplied data. The algorithms are evaluated in terms of precision on unknown data. This paper describes the task and the provided data in detail and explains the applied evaluation approach. The algorithms are evaluated based on Precision and Average-Precision for the top news items.
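The precision-style evaluation the abstract mentions can be sketched as follows (plain Python, hypothetical item IDs, not the benchmark's actual scoring code): score a ranked recommendation list by the fraction of its top-k items the user actually engaged with.

```python
def precision_at_k(recommended, engaged, k):
    """Fraction of the top-k recommended items that were actually engaged with."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in engaged)
    return hits / k

# Hypothetical ranked recommendations and the set of items the user clicked.
recommended = ["a", "b", "c", "d"]
engaged = {"b", "d", "e"}
print(precision_at_k(recommended, engaged, 2))  # 0.5: only "b" is in the top 2
```

Average Precision extends this idea by averaging precision over the rank positions of the engaged items.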
Presented by Benjamin Kille
Presentation for the Open Science Week in Dublin, 2022. The presentation outlines motivations and solutions from the "Replacing Academic Journals" proposal:
https://doi.org/10.5281/zenodo.5526634
Good Riddance: Academic Publishers are Abandoning Publishing - Björn Brembs
Talk at RIOT science club on the myriad ways in which science would do so much better if scholarly institutions took their money and spent it on modern information technology instead of antiquated and counter-productive journals.
Publish or perish: how our literature serves the anti-science agenda - Björn Brembs
Online presentation at the Leibniz-Institute of Freshwater Ecology and Inland Fisheries on how our journals are one of the core reasons for the reliability, affordability and functionality issues science is experiencing today and how to fix them.
Modernizing the Infrastructure: So Much More Than Just Access - Björn Brembs
Talk on the reasons why the digital information infrastructure in science is so hopelessly outdated, and on the devastating consequences this has for research.
Incentives for infrastructure modernization - Björn Brembs
Slides for EUA meeting explaining a strategy for open science infrastructure reform. The strategy is laid out in detail here:
http://bjoern.brembs.net/2018/11/maybe-try-another-kind-of-mandate/
The neurobiological nature of free will - Björn Brembs
Our own experience of our free will has been classified as either supernatural or an illusion because it is difficult to reconcile with macroscopic determinism as well as with microscopic quantum randomness. The former constitutes a prison in which no freedom can exist; the latter signifies destructive chaos rather than creative action. Lost in this dichotomy is the demonstrated constructive combination of chance and necessity in complex systems, such as evolution. Recent converging evidence from neuroscience, ecology and genetics suggests that nervous systems, including human brains, have evolved neural circuits that harness (potentially quantum) chance events by embedding them in the controlling architecture of neuronal rules, in order to carefully inject them as creative components into ongoing goal-directed behavior. This presentation contains evidence that this form of behavioral variability may constitute a necessary neural mechanism for free will to evolve in humans.
The evolutionary conserved neurobiology of operant learning - Björn Brembs
Presentation at the 2016 annual meeting of the Mind and Brain College of the University of Lisbon on the multiple learning systems interacting during operant learning.
A replication crisis in the making: how we reward unreliable science - Björn Brembs
Presentation at the 2016 annual meeting of the Mind and Brain College of the University of Lisbon on the infrastructural causes of the apparent replication crisis in the experimental/biomedical sciences.
General brain function: Action – Outcome Evaluation - Björn Brembs
My presentation for the brain and behavior symposium entitled "Brain, Cognition, Behavior, Evolution: Polyglot to Monoglot?" organized by Jerry Hogan at the Institute of Advanced Studies of the University of São Paulo, Brazil.
Journal Club: Behavioral variability in C. elegans - Björn Brembs
My journal club presentation for Tuesday, March 17, 2015 on the paper: Feedback from Network States Generates Variability in a Probabilistic Olfactory Circuit http://dx.doi.org/10.1016/j.cell.2015.02.018
My presentation for the General Online Research Conference in Cologne, Germany, on March 6, 2014. On these slides I detail our proof of concept for making all our digital data openly accessible online by default, automatically, whenever the researcher who collected them evaluates them using our custom R scripts.
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse, because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services, along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple but effective semantic and latent representations, and to make these available through standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and that of others in the field, creates a baseline for building trustworthy and easy-to-deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
This PDF is about schizophrenia.
For more details, visit the SELF-EXPLANATORY channel on YouTube:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Thanks!
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... - Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr^−1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Maternally inherited in most plants, but can vary by species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
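Cytoplasmic segregation and heteroplasmy can be illustrated with a toy simulation: if each daughter cell draws its organelles at random from the mother's heteroplasmic pool, the mutant fraction drifts over generations until many lineages become homoplasmic. This is a deliberately simplified sketch (resampling with replacement back to a fixed organelle count), not a model of real organelle replication.

```python
import random

def divide(organelles):
    """Toy cytoplasmic segregation: a daughter cell draws its organelles
    at random (with replacement) from the mother's pool, keeping the
    organelle count constant across generations."""
    return [random.choice(organelles) for _ in range(len(organelles))]

random.seed(42)
cell = ["mutant"] * 10 + ["wildtype"] * 10   # 50% heteroplasmic start
for generation in range(100):
    cell = divide(cell)

fraction = cell.count("mutant") / len(cell)
# Random drift drives lineages toward homoplasmy (fraction near 0.0 or 1.0),
# which is how variegated sectors and petite yeast lineages can arise.
print(fraction)
```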
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... — Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected, due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
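For a rough sense of what "highest spatial resolution from the ground" means, the diffraction limit θ ≈ 1.22 λ/D can be projected to a physical scale at Io. The numbers below are back-of-the-envelope assumptions (visible light at 550 nm, one 8.4 m LBT primary mirror, Earth–Jupiter distance ≈ 4.2 AU), not values quoted by the paper.

```python
WAVELENGTH = 550e-9          # m, visible light (assumed)
APERTURE = 8.4               # m, diameter of one LBT primary mirror
DISTANCE = 4.2 * 1.496e11    # m, rough Earth-Jupiter distance (assumed)

theta = 1.22 * WAVELENGTH / APERTURE     # diffraction limit, radians
arcsec = theta * 206265.0                # radians -> arcseconds
km_at_io = theta * DISTANCE / 1000.0     # projected scale at Io, km

print(f"~{arcsec:.3f} arcsec, ~{km_at_io:.0f} km at Io")
```

Under these assumptions the limit works out to roughly 0.016″, i.e. features of order tens of kilometres on Io, which is why a large plume deposit is detectable from Earth.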
5. • Limited access
• No scientific impact analysis
• Lousy peer review
• No global search
• No functional hyperlinks
• Useless data visualization
• No submission standards
• (Almost) no statistics
• No content mining
• No effective way to sort, filter, and discover
• No networking features
• etc.
…it’s like the web in 1995!
7. Report on Integration of Data and Publications, ODE Report 2011
http://www.alliancepermanentaccess.org/wp-content/plugins/download-monitor/download.php?id=ODE+Report+on+Integration+of+Data+and+Publications
12. • Email
• Webspace
• Blog
• Library access card
• ‘Green’ OA repository
• No archiving of publications
• No archiving of code
• No archiving of data
18. Lozano, G. A., Larivière, V., & Gingras, Y. (2012). The weakening relationship between the Impact Factor and papers' citations in the digital age. arXiv:1205.4328
19. Macleod, M. R., et al. (2015). Risk of Bias in Reports of In Vivo Research: A Focus for Improvement. doi:10.1371/journal.pbio.1002273
20. Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: unintended consequences of journal rank. doi:10.3389/fnhum.2013.00291
21. Munafò, M., Stothart, G., & Flint, J. (2009). Bias in genetic association studies and impact factor. doi:10.1038/mp.2008.77
22. Brown, E. N., & Ramaswamy, S. (2007). Quality of protein crystal structures. doi:10.1107/S0907444907033847
37. “The decision, based on market and competitor analysis, will bring Emerald’s APC pricing in line with the wider market, taking a mid-point position amongst its competitors.” — Emerald spokesperson
38. Save time and money by making science open by default as an added benefit
61. Publikationstätigkeit (original German):
Vollständige Publikationsliste, darunter Originalarbeiten als Erstautor/in, Seniorautor/in, Impact-Punkte insgesamt und in den letzten 5 Jahren, darunter jeweils gesondert ausgewiesen als Erst- und Seniorautor/in, persönlicher Scientific Citations Index (SCI, h-Index nach Web of Science) über alle Arbeiten.
Publications (English translation):
Complete list of publications, including original research papers as first author and senior author; impact points in total and in the last 5 years, with first and senior authorships marked separately in each case; personal Scientific Citations Index (SCI, h-index according to Web of Science) over all publications.
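The h-index demanded above has a simple standard definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have
    at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```

Note that databases differ on which citations count, so Web of Science, Scopus, and Google Scholar typically report different h-indices for the same author.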
62. 1) Publish in the “Journal of Unreliable Research” of your field – or take your chances
#getyourGlam
63. 2) Publish everything else where publication is quick and where it can be widely read
#dontwastetimepublishing
64. 3) Ask your PI what will happen to all the work you put into your code & data and how you can get as many people as possible to use it
#openscience
67. (Sources: Van Noorden, R. (2013). Open access: The true cost of science publishing. doi:10.1038/495426a; Packer, A. L. (2010). The SciELO Open Access: A Gold Way from the South. Can. J. High. Educ. 39, 111–126)
[Chart: costs (thousand US$/article), Legacy vs. SciELO; potential for innovation: 9.8 b p.a.]