UserZoom Webinar: How to Conduct Web Customer Experience Benchmarking – UserZoom
You can't manage what you can't measure, so... How do you actually measure user experience?
In this webinar we covered what, why, and how to conduct website user experience & usability benchmarking. We discussed how to effectively measure the quality of a website's user experience across various competitors, within one industry, across time, using an online quantitative research methodology commonly referred to as "unmoderated remote usability testing."
Simple Ways of Planning, Designing and Testing Usability of a Software Produc... – Karolina Zmitrowicz
Originally presented at QS-Tag 2016
https://www.qs-tag.de/en/abstracts/tag-1/simple-ways-of-planning-designing-and-testing-usability-of-a-software-product/
[QE 2018] Paul Gerrard – Automating Assurance: Tools, Collaboration and DevOps – Future Processing
The Digital Transformation is real. It is having a profound effect on how business is done and the nature of the systems required to deliver productive customer experiences and consequent business benefits. The demand for flexible and rapid delivery of software and systems is there. Software development teams can deliver if they adopt the disciplines of Continuous Delivery, DevOps and in-production experimentation. The barrier to achieving success in the software delivery process is likely to be the inability of testers to align testing and automated testing in particular to the development processes. Our track record in test automation is not good enough. In order to succeed a new way of thinking about testing is required, and the New Model of Testing offers a way of identifying the elements of the test process that must be ‘shifted left’. This does not necessarily mean testers move, but rather that the thinking processes must move.
During this lecture, Paul showed that users, BAs, and developers can take some responsibility in this area. The New Model applies to all testing, whether performed in development, integration, system or user testing, by people or tools.
Intro to machine learning for web folks @ BlendWebMix – Louis Dorard
Get a business understanding of ML by going through key concepts and concrete use cases that illustrate its possibilities for web-based companies.
In this presentation I introduce new technology that makes ML more accessible, and I explain in simple terms the limitations to what can be achieved. Finally, I discuss pragmatic considerations of real-world applications and I give a sneak peek at the Machine Learning Canvas — a framework for describing a predictive system that uses ML to provide value to its end user.
--
The use of Machine Learning has grown strongly in recent years, to the point that it is now present in roughly half of the applications we use on our smartphones. Even if they are unaware of Machine Learning (ML), mobile and web application users have come to expect the predictive features that ML makes possible. Moreover, within a company, ML represents a significant competitive advantage, allowing data to be leveraged by coupling it with machine intelligence.
Previously reserved for large companies, this technology is becoming democratized thanks to new ML-as-a-Service tools and prediction APIs. To take advantage of it, we will go over the keys to understanding how machine learning works, which underpin its possibilities and its limits. We will also see how to start using it in your own project, through the Machine Learning Canvas, which describes a system where ML is at the heart of value creation.
Date: Monday, 3 January 2022
Lecture no. 143 of the #تواصل_تطوير initiative
Speaker: Eng. Mohamed El-Rafei Tarbay
Head of the Programmers' Syndicate in Dakahlia
Titled "IT INDUSTRY": How to Get Into IT With Zero Experience
On Monday, 3 January 2022
7:00 PM Cairo time
8:00 PM Makkah time
Attendance is via the Zoom app:
https://us02web.zoom.us/meeting/register/tZUpf-GsrD4jH9N9AxO39J013c1D4bqJNTcu
Note that the lecture will also be streamed live on the Egyptian Engineers Association's channels.
We hope to succeed in offering what benefits engineers and the engineering profession in our Arab world.
May God grant success.
To contact the initiative's management, use the Telegram channel:
https://t.me/EEAKSA
Follow the initiative and the live stream through our various channels:
LinkedIn and e-library:
https://www.linkedin.com/company/eeaksa-egyptian-engineers-association/
Twitter:
https://twitter.com/eeaksa
Facebook:
https://www.facebook.com/EEAKSA
YouTube:
https://www.youtube.com/user/EEAchannal
General lecture registration link:
https://forms.gle/vVmw7L187tiATRPw9
Note: free attendance certificates are available for those who fill in the evaluation form linked at the end of the lecture.
From Data to Artificial Intelligence with the Machine Learning Canvas — ODSC ... – Louis Dorard
The creation and deployment of the predictive models at the core of artificially intelligent systems are now largely automated. However, formalizing the right machine learning problem that will leverage data to make applications and products more intelligent — and to create value — remains a challenge.
The Machine Learning Canvas is used by teams of managers, scientists and engineers to align their activities by providing a visual framework that helps specify the key aspects of AI systems: value proposition, data to learn from, usage of predictions, constraints, and measures of performance. In this presentation, we’ll motivate the usage of the MLC, we'll explain its structure, how to fill it in, and we’ll go over some example applications.
Being a very brief history of how "architecture" became a thing in software, and of how it delivers on its core claim to fame, which is:
Enabling you to Reason & Calculate about quite vague "Quality" requirements, and thereby to gain confidence in a successful system and happy customers.
Alfonso de la Nuez's talk, "How to conduct global UX benchmarking", at the BigDesign event, about what, why, and how to conduct website user experience & usability benchmarking.
Supervised learning is a fundamental concept in machine learning, where a computer algorithm learns from labeled data to make predictions or decisions. It is a type of machine learning paradigm that involves training a model on a dataset where both the input data and the corresponding desired output (or target) are provided. The goal of supervised learning is to learn a mapping or relationship between inputs and outputs so that the model can make accurate predictions on new, unseen data.
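This input-to-output mapping can be illustrated with a minimal sketch: a toy 1-nearest-neighbour rule trained on made-up labelled points (illustrative only; it is not tied to any particular library's API).

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# The model "learns" by memorising (input, label) pairs, then predicts the
# label of the closest training point for new, unseen inputs.

def predict(train, x_new):
    """Return the label of the training example whose input is nearest to x_new."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x_new))
    return nearest[1]

# Labelled data: (input, desired output) pairs.
train = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

print(predict(train, 1.5))  # near the "small" cluster -> "small"
print(predict(train, 8.5))  # near the "large" cluster -> "large"
```

Real systems use richer models and feature spaces, but the shape of the problem is the same: labelled examples in, a predictive mapping out.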
Modern Perspectives on Recommender Systems and their Applications in Mendeley – Kris Jack
Presentation given for one of Pearson's Data Research teams. It motivates the use of recommender systems, describes common approaches to building and evaluating them and gives examples of how they are used in Mendeley. Thanks to Maya Hristakeva for creating some of the slides.
Machine Learning 2 Deep Learning: An Intro – Si Krishan
Provides a brief introduction to machine learning, reasons for its popularity, a simple walk-through example, and then the need for deep learning and some of its characteristics. This is an updated version of an earlier presentation.
Practical Explainable AI: How to build trustworthy, transparent and unbiased ... – Raheel Ahmad
This presentation is from the Federated & Distributed Machine Learning Conference. This talk focuses on why we need explainable AI and how we can build models that are trustworthy, transparent, and unbiased.
Responsible AI in Industry: Practical Challenges and Lessons Learned – Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Amp Up Your Testing by Harnessing Test Data – TechWell
The data tsunami is coming—or maybe it’s already here. Data science, big data, and machine learning are the buzzwords of the day. Data is changing our products and the way we build them, so we should also change the way we verify our products. In a world of increasing connectivity and accelerated deadlines, data can provide an edge. But what role should data play in assessing the quality of software? Where does it make sense to use data, and where is it inappropriate? Steve Rowe covers the history of how data fits into testing, explains why data is an important tool to have in your quality toolkit, and presents strategies for adding data to your testing plans and using it more effectively in your testing.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... – Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4−0.9 µm) and novel JWST images with 14 filters spanning 0.8−5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3−31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5−15. These objects show compact half-light radii of R1/2 ∼ 50−200 pc, stellar masses of M⋆ ∼ 10^7−10^8 M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... – Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
This PDF is about schizophrenia.
For more details, visit the SELF-EXPLANATORY channel on YouTube:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Nutraceutical market, scope and growth: Herbal drug technology – Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes products such as functional foods, beverages, and dietary supplements that provide health benefits beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and demand for natural and preventative health solutions increases, this industry is expanding quickly. Product-formulation innovations and the use of cutting-edge technology for personalized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing, offering significant opportunities for research and investment across categories including vitamins, minerals, probiotics, and herbal supplements.
(May 29th, 2024) Advancements in Intravital Microscopy: Insights for Preclini... – Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool for studying cellular behavior over time and space in vivo. Much of our understanding of cell biology has been gained using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows for ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism maintains physiological relevance and provides insights into the progression of disease, response to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization, and tumor metastasis in exceptional detail. This webinar also gives an overview of IVM in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
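The cytoplasmic segregation and heteroplasmy described above can be illustrated with a small simulation. This is a hedged sketch: the organelle count, the genotype labels "A"/"B", and the random halving at division are made-up simplifications, not a biological model.

```python
# Sketch of cytoplasmic segregation: a heteroplasmic cell carries two
# organelle genotypes ("A" and "B"). At each division the organelles are
# duplicated and then partitioned at random between the two daughter cells,
# so the genotype ratio drifts from generation to generation and a lineage
# can eventually become homoplasmic (all "A" or all "B").
import random

def divide(organelles):
    """Duplicate each organelle, then split the pool randomly in half."""
    pool = organelles * 2          # replication before division
    random.shuffle(pool)           # random partitioning
    half = len(pool) // 2
    return pool[:half], pool[half:]

random.seed(0)                     # fixed seed for a repeatable run
cell = ["A"] * 5 + ["B"] * 5       # 50:50 heteroplasmic starting cell
for generation in range(20):
    daughter1, daughter2 = divide(cell)
    cell = daughter1               # follow one lineage of daughter cells
print(cell.count("A"), cell.count("B"))
```

Running this repeatedly with different seeds shows the key point of heteroplasmy: identical starting cells give lineages with very different genotype ratios, purely by chance.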
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
Introduction:
RNA interference (RNAi), or Post-Transcriptional Gene Silencing (PTGS), is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros' lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the worm's development.
Two small transcripts of lin-4 (22 nt and 61 nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that these transcripts must be causing the silencing through RNA-RNA interactions.
Types of RNAi (non-coding RNAs):
miRNA: 23-25 nt long; trans-acting; binds its target mRNA with mismatches; causes translation inhibition.
siRNA: 21 nt long; cis-acting; binds its target mRNA through a perfectly complementary sequence.
piRNA: 25-36 nt long; expressed in germ cells; regulates transposon activity.
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
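As a rough illustration of these three steps, here is a toy string model. Everything in it is an illustrative assumption, not biochemistry: the example sequences are invented, Dicer is reduced to chopping into 21-nt pieces, and RISC is reduced to finding a perfectly complementary site and splitting the mRNA there.

```python
# Toy model of the RNAi steps above, with strings standing in for RNA.
# Dicer chops long dsRNA into ~21-nt fragments; RISC keeps one strand as a
# guide and "degrades" any mRNA containing the complementary sequence.

COMP = str.maketrans("AUGC", "UACG")   # A<->U, G<->C base pairing

def dicer(long_rna, size=21):
    """Dicer step: cut a long RNA strand into guide-sized fragments."""
    return [long_rna[i:i + size] for i in range(0, len(long_rna) - size + 1, size)]

def risc_silence(mrna, guide):
    """RISC step: cleave the mRNA at the site complementary to the guide."""
    target = guide.translate(COMP)[::-1]       # reverse complement = target site
    i = mrna.find(target)
    if i == -1:
        return [mrna]                          # no complementary site: mRNA survives
    return [mrna[:i], mrna[i + len(target):]]  # sequence-specific cleavage

mrna  = "AUGGCUAAGGCUUCGGAUCCGAAGCCUUAGCCAUGCUAAG"  # invented mRNA
guide = "GGCUUCGGAUCCGAAGCCUUA"                     # invented 21-nt guide strand
print(risc_silence(mrna, guide))   # -> ['AUGGC', 'UUAGCCAUGCUAAG']
```

The point of the sketch is the sequence specificity: only an mRNA carrying the exact complement of the guide is cut; any other transcript passes through untouched.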
THE RISC COMPLEX:
RISC is a large (>500 kDa) multi-protein RNA-binding complex that triggers sequence-specific mRNA degradation.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: an endonuclease (RNase III family).
Argonaute: the central component of the RNA-Induced Silencing Complex (RISC).
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute.
ARGONAUTE PROTEIN:
1. PAZ (PIWI/Argonaute/Zwille): recognizes the target mRNA.
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN – Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
6. Brain-Behavior Predictive Modeling: My Journey
Value-Based Machine Learning | Intro | Accuracy | Utility | What’s Next
(Timeline slide spanning 2017-2023:)
Ph.D. candidate @ Donders Institute: Big Data Normative Modeling + Transfer Learning to Clinical Datasets; Whistler: “Developmental Mega Sample”
Data Scientist @ UMich: Predicting BrainAge or Cognition; Whistler: “Value-Based Machine Learning”
7. Brain-Behavior Predictive Modeling: Current Status
Combine a bunch of data from open datasets.
Fit a bunch of different algorithms, ranging from simple to super complicated.
Realize there is little overlap in available phenotypes across these datasets; you are left with age, sex, maybe cognition (if you’re lucky).
Realize that there isn’t a lot of signal in the data, and that you can’t even predict age that well (maybe within ~3-5 years).
Publish your results anyway, either (a) being super optimistic and slightly overselling the interpretation and potential, or (b) sharing your honest viewpoint (using MRI doesn’t help much) and having trouble finding a journal that will publish this perspective.
Then either (a) repeat, or (b) leave for a data science industry job or another field “where ML can have more impact”.
8. Brain-Behavior Predictive Modeling: Bingo Card
[3×3 bingo grid:] fluid intelligence · brain age · poor reliability · reference to Marek et al. Nature paper · no confound correction… “could be motion” · HCP / ABCD / UK Biobank · r = 0.28 · “has clinical potential (one day)” · “we need a bigger sample size”
14. Definition: Accuracy
• The quest for high performance.
• A narrow objective of becoming more accurate, and an immediate (short-term) action plan for how to achieve this goal (minimize the loss function on a particular set of data).
15. Definition: Utility
• Consider utility to be more closely aligned with the model’s purpose (i.e., answering the research question and adding real-world value).
• Utility looks at the bigger picture and makes creative adjustments to align with the ultimate research goal and real-world application.
16. Bringing Together Optimization, Accuracy, & Utility
• In the process of setting up the optimization problem, we convinced ourselves that it makes sense to optimize for accuracy because it is more easily formulated mathematically than utility…
• …But if you zoom out to look at the bigger picture, you realize the goal of the A.I. field is to do useful stuff that makes life easier for humans, not to create intelligence (become more accurate).
17. Measuring Accuracy
• A single standard metric that represents the model’s ability to predict observations in the test set.
18. Optimization for Accuracy
• A specific loss function is used to improve the model during training/validation. Often the same metric is used to evaluate “out of sample” performance on the test set.
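The optimization-for-accuracy loop described on this slide can be sketched in a few lines. This is a minimal, illustrative example (not from the talk): plain gradient descent on a linear model, iteratively updating the parameters so the mean-squared-error loss shrinks on made-up training data.

```python
import numpy as np

# Synthetic "training set" with a known ground-truth weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)                          # initial model parameters
lr = 0.1                                 # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of MSE w.r.t. w
    w -= lr * grad                          # gradient-descent update

mse = np.mean((X @ w - y) ** 2)          # final training loss
print(w.round(2), round(mse, 4))
```

The point of the slide is what this loop leaves out: nothing in it knows whether minimizing this particular loss on this particular data serves the underlying research question.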
19. Optimization for Accuracy
• Benchmarking: a comparison of one model’s performance to another’s.
• The “best” model is determined by being more accurate than the others.
20. Limitations of Accuracy
• There is no consensus regarding the definition of success in creating artificial intelligence, meaning there is no finish line or upper limit on attaining A.I.
• Without a clear definition of goals and a vision of what success looks like, how will we know when we have reached the goal?
• What does it mean to become infinitely more intelligent?
• What purpose does it serve to have a world full of agents (machines or humans) that are super intelligent?
• Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”
21. (Abstract) Limitations of Accuracy
• Soccer-team example: the star player who only thinks about themselves (accuracy) vs. the team captain who puts the team first (utility). Jamie Tartt vs. Roy Kent.
22. (Concrete) Limitations of Accuracy
• High accuracy does not imply:
• reproducibility
• meaningfulness (that the features used are better than random)
• explainability
• Equal accuracy doesn’t imply that two models have learned in the same way.
27. Optimization for Utility (fairness is the priority value)
Predicting patient pain score instead of radiologist’s dx
• Use knee X-rays to predict patients’ self-reported experienced pain, instead of using standard measures of pain severity (radiologist dx).
• Relative to radiologist dx, which accounted for only 9% of racial disparities in pain, using self-reported pain labels accounted for 43% of racial disparities in pain (4.7× more than radiologist dx).
28. Optimization for Utility (fairness is the priority value)
• Equal opportunity & multi-objective optimization
[Figure: fairness vs. accuracy trade-off plot]
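As a hedged sketch of the two ideas on this slide: "equal opportunity" asks that the true-positive rate be (roughly) equal across protected groups, and a simple multi-objective score then trades accuracy off against the TPR gap. The data, the `utility_score` function, and the weight `lam` below are all made up for illustration.

```python
import numpy as np

def tpr(y_true, y_pred, mask):
    """True-positive rate restricted to the samples selected by mask."""
    pos = (y_true == 1) & mask
    return y_pred[pos].mean() if pos.any() else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between the two protected groups."""
    return abs(tpr(y_true, y_pred, group == 0) - tpr(y_true, y_pred, group == 1))

def utility_score(y_true, y_pred, group, lam=1.0):
    """Scalarized multi-objective score: accuracy minus a fairness penalty."""
    acc = (y_true == y_pred).mean()
    return acc - lam * equal_opportunity_gap(y_true, y_pred, group)

# Tiny toy example: two classifiers with identical accuracy (0.75),
# one fair across groups and one not.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fair   = np.array([1, 0, 0, 0, 1, 0, 0, 0])   # TPR = 0.5 in both groups
unfair = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # TPR = 1.0 vs 0.0

print(utility_score(y_true, fair, group))     # fairness penalty = 0
print(utility_score(y_true, unfair, group))   # heavily penalized
```

Accuracy alone cannot distinguish the two models; the fairness-aware objective can, which is the sense in which utility "looks at the bigger picture".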
29. Optimization for Utility (efficiency/useability is the priority value)
• Optimizing for teamwork: AI learning to complement humans.
• Sharing pre-trained models & creating accessible tools: https://pcnportal.dccn.nl/
30. Benefits of Utility
• Collaborative, efficient, well-defined purpose.
• Functional (real depth and meaning) rather than attractive (shallow, surface-level appeal).
• An opportunity to think deeply and align your models with your purpose.
• Creative thinking and problem solving are required.
• More of a challenge… thus more satisfying solutions will be created.
31. Roadblocks
• Cognitive biases make us focus on simpler problems.
• As problem complexity increases, we shift responsibility and think along the lines of “this is out of my expertise, it is someone else’s problem to solve”.
• Examples: ambiguity effect, bandwagon effect, status quo bias, loss aversion.
32. Roadblocks
• During development: stationary data; a single decision maker.
• In the wild (real world): complex, non-stationary data; many stakeholders.
34. Future Directions
• Many other fields have defined utility and figured out how to optimize for it. Let’s learn from them.
• Examples: Human-Computer Interaction (HCI), Ethical A.I., value-based healthcare, behavioral economics.
35. Future Directions
• Utility (and value priorities) will always depend on the context.
• We need open communication and guidelines about making these decisions.
36. Take Home Messages
• Too much (tunnel-vision) focus on the accuracy of predictive models.
• We have lost track of our “why”, and this has created a lack of model utility.
• It should be a priority to define our values, which will help us build a better plan for moving toward these goals and values.
• Optimizing for utility is an abstract and creative practice that requires diverse perspectives and input. It should be an ongoing process.
37. Acknowledgments
Value-Based Machine Learning
University of Michigan, Ann Arbor: Chandra Sripada, Mike Angstadt, Daniel Kessler, Liza Levina, Ivy Tso, Alex Weigard, Jenna Wiens
Donders Institute, Nijmegen: Ph.D. supervisors Andre Marquand, Eric Ruhé, & Christian Beckmann; lab members Seyed Mostafa Kia, Thomas Wolfers, Mariam Zabihi, Charlotte Fraza, Pieter Barkema, Stijn de Boers, Barbora Rehák Bučková
Historically, A.I. as a field has overpromised solutions and underperformed on bringing scientific advancements into the real world.
Shifting priorities to focus on utility over intelligence will help make the goals of A.I./ML more explicit + actionable and thus will improve scientific communication through creating more realistic public expectations and building trust.
During an optimization step, the model parameters are iteratively updated such that the loss function (e.g., mean squared error) is minimized on the training data set.
To summarize, when setting up the optimization step of a machine learning model, we are deciding what is right and what is wrong.
For example, maintaining high accuracy while simultaneously using less computational resources which saves money and reduces carbon emissions.
An opportunity to reframe our research questions to better align with our true purpose and vision.
An extreme simplification of a model’s performance and traits.
Does not capture reliability, validity, complexity, fairness, useability, etc.
The goal is to achieve the highest accuracy, lowest mean squared/absolute error, highest correlation between predicted and observed.
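The note above lists the usual single-number summaries. A minimal sketch of computing them, with made-up observed and predicted values (e.g., for an age-prediction model):

```python
import numpy as np

y_obs  = np.array([20.0, 25.0, 30.0, 35.0, 40.0])   # observed (e.g., true ages)
y_pred = np.array([22.0, 24.0, 33.0, 34.0, 39.0])   # model predictions

mse = np.mean((y_obs - y_pred) ** 2)        # mean squared error
mae = np.mean(np.abs(y_obs - y_pred))       # mean absolute error
r   = np.corrcoef(y_obs, y_pred)[0, 1]      # predicted-observed correlation

print(mse, mae, round(r, 3))
```

Each line collapses the model's entire test-set behaviour into one number, which is exactly the "extreme simplification" the notes warn about: none of these captures reliability, validity, fairness, or useability.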
A continuous version of the binary winner-and-loser comparison, followed by a ranking-based comparison.
Contributes to the “replication is all we need” attitude, and a lack of thinking about true innovation.
We propose a simple, interpretable, and actionable framework for measuring and removing discrimination based on protected attributes. We argue that, unlike demographic parity, our framework provides a meaningful measure of discrimination, while demonstrating in theory and experiment that we also achieve much higher utility.
Usefulness means saving time/energy/costs/resources
Making utility explicit (so that we can mathematically model it) is more challenging than mathematically modeling accuracy/performance.
We favor simple-looking options and complete information over complex, ambiguous options
We’d rather do the quick, simple thing than the important complicated thing, even if the important complicated thing is ultimately a better use of time and energy.
In practice, there is often a single decision maker (ML developer), and the underlying population is assumed to be stationary.
This is not true in the wild (real-world setting), where there are a lot of people involved, each with a different queue of value priorities, and the data is of course incomplete and very messy.
Fairness and accuracy are often assumed to be in opposition, meaning there is a trade-off when optimizing for one over the other (e.g., optimizing for more predictive fairness leads to less accurate predictions, or optimizing for accuracy results in less fairness).