The document discusses various topics related to evolutionary computation and artificial intelligence, including:
- Evolutionary computation concepts such as genetic algorithms, genetic programming, and evolutionary programming, along with swarm intelligence approaches such as ant colony optimization and particle swarm optimization.
- The use of intelligent agents in artificial intelligence, and the differences between single-agent and multi-agent systems.
- Soft computing techniques involving fuzzy logic, machine learning, probabilistic reasoning, and related approaches.
- Concepts covered in more depth include genetic algorithms, genetic programming, swarm intelligence, ant colony optimization, and metaheuristics.
2. TOPICS TO COVER…!
• EVOLUTIONARY COMPUTATION: SOFT COMPUTING, GENETIC ALGORITHMS, GENETIC PROGRAMMING CONCEPTS, EVOLUTIONARY PROGRAMMING, SWARM INTELLIGENCE, ANT COLONY PARADIGM, PARTICLE SWARM OPTIMIZATION AND APPLICATIONS OF EVOLUTIONARY ALGORITHMS.
• INTELLIGENT AGENTS: AGENTS VS SOFTWARE PROGRAMS, CLASSIFICATION OF AGENTS, WORKING OF AN AGENT, SINGLE-AGENT AND MULTI-AGENT SYSTEMS, PERFORMANCE EVALUATION, ARCHITECTURE, AGENT COMMUNICATION LANGUAGE, APPLICATIONS
PPT BY: MADHAV MISHRA
3. SOFT COMPUTING
• Soft computing is the use of approximate calculations to provide imprecise but usable solutions to complex computational problems.
• The approach enables solutions for problems that may be either unsolvable or too time-consuming to solve with current hardware.
• Soft computing is sometimes referred to as computational intelligence.
• Soft computing provides an approach to problem-solving that relies on means other than exact, conventional computation.
• With the human mind as a role model, soft computing is tolerant of partial truths, uncertainty, imprecision and approximation, unlike traditional computing models.
• This tolerance allows researchers to approach some problems that traditional computing cannot handle.
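The "tolerance of partial truths" mentioned above is easiest to see with a fuzzy-logic membership function, one of the soft computing techniques listed in the topics slide. Below is a minimal illustrative sketch; the "tall" predicate and its 160-190 cm ramp are invented for this example, not taken from the slides.

```python
def tall_membership(height_cm: float) -> float:
    """Degree (0..1) to which a height counts as 'tall'.

    A piecewise-linear ramp: definitely not tall below 160 cm,
    definitely tall above 190 cm, a partial truth in between.
    """
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Unlike a hard true/false predicate, the answer can be "half tall":
print(tall_membership(175))  # 0.5
```

A conventional (hard) computing model would force a yes/no cutoff; the fuzzy membership value lets downstream rules weigh the degree of truth instead.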
5. • As a field of mathematical and computer study, soft computing has been around since the 1990s.
• The inspiration was the human mind's ability to form real-world solutions to problems through approximation.
• Soft computing is closely related to possibility, an approach that is used when there is not enough information available to solve a problem exactly.
• Soft computing is used where the problem is not adequately specified for the use of conventional mathematical and computing techniques.
• Soft computing has numerous real-world applications in domestic, commercial and industrial situations.
7. 7
PPT BY: MADHAV MISHRA
• Genetic algorithm (GA) is a search-based optimization technique based
on the principles of genetics and natural selection. It is frequently used to
find optimal or near-optimal solutions to difficult problems which
otherwise would take a lifetime to solve. It is frequently used to solve
optimization problems, in research, and in machine learning.
• Optimization refers to finding the values of inputs in such a way that we
get the “best” output values. The definition of “best” varies from
problem to problem, but in mathematical terms, it refers to maximizing
or minimizing one or more objective functions, by varying the input
parameters.
• The set of all possible solutions or values which the inputs can take
makes up the search space. Within this search space lies a point, or a set of
points, that gives the optimal solution. The aim of optimization is to
find that point or set of points in the search space.
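For intuition, the search space of a toy optimization problem can be enumerated directly; the objective function and range below are invented for illustration (real GA problems have search spaces far too large to enumerate):

```python
# Hypothetical objective: maximize f(x) = -(x - 3)**2 + 9 over a small
# discrete search space by exhaustive enumeration.
def f(x):
    return -(x - 3) ** 2 + 9

search_space = range(0, 7)       # every value the input can take
best = max(search_space, key=f)  # the point in the search space giving the optimum
```

A GA replaces this exhaustive scan with a guided stochastic search, which is what makes it usable when the space is astronomically large.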
8. • Genetic algorithms (GAs) are search-based algorithms built on the concepts of
natural selection and genetics. GAs are a subset of a much larger branch of
computation known as evolutionary computation.
• GAs were developed by John Holland and his students and colleagues at the
University of Michigan, most notably David E. Goldberg, and have since been tried
on various optimization problems with a high degree of success.
Advantages of GAs
• GAs have various advantages which have made them immensely popular. These include −
• They do not require any derivative information (which may not be available for many real-world
problems).
• They are faster and more efficient as compared to traditional methods.
• They have very good parallel capabilities.
• They provide a list of “good” solutions and not just a single solution.
9. Limitations of GAs
• Like any technique, GAs also suffer from a few limitations. These include −
• GAs are not suited for all problems, especially problems which are simple and for which derivative
information is available.
• The fitness value is calculated repeatedly, which can be computationally expensive for some
problems.
• Being stochastic, there are no guarantees on the optimality or the quality of the solution.
• If not implemented properly, a GA may not converge to the optimal solution.
BASIC TERMINOLOGY
• Population − a subset of all the possible (encoded) solutions to the given
problem.
• Chromosome − one such encoded solution to the given problem.
• Gene − one element position of a chromosome.
• Allele − the value a gene takes for a particular chromosome.
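These terms map directly onto plain Python structures; the 5-bit encoding below is a made-up example:

```python
# One chromosome: an encoded candidate solution (a hypothetical 5-bit string).
chromosome = [1, 0, 1, 1, 0]

gene = chromosome[2]   # a gene is one element position of the chromosome
allele = gene          # the allele is the value that gene takes here (1)

# A population: a subset of all possible encoded solutions.
population = [
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1],
]
```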
11. • Genotype − the genotype is the population in the computation space, where
solutions are represented in a way that can be easily understood and manipulated by a
computing system.
• Phenotype − the phenotype is the population in the actual real-world solution space, in which
solutions are represented the way they appear in real-world situations.
• Decoding and encoding − for simple problems, the phenotype and genotype spaces are the same.
• However, in most cases the phenotype and genotype spaces are different.
• Decoding transforms a solution from the genotype space to the phenotype space,
while encoding transforms it from the phenotype space to the genotype space. Decoding
should be fast, as it is carried out repeatedly during a GA’s fitness calculation.
• Fitness function − a fitness function, simply defined, is a function which takes a solution as input
and produces the suitability of the solution as output. In some cases the fitness function and the
objective function are the same, while in others they differ based on the problem.
• Genetic operators − these alter the genetic composition of the offspring. They include crossover,
mutation, selection, etc.
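A hedged sketch of decoding, encoding, and fitness for a hypothetical problem where the genotype is a 4-bit string and the phenotype is an integer in 0–15 (here the fitness function and the objective function coincide):

```python
def decode(genotype):
    """Genotype (bit list) -> phenotype (integer value)."""
    value = 0
    for bit in genotype:
        value = value * 2 + bit
    return value

def encode(phenotype):
    """Phenotype (0..15) -> genotype (4-bit list)."""
    return [(phenotype >> i) & 1 for i in (3, 2, 1, 0)]

def fitness(genotype):
    # Takes a solution as input and returns its suitability; in this toy
    # problem we simply treat larger decoded values as fitter.
    return decode(genotype)
```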
12. BASIC STRUCTURE OF GA
The basic structure of a GA is as follows −
• We start with an initial population
(which may be generated at random or
seeded by other heuristics) and select
parents from this population for mating.
Crossover and mutation operators are
applied to the parents to generate new
offspring. Finally, these offspring
replace the existing individuals in the
population and the process repeats.
• In this way, genetic algorithms
actually try to mimic natural evolution
to some extent.
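The loop just described can be sketched as a minimal GA for the OneMax toy problem (maximize the number of 1-bits). The population size, rates, and binary-tournament selection are illustrative choices, not prescribed by the slide:

```python
import random

def run_ga(n_bits=20, pop_size=30, generations=50, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    # Initial population, generated at random.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def fitness(ind):
        return sum(ind)  # OneMax: count the 1-bits

    for _ in range(generations):
        def select():  # parent selection: binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each gene with probability p_mut.
            child = [1 - g if rng.random() < p_mut else g for g in child]
            offspring.append(child)
        pop = offspring  # offspring replace the existing individuals

    return max(pop, key=fitness)

best = run_ga()
```

Each pass through the loop is one generation: select parents, apply crossover and mutation, replace the population, repeat.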
13. GENETIC PROGRAMMING CONCEPTS
• Genetic programming is a domain-independent method that genetically breeds a population of
computer programs to solve a problem.
• Specifically, genetic programming iteratively transforms a population of computer programs into
a new generation of programs by applying analogs of naturally occurring genetic operations.
Preparatory steps of genetic programming:
• The human user communicates the high-level statement of the problem to the genetic
programming system by performing certain well-defined preparatory steps.
• The five major preparatory steps for the basic version of genetic programming require:
1. The set of terminals (e.g., the independent variables of the problem, zero-argument
functions, and random constants) for each branch of the to-be-evolved program,
2. The set of primitive functions for each branch of the to-be-evolved program,
3. The fitness measure (for explicitly or implicitly measuring the fitness of individuals in the
population),
4. Certain parameters for controlling the run, and
5. The termination criterion and method for designating the result of the run.
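The five preparatory steps can be sketched for a hypothetical symbolic-regression run; the terminal set, function set, parameter values, and tuple-based program representation are all illustrative assumptions:

```python
import operator

# Step 1: the terminal set (an independent variable and a constant).
terminals = ["x", 1.0]
# Step 2: the primitive function set.
functions = {"+": operator.add, "*": operator.mul}

def evaluate(node, x):
    # A program is a nested tuple ("op", left, right) or a terminal.
    if node == "x":
        return x
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    return functions[op](evaluate(left, x), evaluate(right, x))

# Step 3: the fitness measure (sum of absolute errors; lower is better).
def fitness(program, cases):
    return sum(abs(evaluate(program, x) - y) for x, y in cases)

# Step 4: run-control parameters.  Step 5: termination criterion.
params = {"pop_size": 500, "max_depth": 6}
termination = {"max_generations": 50, "error_threshold": 0.01}

# A program representing x*x + 1:
prog = ("+", ("*", "x", "x"), 1.0)
```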
14. SWARM INTELLIGENCE
What is a swarm?
• A loosely structured collection of interacting agents
• Agents:
• Individuals that belong to a group (but are not necessarily identical)
• They contribute to and benefit from the group
• They can recognize, communicate, and/or interact with each other
• The instinctive perception of swarms is a group of agents in motion – but that does not always
have to be the case.
• A swarm is better understood if thought of as agents exhibiting a collective behavior
Swarm intelligence (SI):
• An artificial intelligence (AI) technique based on the collective behavior in decentralized, self-
organized systems
• Generally made up of agents who interact with each other and the environment
• No centralized control structures
15. EXAMPLES OF SWARMS IN NATURE:
Classic example: swarm of bees
• Can be extended to other similar systems:
• Ant colony
• Agents: ants
• Flock of birds
• Agents: birds
• Traffic
• Agents: cars
• Crowd
• Agents: humans
• Immune system
• Agents: cells and molecules
16. SWARM ROBOTICS
• Swarm robotics
• The application of SI principles to collective robotics
• A group of simple robots that can only communicate locally and operate in a biologically inspired
manner
• A currently developing area of research
Two common SI algorithms
• Ant colony optimization
• Particle swarm optimization
Ant colony optimization (ACO):
• The study of artificial systems modeled after the behavior of real ant colonies, useful
in solving discrete optimization problems
• Introduced in 1992 by Marco Dorigo
• Originally called the Ant System (AS)
• Has been applied to
• The traveling salesman problem (and other shortest-path problems)
• Several NP-hard problems
• It is a population-based metaheuristic used to find approximate solutions to difficult
optimization problems
17. WHAT IS METAHEURISTIC?
• “A metaheuristic refers to a master strategy that guides and modifies other heuristics to
produce solutions beyond those that are normally generated in a quest for local
optimality” – Fred Glover
• Or more simply:
• It is a set of algorithms used to define heuristic methods that can be used for a large set of
problems
Artificial ants
• A set of software agents
• Stochastic
• Based on the pheromone model
• Pheromones are used by real ants to mark paths. Ants follow these paths (i.e., trail-following
behavior)
• Incrementally build solutions by moving on a graph
• Constraints of the problem are built into the heuristics of the ants
18. USING ACO
• The optimization problem must be written in the form of a path finding problem with a
weighted graph
• The artificial ants search for “good” solutions by moving on the graph
• Ants can also build infeasible solutions – which could be helpful in solving some optimization
problems
• The metaheuristic is constructed using three procedures:
• Construct ants solutions
• Update pheromones
• Daemon actions
CONSTRUCT ANTS SOLUTIONS
• Manages the colony of ants
• Ants move to neighboring nodes of the graph
• Moves are determined by stochastic local decision policies based on pheromone trails and
heuristic information
• Evaluates the current partial solution to determine the quantity of pheromones the ants
should deposit
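A hedged sketch of such a stochastic local decision policy: an ant at `current` picks the next node with probability proportional to pheromone^alpha × heuristic^beta (alpha and beta are the usual ACO weighting exponents; the function name and dictionary-based graph are assumptions for illustration):

```python
import random

def choose_next(current, unvisited, pheromone, heuristic,
                alpha=1.0, beta=2.0, rng=random):
    # Weight each candidate edge by pheromone strength and heuristic desirability.
    weights = [pheromone[(current, j)] ** alpha * heuristic[(current, j)] ** beta
               for j in unvisited]
    # Stochastic choice proportional to those weights.
    return rng.choices(unvisited, weights=weights, k=1)[0]
```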
19. Update pheromones
• Process for modifying the pheromone trails
• Modified by
• Increase
• Ants deposit pheromones on the nodes (or the edges)
• Decrease
• Ants don’t replace the pheromones and they evaporate
• Increasing the pheromones increases the probability of paths being used (i.e., building
the solution)
• Decreasing the pheromones decreases the probability of the paths being used (i.e.,
forgetting)
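The two update steps can be sketched over an edge-keyed pheromone dictionary; `rho` (the evaporation rate) and the `q / cost` deposit rule are conventional illustrative choices, not taken from the slide:

```python
def update_pheromones(pheromone, solutions, rho=0.1, q=1.0):
    # Decrease: evaporation on every edge (forgetting).
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)
    # Increase: each ant deposits on the edges of its solution,
    # in proportion to that solution's quality (lower cost = more pheromone).
    for path, cost in solutions:
        for edge in path:
            pheromone[edge] += q / cost
```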
Daemon actions
• Used to implement larger actions that require more than one ant
• Examples:
• Perform a local search
• Collection of global information
20. PARTICLE SWARM OPTIMIZATION (PSO)
• A population based stochastic optimization technique
• Searches for an optimal solution in the computable search space
• Developed in 1995 by Dr. Eberhart and Dr. Kennedy
• Inspiration: swarms of bees, flocks of birds, schools of fish
• In PSO individuals strive to improve themselves and often achieve this by observing and
imitating their neighbors
• Each PSO individual has the ability to remember
• PSO has simple algorithms and low overhead
• Making it more popular in some circumstances than genetic/evolutionary algorithms
• Uses only one operation in its update calculation:
• Velocity: a vector of numbers added to the position coordinates to move an individual
21. PSYCHOLOGICAL SYSTEMS
• A psychological system can be thought of as an “information-processing” function.
• You can measure psychological systems by identifying points in psychological space.
• Usually the psychological space is considered to be multidimensional.
“Philosophical leaps” required:
• Individual minds = a point in space
• Multiple individuals can be plotted in a set of coordinates
• Measuring the individuals results in a “population of points”
• Individuals near each other imply that they are similar
• Some areas of the space are better than others
22. APPLYING SOCIAL PSYCHOLOGY
• Individuals (points) tend to
• Move towards each other
• Influence each other
• Why?
• Individuals want to be in agreement with their neighbors
• Individuals (points) are influenced by:
• Their previous actions/behaviors
• The success achieved by their neighbors
23. WHAT HAPPENS IN PSO
• Individuals in a population learn from previous experiences and the experiences of those
around them
• The direction of movement is a function of:
• Current position
• Velocity (or in some models, probability)
• Location of the individual’s “best” success
• Location of the neighbors’ “best” successes
• Therefore, each individual in a population will gradually move towards the “better” areas
of the problem space
• Hence, the overall population moves towards “better” areas of the problem space.
24. PERFORMANCE OF PSO ALGORITHMS:
Relies on selecting several parameters correctly.
Parameters:
• Constriction factor − used to control the convergence properties of the PSO
• Inertia weight − how much of the velocity should be retained from previous steps
• Cognitive parameter − weight on the individual’s “best” success so far
• Social parameter − weight on the neighbors’ “best” successes so far
• Vmax − maximum velocity along any dimension
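A hedged sketch of the standard per-dimension PSO update using these parameters (w: inertia weight, c1: cognitive, c2: social, vmax clamp; the constriction-factor variant is omitted, and all default values here are illustrative):

```python
import random

def pso_step(position, velocity, pbest, nbest,
             w=0.7, c1=1.5, c2=1.5, vmax=4.0, rng=random):
    new_x, new_v = [], []
    for x, v, pb, nb in zip(position, velocity, pbest, nbest):
        v = (w * v
             + c1 * rng.random() * (pb - x)    # pull toward own best
             + c2 * rng.random() * (nb - x))   # pull toward neighbors' best
        v = max(-vmax, min(vmax, v))           # clamp to Vmax per dimension
        new_v.append(v)
        new_x.append(x + v)                    # velocity added to position
    return new_x, new_v
```

Each call moves one individual a step toward the “better” areas found by itself and its neighbors.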
25. ADVANTAGES OF SI
• The systems are scalable because the same control architecture can be
applied to a couple of agents or thousands of agents
• The systems are flexible because agents can be easily added or removed
without influencing the structure
• The systems are robust because agents are simple in design, the reliance
on individual agents is small, and failure of a single agent has little impact
on the system’s performance
• The systems are able to adapt to new situations easily
26. AGENTS VS SOFTWARE PROGRAMS
• Traditional software programs lack the ability to assess and react to the environment
and modify their behaviour accordingly.
• They do not follow a goal-oriented and autonomous approach to problem-solving.
• In such programs there is no concept of capturing the environment, which is dynamic in
nature and can affect the output of the system at different times of its invocation.
• We will try to understand this using following approach:
AGENTS & OBJECTS
AGENTS & EXPERT SYSTEMS
27. AGENTS & OBJECTS :
• Wooldridge has depicted the underlying difference between agents and
objects in terms of autonomy and behaviour, which depends on characteristics such as reactivity,
proactiveness and social ability.
• Standard object models do not support the kind of behaviour normally displayed by agents.
• The most basic difference lies in the degree to which agents and objects are autonomous.
• The manner in which different objects communicate with each other is called message passing;
a typical object as defined in Java/C++ consists of instance variables and
methods which can have public or private access.
• On the other hand, an agent may or may not choose to perform a certain action which is
of no interest to itself, even if it is directed by other agents in favour of that particular action.
• That is, if agent 1 requests agent 2 to perform an action, then agent 2 may choose to perform or
not perform this action. Therefore, the decision to perform a given action rests with the agent.
• In the case of object systems, the decision is taken by the object that invokes the method.
• Thus, from the above analysis, we can state that agents display a stronger sense of autonomy than
objects, and can take the important decision of whether or not to perform an action at the request
of another agent.
28. AGENTS & EXPERT SYSTEMS:
• Expert systems were considered to be the most important AI technology of the 1980s.
• An expert system is a system capable of solving problems or giving advice in some
knowledge-rich domain.
• Expert systems are typically rule-based systems in which a knowledge engineer takes the
knowledge of a certain domain and codes this knowledge as rules and facts in a special
type of database known as a knowledge base.
• The rules are usually rules of thumb; that is, they are based on heuristic knowledge
of the domain expert.
• Therefore, expert systems are based on the premise that previous knowledge of a certain
application exists and that we can acquire this knowledge from samples or interviews with
domain experts and then code this gathered knowledge into the knowledge base.
• However, expert systems are not capable of interacting with their environment and do not
display reactive, proactive or social abilities such as cooperation, coordination and
negotiation.
29. CLASSIFICATION OF AGENTS
• Agents can be classified into different classes.
• Nwana has defined a typology and classified agents on the basis of two
parameters: mobility and interaction with the environment (Nwana, 1996).
• They may also be further classified depending on primary attributes such
as autonomy, cooperation and learning ability.
• The term mobility refers to the ability of agents to move around in a given
network.
• Agents that possess mobility are called mobile agents; they are able to roam
around in wide-area networks, interact with foreign hosts and perform
tasks on behalf of their owners before returning to the originator.
• The three primary characteristics of agents are as follows:
Autonomy, Reactivity and Proactiveness
30. • AUTONOMY:
- Autonomy is the characteristic of an agent which enables it to function without the
direct intervention of humans or other intelligent systems, and thus retain control over
its actions and internal state.
- Autonomy is considered to be the central concept in designing an agent.
• REACTIVITY:
- Reactivity is the characteristic of agents owing to which they judge their environment
and respond in accordance with the changes occurring in it.
- As mentioned earlier, purely reactive agents do not possess any internal model of their
environment and act using a stimulus-response type of behaviour, responding to the
environment in which they are embedded.
• PROACTIVENESS:
- Agents that possess the characteristic of proactiveness are capable of taking the initiative
to make decisions in order to achieve their goals.
31. WORKING OF AN AGENT
• An agent generally maps its internal state to its data structures, the
operations which may be performed on these data structures, and the
control flow between the data structures.
• One of the challenging goals in designing an agent program is to implement
the mapping from percepts to actions.
• The agent takes sensory input from the environment and produces actions
that affect it as output.
• The agent starts in some initial internal state, observes its environment
state, and then generates a percept.
• Based on this percept, an action is then performed, and the agent enters
another cycle, updating its state and choosing an action to perform.
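This percept → action cycle can be sketched in a few lines; the environment, the temperature percept, and the action names are all hypothetical:

```python
class SimpleAgent:
    def __init__(self):
        self.state = "idle"  # initial internal state

    def perceive(self, environment):
        # Sensory input from the environment generates a percept.
        return environment["temperature"]

    def act(self, percept):
        # Map the percept to an action and update the internal state.
        action = "cool" if percept > 25 else "wait"
        self.state = action
        return action

agent = SimpleAgent()
env = {"temperature": 30}
action = agent.act(agent.perceive(env))  # one sense-decide-act cycle
```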
39. ARCHITECTURE OF INTELLIGENT AGENTS
• We will look into four types of agent architectures:
- Logic-based,
- Reactive,
- Belief-desire-intention,
- Layered.
• Each of these is described below:
- Logic-based architecture: agents in which decision making is done through logical
deduction.
- Reactive architecture: agents in which decision making is implemented as some form of direct
mapping from situation to action.
- Belief-desire-intention architecture: agents in which decision making depends upon the
manipulation of data structures representing the beliefs, desires and intentions of the agent.
- Layered architecture: agents in which decision making is done via various software layers,
each of which is more or less explicitly reasoning about the environment at a different level of
abstraction.
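The reactive architecture in particular can be illustrated as a literal situation → action mapping; the situations and actions below are invented for the sketch:

```python
# Reactive architecture: decision making as a direct situation -> action map.
SITUATION_ACTION = {
    "obstacle_ahead": "turn_left",
    "goal_visible": "move_forward",
    "battery_low": "return_to_base",
}

def reactive_decide(situation):
    # No internal model, no deduction: just look up the response.
    return SITUATION_ACTION.get(situation, "do_nothing")
```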