Deep Learning for Computer Vision: A Comparison between Convolutional Neural... - Vincenzo Lomonaco
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general.
However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of “intelligence”.
The dissertation focuses on answering these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and the Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning, making them well suited for understanding and pointing out the strengths and weaknesses of each.
The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are widely accepted by the scientific community and are already deployed at large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to draw on insights from the computational neuroscience community in order to incorporate concepts like time, context, and attention, which are typical of the human brain, into the learning process.
In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
AI artificial intelligence professional vocabulary collection - NuAIg - Ruchi Jain
The field of artificial intelligence continues to expand, standing on the cusp of mainstream breakthroughs.
AI will be more involved in our day-to-day life in the near future.
NuAIg Consulting helps you weave AI into CX and auxiliary operations with best-fit vertical solutions that simplify AI adoption.
Deep Learning Hardware: Past, Present, & Future - Rouyun Pan
Yann LeCun gave a presentation on deep learning hardware, past, present, and future. Some key points:
- Early neural networks in the 1960s-1980s were limited by hardware and algorithms. The development of backpropagation and faster floating point hardware enabled modern deep learning.
- Convolutional neural networks achieved breakthroughs in vision tasks in the 1980s-1990s but progress slowed due to limited hardware and data.
- GPUs and large datasets like ImageNet accelerated deep learning research starting in 2012, enabling very deep convolutional networks for computer vision.
- Recent work applies deep learning to new domains like natural language processing, reinforcement learning, and graph networks.
- Future challenges include memory-augmented…
Four technologies (AI, IoT, blockchain, and virtual reality) will revolutionize the maritime sector. AI and blockchain will have many applications, including supply chain visibility, data science, voyage optimization, and diagnosis systems. Training seafarers for an AI-based future will require curriculum changes, including teaching mathematics, programming, cybersecurity, and manual operation skills. Ensuring safe return-to-port capabilities with redundancy will also be important as autonomy increases.
This presentation is part of the webinar. Here is the link for the webinar recording https://www.anymeeting.com/geospatialworld/E955DA81854C39
Presentation Credits: NVIDIA & Geospatial Media
Deep Learning - The Past, Present and Future of Artificial Intelligence - Lukas Masuch
The document provides an overview of deep learning, including its history, key concepts, applications, and recent advances. It discusses the evolution of deep learning techniques like convolutional neural networks, recurrent neural networks, generative adversarial networks, and their applications in computer vision, natural language processing, and games. Examples include deep learning for image recognition, generation, segmentation, captioning, and more.
The document is a PowerPoint presentation on artificial intelligence that contains the following key points:
1. It discusses the origins and early history of AI research from the 1950s conference at Dartmouth College.
2. It covers various aspects of AI including knowledge representation, natural language processing, emotion and social skills in machines, and creativity in AI systems.
3. It provides an overview of artificial neural networks and how they are inspired by biological neural systems, focusing on artificial neurons, learning processes, and function approximation using neural networks.
This presentation deals with the basics of AI and its connection with neural networks. Additionally, it explains the pros and cons of AI along with its applications.
The presentation briefly answers the questions:
1. What is Machine Learning?
2. Ideas behind Neural Networks?
3. What is Deep Learning? How different is it from NN?
4. Practical examples of applications.

For more information:
https://www.quora.com/How-does-deep-learning-work-and-how-is-it-different-from-normal-neural-networks-and-or-SVM
http://stats.stackexchange.com/questions/114385/what-is-the-difference-between-convolutional-neural-networks-restricted-boltzma
https://www.youtube.com/watch?v=n1ViNeWhC24 - presentation by Ng
http://techtalks.tv/talks/deep-learning/58122/ - deep learning tutorial and slides - http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf
Deep learning for NLP - http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial
papers: http://www.cs.toronto.edu/~hinton/science.pdf
http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_ErhanCBV10.pdf
http://arxiv.org/pdf/1206.5538v3.pdf
http://arxiv.org/pdf/1404.7828v4.pdf
More recommendations - https://www.quora.com/What-are-the-best-resources-to-learn-about-deep-learning
Deep Learning: Towards General Artificial Intelligence - Rukshan Batuwita
For the past several years, Deep Learning methods have revolutionized areas of Pattern Recognition such as Computer Vision, Speech Recognition, and Natural Language Processing. These techniques have been mainly developed by academics working closely with tech giants such as Google, Microsoft, and Facebook, where the research outcomes have been successfully integrated into commercial products such as Google image and voice search, Google Translate, Microsoft Cortana, Facebook M, and many more interesting applications yet to come. More recently, Google DeepMind Technologies has been working on Artificial General Intelligence using Deep Reinforcement Learning methods; their AlphaGo system beat the world champion of the complex board game Go in March 2016. This talk presents a thorough introduction to major Deep Learning techniques, recent breakthroughs, and some exciting applications.
Introduction to Artificial Intelligence - Luca Bianchi
Artificial intelligence has been defined in many ways as our understanding has evolved. Currently, AI is divided into narrow, general and super intelligence based on capabilities. Machine learning is a key approach in AI and involves algorithms that can learn from data to improve performance. Deep learning uses neural networks with many layers to learn representations of data and has achieved success in areas like computer vision and natural language processing.
This document outlines advances in deep learning and neural networks. It discusses challenges in machine learning like feature extraction. It describes how neuroscience experiments showed the brain's ability to learn new tasks. Neural networks aim to mimic the brain through techniques like backpropagation to train multi-layer models. Breakthroughs like pre-training and convolutional networks helped scale networks to many layers. Deep learning is now used in speech translation, image recognition, handwriting recognition and more.
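As a rough illustration of the backpropagation idea mentioned above, here is a minimal NumPy sketch that trains a tiny two-layer network on the XOR problem. The data, layer sizes, learning rate, and iteration count are arbitrary choices for this example, not details from any of the presentations:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: learn y = x1 XOR x2 with a tiny two-layer network
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer parameters
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# predictions before training, kept for comparison
p0 = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # forward pass: hidden layer
    p = sigmoid(h @ W2 + b2)          # forward pass: output layer
    # backward pass: chain rule applied layer by layer
    d2 = (p - y) / len(X)             # grad of mean cross-entropy w.r.t. output logits
    dW2 = h.T @ d2; db2 = d2.sum(0)
    d1 = (d2 @ W2.T) * (1 - h ** 2)   # tanh derivative propagates the error back
    dW1 = X.T @ d1; db1 = d1.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * G                  # in-place gradient descent step
```

After training, the predictions `p` fit the XOR targets far better than the untrained predictions `p0`, which is the whole point of the multi-layer training the summary describes: a single-layer model cannot represent XOR at all.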
This fast-paced session provides a brief history of AI, followed by AI-related topics, such as Machine Learning, Deep Learning and Reinforcement Learning, and the most popular frameworks for Machine Learning. You will learn about some of the successes of AI, and also some of the significant challenges in AI. No specialized knowledge is required, but an avid interest is recommended to derive the maximum benefit from this session.
This document provides an overview of deep learning, including its history, algorithms, tools, and applications. It begins with the history and evolution of deep learning techniques. It then discusses popular deep learning algorithms like convolutional neural networks, recurrent neural networks, autoencoders, and deep reinforcement learning. It also covers commonly used tools for deep learning and highlights applications in areas such as computer vision, natural language processing, and games. In the end, it discusses the future outlook and opportunities of deep learning.
Image Recognition Expert System based on deep learning - PRATHAMESH REGE
The document summarizes literature on image recognition expert systems and deep learning. It discusses two papers:
1. The Low-Power Image Recognition Challenge which established a benchmark for comparing low-power image recognition solutions based on both accuracy and energy efficiency using datasets like ILSVRC.
2. The role of knowledge-based systems and expert systems in automatic interpretation of aerial images. It discusses techniques like semantic networks, frames and logical inference used to solve ill-defined problems with limited information. Frameworks like the blackboard model, ACRONYM and SIGMA are discussed.
IRJET- Recognition of Handwritten Characters based on Deep Learning with Tens... - IRJET Journal
This paper proposes a convolutional neural network model for recognizing handwritten digits from the MNIST dataset. The model is built using TensorFlow and consists of convolutional, pooling, and fully connected layers. It is trained on 60,000 images and tested on 10,000 images, achieving 98% accuracy on the training set and a low error rate of 0.03% on the test set. Previous methods for handwritten digit recognition are discussed, and the CNN approach is shown to provide superior performance with faster training times than other models.
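For readers unfamiliar with the layer types this summary names, the convolution and pooling operations can be sketched in plain NumPy. This is a toy illustration, not the paper's TensorFlow model; the image and kernel values are made up:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most DL frameworks)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each size x size tile."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
k = np.array([[-1., 0., 1.]] * 3)                # 3x3 vertical-edge kernel
feat = np.maximum(conv2d(img, k), 0)             # convolution followed by ReLU
pooled = max_pool(feat)                          # 2x2 max pooling halves each dimension
```

Stacking such convolution + pooling stages, then flattening into fully connected layers, is exactly the architecture the paper trains on MNIST.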
Introduction to Deep Learning for Non-Programmers - Oswald Campesato
This session provides a brief history of AI, followed by AI-related topics, such as robots in AI, Machine Learning and Deep Learning, use cases for AI, some of the successes of AI, and also some of the significant challenges in AI. You will also learn about AI and mobile devices and the ethics of AI. An avid interest is recommended to derive the maximum benefit from this session.
The document discusses the syllabus for a course on Neural Networks. The mid-term syllabus covers introduction to neural networks, supervised learning including the perceptron and LMS algorithm. The end-term syllabus covers additional topics like backpropagation, unsupervised learning techniques and associative models including Hopfield networks. It also lists some references and applications of neural networks.
This document is a resume for Manoj Alwani providing his contact information, education history, professional experience, skills, projects, publications, and courses. It details that he has a M.S. in Computer Science from Stony Brook University and a B.Tech from India. His professional experience includes research roles at Element Inc and Stony Brook University focused on deep learning and computer vision. His skills and projects involve areas such as deep learning, computer vision, parallel computing, robotics, and natural language processing.
Artificial Intelligence AI Topics History and Overview - butest
The document discusses the history and concepts of artificial intelligence including machine learning. It provides definitions of key AI terms and describes some famous early AI programs. It also discusses machine learning methods and applications, different types of learning, and challenges in the field. Games AI is explored through techniques like min-max trees used in chess programs. The Turing Test is introduced as a proposal to measure intelligence along with proposed modifications.
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
Soft computing (ANN and Fuzzy Logic) - Dr. Purnima Pandit
The document discusses soft computing and its techniques, including artificial neural networks (ANN). It provides an overview of ANN, including how biological neurons inspired the basic ANN model. A neuron has inputs, outputs, weights, and an activation function. Networks can be single or multilayer. Learning involves updating weights to minimize error, with backpropagation commonly used for multilayer networks. Applications include pattern recognition, function approximation, and parameter estimation. A simple example is provided to estimate the slope and intercept of a line using ANN.
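The line-fitting example mentioned at the end can be reproduced with a single linear neuron whose weight and bias play the role of slope and intercept, trained by the error-minimizing weight updates the summary describes. This is a hypothetical sketch; the target slope of 2 and intercept of 1 are made-up values:

```python
import numpy as np

x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0          # target line: slope 2, intercept 1

w, b = 0.0, 0.0            # the neuron's weight (slope) and bias (intercept)
lr = 0.5                   # learning rate
for _ in range(2000):
    y_hat = w * x + b                 # neuron output (identity activation)
    err = y_hat - y
    w -= lr * np.mean(err * x)        # gradient of 0.5 * mean(err^2) w.r.t. w
    b -= lr * np.mean(err)            # gradient w.r.t. b
# after training, w is close to 2.0 and b is close to 1.0
```

The update rule is exactly "adjust each weight proportionally to the error it contributed", which is the LMS idea the document covers for single-layer networks; backpropagation generalizes it to multilayer ones.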
Artificial Intelligence Research Topics for PhD Manuscripts 2021 - PhD Assistance
Imagine a world where knowledge isn't limited to humans: a world in which computers think and collaborate with humans to create a more exciting universe. Although this future is still a long way off, Artificial Intelligence has made significant progress in recent years. There is a great deal of research going on in almost every area of AI, such as quantum computing, healthcare, autonomous vehicles, the internet of things, and robotics. So much so that the number of research papers published annually on Artificial Intelligence has increased by 90% since 1996.
Ph.D. Assistance serves as an external mentor to brainstorm your idea and translate it into a research model. Hiring a mentor or tutor is common practice, so let your research committee know about it. We do not offer any writing services without the involvement of the researcher.
Learn More: https://bit.ly/2Sdlfn4
Contact Us:
Website: https://www.phdassistance.com/
UK NO: +44–1143520021
India No: +91–4448137070
WhatsApp No: +91 91769 66446
Email: info@phdassistance.com
The document provides an introduction to artificial intelligence (AI), including a brief history and the four phases of its development. It discusses what AI is and how it works: collecting and processing data through machine learning algorithms in order to make inferences. The key domains of AI are described as natural language processing, computer vision, speech recognition, and data. The types of AI are defined by capability as artificial narrow intelligence, artificial general intelligence, and potential future artificial super intelligence. Related fields like machine learning, neural networks, data science, expert systems, and robotics are also outlined. Advantages, disadvantages, relevance to daily life, future possibilities, and ethical concerns are presented at a high level.
Elon Musk believes that AI poses a fundamental risk to human civilization. The document then provides explanations of different types of AI like artificial neural networks, convolutional neural networks, recurrent neural networks, supervised learning, unsupervised learning, and reinforcement learning. It gives examples of applications for each type and compares human intelligence and learning to AI systems. In the end, the document asks if AI is really a threat to humans according to the definition of AI provided.
This document discusses artificial intelligence, machine learning, deep learning, and data science. It defines each term and explains the relationships between them. AI is the overarching field, while machine learning and deep learning are subsets of AI. Machine learning allows machines to improve performance over time without human intervention by learning from examples, and deep learning uses artificial neural networks with many layers to closely mimic the human brain. The document provides an example of a fruit detection system using deep learning that trains a neural network to detect ripe fruit for automated harvesting.
The document discusses the key differences between image processing and computer vision. Image processing involves applying mathematical transformations to images, like smoothing or sharpening, without understanding the image content. Computer vision applies machine learning techniques to computer vision tasks like object recognition, classification, and interpretation of images, aiming to emulate human vision capabilities. While there is overlap, computer vision uses image processing techniques alongside pattern recognition and temporal information processing.
Everything You Need to Know About Computer VisionKavika Roy
https://www.datatobiz.com/blog/computer-vision-guide/
To most, they consist of pixels only, but digital images, like any other form of content, can be mined for data by computers. Further, they can also be analyzed afterward. Use image processing methods, including computers, to retrieve the information from still photographs, and even videos. Here we are going to discuss everything you must know about computer vision.
There are two forms-Machine Vision, which is this tech’s more “traditional” type, and Computer Vision (CV), a digital world offshoot. While the first is mostly for industrial use, as an example are cameras on a conveyor belt in an industrial plant, the second is to teach computers to extract and understand “hidden” data inside digital images and videos.
Facebook this August said it was open-sourcing its work to improve its Computer Visiontechnology software for users further. This image was posted by FB Research scientist Piotr Dollar to explain the difference between human and computer vision.
Thanks to advances in artificial intelligence and innovations in deep learning and neural networks, the field has been able to take big leaps in recent years, and in some tasks related to detection and labeling of objects has been able to surpass humans.
One of the driving factors behind computer vision development is the amount of data we produce now, which will then get used to educate and develop computer vision.
This document provides an overview of artificial intelligence (AI), including its history, goals, applications, and future prospects. It discusses how AI works using artificial neural networks and logic. Some key applications mentioned are expert systems, natural language processing, computer vision, speech recognition, and robotics. Both advantages like fast response time and ability to process large data and disadvantages like lack of common sense and potential dangerous self-modification are outlined. The future of AI having both benefits of assistance and risks of robot rebellion if given full cognition is explored.
This document provides an overview of machine learning basics, including definitions of machine learning, neural networks, and different types of machine learning such as supervised, unsupervised, and reinforcement learning. It discusses applications of machine learning in areas like healthcare, finance, translation, and gaming. Deep learning and challenges in the field are also summarized. The document is intended as a brief introduction for beginners to understand machine learning concepts.
A quick guide to artificial intelligence working - TechaheadJatin Sapra
It is already on its way to achieving so as it has empowered the mobile app development agencies to build what was once assumed impossible. Despite this, much of this field remains undiscovered.
Ai artificial intelligence professional vocabulary collectionRuchi Jain
AI is expanding with an edge on the mainstream breakthrough. AI will be involved in all spheres of our life in future. It is important for us to understand what AI is, what it’s terms means, and what are the AI terminologies. Below are some AI terms.
We, NuAIg helps businesses to reap the benefit of AI for their revenue growth with cost reduction.
This document provides biographical information about Şaban Dalaman and summaries of key concepts in artificial intelligence and machine learning. It summarizes Şaban Dalaman's educational and professional background, then discusses Alan Turing's universal machine concept, the 1956 Dartmouth workshop proposal that helped define the field of AI, and definitions of AI, machine learning, deep learning, and data science. It also lists different tribes and algorithms within machine learning.
Artificial intelligence (AI) refers to simulating human intelligence through machine programming. Building an AI system involves reverse-engineering human traits into a machine to surpass human capabilities using computational abilities. There are two main types of AI: strong AI systems can problem-solve without human intervention for tasks like self-driving cars, while weak AI focuses on specific tasks like personal assistants answering questions. AI is applied in areas such as search recommendations, facial recognition, and spam filtering through techniques including machine learning, deep learning, and neural networks. The future of AI is impacting industries like transportation, manufacturing, healthcare, and education.
Here is a presentation about Artificial Neural Networking(ANN).
Dive into the dynamic world of Artificial Neural Networking with our latest presentation! 🧠💡 Uncover the intricacies of neural networks, their role in AI, and the transformative impact on various industries. From fundamentals to advanced applications, this presentation offers a comprehensive exploration of the evolving landscape of neural networking. Join us on this journey of innovation and discovery. #ArtificialIntelligence #NeuralNetworking #Innovation
Neural networking this is about neural networksv02527031
Artificial neural networks (ANNs) are inspired by biological neural networks and are a type of machine learning. They follow principles of neuronal organization and learn by examples to make predictions. ANNs have multiple layers including an input layer, one or more hidden layers, and an output layer. They are often used for applications like image recognition, natural language processing, medical diagnosis, and autonomous vehicles. While ANNs can perform well on large datasets, they also face challenges including overfitting, data and computational requirements, and a lack of transparency.
The document discusses the history and various approaches to artificial intelligence, including neural networks, expert systems, and genetic programming. It also examines applications such as speech recognition, game playing, and pattern recognition. Additionally, it addresses potential dangers of advanced AI, such as androids displacing human jobs or nanomachines achieving superintelligent computing power. The document concludes by considering whether developing powerful AI technologies is something researchers "should" pursue.
ARTIFICIAL INTELLIGENT ( ITS / TASK 6 ) done by Wael Saad Hameedi / P71062Wael Alawsey
This document provides an overview of artificial intelligence and several AI techniques. It discusses neural networks, genetic algorithms, expert systems, fuzzy logic, and the suitability of AI for solving transportation problems. Neural networks can be used to perform tasks like optical character recognition by analyzing images. Genetic algorithms use principles of natural selection to arrive at optimal solutions. Expert systems mimic human experts to provide advice. Fuzzy logic allows for gradual membership in sets rather than binary membership. Complexity and uncertainty make transportation well-suited for AI approaches.
Edge AI allows devices like self-driving cars to make decisions immediately using on-device processing rather than cloud-based processing, which introduces latency. Edge AI processes data and inferences locally on IoT and sensor devices. This enables applications like self-driving cars using computer vision to detect humans and stop in real-time. While Edge AI provides benefits like lower latency, security, and data privacy, it also faces limitations in processing power and operational complexity compared to cloud-based AI.
Barcodes and image recognition technology are examples of machine-readable representations of data. Barcodes use a pattern of bars and spaces that can be read by optical scanners to identify numbers and letters. Image recognition allows computers to identify objects in images through techniques like deep learning, which automatically extracts features from image data. Face recognition is a type of image recognition that extracts features from facial images and compares them to identify individuals, using algorithms like ResNet that represent faces as vectors and compare their Euclidean distances.
This document provides an overview of artificial intelligence (AI). It discusses the history of AI beginning in the mid-20th century. It describes how AI works using artificial neurons and neural networks that mimic the human brain. The document outlines several goals and applications of AI including expert systems, natural language processing, computer vision, robotics, and more. It also discusses both the advantages and disadvantages of AI as well as considerations for its future development and impact.
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
1. The magic behind AI
By Othmane Gacem
ogacem@outlook.com
2. Topics
Selected applications of Artificial Neural Networks:
• How can machines learn?
• The difference between our neurons and an artificial neural network
• Image recognition
• Facial recognition
• Self-driving cars
• Handwriting recognition
• Art (neural style transfer)
• AI in business

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
- Pedro Domingos, author of “The Master Algorithm”
3. Make sense of everything
Don’t know what AI, machine learning, and robotics are, or their similarities and differences? I’ve got you covered!

Statistics
Oxford dictionary definition: The practice or science of collecting and analysing numerical data in large quantities, especially for the purpose of inferring proportions in a whole from those in a representative sample.
-> This is the science which underlies ML and AI.

ML (Machine Learning)
Oxford dictionary definition: The capacity of a computer to learn from experience, i.e. to modify its processing on the basis of newly acquired information.
Wikipedia: An interdisciplinary field that uses statistical techniques to give computer systems the ability to “learn” (e.g., progressively improve performance on a specific task) from data, without being explicitly programmed.
Tools: supervised learning, clustering, dimensionality reduction, neural networks.

AI (Artificial Intelligence)
Oxford dictionary definition: The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Or best: "AI is whatever hasn't been done yet.“ If you cannot replicate what humans do, then it’s not AI. AI is not a technology in itself; it’s an application of ML to tasks previously limited to humans.

Robotics
And where does robotics fit in all this? Isn’t AI something like a robot with human capabilities? Indeed, when one searches for AI on Google Images, most pictures show robots. AI is in fact a computer algorithm based on statistical techniques and does not necessarily involve robotics.
Oxford definition of a robot: A machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer.
Robotics is much more about a predefined (pre-programmed) set of actions, while AI’s (in fact ML’s) definition from Wikipedia clearly states “without being explicitly programmed”.
Example of robots without AI: robots assembling cars.
Example of a robot with AI: Sophia, a human-looking robot capable of conversation.
4. In more detail...
• Artificial Intelligence (AI): AI is not a technology but rather a set of computer algorithms which can mimic human capacities, for example self-driving cars, facial recognition, and chat-bot algorithms, all of which use machine learning with statistical/mathematical origins. Although the capabilities are somewhat similar to human capacities, the way the computer achieves this is most probably not the way the brain works, for the very good reason that we do not know how the brain works.
• Machine learning: This method is defined by the systematic use of statistical methods in order to make predictions. But as explained in this presentation, what is called “learning” is in fact only the ability of the algorithm to find an equation whose predictions are close enough to real-world data to have a practical use.
• Artificial Neural Networks: ANNs are several machine learning techniques (mostly logistic regressions) stacked one after the other, and they very loosely resemble a network of brain neurons because the result of one regression is sent on to the next step, similarly to neurons sending chemicals to each other. ANNs are the basis of many applications which mimic human capabilities and are termed AI. They are used in self-driving, playing Go and chess, etc.
• Natural Language Processing: NLP is an ANN specialized in learning how humans speak; it can then form its own sentences and texts and conduct an almost normal conversation with a human. This is called a chat-bot. Its ability to actually understand the meaning of words is extremely limited.
• Any link to automation? Not in the strict sense. Automation implies several consecutive pre-programmed steps conducted by a ‘robot’ (a computer program) for a specific action, which can be repeated whenever needed. We humans explicitly program the robot to do whatever we want; it does not need to learn anything.
5. Machine Learning: How can a machine “learn”?

Learning is: cost (error) minimization.

Linear Regression
The computer’s algorithm finds the line which passes closest to the black points. It starts with the red line (1), calculates the cost (the distance between the points and the line, i.e. the length of the yellow lines), and then tries to minimize the cost by moving the line to a better place, here in green (2). We can see that the blue lines are shorter than the yellow lines. This is the basic form of learning, or training (both words are used interchangeably here).

Data points
These points represent the data on the axes. The axes can represent age, income, price, costs, pixel values, location, or any other source of data.

Gradient Descent
In this 3D example, the cost (represented as arrows on the left graph) is the dependent variable, and this time there are two independent variables (x1 and x2). The algorithm starts at the non-optimal point A (where the cost is high); this could be the equivalent of the red line in example 1. It then moves slowly toward the bottom (a local minimum), which is the position of the green line. The fact that the algorithm gradually moves to the minimal-cost point is the “learning” in machine learning: it gradually descends to a local minimum.

Inspired by the courses “Machine Learning” and “Deep Learning Specialization” from Andrew Ng, co-founder of Coursera; Adjunct Professor, Stanford University; formerly head of Baidu AI Group/Google Brain.
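The cost-minimization loop described above can be sketched in a few lines of plain Python: we fit a line y = w·x + b to some points by repeatedly nudging w and b downhill along the gradient of the squared error. The data points and learning rate are invented for illustration.

```python
# Gradient descent for fitting a line y = w*x + b to data points.
# The "cost" is the mean squared distance between predictions and points.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0  # start with a poor line (like the red line "1")
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Move a small step downhill: this gradual descent is the "learning"
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Points lying roughly on y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 8.9]
w, b = fit_line(xs, ys)
print(round(w, 1), round(b, 1))  # converges near w=2.0, b=1.1
```

Each pass shortens the "yellow lines" a little; after enough steps the line sits as close to the points as it can get, which is the local minimum from the gradient-descent picture.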
6. Comparison between Natural and Artificial Neural Networks

Neurons
Neurons are brain cells, allowing us to use our senses and think. They are interconnected with each other in the brain and communicate with the help of chemical signals. Depending on the signal a neuron gets from its neighbours, it will choose whether to emit a signal to another neuron.

Artificial Neural Networks (ANN)
Artificial Neural Networks are a sequence of mathematical calculations based on statistical principles. The only reason we call these neural networks is that these mathematical calculations are connected in a way which loosely looks like the neurons in our brain.

Artificial Neurons
Each circle in the picture performs a mathematical operation and sends the result to the next circle. As simple as that.

Is AI built the same way as our brain? No.
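The "circle that performs a mathematical operation" can itself be sketched in a few lines. This toy neuron computes a weighted sum of its inputs plus a bias and passes it through a sigmoid activation, in line with the earlier remark that ANNs are mostly stacked logistic regressions; the particular weights and inputs are invented for illustration.

```python
import math

def sigmoid(z):
    # Squashes any number into (0, 1): how strongly the neuron "fires"
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, then the activation: one "circle" in the picture
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Two inputs; the weights decide how much each input matters
out = neuron([1.0, 0.5], weights=[2.0, -1.0], bias=0.0)
print(round(out, 3))  # sigmoid(2*1.0 - 1*0.5) = sigmoid(1.5) ≈ 0.818
```

The output of one such neuron becomes an input to neurons in the next layer, which is the whole "network" part of the name.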
7. AI can see: Image recognition

What we see vs. what a computer sees
To a computer, a picture is just numbers. Pixel values: the higher the value, the darker the pixel. RGB colors: red, green, and blue, from which all other colours are produced.

1. Training phase
The neural network analyses millions of these pixel numbers (pictures) together with their labels: “cat” and “no cat”.

2. New image
The computer sees this picture for the first time and must predict whether the picture is a cat or not.

3. Prediction: Cat!
The neural network trained on the labelled pictures is used again to detect a cat in the new image. The neural network essentially compares the new image with the images it already knows; if the similarity is high enough, it will label it as “cat”.

Artificial Neural Network
A series of regressions (mathematical equations) which outputs 1 or 0 at every node: 1 = cat, 0 = no cat.
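The three steps above can be sketched with a tiny logistic classifier over raw pixel values. Real image recognition uses deep convolutional networks trained on millions of images; this toy, whose 4-pixel "images" and labels are entirely invented, only shows the mechanics of training on labelled pixels and then predicting for a never-seen image.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, pixels):
    # Output near 1 means "cat", near 0 means "no cat"
    return sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)

def train(images, labels, lr=0.5, steps=2000):
    weights = [0.0] * len(images[0])
    bias = 0.0
    for _ in range(steps):
        for pixels, label in zip(images, labels):
            error = predict(weights, bias, pixels) - label
            # Nudge every weight to shrink the error: gradient descent again
            weights = [w - lr * error * p for w, p in zip(weights, pixels)]
            bias -= lr * error
    return weights, bias

# Step 1, training: toy 4-pixel "images" (values 0..1) with labels 1 = cat, 0 = no cat
images = [[0.9, 0.8, 0.1, 0.0], [0.8, 0.9, 0.2, 0.1],
          [0.1, 0.0, 0.9, 0.8], [0.0, 0.2, 0.8, 0.9]]
labels = [1, 1, 0, 0]
w, b = train(images, labels)

# Steps 2 and 3, new image and prediction
new_image = [0.85, 0.75, 0.15, 0.05]  # never seen before, resembles the cats
print("Cat!" if predict(w, b, new_image) > 0.5 else "No cat")  # prints "Cat!"
```

The "similarity" from the slide shows up here as the learned weights: pixels that were bright in the cat examples end up with positive weights, so a new image lighting up the same pixels pushes the output toward 1.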
8. AI recognises faces: Facial recognition

Applications for image recognition
• Self-driving cars
• Facial recognition
• Augmented reality
• Emotion recognition
• Unlocking your phone
• Criminal identification
• Medical diagnosis
• KYC processes

Dive into an ANN
To understand in more detail how image recognition is done, it is important to add some details to our previous cat example, this time taking the picture of a human. Let’s dive into what each layer of the neural network really does.

A basic artificial neural network: input → Layer 1 → Layer 2 → Output

Input
The image is first transformed into a series of pixel values and fed into the neural network, as already seen in the image recognition chapter.

Layer 1
At this level, only simple features are recognized. Generally, filters at the beginning are specialised in recognising simple features such as vertical or horizontal lines.

Layer 2
Later, after the superposition of layers, the neural network can recognize more complex features such as eyes, noses, and ears, as seen in the example for layer 2.

Output
Finally, towards the end, the neural network can recognize entire faces, allowing it to confidently recognize a face or predict whether there is a cat in the picture.

Source: https://hackernoon.com/what-is-a-capsnet-or-capsule-network-2bfbe48769cc
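The input → Layer 1 → Layer 2 → Output flow can be expressed as repeated passes, each layer taking the previous layer's outputs as its inputs. The layer sizes are arbitrary and the weights here are random placeholders (a real network would learn them by training), so this sketch only shows how data flows through the stack, not a trained face recognizer.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # One layer: every output neuron is a weighted sum of ALL the inputs
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def rand_layer(n_in, n_out):
    # Placeholder weights; training would replace these with learned values
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
pixels = [0.2, 0.7, 0.1, 0.9]   # input: pixel values, as in the previous chapter
w1, b1 = rand_layer(4, 3)       # Layer 1: simple features (lines, edges)
w2, b2 = rand_layer(3, 2)       # Layer 2: complex features (eyes, nose, ears)
w3, b3 = rand_layer(2, 1)       # Output: one number, face / no face

h1 = layer(pixels, w1, b1)
h2 = layer(h1, w2, b2)
out = layer(h2, w3, b3)
print(len(h1), len(h2), len(out))  # 3 2 1: the data shrinks toward one answer
```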
9. AI as a driver: Self-driving cars

Training
1. Human driver: A human is driving, and the computer observes and keeps in memory the steering wheel’s position.
2. Image: The computer takes pictures of the road ahead, say every second.
3. Learning by observing: The computer compares the pictures with the steering wheel’s position it observed while the human was driving. This is how the neural network is trained.

Test
1. New road: The computer is now driving on a new road.
2. Neural network: The neural network is the same. It compares the pictures of the new road ahead with the ones it learned.
3. Prediction: The computer’s prediction (or decision) is to turn the steering wheel slightly to the left. It then gives the order to the car’s wheels. This process is done thousands of times per second.
10. Example from Tesla

Road detection
The Tesla car detects the road ahead, as demonstrated in this picture where the road ahead is covered with pink points. This is possible thanks to the method described in the previous example!

Multiple cameras
In the previous example, the car used a single front camera taking pictures of the road ahead. In real life, self-driving cars make use of multiple cameras, radars, and other sensors in order to detect not only the road ahead but also road signs, other cars, and people. But the intuition behind the neural network training is the same regardless of the type of camera.

Source: https://www.tesla.com/
11. AI as a reader: Handwriting recognition (Optical Character Recognition)

Training
1. Treated as images: The different letters are treated as pictures, in a similar way to the image recognition seen previously.
2. Wide range of fonts: First, every letter of the alphabet needs to be learned. As everyone’s handwriting is different, the artificial neural network also needs to get used to the different fonts and handwriting techniques, hence the many different As.
3. Large database: This process needs to be repeated for every letter in the alphabet, as well as the numbers and any special characters.
4. Training: Once the training is complete, the ANN contains a great variety of fonts for each letter.

Test
1. New: Never-seen-before handwriting, e.g. “Hello World”.
2. Character separation: The first step is to detect every character and separate them. This is done with another neural network which was trained for this task, fairly similar to the “cat / not cat” example.
3. Trained ANN: After this extensive training, the ANN is able to predict (read) other people’s handwriting. It recognizes that the H looks very similar to all the Hs it has seen during training and thus recognizes it as an H, then moves to the next letter, and so on.

Source: http://veniceatlas.epfl.ch/atlas/digitization/automatic-transcription/handwritten-text-recognition-with-the-rwth-ocr-system/
12. AI as an Artist: Style transfer

Content picture: the object whose basic shapes will be retained.
Style picture: contains the style that we want to transfer to the content picture.
Starting image: composed of random colours; the goal is to end up with a mix of both the content and the style pictures.
The algorithm then refines intermediate pictures until the final generated image emerges.

Who said art is what separates humans from robots? This method is called “AI style transfer” and makes it possible to create new art with much less effort. In the future we might see this technique integrated into augmented reality (AR) headsets and see the world in an entirely different way!

Source: https://github.com/ea167/code-fest
14. AI in business

Business applications for AI are only recently emerging, as costs go down and some applications increasingly prove a real return on investment.

Using AI, with techniques explained in this presentation (meaning the technology mimics human capabilities):
• Credit card fraud detection
Machine learning can learn your credit card usage habits: the amounts spent, where, when, and for which products or services. These are then compared with the expenses of people similar to you, as well as with your own historical purchases. This data makes it possible to predict whether the purchase being made right now with your credit card is made by you or by a thief who stole your credit card information.
• Security thanks to facial recognition
We are all familiar with Apple’s Face ID, but companies might increasingly rely on this technology, for instance by using facial recognition instead of an actual entrance pass. This increases security and avoids the risks of a stolen pass.
• Medical diagnosis
For instance: taking pictures of birthmarks to detect a potential cancer. Birthmarks are at risk of melanoma, and image recognition can detect this. There are now apps doing this, often better than a trained doctor.

Using machine learning or statistical diagnosis: not AI in the sense of mimicking human capacities. These applications are as valuable as AI, but AI is just not always necessary.
15. Scared of AI?

“IS ARTIFICIAL INTELLIGENCE A DANGER? MORE THAN HALF OF UK FEARS ROBOTS WILL TAKE OVER”
“FORGET TERRORISM, CLIMATE CHANGE AND PANDEMICS: ARTIFICIAL INTELLIGENCE IS THE BIGGEST THREAT TO HUMANITY”
“MICROSOFT’S NADELLA SAYS AI CAN MAKE THE WORLD MORE INCLUSIVE”
“AI OFFERS A UNIQUE OPPORTUNITY FOR SOCIAL PROGRESS”

By Othmane Gacem
ogacem@outlook.com