The document discusses machine learning methods and their applications in space engineering. It provides an overview of recent advances in machine learning techniques such as deep learning, genetic programming, and smart search methods. It also summarizes areas explored by the European Space Agency's Advanced Concepts Team (ACT), including using neurocontrollers, swarm intelligence, biomimetics, and evolution/search methods for applications like spacecraft control, formation flying, computer vision, and trajectory optimization. The document envisions that machine learning could enable more intelligent spacecraft by 2040 if the gap in onboard computing is filled.
IRJET - Object Detection using Deep Learning with OpenCV and Python (IRJET Journal)
This document summarizes research on object detection techniques using deep learning. It discusses using the YOLO algorithm to identify objects in images using a single neural network that predicts bounding boxes and class probabilities. The document reviews prior research on algorithms like R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN and RetinaNet. It then describes the YOLO loss function and methodology for finding bounding boxes of objects in an image. The document concludes that YOLO is well-suited for real-time object detection applications due to its advantages over other algorithms.
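As a concrete illustration of how predicted boxes are compared against ground truth in YOLO-style pipelines, here is a minimal intersection-over-union (IoU) sketch in plain Python. The function name and the (x1, y1, x2, y2) box format are illustrative choices, not taken from the paper itself:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: latest left/top edge to earliest right/bottom edge.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

IoU shows up twice in this setting: inside the YOLO loss, when selecting which predictor is "responsible" for an object, and in evaluation metrics such as mAP, where a detection typically counts as correct above an IoU threshold.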
This chapter introduces the theoretical foundations of knowledge mining and intelligent agents. It discusses key concepts like knowledge, intelligent agents, and the fundamental tasks of knowledge discovery in databases. The chapter also provides an overview of several well-developed intelligent agent methodologies, including ant colony optimization, particle swarm optimization, and evolutionary algorithms that can be used for knowledge mining.
The document discusses the Mechanical Engineering department at IIT Kanpur. It describes some of the research areas and projects being conducted, including developing train safety systems like wheel impact load detection and derailment detection devices. It also discusses research on measuring wheel technology, onboard diagnosis systems, bogie design, and developing stability control for cars. The department has strong programs in areas like computational mechanics, materials science, robotics, and is growing its nuclear engineering program.
The field of Artificial Intelligence (AI) has been revitalized in this decade, primarily due to the large-scale application of Deep Learning (DL) and other Machine Learning (ML) algorithms. This has been most evident in applications like computer vision, natural language processing, and game bots. However, extraordinary successes within a short period of time have also had the unintended consequence of causing a sharp difference of opinion in research and industrial communities regarding the capabilities and limitations of deep learning. A few questions you might have heard being asked (or asked yourself) include:
a. We don’t know how Deep Neural Networks make decisions, so can we trust them?
b. Can Deep Learning deal with highly non-linear continuous systems with millions of variables?
c. Can Deep Learning solve the Artificial General Intelligence problem?
The goal of this seminar is to provide a 1,000-foot view of Deep Learning and hopefully answer the questions above. The seminar will touch upon the evolution, current state of the art, and peculiarities of Deep Learning, and share thoughts on using Deep Learning as a tool for developing power system solutions.
IRJET - Comparative Study of Different Techniques for Text as Well as Object D... (IRJET Journal)
This document discusses and compares different techniques for object and text detection from real-time images, including OCR, RCNN, Mask RCNN, Fast RCNN, and Faster RCNN algorithms. It finds that Mask RCNN, an extension of Faster RCNN, is generally the best algorithm for object detection in real-time images, as it outperforms other models in accuracy for tasks like object detection, segmentation, and captioning challenges. The document provides background on machine learning and neural networks approaches to image recognition and object detection.
IRJET - A Real Time Yolo Human Detection in Flood Affected Areas based on Vide... (IRJET Journal)
This document proposes a method for real-time human detection in flood-affected areas using video content analysis and the YOLO object detection algorithm. It trains YOLO on the COCO Human dataset to detect and localize humans in video frames from surveillance cameras. The results show that YOLO can accurately detect multiple humans, even with occlusion, and single humans under varying illumination. This approach aims to help rescue operations quickly identify affected areas and prioritize aid.
Journal club done with Vid Stojevic for PointNet:
https://arxiv.org/abs/1612.00593
https://github.com/charlesq34/pointnet
http://stanford.edu/~rqi/pointnet/
Deep learning for indoor point cloud processing. PointNet provides a unified architecture that operates directly on unordered point clouds, without voxelisation, for applications ranging from object classification and part segmentation to scene semantic parsing.
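PointNet's central trick is that a symmetric aggregation (an element-wise max over per-point features) makes the global descriptor invariant to the ordering of the input points. A plain-Python sketch of that idea, where the per-point feature function is a toy stand-in for PointNet's learned MLP:

```python
def point_feature(p):
    # Toy per-point "MLP": a few fixed nonlinear features of (x, y, z).
    x, y, z = p
    return [x + y + z, max(x, y, z), x * x + y * y + z * z]

def global_feature(points):
    """PointNet-style symmetric aggregation: element-wise max over per-point
    features, so the result does not depend on the order of the input points."""
    feats = [point_feature(p) for p in points]
    return [max(col) for col in zip(*feats)]
```

Shuffling the points leaves `global_feature` unchanged, which is exactly the permutation invariance the architecture needs for raw, unordered point clouds.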
Alternative download link:
https://www.dropbox.com/s/ziyhgi627vg9lyi/3D_v2017_initReport.pdf?dl=0
The Face of Nanomaterials: Insightful Classification Using Deep Learning - An... (PyData)
Artificial intelligence is emerging as a new paradigm in materials science. This talk describes how physical intuition and (insightful) machine learning can solve the complicated task of structure recognition in materials at the nanoscale.
IRJET - Direct Me-Nevigation for Blind People (IRJET Journal)
This document describes a system for direct navigation assistance for blind people using object detection and audio cues. It uses a convolutional neural network model called You Only Look Once (YOLO) to perform real-time object detection on camera images and then describes the detected objects and their locations to the blind user using 3D spatialized sound. The system aims to allow blind users to independently navigate environments by audibly identifying surrounding objects. It analyzes previous works on sensory substitution and assistive technologies for the blind, as well as research on using 3D sound for navigation assistance. The document outlines the object detection methods used, including YOLO and anchor boxes to improve accuracy at detecting multiple objects within each image grid.
Graph Centric Analysis of Road Network Patterns for CBD’s of Metropolitan Cit... (Punit Sharnagat)
OSMnx is a Python package to retrieve, model, analyze, and visualize street networks from OpenStreetMap.
OpenStreetMap (OSM) is a collaborative mapping project that provides a free and publicly editable map of the world.
OpenStreetMap provides a valuable crowd-sourced database of raw geospatial data for constructing models of urban street networks for scientific analysis.
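In practice one would typically pull a real network with OSMnx (e.g. `ox.graph_from_place(...)`) and compute statistics on it; the kind of graph-centric pattern metric involved can be sketched on a toy adjacency list in plain Python (node names and the metric chosen here are illustrative):

```python
# Toy street network as an adjacency list: nodes are intersections,
# edges are street segments between them.
streets = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def degree_histogram(graph):
    """Count how many intersections have each number of incident streets
    (node degree) -- a basic pattern metric in road network analysis."""
    hist = {}
    for node, neighbours in graph.items():
        hist[len(neighbours)] = hist.get(len(neighbours), 0) + 1
    return hist
```

Degree distributions like this distinguish, for example, grid-like CBD layouts (many 4-way intersections) from organic or dead-end-heavy street patterns.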
Slides by Amaia Salvador at the UPC Computer Vision Reading Group.
Source document on GDocs with clickable links:
https://docs.google.com/presentation/d/1jDTyKTNfZBfMl8OHANZJaYxsXTqGCHMVeMeBe5o1EL0/edit?usp=sharing
Based on the original work:
Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards real-time object detection with region proposal networks." In Advances in Neural Information Processing Systems, pp. 91-99. 2015.
The Status of ML Algorithms for Structure-property Relationships Using Matb... (Anubhav Jain)
The document discusses the development of Matbench, a standardized benchmark for evaluating machine learning algorithms for materials property prediction. Matbench includes 13 standardized datasets covering a variety of materials prediction tasks. It employs a nested cross-validation procedure to evaluate algorithms and ranks submissions on an online leaderboard. This allows for reproducible evaluation and comparison of different algorithms. Matbench has provided insights into which algorithm types work best for certain prediction problems and has helped measure overall progress in the field. Future work aims to expand Matbench with more diverse datasets and evaluation procedures to better represent real-world materials design challenges.
Evaluating Machine Learning Algorithms for Materials Science using the Matben... (Anubhav Jain)
1) The document discusses evaluating machine learning algorithms for materials science using the Matbench protocol.
2) Matbench provides standardized datasets, testing procedures, and an online leaderboard to benchmark and compare machine learning performance.
3) This allows different groups to evaluate algorithms independently and identify best practices for materials science predictions.
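The outer loop of a cross-validation protocol like Matbench's can be sketched in plain Python. This is a generic k-fold index splitter for illustration, not Matbench's actual implementation (which nests an inner model-selection loop inside each outer fold):

```python
def k_fold_indices(n, k):
    """Split sample indices 0..n-1 into k contiguous (train, test) folds,
    as in the outer loop of a (nested) cross-validation."""
    # Distribute any remainder across the first n % k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        folds.append((train, test))
        start += size
    return folds
```

Every sample lands in exactly one test fold, so each algorithm is evaluated on data it never saw during training, which is what makes leaderboard comparisons reproducible.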
Deep Learning for X-ray Image to Text Generation (ijtsrd)
This document discusses using deep learning techniques for X-ray image to text generation. Specifically, it proposes using a convolutional neural network (CNN) and recurrent neural network (RNN) model to classify X-ray images into predefined categories and then generate a text description of the image category. The system would be trained on a dataset of X-ray images that have been manually annotated with labels and captions. The goal is for the trained model to then be able to classify new X-ray images and describe them in text without any manual annotation. The document provides background on existing approaches to image captioning and object detection, and outlines the proposed system architecture for this X-ray image to text generation task.
A Literature Survey: Neural Networks for object detection (vivatechijri)
Humans have a great ability to distinguish objects by vision, but for machines object detection is a challenge, which is why neural networks were introduced in computer science. Neural networks, also called artificial neural networks [13], are computational models of the brain that help with object detection and recognition. This paper describes and demonstrates different types of neural networks, such as ANN, KNN, Faster R-CNN, 3D-CNN, and RNN, along with their accuracies. From a study of various research papers, the accuracies of the different networks are discussed and compared, and it is concluded that, in the given test cases, the ANN gives the best accuracy for object detection.
A simple framework for contrastive learning of visual representations (Devansh16)
Link: https://machine-learning-made-simple.medium.com/learnings-from-simclr-a-framework-contrastive-learning-for-visual-representations-6c145a5d8e99
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
Comments: ICML'2020. Code and pretrained models at this https URL
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
Cite as: arXiv:2002.05709 [cs.LG]
(or arXiv:2002.05709v3 [cs.LG] for this version)
Submission history
From: Ting Chen [view email]
[v1] Thu, 13 Feb 2020 18:50:45 UTC (5,093 KB)
[v2] Mon, 30 Mar 2020 15:32:51 UTC (5,047 KB)
[v3] Wed, 1 Jul 2020 00:09:08 UTC (5,829 KB)
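The contrastive (NT-Xent) objective at the heart of SimCLR can be sketched for a single anchor in plain Python. This is a toy scalar version working from precomputed similarities; the real loss operates on batches of normalised embeddings, with every other example in the batch serving as a negative:

```python
import math

def nt_xent_pair_loss(sim_pos, sim_negs, temperature=0.5):
    """Normalized temperature-scaled cross-entropy for one anchor:
    -log( exp(s_pos/t) / (exp(s_pos/t) + sum_j exp(s_neg_j/t)) )."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # subtract the max for a numerically stable softmax
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

The loss falls as the positive pair (two augmented views of the same image) becomes more similar than the negatives, which is what drives the representation learning; the temperature controls how sharply hard negatives are weighted.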
IRJET - Explicit Content Detection using Faster R-CNN and SSD Mobilenet V2 (IRJET Journal)
This document compares two object detection models, Faster R-CNN and SSD MobileNet v2, for detecting explicit content in images. Faster R-CNN uses a region proposal network to identify regions of interest, which are then classified and bounded. SSD MobileNet combines the Single Shot Detector framework with the efficient MobileNet architecture using depthwise separable convolutions. The document evaluates these models in terms of speed, accuracy, and model size for explicit content detection.
Tutorial materials presented at PyCon Korea 2019. These are the Part 1 slides, in which Professor Jaesik Choi explains what explainable AI is. Information about the event is available at the links below.
http://xai.unist.ac.kr/Tutorial/2018/
https://github.com/OpenXAIProject/PyConKorea2019-Tutorials
Part 1: https://www.slideshare.net/OpenXAI/2019-part-1
Part 2: https://www.slideshare.net/OpenXAI/2019-lrp-part-2
Part 3: https://www.slideshare.net/OpenXAI/2019-shap-part-3
Covers image restoration techniques such as denoising, deblurring, and super-resolution for 3D images and models, ranging from classical computer vision techniques to contemporary deep learning based processing for ordered and unordered point clouds, depth maps, and meshes.
TMS workshop on machine learning in materials science: Intro to deep learning... (BrianDeCost)
This presentation is intended as a high-level introduction to deep learning and its applications in materials science. The intended audience is materials scientists and engineers.
Disclaimers: the second half of this presentation is intended as a broad overview of deep learning applications in materials science; due to time limitations it is not intended to be comprehensive. As a review of the field, this necessarily includes work that is not my own. If my own name is not included explicitly in the reference at the bottom of a slide, I was not involved in that work.
Any mention of commercial products in this presentation is for information only; it does not imply recommendation or endorsement by NIST.
This document provides guidance on labeling fundus images for classification models. It recommends using optimized labeling tools to annotate optic disc positions more efficiently than manual drawing. Popular tools include Labelbox and VGG Image Annotator. The document estimates that labeling 1,000 fundus images with a single object each could take around 1 hour and 20 minutes. It also notes that pre-trained non-medical networks can be built upon for "small data" sets of 1,000 images.
Deconstructing SfM-Net architecture and beyond
"SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates."
Alternative download:
https://www.dropbox.com/s/aezl7ro8sy2xq7j/sfm_net_v2.pdf?dl=0
AlexNet achieved unprecedented results on the ImageNet dataset by using a deep convolutional neural network with over 60 million parameters. It achieved top-1 and top-5 error rates of 37.5% and 17.0%, significantly outperforming previous methods. The network architecture included 5 convolutional layers, some with max pooling, and 3 fully-connected layers. Key aspects were the use of ReLU activations for faster training, dropout to reduce overfitting, and parallelizing computations across two GPUs. This dramatic improvement demonstrated the potential of deep learning for computer vision tasks.
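The top-1 and top-5 error figures quoted above can be computed from raw class scores with a short plain-Python sketch (the function name and score layout are illustrative):

```python
def top_k_error(scores, labels, k):
    """Fraction of samples whose true label is not among the k highest-scoring
    classes -- the metric behind top-1 / top-5 error rates on ImageNet."""
    wrong = 0
    for row, label in zip(scores, labels):
        # Indices of the k classes with the highest scores for this sample.
        topk = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        if label not in topk:
            wrong += 1
    return wrong / len(labels)
```

Top-5 error is always at most the top-1 error, since the correct class has more chances to appear; that is why AlexNet's 17.0% top-5 figure is so much lower than its 37.5% top-1 figure.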
We test if modern computer-vision algorithms can predict if users are reading relevant information, from their eye movement patterns. The slides accompany the video presentation at https://youtu.be/ZebBgUhL-EU
The full research paper is available at:
https://dl.acm.org/doi/10.1145/3343413.3377960
and also at
https://arxiv.org/abs/2001.05152
The document discusses how emerging technologies are enabling new approaches to modeling complex systems using large numbers of autonomous agents. It describes efforts to develop agent-based modeling frameworks that can leverage exascale supercomputers to simulate phenomena like microbial ecosystems, cybersecurity, and energy systems at an unprecedented scale. These models incorporate hybrid discrete-continuous methods and very high-resolution data to better understand dynamic social and natural processes.
Invited talk on AR/SLAM and IoT in the ILAS Seminar "Introduction to IoT and Security", Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/ )
Speaker: Tomoyuki Mukasa
Artificial intelligence (AI) is experiencing steadily growing interest over the recent years. For good reason, since these innovative algorithms and methods, such as machine learning and deep neural networks, in which knowledge is acquired and applied based on data, enable the automation of a wide range of processes and quickly deliver precise results. AI is also getting more and more popular in the space sector. The Institute of Space Technology & Space Applications (ISTA) at the Universität der Bundeswehr in Munich is conducting research around AI for space operations, science, and technology. An overview of activities and current developments towards fault management, autonomous collision avoidance, autonomous landing, as well as radio science at ISTA will be presented.
Journal club done with Vid Stojevic for PointNet:
https://arxiv.org/abs/1612.00593
https://github.com/charlesq34/pointnet
http://stanford.edu/~rqi/pointnet/
Deep learning for Indoor Point Cloud processing. PointNet, provides a unified architecture operating directly on unordered point clouds without voxelisation for applications ranging from object classification, part segmentation, to scene semantic parsing.
Alternative download link:
https://www.dropbox.com/s/ziyhgi627vg9lyi/3D_v2017_initReport.pdf?dl=0
The Face of Nanomaterials: Insightful Classification Using Deep Learning - An...PyData
Artificial intelligence is emerging as a new paradigm in materials science. This talk describes how physical intuition and (insightful) machine learning can solve the complicated task of structure recognition in materials at the nanoscale.
IRJET - Direct Me-Nevigation for Blind PeopleIRJET Journal
This document describes a system for direct navigation assistance for blind people using object detection and audio cues. It uses a convolutional neural network model called You Only Look Once (YOLO) to perform real-time object detection on camera images and then describes the detected objects and their locations to the blind user using 3D spatialized sound. The system aims to allow blind users to independently navigate environments by audibly identifying surrounding objects. It analyzes previous works on sensory substitution and assistive technologies for the blind, as well as research on using 3D sound for navigation assistance. The document outlines the object detection methods used, including YOLO and anchor boxes to improve accuracy at detecting multiple objects within each image grid.
Graph Centric Analysis of Road Network Patterns for CBD’s of Metropolitan Cit...Punit Sharnagat
OSMnx is a Python package to retrieve, model, analyze, and visualize street networks from OpenStreetMap.
OpenStreetMap (OSM) is a collaborative mapping project that provides a free and publicly editable map of the world.
OpenStreetMap provides a valuable crowd-sourced database of raw geospatial data for constructing models of urban street networks for scientific analysis
Slides by Amaia Salvador at the UPC Computer Vision Reading Group.
Source document on GDocs with clickable links:
https://docs.google.com/presentation/d/1jDTyKTNfZBfMl8OHANZJaYxsXTqGCHMVeMeBe5o1EL0/edit?usp=sharing
Based on the original work:
Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards real-time object detection with region proposal networks." In Advances in Neural Information Processing Systems, pp. 91-99. 2015.
The Status of ML Algorithms for Structure-property Relationships Using Matb...Anubhav Jain
The document discusses the development of Matbench, a standardized benchmark for evaluating machine learning algorithms for materials property prediction. Matbench includes 13 standardized datasets covering a variety of materials prediction tasks. It employs a nested cross-validation procedure to evaluate algorithms and ranks submissions on an online leaderboard. This allows for reproducible evaluation and comparison of different algorithms. Matbench has provided insights into which algorithm types work best for certain prediction problems and has helped measure overall progress in the field. Future work aims to expand Matbench with more diverse datasets and evaluation procedures to better represent real-world materials design challenges.
Evaluating Machine Learning Algorithms for Materials Science using the Matben...Anubhav Jain
1) The document discusses evaluating machine learning algorithms for materials science using the Matbench protocol.
2) Matbench provides standardized datasets, testing procedures, and an online leaderboard to benchmark and compare machine learning performance.
3) This allows different groups to evaluate algorithms independently and identify best practices for materials science predictions.
Deep Learning for X ray Image to Text Generationijtsrd
This document discusses using deep learning techniques for X-ray image to text generation. Specifically, it proposes using a convolutional neural network (CNN) and recurrent neural network (RNN) model to classify X-ray images into predefined categories and then generate a text description of the image category. The system would be trained on a dataset of X-ray images that have been manually annotated with labels and captions. The goal is for the trained model to then be able to classify new X-ray images and describe them in text without any manual annotation. The document provides background on existing approaches to image captioning and object detection, and outlines the proposed system architecture for this X-ray image to text generation task.
A Literature Survey: Neural Networks for object detectionvivatechijri
Humans have a great capability to distinguish objects by their vision. But, for machines object
detection is an issue. Thus, Neural Networks have been introduced in the field of computer science. Neural
Networks are also called as ‘Artificial Neural Networks’ [13]. Artificial Neural Networks are computational
models of the brain which helps in object detection and recognition. This paper describes and demonstrates the
different types of Neural Networks such as ANN, KNN, FASTER R-CNN, 3D-CNN, RNN etc. with their accuracies.
From the study of various research papers, the accuracies of different Neural Networks are discussed and
compared and it can be concluded that in the given test cases, the ANN gives the best accuracy for the object
detection.
A simple framework for contrastive learning of visual representationsDevansh16
Link: https://machine-learning-made-simple.medium.com/learnings-from-simclr-a-framework-contrastive-learning-for-visual-representations-6c145a5d8e99
If you'd like to discuss something, text me on LinkedIn, IG, or Twitter. To support me, please use my referral link to Robinhood. It's completely free, and we both get a free stock. Not using it is literally losing out on free money.
Check out my other articles on Medium. : https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let's connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819
My Substack: https://devanshacc.substack.com/
Live conversations at twitch here: https://rb.gy/zlhk9y
Get a free stock on Robinhood: https://join.robinhood.com/fnud75
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
Comments: ICML'2020. Code and pretrained models at this https URL
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
Cite as: arXiv:2002.05709 [cs.LG]
(or arXiv:2002.05709v3 [cs.LG] for this version)
Submission history
From: Ting Chen [view email]
[v1] Thu, 13 Feb 2020 18:50:45 UTC (5,093 KB)
[v2] Mon, 30 Mar 2020 15:32:51 UTC (5,047 KB)
[v3] Wed, 1 Jul 2020 00:09:08 UTC (5,829 KB)
IRJET - Explicit Content Detection using Faster R-CNN and SSD Mobilenet V2IRJET Journal
This document compares two object detection models, Faster R-CNN and SSD MobileNet v2, for detecting explicit content in images. Faster R-CNN uses a region proposal network to identify regions of interest, which are then classified and bounded. SSD MobileNet combines the Single Shot Detector framework with the efficient MobileNet architecture using depthwise separable convolutions. The document evaluates these models in terms of speed, accuracy, and model size for explicit content detection.
2019년 파이콘 한국에서 진행된 튜토리얼 자료입니다. 최재식 교수님께서 설명가능인공지능이란 무엇인가에 대해 발표해주신 Part 1 발표자료입니다. 아래 링크를 통해 행사 관련 정보를 확인하실 수 있습니다.
http://xai.unist.ac.kr/Tutorial/2018/
https://github.com/OpenXAIProject/PyConKorea2019-Tutorials
Part 1: https://www.slideshare.net/OpenXAI/2019-part-1
Part 2: https://www.slideshare.net/OpenXAI/2019-lrp-part-2
Part 3: https://www.slideshare.net/OpenXAI/2019-shap-part-3
Image restoration techniques covered such as denoising, deblurring and super-resolution for 3D images and models.
From classical computer vision techniques to contemporary deep learning based processing for both ordered and unordered point clouds, depth maps and meshes.
TMS workshop on machine learning in materials science: Intro to deep learning... (BrianDeCost)
This presentation is intended as a high-level introduction to deep learning and its applications in materials science. The intended audience is materials scientists and engineers.
Disclaimers: the second half of this presentation is intended as a broad overview of deep learning applications in materials science; due to time limitations it is not intended to be comprehensive. As a review of the field, this necessarily includes work that is not my own. If my own name is not included explicitly in the reference at the bottom of a slide, I was not involved in that work.
Any mention of commercial products in this presentation is for information only; it does not imply recommendation or endorsement by NIST.
This document provides guidance on labeling fundus images for classification models. It recommends using optimized labeling tools to annotate optic disc positions more efficiently than manual drawing. Popular tools include Labelbox and VGG Image Annotator. The document estimates that labeling 1,000 fundus images with a single object each could take around 1 hour and 20 minutes. It also notes that pre-trained non-medical networks can be built upon for "small data" sets of 1,000 images.
Deconstructing SfM-Net architecture and beyond
"SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates."
Alternative download:
https://www.dropbox.com/s/aezl7ro8sy2xq7j/sfm_net_v2.pdf?dl=0
AlexNet achieved unprecedented results on the ImageNet dataset by using a deep convolutional neural network with over 60 million parameters. It achieved top-1 and top-5 error rates of 37.5% and 17.0%, significantly outperforming previous methods. The network architecture included 5 convolutional layers, some with max pooling, and 3 fully-connected layers. Key aspects were the use of ReLU activations for faster training, dropout to reduce overfitting, and parallelizing computations across two GPUs. This dramatic improvement demonstrated the potential of deep learning for computer vision tasks.
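Two of the ingredients credited above, ReLU activations and dropout, are simple enough to sketch in NumPy (an illustrative toy, not AlexNet's implementation; the dropout rate of 0.5 matches the commonly cited value):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """ReLU: cheap, non-saturating activation that sped up AlexNet's training
    compared to tanh/sigmoid units."""
    return np.maximum(0.0, x)

def dropout(x, p=0.5, training=True):
    """Dropout: randomly zero activations during training to reduce overfitting;
    inverted scaling by 1/(1-p) keeps the expected activation unchanged."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.array([-1.0, 0.5, 2.0])
print(relu(x))           # negative inputs are clamped to zero
print(dropout(relu(x)))  # surviving activations are scaled by 1/(1-p)
```

At inference time `dropout(..., training=False)` is the identity, so no rescaling pass is needed.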
We test whether modern computer-vision algorithms can predict, from users' eye-movement patterns, whether they are reading relevant information. The slides accompany the video presentation at https://youtu.be/ZebBgUhL-EU
The full research paper is available at:
https://dl.acm.org/doi/10.1145/3343413.3377960
and also at
https://arxiv.org/abs/2001.05152
The document discusses how emerging technologies are enabling new approaches to modeling complex systems using large numbers of autonomous agents. It describes efforts to develop agent-based modeling frameworks that can leverage exascale supercomputers to simulate phenomena like microbial ecosystems, cybersecurity, and energy systems at an unprecedented scale. These models incorporate hybrid discrete-continuous methods and very high-resolution data to better understand dynamic social and natural processes.
Invited talk on AR/SLAM and IoT in the ILAS Seminar "Introduction to IoT and Security", Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/ )
Speaker: Tomoyuki Mukasa
Artificial intelligence (AI) has attracted steadily growing interest in recent years, and for good reason: these innovative algorithms and methods, such as machine learning and deep neural networks, in which knowledge is acquired and applied based on data, enable the automation of a wide range of processes and quickly deliver precise results. AI is also becoming increasingly popular in the space sector. The Institute of Space Technology & Space Applications (ISTA) at the Universität der Bundeswehr in Munich is conducting research around AI for space operations, science, and technology. An overview of activities and current developments at ISTA in fault management, autonomous collision avoidance, autonomous landing, and radio science will be presented.
Big Data, Big Computing, AI, and Environmental Science (Ian Foster)
I presented to the Environmental Data Science group at UChicago, with the goal of getting them excited about the opportunities inherent in big data, big computing, and AI, and getting them thinking about how to collaborate with Argonne in those areas. We had a great and long conversation about Takuya Kurihana's work on unsupervised learning for cloud classification. I also mentioned our work making NASA and CMIP data accessible on AI supercomputers.
This document provides an overview of the COSC 426 Augmented Reality course taught by Mark Billinghurst. The course will cover topics such as AR technology, interaction techniques, applications, and research directions. It will consist of weekly lectures and students will complete a group research project and assignments. Assessment will include the research project, assignments, and a final exam.
Dr. Edwin Hernandez has expertise in wireless communications and simulations that generate large amounts of data. His presentation discusses using big data analytics for radio frequency systems. MobileCDS is an RF propagation simulator that uses 3D databases, ray tracing, and big data analysis to simulate RF signal propagation through environments with buildings and vehicles. The large number of rays and results generated require parallelization techniques like Hadoop and MapReduce to analyze trends across millions of data points.
Deep Learning Hardware: Past, Present, & Future (Rouyun Pan)
Yann LeCun gave a presentation on deep learning hardware, past, present, and future. Some key points:
- Early neural networks in the 1960s-1980s were limited by hardware and algorithms. The development of backpropagation and faster floating point hardware enabled modern deep learning.
- Convolutional neural networks achieved breakthroughs in vision tasks in the 1980s-1990s but progress slowed due to limited hardware and data.
- GPUs and large datasets like ImageNet accelerated deep learning research starting in 2012, enabling very deep convolutional networks for computer vision.
- Recent work applies deep learning to new domains like natural language processing, reinforcement learning, and graph networks.
- Future challenges include memory-aug
University of Florida 3-D lapidary scanner 110614 (Robert Harker)
The subject invention pertains to an apparatus and method for collecting 2-D data slices of a specimen. Embodiments can incorporate a lapidary platen and an image recording system to image a specimen. The lapidary wheel platen can provide an imaging plane such that an image can be taken as the lapidary wheel platen abrades a surface of the specimen. A specimen mount can maintain the surface of the specimen properly aligned in the image plane. The imaging system can be a continuous recording system such as a video camera, a discrete recording system such as a flatbed scanner, or combinations of continuous and discrete recording systems to simultaneously collect two distinct data sets. The 2-D data set(s) can then be processed to create intricate 3-D color models.
Visualising large spatial databases and building bespoke geodemographics (Dr Muhammad Adnan)
This presentation outlines my work at Local Futures and my PhD research. I worked on a combined project between Local Futures and UCL, and the presentation starts by giving an introduction to that project. My PhD investigated the creation of real-time bespoke geodemographics, and this presentation covers the work I did during the PhD journey.
The document discusses future frameworks and techniques for analyzing potential futures, including short and long-term forecasting methods. It covers several frameworks for conceptualizing technologies and their development over time, including level of technology, dimensional models, chronological models, and paradigms of growth. Key drivers of technological change and the concept of discontinuities and adjacent advances are also examined.
Ara V. Nefian is seeking a challenging research position involving computer vision, machine learning, robotics, and multimedia processing. She has a PhD in electrical engineering from Georgia Tech and over 10 years of research experience. Her skills include computer vision, Bayesian networks, image and video processing. She has published 40 papers and holds 10 patents related to these fields. Her most recent role is as a Senior System Scientist at Carnegie Mellon University where she leads projects in 3D terrain reconstruction from planetary images and autonomous robotics.
Ara V. Nefian is seeking a challenging research position involving computer vision, machine learning, robotics, and multimedia processing. She has over 10 years of research experience and 40 publications. Her background includes a PhD in electrical engineering focused on face recognition using HMMs. She has led numerous projects at companies like CMU, Intel, and Nokia involving 3D reconstruction, computer vision, Bayesian networks, and multimedia processing. She has also filed 20 patents related to these areas.
Embedded Systems: The Past, Present and the Future (Srikanth KS)
This presentation provides an overview of the trends in embedded systems. It will mainly help engineering students to select a good final year project.
The document provides an introduction to computer vision. It discusses the schedule, instructor, report format, grading, and references for the computer vision course. The course will be held weekly on Saturdays starting in July 2023. It will be taught by Dr. Ichsan Ibrahim and include lectures, assignments, a midterm exam, and final exam. Assignment reports must follow a specific format and be submitted digitally. Computer vision applications discussed include optical character recognition, face detection, smile detection, biometric authentication, object recognition in mobile phones and supermarkets, lane monitoring systems, and reconstructing 3D models from images.
Bo Li - they've created images that reliably fool neural networks (GeekPwn Keen)
Although deep neural networks (DNNs) perform well in a variety of applications, they are vulnerable to adversarial examples resulting from small-magnitude perturbations added to the input data. Inputs modified in this way can be mislabeled as a target class in targeted attacks or as a random class different from the ground truth in untargeted attacks. However, recent studies have demonstrated that such adversarial examples have limited effectiveness in the physical world due to changing physical conditions—they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this talk, Bo Li will introduce a general attack algorithm to take into account the numerous physical conditions and produces robust adversarial perturbations. This method captures a range of diverse physical conditions, including those encountered when images are captured from moving vehicles. We evaluate our physical attacks using this methodology and effectively fool real-world road sign classifiers.
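The starting point for such attacks, perturbing the input by a small step aligned with the sign of the loss gradient (the fast gradient sign method), can be illustrated on a toy linear classifier. This is a minimal sketch of the basic digital attack, not the robust physical-world algorithm described in the talk; the weights, input, and epsilon are contrived for illustration.

```python
import numpy as np

# Toy linear classifier: predict class 1 when w @ x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, epsilon):
    """Fast gradient sign method for this linear model: step every input
    dimension by epsilon in the direction that flips the decision score."""
    direction = -w if predict(x) == 1 else w
    return x + epsilon * np.sign(direction)

x = np.array([2.0, 0.3, 0.2])     # score w @ x + b = 1.6, so class 1
x_adv = fgsm(x, epsilon=0.5)
print(predict(x), predict(x_adv))  # 1 0 -- the perturbation flips the label
```

For deep networks the gradient is obtained by backpropagation rather than read off the weights, but the perturbation rule is the same; the physical attacks in the talk additionally optimize over many simulated viewing conditions.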
Deep Learning: Changing the Playing Field of Artificial Intelligence - MaRS G... (MaRS Discovery District)
Deep learning is changing the field of artificial intelligence and revolutionizing our online experience, with applications including speech and image recognition. Information and communications technology giants such as Google, Facebook, IBM and Baidu, among others, are rapidly deploying deep learning into new products and services.
Behind all of the present-day excitement about deep learning are years of high risk and hard work by a small group of eminent computer scientists and theorists connected through the Canadian Institute for Advanced Research (CIFAR).
Transforming IT Into Innovating Together is a presentation by Tom Soderstrom, CTO of NASA's Jet Propulsion Laboratory (JPL). The presentation discusses 9 emerging IT trends and how JPL is innovating to take advantage of them. The trends include: 1) Extreme collaboration made simple through knowledge sharing and social networking, 2) The pervasive cloud and using cloud computing, 3) Becoming more eco-friendly, 4) Refocused cyber security, 5) Consumer driven IT, 6) Apps over programs, 7) Immersive visualization and interaction, 8) Big data and handling large datasets, and 9) Understanding human behavior through technology. The presentation provides examples of how JPL is already innovating in
Similar to Dario Izzo - Machine Learning methods and space engineering (20)
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita... (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
2024.03.22 - Mike Heddes - Introduction to Hyperdimensional Computing.pdf (Advanced-Concepts-Team)
Presentation in Science Coffee of the Advanced Concepts Team of the European Space Agency.
Date: 22.03.2024
Speaker: Mike Heddes (University of California, Irvine)
Topic: Introduction to Hyperdimensional Computing
Abstract:
Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is a computing framework capable of forming compositional distributed representations. HD/VSA forms a "concept space" by exploiting the geometry and algebra of high-dimensional spaces. The central idea is to represent information with randomly generated vectors, called hypervectors. Together with a set of operations on these hypervectors, HD/VSA can represent compositional structures, which, in turn, enables features such as reasoning by analogy and cognitive computing. In this introductory talk, I will introduce the high-dimensional spaces and the fundamental operations on hypervectors. I will then cover applications of HD/VSA such as reasoning by analogy and graph classification.
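The binding and bundling operations described in the abstract can be sketched with random bipolar hypervectors in NumPy (the dimensionality, seed, and bipolar encoding are illustrative choices; VSA flavors differ in these details):

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # hypervector dimensionality

def hypervector():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (elementwise multiply): the result is dissimilar to both inputs,
    and binding again with one input recovers the other (since a * a = 1)."""
    return a * b

def bundle(*vs):
    """Bundling (elementwise majority vote): the result stays similar to each input."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    """Normalized dot product: ~0 for unrelated hypervectors, 1 for identical ones."""
    return (a @ b) / D

role, filler = hypervector(), hypervector()
pair = bind(role, filler)
assert abs(similarity(pair, role)) < 0.05           # binding hides its inputs
assert similarity(bind(pair, role), filler) == 1.0  # unbinding recovers the filler

a, b, c = hypervector(), hypervector(), hypervector()
memory = bundle(a, b, c)
assert similarity(memory, a) > 0.3                  # bundling preserves similarity
```

Role-filler binding plus bundling is what lets HD/VSA encode compositional structures (records, sequences, graphs) as single fixed-width vectors that can later be queried by unbinding.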
Isabelle Dicaire - From Ariadnas to Industry R&D in optics and photonics (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency.
Date: 28.02.2024
Speaker: Isabelle Dicaire (CCTT Optech)
Topic: From Ariadnas to Industry R&D in optics and photonics
The ExoGRAVITY project - observations of exoplanets from the ground with opti... (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 09.02.2024.
Speaker: Sylvestre Lacour (Paris Observatory/LESIA)
Title: The ExoGRAVITY project - observations of exoplanets from the ground with optical interferometry
Abstract: I will talk about the latest observations and results with the GRAVITY instrument installed at the VLTI, Paranal observatory.
Presentation in the Science Coffee hosted by the Advanced Concepts Team of the European Space Agency on the 12.01.2024.
Speaker: Benoit Famaey (CNRS - Observatoire astronomique de Strasbourg)
Title: Modified Newtonian Dynamics
Abstract: Presentation around the topic of MOND / tests of MOND
Presentation in Science Coffee of ESA’s Advanced Concepts Team on the 24.11.2023 by Pablo Gómez (ESA/ESAC)
Abstract:
Current and upcoming space science missions will produce petascale data in the coming years. This requires a rethinking of data distribution and processing practices. For example, the Euclid mission will be sending more than 100GB of compressed data to Earth every day. Analysis and processing of data on this scale requires specialized infrastructure and toolchains. Further, providing users with this data locally is not practical due to bandwidth and storage constraints. Thus, a paradigm shift of bringing users' code to the data and providing a computational infrastructure and toolchain around the data is required. The ESA Datalabs platform is specifically focused on fulfilling this need. It provides a centralized platform with access to data from various missions including the James Webb Space Telescope, Gaia, and others. Pre-configured environments with the necessary toolchains and standard software tools such as JupyterLab are provided and enable data access with minimal overhead. The built-in Science Application Store offers a streamlined environment that allows rapid deployment of desired processing or science exploitation pipelines. In this manner, ESA Datalabs provides an accessible and potent framework for high-performance computing and machine learning applications. While users may upload data, there is no need to download data, thus mitigating the bandwidth burden. As the computational load is handled within the computational infrastructure of ESA Datalabs, high scalability is achieved, and resources can be requisitioned as needed. Finally, the platform-centric approach facilitates direct collaboration on code and data. Currently, the platform is already available to several hundred users and is regularly showcased in dedicated workshops; interested users may request access online.
Jonathan Sauder - Miniaturizing Mechanical Systems for CubeSats: Design Princ... (Advanced-Concepts-Team)
ESA/ACT Science Coffee presentation of Nov 3, 2023 by Jonathan Sauder (NASA/JPL/CalTech)
Abstract:
In the past decade, CubeSats have evolved from small university educational opportunities to tools that industry and governments use to make new discoveries and monetize space. While originally most missions were restricted to Low Earth Orbit (LEO), CubeSats have begun to extend their reach across the solar system with the advent of Mars Cube One (MarCO) in 2018. However, given the small, constrained CubeSat form factor, there is often a need to expand the CubeSat through deployable mechanical systems once the satellite is in space. In reviewing many CubeSat missions, it has been found that over 90% have deployable structures actuated by a mechanical system. These include antennas, solar panels, and instrument booms.
There is a key challenge in CubeSat mechanism design: one cannot simply shrink larger spacecraft mechanisms down to the CubeSat form factor. Rather, these mechanisms must be designed to reduce complexity, which means good mechanical design principles are paramount. From experience designing the deployment mechanisms for the MarCO and RainCube missions, working on deployable antenna technology, and reviewing deployables used on hundreds of other CubeSats, several key principles have been identified for developing miniaturized mechanical systems. These principles will be discussed in the presentation, and examples will be provided. Small satellite missions can be made more robust by incorporating good design principles into future miniaturized mechanical systems, which in turn will result in greater reliability of small satellites. This is especially important given that many small satellites have mission-critical deployables, and given the ever-increasing number of interplanetary small satellite missions and opportunities.
Artificial intelligence (AI) is a potentially disruptive tool for physics and science in general. One crucial question is how this technology can contribute at a conceptual level to help acquire new scientific understanding or inspire new surprising ideas. I will talk about how AI can be used as an artificial muse in quantum physics, which suggests surprising and unconventional ideas and techniques that the human scientist can interpret, understand and generalize to its fullest potential.
EDEN ISS is a European project focused on advancing bio-regenerative life support systems, in particular plant cultivation in space. A mobile test facility was designed and built between March 2015 and October 2017. The facility incorporates a Service Section, which houses several subsystems necessary for plant cultivation, and the Future Exploration Greenhouse. The latter is built similarly to a future space greenhouse and provides a fully controlled environment for plant cultivation. The facility was set up in Antarctica in close vicinity to the German Neumayer Station III in January 2018 and successfully operated between February and November of the same year. During that nine-month period around 270 kg of food was produced by the crops cultivated in the greenhouse. Besides the mere production of food for the overwintering crew (10 people) of the Neumayer Station III, a large number of experiments were conducted. These experiments delivered valuable data for engineering of space greenhouses, horticultural sciences, microbiology, food quality and safety, psychology, and operation of a food production facility in a remote environment. Component and subsystem validation was conducted to better understand engineering issues when building a space greenhouse. Fresh edible and inedible biomass was measured upon every harvest, dry weight ratios were determined, and crop life cycle data was collected. More than 400 plant and microbiological samples were taken for the microbiology and food quality and safety scientists working on the project. Some samples were composed of freeze-dried plant tissue, but most samples were frozen at -40°C and shipped to Europe for analysis in specialized laboratories. A survey of the overwintering crew was conducted to gather information about the impact of the greenhouse on the crew during the nine-month winter season. Operating procedures for horticultural tasks, but also for system maintenance, were developed and tested.
The required crew time, energy, and resource demands were measured. This presentation gives an overview of the research results of the EDEN ISS research campaign in Antarctica close to the Neumayer Station III.
The quest to create artificial general intelligence has largely followed a “brain in a vat” approach, aiming to build a disembodied mind that can carry out the kinds of logical reasoning and inference that humans are capable of, usually demonstrated through language. This approach may some day pay off, but it’s not how nature did it. Intelligence did not evolve to solve abstract problems – it evolved to adaptively control behaviour in the real world. Living organisms are agents that can act, for their own reasons, in pursuit of their own goals – most fundamentally, to persist as a self through time. By charting the evolution of agency, we can see the origins of action and the concomitant emergence of behavioural control systems; the transition from pragmatic perception-action couplings to more and more internalised semantic representations; and, on our lineage, a trajectory of increasing cognitive depth and ever more sophisticated mapping and modelling of the world and the self. The resultant accumulation of causal knowledge grants the ability to simulate more complex scenarios, to predict and plan over longer timeframes, to optimise over more competing goals at once, and ultimately to exercise conscious rational control over behaviour. In this way, intelligent entities – agents – evolved, with greater and greater autonomy, flexibility, and causal power in the world. To realise intelligence in artificial systems, it may similarly be necessary to develop embodied, situated agents, with meaning and understanding grounded in relation to real-world goals, actions, and consequences.
Brains rely on spiking neural networks for ultra-low-power information processing. Building artificial intelligence with similar efficiency requires learning algorithms to instantiate complex spiking neural networks and brain-inspired neuromorphic hardware to emulate them efficiently. Toward this end, I will briefly introduce surrogate gradients as a general framework for training spiking neural networks and showcase their robustness and self-calibration capabilities on analog neuromorphic hardware. Drawing further inspiration from biology, I will discuss the impact of homeostatic plasticity and network initialization in the excitatory-inhibitory balanced regime on deep spiking neural network training. Finally, I will show how approximations relate surrogate gradients to biologically plausible online learning rules with a minor impact on their effectiveness.
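The surrogate-gradient trick mentioned above can be sketched in NumPy: the forward pass keeps the non-differentiable spike threshold, while the backward pass substitutes a smooth pseudo-derivative (here a SuperSpike-style fast-sigmoid derivative; the threshold and steepness values are illustrative):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard Heaviside spike, 1 if the membrane potential crosses
    the threshold, else 0. Its true derivative is zero almost everywhere, so
    gradients cannot flow through it directly."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: a smooth stand-in derivative, 1 / (1 + beta*|v - thr|)^2,
    peaked at the threshold and nonzero everywhere, used in place of the
    Heaviside's derivative during backpropagation."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])  # membrane potentials
print(spike(v))                      # only the last two neurons fire
print(surrogate_grad(v))             # largest near the threshold, small far from it
```

Because the surrogate is only used on the backward pass, the network still emits binary spikes, yet gradient descent receives a useful learning signal for neurons near threshold.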
The promise of computer aided manufacturing is to make materializable structures that could not be fabricated using traditional methods. An example is 3D printed lattices, where variation in the lattice geometry and print media can define a vast spectrum of resulting material behaviour, ranging from fully flexible forms to completely stiff examples with high strength. While these “architected materials” offer huge promise for industrial applications, in practice they are difficult to generate and explore digitally, and even harder to simulate for mechanical testing. In this talk I will outline a range of approaches to the study of architected materials using machine learning. I will describe several projects using graph neural networks (GNNs) to model lattice geometry, and report on a few recent works that construct inverse models. These approaches are progress toward better methods for approximation of the material behaviour of the space of all lattice geometries, offering potential for real-time material feedback at the design stage, and a streamlined selection process for architected materials.
Electromagnetically Actuated Systems for Modular, Self-Assembling and Self-Re... (Advanced-Concepts-Team)
This talk will cover two research projects within the MIT Space Exploration Initiative’s microgravity self-assembly portfolio. While the sizes and geometries of today’s space structures are limited by launch mass and volume, modular reconfigurability may support tightly packing structure modules over multiple launches and provide for adaptation to unforeseen circumstances once deployed. Self-assembly methods also promise to reduce crew EVA construction time on-orbit, when leveraged for large-scale habitat structures. We will report on a quasi-stochastic self-assembly hardware platform, and accompanying robotics simulation, for hollow buckyball shells in orbit. This talk will also introduce a reconfigurable space structure based on electromagnetically pivoting cubes that originated in the ACT. Both projects will show recent hardware for fully untethered modules, results from physical experiments on parabolic flights and a 30-day ISS mission, and simulation approaches for planning and characterizing self-assembly and reconfigurability.
HORUS (Hyper-effective nOise Removal U-net Software) is a cutting-edge AI tool designed to enhance Lunar Reconnaissance Orbiter (LRO) optical low-light imagery of the Moon's shadowed regions by removing most of the CCD-related and photon noise. For the first time, HORUS enables scientists and engineers to identify intra-shadow geologic features (craters, boulders, etc.) as small as 3 meters across, making this tool uniquely useful for applications such as geologic mapping, landing site selection, hazard recognition, and mission planning, directly supporting the robotic and crewed exploration of the Moon's south pole.
META-SPACE: Psycho-physiologically Adaptive and Personalized Virtual Reality ... (Advanced-Concepts-Team)
This document proposes developing an adaptive virtual reality system called "meta-space" to promote well-being for astronauts and others in isolated environments. It would collect physiological and behavioral data to detect psychological states and adapt VR content accordingly, such as virtual escapes of Earth or interactive games. A proposed development plan includes exploring signals, combining them into an adaptive layer, generating the virtual world, and optimizing the headset through testing.
The Large Interferometer For Exoplanets (LIFE) II: Key Methods and Technologies (Advanced-Concepts-Team)
The LIFE initiative has the goal to develop the science, the technology, and a roadmap for an ambitious space mission that will allow humankind to detect and characterize, via nulling interferometry, the atmospheres of hundreds of nearby extrasolar planets, including dozens that may be similar to Earth. This follow-up talk will tackle more of the techniques and technologies that will enable such an ambitious undertaking. I will outline the underlying measuring principle and provide an overview of essential technologies, their current status, and necessary developments.
Black holes have evolved from theoretical prediction to accepted hypothesis, due to the wealth of new discoveries in the last decades. In this talk I will discuss the observational evidence for the existence of black holes of different sizes and what we know about their evolution based on observations and theory. I will also describe what Quasars and Active Galactic Nuclei are, and how these extremely luminous objects can be used to study black holes at the early ages of the Universe.
In vitro simulation of spaceflight environment to elucidate combined effect o... (Advanced-Concepts-Team)
Long-term exposure to microgravity, ionizing radiation and increased levels of psychological stress can cause changes in the astronauts’ skin, resulting in skin rashes, itches and delayed wound healing during space missions. There is still a lack of understanding of how the complex spaceflight environment induces these defects. This PhD project aims to investigate how exposure to a combination of spaceflight stressors can affect the structure and function of the skin, and how they can hamper wound healing. For this, we have developed in vitro simulation models and are exposing primary human dermal fibroblasts to hydrocortisone, ionizing radiation and simulated microgravity. Results indicate a significant negative effect of hydrocortisone, as well as of simulated microgravity, on the wound healing capability of dermal fibroblasts. Furthermore, a project has been initiated with the support of the European Space Agency Academy “Spin Your Thesis!” Campaign, aiming to investigate the effects of an increased gravitational force on fibroblast function related to wound healing. Altogether, the results of this PhD project will give more insight into the effects of combined spaceflight stressors on dermal skin cells, and improve risk assessment for human deep space exploration.
The Large Interferometer For Exoplanets (LIFE): the science of characterising... (Advanced-Concepts-Team)
Studying the atmospheres of a statistically significant number of rocky, terrestrial exoplanets - including the search for habitable and potentially inhabited planets - is one of the major goals of exoplanetary science and possibly the most challenging question in 21st century astrophysics. However, despite being at the top of the agenda of all major space agencies and ground-based observatories, none of the currently planned projects or missions worldwide has the technical capabilities to achieve this goal. In this talk we present new results from the LIFE Mission initiative, which addresses this issue by investigating the scientific potential of a mid infrared nulling interferometer observatory. Here we will focus on the mission's yield estimates, our simulator software as well as various exemplary science cases such as observing Earth- and Venus-twins or searching for phosphine in exoplanetary atmospheres.
Vernal pools are ephemeral wetland ecosystems that provide habitat for specialized plants and animals. They form "archipelagos" distributed across the landscape. Microbial communities in vernal pool soil and water show environmental filtering between habitats. Next-generation sequencing of soil samples revealed differences in microbial composition between soil, wet soil, and water. Species diversity and community composition changes with increasing spatial distance between pools, following a distance-decay pattern. Vernal pools may provide insights into the origins and mechanisms of biodiversity as well as how biodiversity responds to environmental changes. As a new frontier for science, further study of vernal pool ecosystems can help us understand the role of symbiosis and adaptation in life.
Candidate young stellar objects in the S-cluster: Kinematic analysis of a sub...Sérgio Sacani
Context. The observation of several L-band emission sources in the S cluster has led to a rich discussion of their nature. However, a definitive answer to the classification of the dusty objects requires an explanation for the detection of compact Doppler-shifted Brγ emission. The ionized hydrogen in combination with the observation of mid-infrared L-band continuum emission suggests that most of these sources are embedded in a dusty envelope. These embedded sources are part of the S-cluster, and their relationship to the S-stars is still under debate. To date, the question of the origin of these two populations has been vague, although all explanations favor migration processes for the individual cluster members. Aims. This work revisits the S-cluster and its dusty members orbiting the supermassive black hole SgrA* on bound Keplerian orbits from a kinematic perspective. The aim is to explore the Keplerian parameters for patterns that might imply a nonrandom distribution of the sample. Additionally, various analytical aspects are considered to address the nature of the dusty sources. Methods. Based on the photometric analysis, we estimated the individual H−K and K−L colors for the source sample and compared the results to known cluster members. The classification revealed a noticeable contrast between the S-stars and the dusty sources. To fit the flux-density distribution, we utilized the radiative transfer code HYPERION and implemented a young stellar object Class I model. We obtained the position angle from the Keplerian fit results; additionally, we analyzed the distribution of the inclinations and the longitudes of the ascending node. Results. The colors of the dusty sources suggest a stellar nature consistent with the spectral energy distribution in the near and midinfrared domains. Furthermore, the evaporation timescales of dusty and gaseous clumps in the vicinity of SgrA* are much shorter ( 2yr) than the epochs covered by the observations (≈15yr). 
In addition to the strong evidence for the stellar classification of the D-sources, we also find a clear disk-like pattern following the arrangements of S-stars proposed in the literature. Furthermore, we find a global intrinsic inclination for all dusty sources of 60 ± 20◦, implying a common formation process. Conclusions. The pattern of the dusty sources manifested in the distribution of the position angles, inclinations, and longitudes of the ascending node strongly suggests two different scenarios: the main-sequence stars and the dusty stellar S-cluster sources share a common formation history or migrated with a similar formation channel in the vicinity of SgrA*. Alternatively, the gravitational influence of SgrA* in combination with a massive perturber, such as a putative intermediate mass black hole in the IRS 13 cluster, forces the dusty objects and S-stars to follow a particular orbital arrangement. Key words. stars: black holes– stars: formation– Galaxy: center– galaxies: star formation
Mechanisms and Applications of Antiviral Neutralizing Antibodies - Creative B...Creative-Biolabs
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S...Sérgio Sacani
Wereport the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R-band. There is a big change in the R–I spectral index by 1.0 ±0.1 between the normal background and the flare, suggesting a new component of radiation. The polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary BH. We then ask why we have not seen this phenomenon before. We show that OJ287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023. We find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ287 as well as the dense monitoring sample of Krakow.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
JAMES WEBB STUDY THE MASSIVE BLACK HOLE SEEDSSérgio Sacani
The pathway(s) to seeding the massive black holes (MBHs) that exist at the heart of galaxies in the present and distant Universe remains an unsolved problem. Here we categorise, describe and quantitatively discuss the formation pathways of both light and heavy seeds. We emphasise that the most recent computational models suggest that rather than a bimodal-like mass spectrum between light and heavy seeds with light at one end and heavy at the other that instead a continuum exists. Light seeds being more ubiquitous and the heavier seeds becoming less and less abundant due the rarer environmental conditions required for their formation. We therefore examine the different mechanisms that give rise to different seed mass spectrums. We show how and why the mechanisms that produce the heaviest seeds are also among the rarest events in the Universe and are hence extremely unlikely to be the seeds for the vast majority of the MBH population. We quantify, within the limits of the current large uncertainties in the seeding processes, the expected number densities of the seed mass spectrum. We argue that light seeds must be at least 103 to 105 times more numerous than heavy seeds to explain the MBH population as a whole. Based on our current understanding of the seed population this makes heavy seeds (Mseed > 103 M⊙) a significantly more likely pathway given that heavy seeds have an abundance pattern than is close to and likely in excess of 10−4 compared to light seeds. Finally, we examine the current state-of-the-art in numerical calculations and recent observations and plot a path forward for near-future advances in both domains.
Microbial interaction
Microorganisms interacts with each other and can be physically associated with another organisms in a variety of ways.
One organism can be located on the surface of another organism as an ectobiont or located within another organism as endobiont.
Microbial interaction may be positive such as mutualism, proto-cooperation, commensalism or may be negative such as parasitism, predation or competition
Types of microbial interaction
Positive interaction: mutualism, proto-cooperation, commensalism
Negative interaction: Ammensalism (antagonism), parasitism, predation, competition
I. Mutualism:
It is defined as the relationship in which each organism in interaction gets benefits from association. It is an obligatory relationship in which mutualist and host are metabolically dependent on each other.
Mutualistic relationship is very specific where one member of association cannot be replaced by another species.
Mutualism require close physical contact between interacting organisms.
Relationship of mutualism allows organisms to exist in habitat that could not occupied by either species alone.
Mutualistic relationship between organisms allows them to act as a single organism.
Examples of mutualism:
i. Lichens:
Lichens are excellent example of mutualism.
They are the association of specific fungi and certain genus of algae. In lichen, fungal partner is called mycobiont and algal partner is called
II. Syntrophism:
It is an association in which the growth of one organism either depends on or improved by the substrate provided by another organism.
In syntrophism both organism in association gets benefits.
Compound A
Utilized by population 1
Compound B
Utilized by population 2
Compound C
utilized by both Population 1+2
Products
In this theoretical example of syntrophism, population 1 is able to utilize and metabolize compound A, forming compound B but cannot metabolize beyond compound B without co-operation of population 2. Population 2is unable to utilize compound A but it can metabolize compound B forming compound C. Then both population 1 and 2 are able to carry out metabolic reaction which leads to formation of end product that neither population could produce alone.
Examples of syntrophism:
i. Methanogenic ecosystem in sludge digester
Methane produced by methanogenic bacteria depends upon interspecies hydrogen transfer by other fermentative bacteria.
Anaerobic fermentative bacteria generate CO2 and H2 utilizing carbohydrates which is then utilized by methanogenic bacteria (Methanobacter) to produce methane.
ii. Lactobacillus arobinosus and Enterococcus faecalis:
In the minimal media, Lactobacillus arobinosus and Enterococcus faecalis are able to grow together but not alone.
The synergistic relationship between E. faecalis and L. arobinosus occurs in which E. faecalis require folic acid
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
2. Mission
Created in 2002 “to monitor, perform and foster research on advanced space systems, innovative concepts and working methods”
3. Topics addressed by the ACT:
- too immature for regular ESA programmes or projects
- concepts, techniques & scientific domains with no/weak links to space
- emerging from cutting-edge basic scientific research
- topics on which ESA is expected to have a position

Example topic clusters:
- biomimetic approaches to engineering, brain-machine interfaces, liquid breathing, curiosity cloning, peer-to-peer computing, crowdsourcing, gaming, innovation diffusion and dynamics
- mathematical global optimisation techniques, cloud-based uncertainty modelling, helicon thrusters, pure general relativistic approach to GNSS constellation design, vibrating systems in general relativity, metamaterials in the optical frequency range, distributed/swarm intelligence, laser filamentation
- planetary protection research, space nuclear power sources, asteroid deflection, liquid ventilation, pulsar navigation, biomimetic drilling
- solar power from space, torpor/hibernation, asteroid deflection, active removal of space debris, novel working methods, terraforming, geoengineering
4. Learning from others…
- Interdisciplinary: most game-changing developments emerge around the fringes or intersections of disciplines
- Regular renewal of personnel: a regular in-flow of new insights keeps the team on the leading edge
- Encourage taking risks: encourage and reward high-risk / high-gain activities
- Scientific rigour and competence: avoid drifting into the realm of science fiction
- Support from top management: activities tend to be ridiculed, admired, not taken seriously or seen as a threat to the core of the establishment
- Close ties with academia: most relevant ideas/concepts on a time horizon of 10+ years are generated within academia and research centres
5. ACT Research Areas
- Fundamental Physics: impact of new ideas in physics on the space sector
- Biomimetics & Bioengineering: benefitting from Darwinian evolution to solve engineering problems
- Mission Analysis: mathematical techniques for future mission analysis
- Artificial Intelligence: engineering of intelligent computer systems
- Advanced Energy Systems: innovating energy systems
- Planetary System Science: options and opportunities from complex climate systems
- Computer Science & Applied Mathematics: post-von Neumann architectures
- Advanced Propulsion: explore and review breakthrough propulsion concepts
- Computational Management Science: explore computational aspects of management
- Advanced Materials: benefitting from control at the micro/nano scale
6. We are currently hiring 5 new Research Fellows (post-docs)!
1 - Artificial Intelligence
2 - Computer Science
3 - Biomimetics
4 - Fundamental Physics
5 - Mission Analysis
Deadline: 6th July!
www.esa.int/act
13. Deep Learning
[Timeline: 2000 CNN, BP, LSTM, RNN; 2006 first DL success; 2012 ImageNet; 2016 AlphaGo]
• 1980-1990: attempts to train DNNs failed
• 2006: first worldwide success stories; Deep Belief Networks and autoencoders: networks trained layer by layer
• 2006-2016: great success and explosion of DL, for example:
  Convolutional Neural Networks (CNNs): ImageNet success
  Long Short-Term Memory (LSTM): huge success in speech recognition
Just a hype? No, DL is here to stay.
14. Genetic Programming
Symbolic regression (SR): learn the underlying physics from data.
Symbolic regression leverages an “evolutionary” approach to model creation, testing billions of potential models per second and converging on the simplest, most accurate ones that explain your data. SR makes no prior assumptions about the data set, instead fitting models to the data dynamically.
Schmidt M., Lipson H. (2009), "Distilling Free-Form Natural Laws from Experimental Data," Science, Vol. 324, no. 5923, pp. 81-85.
Companies using the Nutonian SR tool: (logos not reproduced)
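The evolutionary loop behind symbolic regression fits in a short sketch. This is a minimal toy under stated assumptions (a single variable, only +, -, * kernels, and a hidden law y = x**2 + x chosen for illustration), not the Nutonian/Eureqa engine:

```python
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_expr(depth=3):
    """Grow a random expression tree over the variable 'x' and small constants."""
    if depth <= 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.uniform(-2, 2)
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(node, x):
    if node == 'x':
        return x
    if isinstance(node, float):
        return node
    op, left, right = node
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mse(node, data):
    return sum((evaluate(node, x) - y) ** 2 for x, y in data) / len(data)

def mutate(node):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(node, tuple) or random.random() < 0.2:
        return random_expr(2)
    op, left, right = node
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def symbolic_regression(data, generations=300, pop_size=30):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda n: mse(n, data))
        parents = pop[:pop_size // 5]      # truncation selection, elitist
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=lambda n: mse(n, data))

random.seed(4)
data = [(x / 10, (x / 10) ** 2 + x / 10) for x in range(-20, 21)]  # hidden law: y = x**2 + x
best = symbolic_regression(data)
```

Keeping the parents each generation makes the best error monotonically non-increasing; a real SR tool adds crossover, constant tuning and parsimony pressure on top of this loop.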
15. Smart “search” (optimization) methods
Evolutionary algorithms: exploiting artificial selection to evolve increasingly better solutions to design problems.
Orders of magnitude better from the Genetic Algorithm (80s) to modern techniques: Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D) and Self-adaptive Differential Evolution (jDE).
Monte Carlo Tree Search: for sequential decision-making problems; one of the most successful techniques in AI for games.
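As a concrete instance of this family, here is a minimal sketch of classic differential evolution (DE/rand/1/bin), the base scheme that self-adaptive variants such as jDE extend; the 5-dimensional sphere objective and the parameter values are illustrative assumptions:

```python
import random

def differential_evolution(f, bounds, np_=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Classic DE/rand/1/bin with greedy one-to-one replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # three distinct donors, none equal to the target
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)      # force at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:                # keep the trial only if not worse
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# sphere function as a toy fitness landscape
sol, val = differential_evolution(lambda x: sum(t * t for t in x), [(-5, 5)] * 5)
```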
16. Perception, understanding and communication
Sensors:
- Dynamic Vision Sensors (DVS)
- Elementary Motion Detectors (EMD)
- Light Field Cameras (LFC)
- ...
Algorithms:
- SIFT: Scale-Invariant Feature Transform
- CNN: Convolutional Neural Networks
- TTC, OF: time to contact, optic flow
- LSTM: Long Short-Term Memory networks
18. Some jobs computers can perform (and could not 25 years ago):
1. Text recognition
2. Colorization of black-and-white images
3. Adding sounds to silent movies
4. Object classification and detection in photographs
5. Generate image from caption
6. Handwriting generation
7. Text generation (scripts, poetry, etc.)
8. Image caption generation
9. Music composition
10. Software continuous integration
11. Manage currencies
12. Drive cars
13. Navigate
14. Chat
15. Generative design
19. Application Areas of AI
- Self-driving vehicles: Google, Tesla, Mercedes-Benz, etc.
- Autonomous flying (drones): Amazon Prime Air delivery, military drones
- Robotics: factory automation, medicine, scientific exploration, ...
20. Application Areas of AI
- Virtual assistants: Cortana, Siri, Viv
- Language-based services: machine translation, document summarization
- Emotionally aware interfaces: affective computing
21. The next big things in AI/CS (10-20 years ahead)
23. The Next Big Things are today's “failures”
In the same place where ANNs were in the 90s, these technologies hold great potential and may become the next big things:
● Artificial Evolution (Evolutionary Computing) -> designing the unexpected
● Genetic Programming -> computers programming themselves
● Artificial Life -> digital ecologies
The seeds of these innovations are well planted.
24. The 2006 NASA ST5 spacecraft antenna (found by Genetic Programming)
The ST5 mission successfully launched on March 22, 2006, and so this evolved antenna represents the world's first artificially evolved object to fly in space.
25. The ESA (ACT) VLBI GTOC8 trajectory
In 2011 the Humies Gold Medal Award was awarded to the ACT work on the “Search for a grand tour of the Jupiter Galilean moons”, for human-competitive results produced by genetic and evolutionary computation.
27.
R3000: New Horizons
RAD6000: Spirit, Opportunity, Messenger, Deep Space 1, Dawn
RAD750: Kepler, Juno, Curiosity
i386: ISS
x86: Falcon 9, Hubble
The excuses: radiation tolerance, reliability, satellite build time, launch delays, paperwork, power consumption.
NGMP (ESA, LEON4)
28. Scenario #1: the gap is not filled. In 2040 the intelligence on board spacecraft will feel as exciting as a videogame from the 90s.
29. Scenario #2: the gap is filled. In 2040 the intelligence on board spacecraft will compare to today's situation as modern VR-based games compare to Pong.
31. Explored areas – Neurocontrollers
- Evolution in robotic islands: ALife in the Galapagos
- Deep Reinforcement Learning for spacecraft hovering near unknown small bodies
- Morphological evolution of soft robots at different gravity levels
32. Explored areas – Swarm Intelligence
- Decentralized formation flight with collision avoidance: Equilibrium Shaping
- Autonomous self-assembly of large space structures
- Root swarm: sensor web deployment
- ACT-MIT SPHERES experiments: first ANN controlling multiple (homogeneous) agents in space
33. Explored areas – Biomimetic Sensing and Actuation
- Optic-flow-based lunar landing: from bees to Apollo
- Scent of science: from a female-chasing moth to the chase of methane on Mars
34. Explored areas – Vision
- Astro Drone: gamification for the acquisition of vision data-sets
- Learning “to see” in zero gravity: from stereo vision to monocular vision (using the MIT SPHERES platform)
35. Explored areas – Evolution and smart search
- Evolution of interplanetary trajectories
- Parallel evolution on modern CPU architectures: the island model, PyGMO
- Novel tree search paradigms: Monte Carlo Tree Search, Ant Colony Optimization, Lazy Race Tree Search
- Humies Gold Medal, “for Human-Competitive Results Produced by Genetic and Evolutionary Computation”
- 1st place in the 8th edition of GTOC, “the America's Cup of rocket science”
37. pagmo/pygmo 2.x
pip install pygmo
conda config --add channels conda-forge
conda install pagmo
● Provides “free” parallelization via the asynchronous island model
● MPI, threads, multiprocessing, etc. all encapsulated in the island
● Available for OSX, Linux and Windows
● Fully FLOSS philosophy
● Easily extensible with your own algorithms or problems
● Tutorials and docs constantly up to date
● Community support active via a dedicated gitter channel
https://esa.github.io/pagmo2/index.html
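The asynchronous island model can be illustrated with a deliberately simplified, synchronous pure-Python sketch of the generalized island model with ring migration. This is not the pagmo API: the `evolve_island` hill climber, the `archipelago` helper and the sphere objective are stand-ins chosen for brevity.

```python
import random

def evolve_island(pop, f, rng, gens=50, sigma=0.3):
    """A simple Gaussian-mutation hill climber standing in for a full algorithm."""
    for _ in range(gens):
        parent = min(pop, key=f)
        child = [g + rng.gauss(0, sigma) for g in parent]
        worst = max(range(len(pop)), key=lambda i: f(pop[i]))
        if f(child) < f(pop[worst]):
            pop[worst] = child              # steady-state replacement of the worst
    return pop

def archipelago(f, dim=3, n_islands=4, pop_size=8, epochs=10, seed=1):
    rng = random.Random(seed)
    islands = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(pop, f, rng) for pop in islands]
        # ring migration: each island's champion replaces a random individual
        # in the next island of the ring
        champs = [min(pop, key=f) for pop in islands]
        for i, champ in enumerate(champs):
            islands[(i + 1) % n_islands][rng.randrange(pop_size)] = list(champ)
    return min((min(pop, key=f) for pop in islands), key=f)

best = archipelago(lambda x: sum(t * t for t in x))
```

In pagmo the islands evolve asynchronously in separate threads or processes and migration happens through a topology object; the structure of the computation is the same.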
39. Background: the algebra of floating points
>>> from math import cos
>>> def my_function(x):
...     return cos(x[0])+(x[0]+3*x[1]+x[2])**7
>>> x = [0.1,0.2,0.3]
>>> my_function(x)
1.9896041652780259
Behind this seemingly trivial computation lie a number of implicit assumptions we tend to forget. Note: we rarely question that the floating-point algebra is “conformal” to the real-number algebra.
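One of those assumptions is associativity, which floating-point addition does not satisfy (standard IEEE-754 double behaviour):

```python
# Floating-point addition is not associative: the rounding applied after each
# operation makes the result depend on evaluation order.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False: a is 0.6000000000000001, b is 0.6
```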
40. Background: the algebra of truncated Taylor polynomials
>>> from pyaudi import gdual_double as gdual, cos
>>> def my_function(x):
...     return cos(x[0])+(x[0]+3*x[1]+x[2])**7
>>> x = [gdual(0.1,"x0",5), gdual(0.2,"x1",5), gdual(0.3,"x2",5)]
>>> my_function(x)
42*dx0*dx2+105*dx0*dx2**2+630*dx0*dx1*dx2+126*dx0*dx1+945*dx0*dx1**2+35.0166*dx0**3+6.90017*dx0+20.82*dx2**2+6.946*dx2+125.73*dx1*dx2+314.1*dx1*dx2**2+105*dx0**2*dx2+34.8*dx2**3+945*dx1**2*dx2+945*dx1**3+20.5025*dx0**2+189*dx1**2+20.973*dx1+1.9896+315*dx0**2*dx1
Substitute the algebra of floating-point numbers with the algebra of Taylor series expansions, and trust it to be conformal to the algebra of continuous functions: a differential algebra [4].
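At truncation order 1 this algebra reduces to dual numbers, which fit in a few lines of plain Python. This is a minimal forward-mode sketch (not the gdual implementation used above; it only lifts +, * and integer powers), but the derivative it computes agrees with the 6.90017*dx0 term of the printed expansion:

```python
import math

class Dual:
    """Order-1 truncated Taylor polynomial: value + der * dx."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

    def __pow__(self, n):   # integer powers are enough for this sketch
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.der)

def cos(x):
    """cos lifted to duals, so my_function works unchanged."""
    if isinstance(x, Dual):
        return Dual(math.cos(x.val), -math.sin(x.val) * x.der)
    return math.cos(x)

def my_function(x):
    return cos(x[0]) + (x[0] + 3 * x[1] + x[2]) ** 7

# seed dx0 = 1 to read off the partial derivative with respect to x0
x = [Dual(0.1, 1.0), Dual(0.2), Dual(0.3)]
r = my_function(x)
```

Higher truncation orders replace the single `der` slot with the full polynomial coefficient table, which is exactly what gdual stores.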
45. Application to Machine Learning and Evolutionary Computations
Gradient: widely used (backprop, SQP, interior point). Hessian: less used, often approximated, though actively researched.
Traditionally the error (or fitness) is conceived as a real number: instead, consider it as a function (of whatever parameters you choose). Using the new algebra, represent it in the computer as a truncated Taylor polynomial, just as before you were representing it as a floating point.
46. Application to Machine Learning and Evolutionary Computations
Similar complexity as Hessians and gradients, but an entirely unresearched field, both in machine learning and evolutionary computations.
52. Our approach:
1. Pre-compute many optimal trajectories
2. Train an artificial neural network to approximate the optimal behaviour
3. Use the network to drive the spacecraft
54. 1 - Precompute many optimal solutions
Goal: solve the deterministic continuous-time optimal control problem, i.e. the Hamilton-Jacobi-Bellman equation.
Current methods (direct or indirect) are not suitable for real-time on-board implementation; an alternative is to correct the deviations from a precomputed profile or use polynomial fits.
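Since the slide's equation images are not reproduced here, the standard form of the problem and of the Hamilton-Jacobi-Bellman equation reads (conventional notation, not copied from the slides):

```latex
% Deterministic continuous-time optimal control problem:
\min_{u(\cdot)} \; J = \int_{t_0}^{t_f} \ell\bigl(x(t),u(t)\bigr)\,dt + h\bigl(x(t_f)\bigr)
\quad \text{s.t.} \quad \dot{x} = f(x,u), \qquad x(t_0) = x_0 .

% The value function v(t,x) then satisfies the Hamilton--Jacobi--Bellman PDE:
-\frac{\partial v}{\partial t}(t,x) = \min_{u} \Bigl[\, \ell(x,u)
  + \nabla_x v(t,x) \cdot f(x,u) \Bigr],
\qquad v(t_f,x) = h(x) .
```

The trained network of step 2 approximates the minimizing feedback u*(t, x) without ever solving this PDE explicitly.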
55.
Direct methods:
- Hermite-Simpson transcription and a non-linear programming (NLP) solver
- Fast and easy implementation
- Chattering effects in the training data have a huge negative impact on the results; regularization techniques are used to remove them
- Suboptimal results
Indirect methods:
- Solve the Hamilton-Jacobi-Bellman equations with shooting methods
- Provide the actual optimal trajectories
- But an initial guess is necessary to solve the problem, and it is really difficult to find
- More difficult and awkward implementation
56. Two methods to generate the data
57. Optimization for different problems: free landing vs. pinpoint landing; time-optimal, power-optimal or mass-optimal. These result in different control profiles.
58.
- The initial state of each trajectory is randomly selected from a training area
- 150,000 trajectories are generated for each of the problems
- Computing the trajectory for a specific starting point is difficult, but we speed up the generation of random optimal trajectories with random walks and homotopy methods
59. 2 - Approximate state-action with a DNN
- The networks are trained on the state-control action pairs of the trajectories
- Networks with 1-5 hidden layers
- Supervised learning
- Trained with Stochastic Gradient Descent (and momentum)
- Minimize the squared loss error (C)
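The training step can be sketched with a small numpy network. Everything here is a toy stand-in: the feedback law u* = -2*x1 - x2 used as "optimal control" data, the network width, and the learning-rate/momentum values are assumptions for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the precomputed trajectories: states -> optimal control.
# u* = -2*x1 - x2 is a hypothetical feedback; the real training data would
# come from the optimal-control solvers of step 1.
X = rng.uniform(-1, 1, size=(2000, 2))
U = -2 * X[:, :1] - X[:, 1:]

# one hidden layer, squared loss, SGD with momentum (as on the slide)
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, mom = 0.01, 0.9

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

for _ in range(200):                                   # epochs
    for batch in np.array_split(rng.permutation(len(X)), 40):
        x, u = X[batch], U[batch]
        h, pred = forward(x)
        err = (pred - u) / len(batch)                  # d(loss)/d(pred)
        dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
        grads = [x.T @ dh, dh.sum(0), h.T @ err, err.sum(0)]
        for p, v, g in zip((W1, b1, W2, b2), vel, grads):
            v *= mom
            v -= lr * g
            p += v                                     # in-place weight update

mse = float(((forward(X)[1] - U) ** 2).mean())
```

Once trained, evaluating `forward` is a handful of matrix products, which is why the network can run as an on-board reactive controller.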
60. ● Deep networks are always better than shallow networks with the same number of parameters (challenging landing example)
61. DNNs with supervised learning and large datasets successfully approximate the optimal control
63. 3 - How good is it?
Very accurate results, and the DNNs can be used as an on-board reactive system (32,000x faster than the optimization methods used to generate the data).
64. Generalization
● Successful landings from states outside of the training initial conditions
● This suggests that an approximation to the solution of the HJB equations is learned
[Figures: multicopter (power) and Spacecraft I examples]
66. Generalization
After reaching the target point the spacecraft hovers until it runs out of fuel. Is it learning the dynamics of the model? [Figure annotation: no training data below this line]
67. 4 - The real world
The system was evaluated with the Parrot Bebop drone at TU Delft.
[Figures: the optimal trajectory and the trajectory followed by the drone (after some scaling to adjust them; don't trust this image); optimal trajectories generated for the Bebop drone]
70. CNN for state estimation from camera
1 - Train a neural network to guess the state from an on-board camera
2 - Use it together with the previous DNNs to get fully automated visual landing
71. A simple setup is used: a 3D model (Blender) of a rocket landing on a sea platform (Falcon 9 inspired).
76. References:
[1] Carlos Sánchez-Sánchez, Dario Izzo and Daniel Hennes. "Optimal real-time landing using deep networks." Proceedings of the Sixth International Conference on Astrodynamics Tools and Techniques, ICATT. Vol. 12. 2016.
[2] Carlos Sánchez-Sánchez, Dario Izzo and Daniel Hennes. "Learning the optimal state-feedback using deep networks." Computational Intelligence (SSCI), 2016 IEEE Symposium Series on. IEEE, 2016.
[3] Carlos Sánchez-Sánchez and Dario Izzo. "Real-time optimal control via Deep Neural Networks: study on landing problems." arXiv preprint arXiv:1610.08668 (2016).
78. Encoding a computer program: the weighted Cartesian Genetic Program
Miller, Julian F., and Peter Thomson. "Cartesian genetic programming." European Conference on Genetic Programming. Springer Berlin Heidelberg, 2000.
79. Weighted dCGP
Case A: all floats. The output is a float.
Case B: all floats except w3,1 and w10,1, which are gduals operating in P3. The output is a Taylor polynomial in w3,1 and w10,1, truncated at the third order.
80. Weighted CGP
Traditional CGP: all floats, no weights
>>> ks = kernel_set(["sum","diff","mul","div"])()
>>> CGP = expression(1,1,1,10,10,2,ks, seed = 21312312)
>>> print(CGP(["x"]))
[x**2 - x]
>>> print(CGP([0.1]))
[-0.09]
Case A: all floats, weighted expression
>>> ks = kernel_set(["sum","diff","mul","div"])()
>>> CGPw = expression_weighted(1,1,1,10,10,2,ks, seed = 21312312)
>>> print(CGPw(["x"]))
[w1_0*w1_1*w2_0*x**2 - w2_1*x]
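The encoding behind these snippets can be illustrated with a minimal plain-Python CGP evaluator (an illustrative sketch, not the dcgpy API): each node stores a kernel and the indices of the earlier nodes it reads from.

```python
# Minimal Cartesian Genetic Programming evaluator (illustrative sketch,
# not the dcgpy API). A chromosome is a list of (kernel, in1, in2)
# triples, one per node, plus the index of the node wired to the output.

KERNELS = {
    "sum":  lambda a, b: a + b,
    "diff": lambda a, b: a - b,
    "mul":  lambda a, b: a * b,
    "div":  lambda a, b: a / b,
}

def evaluate(chromosome, output_node, x):
    """Decode the chromosome and evaluate the encoded expression at x."""
    values = [x]                      # node 0 is the input terminal
    for kernel, i1, i2 in chromosome:
        # each node applies its kernel to the outputs of earlier nodes
        values.append(KERNELS[kernel](values[i1], values[i2]))
    return values[output_node]

# Encodes x**2 - x: node 1 = x*x, node 2 = node1 - x
chromosome = [("mul", 0, 0), ("diff", 1, 0)]
print(evaluate(chromosome, 2, 0.1))   # approximately -0.09, as above
```

A weighted CGP additionally attaches a multiplier to every connection, which is what produces the w-terms in the second snippet.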
84. Example 1: Learning ephemeral constants
Taking the skeleton out of the closet …
"...the finding of numerical constants is a skeleton in the GP closet... [and an] area of research that requires more investigation..." - John Koza
85. 1 - The error of a CGP expression is computed in the new algebra (so it is itself a function, remember…).
2 - We thus get the order-n Taylor expansion of the error: example with a single constant and order three.
3 - We use the differential expression obtained for the error to update the ephemeral constant values so that the error is minimized.
4 - At order 1, and with some type of gradient descent, you may think of this as backpropagation to learn the ephemeral constant values.
Example 1: Learning ephemeral constants
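A toy version of steps 1-3, with a hand-rolled order-2 Taylor algebra standing in for the gduals used by d-CGP; the expression skeleton y = x + c and the target constant c = 5 are made up for the example:

```python
# Sketch of learning an ephemeral constant in a truncated Taylor algebra
# (hand-rolled order-2 "gdual"; d-CGP uses the audi library instead).

class D2:
    """Order-2 expansion: value f, first derivative d1, second derivative d2."""
    def __init__(self, f, d1=0.0, d2=0.0):
        self.f, self.d1, self.d2 = f, d1, d2
    def __add__(self, o):
        o = o if isinstance(o, D2) else D2(o)
        return D2(self.f + o.f, self.d1 + o.d1, self.d2 + o.d2)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, D2) else D2(o)
        return D2(self.f - o.f, self.d1 - o.d1, self.d2 - o.d2)
    def __mul__(self, o):
        o = o if isinstance(o, D2) else D2(o)
        # Leibniz rule truncated at order 2: (fg)'' = f''g + 2f'g' + fg''
        return D2(self.f * o.f,
                  self.f * o.d1 + self.d1 * o.f,
                  self.f * o.d2 + 2 * self.d1 * o.d1 + self.d2 * o.f)

# Expression skeleton y(x, c) = x + c; target data generated with c = 5.
xs = [0.0, 0.5, 1.0]
ts = [x + 5.0 for x in xs]

def error(c):
    """Squared error of the expression, evaluated in the order-2 algebra."""
    e = D2(0.0)
    for x, t in zip(xs, ts):
        r = (D2(x) + c) - t          # residual as a Taylor polynomial in c
        e = e + r * r
    return e

c0 = 1.0                             # initial guess for the constant
E = error(D2(c0, d1=1.0))            # seed d1 = 1: expand the error around c0
c1 = c0 - E.d1 / E.d2                # one Newton step: c - E'(c0)/E''(c0)
print(c1)                            # -> 5.0 (error quadratic in c: one step is exact)
```

This is exactly the "backpropagation for constants" view of step 4: at order 1 only E.d1 would be available, giving a gradient-descent update instead of a Newton step.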
86. What order?
Using:
Expressions such as:
Result in the error:
A paraboloid! -> A second-order Taylor polynomial represents this error exactly.
87. The final algorithm
A (1+4)-ES evolves the chromosome and thus the symbolic expression y(x,c). As fitness we assign to each expression the minimum of the MSE across all possible values of the constants:
The solution is approximated by one step of the Newton method. Hessian and gradient are extracted from a second-order Taylor approximation.
88. Learning ephemeral constants: results
● Success: MSE < 1e-14 (i.e. we learn the exact value of the
constants)
● We sample ~50 points in a uniform grid within bounds.
● We perform 100 runs.
● We compute the Expected Run Time (ERT): the expected value
of the number of d-CGP expressions that have to be evaluated
before meeting the success criteria set.
● Closest work is Topchy and Punch [1]: not comparable to
these results.
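The ERT bullet can be made concrete with a few lines (the run data below is invented; only the formula is standard):

```python
# Expected Run Time (ERT): total number of expression evaluations spent
# across all runs, divided by the number of runs that met the success
# criterion (MSE < 1e-14). Run data here is made up for illustration.
evaluations = [1200, 5000, 800, 5000, 2100]     # evals used by each run
successes   = [True, False, True, False, True]  # did the run succeed?

ert = sum(evaluations) / sum(successes)
print(ert)  # -> 4700.0
```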
90. Weight batch learning
A (1+4)-ES evolves the chromosome and thus the symbolic expression y(x,c). As fitness we assign to each expression the minimum of the MSE across all possible values of the weights:
A solution by the Newton method is now troublesome, so we use a new learning method: weight batch learning.
We perform one Newton step to learn a batch of two weights at a time. There is no Lamarckian learning: at each generation the weights are re-sampled from a normal distribution.
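A sketch of one such Newton step on a batch of two weights; the expression y(x) = w1*x + w2*x^2 and the targets (generated with w1 = 3, w2 = 2) are made up, and the gradient/Hessian are written out analytically since the MSE is quadratic in the weights:

```python
# Weight batch learning sketch: one Newton step on a batch of two weights.
# Expression and data are illustrative: y(x) = w1*x + w2*x**2,
# targets generated with w1 = 3, w2 = 2.

xs = [0.1 * i for i in range(1, 11)]
target = [3.0 * x + 2.0 * x * x for x in xs]

def grad_hess(w1, w2):
    """Analytic gradient and Hessian of the MSE (quadratic in the weights)."""
    n = len(xs)
    g1 = g2 = h11 = h12 = h22 = 0.0
    for x, t in zip(xs, target):
        r = w1 * x + w2 * x * x - t       # residual at this grid point
        g1 += 2 * r * x / n
        g2 += 2 * r * x * x / n
        h11 += 2 * x * x / n
        h12 += 2 * x ** 3 / n
        h22 += 2 * x ** 4 / n
    return (g1, g2), ((h11, h12), (h12, h22))

# weights would be drawn fresh each generation (no Lamarckism); fixed here
w1, w2 = 0.7, -1.2
(g1, g2), ((h11, h12), (_, h22)) = grad_hess(w1, w2)
det = h11 * h22 - h12 * h12
w1 -= ( h22 * g1 - h12 * g2) / det        # Newton step: w <- w - H^-1 g
w2 -= (-h12 * g1 + h11 * g2) / det
print(round(w1, 6), round(w2, 6))         # -> 3.0 2.0 (quadratic MSE: exact)
```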
91. Learning constants with weighted dCGP: results
● Same setup as in the
previous experiments.
● All constants are
learned within the set
precision.
● Generally requires fewer generations of the ES.
● ERT is higher because of
the Newton iterations.
92. Example 3: Solving differential equations
Cauchy problems, Neumann and Dirichlet problems, TPBV problems
93. Solving Differential Equations: simple example
with y(0.1)=20.1 and x in [0.1,1]
We assume y is represented by our d-CGP expression (1 input, 1 output).
We construct a grid of 10 values for x between 0.1 and 1.
Computing x as a gdual with truncation order 1, we get y' from the Taylor expansion of the program output.
Following Tsoulos and Lagaris [2], at each generation we use as error the sum of two terms:
- the violation of the differential equation;
- the violation of the boundary condition.
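The two error terms can be sketched as follows. The Cauchy problem here (y' = y, y(0) = 1) is a stand-in, not the slide's actual example, and a hand-rolled dual number plays the role of the order-1 gdual:

```python
# Fitness of a candidate solution of an ODE, in the Tsoulos-Lagaris style:
# ODE violation on a grid plus boundary-condition violation.
# The problem y' = y, y(0) = 1 is a made-up stand-in.

class D1:
    """First-order dual number: value f and derivative d."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d
    def __add__(self, o):
        o = o if isinstance(o, D1) else D1(o)
        return D1(self.f + o.f, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, D1) else D1(o)
        return D1(self.f * o.f, self.f * o.d + self.d * o.f)
    __rmul__ = __mul__

def fitness(candidate):
    """Violation of the ODE on the grid + violation of the boundary condition."""
    err = 0.0
    for i in range(10):
        x = D1(0.1 + 0.1 * i, 1.0)    # 10-point grid on [0.1, 1]; seed dx = 1
        y = candidate(x)
        err += (y.d - y.f) ** 2       # ODE residual: y' - y
    y0 = candidate(D1(0.0))
    err += (y0.f - 1.0) ** 2          # boundary residual: y(0) - 1
    return err

linear    = lambda x: 1 + x                   # crude candidate expression
quadratic = lambda x: 1 + x + 0.5 * x * x     # better Taylor approximation
print(fitness(quadratic) < fitness(linear))   # -> True
```

In d-CGP the candidates are the evolved expressions themselves, and higher truncation orders give access to higher and mixed derivatives in exactly the same way.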
94. Solving Differential Equations: results
Problem    ERT dCGP    ERT Tsoulos
           8123        130600
           35482       148400
           22600       88200
           896         38200
           24192       40600
           327020      797000
Comparison to the work of Tsoulos and Lagaris [2] is possible:
- CGP outperforms grammatical evolution on these tasks.
- d-CGP generalizes the Tsoulos and Lagaris [2] method, allowing high-order and mixed derivatives.
95. Example 4: Finding prime integrals
From differential equations to the fundamental conservation laws
96. Finding prime integrals
Prime integrals are typically found by great
mathematicians and their intuition
dynamical system in normal form
a prime integral
98. Kovalevskaya Top
Found in 1888 as the third example of an integrable top.
Not only angular momentum …
"whatever this is" … is conserved
99. Finding prime integrals
No need to solve this to create training data (i.e., no need to observe the system as in Schmidt and Lipson [3]).
We get the derivatives from the 1st-order Taylor expansion of the program output!
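A sketch of the conservation test: a candidate P is a prime integral iff dP/dt = sum_i (dP/dx_i) f_i(x) vanishes at every sampled state, with the partials read off first-order Taylor expansions. The dynamics here are a made-up harmonic oscillator, not the Kovalevskaya top:

```python
# Checking a candidate prime integral P without integrating the dynamics.
# System (illustrative): x' = v, v' = -x (harmonic oscillator).

class D1:
    """First-order dual number used to read off partial derivatives."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d
    def __add__(self, o):
        o = o if isinstance(o, D1) else D1(o)
        return D1(self.f + o.f, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, D1) else D1(o)
        return D1(self.f * o.f, self.f * o.d + self.d * o.f)
    __rmul__ = __mul__

f = lambda x, v: (v, -x)              # dynamical system in normal form

def dP_dt(P, x, v):
    """dP/dt along the flow; partials from 1st-order Taylor expansions."""
    dPdx = P(D1(x, 1.0), D1(v)).d     # seed dx = 1 -> dP/dx
    dPdv = P(D1(x), D1(v, 1.0)).d     # seed dv = 1 -> dP/dv
    f1, f2 = f(x, v)
    return dPdx * f1 + dPdv * f2

energy = lambda x, v: x * x + v * v   # candidate prime integral (conserved)
drift  = lambda x, v: x * v           # candidate that is not conserved

samples = [(0.3, -1.2), (1.0, 0.5), (-0.7, 0.1)]
print(max(abs(dP_dt(energy, x, v)) for x, v in samples))  # ~0: conserved
print(max(abs(dP_dt(drift,  x, v)) for x, v in samples))  # clearly nonzero
```

No trajectories are needed: states are just sampled, which is the difference with the observation-based approach of Schmidt and Lipson [3].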
101. References:
[1] Topchy, Alexander, and William F. Punch. "Faster genetic programming
based on local gradient search of numeric leaf values." Proceedings
of the 3rd Annual Conference on Genetic and Evolutionary
Computation. Morgan Kaufmann Publishers Inc., 2001.
[2] Tsoulos, Ioannis G., and Isaac E. Lagaris. "Solving differential
equations with genetic programming." Genetic Programming and
Evolvable Machines 7.1 (2006): 33-54.
[3] Schmidt, Michael, and Hod Lipson. "Distilling free-form natural laws
from experimental data." Science 324.5923 (2009): 81-85.
[4] Ritt, Joseph Fels. Differential algebra. Vol. 33. American
Mathematical Soc., 1950.
[5] Izzo, Dario, Francesco Biscani, and Alessio Mereta. "Differentiable
Genetic Programming." European Conference on Genetic
Programming. Springer, Cham, 2017.
103. Kelvins Portal: compete to excel
• Asking the correct questions is essential to be successful in science.
• A dedicated competition portal: Kelvins, reach the absolute zero (error).
• Algorithmic and data-mining competitions co-exist.
• Targeting machine learning and data mining communities, but also space engineers.
• Portal: https://kelvins.esa.int/
• Competitions in the pipeline: asteroid belt / debris surrogate models, orbital propagation error prediction.
104. Mars Express Power Challenge
• predict the power consumption
of the spacecraft thermal
subsystem.
• Three years of spacecraft
telemetry are released … can
you predict the fourth year?
• The ultimate goal is to automate operations and extend the satellite's lifetime, which in turn increases the scientific return.
105. Fact sheet
● Downloads: 650
● Number of different countries > 20
● Teams in the final leaderboard: 40
● Registered teams: 133
● Submitted solutions: ~200
106. Winners
Jozef Stefan Institute, Ljubljana, Slovenia
Codename: MMMe8
Department of Knowledge Technologies: Prof. Saso Dzeroski
Score (RMSE, lower is better): 0.07916
116. Star Trackers: First Contact (ongoing)
• A spacecraft is lost in space
and needs to autonomously
determine its attitude based
on the camera image of a star
tracker.
• Given 10 000 images of such a
scenario, participants of the
competition have to identify
stars visible in the images.
• The goal is to improve state-of-the-art algorithms in terms of speed, accuracy, and robustness.
117. The Kessler Run: GTOC9
It is the year 2060 and the commercial exploitation of Low Earth Orbits (LEOs) has grown well beyond a trillion Euros in market size. Following the unprecedented explosion of a Sun-synchronous satellite, the Kessler effect triggered further impacts and the Sun-synchronous LEO environment was severely compromised. Scientists from all the main space agencies and private space companies isolated a set of 123 orbiting debris pieces that, if removed, would restore the possibility to operate in that precious orbital environment and prevent the Kessler effect from permanently compromising it. You are thus called to design a series of missions able to remove all critical debris pieces while minimizing the overall cumulative cost of the endeavour. The cost of each single mission (in EUR) will depend on how early the mission is submitted via this website (regardless of its actual launch epoch) and on the spacecraft's initial mass.
118. The Kessler Run: GTOC9
● Number of different countries: 19
● Teams in the final leaderboard: 36
● Registered teams: 69
● Registered institutions: 125
● Scientists registered: ~320
● Missions submitted: ~1200
● A difficult combinatorial problem, with complex optimization procedures needed to evaluate the various heuristics / costs involved in the transfers.
● Links to TSP variants and set cover.
● Won by an approach based on genetic algorithms and ant colony optimization, by the Jet Propulsion Laboratory.
● Surrogate models were suggested and used successfully.
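The TSP flavour of the problem can be illustrated with a nearest-neighbour toy; the costs below are random stand-ins, whereas the real problem couples the removal sequence with orbital-transfer optimization:

```python
import random

# Toy illustration of the TSP-like core of GTOC9: greedily visit debris
# pieces by the smallest transfer "cost". Costs are random stand-ins.

random.seed(42)
n = 8
cost = [[0.0 if i == j else random.uniform(1, 10) for j in range(n)]
        for i in range(n)]

def greedy_tour(start=0):
    """Nearest-neighbour heuristic: always remove the cheapest next piece."""
    tour, remaining = [start], set(range(n)) - {start}
    while remaining:
        nxt = min(remaining, key=lambda j: cost[tour[-1]][j])
        tour.append(nxt)
        remaining.discard(nxt)
    return tour

tour = greedy_tour()
print(len(tour), len(set(tour)))   # -> 8 8 : every piece removed exactly once
```

The winning GA/ACO approach explores vastly better sequences than this greedy baseline, but the underlying combinatorial structure is the same.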
119. Interplanetary Trajectory Planning with Monte Carlo Tree Search
Hennes, Daniel, and Dario Izzo. "Interplanetary trajectory planning with Monte Carlo tree search." Proceedings of the 24th International Conference on Artificial Intelligence. AAAI Press, 2015.
http://ijcai.org/Proceedings/15/Papers/114.pdf
Optimal real-time landing using deep networks
Sánchez-Sánchez, Carlos, Dario Izzo, and Daniel Hennes. "Optimal Real-Time Landing Using Deep Networks."
http://www.esa.int/gsp/ACT/doc/AI/pub/ACT-RPR-AI-2016-ICATT-optimal_landing_deep_networks.pdf
Evolving solutions to TSP variants for active space debris removal.
Izzo, Dario, et al. "Evolving solutions to TSP variants for active space debris removal." Proceedings of the 2015
Annual Conference on Genetic and Evolutionary Computation. ACM, 2015.
An evolutionary robotics approach for the distributed control of satellite formations
Izzo, Dario, Luís F. Simões, and Guido CHE de Croon. "An evolutionary robotics approach for the distributed
control of satellite formations."Evolutionary Intelligence 7.2 (2014): 107-118.
Search for a grand tour of the jupiter galilean moons
Izzo, Dario, et al. "Search for a grand tour of the jupiter galilean moons." Proceedings of the 15th annual
conference on Genetic and evolutionary computation. ACM, 2013.
Evolutionary robotics approach to odor source localization
De Croon, G. C. H. E., et al. "Evolutionary robotics approach to odor source localization." Neurocomputing 121
(2013): 481-497.
Novelty search for soft robotic space exploration
Methenitis, Georgios, et al. "Novelty search for soft robotic space exploration." Proceedings of the 2015
Annual Conference on Genetic and Evolutionary Computation. ACM, 2015.
Lattice formation in space for a swarm of pico satellites
Pinciroli, Carlo, et al. "Lattice formation in space for a swarm of pico satellites." International Conference
on Ant Colony Optimization and Swarm Intelligence. Springer Berlin Heidelberg, 2008.
Selected ACT bibliography (more here)