https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Slides by Amaia Salvador at the UPC Computer Vision Reading Group.
Source document on GDocs with clickable links:
https://docs.google.com/presentation/d/1jDTyKTNfZBfMl8OHANZJaYxsXTqGCHMVeMeBe5o1EL0/edit?usp=sharing
Based on the original work:
Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards real-time object detection with region proposal networks." In Advances in Neural Information Processing Systems, pp. 91-99. 2015.
Lecture slides for DASI, spring 2018, National Cheng Kung University, Taiwan. The content is about deep reinforcement learning: policy gradients, including variance reduction and importance sampling.
#10 PyData Warsaw: Object Detection with DNNs (Andrew Brozek)
PyData Warsaw #10: Deep & Machine Learning
Object detection with Deep Learning
These are the references for the first part of the talk.
1) A Stanford lecture (CS231n)
http://vision.stanford.edu/teaching/cs231n/slides/2016/winter1516_lecture8.pdf
2) OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
https://arxiv.org/abs/1312.6229
3) Selective Search for Object Recognition https://www.koen.me/research/selectivesearch/
4) Rich feature hierarchies for accurate object detection and semantic segmentation
https://arxiv.org/abs/1311.2524
5) Fast R-CNN
https://arxiv.org/abs/1504.08083
6) Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
https://arxiv.org/pdf/1506.01497.pdf
7) A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection
https://arxiv.org/abs/1704.03414
Comparing Incremental Learning Strategies for Convolutional Neural Networks (Vincenzo Lomonaco)
In the last decade, Convolutional Neural Networks (CNNs) have been shown to perform incredibly well in many computer vision tasks such as object recognition and object detection, being able to extract meaningful high-level invariant features. However, partly because of their complex training and tricky hyper-parameter tuning, CNNs have been scarcely studied in the context of incremental learning, where data arrive in consecutive batches and retraining the model from scratch is unfeasible. In this work we compare different incremental learning strategies for CNN-based architectures, targeting real-world applications.
If you are interested in this work please cite:
Lomonaco, V., & Maltoni, D. (2016, September). Comparing Incremental Learning Strategies for Convolutional Neural Networks. In IAPR Workshop on Artificial Neural Networks in Pattern Recognition (pp. 175-184). Springer International Publishing.
For further information visit my website: http://www.vincenzolomonaco.com/
Object Detection using Deep Neural Networks (Usman Qayyum)
Recent talk at the PI School covering the following topics:
Object Detection
Recent Architecture of Deep NN for Object Detection
Object Detection on Embedded Computers (or for edge computing)
SqueezeNet for embedded computing
TinySSD (object detection for edge computing)
Presentation for the Berlin Computer Vision Group, December 2020 on deep learning methods for image segmentation: Instance segmentation, semantic segmentation, and panoptic segmentation.
Computer vision has received great attention over the last two decades.
This research field is important not only in security-related software but also in the advanced interface between people and computers, advanced control methods, and many other areas.
Training at AI Frontiers 2018 - LaiOffer Data Session: How Spark Speeds Up AI (AI Frontiers)
Topic: How to use big data to enhance AI
Outline:
1. Spark ETL
Spark SQL
Spark Streaming
2. Spark ML
Spark ML pipeline
Distributed model tuning
Spark ML model and data lineage management
3. Spark XGboost
XGboost introduction
XGboost with Spark
XGboost with GPU
4. Spark Deep Learning pipeline
Transfer learning
Build Spark ML pipeline with TensorFlow
Model selection on distributed TF model
Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning (郁凱 黃)
- Author: Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L. Lewis, Xiaoshi Wang
- Origin: https://papers.nips.cc/paper/5421-deep-learning-for-real-time-atari-game-play-using-offline-monte-carlo-tree-search-planning
- Related: https://github.com/number9473/nn-algorithm/issues/251
DataScienceLab 2017: Hyperparameter Optimization for Machine Learning Using Bayesian Optimization (GeeksLab Odessa)
DataScienceLab, May 13, 2017
Hyperparameter Optimization for Machine Learning Using Bayesian Optimization
Maksym Bevza (Research Engineer at Grammarly)
All machine learning algorithms need tuning. We often use grid search, randomized search, or our intuition to pick hyperparameters. Bayesian optimization helps steer randomized search toward the most promising regions, so that we reach the same (or a better) result in fewer iterations.
All materials: http://datascience.in.ua/report2017
Distributed implementation of an LSTM on Spark and TensorFlow (Emanuel Di Nardo)
Academic project developing an LSTM, distributing it on Spark, and using TensorFlow for numerical operations.
Source code: https://github.com/EmanuelOverflow/LSTM-TensorSpark
spaGO: A self-contained ML & NLP library in Go (Matteo Grella)
Introduction to spaGO, a beautiful and maintainable machine learning library written in Go designed to support relevant neural network architectures in natural language processing tasks.
Github: https://github.com/nlpodyssey/spago
This short text will get you up to speed in no time on creating visualizations with R's ggplot2 package. It was developed as part of a training for people who had no prior experience with R and limited knowledge of general programming concepts. It's a must-have initial guide for those exploring the field of data science.
4. ● Kaggle: Lyft Motion Prediction for Autonomous Vehicles
● l5kit Data HP: Data - Lyft
Competition/Dataset page
5. ● Focus on the "Motion Prediction" part
○ Given a bird's-eye-view image (no natural images)
○ Predict 3 possible trajectories with confidences.
Competition introduction
(Competition scope image from https://self-driving.lyft.com/level5/data/)
6. ● Last year's competition focused on the "Perception" part
○ https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles
○ Detect cars as 3D objects
Last year's competition: Lyft 3D Object Detection
(Images from https://self-driving.lyft.com/level5/data/ and https://www.kaggle.com/tarunpaparaju/lyft-competition-understanding-the-data)
7. ● Information in the bird's-eye view
○ Labels of agents (e.g. car, bicycle and pedestrian...)
○ Status of traffic lights
○ Road information (e.g. pedestrian crossings and directions)
○ Location and timestamp...
Competition introduction
This information can be gathered into a single image using the l5kit library.
8. ● Total dataset size: 1118 hours, 26344 km
● Road length: 6.8 miles
● Train (89 GB), Validation (11 GB), Test (3 GB) datasets:
○ Big data: approx. 200M / 190K / 71K agents to predict motion for.
Lyft level5 Data description
Image from https://arxiv.org/pdf/2006.14480.pdf
“One Thousand and One Hours: Self-driving Motion Prediction Dataset”
10. ● Route on Google Maps
● Not such a long distance, around the Lyft office (actually, a CNN can "memorize" the place from the image)
EDA using Google Earth
[Figure: 1. station, 2. intersection, 3. signals; map figure from the paper]
11. ● Many straight roads
● Some complicated intersections...
EDA using Google Earth
12. ● More & more EDA: train/valid/test statistics are almost the same!
No extrapolation found in this dataset…
○ Agent type distribution: CAR 91%, CYCLIST 2%, PEDESTRIAN 7%
○ Date: from October 2019 to March 2020
○ Time: daytime, from 7am to 7pm
○ Place: all roads are included in train/valid/test
● Less effort was needed on how to handle & train the data
→ Pure programming skill & ML techniques were what mattered.
More EDA: no extrapolation found in this dataset...
(Time/date distribution plots: https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/discussion/189516)
14. ● Structured numpy arrays + zarr are used to save data on disk.
● structured array: https://numpy.org/doc/stable/user/basics.rec.html
● zarr: https://zarr.readthedocs.io/en/stable/
○ It can save structured arrays on disk
Raw Data format
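The storage scheme above can be sketched with a toy structured array. This is a minimal illustration, not the actual l5kit schema (field names and shapes here are made up), and the zarr persistence step is omitted:

```python
import numpy as np

# Hypothetical per-agent record layout, loosely inspired by the slides.
agent_dtype = np.dtype([
    ("centroid", np.float64, (2,)),   # world x, y coordinate
    ("extent", np.float32, (3,)),     # bounding-box size
    ("yaw", np.float32),              # heading angle in radians
    ("track_id", np.uint64),
])

agents = np.zeros(3, dtype=agent_dtype)
agents["centroid"] = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
agents["yaw"] = [0.1, 0.2, 0.3]

# Fields are accessed by name, like columns of a table.
print(agents["centroid"].shape)  # (3, 2)
```

zarr can then persist such an array chunk-by-chunk on disk, which is what makes the 89 GB training set tractable.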
15. ● l5kit is provided as a baseline: https://github.com/lyft/l5kit
○ The (complicated) data preprocessing part is already implemented
○ Rasterizer
■ Semantic → a protocol buffer is used inside MapAPI to draw the semantic map
■ Satellite → draws the satellite image.
● Most Kaggle competitions: 0 → 1
This competition: 1 → 10
L5kit library
Typical approach already supported by l5kit: raw data (zarr: world coordinates in time, extent (size), yaw) → Rasterizer (base implementation provided by Lyft) → image → CNN → predicted future coordinates (3 trajectories)
18. ● 1. Use train_full.zarr
● 2. l5kit==1.1.0
● 3. Set min_history=0, min_future=10 in AgentDataset
● 4. Cosine annealing to decay the LR to 0, training for 1 epoch
→ That’s enough to win the prize! (Private LB: 10.274)
● 5. Ensemble with GMM (Gaussian Mixture Models)
→ Further boosted score by 0.8 (Private LB: 9.475)
Short Summary
20. ● How to predict probabilistic behavior?
● Suggested baseline kernel: "Lyft: Training with multi-mode confidence"
○ A single model outputs 3 trajectories with their confidences at the same time
○ Train using the competition evaluation metric directly as the loss
○ The 1st place solution also originated from our approach (link)
Approach/Solution:
21. Approach/Metric:
• In this competition, the model outputs 3 hypotheses (trajectories).
– ground truth: the observed future positions
– hypotheses: the 3 predicted trajectories with confidences
• Assume the ground truth positions are modeled by a mixture of Normal distributions.
• The LB score is calculated by the following metric, and we directly used it as the loss function of the CNN.
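For reference, the competition's multi-modal negative log-likelihood (as described on the Kaggle evaluation page; the slide's original formula images are lost, so the notation here is reconstructed) can be written as:

```latex
\mathcal{L} = -\log \sum_{k=1}^{3} \exp\!\Big(\log c_k - \frac{1}{2}\sum_{t=1}^{T}\big[(x_t - \bar{x}_t^{\,k})^2 + (y_t - \bar{y}_t^{\,k})^2\big]\Big)
```

where $(x_t, y_t)$ are the ground-truth coordinates at timestep $t$, $(\bar{x}_t^{\,k}, \bar{y}_t^{\,k})$ is the $k$-th hypothesis, and $c_k$ are the softmax-normalized confidences.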
22. ● To utilize all possible data → let's use train_full.zarr without downsampling
○ But the size is big!…
○ 89 GB
○ 191,177,863 records with the default setting
→ Need distributed training!
※ It was important to use all the data to get a good score in the competition.
Use train_full.zarr dataset
23. ● torch.distributed is used
○ 8 V100 GPUs * 5 days for 1 epoch
● Practically, we needed to modify AgentDataset to cache its index arrays on disk
○ AgentDataset is copied in the DataLoader when num_workers is set.
■ 8 processes * 4 num_workers = 32 copies are created
■ The in-memory usage of AgentDataset is huge! It cannot fit in RAM.
● The cumulative_sizes attribute was the bottleneck.
○ Cache track_id, scene_index, and state_index into zarr to reduce in-memory usage.
Distributed training
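The caching trick above can be sketched in miniature. This is a simplified stand-in (plain .npy files instead of the zarr cache, made-up file names): compute the dataset's cumulative sizes once, save them to disk, and let every worker memory-map the file instead of holding its own in-RAM copy:

```python
import numpy as np
import os
import tempfile

cache_path = os.path.join(tempfile.gettempdir(), "cumulative_sizes.npy")

def build_cache(frames_per_scene):
    # Expensive step: done once, by a single process.
    cumulative_sizes = np.cumsum(np.asarray(frames_per_scene, dtype=np.int64))
    np.save(cache_path, cumulative_sizes)

def load_cache():
    # Cheap step: each worker memory-maps the shared file (no RAM copy).
    return np.load(cache_path, mmap_mode="r")

build_cache([10, 5, 20])
sizes = load_cache()
print(int(sizes[-1]))  # 35 = total number of frames
```

With 32 dataset copies alive at once, replacing a large in-memory index with a shared memory-mapped file is what makes the difference between fitting in RAM or not.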
24. ● Pointed out in the "We did it all wrong" discussion:
○ The target_positions values need to be rotated in the same way as the image, as specified by the agent's "yaw"
Use l5kit==1.1.0
[Figure: target_positions under l5kit==1.0.6 vs. l5kit==1.1.0]
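The yaw fix boils down to a standard 2D rotation. A minimal sketch (a simplified stand-in for what l5kit>=1.1.0 does internally, not its actual code):

```python
import numpy as np

def rotate_points(points, yaw):
    """Rotate (N, 2) points by -yaw so the agent's heading becomes the x-axis."""
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T

targets = np.array([[1.0, 0.0], [0.0, 1.0]])
print(rotate_points(targets, np.pi / 2))
# A point directly ahead of an agent heading "north" (yaw = pi/2)
# lands on the +x axis of the agent-centric frame.
```

Without this rotation, the targets and the rasterized image live in inconsistent frames, which is exactly the bug the discussion thread pointed out.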
25. ● Use the chopped dataset: only use the 100th frame from each scene.
○ This is how the test data is made.
○ But it discards all ground truth data; instead, set agent_mask in AgentDataset to make the validation data.
● Check the validation/test datasets carefully
○ We noticed that they contain at least 10 future frames & 0 history frames.
→ Next page
Validation strategy
26. ● Set min_history=0, min_future=10 in AgentDataset
○ MOST IMPORTANT!
○ Public LB Score jumps to 13.059 here.
Align training dataset to validation/test dataset
27. ● Tried several models
● Worked Well:
○ Resnet18
○ Resnet50
○ SEResNeXt50
○ ecaresnet18
● Not working well: bigger, deeper models tended to have worse performance...
○ ResNet101
○ ResNet152
CNN Models
28. ● Training hyperparameters
○ Batch size 12 * 8 processes
○ Adam optimizer
○ Cosine annealing over 1 epoch (better than exponential decay)
Training with cosine annealing
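The schedule can be sketched in a few lines. This follows the standard cosine-annealing formula (the same one PyTorch's CosineAnnealingLR implements); the step counts and learning rates here are illustrative, not the ones used in the competition:

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max, lr_min=0.0):
    """Decay lr from lr_max to lr_min over total_steps along half a cosine period."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

print(cosine_annealing_lr(0, 100, 1e-3))    # 0.001 at the start
print(cosine_annealing_lr(100, 100, 1e-3))  # 0.0 at the end of the epoch
```

Decaying all the way to 0 within the single training epoch matches point 4 of the short summary above.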
29. ● Used the albumentations library, tried several augmentations.
○ Tried Cutout, Blur, Downscale
○ Other augmentations used for natural images, e.g. flips, were not appropriate this time
● Only Cutout was adopted for the final model.
Augmentation: 1. Image-based augmentation
[Figure: original image with Cutout, Blur, and Downscale applied]
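The Cutout augmentation kept for the final model is simple enough to sketch directly. This is a bare-bones stand-in for albumentations' implementation, not the library code: zero out a random square patch of the rasterized image.

```python
import numpy as np

def cutout(image, size, rng):
    """Zero out a random size x size patch of a 2D image."""
    h, w = image.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    out = image.copy()
    out[y:y + size, x:x + size] = 0
    return out

rng = np.random.default_rng(0)
img = np.ones((8, 8), dtype=np.float32)
aug = cutout(img, size=4, rng=rng)
print(int(aug.sum()))  # 64 - 16 = 48 pixels remain set
```

Unlike flips, occluding a patch does not break the road geometry encoded in the raster, which is plausibly why it was the one augmentation that survived.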
30. ● Modified BoxRasterizer to add augmentation
○ 1. Random agent drop
○ 2. Agent extent (size) scaling
● We could not find a clear improvement during our experiments.
The final model does not use this augmentation...
Augmentation: 2. Rasterizer-level augmentation
[Figure: several agents are dropped; the host car size is different]
31. ● How to ensemble models?
○ In this competition, we train a model to predict three trajectories (x1, x2, x3) and three confidences (c1, c2, c3).
○ Simple ensemble methods such as averaging do not work.
● Consider the outputs as Gaussian mixture models
○ The outputs can be considered as confidence-weighted GMMs with n_components=3
○ You can take the average of GMMs, and the average of N GMMs takes the form of a GMM with n_components=3N
Ensemble by GMM and EM algorithm
32. ● You can get the ensembled outputs by following the steps below.
○ Sample enough points (e.g. 1000N) from the averaged distribution.
○ Run the EM algorithm with n_components=3 on the sampled points (we used sklearn.mixture.GaussianMixture).
○ Take the output of the EM algorithm as the ensembled trajectories and confidences.
Ensemble by GMM and EM algorithm
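The sampling step can be sketched in numpy alone. This is a deliberately simplified 1D, single-timestep version (the real ensemble works on full 2D trajectories and then fits the samples back with sklearn.mixture.GaussianMixture, which is omitted here); means, confidences, and sigma are illustrative:

```python
import numpy as np

def sample_averaged_gmm(means, confs, n_samples, rng, sigma=1.0):
    """Sample from the uniform average of N confidence-weighted GMMs.

    means: (N, 3) component means from N models
    confs: (N, 3) confidences, each row summing to 1
    """
    means = np.asarray(means, dtype=float).ravel()            # 3N components in total
    weights = np.asarray(confs, dtype=float).ravel() / len(confs)  # average of N mixtures
    comp = rng.choice(len(means), size=n_samples, p=weights)  # pick a component per sample
    return rng.normal(means[comp], sigma)                     # draw from that Gaussian

rng = np.random.default_rng(0)
means = [[0.0, 5.0, 10.0], [0.1, 5.1, 9.9]]   # two models, 3 modes each
confs = [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2]]
samples = sample_averaged_gmm(means, confs, n_samples=2000, rng=rng)
# Running EM with n_components=3 on `samples` would then recover
# 3 ensembled modes with confidences.
print(samples.shape)  # (2000,)
```

The key point is that the averaged mixture has 3N components, so EM with n_components=3 is what compresses it back to the 3 trajectories the competition requires.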
37. ● CNN models: a smaller model was enough
○ ResNet18 was enough to get 4th place
○ Tried bigger ResNet101, ResNet152, etc… but got worse performance
● Only 1 epoch of training was enough!
○ Because the data is very big & almost duplicated across consecutive frames
○ Important to use cosine annealing for the learning-rate schedule
● The Rasterizer (drawing the image) is the bottleneck
○ A CPU-intensive task; GPU utilization is not 100%.
Findings
Typical pipeline: raw data (world coordinates in time, extent (size), yaw) → Rasterizer (base implementation provided by Lyft) → image → CNN → predicted future coordinates (3 trajectories)
38. ● https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/discussion/201493
● Optimized the Rasterizer implementation
→ 8 GPUs * 2 days for 1 epoch
● Hyperparameters for "heavy" training
○ Semantic + satellite images
○ Bigger image (448 * 224) ← (224, 224)
○ num_history: 30 ← 10
○ min_future: 5 ← 10
○ Modified agent filter threshold
○ batch_size: 64
etc...
● Pre-training on small images for 4 epochs → fine-tuning on big images for 1 epoch
○ It was very effective
[1st place solution]: L5kit speedup
39. ● The 10th place solution used a GNN-based method called VectorNet
○ Faster training & inference
■ They did not use rasterized images at all
■ 11 GPU hours for 1 epoch (our CNN needs about 960 GPU hours)
○ Comparable performance to CNN-based methods
Other interesting approaches: VectorNet
[Figure: VectorNet [Gao+, CVPR 2020] architecture vs. the CNN-based approach]
41. ● How different are the 3 trajectories generated by the CNN models?
● Case 1: Different directions
○ The CNN can predict different possible ways/directions that agents may move in the future.
The diversity of the 3 trajectories
42. ● How different are the 3 trajectories generated by the CNN models?
● Case 2: Speed or start time is different
○ Even when the direction is straight, the CNN can predict different possible speeds/accelerations with which agents may move in the future.
The diversity of the 3 trajectories
44. ● raster_size (image size)
○ Tried 224x224 & 128x128.
○ The default 224x224 was better
● pixel_size
○ Tried 0.5, 0.25, 0.15.
○ The default 0.5 was better.
● num_history-specific models
○ Short-history model:
■ Tried to train a 0-history model
→ the performance was not better than the original model
○ Long-history model
■ Tried 10, 14, 20
■ The default 10 was better in our experiments
(but the 1st place solution used num_history=30)
Hyperparameter changes
45. ● Added velocity arrow to the BoxRasterizer
Custom Rasterizer: 1. VelocityBoxRasterizer
46. ● Original SemanticRasterizer: the semantic image is drawn as an RGB image
● ChannelSemanticRasterizer:
○ Separate channels for road, lane, green/yellow/red signals & crosswalks
Somehow, the training performance was worse than the original SemanticRasterizer...
Custom Rasterizer: 2. ChannelSemanticRasterizer
47. ● We thought the red-signal duration is important for predicting when a stopped agent will start moving in the future.
● This semantic rasterizer changes its value by looking at how long the signal has continued in the history.
Custom Rasterizer: 3. TLSemanticRasterizer
48. ● Draw each agent type in a different color/channel
○ CAR = blue
○ CYCLIST = yellow
○ PEDESTRIAN = red
○ UNKNOWN = gray
● Unknown-type agents are also drawn
Custom Rasterizer: 4. AgentTypeBoxRasterizer
49. ● Predict all agents' future coordinates at once, from 1 image.
● Using semantic segmentation models (segmentation-models-pytorch)
● Stopped this investigation because agents sometimes exist very far from the host car.
Multi-agent prediction model
(Image from https://self-driving.lyft.com/level5/data/)
50. ● What kind of data causes seriously big errors?
● When the "yaw" annotation is wrong, the predicted & actual directions become different!
● Does fixing the data's yaw field improve the total score?
○ YES for the validation dataset (see below).
○ NO for the test dataset; the yaw annotation seems wrong only for stopped cars.
● In a real application, I guess this is a very important problem to consider...
Yaw correction
[Figure: example predictions with losses 43988, 30962, and 10818]
51. ● Kaggle page: Lyft Motion Prediction for Autonomous Vehicles
● Data HP: https://self-driving.lyft.com/level5/data/
● Solution Discussion: Lyft Motion Prediction for Autonomous Vehicles
● Solution Code: https://github.com/pfnet-research/kaggle-lyft-motion-prediction-4th-place-solution
References