The document discusses using generative adversarial networks (GANs) for supervised learning tasks such as face frontalization. It describes how GANs can be used to generate frontal face images from profile images by training a generator network against a discriminator network. The generator consists of an encoder that encodes the input face into a latent vector and a decoder that reconstructs the face from that vector. The adversarial setup improves training by giving the generator two objectives: minimizing a pixel-wise loss and fooling the discriminator, which yields higher-quality images faster than training without the adversarial component. GPUs are recommended for training due to the computational intensity of deep learning models.
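The two-objective generator training described above can be sketched numerically. This is a minimal pure-NumPy illustration, not code from the document: the function name, the choice of L1 for the pixel-wise term, and the 0.01 weight are all assumptions for demonstration.

```python
import numpy as np

def generator_loss(generated, target, d_score_on_generated, adv_weight=0.01):
    """Combined generator objective: reconstruct the target AND fool D.

    generated, target    -- image arrays with values in [0, 1]
    d_score_on_generated -- discriminator's probability that the generated
                            image is real (higher = generator fooled it better)
    adv_weight           -- trade-off between the two objectives
    """
    pixel_loss = np.abs(generated - target).mean()   # pixel-wise (L1) term
    adv_loss = -np.log(d_score_on_generated + 1e-8)  # adversarial term
    return pixel_loss + adv_weight * adv_loss
```

A perfect reconstruction that also fools the discriminator drives both terms toward zero; in practice the adversarial term keeps pushing the generator toward sharper, more realistic outputs than the pixel loss alone would.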
Altitude London 2018: A hands-on tour of Image Optimisation workshopFastly
- The document presents an agenda for a tour of Fastly's image optimization capabilities, including why images should be optimized, how Fastly's image optimization works, image transformations available, and VCL tips.
- It discusses choosing appropriate image formats like JPEG, PNG, and WebP based on content type, and techniques like downscaling images server-side to reduce file size.
- It demonstrates how to handle the "Save-Data" request header to lower image quality for data-saving users and ensure consistent caching with purging.
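The Save-Data behaviour described in the workshop is configured in VCL on Fastly; as a language-neutral sketch of the idea, here is a minimal Python illustration (the quality values and function name are assumptions for demonstration, not Fastly's actual configuration):

```python
def image_quality(request_headers, default_quality=85, low_quality=45):
    """Pick a JPEG quality level based on the client's Save-Data hint.

    Browsers with data saving enabled send "Save-Data: on"; serving those
    clients a lower-quality image reduces bytes on the wire. Varying the
    cache key on this header (not shown here) keeps caching consistent, so
    data-saving and regular users never receive each other's variants.
    """
    save_data = request_headers.get("Save-Data", "").strip().lower()
    return low_quality if save_data == "on" else default_quality
```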
Certification Study Group - Professional ML Engineer Session 3 (Machine Learn...gdgsurrey
Dive into the essentials of ML model development, processes, and techniques to combat underfitting and overfitting, explore distributed training approaches, and understand model explainability. Enhance your skills with practical insights from a seasoned expert.
The Frontier of Deep Learning in 2020 and BeyondNUS-ISS
This talk will be a summary of the recent advances in deep learning research, current trends in the industry, and the opportunities that lie ahead.
We will discuss topics in research such as:
Transformers, GPT-3, BERT
Neural Architecture Search, Evolutionary Search
Distillation, self-learning
NeRF
Self-Attention
Also shifting industry trends such as:
The move to free data
Rising importance of 3D vision
Using synthetic data (Sim2Real)
Mobile vision & Federated Learning
Using Bayesian Optimization to Tune Machine Learning ModelsScott Clark
1) Bayesian optimization can be used to efficiently tune the hyperparameters of machine learning models, requiring far fewer evaluations than standard random search or grid search methods to find good hyperparameters.
2) It builds a statistical model called a Gaussian process to model the objective function based on previous evaluations, and uses this to select the most promising hyperparameters to evaluate next in order to optimize an objective metric like accuracy.
3) SigOpt is a service that uses Bayesian optimization to tune machine learning models, outperforming expert humans on tasks like classifying images from CIFAR10 and reducing error rates more than standard methods.
Using Bayesian Optimization to Tune Machine Learning ModelsSigOpt
1. Tuning machine learning models is challenging due to the large number of non-intuitive hyperparameters.
2. Traditional tuning methods like grid search are computationally expensive and can find local optima rather than global optima.
3. Bayesian optimization uses Gaussian processes to build statistical models from prior evaluations to determine the most promising hyperparameters to test next, requiring far fewer evaluations than traditional methods to find better performing models.
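As a concrete illustration of the loop both summaries describe — fit a Gaussian process to past evaluations, then pick the next point via an acquisition function — here is a minimal expected-improvement sketch on a toy 1-D objective using scikit-learn. The kernel, candidate grid, and iteration counts are arbitrary choices for the demo, not anything from the talks:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def bayes_opt(objective, bounds=(0.0, 5.0), n_init=3, n_iter=10, seed=0):
    """Maximize `objective` with a GP surrogate and expected improvement."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))   # initial random evaluations
    y = np.array([objective(x[0]) for x in X])
    candidates = np.linspace(*bounds, 201).reshape(-1, 1)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        best = y.max()
        sigma = np.maximum(sigma, 1e-9)
        z = (mu - best) / sigma
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
        x_next = candidates[np.argmax(ei)]       # most promising point to try next
        X = np.vstack([X, [x_next]])
        y = np.append(y, objective(x_next[0]))
    return X[np.argmax(y)][0], y.max()
```

On a smooth objective such as `f(x) = -(x - 2) ** 2`, the loop homes in on the maximum near `x = 2` in a handful of evaluations, which is the efficiency argument both abstracts make against grid and random search.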
This document discusses scaling deep learning models to multiple nodes using parallel policy gradients. The author shows that a simple parallel policy-gradients model trained on a supercomputer can learn to play Atari Pong faster than Google DeepMind's A3C model and a 7-year-old child. Optimization efforts, including reducing progress updates, using MKL-backed NumPy, and updating the agent every 4 seconds instead of after full games, resulted in the model learning to play Pong in under 4 minutes when run on 1,536 cores of a supercomputer. However, humans are still around 10,000 times faster at learning the game than current AI techniques. The author concludes that HPC can play a large role in deep learning.
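The synchronous data-parallel pattern behind that multi-node speedup can be sketched in a few lines. This is an illustrative NumPy/thread-pool reduction, not the author's supercomputer code: each worker estimates a REINFORCE-style policy gradient from its own batch of episodes, and the workers' estimates are averaged into one update.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def worker_gradient(episodes):
    """REINFORCE-style estimate from one worker's episodes.

    Each episode is (total_reward, grad_log_prob), where grad_log_prob is
    the gradient of the log-probability of the actions the policy took.
    """
    return np.mean([r * g for r, g in episodes], axis=0)

def parallel_policy_gradient(shards, n_workers=4):
    """Average per-worker gradient estimates (synchronous data parallelism)."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(worker_gradient, shards))
    return np.mean(grads, axis=0)
```

With equally sized shards, the averaged result is identical to the gradient a single worker would compute over all episodes — the parallel version just gathers the episodes (the expensive part) many times faster.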
Training and tuning models with lengthy training cycles like those in deep learning can be extremely expensive and may sometimes involve techniques that degrade performance. We'll explore recent research on optimization strategies to efficiently tune these types of deep learning models. We will provide benchmarks and comparisons to other popular methods for optimizing the models, and we'll recommend valuable areas for further applied research.
Using GANs to improve generalization in a semi-supervised setting - trying it...PyData
In many practical machine learning classification applications, the training data for one or all of the classes may be limited. We will examine how semi-supervised learning using Generative Adversarial Networks (GANs) can be used to improve generalization in these settings. The full approach, from training to model deployment, will be demonstrated using AWS Lambda and/or AWS SageMaker.
This document discusses using generative adversarial networks (GANs) for semi-supervised learning. GANs can help in a semi-supervised setup by creating a more diverse set of unlabeled data and improving generalization when labeled data is limited. The discriminator is trained on labeled data, unlabeled data, and generated data, learning both to classify inputs and to distinguish real samples from generated ones. The loss functions are modified to balance realistic sample generation against classification, which improves GAN training. Google Colaboratory and deployment options such as Amazon SageMaker and AWS Lambda are also discussed.
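The modified discriminator objective described here can be made concrete with a small NumPy sketch. It follows the common K-class semi-supervised GAN formulation in which the probability of "real" is derived from the class logits as Z(x)/(Z(x)+1), with Z(x) the sum of exponentiated logits; the function names and the epsilon are illustrative assumptions, not details from the talk.

```python
import numpy as np

def logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def d_real_prob(logits):
    """P(real) = Z(x) / (Z(x) + 1), with Z(x) = sum_k exp(logit_k)."""
    z = logsumexp(logits)
    return np.exp(z - np.logaddexp(z, 0.0))   # numerically stable Z/(Z+1)

def d_loss_semi_supervised(logits_labeled, label, logits_unlabeled, logits_fake):
    # Supervised term: cross-entropy over the K real classes for labeled data.
    supervised = logsumexp(logits_labeled) - logits_labeled[label]
    # Unsupervised terms: unlabeled data should look real, generated data fake.
    unsupervised = -np.log(d_real_prob(logits_unlabeled) + 1e-8) \
                   - np.log(1.0 - d_real_prob(logits_fake) + 1e-8)
    return supervised + unsupervised
```

The single network thus serves both roles the summary mentions: the K logits classify, while their combined mass decides real vs. generated.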
AI Food detector; A model of Generative adversarial network for food Classifierjimmy majumder
This document describes training and testing a neural network model for flower classification from images. It begins by outlining several ways to load and preprocess the image data, including using Keras utilities, writing a custom input pipeline with TensorFlow, and downloading datasets. The model is then configured with classes derived from a dataset of 3,670 flower images. The network is trained and achieves an accuracy of 69.9% on this small dataset, as shown in a linked GitHub repository.
This document discusses the challenges of machine learning development circa 2013 and outlines Dato's approach to addressing these challenges. In 2013, machine learning development was difficult, slow, and expensive. It required specialized knowledge and infrastructure. Dato aims to accelerate the creation of intelligent applications by making sophisticated machine learning as easy as "Hello world" through high-level toolkits, auto feature engineering, automated machine learning (AutoML), and scalable data structures. The document demonstrates how Dato's tools can build an intelligent application with just a few lines of code and handle large datasets by leveraging out-of-core computation.
Deep learning techniques can be used to learn features from data rather than relying on hand-crafted features. This allows neural networks to be applied to problems in computer vision, natural language processing, and other domains. Transfer learning techniques take advantage of features learned from one task and apply them to another related task, even when limited data is available for the second task. Deploying machine learning models in production requires techniques for serving predictions through scalable APIs and caching layers to meet performance requirements.
Extracting information from images using deep learning and transfer learning ...PAPIs.io
For online businesses, recommender systems are paramount. There is an increasing need to take all available user information into account in order to tailor the best product offer to each new user.
Part of that information is the content that the user actually sees: the visuals of the products. When it comes to products like luxury hotels, pictures of the room, the building or even the nearby beach can significantly impact users’ decision.
In this talk, we will describe how we improved an online vacation retailer recommender system by using the information in images. We’ll explain how to leverage open data and pre-trained deep learning models to derive information on user taste. We will use a transfer learning approach that enables companies to use state of the art machine learning methods without needing deep learning expertise.
The document provides an overview of a presentation about Google Cloud developer tools and an easier path to machine learning. It introduces the speaker and their background and experience. It then outlines the agenda which includes introductions to machine learning and Google Cloud, Google APIs, Cloud ML APIs, and other APIs to consider. It provides examples of using various Cloud ML APIs like Vision, Natural Language, and Speech for tasks like image labeling, text analysis, and speech recognition. The goal is to demonstrate how APIs powered by machine learning can help ease the burden of learning machine learning by allowing users to leverage pre-built models if they can call APIs.
This talk is composed of 3 major parts: the iterative creation of a recommender engine, the labeling of images, the post processing of images.
After introducing the main topic, labeling images to improve recommendation engine performance, we start with a discussion of recommendation engines. We briefly describe "classical" recommender systems (collaborative filtering, content-based filtering) and their advantages and limitations. We then describe the re-ranking approach we used to combine different engines into one. Re-ranking is a method (used by Google, for example) that takes the different rankings as features and optimizes a certain loss. In our case we combine our different recommendations through a logistic regression that predicts the probability of purchase for each (user, sale) tuple. This version of the engine led to +7% revenue per customer and is now running in production.
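A minimal version of that re-ranking step looks like this (synthetic data and scikit-learn; the feature names and simulated purchase model are illustrative, not the retailer's actual data): each underlying engine's score becomes a feature, and a logistic regression predicts purchase probability, which becomes the final ranking.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic (user, sale) examples: one score per underlying engine.
n = 2000
collab_score = rng.uniform(0, 1, n)    # collaborative-filtering score
content_score = rng.uniform(0, 1, n)   # content-based score
# Simulated ground truth: purchases are more likely when both engines agree.
p = 1 / (1 + np.exp(-(3 * collab_score + 2 * content_score - 3)))
purchased = rng.random(n) < p

X = np.column_stack([collab_score, content_score])
reranker = LogisticRegression().fit(X, purchased)

# Final ranking = predicted purchase probability for each (user, sale) pair.
proba = reranker.predict_proba([[0.9, 0.9], [0.1, 0.1]])[:, 1]
```

The appeal of this design is that any new signal — such as the image labels discussed next — is just one more column in `X`.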
We then explain why we wanted to use image information. It seemed that sales with certain images were performing better than others. If we had labels for all images, we could use them in a content-based recommender system (itself used in the re-ranking engine). We then describe how to label our images using pre-trained models, transfer learning, and external APIs. We also show how easy it is to steal these APIs.
The final part deals with post-processing of the images. Since most pre-trained models output only a single class prediction, we need to reshape these into broad themes that can be used in our engine. We use Non-negative Matrix Factorization (NMF) for this purpose and show that the results are highly interpretable. We conclude by visually comparing the different engines.
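That post-processing step can be sketched with scikit-learn's NMF. The matrix below is a made-up image-by-label count matrix, and two themes is an arbitrary choice; neither comes from the talk:

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows = images, columns = class predictions from the pre-trained model(s).
# Two broad "themes" exist by construction: beach-like and city-like images.
counts = np.array([
    [3.0, 2.0, 0.0, 0.0],   # image 1: mostly "sea" / "sand" labels
    [6.0, 4.0, 0.0, 0.0],   # image 2: same theme, more labels overall
    [0.0, 0.0, 1.0, 2.0],   # image 3: mostly "building" / "street" labels
    [0.0, 0.0, 2.0, 4.0],   # image 4: same theme as image 3
])

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(counts)   # image -> theme weights
H = model.components_             # theme -> label weights
```

`H` is what makes the result interpretable: each theme is a non-negative mixture of the original labels, so a theme dominated by "sea" and "sand" reads directly as "beach", and `W` gives each image's strength in that theme for use as a recommender feature.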
The key takeaways (more information in the pitch part) are these:
- Machine learning: overview of recommender systems, re-ranking, how to label images, transfer learning.
- Do iterative data science. Start simple, then try more complex systems.
- Avoid rushing into deep learning without checking what you can find on the Internet. Use pre-trained models and transfer learning.
There is a lot of hype around deep learning and image recognition. However, there are not that many success stories for pure-play web companies. In our case, we explain how we started with simple recommender systems before improving them gradually and finally using image information.
One of the key takeaways is this: do iterative data science. Always prefer shipping a minimum viable product before building something complex. At our clients, we commonly see teams rushing into image projects for the sole purpose of doing deep learning, without a clear ROI in mind.
We insist on the fact that deep learning is not an end in itself. Here, it boils down to making new information available in the system. In this sense, deep learning methods are just an extension of Business Intelligence.
Cutting Edge Computer Vision for EveryoneIvo Andreev
Microsoft offers a wide range of tools and advanced solutions to support you in managing computer vision related tasks.
From purely code-based approaches with ML.NET, through zero-code ComputerVision.ai, to the advanced and flexible AI services in Azure ML, there is a solution for every need and every type of user.
From running on premises, through managed infrastructure, to fully cloud-based services, the speed of getting to the desired results and the return on investment are guaranteed.
Join this session to get insights about the options, deployment, pricing, pros and cons compared and select the most appropriate tech for your business case.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/01/practical-image-data-augmentation-methods-for-training-deep-learning-object-detection-models-a-presentation-from-ej-technology-consultants/
Evan Juras, Computer Vision Engineer at EJ Technology Consultants, presents the “Practical Image Data Augmentation Methods for Training Deep Learning Object Detection Models” tutorial at the September 2020 Embedded Vision Summit.
Data augmentation is a method of expanding deep learning training datasets by making various automated modifications to existing images in the dataset. The resulting increased data diversity can enable a more accurate and robust model without the need to manually obtain more images.
In this presentation, Juras explores practical methods of image data augmentation for training object detection models. He also shows how to create an augmented dataset of 50,000 unique images with labeled bounding boxes in a few hours using a short Python script.
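One of the simplest augmentations of the kind the tutorial covers — a horizontal flip that also updates the bounding-box labels — fits in a few lines of NumPy. The box format assumed here is pixel coordinates (xmin, ymin, xmax, ymax); this is a generic sketch, not Juras's script.

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image and its (xmin, ymin, xmax, ymax) boxes."""
    h, w = image.shape[:2]
    flipped = image[:, ::-1].copy()          # mirror along the width axis
    boxes = np.asarray(boxes, dtype=float)
    xmin = w - boxes[:, 2]                   # old xmax mirrors to new xmin
    xmax = w - boxes[:, 0]                   # old xmin mirrors to new xmax
    new_boxes = np.column_stack([xmin, boxes[:, 1], xmax, boxes[:, 3]])
    return flipped, new_boxes
```

Composing a handful of such label-aware transforms (flips, crops, brightness shifts) over a seed set is how an augmented dataset of the scale mentioned above can be generated automatically.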
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Getting Intimate with Images on Android with James HalpernFITC
Save 10% off ANY FITC event with discount code 'slideshare'
See our upcoming events at www.fitc.ca
OVERVIEW
As most Android developers know, dealing with the extreme degree of fragmentation in the Android ecosystem is often challenging. Among the more difficult challenges is managing memory usage, as devices on the market today can have as little as 13MB of memory. Now imagine the pains developers go through when massive bitmaps eat up that memory in a millisecond.
In this presentation, James Halpern will talk about the complexities of image and memory management in Android and walk you through the creation of a successful, powerful, open-source image management utility. Come to this presentation to learn about techniques that will help you optimize the performance of your apps. Learn about Android's memory limitations and the role the garbage collector plays in your app's performance and complexity. Learn how to communicate Android graphics issues to developers, and how good design can create fewer bugs. James will conclude by briefly walking you through his open-source image management solution that gracefully handles most of these issues in a simple-to-use package.
The document discusses the benefits of regular exercise for both physical and mental health. It notes that exercise can help reduce the risk of diseases like heart disease and diabetes, improve mood, and reduce feelings of stress and anxiety. Regular exercise of 150 minutes per week is recommended for substantial health benefits.
Learning visual representation without human labelKai-Wen Zhao
Self-supervised learning (SSL) is one of the fastest-growing research topics of recent years. SSL provides algorithms that learn visual representations directly from the data itself rather than from manual human labels. From a theoretical point of view, SSL explores information theory and the nature of large-scale datasets.
Building a TensorFlow-based model that extracts the "best" frames from a video, which are then used as auto-generated thumbnails and thumbstrips. We used transfer learning on Google's InceptionV3 model, which was pretrained on ImageNet data and retrained on JW Player's thumbnail library.
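At a sketch level, the thumbnail selection reduces to scoring each frame with a small trained head on frozen pretrained features and keeping the top-scoring frames. The snippet below is pure NumPy with a stand-in for the feature extractor — the real pipeline uses InceptionV3 features, and the head weights here are assumed to come from the retraining step.

```python
import numpy as np

def fake_pretrained_features(frame):
    """Stand-in for a frozen InceptionV3 feature extractor (illustrative)."""
    return np.array([frame.mean(), frame.std()])

def select_thumbnails(frames, head_weights, head_bias, k=2):
    """Score frames with a linear head on frozen features; return top-k indices."""
    feats = np.stack([fake_pretrained_features(f) for f in frames])
    scores = feats @ head_weights + head_bias   # "thumbnail quality" per frame
    return np.argsort(scores)[::-1][:k]         # indices of the k best frames
```

Only the tiny head is trained on the thumbnail library; the heavy feature extractor stays frozen, which is what makes the transfer-learning approach cheap.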
Companies: discover the essential building blocks of an IoT solutionScaleway
> AFTER THIS WEBINAR, YOU WILL BE ABLE TO UNDERSTAND:
- The main components of an IoT solution
- The importance of designing your solution properly from the very first steps
Understand, verify, and act on the security of your Kubernetes clusters - Sca...Scaleway
After this webinar, you will be able to:
- Apply a minimum security template on your clusters
- Base your cloud-native application design on security restrictions
- Learn easy security policies to apply
- Protect your data
- Be aware of very common and dangerous security issues that can be easily fixed
Discover the benefits of Kubernetes to host a SaaS solutionScaleway
What you can take away from this presentation:
- What a SaaS solution is
- Key figures on the SaaS market
- Advantages of Kubernetes Kapsule for SaaS
- How to optimize your costs and loads while maintaining stability
- How to guarantee the security of your infrastructures
- The difference between a multi-instance and a multi-tenant architecture
6 winning strategies for agil SaaS editorsScaleway
What you can take away from this presentation:
- Switch to micro service architecture
- Be ready for the multicloud world
- Focus on unique value proposal
- Be part of software ecosystems... or be the ecosystem
- Develop API to accelerate new sales channel
- Give them first, then make them addict to your soft
Webinar - Relying on Bare Metal to manage your workloadsScaleway
Upon leaving this webinar, you will be able to distinguish the different types of workloads, but you will also be capable of testing your infrastructure allowing you to better manage its peak loads.
Webinaire du 09/04/20 - S'appuyer sur du Bare Metal pour gérer ses pics de ch...Scaleway
Suite à ce webinaire vous serez capable de distinguer les différents types de workload et de savoir comment tester votre infrastructure pour mieux gérer les pics de charge.
- Scaleway uses VXLAN with BGP EVPN to build an overlay fabric on their network infrastructure. This provides multi-tenancy, encapsulation of Ethernet frames over UDP, and support for both bridging and routing.
- The underlay fabric uses Clos topology with IPv4 and eBGP for high bandwidth and resilience. Edge devices run as VTEPs to connect virtual networks over the overlay.
- A virtual route reflector provides the control plane for the overlay fabric, decoupling it from the underlying hardware. This allows routing between subnets and multi-homing of hosts between VTEPs.
Why and how we proxy our IoT broker connectionsScaleway
This document discusses how and why Scaleway proxies its IoT broker connections. It uses a reverse proxy to wrap existing MQTT brokers for horizontal scalability, TLS termination, metrics reporting and other features. The proxy parses MQTT packets on the fly and handles authentication, authorization, optional topic rewriting and more in a multi-stage pipeline before transmitting packets to brokers. Brokers are discovered dynamically using Kubernetes DNS to find the appropriate broker for each device connection.
From local servers up to Kubernetes in the cloudScaleway
This document discusses transitioning from local servers to cloud infrastructure and containerization using Kubernetes. It begins with an overview of infrastructure as a service (IAAS) benefits like hardware failure proofing and on-demand resources. It then discusses containerizing applications for portability and faster deployments. Finally, it outlines orchestrating containers with Kubernetes to automatically scale applications and maintain high availability. The document proposes these transitions to improve time to market, costs, and focus on development rather than operations. It acknowledges challenges around networking, configuration, and volumes that containers and orchestration address.
L’IA, booster de votre activité : principes, usages & idéationScaleway
This document provides an overview of artificial intelligence and machine learning techniques including supervised and unsupervised learning using statistics on large datasets. It discusses applications such as recognizing text, clustering individuals/concepts, natural language generation, signal processing, product recommendations, deep learning using neural networks, challenges around business process modeling, data availability, explainability, and solutions including specialized hardware and frameworks. It also covers costs associated with AI hardware and examples of training duration and expenses.
Comment automatiser le déploiement de sa plateforme sur des infrastructures ...Scaleway
This document discusses deploying infrastructure on bare metal servers using Nomad and Ansible. It begins with an introduction to Bare Metal Dedibox and its benefits over dedicated servers and cloud instances. Next, it demonstrates how to deploy the Nomad cluster manager and Consul using Ansible playbooks to provision and configure servers. It provides an example of using Nomad and bare metal for infinite scaling of Ceph storage. Overall, the document shows how infrastructure can be automatically deployed on bare metal servers through Nomad and Ansible configuration management.
This document discusses serverless computing and Scaleway's serverless solutions. It introduces the concepts of serverless functions and containers, which allow developers to focus on coding applications while the infrastructure automatically scales and handles security, deployment, and maintenance. Scaleway offers two simple serverless solutions - Functions, which allow scheduling code on any language, and Containers, which can deploy applications in seconds across multiple clouds.
Migrating the Online’s console with DockerScaleway
The document discusses the migration of an online console to a Dockerized environment using Nomad for deployment. Some key points:
- The console previously ran on old servers with various OSes and a homemade deployment system, making it difficult to scale and deploy new servers.
- The new approach uses Docker containers, Nomad for scheduling, Consul for service discovery, and Vault for secrets. Containers are deployed via Gitlab CI in a blue/green manner.
- Challenges included missing libraries, hardcoded paths, and issues promoting containers in Nomad. The new system addresses the prior limitations and allows easy scaling and deployment.
Routage à grande échelle des requêtes via RabbitMQScaleway
The document describes the distributed queuing system used to process tasks in Instances' control plane. Messages from APIs are routed through RabbitMQ exchanges and bindings to Celery worker queues. This allows scaling components independently and processing tasks asynchronously. Key-value headers and routing rules ensure messages are routed to the correct queues and workers.
Instances Behind the Scene: What happen when you click on «create a new insta...Scaleway
When a user clicks "create a new instance" on the Scaleway platform, several processes occur behind the scenes. The instance creation request is sent to the Scaleway API, then various services including Flask, PostgreSQL, and RabbitMQ work to provision the new virtual machine. Finally, the QEMU hypervisor is used to start the new virtual machine instance using a disk image file. The presentation also briefly discusses provisioning on bare metal servers and announces several related talks at the event.
Demystifying IoT : Bringing the cloud to connected devices with IoT StationScaleway
This document discusses IoT Station, a platform as a service for connecting devices to the cloud using MQTT protocol. It consists of an IoT hub that handles messaging between connected things and cloud services. The platform provides tools like messaging, data storage, serverless functions and AI capabilities to connected devices through edge computing deployments. Upcoming features include support for MQTT 5.0 and integrating additional connectivity protocols and device management. The key aspects are lightweight pub/sub messaging, persistence, security and scalability.
L’odyssée d’une requête HTTP chez ScalewayScaleway
This document provides a summary of the journey a HTTP request takes when interacting with Scaleway's API gateway and backend services. It describes how requests are routed, load balanced, authenticated, rate limited and sent to the appropriate backend service across different regions. The document is presented as a story told over 7 chapters, covering topics like gRPC integration, locality routing, authentication, service discovery and load balancing.
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Harnessing the power of Generative Adversarial Networks (GANs) for supervised learning
1.
2. Olga Petrova
Machine Learning Engineer @ Scaleway
Harnessing the power of Generative Adversarial Networks (GANs) for supervised learning
3. OUTLINE
1. INTRODUCTION
a) Generating content with AI
b) Supervised vs. Unsupervised learning
2. DEEP LEARNING PIPELINE
a) The building blocks of a DL project
b) How can a GPU help you?
3. FACE FRONTALIZATION GAN
5. THIS SLIDE DOES NOT EXIST
ThisPersonDoesNotExist.com
These images were generated via StyleGAN, an artificial neural network by NVIDIA.
6. WHY USE A GPU FOR THIS?
NVIDIA
• Graphics Processing Unit manufacturer
• Training: very computationally intensive
• GPUs are optimised for Deep Learning
Scaleway GPU offer
• Dedicated 16-GB NVIDIA Tesla P100 GPU
• 10 CPU cores
• 45 GB of RAM
• 400 GB of local NVMe storage
8. SUPERVISED vs. UNSUPERVISED
Unsupervised learning
ThisPersonDoesNotExist GAN:
- Show the model a lot of pictures of people (~70 000 images from Flickr)
- The model learns how to generate new pictures of faces
Unlabelled data: unsupervised learning
The training set: instead of (input, output) pairs, only (input)
The original use of GANs was unsupervised learning
10. SUPERVISED vs. UNSUPERVISED
Supervised learning
Labeled data: (input, output) pairs
- Dog vs. cat: inputs = images, outputs = class labels
- Super resolution: inputs = images, outputs = super-resolved images
Typically, GANs were not used here
Low-resolution input vs. super-resolution output
"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Twitter (2017)
11. SUPERVISED vs. UNSUPERVISED
Super resolution vs. super resolution with GAN
"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Twitter (2017)
12. FACE FRONTALIZATION
Supervised learning
Inputs: profile images of a face at an angle
Outputs: frontal images of the face
"Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis" by R. Huang et al. (2017)
Figure labels: input, generated output, ground truth
17. FACE FRONTALIZATION
THE MODEL
• Architecture + hyperparameters ← the ML engineer's job
• Trainable parameters ← learned numerical values
18. FACE FRONTALIZATION
TWO REGIMES
1. Training: learn the right parameters for the model
2. Inference: use the trained model to generate outputs for new inputs
19. TRAINING THE MODEL I
Training data: correct (input, output) pairs
1. Feed inputs into the model
2. Compare the generated output to ground truth
3. Adjust trainable parameters to generate better outputs
20. TRAINING THE MODEL II
• Training done in mini-batches:
analyse a few images, then update parameters
• 1 pass through training dataset = 1 training epoch
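The training procedure above (feed a mini-batch through the model, compare the output to ground truth with a loss, adjust the parameters, repeat for several epochs) can be sketched in framework-free Python. The "generator" here is a deliberately tiny stand-in with a single trainable parameter w instead of the GAN's millions; all names and values are illustrative, not taken from the talk's actual code:

```python
import random

def pixelwise_loss(generated, target):
    """Mean squared difference between generated and target 'images' (flat pixel lists)."""
    return sum((g - t) ** 2 for g, t in zip(generated, target)) / len(target)

# Toy generator: scales every input pixel by one trainable parameter w.
# In this toy dataset the correct mapping is w = 2.0.
w, lr = 0.0, 0.5
dataset = [([p / 10 for p in range(8)], [2 * p / 10 for p in range(8)]) for _ in range(100)]

batch_size, epochs = 10, 5
for epoch in range(epochs):                       # 1 epoch = one pass through the dataset
    random.shuffle(dataset)
    for i in range(0, len(dataset), batch_size):  # process one mini-batch at a time
        batch = dataset[i:i + batch_size]
        grad = 0.0
        for inp, target in batch:
            generated = [w * p for p in inp]      # 1. feed inputs through the model
            # 2. compare to ground truth: d(loss)/dw of the pixelwise MSE loss
            grad += sum(2 * (g - t) * p for g, t, p in zip(generated, target, inp)) / len(inp)
        w -= lr * grad / len(batch)               # 3. adjust parameters after each mini-batch
```

After 5 epochs of mini-batch updates, w has converged close to the ideal value 2.0; a real network does the same thing over millions of parameters, with gradients computed by backpropagation.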
22. TECHNICAL CHALLENGE: COMPUTATION
Heavy computations: ~8 000 000 trainable parameters in the frontalization GAN
Solution: process many arithmetic operations in parallel on a GPU
24. CPU vs GPU performance in training

                          GP1-XS (4 vCPUs)           Scaleway Tesla P100 GPU
Pricing                   €39/month (€0.078/hour)    €500/month (€1/hour)
Training time per epoch   8.5 hours                  18 minutes
Cost per epoch            €0.66                      €0.30

GPU: over 28 times faster for less than half the price
29. TECHNICAL CHALLENGE: HEAVY I/O
Feed batches of (input, output) pairs of images
Frontalization training set size: ~700 000 images, 13+ GB
30. TECHNICAL CHALLENGE: HEAVY I/O
SOLUTION: Local Storage
Scaleway GPU instances come with 400 GB of local NVMe SSD storage
31. FACE FRONTALIZATION GAN
a) GAN: Generative Adversarial Network
b) Generator: Encoder + Decoder
c) Training and Inference
32. GANs: Generative Adversarial Nets I
"Generative Adversarial Nets" by Ian Goodfellow et al. (2014)
Yann LeCun (Director of Facebook AI): "the most interesting idea in the last 10 years in Machine Learning"
Fig: http://skymind.ai/wiki/generative-adversarial-network-gan
Generative: there is a Generator part
Adversarial: there is also a Discriminator, and you train the two against each other
33. GANs: Generative Adversarial Nets II
1. Generator: generates output images
2. Discriminator: has two objectives
   - accept images from the training set (Real)
   - reject the generated images (Fake)
The purpose of training: the Generator gets good enough to fool the Discriminator into accepting the generated images as Real
Fig: http://skymind.ai/wiki/generative-adversarial-network-gan
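The two competing objectives can be made concrete with the binary cross-entropy loss that GAN discriminators typically minimise. This is a generic sketch with illustrative probability values, not the talk's actual implementation:

```python
import math

def bce(prediction, label):
    """Binary cross-entropy for one probability prediction (label 1 = Real, 0 = Fake)."""
    eps = 1e-7  # clamp to avoid log(0)
    prediction = min(max(prediction, eps), 1 - eps)
    return -(label * math.log(prediction) + (1 - label) * math.log(1 - prediction))

# Suppose the Discriminator outputs these probabilities of "Real":
d_on_real_image = 0.9   # a photograph from the training set
d_on_generated = 0.2    # an image produced by the Generator

# Discriminator's objective: accept Real images (label 1), reject Fake ones (label 0)
d_loss = bce(d_on_real_image, 1.0) + bce(d_on_generated, 0.0)

# Generator's adversarial objective: make the Discriminator say "Real" (label 1)
# on its own outputs - this loss stays large while the Discriminator is not fooled
g_loss = bce(d_on_generated, 1.0)
```

Training alternates between the two: the Discriminator's updates push d_loss down, the Generator's updates push g_loss down, and each improvement by one player makes the other's task harder.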
34. FACE FRONTALIZATION GAN
a) GAN: Generative Adversarial Network
b) Generator: Encoder + Decoder
c) Training and Inference
35. GENERATOR I
Input image: 3 x 128 x 128 = 49152 numbers
Perhaps we do not need all the 49152 values to describe a face?
36. GENERATOR II
ENCODER: Analyse the face → 512 numbers that describe it
DECODER: 512 numbers → Reconstruct the face
ENCODER + DECODER = GENERATOR
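The dimension bookkeeping above is worth spelling out: the Encoder compresses the 3-channel 128 x 128 input into the 512-number face descriptor, which the Decoder expands back into a full image. A quick sanity-check sketch (not the talk's actual network code):

```python
# Shapes flowing through the Generator = Encoder + Decoder
channels, height, width = 3, 128, 128
input_size = channels * height * width   # pixel values entering the Encoder
latent_size = 512                        # compact description of the face

compression_factor = input_size // latent_size
print(input_size, latent_size, compression_factor)  # 49152 512 96
```

So the latent vector is 96 times smaller than the image it describes: the bet is that a face has far fewer degrees of freedom than raw pixels do.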
37. FACE FRONTALIZATION GAN
a) GAN: Generative Adversarial Network
b) Generator: Encoder + Decoder
c) Training and Inference
38. TRAINING
The deployed model is only the Generator (not the Discriminator)
To train the GAN: train both the Discriminator and the Generator
To see the benefit of the GAN, consider training only the Generator first
39. TRAINING: ONLY THE GENERATOR
• The ML engineer needs a measure of how far the generated result is from the ideal
• Example: pixelwise loss function
40. TRAINING: ONLY THE GENERATOR
Top: faces generated after 1 epoch (~700 000 training samples)
Bottom: ground-truth frontal face photographs
41. TRAINING: ONLY THE GENERATOR
Top: faces generated after 10 epochs
Bottom: ground-truth frontal face photographs
42. TRAINING: ONLY THE GENERATOR
Top: faces generated after 50 epochs
Bottom: ground-truth frontal face photographs
43. TRAINING: ONLY THE GENERATOR
Top: faces generated after 400 epochs
Bottom: ground-truth frontal face photographs
44. TRAINING: ONLY THE GENERATOR
Top: faces generated after 400 epochs
Bottom: ground-truth frontal face photographs
• Why does this work?
• We have a lot of trainable parameters (~5 000 000)
45. INFERENCE
Model after one training epoch: results are blurry
• Training for too long leads to overfitting the training data: the model does not generalise well at inference time
• How can we get good training results faster?
Top: input; Middle: generated output; Bottom: ground truth
46. INFERENCE
Model after 50 training epochs: getting better
47. INFERENCE
Model after 400 training epochs: generated images are getting worse
48. TRAINING THE GAN: GENERATOR + DISCRIMINATOR
Train Generator + Discriminator. Two objectives for the Generator:
1. Minimize the pixelwise loss as before
2. Fool the Discriminator into believing the generated images are Real
Top: faces generated after 1 epoch (old, Generator-only model)
Bottom: ground-truth frontal face photographs
49. TRAINING THE GAN: GENERATOR + DISCRIMINATOR
Old model + GAN after 1 training epoch: fine features are sharpened much faster
50. TRAINING THE GAN: GENERATOR + DISCRIMINATOR
Generated output after 10 training epochs: only the Generator vs. GAN = Generator + Discriminator
52. TRAINING THE GAN: GENERATOR + DISCRIMINATOR
1. With the GAN we get better visual quality on the training set faster
2. We can stop training earlier
3. The final model will generalise better
Generated output after 10 training epochs: only the Generator vs. GAN = Generator + Discriminator
53. COMBINED LOSS FUNCTION
Loss = the difference between the generated output and the desired output
1. Pixelwise loss for the Generator: how close is the output to ground truth?
2. Binary Cross-Entropy loss for the Discriminator: Fake or Real?
Fig: http://skymind.ai/wiki/generative-adversarial-network-gan
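A minimal sketch of how the two terms might be combined into the Generator's total loss. The helper names and the adversarial weight are assumptions for illustration; the talk's actual frontalization code is linked at the end of the deck:

```python
import math

def pixelwise_l1(generated, target):
    """How close is the output to ground truth, pixel by pixel?"""
    return sum(abs(g - t) for g, t in zip(generated, target)) / len(target)

def bce(prediction, label):
    """Binary cross-entropy: the Discriminator's Fake-or-Real loss."""
    eps = 1e-7
    prediction = min(max(prediction, eps), 1 - eps)
    return -(label * math.log(prediction) + (1 - label) * math.log(1 - prediction))

def generator_loss(generated, target, d_on_generated, adv_weight=0.01):
    """Combined objective: match the ground truth AND look Real to the Discriminator."""
    return pixelwise_l1(generated, target) + adv_weight * bce(d_on_generated, 1.0)

# Example: a generated 4-pixel 'image' vs. its ground truth, with the
# Discriminator currently giving the generated image only a 0.3 chance of being Real
loss = generator_loss([0.5, 0.4, 0.9, 0.1], [0.5, 0.5, 1.0, 0.0], d_on_generated=0.3)
```

The adversarial weight trades fidelity against sharpness: too small and the GAN term has no effect, too large and the Generator chases "looking real" at the expense of matching the input identity.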
54. COMBINED LOSS FUNCTION
1. Fine features contribute little to the pixelwise loss
2. The Discriminator uses fine features to categorise images as Real or Fake
3. Result: better visual quality of the output
Super resolution vs. super resolution with GAN
"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Twitter (2017)
55. CONCLUSION
1. Using GANs in supervised learning can be a good idea
2. Training such complex deep networks benefits greatly from a GPU
56. Thank You
Stay tuned for exclusive how-to's and updates: follow us on Twitter and LinkedIn @Scaleway
You can also follow me on LinkedIn: www.linkedin.com/in/olga-p-petrova/
The face frontalization GAN code can be found at www.github.com/scaleway/frontalization