I have implemented various gradient-descent-based optimizers (vanilla gradient descent, momentum, Adam, etc.) using only NumPy, without a deep learning framework such as TensorFlow.
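A minimal NumPy sketch of the update rules involved (the function names and hyperparameter defaults here are my own choices for illustration, not taken from the project itself):

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain gradient descent: move against the gradient."""
    return w - lr * grad

def momentum_step(w, v, grad, lr=0.01, beta=0.9):
    """Momentum: a velocity term smooths successive updates."""
    v = beta * v - lr * grad
    return w + v, v

def adam_step(w, m, s, grad, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: bias-corrected first/second moment estimates scale each step."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias correction (t starts at 1)
    s_hat = s / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s
```

For example, minimizing f(w) = w² (whose gradient is 2w) with any of these rules drives w toward 0.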
This document provides an overview of multilayer perceptrons (MLPs) and the backpropagation algorithm. It defines MLPs as neural networks with multiple hidden layers that can solve nonlinear problems. The backpropagation algorithm is introduced as a method for training MLPs by propagating error signals backward from the output to inner layers. Key steps include calculating the error at each neuron, determining the gradient to update weights, and using this to minimize overall network error through iterative weight adjustment.
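The key steps described above (error at each neuron, gradient computation, iterative weight adjustment) can be sketched for a one-hidden-layer network. The layer sizes, learning rate, and function name below are illustrative assumptions, not details from the slides:

```python
import numpy as np

def train_mlp(X, y, steps=200, lr=0.1, hidden=4, seed=0):
    """Train a one-hidden-layer MLP with manual backprop; return MSE history."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(X.shape[1], hidden)) * 0.1
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)) * 0.1
    b2 = np.zeros(1)
    losses = []
    for _ in range(steps):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        y_hat = h @ W2 + b2
        err = y_hat - y                     # gradient of 0.5 * squared error
        losses.append(float((err ** 2).mean()))
        # backward pass: error signals flow output -> hidden layer
        dW2 = h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        # iterative weight adjustment by gradient descent
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return losses
```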
This document discusses gradient descent algorithms, feedforward neural networks, and backpropagation. It defines machine learning, artificial intelligence, and deep learning. It then explains gradient descent as an optimization technique used to minimize cost functions in deep learning models. It describes feedforward neural networks as having connections that move in one direction from input to output nodes. Backpropagation is mentioned as an algorithm for training neural networks.
Welcome to Supervised Machine Learning and Data Science.
Algorithms for building models: Support Vector Machines (SVM).
An explanation of the classification algorithm, with Python code.
Deep learning is a branch of machine learning that uses neural networks with multiple processing layers to learn representations of data with multiple levels of abstraction. It has been applied to problems like image recognition, natural language processing, and game playing. Deep learning architectures like deep neural networks use techniques like pretraining, dropout, and early stopping to avoid overfitting. Popular deep learning frameworks and libraries include TensorFlow, Keras, and PyTorch.
This document provides an overview of activation functions in deep learning. It discusses the purpose of activation functions, common types of activation functions like sigmoid, tanh, and ReLU, and issues like vanishing gradients that can occur with some activation functions. It explains that activation functions introduce non-linearity, allowing neural networks to learn complex patterns from data. The document also covers concepts like monotonicity, continuity, and differentiation properties that activation functions should have, as well as popular methods for updating weights during training like SGD, Adam, etc.
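A quick NumPy sketch of the three activation functions named above (the definitions are standard; the function names are mine):

```python
import numpy as np

def sigmoid(x):
    """Squashes input to (0, 1); smooth and monotonic."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Zero for negatives, identity for positives."""
    return np.maximum(0.0, x)

# tanh squashes input to (-1, 1) and is available directly as np.tanh
```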
Neural networks can be biological models of the brain or artificial models created through software and hardware. The human brain consists of interconnected neurons that transmit signals through connections called synapses. Artificial neural networks aim to mimic this structure using simple processing units called nodes that are connected by weighted links. A feed-forward neural network passes information in one direction from input to output nodes through hidden layers. Backpropagation is a common supervised learning method that uses gradient descent to minimize error by calculating error terms and adjusting weights between layers in the network backwards from output to input. Neural networks have been applied successfully to problems like speech recognition, character recognition, and autonomous vehicle navigation.
The document discusses gradient descent methods for unconstrained convex optimization problems. It introduces gradient descent as an iterative method to find the minimum of a differentiable function by taking steps proportional to the negative gradient. It describes the basic gradient descent update rule and discusses convergence conditions such as Lipschitz continuity, strong convexity, and condition number. It also covers techniques like exact line search, backtracking line search, coordinate descent, and steepest descent methods.
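The basic update rule combined with backtracking line search can be sketched as follows. This is a minimal illustration; `alpha` and `beta` are the usual Armijo sufficient-decrease and shrinkage parameters, with values chosen arbitrarily here:

```python
import numpy as np

def gd_backtracking(f, grad, x0, alpha=0.3, beta=0.8, tol=1e-8, max_iter=1000):
    """Gradient descent with backtracking line search (Armijo condition)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # shrink the step until the sufficient-decrease condition holds
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x
```

On a strongly convex quadratic this converges linearly, with the rate degrading as the condition number grows.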
Deep learning - Conceptual understanding and applications (Buhwan Jeong)
This document provides an overview of deep learning, including conceptual understanding and applications. It defines deep learning as a deep and wide artificial neural network. It describes key concepts in artificial neural networks like signal transmission between neurons, graphical models, linear/logistic regression, weights/biases/activation, and backpropagation. It also discusses popular deep learning applications and techniques like speech recognition, natural language processing, computer vision, representation learning using restricted Boltzmann machines and autoencoders, and deep network architectures.
An Autoencoder is a type of Artificial Neural Network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise.”
Methods of Optimization in Machine Learning (Knoldus Inc.)
In this session we discuss various methods for optimising a machine learning model and how to adjust the hyperparameters to minimise the cost function.
Recurrent Neural Networks have shown to be very powerful models as they can propagate context over several time steps. Due to this they can be applied effectively for addressing several problems in Natural Language Processing, such as Language Modelling, Tagging problems, Speech Recognition etc. In this presentation we introduce the basic RNN model and discuss the vanishing gradient problem. We describe LSTM (Long Short Term Memory) and Gated Recurrent Units (GRU). We also discuss Bidirectional RNN with an example. RNN architectures can be considered as deep learning systems where the number of time steps can be considered as the depth of the network. It is also possible to build the RNN with multiple hidden layers, each having recurrent connections from the previous time steps that represent the abstraction both in time and space.
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
The document discusses neural networks, including human neural networks and artificial neural networks (ANNs). It provides details on the key components of ANNs, such as the perceptron and backpropagation algorithm. ANNs are inspired by biological neural systems and are used for applications like pattern recognition, time series prediction, and control systems. The document also outlines some current uses of neural networks in areas like signal processing, anomaly detection, and soft sensors.
Part 1 of the Deep Learning Fundamentals Series, this session discusses the use cases and scenarios surrounding Deep Learning and AI; reviews the fundamentals of artificial neural networks (ANNs) and perceptrons; discuss the basics around optimization beginning with the cost function, gradient descent, and backpropagation; and activation functions (including Sigmoid, TanH, and ReLU). The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
Gradient descent optimization with simple examples, covering SGD, mini-batch, momentum, AdaGrad, RMSProp, and Adam.
Made for people with little knowledge of neural networks.
Hands-On Machine Learning with Scikit-Learn and TensorFlow - Chapter 8 (Hakky St)
This is documentation from a study meeting in our lab.
The book is "Hands-On Machine Learning with Scikit-Learn and TensorFlow", and this covers Chapter 8.
Vanishing gradients occur when error gradients become very small during backpropagation, hindering convergence. This can happen when activation functions like sigmoid and tanh are used, as their derivatives are between 0 and 0.25. It affects earlier layers more due to more multiplicative terms. Using ReLU activations helps as their derivative is 1 for positive values. Initializing weights properly also helps prevent vanishing gradients. Exploding gradients occur when error gradients become very large, disrupting learning. It can be addressed through lower learning rates, gradient clipping, and gradient scaling.
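The 0-to-0.25 bound on the sigmoid derivative mentioned above is easy to verify numerically. A small demonstration (not from the original deck):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10, 10, 10001)
d_sigmoid = sigmoid(x) * (1 - sigmoid(x))   # sigma'(x) = sigma(x) * (1 - sigma(x))
d_tanh = 1 - np.tanh(x) ** 2                # tanh'(x) = 1 - tanh(x)^2
# In backprop these derivatives multiply once per layer, so with sigmoid
# the error signal shrinks by at least a factor of 4 at every layer.
```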
Part 2 of the Deep Learning Fundamentals Series, this session discusses Tuning Training (including hyperparameters, overfitting/underfitting), Training Algorithms (including different learning rates, backpropagation), Optimization (including stochastic gradient descent, momentum, Nesterov Accelerated Gradient, RMSprop, Adaptive algorithms - Adam, Adadelta, etc.), and a primer on Convolutional Neural Networks. The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
The document discusses hyperparameters and hyperparameter tuning in deep learning models. It defines hyperparameters as parameters that govern how the model parameters (weights and biases) are determined during training, in contrast to model parameters which are learned from the training data. Important hyperparameters include the learning rate, number of layers and units, and activation functions. The goal of training is for the model to perform optimally on unseen test data. Model selection, such as through cross-validation, is used to select the optimal hyperparameters. Training, validation, and test sets are also discussed, with the validation set used for model selection and the test set providing an unbiased evaluation of the fully trained model.
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S... (Simplilearn)
A Convolutional Neural Network (CNN) is a type of neural network that can process grid-like data like images. It works by applying filters to the input image to extract features at different levels of abstraction. The CNN takes the pixel values of an input image as the input layer. Hidden layers like the convolution layer, ReLU layer and pooling layer are applied to extract features from the image. The fully connected layer at the end identifies the object in the image based on the extracted features. CNNs use the convolution operation with small filter matrices that are convolved across the width and height of the input volume to compute feature maps.
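The sliding-filter computation described above can be sketched as a naive valid convolution (strictly speaking cross-correlation, as in most deep learning libraries; the function name and the edge-detection kernel in the example are illustrative assumptions):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the filter and take dot products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise product of the filter with the current patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

For example, a horizontal-difference filter `[[1, -1]]` responds only where the image intensity changes between adjacent columns, producing a vertical-edge feature map.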
This document summarizes various optimization techniques for deep learning models, including gradient descent, stochastic gradient descent, and variants like momentum, Nesterov's accelerated gradient, AdaGrad, RMSProp, and Adam. It provides an overview of how each technique works and comparisons of their performance on image classification tasks using MNIST and CIFAR-10 datasets. The document concludes by encouraging attendees to try out the different optimization methods in Keras and provides resources for further deep learning topics.
Deep learning and neural networks are inspired by biological neurons. Artificial neural networks (ANN) can have multiple layers and learn through backpropagation. Deep neural networks with multiple hidden layers did not work well until recent developments in unsupervised pre-training of layers. Experiments on MNIST digit recognition and NORB object recognition datasets showed deep belief networks and deep Boltzmann machines outperform other models. Deep learning is now widely used for applications like computer vision, natural language processing, and information retrieval.
Artificial Intelligence, Machine Learning, Deep Learning
The 5 myths of AI
Deep Learning in action
Basics of Deep Learning
NVIDIA Volta V100 and AWS P3
1. Recurrent neural networks can model sequential data like time series by incorporating hidden state that has internal dynamics. This allows the model to store information for long periods of time.
2. Two key types of recurrent networks are linear dynamical systems and hidden Markov models. Long short-term memory networks were developed to address the problem of exploding or vanishing gradients in training traditional recurrent networks.
3. Recurrent networks can learn tasks like binary addition by recognizing patterns in the inputs over time rather than relying on fixed architectures like feedforward networks. They have been successfully applied to handwriting recognition.
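The hidden-state recurrence in point 1 can be sketched as a vanilla RNN forward pass (the shapes, names, and use of tanh are illustrative assumptions):

```python
import numpy as np

def rnn_forward(x_seq, h0, Wxh, Whh, bh):
    """Unroll a vanilla RNN: the hidden state carries context across time."""
    h = h0
    states = []
    for x_t in x_seq:                           # one update per time step
        h = np.tanh(x_t @ Wxh + h @ Whh + bh)   # mix new input with old state
        states.append(h)
    return np.stack(states)
```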
1. Machine learning is a set of techniques that use data to build models that can make predictions without being explicitly programmed.
2. There are two main types of machine learning: supervised learning, where the model is trained on labeled examples, and unsupervised learning, where the model finds patterns in unlabeled data.
3. Common machine learning algorithms include linear regression, logistic regression, decision trees, support vector machines, naive Bayes, k-nearest neighbors, k-means clustering, and random forests. These can be used for regression, classification, clustering, and dimensionality reduction.
The document discusses deep neural networks (DNN) and deep learning. It explains that deep learning uses multiple layers to learn hierarchical representations from raw input data. Lower layers identify lower-level features while higher layers integrate these into more complex patterns. Deep learning models are trained on large datasets by adjusting weights to minimize error. Applications discussed include image recognition, natural language processing, drug discovery, and analyzing satellite imagery. Both advantages like state-of-the-art performance and drawbacks like high computational costs are outlined.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Hassner (2015).
This document provides resources for topology and shape optimization using Altair software, including links to free courses, tutorials on the SIMP method, modal analysis, reduced order modeling using DMIG, and the use of submodels, stiffness matrices, and global optimization with soft convergence criteria. It lists various topology optimization parameters and options in Altair software and includes external links to additional tutorials and community questions.
Automated Models for Quantifying Centrality of Survey Responses (Matthew Lease)
Research talk presented at "Innovations in Online Research" (October 1, 2021)
Event URL: https://web.cvent.com/event/d063e447-1f16-4f70-a375-5d6978b3feea/websitePage:b8d4ce12-3d02-4d24-897d-fd469ca4808a.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
2. 2018.12.15. MODUCON
• Research Scientist
• Ph.D. in Physics
• Research interests
• Generative models (GANs)
• Style transfer
• Reinforcement learning
• Generate and Transfer for Art (GTA Lab)
• github: https://github.com/ilguyi
• e-mail: ilgu.yi@modulabs.co.kr
8. Examples
• Portfolio optimization
• variables: amounts invested in different assets
• constraints: budget, max./min. investment per asset, minimum return
• objective: overall risk or return variance
• Device sizing in electronic circuits
• variables: device widths and lengths
• constraints: manufacturing limits, timing requirements, maximum area
• objective: power consumption
• Data fitting
• variables: model parameters
• constraints: prior information, parameter limits
• objective: measure of misfit or prediction error
Slide credit: Boyd & Vandenberghe, https://web.stanford.edu/~boyd/cvxbook/bv_cvxslides.pdf, p. 3
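The data-fitting case above can be sketched as a least-squares fit in numpy; the linear model and the synthetic points below are illustrative assumptions:

```python
import numpy as np

# Synthetic data from a noiseless line y = 2x + 1 (illustrative assumption)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Variables: model parameters (slope, intercept)
# Objective: measure of misfit ||Xw - y||^2, solved in closed form here
X = np.stack([x, np.ones_like(x)], axis=1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```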
9. Why do We Care?
• Optimization is at the heart of many (most practical?) machine learning algorithms
• Linear regression:
  minimize_w ||Xw − y||²
• Classification (logistic regression or SVM):
  minimize_w Σ_{i=1..n} log(1 + exp(−y_i x_iᵀ w))
  or  minimize_w ||w||² + C Σ_{i=1..n} ξ_i   s.t.  ξ_i ≥ 1 − y_i x_iᵀ w,  ξ_i ≥ 0
Slide credit: Duchi, Convex Optimization for Machine Learning Fall 2009, p. 5
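The logistic-regression objective above can be minimized with plain gradient descent; a minimal numpy sketch, assuming a toy separable dataset and an illustrative learning rate:

```python
import numpy as np

# Toy linearly separable data; labels y_i in {-1, +1} (illustrative)
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
eta = 0.1  # learning rate (assumption)
for _ in range(500):
    margins = y * (X @ w)
    # Gradient of sum_i log(1 + exp(-y_i x_i^T w))
    grad = -(X.T @ (y / (1.0 + np.exp(margins))))
    w -= eta * grad

loss = np.sum(np.log1p(np.exp(-y * (X @ w))))
```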
10. We still Care…
• Maximum likelihood estimation:
  maximize_θ Σ_{i=1..n} log p_θ(x_i)
• Collaborative filtering:
  minimize_w Σ_{i<j} log(1 + exp(wᵀ x_i − wᵀ x_j))
• k-means:
  minimize_{μ_1,…,μ_k} J(μ) = Σ_{j=1..k} Σ_{i∈C_j} ||x_i − μ_j||²
• And more (graphical models, feature selection, active learning, control)
Slide credit: Duchi, Convex Optimization for Machine Learning Fall 2009, p. 6
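The k-means objective is typically minimized by alternating assignment and mean-update steps; a minimal numpy sketch (the toy data, k = 2, and the initialization are assumptions):

```python
import numpy as np

# Two well-separated toy clusters (illustrative)
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
mu = X[[0, 2]].copy()  # initial centers (assumption)

for _ in range(10):
    # Assignment step: nearest center for each point
    d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    c = d.argmin(axis=1)
    # Update step: each center becomes the mean of its cluster
    for j in range(len(mu)):
        mu[j] = X[c == j].mean(axis=0)

J = ((X - mu[c]) ** 2).sum()  # objective J(mu)
```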
25. Computing the Gradient
• Use backpropagation to compute gradients efficiently
• Need a differentiable function
• Can’t use functions like argmax or hard binary
• Unless using a different way to compute gradients
Slide credit: P. Ramachandran, CS 598 LAZ- Cutting-Edge Trends in Deep Learning and Recognition, Lec05, p. 19
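A common way to validate a backpropagated (analytic) gradient is to compare it against numerical differentiation; a minimal sketch, assuming a least-squares loss:

```python
import numpy as np

# f(w) = ||Xw - y||^2, a differentiable function (illustrative choice)
X = np.array([[1.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])

def f(w):
    r = X @ w - y
    return r @ r

# Analytic gradient: 2 X^T (Xw - y)
grad_analytic = 2.0 * X.T @ (X @ w - y)

# Numerical gradient via central finite differences
eps = 1e-6
grad_numeric = np.zeros_like(w)
for k in range(w.size):
    e = np.zeros_like(w)
    e[k] = eps
    grad_numeric[k] = (f(w + e) - f(w - e)) / (2 * eps)
```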
26. How to Pick the Learning Rate?
• Too big = diverge, too small = slow convergence
• No “one learning rate to rule them all”
• Start from a high value and keep cutting by half if model diverges
• Learning rate schedule: decay learning rate over time
Slide credit: P. Ramachandran, CS 598 LAZ- Cutting-Edge Trends in Deep Learning and Recognition, Lec05, p. 19
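The start-high-and-halve heuristic above can be sketched on a one-dimensional quadratic; the objective and the starting rate are illustrative assumptions:

```python
# f(w) = 0.5 * 10 * w^2; gradient descent diverges here for eta > 0.2
def f(w):
    return 0.5 * 10.0 * w ** 2

eta = 1.0  # deliberately too big (assumption)
w = 1.0
prev = f(w)
for _ in range(100):
    w_new = w - eta * 10.0 * w  # gradient step, grad = 10 w
    if f(w_new) > prev:         # model diverges: halve the rate and retry
        eta *= 0.5
        continue
    w, prev = w_new, f(w_new)
```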
27. Too Small Learning Rate
Figure credit: A. Géron, Hands-on Machine Learning with Scikit-Learn & TensorFlow, chap 1, p. 112
28. Too Large Learning Rate
Figure credit: A. Géron, Hands-on Machine Learning with Scikit-Learn & TensorFlow, chap 1, p. 112
29. Learning Rate
• Which is better?
• Is it better to keep the learning rate constant?
• Decay the learning rate appropriately
Figure credit: cs231n spring 2018 slide: Lecture 6. p. 84
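Two common learning rate schedules, sketched with illustrative constants (not values from the slides):

```python
import math

eta0 = 0.1  # initial learning rate (assumption)

def step_decay(t, drop=0.5, every=10):
    """Halve the learning rate every `every` steps."""
    return eta0 * drop ** (t // every)

def exp_decay(t, k=0.05):
    """Smooth exponential decay over time."""
    return eta0 * math.exp(-k * t)
```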
30. Stochastic Gradient Descent
• Gradient descent update:
  w_k′ := w_k − η ∂L/∂w_k
• Cross entropy error (CEE) loss:
  L = −(1/N) Σ_i y_i log ŷ_i
• Mini-batch loss, with mini-batch size m:
  L = −(1/m) Σ_i y_i log ŷ_i
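The mini-batch update above can be sketched as softmax regression trained with SGD and cross entropy error in numpy; the toy data, batch size, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class problem; sizes and eta are illustrative assumptions
N, D, C, m, eta = 100, 2, 2, 10, 0.5
X = rng.normal(size=(N, D))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(C)[labels]  # one-hot targets y_i

W = np.zeros((D, C))
for epoch in range(50):
    idx = rng.permutation(N)          # reshuffle each epoch
    for s in range(0, N, m):
        b = idx[s:s + m]              # mini-batch of size m
        z = X[b] @ W
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)   # predictions y_hat
        grad = X[b].T @ (p - Y[b]) / m      # grad of -(1/m) sum y log y_hat
        W -= eta * grad                     # w' := w - eta * dL/dw

acc = (np.argmax(X @ W, axis=1) == labels).mean()
```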
32. The Momentum Method
• Introduce a velocity variable v:
• It is the direction and speed at which the parameters move through
parameter space
• Momentum is the mass-times-velocity term in physics
• The momentum algorithm assumes unit mass
• A hyperparameter α ∈ [0, 1) determines how quickly the contributions of
previous gradients exponentially decay
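The velocity update above can be sketched in a few lines of NumPy. This is a minimal sketch; the function name `momentum_step` and the values of `lr` and `alpha` are my own choices, not from the slides:

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.01, alpha=0.9):
    """One momentum update: v is an exponentially decaying accumulation of
    past gradients (decay rate alpha); parameters move along the velocity."""
    v = alpha * v - lr * grad   # update velocity
    w = w + v                   # move parameters through parameter space
    return w, v

# toy run: minimize f(w) = w^2 (gradient 2w), starting from w = 5
w, v = np.array([5.0]), np.zeros(1)
for _ in range(200):
    w, v = momentum_step(w, v, 2 * w)
```

With α = 0 this reduces to plain gradient descent; larger α lets consistent gradient directions build up speed.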
53.
Learning Rate is Crucial
• Learning rate: one of the most difficult hyperparameters to set
• It significantly affects model performance
• The loss function is often highly sensitive to some directions in
parameter space and insensitive to others
• Momentum helps, but introduces another hyperparameter
• If the directions of sensitivity are axis-aligned, it makes sense to use
a separate learning rate for each parameter and adjust these rates
throughout learning
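The sensitivity problem is easy to see on an ill-conditioned quadratic. In this toy example (mine, not from the slides), a single learning rate chosen just below the stability limit of the steep direction leaves the shallow direction nearly untouched:

```python
import numpy as np

# f(w) = 0.5 * (w1^2 + 1000 * w2^2): shallow along w1, steep along w2
def grad(w):
    return np.array([w[0], 1000.0 * w[1]])

w = np.array([1.0, 1.0])
lr = 0.0019   # just under 2/1000, the largest stable step for w2
for _ in range(100):
    w = w - lr * grad(w)
# w2 has converged, but w1 has barely moved after 100 steps
```

Any larger learning rate makes the steep coordinate diverge; this is exactly the situation where a separate learning rate per parameter pays off.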
55.
Adagrad
• J. Duchi et al., Adaptive subgradient methods for online learning and
stochastic optimization (http://jmlr.org/papers/v12/duchi11a.html)
• It adapts the learning rate to the parameters, performing smaller updates (i.e.
low learning rates) for parameters associated with frequently occurring
features, and larger updates (i.e. high learning rates) for parameters
associated with infrequent features
• Previously, we performed an update for all parameters at once as every
parameter used the same learning rate
• Adagrad instead uses a different learning rate for every parameter at
every time step
• Update rule for parameter w_i at time step t:
w_{t+1, i} = w_{t, i} − η / √(G_{t, ii} + ε) · g_{t, i}
where g_{t, i} is the gradient of the loss w.r.t. w_i at step t, G_t is a
diagonal matrix whose entries accumulate the squared gradients, and ε
avoids division by zero
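The per-parameter update can be sketched in NumPy as follows. The function name `adagrad_step` and the learning-rate value are my own; the accumulator `G` plays the role of the diagonal of G_t:

```python
import numpy as np

def adagrad_step(w, G, grad, lr=0.5, eps=1e-8):
    """One Adagrad update: G accumulates squared gradients per parameter,
    so frequently/strongly updated parameters get smaller effective rates."""
    G = G + grad ** 2
    w = w - lr * grad / (np.sqrt(G) + eps)
    return w, G

# toy run: f(w) = w1^2 + 10 * w2^2, whose gradients differ per coordinate
w, G = np.array([3.0, 3.0]), np.zeros(2)
for _ in range(500):
    grad = np.array([2 * w[0], 20 * w[1]])
    w, G = adagrad_step(w, G, grad)
```

Note that because G only grows, the effective learning rate shrinks monotonically; this motivates later variants such as RMSProp and Adam, which replace the sum with a decaying average.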