This talk by Lucas Theis from Twitter/Magic Pony on "Compressing Images with Neural Networks" was presented at the Learning Image Representations event on 30th August at Twitter as part of the Creative AI meetup.
Since the advent of the horseshoe priors for regularization, global-local shrinkage methods have proved to be a fertile ground for the development of Bayesian theory and methodology in machine learning. They have achieved remarkable success in computation, and enjoy strong theoretical support. Much of the existing literature has focused on the linear Gaussian case. The purpose of the current talk is to demonstrate that the horseshoe priors are useful more broadly, by reviewing both methodological and computational developments in complex models that are more relevant to machine learning applications. Specifically, we focus on methodological challenges in horseshoe regularization in nonlinear and non-Gaussian models; multivariate models; and deep neural networks. We also outline the recent computational developments in horseshoe shrinkage for complex models along with a list of available software implementations that allows one to venture out beyond the comfort zone of the canonical linear regression problems.
We review our recent progress in the development of graph kernels. We discuss the hash graph kernel framework, which makes the computation of kernels for graphs with vertices and edges annotated with real-valued information feasible for large data sets. Moreover, we summarize our general investigation of the benefits of explicit graph feature maps in comparison to using the kernel trick. Our experimental studies on real-world data sets suggest that explicit feature maps often provide sufficient classification accuracy while being computed more efficiently. Finally, we describe how to construct valid kernels from optimal assignments to obtain new expressive graph kernels. These make use of the kernel trick to establish one-to-one correspondences. We conclude with a discussion of our results and their implications for the future development of graph kernels.
Glocalized Weisfeiler-Lehman Graph Kernels: Global-Local Feature Maps of Graphs Christopher Morris
Most state-of-the-art graph kernels only take local graph properties into account, i.e., the kernel is computed with regard to properties of the neighborhood of vertices or other small substructures. On the other hand, kernels that do take global graph properties into account may not scale well to large graph databases. Here we propose to start exploring the space between local and global graph kernels, striking the balance between both worlds. Specifically, we introduce a novel graph kernel based on the k-dimensional Weisfeiler-Lehman algorithm. Unfortunately, the k-dimensional Weisfeiler-Lehman algorithm scales exponentially in k. Consequently, we devise a stochastic version of the kernel with provable approximation guarantees using conditional Rademacher averages. On bounded-degree graphs, it can even be computed in constant time. We support our theoretical results with experiments on several graph classification benchmarks, showing that our kernels often outperform the state-of-the-art in terms of classification accuracies.
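For intuition, the classical 1-dimensional Weisfeiler-Lehman relabeling, which the k-dimensional algorithm above generalizes, can be sketched as follows. This is an illustrative sketch of the standard WL subtree feature map, not the glocalized kernel itself; the kernel value between two graphs is the dot product of their feature maps.

```python
from collections import defaultdict

def wl_relabel(adjacency, labels, iterations=3):
    """1-dimensional Weisfeiler-Lehman (color refinement) feature map.

    adjacency: dict mapping vertex -> list of neighbour vertices
    labels:    dict mapping vertex -> initial (hashable) label
    Returns counts of compressed labels over all iterations.
    """
    features = defaultdict(int)
    for v, l in labels.items():
        features[(0, l)] += 1
    compress = {}                      # signature -> compressed integer label
    for it in range(1, iterations + 1):
        new_labels = {}
        for v in adjacency:
            # A vertex's new label is its old label together with the
            # sorted multiset of its neighbours' labels.
            signature = (labels[v], tuple(sorted(labels[u] for u in adjacency[v])))
            if signature not in compress:
                compress[signature] = len(compress)
            new_labels[v] = compress[signature]
        labels = new_labels
        for v, l in labels.items():
            features[(it, l)] += 1
    return dict(features)
```

On a triangle with uniform labels, every vertex keeps receiving the same signature, so the feature map stays concentrated on one label per iteration.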
https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
How can we apply machine learning techniques on graphs to obtain predictions in a variety of domains? Learn more from Sami Abu-El-Haija, an AI Scientist with experience from both industry (Google Research) and academia (University of Southern California).
https://telecombcn-dl.github.io/2018-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Introduction to Graph Neural Networks: Basics and Applications - Katsuhiko Ishiguro, Preferred Networks
This presentation explains basic ideas of graph neural networks (GNNs) and their common applications. Primary target audiences are students, engineers and researchers who are new to GNNs but interested in using GNNs for their projects. This is a modified version of the course material for a special lecture on Data Science at Nara Institute of Science and Technology (NAIST), given by Preferred Networks researcher Katsuhiko Ishiguro, PhD.
High-quality rendering of 3D virtual environments typically depends on high-quality 3D models with significant geometric complexity and texture data. One major bottleneck for real-time image synthesis is the number of state changes that the rendering API has to perform. To improve performance, batching can be used to group and sort geometric primitives into batches so as to reduce the number of required state changes; the size of the batches determines the number of required draw calls and is therefore critical for rendering performance. For example, in the case of texture atlases, which provide an approach to efficient texture management, the batch size is limited by the efficiency of the texture-packing algorithm and the texture resolution itself. This paper presents a pre-processing approach and rendering technique that overcomes these limitations by further grouping textures or texture atlases, and thus enables the creation of larger geometry batches. It is based on texture arrays in combination with an additional indexing scheme that is evaluated at run-time using shader programs. This type of texture management is especially suitable for real-time rendering of large-scale, texture-rich 3D virtual environments, such as virtual city and landscape models.
Slides for a talk about Graph Neural Networks architectures, overview taken from very good paper by Zonghan Wu et al. (https://arxiv.org/pdf/1901.00596.pdf)
Introduction to Graph Neural Networks @ Vienna Deep Learning meetup - Liad Magen
Graphs are useful data structures that can be used to model various sorts of data: from molecular protein structures to social networks, pandemic spreading models, and visually rich content such as websites and invoices. In recent years, graph neural networks have taken a huge leap forward. They are a powerful tool that every data scientist should know. In this talk, we will review their basic structure, show some example usages, and explore the existing (Python) tools.
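The basic structure mentioned above is message passing: each layer aggregates features from a node's neighbours and transforms them. A minimal NumPy sketch of one graph-convolution layer with the common symmetric normalization (my own toy example, not code from the talk):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d') weights.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0)        # aggregate, transform, ReLU
```

Stacking a few such layers lets information flow over multi-hop neighbourhoods; a readout (e.g. mean over nodes) then yields graph-level predictions.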
Interactive Rendering and Stylization of Transportation Networks Using Distance Fields - Matthias Trapp
Transportation networks, such as streets, railroads or metro systems, constitute primary elements in cartography for reckoning and navigation. In recent years, they have become an increasingly important part of 3D virtual environments for the interactive analysis and communication of complex hierarchical information, for example in routing, logistics optimization, and disaster management. A variety of rendering techniques have been proposed that deal with integrating transportation networks within these environments, but have so far neglected the many challenges of an interactive design process to adapt their spatial and thematic granularity (i.e., level-of-detail and level-of-abstraction) according to a user's context. This paper presents an efficient real-time rendering technique for the view-dependent rendering of geometrically complex transportation networks within 3D virtual environments. Our technique is based on distance fields using deferred texturing that shifts the design process to the shading stage for real-time stylization. We demonstrate and discuss our approach by means of street networks using cartographic design principles for context-aware stylization, including view-dependent scaling for clutter reduction, contour-lining to provide figure-ground, handling of street crossings via shading-based blending, and task-dependent colorization. Finally, we present potential usage scenarios and applications together with a performance evaluation of our implementation.
[Japanese] Obake-GAN (Perturbative GAN): GAN with Perturbation Layers - yumakishi
Abstract
Obake-GAN (Perturbative GAN) is proposed: it replaces the convolution layers of existing convolutional GANs (DCGAN, WGAN-GP, BigGAN, etc.) with perturbation layers that add fixed noise masks. Compared with the convolutional GANs, the number of parameters to be trained is smaller, training converges faster, the Inception Score of generated images is higher, and the overall training cost is reduced. Algorithmic generation of the noise masks is also proposed, with which both training and generation can be accelerated in hardware. Obake-GAN is evaluated on standard datasets (CIFAR10, LSUN, ImageNet), both when a perturbation layer is adopted only in the Generator and when it is introduced to both the Generator and the Discriminator.
Presentation slides for the master's thesis "Obake-GAN: GAN with Perturbation Layers".
Introducing perturbation layers in place of the GAN's convolution layers yields:
- 52% reduction in Generator trainable parameters
- 87% reduction in Discriminator trainable parameters
- 45% improvement in Inception Score on ImageNet
- Faster training convergence
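A rough sketch of the perturbation-layer mechanism described in the abstract above. The shapes and the 1x1-mixing step are my assumptions, following the general perturbative-layer idea (fixed noise masks plus a learned linear combination); this is not the thesis code.

```python
import numpy as np

def perturbation_layer(x, masks, weights):
    """Perturbation layer sketch: instead of learned k x k convolutions,
    add M fixed (non-trainable) noise masks to the input, apply a
    nonlinearity, and mix the responses with a learned 1x1 convolution.

    x:       (C, H, W) input feature map
    masks:   (M, H, W) fixed noise masks, drawn once and frozen
    weights: (C_out, C * M) learned 1x1-convolution weights
    """
    C, H, W = x.shape
    M = masks.shape[0]
    # Perturb every input channel with every mask, then ReLU.
    perturbed = np.maximum(x[:, None] + masks[None, :], 0)   # (C, M, H, W)
    flat = perturbed.reshape(C * M, H * W)
    # A 1x1 convolution is a matrix multiply over the channel dimension.
    out = weights @ flat                                     # (C_out, H*W)
    return out.reshape(-1, H, W)
```

Only `weights` is trained, which is why the parameter count drops sharply relative to a stack of learned k x k convolution kernels.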
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
[Japanese] Paper introduction: Learning With Neighbor Consistency for Noisy Labels - Toru Tamaki
Ahmet Iscen, Jack Valmadre, Anurag Arnab, Cordelia Schmid, "Learning With Neighbor Consistency for Noisy Labels" CVPR2022
https://openaccess.thecvf.com/content/CVPR2022/html/Iscen_Learning_With_Neighbor_Consistency_for_Noisy_Labels_CVPR_2022_paper.html
ADVANCED SINGLE IMAGE RESOLUTION UPSURGING USING A GENERATIVE ADVERSARIAL NETWORK - sipij
The resolution of an image is a very important criterion for evaluating its quality: higher resolution is always preferable, since low-resolution images are fuzzy and become unclear and indistinct when enlarged. High resolution matters in fields such as medical imaging and astronomy. In recent years, various research works have aimed to generate a higher-resolution image from its lower-resolution counterpart. In this paper, we propose a technique for generating higher-resolution images from lower-resolution ones using a deep network built from Residual-in-Residual Dense Blocks. We also compare our method with others to show that it produces images of better visual quality.
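A Residual-in-Residual Dense Block stacks densely connected convolutions inside nested, scaled residual connections. A toy NumPy sketch of one dense block, using per-pixel linear maps (1x1 "convolutions") in place of real spatial convolutions; the shapes and the scaling factor beta = 0.2 are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def dense_block(x, weights, beta=0.2):
    """One densely connected block: each layer sees the concatenation of
    all previous feature maps; the block output is a residually added,
    beta-scaled version of the last layer. An RRDB stacks several such
    blocks and wraps them in one more scaled residual connection.

    x:       (C, H, W) input feature map
    weights: list of (C_out_i, C_total_i) per-pixel linear maps, where
             C_total_i grows as feature maps are concatenated
    """
    feats = [x]
    for W in weights:
        inp = np.concatenate(feats, axis=0)               # dense connectivity
        out = np.maximum(np.einsum('oc,chw->ohw', W, inp), 0)  # linear + ReLU
        feats.append(out)
    return x + beta * feats[-1]                           # scaled residual
```

The dense connections reuse earlier features, while the scaled residual keeps very deep stacks of such blocks stable to train.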
AUTO AI 2021 talk: Real-world data augmentations for autonomous driving - Ravi Kiran B.
Modern perception pipelines in autonomous driving (AD) systems are based on Deep Neural Networks (DNNs) that involve multiple hyper-parameter configurations and training strategies. Data augmentation is now a well-established training strategy for improving the generalization of DNNs, especially in low-data regimes, and self-supervised and semi-supervised methods depend heavily on data augmentation strategies. In this study we view the generalization gains from data augmentation as arising because augmentations implicitly model the geometric, viewpoint-based transformations present in images/point clouds due to noise, perspective, and the motion of the ego-vehicle. We briefly review current data augmentation strategies for perception tasks in AD, and recent developments in understanding their effects on model generalization.
In the talk we shall review data augmentation strategies through two case studies:
- Improving model performance of monocular 3D object detection model by using geometry preserving data augmentations on images
- Understanding the role of data augmentation in reducing data redundancy and improving label efficiency within an active learning pipeline
A NOVEL IMAGE STEGANOGRAPHY APPROACH USING MULTI-LAYERS DCT FEATURES BASED ON SUPPORT VECTOR MACHINE - ijma
Steganography is the science of hiding data in a cover image without altering the cover image. Recent steganography research is largely concerned with hiding large amounts of information within image and/or audio files. This paper proposes a novel approach for hiding the data of a secret image using Discrete Cosine Transform (DCT) features based on a linear Support Vector Machine (SVM) classifier. The DCT features are used to decrease the image's redundant information. Moreover, DCT is used to embed the secret message in the least significant bits of the RGB values. Each bit in the cover image is changed only to an extent not perceptible to the human eye. The SVM is used as a classifier to speed up the hiding process via the DCT features. The proposed method is implemented and the results show significant improvements. In addition, the performance analysis is calculated based on the parameters MSE, PSNR, NC, processing time, capacity, and robustness.
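The least-significant-bit embedding step described above can be sketched as follows. This is a generic LSB illustration; the paper's DCT feature extraction and SVM stages around it are not reproduced here.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Embed a bit sequence into the least significant bits of a
    flattened cover array (e.g. quantized coefficients or RGB values).
    Changing only the LSB alters each value by at most 1, which is
    imperceptible to the human eye.
    """
    stego = cover.flatten().copy()
    n = len(bits)
    # Clear the LSB (mask 0xFE), then set it to the message bit.
    stego[:n] = (stego[:n] & 0xFE) | np.asarray(bits, dtype=stego.dtype)
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the message back from the least significant bits."""
    return (stego.flatten()[:n_bits] & 1).tolist()
```

Round-tripping a short bit string through a small cover array recovers it exactly while changing no pixel by more than 1.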
An Image Representation using Compressive Sensing and Arithmetic Coding - IJCERT
The demand for graphics and multimedia communication over the internet is growing day by day. Generally, the coding efficiency achieved by CS measurements is below that of widely used wavelet coding schemes (e.g., JPEG 2000). In existing wavelet-based CS schemes, the DWT is mainly applied for sparse representation, and the correlation of DWT coefficients has not yet been fully exploited. To improve coding efficiency, the statistics of DWT coefficients are investigated, and a novel CS-based image representation scheme is proposed that considers the intra- and inter-similarity among DWT coefficients. Multi-scale DWT is first applied; the low- and high-frequency subbands are coded separately because the scaling coefficients capture most of the image energy. At the decoder side, two different recovery algorithms are presented to fully exploit the correlation of scaling and wavelet coefficients. In essence, the proposed CS-based coding method can be viewed as a hybrid compressed sensing scheme that gives better coding efficiency than other CS-based coding methods.
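CS decoding requires a sparsity-exploiting recovery algorithm. As a minimal, generic illustration (a standard greedy baseline, not the paper's DWT-structured decoders), Orthogonal Matching Pursuit recovers a k-sparse signal x from measurements y = Phi x:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column of Phi most
    correlated with the residual, then re-fit the coefficients on the
    selected support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # explain-away update
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

With enough random measurements relative to the sparsity level, the greedy selection identifies the true support and the least-squares fit recovers the coefficients exactly.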
Sinusoidal Function for Population Size in Quantum Evolutionary Algorithm and... - sipij
Fractal image compression is a well-known NP-hard problem. The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm that uses a probabilistic representation for solutions and is highly suitable for combinatorial problems such as the knapsack problem. Genetic algorithms are widely used for fractal image compression, but QEA has not yet been applied to this kind of problem. This paper improves QEA with a varying population size and applies it to fractal image compression. Utilizing the self-similarity property of a natural image, the partitioned iterated function system (PIFS) that encodes the image is found via QEA. Experimental results show that our method performs better than GA and conventional fractal image compression algorithms.
A simple framework for contrastive learning of visual representations - Devansh16
Link: https://machine-learning-made-simple.medium.com/learnings-from-simclr-a-framework-contrastive-learning-for-visual-representations-6c145a5d8e99
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
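The contrastive loss at the core of SimCLR is NT-Xent (normalized temperature-scaled cross-entropy). A minimal NumPy sketch, assuming the embedding matrix is arranged so that rows 2k and 2k+1 are the two augmented views of example k (that layout is my convention for the sketch):

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over 2N embeddings (rows 2k, 2k+1 are positive pairs).

    Each row's positive is its partner view; all other rows act as
    negatives. Lower similarity to negatives and higher similarity to
    the positive both decrease the loss.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                        # scaled cosine sims
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    positives = np.array([i ^ 1 for i in range(n)])    # partner index (XOR 1)
    # Cross-entropy of the positive against all candidates per row.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), positives].mean()
```

When the two views of each example map to the same direction, the loss is much lower than when positives point at unrelated embeddings, which is the gradient signal that shapes the representation.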
Comments: ICML'2020. Code and pretrained models at this https URL
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
Cite as: arXiv:2002.05709 [cs.LG]
(or arXiv:2002.05709v3 [cs.LG] for this version)
MINIMIZING DISTORTION IN STEGANOGRAPHY BASED ON IMAGE FEATURE - ijcsit
WOW has two defects. First, image features are not considered when hiding information along the minimal-distortion path, which leads to high total distortion. Second, total distortion grows too rapidly as hidden capacity increases, which leads to poor anti-detection performance when the hidden capacity is large. To solve these two problems, a new algorithm named MDIS is proposed. MDIS is also based on the additive-distortion-minimizing framework of STC and uses the same distortion function as WOW. MDIS exploits the fact that many pixels share a value with one of their eight neighbouring pixels, together with a secret-sharing mechanism, which reduces the total distortion, improves anti-detection, and increases the PSNR. Experimental results show that MDIS has better invisibility, smaller distortion, and stronger anti-detection than WOW.
Performance analysis of transformation and Bogdonov chaotic substitution based... - IJECEIAES
In this article, an image encryption technique combining the Pseudo Hadamard transformation with a modified Bogdonav chaotic generator is proposed. Pixel-position transformation is performed using the Pseudo Hadamard transformation, and pixel values are varied using Bogdonav chaotic substitution. The Bogdonav chaotic generator produces random sequences with very little correlation between adjacent elements. The cipher image obtained from the transformation stage is subjected to substitution using the Bogdonav chaotic sequence to break the correlation between adjacent pixels. The cipher image is subjected to various security tests under noisy conditions, and a very high degree of similarity is observed between the original and decrypted images after the deciphering process.
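The two-stage structure above (position transformation, then chaotic value substitution) can be sketched as follows. The primitives are stand-ins of my own choosing for illustration: a keyed pseudo-random permutation in place of the Pseudo Hadamard transformation, and a logistic map in place of the Bogdonav generator; a real scheme would derive both from a secret key.

```python
import numpy as np

def chaotic_encrypt(img, key=0.7, r=3.99):
    """Two-stage sketch: permute pixel positions, then XOR pixel values
    with a chaotic keystream."""
    flat = img.flatten()
    n = flat.size
    # Stage 1: pixel-position transformation (keyed permutation).
    perm = np.random.default_rng(42).permutation(n)
    shuffled = flat[perm]
    # Stage 2: pixel-value substitution with a chaotic keystream.
    x, stream = key, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)              # logistic map iteration
        stream[i] = int(x * 256) % 256
    cipher = shuffled ^ stream           # XOR substitution
    return cipher.reshape(img.shape), perm, stream

def chaotic_decrypt(cipher, perm, stream):
    """Invert the substitution, then invert the permutation."""
    shuffled = cipher.flatten() ^ stream
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled
    return flat.reshape(cipher.shape)
```

Decryption reverses the two stages in opposite order, which is why permutation-then-substitution schemes round-trip exactly.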
Similar to Lucas Theis - Compressing Images with Neural Networks - Creative AI meetup
Luba Elliott - AI art - ICCV Conference (Luba Elliott)
This talk was given as part of the ICCV Workshop on Computer Vision for Fashion, Art and Design on the 2nd November in Seoul. See the workshop computer vision art gallery at computervisionart.com.
AI Art Gallery Overview - Luba Elliott - NeurIPS Creativity Workshop (Luba Elliott)
This talk on 'AI Art Gallery Overview' was given by Luba Elliott at the NeurIPS Creativity Workshop on the 8th December in Montreal, Canada. The AI art gallery can be found at www.aiartonline.com.
Creativity is Intelligence - Kenneth Stanley - NeurIPS Creativity Workshop (Luba Elliott)
This invited talk on 'Creativity is Intelligence' was given by Kenneth Stanley at the 2018 NeurIPS Workshop on Machine Learning for Creativity and Design in Montreal, Canada on the 8th December.
Seen by machine: Computational Spectatorship in the BBC Archive (Luba Elliott)
This talk on 'Seen by machine: Computational Spectatorship in the BBC Archive' was given by Daniel Chávez Heras as part of the Creative AI meetup on the 15th November at the Goethe Institute in London.
Natasha Jaques - Learning via Social Awareness - Creative AI meetup (Luba Elliott)
This talk by Natasha Jaques from MIT Media Lab on "Learning via Social Awareness: Improving a deep generative sketching model with facial feedback" was presented on 10th September 2018 at IDEA London as part of the Creative AI meetup.
Sander Dieleman - Generating music in the raw audio domain - Creative AI meetup (Luba Elliott)
This talk by Sander Dieleman from DeepMind on "Generating music in the raw audio domain" was presented on 10th September 2018 at IDEA London as part of the Creative AI meetup.
Marco Marchesi - Practical uses of style transfer in the creative industry (Luba Elliott)
This talk by Marco Marchesi from Happy Finish on "Can you make this image more neoclassical? Practical uses of Style Transfer in the creative industry" was presented at the Style Transfer event on 18th April at TechHub as part of the Creative AI meetup.
Hooman Shayani - CAD/CAM in the Age of AI: Designers’ Journey from Earth to Sky (Luba Elliott)
This talk by Hooman Shayani from Autodesk on "CAD/CAM in the Age of AI: Designers’ Journey from Earth to Sky" was presented at the Design and Manufacturing in the Age of AI event on 24th October at UCL as part of the Creative AI meetup.
Emily Denton - Unsupervised Learning of Disentangled Representations from Video (Luba Elliott)
This talk by Emily Denton from New York University on "Unsupervised Learning of Disentangled Representations from Video" was presented at the Learning Image Representations event on 30th August at Twitter as part of the Creative AI meetup.
Georgia Ward Dyer - O Time thy pyramids - Creative AI meetup (Luba Elliott)
This talk by Georgia Ward Dyer from Royal College of Art on "O Time thy pyramids" was presented at the Calligraphic Traces event on 31st July at Thoughtworks as part of the Creative AI meetup. The upload consists of slides followed by Georgia's notes from the talk.
Daniel Berio - Graffiti synthesis, a motion centric approach - Creative AI meetup (Luba Elliott)
This talk by Daniel Berio from Goldsmiths University on "Graffiti synthesis, a motion centric approach" was presented at the Calligraphic Traces event on 31st July at Thoughtworks as part of the Creative AI meetup.
Ali Eslami - Artificial Intelligence and Computer Aided Design - Creative AI meetup (Luba Elliott)
This talk by Ali Eslami on "Artificial Intelligence and Computer Aided Design" was presented at the AI & Architecture event on the 21st June held at the Digital Catapult. It was part of the Creative AI meetup series and the London Festival of Architecture.
Daghan Cam - Adaptive Autonomous Manufacturing with AI - Creative AI meetup (Luba Elliott)
This talk by Daghan Cam from AI Build on "Adaptive Autonomous Manufacturing with AI" was presented at the AI & Architecture event on the 21st June held at the Digital Catapult. It was part of the Creative AI meetup series and the London Festival of Architecture.
Martin Arjovsky - Wasserstein GAN - Creative AI meetup (Luba Elliott)
This talk "On Different Distances Between Distributions and Generative Adversarial Networks" about the Wasserstein GAN was presented at the Creative AI meetup on 26th May held at Imperial College in partnership with the Deep Learning Network.
3. Is compression still a problem?
Demand for higher quality (4k, 60fps)
New forms of media (VR, 360, stereo)
Network congestion
Emerging markets
4. Image compression
A brief introduction to image compression
Autoregressive models
Lossless image compression with autoregressive models
SRGANs
Using GANs and super-resolution as a pragmatic approach to compression
Compressive autoencoders
End-to-end training of neural networks for compression
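The three approaches above share one idea: learn the transform end to end against a rate-distortion objective. As a concrete illustration of the compressive-autoencoder idea, here is a minimal numpy sketch: encode, quantize, decode, and score distortion plus a rate term. The linear transforms, the weights, and the unit-Gaussian rate proxy are my own illustrative assumptions, not the implementation from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an 8x8 patch, flattened.
x = rng.normal(size=64)

# Linear encoder/decoder standing in for the neural networks
# (hypothetical weights; a real compressive autoencoder learns these).
W_enc = rng.normal(scale=0.1, size=(16, 64))   # 64 -> 16 bottleneck
W_dec = rng.normal(scale=0.1, size=(64, 16))

def rate_distortion(x, lam=0.1):
    z = W_enc @ x                 # analysis transform
    z_hat = np.round(z)           # quantization (non-differentiable;
                                  # training uses a smooth proxy)
    x_hat = W_dec @ z_hat         # synthesis transform
    distortion = np.mean((x - x_hat) ** 2)
    # Rate proxy: negative log-density of the codes under a unit Gaussian.
    rate = np.sum(0.5 * z_hat ** 2 + 0.5 * np.log(2 * np.pi))
    return distortion + lam * rate

loss = rate_distortion(x)
print(loss)
```

Varying `lam` trades bitrate against reconstruction quality, which is the knob an end-to-end trained codec exposes.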
10. Recurrent image density estimator
[Figure: the RIDE architecture. Pixels x_ij are passed through stacked SLSTM layers; the model factorizes the distribution of images such that the prediction of a pixel (black) depends on the pixels x_<ij in the upper-left green region. (B) A graphical model representation of an MCGSM. From Theis & Bethge, Generative Image Modeling Using Spatial LSTMs, 2015.]
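The chain-rule factorization behind RIDE, p(x) = prod_ij p(x_ij | x_<ij), can be made concrete with a toy model: each binary pixel is predicted from its already-generated causal neighbourhood. The logistic predictor over just the left and upper neighbours is a hypothetical stand-in of my own; RIDE summarizes the whole causal region with spatial LSTM layers and an MCGSM output.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_likelihood(img, w_left=1.0, w_up=1.0, bias=-1.0):
    """Sum of log p(x_ij | x_<ij) over all pixels of a binary image.

    Toy conditional: a logistic model of the left and upper neighbours
    (pixels outside the image are treated as 0).
    """
    h, w = img.shape
    total = 0.0
    for i in range(h):
        for j in range(w):
            left = img[i, j - 1] if j > 0 else 0.0
            up = img[i - 1, j] if i > 0 else 0.0
            p = sigmoid(bias + w_left * left + w_up * up)
            total += np.log(p if img[i, j] == 1 else 1.0 - p)
    return total

img = np.array([[0, 1], [1, 1]])
ll = log_likelihood(img)
print(ll)
```

Because every conditional only looks at already-visited pixels, the same model can both score an image exactly and generate one pixel by pixel in raster order, which is what makes autoregressive models usable for lossless compression.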
11. Autoregressive models
RIDE (3 layers)
PixelRNN (12 layers) (van den Oord et al., 2016)
GAN (samples by Alec Radford, @AlecRad)
LAPGAN (Denton et al., 2015)
Deep Diffusion (Sohl-Dickstein et al., 2015, "Deep Unsupervised Learning using Nonequilibrium Thermodynamics")
[Figure 3 from Sohl-Dickstein et al.: the proposed framework trained on the CIFAR-10 [20] dataset; panels (a) and (b) show examples and samples from the diffusion model.]
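Deep autoregressive models such as PixelRNN/PixelCNN enforce the raster-order dependency structure with masked convolutions. This small numpy sketch (my own illustration, not code from the talk) builds a PixelCNN-style "mask A" kernel and checks, with an impulse input, that a pixel can never influence itself or any earlier pixel.

```python
import numpy as np

# 3x3 "mask A": a pixel may only see the row above it and its left
# neighbour, never itself or any future pixel in raster order.
mask = np.array([[1, 1, 1],
                 [1, 0, 0],
                 [0, 0, 0]], dtype=float)

def masked_conv(img, kernel, mask):
    """Naive 3x3 cross-correlation with zero padding and masked weights."""
    k = kernel * mask
    h, w = img.shape
    pad = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(1)
img = np.zeros((5, 5))
img[2, 2] = 1.0                                  # single impulse
resp = masked_conv(img, rng.normal(size=(3, 3)), mask)
# Only pixels strictly after (2, 2) in raster order can respond:
print(resp[2, 2], resp[1, 1], resp[2, 3])
```

Stacking many such layers (12 in PixelRNN's case) widens the causal receptive field while preserving the autoregressive property.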
[Excerpt from Sohl-Dickstein et al., 2015, reproduced on the slide:] "... will also be a Gaussian (binomial) distribution. The longer the trajectory the smaller the diffusion rate can be made. During learning only the mean and covariance for a Gaussian diffusion kernel, or the bit flip probability for a binomial kernel, need be estimated. As shown in Table C.1, f_μ(x^(t), t) and f_Σ(x^(t), t) are functions defining the mean and covariance of the reverse Markov transitions for a Gaussian, and f_b(x^(t), t) is a function providing the bit flip probability for a binomial distribution. For all results in this paper, multi-layer perceptrons are used to define these functions. A wide range of regression or function fitting techniques would be applicable however, including nonparametric methods."
[A second excerpt column, covering Section 2.4 (Training) and the training objective L, is truncated.]
[Figure panels: CIFAR-10, Data vs. Autoregressive samples]
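The forward half of the process the excerpt describes is easy to simulate. This sketch (my own, under the assumption of a constant diffusion rate) applies the Gaussian kernel q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I) many times; the reverse transitions f_μ and f_Σ are the learned part (MLPs in the paper) and are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse(x0, betas):
    """Forward Gaussian diffusion:
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps.

    With a small beta_t (i.e. a long trajectory) every step stays
    close to the identity, which is what makes the learned reverse
    transitions Gaussian as well.
    """
    x = x0
    for b in betas:
        x = np.sqrt(1.0 - b) * x + np.sqrt(b) * rng.normal(size=x.shape)
    return x

x0 = np.full(1000, 5.0)            # far-from-Gaussian starting point
betas = np.full(1000, 0.01)        # small, constant diffusion rate
xT = diffuse(x0, betas)
print(xT.mean(), xT.std())         # drifts toward N(0, 1)
```

After enough steps the data distribution is destroyed into a fixed, tractable prior, and generation amounts to running the learned reverse chain from that prior.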
42. Conclusion
• End-to-end trained lossy image compression is now on par with the best hand-designed
image compression algorithms (BPG) and quickly improving
• Neural nets already offer many advantages:
• Can be specialized to datasets
• Can be adapted to other forms of content
• Can be optimized for different metrics
• Can be adapted for various settings (small/large encoder/decoder)