This document provides an overview of single-image super-resolution using deep learning. It discusses how super-resolution generates a high-resolution image from a low-resolution input. Deep learning models such as SRCNN were early approaches, while newer models use deeper networks and perceptual losses, and generative adversarial networks have been applied to further improve perceptual quality. Key applications are in satellite imagery, medical imaging, and video enhancement. Metrics such as PSNR and SSIM are commonly used but may not correlate well with human perception. Overall, deep learning has advanced super-resolution techniques, but fully evaluating perceptual quality remains a challenge.
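As a concrete reference for the PSNR metric mentioned above, a minimal implementation (the toy images below are ours, chosen only to make the arithmetic easy to check):

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant offset of 1 on an 8-bit image gives MSE = 1.
ref = np.full((8, 8), 128.0)
est = ref + 1.0
print(round(psnr(ref, est), 2))  # 48.13, i.e. 10*log10(255^2)
```

Note that PSNR is a purely pixel-wise measure, which is exactly why it can disagree with human judgments of perceptual quality.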
Deep learning for image super resolution (Prudhvi Raj)
Using deep convolutional networks, the machine learns an end-to-end mapping between low- and high-resolution images. Unlike traditional methods, this approach jointly optimizes all layers. A lightweight CNN structure is used, which is simple to implement and offers a favorable trade-off compared with existing methods.
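The end-to-end mapping described above can be sketched as SRCNN's three-stage pipeline (patch extraction, non-linear mapping, reconstruction). The weights below are random placeholders, so this only illustrates the architecture's shapes, not a trained model; the 9-1-5 filter sizes and 64/32 channel widths follow the original SRCNN configuration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(x, w):
    """'Valid' cross-correlation (a deep-learning 'conv' layer) of a
    (H, W, C_in) feature map with (k, k, C_in, C_out) filters."""
    win = sliding_window_view(x, (w.shape[0], w.shape[1]), axis=(0, 1))
    return np.einsum("hwcij,ijco->hwo", win, w)

rng = np.random.default_rng(0)
w1 = rng.normal(0, 0.01, (9, 9, 1, 64))   # patch extraction and representation
w2 = rng.normal(0, 0.01, (1, 1, 64, 32))  # non-linear mapping
w3 = rng.normal(0, 0.01, (5, 5, 32, 1))   # reconstruction

def srcnn_forward(y):
    h = np.maximum(conv2d(y, w1), 0)      # ReLU
    h = np.maximum(conv2d(h, w2), 0)
    return conv2d(h, w3)

x = rng.random((32, 32, 1))   # stand-in for a bicubic-upscaled LR image
out = srcnn_forward(x)
print(out.shape)  # (20, 20, 1): 'valid' convolutions shave 8 + 0 + 4 rows/cols
```

Because all three stages are differentiable, the whole mapping can be trained jointly, which is the "jointly optimizes all layers" property noted above.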
Presentation for the Berlin Computer Vision Group, December 2020 on deep learning methods for image segmentation: Instance segmentation, semantic segmentation, and panoptic segmentation.
Enhanced Deep Residual Networks for Single Image Super-Resolution (NAVER Engineering)
Speaker: Heewon Kim (Ph.D. student, Seoul National University)
Date: September 2017
Currently enrolled in the combined M.S./Ph.D. program in Electrical and Computer Engineering at Seoul National University
Best Paper Award of NTIRE 2017 Workshop: Challenge Track
Overview:
Single Image Super-Resolution is a research field that restores a low-resolution image to its high-resolution original. Common everyday examples include keeping a small region of a social-media photo sharp when it is enlarged, or producing an image with the resolution of the original from a thumbnail.
This talk first reviews the research directions before and after deep learning, and then examines our team's work, which won the 2nd NTIRE Workshop Challenge at CVPR 2017, with a focus on the analysis of the network architecture.
Super resolution in deep learning era (Jaejun Yoo)
Abstract:
Image restoration (IR) is one of the fundamental low-level vision problems, encompassing denoising, deblurring, super-resolution, and more. In today's talk I will focus on the super-resolution task. There are two main streams in super-resolution research: traditional model-based optimization and discriminative learning. I will present the pros and cons of both approaches and their recent developments in the field. Finally, I will provide a mathematical view that explains both methods within a single holistic framework, achieving the best of both worlds. The last slide summarizes the open problems that remain in the field.
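Both streams start from the standard image observation model; the following is a common textbook formulation (an assumption here, not taken verbatim from the slides):

```latex
% Observation model: the LR image y is a blurred, downsampled, noisy HR image x
\mathbf{y} = \mathbf{S}\mathbf{H}\mathbf{x} + \mathbf{n}
% Model-based optimization recovers x by balancing data fidelity against a prior \phi:
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \frac{1}{2}\,\|\mathbf{y} - \mathbf{S}\mathbf{H}\mathbf{x}\|_2^2 + \lambda\,\phi(\mathbf{x})
```

Discriminative learning instead trains a network $f_\theta$ on example pairs to approximate the inverse mapping directly; a unified view relates the learned mapping to the prior term $\phi$ of the optimization formulation.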
Speaker: Seokjun Jeon (Ph.D. student, KAIST)
Date: August 2018
Super-resolution, the task of converting a low-resolution image into a high-resolution one, has been studied for a long time. With the recent application of deep learning, super-resolution performance has improved dramatically. We have developed a technique that uses stereo images to obtain higher-resolution images, and this talk presents that work.
1. Multi-Frame Super-Resolution
2. Learning-Based Super-Resolution
3. Stereo Imaging
4. Deep-Learning Based Stereo Super-Resolution
Many super-resolution methods have appeared to date. Deep learning began to show remarkable results in image processing, and the same happened for the super-resolution problem. This talk shares the experience of implementing three deep-learning SR models in one month and uses it to discuss trends in deep-learning SR. It starts with SRCNN to show how deep-learning SR replaced earlier methods, then explains the subsequent progress and the current state of the field through VDSR and RDN. It closes with lessons learned from the implementations and thoughts on the future of SR.
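One of the key ideas behind VDSR, part of the SRCNN-to-VDSR progression discussed in the talk, is residual learning: the network predicts only the difference between the interpolated input and the HR target. A toy sketch of how the training target is formed (nearest-neighbour upsampling stands in for VDSR's bicubic interpolation here, purely to keep the sketch short):

```python
import numpy as np

def upsample_nn(img, scale):
    """Nearest-neighbour upsampling (stand-in for bicubic interpolation)."""
    return np.kron(img, np.ones((scale, scale)))

rng = np.random.default_rng(1)
hr = rng.random((8, 8))              # ground-truth high-resolution image
lr = hr[::2, ::2]                    # toy 2x decimation
interp = upsample_nn(lr, 2)          # network input
residual_target = hr - interp        # what a VDSR-style network learns to predict

# At inference time, the prediction is added back to the interpolated input:
reconstruction = interp + residual_target
print(np.allclose(reconstruction, hr))  # True: residual + interpolation recovers HR
```

Predicting the (mostly near-zero) residual rather than the full image is what lets VDSR train a much deeper network stably with high learning rates.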
Photo-realistic Single Image Super-resolution using a Generative Adversarial Network (Hansol Kang)
* Ledig, Christian, et al. "Photo-realistic single image super-resolution using a generative adversarial network." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
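As a reminder of what makes SRGAN "photo-realistic": the paper replaces pure pixel-wise MSE with a perceptual loss combining a VGG-feature content term and an adversarial term, with the weighting reported in Ledig et al.:

```latex
% Perceptual loss: content loss plus a small adversarial term
l^{SR} = l^{SR}_{X} + 10^{-3}\, l^{SR}_{Gen}
% VGG content loss: MSE in the feature space \phi_{i,j} of a pre-trained VGG network
l^{SR}_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}}
  \left( \phi_{i,j}\!\left(I^{HR}\right)_{x,y} - \phi_{i,j}\!\left(G_{\theta_G}(I^{LR})\right)_{x,y} \right)^{2}
```

Measuring the content loss in feature space rather than pixel space is what lets the generator trade exact pixel fidelity (and hence some PSNR) for sharper, more natural-looking textures.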
Literature Review on Single Image Super Resolution (ijtsrd)
This paper presents a detailed survey of single image super-resolution (SR), which aims at recovering a high-resolution (HR) image from a given low-resolution (LR) one. It remains a research emphasis because of the demand for higher-definition video display, such as the new generation of Ultra High Definition (UHD) TVs. Super-resolution is a popular topic in image processing that focuses on enhancing image resolution: in general, SR takes one or several low-resolution images as input and maps them to high-resolution output images, and it has been widely applied in remote sensing, medical imaging, and biometric identification. Shalini Dubey, Prof. Pankaj Sahu, and Prof. Surya Bazal, "Literature Review on Single Image Super Resolution," International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 2, Issue 5, August 2018. URL: http://www.ijtsrd.com/papers/ijtsrd18339.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/18339/literature-review-on-single-image-super-resolution/shalini-dubey
Slides by Míriam Bellver at the UPC Reading group for the paper:
Liu, Wei, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, and Scott Reed. "SSD: Single Shot MultiBox Detector." ECCV 2016.
Full listing of papers at:
https://github.com/imatge-upc/readcv/blob/master/README.md
A presentation about the development of the ideas from the autoencoder to the Stable Diffusion text-to-image model.
Models covered: autoencoder, VAE, VQ-VAE, VQ-GAN, latent diffusion, and stable diffusion.
https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
Variational Autoencoders For Image Generation (Jason Anderson)
Meetup: https://www.meetup.com/Cognitive-Computing-Enthusiasts/events/260580395/
Video: https://www.youtube.com/watch?v=fnULFOyNZn8
Blog: http://www.compthree.com/blog/autoencoder/
Code: https://github.com/compthree/variational-autoencoder
An autoencoder is a machine learning algorithm that represents unlabeled high-dimensional data as points in a low-dimensional space. A variational autoencoder (VAE) is an autoencoder that represents unlabeled high-dimensional data as low-dimensional probability distributions. In addition to data compression, the randomness of the VAE algorithm gives it a second powerful feature: the ability to generate new data similar to its training data. For example, a VAE trained on images of faces can generate a compelling image of a new "fake" face. It can also map new features onto input data, such as glasses or a mustache onto the image of a face that initially lacks these features. In this talk, we will survey VAE model designs that use deep learning, and we will implement a basic VAE in TensorFlow. We will also demonstrate the encoding and generative capabilities of VAEs and discuss their industry applications.
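The two features the abstract highlights, compression to a distribution and generation, both rest on the reparameterization trick and the closed-form KL term of the VAE objective. A minimal numpy sketch (function names and shapes are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) differentiably: z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(mu ** 2 + np.exp(log_var) - 1.0 - log_var)

mu = np.zeros(4)
log_var = np.zeros(4)                      # sigma = 1
z = reparameterize(mu, log_var)            # a sample from the (matching) prior
print(kl_to_standard_normal(mu, log_var))  # 0.0: the posterior already equals the prior
```

Generation then amounts to sampling z from the standard normal prior and passing it through the trained decoder, which is what produces the "fake" faces described above.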
In this project, we propose methods for semantic segmentation with state-of-the-art deep learning models. Moreover, we filter the segmentation to a specific object for a specific application: instead of attending to unnecessary objects, we focus on the ones of interest, making the approach more specialized and efficient for special purposes. Furthermore, we leverage models that are suitable for face segmentation, namely Mask R-CNN and DeepLabv3. The experimental results indicate that the presented approach is efficient and robust for the segmentation task relative to previous work in the field. The models reach 74.4 and 86.6 mean Intersection over Union, respectively. Visual results of the models are shown in the Appendix.
Slides by Amaia Salvador at the UPC Computer Vision Reading Group.
Source document on GDocs with clickable links:
https://docs.google.com/presentation/d/1jDTyKTNfZBfMl8OHANZJaYxsXTqGCHMVeMeBe5o1EL0/edit?usp=sharing
Based on the original work:
Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards real-time object detection with region proposal networks." In Advances in Neural Information Processing Systems, pp. 91-99. 2015.
An introduction to selective search for object proposals, deep dives into the R-CNN family and the state-of-the-art RetinaNet model for object detection, the mAP concept for evaluating models, and how anchor boxes help the model learn where to draw bounding boxes.
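As a quick illustration of the anchor-box idea mentioned above, the following sketch enumerates the 3 scales x 3 aspect ratios used per feature-map location in Faster R-CNN (the base size and the particular ratio/scale values below are illustrative assumptions):

```python
import numpy as np

def make_anchors(base_size, ratios, scales):
    """Centered (w, h) anchor shapes for one feature-map location.
    Each ratio preserves the scaled area while changing the aspect."""
    anchors = []
    for r in ratios:          # aspect ratio = h / w
        for s in scales:
            area = (base_size * s) ** 2
            w = np.sqrt(area / r)
            h = w * r
            anchors.append((w, h))
    return np.array(anchors)

anchors = make_anchors(16, ratios=[0.5, 1.0, 2.0], scales=[1, 2, 4])
print(anchors.shape)  # (9, 2): 3 ratios x 3 scales per location, as in Faster R-CNN
```

During training, each ground-truth box is matched to the best-overlapping anchor, and the network regresses offsets from that anchor, which is how it "learns where to draw" boxes.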
Scaling up Deep Learning Based Super Resolution Algorithms (Xiaoyong Zhu)
Super-resolution is a process for obtaining one or more high-resolution images from one or more low-resolution observations. It has been used for many applications, including satellite and aerial imaging, medical image processing, ultrasound imaging, line fitting, automated mosaicking, infrared imaging, facial image improvement, text image improvement, compressed image and video enhancement, and fingerprint image enhancement. While research on super-resolution began in the 1970s, recently, with the power of deep learning, many notable new methods have been created, including SRCNN, SRResNet, and lately SRGANs, which use generative adversarial networks. However, since these approaches require a lot of images to train the deep learning network, they are highly compute-intensive. Fortunately, with the power of the cloud, you can easily scale up the compute resources as needed, making the algorithm converge faster.
Survey on Single image Super Resolution Techniques (IOSR Journals)
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image from a set of images acquired from the same scene, denoted as 'low-resolution' images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, iterative back projection. We critique these methods and identify areas that promise performance improvements. Future directions for super-resolution algorithms are discussed, and finally results of the available methods are given.
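The iterative back projection (IBP) algorithm the survey covers refines an HR estimate by repeatedly simulating the acquisition, comparing against the observation, and back-projecting the error. A sketch (the box-average downsampling and nearest-neighbour back-projection kernels here are illustrative choices, not the paper's):

```python
import numpy as np

def downsample(x, s):
    """Simulated acquisition: s x s box average (blur + decimate)."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(e, s):
    """Back-project a low-resolution error onto the high-resolution grid."""
    return np.kron(e, np.ones((s, s)))

def ibp(y, s, n_iter=50, step=1.0):
    x = upsample(y, s)                      # initial HR estimate
    for _ in range(n_iter):
        err = y - downsample(x, s)          # simulate acquisition, compare
        x = x + step * upsample(err, s)     # back-project the residual
    return x

rng = np.random.default_rng(0)
hr = rng.random((16, 16))
y = downsample(hr, 2)
x = ibp(y, 2)
print(np.max(np.abs(downsample(x, 2) - y)) < 1e-6)  # True: estimate reproduces the observation
```

Note that IBP only enforces consistency with the observation; because the problem is ill-posed, many HR images satisfy that constraint, which is why the survey also discusses priors and regularization.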
Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
This is a paper I wrote as part of my seminar "Inverse Problems in Computer Vision" while pursuing my M.Sc Medical Engineering at FAU, Erlangen, Germany.
The paper details a state-of-the-art method used for Single Image Super Resolution using Deep Convolutional Networks and the possible extensions to the original approach by considering compression and noise artifacts.
Recent Progress on Object Detection, 2017-03-31 (Jihong Kang)
This slide deck provides a brief summary of recent progress on object detection using deep learning.
It introduces the concepts of selected previous works (the R-CNN series, YOLO, and SSD) and six recent papers uploaded to arXiv between December 2016 and March 2017.
Most of the papers focus on improving the performance of small-object detection.
Enhance Example-Based Super Resolution to Achieve Fine Magnification of Low ... (IJMER)
Images with high resolution (HR) are often required in most electronic imaging applications. There are two types of resolution: high and low. High resolution means the pixel density within an image is high, while low resolution means it is low. A high-resolution image can therefore offer more detail than a low-resolution image, which may be critical in many applications. Super-resolution is the process of obtaining a high-resolution image from one or more low-resolution images. This paper explains such robust methods of image super-resolution, describing a learning-based SR technique that uses an example-based algorithm. The technique divides a large volume of training images into small rectangular pieces called patches, and patch pairs of low-resolution and high-resolution images are stored in a dictionary. A low-resolution patch is then extracted from the input image, and the most similar patch pair is searched for in the dictionary to synthesize the high-resolution image using the high-resolution patch of that pair.
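The patch-dictionary lookup described above can be sketched as follows; the patch sizes and the synthetic dictionary are illustrative assumptions, not the paper's actual training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy dictionary of paired patches: each 3x3 LR patch maps to a 6x6 HR patch.
lr_patches = rng.random((100, 3, 3))
hr_patches = np.kron(lr_patches, np.ones((1, 2, 2)))  # stand-in for real HR patches

def lookup(lr_patch):
    """Return the HR patch paired with the nearest LR patch in the dictionary."""
    dists = np.sum((lr_patches - lr_patch) ** 2, axis=(1, 2))
    return hr_patches[np.argmin(dists)]

query = lr_patches[7]                     # a patch that is in the dictionary
synth = lookup(query)
print(np.allclose(synth, hr_patches[7]))  # True: an exact match retrieves its HR pair
```

A full reconstruction repeats this lookup for every (usually overlapping) patch of the input and blends the retrieved HR patches together, typically averaging in the overlap regions.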
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-brailovskiy
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Ilya Brailovskiy, Principal Engineer at Amazon Lab126, presents the "How Image Sensor and Video Compression Parameters Impact Vision Algorithms" tutorial at the May 2017 Embedded Vision Summit.
Recent advances in deep learning algorithms have brought automated object detection and recognition to human accuracy levels on various test datasets. But algorithms that work well on an engineer’s PC often fail when deployed as part of a complete embedded system. In this talk, Brailovskiy examines some of the key embedded vision system elements that can degrade the performance of vision algorithms.
For example, in many systems video is compressed, transmitted, and then decompressed before being presented to vision algorithms. Not surprisingly, video encoding parameters, such as bit rate, can have a significant impact on vision algorithm accuracy. Similarly, image sensor parameters can have a profound effect on the nature of the images captured, and therefore on the performance of vision algorithms. He explores how image sensor and video compression parameters impact vision algorithm performance, and discusses methods for selecting the best parameters to aid vision algorithm accuracy.
Single Image Super-Resolution Using Analytical Solution for L2-L2 Algorithm (ijtsrd)
This paper presents a unified approach to single image super-resolution, which consists of recovering a high-resolution image from a blurred, decimated, and noisy version. Single image super-resolution is also known as image enhancement or image upscaling. Four main steps are used: taking the input image, down-sampling it, computing an analytical solution, and applying L2 regularization. The decimation and blurring operators are handled through their particular properties in the frequency domain, which leads to a fast super-resolution approach, and an analytical solution is obtained and implemented for the L2-regularized (L2-L2) optimization. The aim is to reduce the computational cost of existing methods. Simulation results were obtained on different images and different priors with a machine learning technique, and the results were compared with existing methods. Varsha Patil and Meharunnisa SP, "Single Image Super-Resolution Using Analytical Solution for L2-L2 Algorithm," International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 2, Issue 4, June 2018. URL: http://www.ijtsrd.com/papers/ijtsrd15635.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/15635/single-image-super-resolution-using-analytical-solution-for-l2-l2-algorithm/varsha-patil
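The analytical frequency-domain step the abstract refers to can be illustrated in a simplified setting without the decimation operator (an assumption made here to keep the sketch short): with circular blur H and L2 regularization, the closed-form solution is X_hat = conj(H) * Y / (|H|^2 + lambda), computed entirely with FFTs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((32, 32))                       # ground-truth image

# Separable circular blur kernel, embedded in an image-sized array.
k1 = np.array([0.2, 0.6, 0.2])
k = np.zeros((32, 32))
k[:3, :3] = np.outer(k1, k1)
H = np.fft.fft2(k)                             # transfer function of the blur

y = np.real(np.fft.ifft2(H * np.fft.fft2(x)))  # blurred observation (noise-free)

# Analytical L2-L2 (Tikhonov) solution in the frequency domain.
lam = 1e-6
Y = np.fft.fft2(y)
x_hat = np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))
print(np.max(np.abs(x_hat - x)) < 1e-2)        # True: near-perfect recovery without noise
```

Because every operation is element-wise in the Fourier domain, the cost is dominated by a few FFTs, which is the source of the speed-up the paper targets.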
These slides discuss some milestone results in image classification using deep convolutional neural networks and present our results on obscenity detection in images using deep convolutional neural networks and transfer learning on ImageNet models.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
Single Image Super Resolution Overview
1. Intelligence Machine Vision Lab
Strictly Confidential
Single Image Super Resolution Overview
SUALAB
Ho Seong Lee
2. Contents
• What is Super Resolution?
• Applications of Super Resolution
• Deep Learning for Single Image Super Resolution
• Some Issues for Super Resolution
3. What is Super Resolution?
• Super Resolution
• Restore a High-Resolution (HR) image (or video) from a Low-Resolution (LR) image (or video)
• According to the number of input LR images, SR is classified as Single Image SR (SISR) or Multi Image SR (MISR)
• Single Image Super Resolution: efficient & popular
(Figure: LR → HR example of super resolution)
4. What is Super Resolution?
• Single Image Super Resolution
• Restore a High-Resolution (HR) image (or video) from a Low-Resolution (LR) image (or video)
• Ill-posed inverse problem → many different HR images can produce the same LR image, so the ground truth cannot be recovered from the LR input alone
(Figure: one LR image maps to multiple plausible HR images — HR 1, HR 2, HR 3)
5. What is Super Resolution?
• Interpolation-based Single Image Super Resolution
• In image upscaling tasks, bicubic, bilinear, or Lanczos interpolation is usually used
• Fast and easy, but low quality
(Figure: bilinear interpolation vs. deep SR output)
6. What is Super Resolution?
• Single Image Super Resolution algorithms
• Interpolation-based methods
• Reconstruction-based methods
• (Deep) Learning-based methods
• Today, I will cover learning-based methods
7. Applications of Super Resolution
• Satellite image processing
• Medical image processing
• Multimedia industry and video enhancement
Reference: “Super Resolution Applications in Modern Digital Image Processing”, 2016 IJCA
(Figure: TV & monitor resolutions — HD (1280x720), FHD (1920x1080), UHD (3840x2160))
8. Deep Learning for Single Image Super Resolution
• Learning-based Single Image Super Resolution
• To tackle this inverse problem, almost all methods use the following paradigm:
• [HR (GT) image] + [distortion & down-sampling] → [LR (input) image]
• This is a limitation of SISR training
• Overall restoration quality depends on the distortion & down-sampling method
Reference: “Deep Learning for Single Image Super-Resolution: A Brief Review”, 2018 IEEE Transactions on Multimedia (TMM)
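The HR → LR paradigm above can be sketched in a few lines. This is a minimal illustration, not any specific paper's pipeline: real training sets typically use blur plus bicubic down-sampling, while here simple box averaging stands in for the down-sampling step.

```python
import numpy as np

def degrade(hr: np.ndarray, scale: int = 4) -> np.ndarray:
    """Synthesize an LR training input from an HR ground-truth image.

    Real pipelines usually apply blur + bicubic down-sampling; box
    averaging is used here only to keep the sketch dependency-free.
    """
    h, w = hr.shape[:2]
    h, w = h - h % scale, w - w % scale      # crop to a multiple of the scale
    hr = hr[:h, :w]
    # average each (scale x scale) block into one LR pixel
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

hr = np.random.rand(128, 128)
lr = degrade(hr, scale=4)
print(lr.shape)  # (32, 32)
```

The (hr, lr) pairs produced this way become the (target, input) pairs for supervised training; whatever degradation is chosen here is exactly what the model learns to invert.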
9. Deep Learning for Single Image Super Resolution
• Learning-based Single Image Super Resolution
• [HR (GT) image] + [distortion & down-sampling] → [LR (input) image]
• In the CVPR 2017 NTIRE SR challenge, many teams showed degraded quality metrics
Reference: http://www.vision.ee.ethz.ch/~timofter/publications/NTIRE2017SRchallenge_factsheets.pdf
10. Deep Learning for Single Image Super Resolution
• First deep learning architecture for Single Image Super Resolution
• SRCNN (2014) – three-layer CNN, MSE loss, early upsampling
• Compared to traditional methods, it showed excellent performance
Reference: “Image Super-Resolution Using Deep Convolutional Networks”, 2014 ECCV
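A sketch of the SRCNN idea in PyTorch: three convolutions (the paper's 9-1-5 kernels with 64/32 channels) applied to an input that was already bicubic-upscaled (“early upsampling”). Layer sizes follow the paper; the rest is illustrative.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN sketch, operating on a bicubic-upsampled input."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Early upsampling: the LR image is bicubic-upscaled *before* the network runs,
# so every conv layer operates at full HR resolution.
lr = torch.rand(1, 1, 24, 24)
x = nn.functional.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)
y = SRCNN()(x)
print(y.shape)  # torch.Size([1, 1, 96, 96])
```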
11. Deep Learning for Single Image Super Resolution
• Efficient Single Image Super Resolution
• FSRCNN (2016), ESPCN (2016)
• Use late upsampling with a deconvolution or sub-pixel convolutional layer
• Early upsampling is inefficient in memory and FLOPs
Reference: “Image Super-Resolution Using Deep Convolutional Networks”, 2014 ECCV
12. Deep Learning for Single Image Super Resolution
• FSRCNN (Fast Super-Resolution Convolutional Neural Network)
• Uses a deconvolution layer instead of pre-processing (upsampling)
• Faster and more accurate than SRCNN
Reference: “Accelerating the Super-Resolution Convolutional Neural Network”, 2016 ECCV
13. Deep Learning for Single Image Super Resolution
• ESPCN (Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network)
• Uses a sub-pixel convolutional layer (pixel shuffler or depth_to_space)
• This sub-pixel convolutional layer is still used in recent SR models
Reference: “Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network”, 2016 CVPR
Code: https://github.com/leftthomas/ESPCN
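The sub-pixel layer is available in PyTorch as `nn.PixelShuffle` (TensorFlow calls the same rearrangement `depth_to_space`). A minimal sketch of the late-upsampling tail: a convolution produces C·r² channels at LR resolution, and the shuffle rearranges them into an r× larger image.

```python
import torch
import torch.nn as nn

scale = 3
# The last conv emits (out_channels * scale**2) channels at LR resolution...
tail = nn.Conv2d(64, 1 * scale ** 2, kernel_size=3, padding=1)
# ...and PixelShuffle rearranges (C*r^2, H, W) -> (C, H*r, W*r).
shuffle = nn.PixelShuffle(scale)

feat = torch.rand(1, 64, 17, 17)   # feature map still at LR resolution
out = shuffle(tail(feat))
print(out.shape)  # torch.Size([1, 1, 51, 51])
```

Because all heavy convolutions run at LR resolution and only the final rearrangement reaches HR size, this is the memory/FLOPs win over early upsampling noted on slide 11.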
14. Deep Learning for Single Image Super Resolution
• Deeper networks for Super-Resolution
• SRCNN, FSRCNN, and ESPCN are shallow networks → why not a deep network?
• Early attempts to train deeper models failed, so shallow networks were used → how can we train a deeper network?
Reference: “Image Super-Resolution Using Deep Convolutional Networks”, 2014 ECCV
15. Deep Learning for Single Image Super Resolution
• VDSR (Accurate Image Super-Resolution Using Very Deep Convolutional Networks)
• VGG-based deeper model (20 layers) for Super-Resolution → large receptive field
• Residual learning & high learning rate with gradient clipping
• MSE loss, early upsampling
Reference: “Accurate Image Super-Resolution Using Very Deep Convolutional Networks”, 2016 CVPR
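The two VDSR training tricks can be sketched together: a global skip connection so the network predicts only the residual (detail) image, and gradient clipping to keep a high learning rate stable. This is an illustrative reduced-depth sketch, not the paper's exact configuration (VDSR uses 20 layers and clips gradients by value scaled with the learning rate; `clip_grad_norm_` is a common stand-in).

```python
import torch
import torch.nn as nn

class VDSRLike(nn.Module):
    """VDSR-style global residual learning; depth reduced for brevity."""
    def __init__(self, depth: int = 8):  # the paper uses 20 layers
        super().__init__()
        layers = [nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(64, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict only the residual; the (bicubic-upsampled) input is added back.
        return x + self.body(x)

model = VDSRLike()
x = torch.rand(1, 1, 32, 32)          # early-upsampled input
loss = nn.functional.mse_loss(model(x), torch.rand(1, 1, 32, 32))
loss.backward()
# Clipping makes the high learning rate stable for the deep stack:
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.4)
```

Since the residual is mostly zero (the bicubic input is already close to the target), the network has far less to learn, which is what makes the deep stack trainable.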
16. Deep Learning for Single Image Super Resolution
• Deeper networks for Super-Resolution after VDSR
• DRCN (Deeply-Recursive Convolutional Network), 2016 CVPR
• SRResNet, 2017 CVPR
• DRRN (Deep Recursive Residual Network), 2017 CVPR
Reference: “Deep Learning for Single Image Super-Resolution: A Brief Review”, 2018 IEEE Transactions on Multimedia (TMM)
17. Deep Learning for Single Image Super Resolution
• Deeper networks for Super-Resolution after VDSR
• EDSR, MDSR (Enhanced Deep Residual Network, Multi-Scale EDSR), 2017 CVPRW
• DenseSR, 2017 CVPR
• MemNet, 2017 CVPR
Reference: “Deep Learning for Single Image Super-Resolution: A Brief Review”, 2018 IEEE Transactions on Multimedia (TMM)
18. Deep Learning for Single Image Super Resolution
• Generative Adversarial Networks (GAN) for Super-Resolution
• SRGAN (Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network)
• First GAN-based SR model; MSE loss → blurry output → GAN loss + content loss = perceptual loss
Reference: “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”, 2017 CVPR
19. Deep Learning for Single Image Super Resolution
• Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
• MSE loss → blurry output → GAN loss + content loss = perceptual loss
• Replaces the MSE loss with a VGG feature loss (as used in style transfer) and adds an adversarial loss
Reference: “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”, 2017 CVPR
20. Deep Learning for Single Image Super Resolution
• Generative Adversarial Networks (GAN) for Super-Resolution
• SRGAN, EnhanceNet, SRFeat, ESRGAN
Reference: “A Deep Journey into Super-resolution: A Survey”, 2019 arXiv
21. Deep Learning for Single Image Super Resolution
(Figure from “A Deep Journey into Super-resolution: A Survey”, 2019 arXiv)
22. Deep Learning for Single Image Super Resolution
(Figure from “A Deep Journey into Super-resolution: A Survey”, 2019 arXiv)
23. Deep Learning for Single Image Super Resolution
(Figure: overview of deep SISR models)
24. Some Issues for Super Resolution
• Checkerboard artifacts
• Deconvolution (transposed convolution) layers can easily produce “uneven overlap”
• Simple solution: use “resize + conv” or a sub-pixel convolutional layer
Reference: “Deconvolution and Checkerboard Artifacts”, Distill blog (https://distill.pub/2016/deconv-checkerboard/)
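The “resize + conv” fix is simply: upsample with a fixed interpolation first, then apply an ordinary convolution, so every output pixel receives the same number of kernel contributions. A minimal sketch of both options side by side:

```python
import torch
import torch.nn as nn

# Transposed conv with kernel 3, stride 2: kernel footprints overlap unevenly
# across output positions, which can imprint a checkerboard pattern.
deconv = nn.ConvTranspose2d(8, 8, kernel_size=3, stride=2)

# "resize + conv": fixed upsampling followed by a normal convolution,
# giving uniform overlap everywhere.
resize_conv = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),
)

x = torch.rand(1, 8, 16, 16)
print(resize_conv(x).shape)  # torch.Size([1, 8, 32, 32])
```

The sub-pixel layer from slide 13 is the other common fix, since its rearrangement also assigns every output pixel exactly one learned value.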
25. Some Issues for Super Resolution
• Loss function
• Various loss functions have been proposed for image restoration tasks
• The best results were reported with a mixed loss: MS-SSIM loss + l1 loss
Reference: “Loss Functions for Image Restoration with Neural Networks”, 2016 IEEE TCI
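As I recall the formulation in that paper, the mixed loss blends the two terms with a weighting factor (reported as α = 0.84), with the l1 term additionally weighted by a Gaussian window G from the MS-SSIM computation; treat the exact symbols below as a sketch of the form rather than a verbatim reproduction:

```latex
\mathcal{L}^{\mathrm{Mix}}
  = \alpha \,\mathcal{L}^{\mathrm{MS\text{-}SSIM}}
  + (1-\alpha)\, G \cdot \mathcal{L}^{\ell_1},
\qquad \alpha = 0.84
```

The intuition: MS-SSIM preserves local structure and contrast, while the l1 term preserves colors and overall brightness that MS-SSIM alone can drift on.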
26. Some Issues for Super Resolution
• Loss function
• Most recent papers use the l1 loss
Reference: “A Deep Journey into Super-resolution: A Survey”, 2019 arXiv
27. Some Issues for Super Resolution
• Metric (distortion measures)
• Most papers use distortion metrics (PSNR and SSIM) as performance metrics
• But high PSNR and SSIM do not guarantee quality that looks good to humans
Reference: “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”, 2017 CVPR
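PSNR is a pure function of pixel-wise MSE, which is why it can disagree with human judgment: it rewards the blurry MSE-optimal average as much as a sharp but slightly shifted texture. A minimal implementation:

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                 # constant error of 10 -> MSE = 100
print(round(psnr(ref, noisy), 2))  # 28.13
```

Note that any error pattern with the same MSE yields the same PSNR, regardless of whether it looks like imperceptible noise or an objectionable artifact; this is the gap that perceptual studies like the one on the next slide try to close.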
28. Some Issues for Super Resolution
• Metric (human opinion score)
• High PSNR and SSIM do not guarantee quality that looks good to humans
• In the SRGAN paper, Mean Opinion Score (MOS) testing was used to quantify perceptual quality
• 26 raters scored images from 1 (bad) to 5 (excellent)
• The raters were calibrated on the NN (score 1) and HR (score 5) versions of 20 images from the BSD300 dataset
29. Some Issues for Super Resolution
• Metric paper (The Perception-Distortion Tradeoff, 2018 CVPR)
• Analyzes distortion measures (PSNR, SSIM, etc.) vs. human opinion scores (perceptual quality)
• Good supplementary video: https://www.youtube.com/watch?v=6Yid4dituqo (PR-12)
Reference: “The Perception-Distortion Tradeoff”, 2018 CVPR
(Figure: perception-distortion plane, with an arrow indicating better quality)
30. References
• “Image Super-Resolution Using Deep Convolutional Networks”, 2014 ECCV
• “Super Resolution Applications in Modern Digital Image Processing”, 2016 IJCA
• “Loss Functions for Image Restoration with Neural Networks”, 2016 IEEE TCI
• “Deconvolution and Checkerboard Artifacts”, Distill blog (https://distill.pub/2016/deconv-checkerboard/)
• “Accurate Image Super-Resolution Using Very Deep Convolutional Networks”, 2016 CVPR
• “Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network”, 2016 CVPR
• “Accelerating the Super-Resolution Convolutional Neural Network”, 2016 ECCV
• “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”, 2017 CVPR
• http://www.vision.ee.ethz.ch/~timofter/publications/NTIRE2017SRchallenge_factsheets.pdf
• “The Perception-Distortion Tradeoff”, 2018 CVPR
• “Deep Learning for Single Image Super-Resolution: A Brief Review”, 2018 IEEE Transactions on Multimedia (TMM)
• “A Deep Journey into Super-resolution: A Survey”, 2019 arXiv