Peter Muschick MSc thesis
Universitat Politècnica de Catalunya, 2020
Sign language recognition and translation has been an active research field in recent years, with most approaches using deep neural networks to extract information from sign language data. This work investigates the largely disregarded approach of using human keypoint estimation from image and video data with OpenPose in combination with a transformer network architecture. Firstly, it was shown that individual signs can be recognized (4.5% word error rate (WER)). Continuous sign language recognition, however, was more error-prone (77.3% WER), and sign language translation was not possible with the proposed methods, which may be due to the low accuracy of human keypoint estimation by OpenPose and the accompanying loss of information, or to insufficient capacity of the transformer model used. Results may improve with datasets containing higher repetition rates of individual signs or with keypoint extraction that focuses more precisely on the hands.
2. Were my hands visible? Was the background not distracting? Did my clothes contrast my skin color? Was the video quality sufficient?
3. Motivation
• Problem
  • Communication issues of sign language speakers (in digital environments) [DFG+]
• Proposed solutions
  • Creation of automatically generated subtitles and translations of sign languages
  • Speech2Signs: spoken to sign language translation using neural networks, by Prof. Xavier Giró and Amanda Duarte (PhD cand.) at Universitat Politècnica de Catalunya, Barcelona
• Here: research on sign language translation with OpenPose and a new dataset called How2Sign
4. Content
• Introduction
  • Sign language research
  • Current state
  • Related works
• Methods
• Results
• Discussion & Summary
5. Introduction – Characteristics of neural sign language translation research
• Sign languages are individual and independent languages
• Sign languages are spoken on multiple, parallel channels [Dam11]
• Not all information in sign languages can be captured in text [Sut95] [Sto05] [Pri90]
• Research on sign language translation depends on the translation direction
6. Introduction – Translation direction
• Research on sign language translation: sign language to spoken language
• Input: image/video → Output: text/audio ("Hi my name is ..." / audio) [DPG+20]
7. Introduction – Translation direction
• Research on sign language translation: spoken language to sign language
• Input: text/audio ("Hi my name is ..." / audio) → Output: animated avatar or generated videos (GAN) [DPG+20]
• GAN = Generative Adversarial Networks
8. Introduction – Translation direction
• Research on sign language translation: sign language to sign language
• Input: image/video → Output: animated avatar or generated videos (GAN) [DPG+20]
• GAN = Generative Adversarial Networks
9. Introduction – Current state of research
• Sign language to sign language: no known publications
• Spoken language to sign language: Saunders et al., 2020 [SCB20]; Stoll et al., 2018 [STL+18]
• Sign language to spoken language:
  • Sign recognition (Zafrulla et al., 2011 [ZAH+11])
  • Continuous sign recognition (Koller et al., 2015 [KFN15])
  • Sign language translation (Camgöz et al., 2018 [CHK+18]; Camgöz et al., 2020 [CKHB20])
10. Introduction – Sign language to spoken language tasks

| Task                           | Sign Recognition | Continuous Sign Recognition | Sign Language Translation    |
| Sign language representation   | Images           | Videos                      | Videos                       |
| Spoken language representation | Classes ("A")    | Signs ("HI ME SARAH")       | Text ("Hi my name is Sarah") |
11. Introduction – Sign language to spoken language translation
• Enable the use of sign language through sign language translation
• Issues of current sign language datasets:
  • Limited range of topics, vocabulary, and number of speakers [DPG+20]
• → Collection and creation of the How2Sign dataset [DPG+20]
12. Introduction – Proposed solution: sign language into spoken language translation
• Tasks: Sign Recognition | Continuous Sign Recognition | Sign Language Translation
• Datasets: SLR [GB] | PHOENIX14T [CHK+18] | PHOENIX14T, How2Sign [DPG+20]
• Extraction: OpenPose [CHS+18]
• Model: Transformer [VSP+17]
• Evaluation: R, M, B, W (Rouge [Lin04], Meteor [BL02], BLEU [PRWZ02], Word Error Rate [KP02])
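To make the evaluation metric W concrete: word error rate is the word-level edit distance between the hypothesis and the reference, normalized by the reference length. Below is a minimal Python sketch of this standard computation; it is an illustration, not the evaluation code used in the thesis.

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over words,
# normalized by the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("HI ME SARAH", "HI YOU SARAH"))  # 1 substitution / 3 words ≈ 0.33
```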
14. Methods – Dataset

| Task       | Sign Recognition | Continuous Sign Recognition | Sign Language Translation | Sign Language Translation |
| Dataset    | SLR              | PHOENIX14T (Glosses)        | PHOENIX14T (German)       | How2Sign (English)        |
| Type       | Images           | Videos                      | Videos                    | Videos                    |
| Annotation | Classes          | Glosses                     | German                    | English                   |
| Hours      | -                | 10.5                        | 10.5                      | 80                        |
| Utterances | 5,000            | 8,200                       | 8,200                     | 35,000                    |
| Vocab      | 24               | 1,000                       | 3,000                     | 16,000                    |
16. Methods – OpenPose: human keypoint estimation
• Human keypoint estimation with pretrained convolutional networks [CHS+18]
• Figure: input frame → output keypoints
17. Methods – OpenPose: human keypoint estimation
• Receive 137 estimated keypoints (body, face, hands) per frame
• Each keypoint: x- and y-coordinates plus a confidence score
• Data normalization [KKJC19]:
  x = {x ∈ R | 0 ≤ x ≤ max(frame x-axis)}
  n = {n ∈ N | 0 ≤ n ≤ #keypoints}
  f = {f ∈ N | 0 < f ≤ #frames}
  u = {u ∈ N | 0 < u ≤ #utterances}
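As an illustration of this extraction step, the sketch below reads one per-frame OpenPose JSON file and scales the coordinates by the frame size. The JSON keys are OpenPose's standard output fields; the simple [0, 1] scaling stands in for the full normalization scheme of [KKJC19].

```python
import json
import numpy as np

def load_keypoints(json_path: str, frame_w: float, frame_h: float) -> np.ndarray:
    """Read one OpenPose per-frame JSON file and return normalized keypoints."""
    with open(json_path) as f:
        person = json.load(f)["people"][0]  # first detected person
    # OpenPose stores flat [x1, y1, c1, x2, y2, c2, ...] lists per channel:
    # 25 body + 70 face + 21 left-hand + 21 right-hand = 137 keypoints.
    flat = (person["pose_keypoints_2d"]
            + person["face_keypoints_2d"]
            + person["hand_left_keypoints_2d"]
            + person["hand_right_keypoints_2d"])
    kp = np.asarray(flat, dtype=np.float32).reshape(-1, 3)  # (137, 3): x, y, confidence
    kp[:, 0] /= frame_w  # x-coordinates to [0, 1]
    kp[:, 1] /= frame_h  # y-coordinates to [0, 1]
    return kp
```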
18. Methods – Models
• Transformer models from "Attention Is All You Need" [VSP+17], based on self-attention
• Schematic structure of the used Transformer model [Ala18]
  (N = normalization layer, MLP = multi-layer perceptron, C = classification layer)
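For orientation, here is a minimal PyTorch sketch in the spirit of this schematic: a linear embedding of per-frame keypoint features, a transformer encoder, and a classification layer. All sizes are placeholder defaults rather than the thesis hyperparameters, and positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn

class KeypointTransformer(nn.Module):
    """Sketch: classify a keypoint sequence (e.g., one sign) with a transformer encoder."""
    def __init__(self, n_keypoints=137, d_model=128, n_heads=4,
                 n_layers=2, n_classes=24, dropout=0.2):
        super().__init__()
        # Each frame contributes n_keypoints * (x, y, confidence) input features.
        self.embed = nn.Linear(n_keypoints * 3, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classify = nn.Linear(d_model, n_classes)  # the "C" layer

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, n_keypoints * 3)
        h = self.encoder(self.embed(frames))   # (batch, n_frames, d_model)
        return self.classify(h.mean(dim=1))    # mean-pool over time -> class logits

logits = KeypointTransformer()(torch.randn(2, 100, 137 * 3))  # -> shape (2, 24)
```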
19. Methods – Overview: proposed solution for sign language into spoken language translation
• Tasks: Sign Recognition | Continuous Sign Recognition | Sign Language Translation
• Datasets: SLR [GB] | PHOENIX14T [CHK+18] | PHOENIX14T, How2Sign [DPG+20]
• Extraction: OpenPose [CHS+18]
• Model: Transformer [VSP+17]
• Evaluation: R, M, B, W (Rouge [Lin04], Meteor [BL02], BLEU [PRWZ02], Word Error Rate [KP02])
20. Results – SLR: sign recognition

|            | Our study   | Gupta et al. [GB] |
| Dataset    | SLR         | SLR               |
| Extraction | OpenPose    | CNN               |
| Model      | Transformer | MLP               |
| Evaluation | W           | W                 |
W = Word Error Rate [KP02]
21. Results – SLR: sign recognition
Column legend for the experiment tables:
• Experiment: experiment number
• Hidden size: MLP size of the transformer layers
• #Layer: number of transformer layers
• Dropout: dropout in the transformer layers
• LR: learning rate
• #Heads: number of attention heads
• WER (%): resulting word error rate
23. Results – PHOENIX14T: continuous sign recognition

|            | Our study   | Camgöz et al., 2020 [CKHB20] |
| Dataset    | PHOENIX14T  | PHOENIX14T                   |
| Extraction | OpenPose    | CNN                          |
| Model      | Transformer | Transformer                  |
| Evaluation | W           | W                            |
W = Word Error Rate [KP02]
24. Results – PHOENIX14T: continuous sign recognition

| Experiment          | Hidden size | #Layer | Dropout | LR   | #Heads | WER (%) val | WER (%) test |
| 1                   | 128         | 1      | 0.2     | 10⁻⁴ | 1      | 93.3        | 94.1         |
| 2                   | 512         | 2      | 0.2     | 10⁻⁴ | 4      | 85.5        | 84.4         |
| 3                   | 2048        | 4      | 0.2     | 10⁻⁴ | 8      | 79.3        | 81.2         |
| Camgöz et al., 2020 | -           | -      | -       | -    | -      | 24.88       | 24.59        |
25. Results – Sign language translation

|            | Our study            | Ko et al., 2019 [KKJC19] | Camgöz et al., 2020 [CKHB20] |
| Dataset    | PHOENIX14T, How2Sign | KETI (n/a)               | PHOENIX14T                   |
| Extraction | OpenPose             | OpenPose                 | CNN                          |
| Model      | Transformer          | Seq2Seq                  | Transformer                  |
| Evaluation | R, M, B, W           | R, M, B, C               | B, W                         |
R = Rouge [Lin04], M = Meteor [BL02], B = BLEU [PRWZ02], W = Word Error Rate [KP02]; n/a = not available
27. Results – How2Sign: sign language translation

| Exp | #Hid | #Lay | Drop | LR   | #H | B1  | B2  | B3  | B4  | M   | R   |
| 1   | 1024 | 4    | 0.4  | 10⁻⁵ | 32 | 1.0 | 0.0 | 0.0 | 0.0 | 2.0 | 3.0 |
| 2   | 2048 | 6    | 0.4  | 10⁻⁵ | 16 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 2.0 |
| oom | 2048 | 4    | 0.4  | 10⁻⁵ | 64 | -   | -   | -   | -   | -   | -   |
| oom | 2048 | 8    | 0.4  | 10⁻⁵ | 32 | -   | -   | -   | -   | -   | -   |
oom = out of memory error; B1–B4 = BLEU-1 to BLEU-4 [PRWZ02], M = Meteor [BL02], R = Rouge [Lin04]
28. Discussion – Translation results

| Task                        | Dataset    | Translation/recognition quality |
| Sign Recognition            | SLR        | High                            |
| Continuous Sign Recognition | PHOENIX14T | Low                             |
| Sign Language Translation   | PHOENIX14T | Low                             |
| Sign Language Translation   | How2Sign   | Not possible                    |

→ Bigger and more complex datasets could not be translated
29. Discussion – Limitations
• The keypoint estimation accuracy of OpenPose might be too low
30. Discussion – OpenPose on How2Sign: face & body confidence scores
• Figure: confidence scores over a video of ~2,800 frames displaying a sign language speaker
31. Discussion – OpenPose on How2Sign: left & right hand confidence scores
• Figure: confidence scores over a video of ~2,800 frames displaying a sign language speaker
32. Discussion – Limitations
• The keypoint estimation accuracy of OpenPose might be too low
• Models with larger hyperparameters exceed the server memory
• The complexity of the used models might be too low
33. Summary
• OpenPose and the transformer model are suited for sign recognition
• The proposed methods did not show satisfying results for continuous sign recognition and sign language translation
34. Outlook
• Run OpenPose on different datasets and examine its accuracy
  • Datasets with more repetitions of single signs
  • Focus on hand recognition
• Continue with transformer models
  • Use pre-defined transformer models from libraries
• Use OpenPose for facial recognition
35. Sources I
[Jac96] R. Jacobs. "Just how hard is it to learn ASL? The case for ASL as a truly foreign language." In: Multicultural aspects of sociolinguistics in deaf communities 2 (1996), pp. 183–226.
[Dam11] S. Damian. "Spoken vs. Sign Languages – What's the Difference?" In: Cognition, Brain, Behavior 15.2 (2011), p. 251.
[DFG+] P. Dreuw, J. Forster, Y. Gweth, D. Stein, H. Ney, G. Martinez, J. V. Llahi, O. Crasborn, E. Ormel, W. Du, T. Hoyoux, J. Piater, J. M. Moya, M. Wheatley. "SignSpeak – Understanding, Recognition, and Translation of Sign Languages." p. 8.
[ACH+13] M. Adams, C. Castaneda, H. W. Hackman, M. L. Peters, X. Zuniga, W. J. Blumenfeld. Readings for diversity and social justice. Third edition. New York: Routledge Taylor & Francis Group, 2013.
[Sut95] V. Sutton. Lessons in sign writing. SignWriting, 1995.
[Sto05] W. Stokoe. "Sign language structure: an outline of the visual communication systems of the American deaf. 1960." In: Journal of Deaf Studies and Deaf Education 10.1 (2005), pp. 3–37.
[Pri90] S. Prillwitz. "Hamburger Notations-System – Entwicklung einer Gebärdenschrift mit Computeranwendung." In: Gebärde, Laut und graphisches Zeichen: Schrifterwerb im Problemfeld von Mehrsprachigkeit. Ed. by G. List, G. List. Wiesbaden: VS Verlag für Sozialwissenschaften, 1990, pp. 60–82.
[DPG+20] A. Duarte, S. Palaskar, D. Ghadiyaram, K. DeHaan, F. Metze, J. Torres, X. Giro-i-Nieto. "How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language."
[SCB20] B. Saunders, N. C. Camgoz, R. Bowden. "Progressive Transformers for End-to-End Sign Language Production." (Apr. 2020).
36. Sources II
[CHS+18] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, Y. Sheikh. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. 2018.
[GB] R. Gupta, V. Behl. imRishabhGupta/Indian-Sign-Language-Recognition. URL: https://github.com/imRishabhGupta/Indian-Sign-Language-Recognition
[CHK+18] N. C. Camgoz, S. Hadfield, O. Koller, H. Ney, R. Bowden. "Neural Sign Language Translation." In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.
[CKHB20] N. C. Camgoz, O. Koller, S. Hadfield, R. Bowden. "Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation." (Mar. 2020).
[VSP+17] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin. "Attention Is All You Need." (Dec. 2017).
[KP02] D. Klakow, J. Peters. "Testing the correlation of word error rate and perplexity." In: Speech Communication 38.1 (2002), pp. 19–28. ISSN: 0167-6393.
[PRWZ02] K. Papineni, S. Roukos, T. Ward, W. J. Zhu. "BLEU: a Method for Automatic Evaluation of Machine Translation." (Oct. 2002).
[BL02] S. Banerjee, A. Lavie. "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments." (2002).
[Lin04] C.-Y. Lin. "Rouge: A package for automatic evaluation of summaries." In: Text Summarization Branches Out. 2004.
37. Sources III
[STL+18] S. Stoll, N. Camgoz, S. Hadfield, R. Bowden. Text2Sign: Towards Sign Language Production Using Neural Machine Translation and Generative Adversarial Networks. 2018.
[KKJC19] S.-K. Ko, C. J. Kim, H. Jung, C. Cho. "Neural Sign Language Translation based on Human Keypoint Estimation." (June 2019).
[Ala18] J. Alammar. The Illustrated Transformer. June 2018. URL: http://jalammar.github.io/illustrated-transformer/
[KFN15] O. Koller, J. Forster, H. Ney. "Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers." In: Computer Vision and Image Understanding 141 (Dec. 2015).
[ZAH+11] Z. Zafrulla, H. Brashear, T. Starner, H. Hamilton, P. Presti. American Sign Language Recognition with the Kinect. 2011.
38. Thank you!
Peter Muschick, University of Stuttgart
github.com/asdf11x/stt
swt89259@stud.uni-stuttgart.de
Photo by Louisa Schaad on Unsplash
41. Sign language
• How hard is it to actually learn sign language? [Jac96] (for native English speakers)
  • American Sign Language is as hard to learn as Japanese or Arabic
• Sentence structure: Time + Theme + Comment + Speaker
  • Time = grammatical tense
  • Theme = object of the sentence
  • Comment = additional information about the subject
  • Speaker = subject of the sentence
  • "I went to the university yesterday" → YESTERDAY UNIVERSITY GO I
43. Results – OpenPose: average confidence scores

|            | SLR  | PHOENIX14T | How2Sign |
| body       | -    | 0.31       | 0.40     |
| face       | -    | 0.77       | 0.84     |
| left hand  | 0.55 | 0.31       | 0.47     |
| right hand | -    | 0.29       | 0.43     |
46. Results – OpenPose on SLR
• Figure: confidence scores of 242 images displaying a left hand showing the letter A from different angles
47. Results – OpenPose on PHOENIX14T
• Figure: confidence scores of 120 frames displaying a sign language speaker