The document discusses deep learning and learning hierarchical representations. It makes three key points:
1. Deep learning involves learning multiple levels of representations or features from raw input in a hierarchical manner, unlike traditional machine learning which uses engineered features.
2. Learning hierarchical representations is important because natural data lies on low-dimensional manifolds and disentangling the factors of variation can lead to more robust features.
3. Architectures for deep learning involve multiple levels of non-linear feature transformations followed by pooling to build increasingly abstract representations at each level. This allows the representations to become more invariant and disentangled.
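The stacked "non-linear transformation followed by pooling" idea in point 3 can be illustrated with a toy NumPy sketch (not taken from the slides being summarized; the 1-D signal, random filters, and window size are invented for illustration):

```python
import numpy as np

def relu(x):
    # element-wise non-linearity
    return np.maximum(0, x)

def max_pool(x, size=2):
    # non-overlapping 1-D max pooling: keep the strongest response per window
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

def feature_level(signal, kernel):
    # one level: linear filtering -> non-linearity -> pooling
    filtered = np.convolve(signal, kernel, mode="valid")
    return max_pool(relu(filtered))

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)
level1 = feature_level(signal, rng.standard_normal(3))
level2 = feature_level(level1, rng.standard_normal(3))
print(level1.shape, level2.shape)  # each level is shorter, i.e. more abstract
```

Each level discards fine detail through pooling, which is what makes the higher-level representation more invariant to small shifts in the input.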
The document provides an overview of the Kinect programming model for color images and depth sensing. It discusses the ColorImageStream and DepthImageStream object models, including the frame data returned for camera images and depth maps. It also covers pixel formats, depth sensing techniques using infrared, and examples of processing color and depth frames in code.
This document describes the development of a digital cathode ray oscilloscope (CRO) using LabVIEW. The digital CRO replaces the hardware of a traditional CRO with software and virtual instrumentation. It discusses how LabVIEW was used to graphically program the virtual CRO and provide all the functionality of a physical CRO. The digital CRO allows students to learn about oscilloscope operation and electronic measurement techniques through interactive software instead of requiring physical equipment.
Aquila: An Open-Source GPU-Accelerated Toolkit for Cognitive and Neuro-Roboti... – Martin Peniak
These slides are from the NVIDIA GTC Express Webinar presented by Martin Peniak and Anthony Morse. An audio/video version should be available at the NVIDIA GTC site below.
http://www.gputechconf.com/object/gtc-express-webinar.html
The presentation focuses on cognitive robotics research, GPUs, and Aquila, an open-source toolkit providing many tools and biologically inspired models useful for cognitive and developmental robotics. Aquila addresses the need for high-performance robot control, which is typically constrained by the processing-power limitations inherent in standard CPU architectures.
The document discusses deep learning and convolutional neural networks. It provides a brief history of convolutional networks, starting with early models from the 1960s and work by LeCun in the 1980s and 1990s applying convolutional networks to tasks like handwritten digit recognition. The document also discusses how convolutional networks learn hierarchical representations and have been applied to tasks like face detection, semantic segmentation, and scene parsing. It notes that while deep learning has been successful, it is still missing capabilities for reasoning, structured prediction, memory and truly unsupervised learning.
Track 2: kinect@Bicocca – hardware and operation – Matteo Valoriani
The document discusses the Microsoft Kinect sensor and its capabilities. It provides information on the Kinect's resolutions for color, depth and skeletal tracking. It compares the Microsoft Kinect SDK to the OpenNI SDK. It also provides an overview of getting started with the Kinect SDK and examples of basic Kinect API usage in C# for discovering sensors, managing the sensor state, retrieving data, and controlling the tilt motor.
This document discusses using artificial neural networks (ANNs) for hand gesture recognition. It introduces gesture recognition and ANNs, describing how ANNs, as adaptive systems that change their structure based on information flow, can be applied to the task. The document covers training ANNs with feedforward and backpropagation algorithms in MATLAB for gesture recognition. It also lists the steps of the recognition process and discusses advantages, such as learning without reprogramming, and disadvantages, such as the need for training.
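The feedforward/backpropagation training loop described above can be sketched in a few lines. This is a toy NumPy version rather than the MATLAB code from the slides; the XOR-style data, network size, and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data standing in for gesture feature vectors (invented for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 8 sigmoid units
W1 = rng.standard_normal((2, 8))
W2 = rng.standard_normal((8, 1))

def mse():
    # mean squared error of the current network on the toy data
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

loss_before = mse()
for _ in range(5000):
    # feedforward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backpropagation: chain rule from output error back to each weight matrix
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
loss_after = mse()

print(loss_after < loss_before)  # training error should drop
```

The same structure, with more inputs and outputs, is what a MATLAB gesture-recognition network would train, typically via a built-in toolbox rather than hand-written gradients.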
Welcome to the world of Microsoft Kinect for Windows–enabled applications. This document is your roadmap to building exciting human-computer interaction solutions you once thought were impossible.
The document provides an overview of the Kinect for Windows SDK and its capabilities for natural user interface and skeletal tracking. It describes the Kinect hardware components, software architecture, data streams for color, depth, infrared and audio data. It explains how to retrieve frames of data through polling or events. It also covers coordinate systems, skeletal tracking, and transforming between spaces. The SDK enables applications to sense natural input through skeletal tracking, audio capture and analysis of color/depth images.
This document summarizes a seminar presentation about Puppetooner, a system for 3D animation using physical puppet models and Microsoft Kinect depth sensing. The system allows puppeteers to manipulate physical puppets in front of a Kinect to capture their motion in real-time. This captured motion is then applied to virtual 3D character models which are projected back onto the physical puppets through projection mapping, creating an augmented reality experience. The system provides a low-cost and accessible way to create 3D animations without requiring software expertise by allowing puppeteers to focus on physical performance.
Deep learning is a type of machine learning that uses neural networks with multiple layers between the input and output layers. It allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Deep learning has achieved great success in computer vision, speech recognition, and natural language processing due to recent advances in algorithms, computing power, and the availability of large datasets. Deep learning models can learn complex patterns directly from large amounts of unlabeled data without relying on human-engineered features.
Cognitive vision aims to make computer vision more robust and adaptable by endowing it with cognitive capabilities. Symbolic AI approaches that use predefined symbolic representations face limitations like the symbol grounding and frame problems. The emergent view is that cognition and perception develop jointly through an agent's interactions with its environment. Representation learning using neural networks may help develop abstract representations from experience in an unsupervised manner. Deep hierarchical models like autoencoders can learn compact internal representations in a data-driven way. Embodiment, where perception and action develop together, may also be important for cognitive vision.
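The claim that autoencoders learn compact internal representations in a data-driven way can be illustrated with a minimal linear NumPy toy (an assumption-laden sketch, not from the document: data dimensions, learning rate, and iteration count are invented, and real autoencoders are deep and non-linear):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 samples of 10-D data that secretly lie on a 2-D subspace
latent = rng.standard_normal((200, 2))
X = latent @ rng.standard_normal((2, 10))

# encoder/decoder weights: compress 10 -> 2 -> 10
W_enc = rng.standard_normal((10, 2)) * 0.1
W_dec = rng.standard_normal((2, 10)) * 0.1

for _ in range(2000):
    code = X @ W_enc           # compact internal representation
    recon = code @ W_dec       # reconstruction from the code
    err = recon - X            # reconstruction error drives learning
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= 0.05 * grad_dec
    W_enc -= 0.05 * grad_enc

# reconstruction error should fall well below the raw data variance
print(float(np.mean(err ** 2)) < float(np.mean(X ** 2)))
```

No labels are used anywhere: the network discovers the 2-D structure purely from the reconstruction objective, which is the unsupervised, data-driven aspect the summary refers to.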
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Computer vision, machine, and deep learning – Igi Ardiyanto
This document provides an overview of computer vision, machine learning, and deep learning with Python. It introduces computer vision and some example applications like optical character recognition and face detection. It then discusses machine learning and how it can be applied to computer vision problems. Deep learning is introduced as a type of machine learning using artificial neural networks. Examples of successful deep learning applications are presented, including speech recognition and the AlphaGo program that mastered the game of Go. Finally, Python is discussed as a programming language well suited to scientific and deep learning applications due to supporting libraries like NumPy, SciPy, and Matplotlib.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Session 10 in module 3 from the Master in Computer Vision by UPC, UAB, UOC & UPF.
This lecture provides an overview of state-of-the-art applications of convolutional neural networks to problems in video processing: semantic recognition, optical flow estimation, and object tracking.
Formal and Informal Collaborative Learning in 3D Virtual Campuses – Mikhail Fominykh
Presentation slides of the academic paper.
Mikhail Fominykh, Ekaterina Prasolova-Førland and Peter Leong: "Formal and Informal Collaborative Learning in 3D Virtual Campuses," in the 6th International Conference on Collaboration Technologies (CollabTech), Sapporo, Japan, August 27–29, 2012, Information Processing Society of Japan, ISBN: 978-4-915256-86-8 C3804, pp. 64–69.
This document provides an outline for a workshop on disseminating research online. The workshop covers developing an online dissemination strategy, choosing tools for content curation and sharing research, and integrating social networks. It includes discussions of developing goals and tactics, assessing strategies, and measuring digital impact. Hands-on activities allow participants to design dissemination plans and curate research topics. The document provides many links to additional resources on creating web and social media strategies, using specific tools, and monitoring online engagement.
Learning Computer-Mediated Cooperation in 3D Visualization Projects – Mikhail Fominykh
The document discusses using 3D collaborative virtual environments (3D CVEs) as a platform for learning cooperation technology. An exploratory case study was conducted with 37 university students working in small groups to create 3D educational visualizations of course topics. Four modes of cooperation emerged: asynchronous group collaboration, synchronous group collaboration, synchronous community collaboration, and asynchronous community collaboration. Results showed student groups adopted different technology landscapes depending on the cooperation mode. The authors conclude that creating and presenting 3D visualizations facilitated in-depth learning while forcing students to intensify collaboration and explore cooperation tools and methods.
Virtual Reality Experience for Creating and Retrieving Fluid Knowledge – Mikhail Fominykh
Blue Sky conference http://fet-eye.eu/bs-2014
Abstract: Virtual reality technologies are going beyond the desktop, allowing mixed reality to be embedded in common spaces. Applying these technological advances to creative collaborative processes, including learning, would provide not only spaces for conveying, understanding, expressing, and sharing ideas and for safe training and trials, but also a way of preserving and retrieving these highly dynamic activities. This project aims at building a system for exploring, preserving, and retrieving the knowledge that resides in collaborative activities conducted in mixed-reality settings. Such a system could represent the next step in virtual reality-based working and learning, giving access to knowledge that is usually hidden and too fluid to be captured and re-experienced. The proposed system will be created as an online platform based on cloud computing. It will use virtual reality in a new way to ensure high educational value for each unique learner, allowing the use of innovative methodologies based on learning by doing while keeping training costs low.
The document proposes a strategy to reassure shoppers at Lucky/Save Mart grocery stores that the stores are offering low prices and new ways to save money. The strategy involves making the stores feel constantly "on sale" through signage highlighting savings programs and frequent rotation of "new ways to save". New savings ideas will be promoted each week through various in-store communications to give shoppers a perception of ongoing discounted prices.
vAcademia – Educational Virtual World with 3D Recording – Mikhail Fominykh
Mikhail Morozov, Alexey Gerasimov and Mikhail Fominykh: "vAcademia – Educational Virtual World with 3D Recording," in Arjan Kuijper and Alexei Sourin (eds.), the 12th International Conference on Cyberworlds (CW), Darmstadt, Germany, September 25–27, 2012, IEEE, ISBN: 978-0-7695-4814-2/12, pp. 199–206.
Lecture by Mikhail Fominykh at Technology-Enhanced Learning 2 [advanced] course, University of Oulu, Finland: 3D virtual worlds and collaborative learning, March 14, 2013
Collaborative Work on 3D Content in Virtual Environments: Methodology and Rec... – Mikhail Fominykh
The document summarizes a study on collaborative work on 3D content in virtual environments. It discusses how a case study was conducted with 25 students working in groups to visualize research projects in a 3D environment. The case study analyzed the collaborative process, design choices, and how the 3D visualizations increased understanding of projects. It provides recommendations for supporting collaborative work on 3D educational content, including providing virtual exhibits, tutorials, and connecting communities.
Smart Russia Congress: Creative collaboration with dedicated tools in a virtua... – Mikhail Fominykh
Invited speech at the Smart Congress conference (in Russian), Moscow, Russia, April 24–25, 2014, at the Higher School of Economics (HSE).
Repositories of community memory as visualized activities in 3D virtual worlds – Mikhail Fominykh
Paper presentation: Mikhail Fominykh, Ekaterina Prasolova-Førland, Leif Martin Hokstad, and Mikhail Morozov: "Repositories of Community Memory as Visualized Activities in 3D Virtual Worlds," in the 47th Hawaii International Conference on System Sciences (HICSS), Waikoloa, HI, USA, January 6–9, 2014, IEEE, ISBN: 978-1-4799-2504-9/14, pp. 678–687. doi: 10.1109/HICSS.2014.90
Lecture by Mikhail Fominykh at Technology-Enhanced Learning 1 course, University of Oulu, Finland: Technological decisions in course design, March 14, 2013
The document proposes a "Common Cents" stimulus plan to combat challenges facing a grocery store chain called FoodMaxx, including decreased average ticket size and weekend traffic. The plan aims to motivate existing customers to spend more by creating a perception of in-store savings. It would do this by emphasizing items priced under $1 using signs, displays, and shelf tags marked with "c¢". The program would be introduced in phases, first testing it in a single market, then evaluating expansion opportunities after 8 weeks. The goal is to increase impulse purchases and cart size through a treasure hunt experience that reinforces FoodMaxx's low price image.
JTELSS 2015 lecture: Ideas vs. proposals for young researchers – Mikhail Fominykh
This document provides guidance for young researchers on developing research ideas and proposals. It compares research projects to PhD theses, noting their similarities in structure and required elements. Both require motivation, literature reviews, methodology, results, and implications. The document also directs researchers to Horizon 2020 funding opportunities from the European Commission, highlighting relevant calls and eligibility requirements. Key funding programs discussed include ERC Starting Grants, FET Open, and Marie Curie Actions. Overall, the document aims to help young researchers transition their ideas into competitive research proposals and funding applications.
Creative Collaboration on a Media Handbook for Educators: Design of a Joint E... – Mikhail Fominykh
Mikhail Fominykh, Terje Valjataga, Venla Vallivaara and Monica Divitini; "Creative Collaboration on a Media Handbook for Educators: Design of a Joint European Course", in the Mobile Learning and Creativity Workshop (MLCW12), European Conference on Technology-Enhanced Learning (EC-TEL), Saarbrucken, Germany, September 19, 2012.
Working on Educational Content in 3D Collaborative Virtual Environments: Chal... – Mikhail Fominykh
Collaborative construction and exploration of educational content is an important part of a learning process. In this paper, we focus on collaborative construction of educational visualizations in 3D Collaborative Virtual Environments (CVEs), analyzing results from our earlier case studies in Active Worlds and Second Life. We discuss various aspects of presenting educational content in a 3D environment, such as aesthetics, functionality and expressed meaning, various design solutions adopted by students in their constructions and the challenges they faced. Furthermore, we outline the implications for using 3D CVEs for working on educational content as a part of everyday classroom activities.
Visionaire project learning in 3D virtual worlds, enabling vacademia in cave – Mikhail Fominykh
My invited presentation "Learning in 3D Virtual Worlds, enabling vAcademia in CAVE" at the VISIONAIR General Assembly and Open Forum. VISIONAIR is an EU project that provides Trans National Access (TNA) to visualization and virtual reality facilities in European universities.
Wearable Experience: New Educational Media for Knowledge Intensive Training – Mikhail Fominykh
These slides were presented as an invited speech at the World Conference on Educational Media and Technology (EdMedia), which was held in Vancouver, BC, Canada on June 28-30, 2016.
Abstract: Wearable computing and augmented reality are disruptive technologies. They fundamentally change the way we educate and train people to a master level of performance. With advanced sensors we can capture experience as it emerges. For example, a trainee can receive live guidance in the form of semi-transparent 3D hands that appear at the right place spatially and are operated by a remote expert using sensor data. Captured guidance provides reference to scale, allowing repeated access to the information asynchronously at the right time and in the right place where it is most urgently needed. Expert guidance can be captured with wearable sensors and later re-enacted by trainees with augmented reality, creating a believable illusion of master-apprentice knowledge sharing. The captured experience therefore represents a new type of educational media that carries both explicit and tacit knowledge. This new media helps to convert experience into knowledge and enables learning by bringing closer together theoretical knowledge and immediate experience, which are traditionally separated. Tailored content of captured experience can be presented with augmented reality using intuitive and immersive user interfaces. This can have a positive impact on mental processing and memorization, not only adding scaffolds for high performance, but also acting as a safety net preventing potential problems sensed in the environment. Learning how to master a complex task usually involves reflecting on your own performance, looking back at your behavior and comparing it to that of others. The goal of this new training methodology is to enable the full cycle of immersive experience: observing an expert, training with and without guidance, and observing one's own performance.
This project develops a natural user interface for interacting with 3D environments using the Microsoft Kinect. Two Kinect devices are placed in a virtual reality space to track a user's full body movements and gestures. The Kinect data is used to create a digital avatar that represents the user's position and allows directly interacting with virtual objects by reaching out. Gesture recognition is also implemented to provide additional controls for navigation and selection. The goal is to make interacting with complex 3D data more intuitive by mirroring natural physical interactions.
ENHANCED EDUCATION THROUGH AUGMENTED REALITY – IRJET Journal
This document discusses using augmented reality to enhance education. It begins by defining augmented reality and how it can improve perception by combining virtual and real worlds. The document then discusses several ways AR has been used in education, including visualizing 3D models and animations over real-world items. It presents several case studies on using AR to teach concepts like electrical generators and magnetic fields. The document also outlines some of the technical methods involved in building AR applications, such as image processing, object detection and overlaying virtual content. Overall, the document argues that AR can make learning more interactive and engaging by bringing virtual interactive elements into the real world classroom environment.
[Paper introduction] Performance Capture of Interacting Characters with Handh... – Mitsuru Nakazawa
This document summarizes a paper that presents a method for capturing full performance of interacting characters using only 3 handheld Kinect sensors. The method reconstructs a skeleton motion and time-varying surface geometry of humans from the asynchronous and uncalibrated Kinect sensor data. It matches geometric data from the Kinects to a human body model and optimizes the skeleton poses and camera parameters. Non-rigid deformations of the human surface are estimated through Laplacian deformation. The method is shown to capture complex motions with self-occlusions better than traditional multi-camera motion capture systems.
This document provides an overview of NUI (Natural User Interface) and biometrics in Windows 10. It discusses the evolution of user interfaces from CLI to GUI to NUI. It then focuses on Microsoft Kinect v2, describing its sensor components, hardware requirements, architecture, frame sources, and capabilities like body tracking, facial tracking, and gesture recognition. It also covers related topics like recording and playback, visualizers, KinectFusion, custom gestures, and other frameworks. The document concludes with sections on Intel RealSense cameras and SDK, as well as Microsoft Passport and Windows Hello for strong authentication using biometrics like fingerprints, facial recognition, and iris scanning.
This document describes using augmented reality to enhance a physics experiment on pendulums. It defines augmented reality as computer-generated virtual objects superimposed on live camera images using fiducial markers and computer vision. It outlines the tools used to develop augmented reality applications, including fiducial markers, C++, OpenCV library, and OpenGL. It provides the mathematical formulas and procedures for camera calibration and 3D construction to precisely position virtual objects in the scene. Specifically, it details the steps to implement a pendulum experiment using OpenCV functions to find marker corners in images and calculate intrinsic and extrinsic camera parameters. The conclusion states that other science experiments will also be augmented and potential applications include face recognition and automatic paper grading.
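The calibration and 3D construction steps described above rest on the pinhole camera model: a world point is first transformed by the extrinsic parameters (rotation R, translation t) and then projected through the intrinsic matrix K. A minimal sketch of that projection, with illustrative parameter values assumed here rather than taken from the described experiment:

```python
import numpy as np

def project_point(P_world, K, R, t):
    """Project a 3D world point to pixel coordinates (pinhole model)."""
    P_cam = R @ P_world + t          # world frame -> camera frame (extrinsics)
    p = K @ P_cam                    # camera frame -> image plane (intrinsics)
    return p[0] / p[2], p[1] / p[2]  # perspective divide

# Assumed intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)    # camera aligned with the world axes
t = np.zeros(3)  # camera at the world origin

u, v = project_point(np.array([0.1, 0.2, 2.0]), K, R, t)
print(u, v)  # -> 360.0 320.0
```

Camera calibration, as performed with OpenCV in the described experiment, is essentially the inverse problem: recovering K, R and t from known marker corner positions and their observed pixel locations.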
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ... – Kitsukawa Yuki
These are the slides I used to introduce this paper in the Advanced Topics in Pattern and Video Information Processing (パターン・映像情報処理特論) course.
Xiangyun Meng, Wei Wang, and Ben Leong. 2015. SkyStitch: A Cooperative Multi-UAV-based Real-time Video Surveillance System with Stitching. In Proceedings of the 23rd ACM international conference on Multimedia (MM '15). ACM, New York, NY, USA, 261-270. DOI=http://dx.doi.org/10.1145/2733373.2806225
The document provides an overview of virtual reality (VR), including its history, types, technologies, applications, and challenges. It discusses how VR immerses users in simulated, 3D environments through head-mounted displays and other sensory inputs. The document also outlines the typical components of a VR system, including input processors, simulation processors, rendering processors, and world databases that store virtual objects and environments. Some applications mentioned include entertainment, medicine, manufacturing, education, and training. Current issues with VR adoption include cybersickness, cost, and lack of integration between software packages.
The computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors.
Human motion is fundamental to understanding behaviour. Despite advances in single-image 3D pose and shape estimation, current video-based state-of-the-art methods fail to produce precise and natural motion sequences, due to the lack of ground-truth 3D motion data for training. Recognition of human actions for automated video surveillance applications is an interesting but forbidding task, especially if the videos are captured in poor lighting. This work presents a spatio-temporal feature-based correlation filter for the concurrent detection and identification of multiple human actions in low-light environments. The performance of the proposed filter was evaluated through extensive experimentation on night-time action datasets. Experimental results demonstrate the effectiveness of the merging schemes for robust action recognition in significantly low-light environments.
VIBE: Video Inference for Human Body Pose and Shape Estimation – Arithmer Inc.
The document describes the VIBE approach for 3D human pose and shape estimation from video. VIBE uses an adversarial learning framework with a temporal encoder network that incorporates self-attention. It regresses pose and shape parameters from video frames. A motion discriminator is trained to distinguish real from generated poses, enforcing kinematically plausible poses without 3D ground truth labels. Results show VIBE generates accurate 3D poses and shapes from in-the-wild videos.
This document provides a summary of a report on 3D object recognition using the Point Cloud Library (PCL). It describes the key steps in the global and local pipelines for 3D object recognition. The global pipeline is used for object detection and classification, while the local pipeline determines the object's pose. The report details the training and testing process for each pipeline, including keypoint detection, descriptor calculation, and matching. It also presents results from experiments on public and custom datasets and analyzes the performance of different algorithm combinations.
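The matching stage in both pipelines reduces to nearest-neighbour search between descriptor sets, commonly with a ratio test to discard ambiguous correspondences. A language-agnostic sketch of that idea in plain NumPy (the toy descriptor values are made up for illustration; a real PCL pipeline would match e.g. SHOT or FPFH descriptors):

```python
import numpy as np

def match_descriptors(query, target, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test.
    Returns (query_index, target_index) pairs that pass the test."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(target - q, axis=1)  # distance to every target descriptor
        order = np.argsort(d)
        best, second = d[order[0]], d[order[1]]
        if best < ratio * second:               # keep only unambiguous matches
            matches.append((i, int(order[0])))
    return matches

# Toy 3-D "descriptors": query[0] clearly matches target[1];
# query[1] is equidistant from two targets and is rejected.
query = np.array([[1.0, 0.0, 0.0],
                  [0.0, 5.0, 5.0]])
target = np.array([[0.0, 5.0, 4.9],
                   [1.0, 0.1, 0.0],
                   [0.0, 5.1, 5.0]])
print(match_descriptors(query, target))  # -> [(0, 1)]
```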
Virtual reality (VR) is an interactive simulation that immerses users in a 3D virtual world. The document outlines the history of VR from early flight simulators to modern commercial systems. It describes types of VR including immersive VR using head-mounted displays and mixed reality. The key technologies that enable VR like head displays, data gloves, and control devices are also discussed. Current applications of VR span entertainment, medicine, manufacturing, and education. While issues remain around simulator sickness and cost, VR offers opportunities for visualization, interaction, and experiencing virtual worlds.
Virtual reality (VR) can simulate physical presence in non-physical worlds through computer simulation. The document discusses the history of VR from early prototypes in the 1950s-1960s to current applications. It outlines different types of VR including immersive, telepresence, and mixed reality systems. The technology used in VR includes head-mounted displays, data gloves, omnidirectional monitors, and CAVE rooms. Developing VR involves 3D modeling, sound editing, and simulation software. Applications of VR include military training, healthcare, education, and entertainment. Benefits are more engaging learning while costs and technical issues remain challenges.
Virtual reality (VR) uses computer technology to simulate a user's physical presence in an imaginary world. The document discusses the definition of VR, its history from early prototypes in the 1950s-60s to current applications, as well as the key technologies involved including hardware like head-mounted displays and software for 3D modeling and simulations. Some examples of VR's use in healthcare, education, entertainment and the military are provided. Both the merits of more engaging learning and the drawbacks of lack of understanding real-world effects are outlined.
This presentation covers the key aspects of creating virtual environments and also gives a short tutorial on how to create AR apps for building custom synthetic environments.
Kinect for Xbox 360: the world's first viral 3D technology – kamutef
Exploring the technology behind Kinect for Xbox 360.
How it works and the multiplicity of ways the technology is being applied to solve problems, assist research and transform lives around the world.
IRJET - 3D Virtual Dressing Room Application – IRJET Journal
This document describes a 3D virtual dressing room application that was developed using the Microsoft Kinect sensor. The application aims to address issues with traditional dressing rooms like wasting time trying on clothes, limited variety, and privacy concerns. The proposed approach uses Kinect to extract the user from the video stream, align 3D cloth models to the user's body, and apply skin color detection to handle occlusions. The body joints are used for positioning, scaling, and rotating the cloth models. The models are then overlaid on the user in real-time. The document discusses related work on virtual dressing rooms and 3D alignment of clothes to user models. It also outlines the methodology, including using Kinect's image and depth streams to develop
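Approaches like this typically derive the cloth model's scale, orientation, and anchor point from pairs of tracked joints, such as the shoulders. A hedged sketch of that idea (the joint coordinates and the reference width are assumptions for illustration, not the application's actual code):

```python
import math

def cloth_transform(left_shoulder, right_shoulder, reference_width=0.35):
    """Compute scale, roll angle (degrees) and anchor point for a cloth
    overlay from two tracked shoulder joints (x, y in metres).
    reference_width: the shoulder span the 3D cloth model was authored for."""
    dx = right_shoulder[0] - left_shoulder[0]
    dy = right_shoulder[1] - left_shoulder[1]
    span = math.hypot(dx, dy)                 # user's actual shoulder width
    scale = span / reference_width            # enlarge or shrink the model
    angle = math.degrees(math.atan2(dy, dx))  # roll so the model follows the shoulders
    anchor = ((left_shoulder[0] + right_shoulder[0]) / 2,
              (left_shoulder[1] + right_shoulder[1]) / 2)
    return scale, angle, anchor

# An upright user whose shoulder span matches the reference exactly
scale, angle, anchor = cloth_transform((-0.175, 1.4), (0.175, 1.4))
print(scale, angle, anchor)  # -> 1.0 0.0 (0.0, 1.4)
```

The same pattern extends to other joint pairs (hips for trousers, wrists for sleeves), with the overlay re-computed each frame from the live skeleton stream.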
[PDF] the molecular control toolkit - Controlling 3D molecular graphics via g... – Quân Lê
The Molecular Control Toolkit allows users to control 3D molecular graphics through gestures and voice commands. It supports the Leap Motion and Microsoft Kinect devices. The toolkit was tested on 18 medical researchers performing rotation, selection, and zooming tasks in the Aquaria molecular graphics program. Participants were able to learn the gesture controls within 20-30 minutes of training. The toolkit provides a flexible architecture using device connectors, gesture listeners, and dispatchers. Future work may expand voice commands and support additional gesture devices.
Similar to Virtualizing Real-life Lectures with vAcademia and Kinect (20)
Teaching Augmented Reality to Computer Science students under lockdown – Mikhail Fominykh
The slides were used in a presentation at a webinar "How can digital tools and new teaching methods improve students learning?" http://epic.agu.edu.tr/events/webinar-how-can-digital-tools-and-new-teaching-methods-improve-students-learning/
The webinar was held on 25 June 2020
Empowering Young Job Seekers with Virtual Reality – Mikhail Fominykh
"Empowering Young Job Seekers with Virtual Reality" has been presented at IEEE VR 2019, the 26th IEEE Conference on Virtual Reality and 3D User Interfaces will be held from March 23rd through March 27th, 2019 at the Osaka International Convention Center in Osaka, Japan. http://www.ieeevr.org/2019/
Abstract: This paper presents the results of the Virtual Internship project that aims to help young job seekers get insights into different workplaces via immersive and interactive experiences. We designed a concept of ‘Immersive Job Taste’ that provides a rich presentation of occupations with elements of workplace training, targeting a specific group of young job seekers, including high-school students and the unemployed. We developed several scenarios and applied different virtual and augmented reality concepts to build prototypes for different types of devices. The intermediary and final versions of the prototypes were evaluated by several groups of primary users and experts, including over 70 young job seekers and high school students and over 45 various professionals and experts. The data were collected using questionnaires and interviews. The results indicate a generally very positive attitude towards the concept of immersive job taste, although with significant differences between job seekers and experts. The prototype developed for room-scale virtual reality with controllers was generally evaluated better than those using cardboard with 360 videos or animated 3D graphics with augmented reality glasses. In the paper, we discuss several aspects, such as the potential of immersive technologies for career guidance, fighting youth unemployment by better informing young job seekers, and various practical and technology considerations.
Immersive Job Taste: a Concept of Demonstrating Workplaces with Virtual Reality – Mikhail Fominykh
"Immersive Job Taste: a Concept of Demonstrating Workplaces with Virtual Reality" has been presented at 2019 IEEE VR Fourth Workshop on K-12+ Embodied Learning through Virtual & Augmented Reality (KELVAR) on March 23, 2019.
https://sites.google.com/site/vrkelvar/
ABSTRACT
This paper presents a new concept of ‘Immersive Job Taste’ – interactive virtual reality demonstration of a workplace that aims to give a feeling of going through an average workday of a professional with elements of basic training. The main target audiences of Job Taste simulations are young job seekers who can be aided in selecting a career path at school or a welfare center, choosing the first or a new occupation, often after a period of being unemployed. The design methodology behind the Immersive Job Taste concept includes presentation of a workplace, typical tasks, feedback on performance, and advice on applying for jobs in the specific industry. We developed several scenarios and applied different virtual and augmented reality concepts to build prototypes for different types of devices. The prototypes were evaluated by several groups of primary users and experts. The results indicate a generally very positive attitude towards the concept. In this paper, we discuss the potential impact of applying the concept and directions for future work.
Workplace training 4.0 for Industry 4.0 Experience Capturing and Re-enactment... – Mikhail Fominykh
Invited speech at IMTEL Innovation Day at the Norwegian University of Science and Technology on November 20, 2018.
The WEKIT training methodology and technological platform allow creating educational experiences efficiently, making good use of the expert's time, and are aimed at areas where expertise is rare and experts are scarce.
This approach is based on the idea of using wearable sensors to capture performance of an expert and then making it available for trainees using Augmented Reality.
Virtuelle arbeidsplasser – karriereveiledning i fremtidens NAV-kontor? – Mikhail Fominykh
Slides for a presentation at the conference "Unge i arbeidslivet" (Young People in Working Life).
Time: Wednesday 24 and Thursday 25 October 2018
Place: Scandic Holmenkollen Park, Oslo
Virtual workplaces – career guidance in the NAV office of the future? A development project using game technology, carried out in collaboration with the Norwegian University of Science and Technology (NTNU), NAV Trøndelag, and the Youth User Council of Trøndelag (BRU).
Mikhail Fominykh, researcher, NTNU; Heidi Fossen, coordinator for research and education, NAV Trøndelag; and Hans Kristian Lilleberg, youth user representative, BRU
Industrial Training and Workplace Experience with Augmented and Virtual Reality – Mikhail Fominykh
Slides from the keynote at the Simposio Internacional de Informática Educativa (SIIE 2018)
http://siie2018.uca.es/index.php/en/keynotes-en/
Abstract: In the context of the 4th industrial revolution and a globalized world, there is a pressing need for continuous acquisition and updating of skills to maintain efficiency and to ensure the inclusion and participation of all citizens in the globalized workplace. At highly automated and rapidly updated workplaces, the need for expertise and effective training is growing. In the EU-funded research-and-innovation project WEKIT, we address these challenges by developing a new approach to industrial training. This approach is based on the idea of using wearable sensors to capture expert performance and then making it available for trainees using Augmented Reality. The WEKIT training methodology and the technological platform allow creating effective educational experiences efficiently, using the time of the expert involved in content creation. The idea of capturing workplace experience finds another application area in the research project Virtual Internship, funded by the Norwegian welfare authority. In this project, we use augmented and virtual reality to increase the awareness of schoolchildren about various professions and improve the motivation of young unemployed people to search for a new job. We aim to find out whether immersive and interactive experiences of exploring workplaces and trying typical tasks can help mitigate youth unemployment.
EATEL Summer School on Technology Enhanced Learning JTELSS18 – Mikhail Fominykh
Opening and closing slides from the 14th EATEL Summer School on Technology Enhanced Learning (JTELSS18), held in Durres, Albania on May 14–18, 2018.
http://ea-tel.eu/jtelss/jtelss2018/
Active learning modules for multi professional emergency management training ... – Mikhail Fominykh
These are the slides of the paper by: Ekaterina Prasolova-Førland, Judith Molka-Danielsen, Mikhail Fominykh, and Katherine Lamb: "Active Learning Modules for Multi-Professional Emergency Management Training in Virtual Reality". The paper has been presented at the International Conference on Teaching, Assessment and Learning for Engineering (TALE), Tai Po, Hong Kong, December 12–14, 2017, IEEE.
http://tale-conference.org/tale2017/
Wekit - performance augmentation in industrial training - technology enhanced... – Mikhail Fominykh
Invited speech at the Symposium on eInfrastructures and Disruptive Technologies in eAssessment at the Technology-Enhanced Assessment conference TEA 2017
Technology acceptance of augmented reality and wearable technologies ilrn 201... – Mikhail Fominykh
"Technology Acceptance of Augmented Reality and Wearable Technologies" #TAM at #iLRN2017
by Fridolin Wild, Roland Klemke, Paul Lefrere, Mikhail Fominykh and Timo Kuula
Paper presented at the 3rd Immersive Learning Research Network Conference in Coimbra, Portugal on 28 June 2017
Publication: https://link.springer.com/chapter/10.1007/978-3-319-60633-0_11
Role playing and experiential learning in a professional counseling distance ... – Mikhail Fominykh
Presentation given at the 29th EdMedia conference, Washington DC.
Abstract: In this paper, we explore role-playing and experiential learning approaches applied in an immersive virtual environment for a professional counseling distance course. Training professional counselors requires practice and therefore poses a challenge for distance education. Although counseling professionals' codes of ethics provide guidance for ethical practice in difficult situations, the prevailing response among many of these professionals tends to be ambivalent. We explored conditions that influenced the knowledge acquisition of graduate rehabilitation counseling students who role-played two challenging scenarios and then had the opportunity to review their performance. The data were collected using questionnaires and interviews. The potential of the teaching method and the supporting technology are discussed. The findings indicate that role-playing and experiential learning are valued by the participants as a teaching method in a distance course.
Conceptual framework for therapeutic training Fominykh EdMedia 2017 – Mikhail Fominykh
Presentation given at the 29th EdMedia conference, Washington DC.
Abstract: This paper presents a concept for designing low-cost therapeutic training with biofeedback and virtual reality. We completed the first evaluation of a prototype - a mobile learning application for relaxation training, primarily for adolescents suffering from tension-type headaches. The system delivers visual experience on a head-mounted display. A wirelessly connected wristband is used to measure user’s pulse and adjust the training scenario based on the heart rate data. Repeating the exercise can make the user able to go through the scenario without using the app, learn how to relax, and ultimately combat tension-type headache. The prototype has been evaluated with 25 participants. The results demonstrate that the application provides a relaxing experience and the implementation of biofeedback is useful for therapeutic training. The results are discussed to evaluate the technological, therapeutic and educational potential of the prototype and to improve the conceptual framework.
The document discusses the WEKIT project, which aims to develop a wearable experience training methodology. This methodology involves capturing an expert's experience, enabling trainees to re-enact it wearing augmented reality devices, and then evaluating the training. The WEKIT platform and prototype use various sensors and AR tools to match trainee performance to expert data. The project is evaluating the approach in industrial settings like aircraft maintenance and healthcare imaging. The goal is to provide innovative learning that transfers experts' tacit knowledge through immersive experience sharing.
Cognitive behavior training with virtual reality and wearable technology @ we... – Mikhail Fominykh
The slides were used for a presentation of the prototype on CBT with VR and WT at the WELL workshop (Wearable Enhanced Learning). The prototype is being designed for training relaxation techniques. Technologically, it is aimed to be mobile, so that patients can practice at any time and in any place.
Wearable Experience for Knowledge-Intensive Training WEKIT lecture – Mikhail Fominykh
This lecture gives an overview of Augmented Reality and Wearable Technology and their use in workplace learning. It explains the basic concepts related to the relevant pedagogies (learning by doing, experiential learning, tacit and explicit knowledge) and some technological details (state of the art and devices). The lecture introduces experience capturing and experience re-enactment both as a training approach and from the technical point of view. It also contains a brief introduction to the WEKIT EU project.
This document discusses using virtual reality for emergency management training. It describes several virtual reality projects for training nurses, medical students, and first responders. These include virtual hospitals, operating rooms, emergency rooms, and disaster scenarios. The goal is to create an active learning module using virtual reality that will be implemented in emergency management courses. Relevant frameworks mentioned include naturalistic decision making, experiential learning, and cognitive load theory. The presentation provides information on the theoretical approaches and software that will be used to develop virtual reality training simulations.
Wekit Horizon2020 project partner presentation by Europlan UK ltd – Mikhail Fominykh
Europlan will play several roles in the WEKIT project, including acting as a visionary for the WEKIT Framework, performing quality control, and leading exploitation and community-building efforts. Europlan staff will contribute to quality control, exploitation, public awareness, and the Framework; Mikhail Fominykh will specifically contribute to the Framework, quality control, exploitation, and public awareness.
An introductory lecture to Virtual Reality. This version of the lecture was presented at an open lecture at Aksaray University in Turkey for computer science and engineering students.
Monitoring and Managing Anomaly Detection on OpenShift.pdf – Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
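Before wiring up the full Kafka/Prometheus/ArgoCD pipeline described above, the anomaly detection step itself can be as simple as flagging readings that deviate too far from a rolling baseline. A minimal, self-contained sketch of that idea (the threshold and sensor data are illustrative, not taken from the tutorial):

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, z_threshold=3.0):
    """Flag indices whose value deviates more than z_threshold standard
    deviations from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# A simulated sensor hovering around 20 with one obvious spike
data = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 35.0, 20.1]
print(detect_anomalies(data))  # -> [6]
```

In a deployment like the one outlined, this logic would consume readings from a Kafka topic and expose a counter of detected anomalies as a Prometheus metric.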
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers – akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
How to Get CNIC Information System with Paksim Ga.pptx – danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Main news related to the CCS TSI 2023 (2023/1695) – Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7 to 9 November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Generating privacy-protected synthetic data using Secludy and Milvus
Virtualizing Real-life Lectures with vAcademia and Kinect
1. Virtualizing Real-life Lectures with vAcademia and Kinect
Andrey Smorkalov
Volga State University of Technology, Russia
Mikhail Fominykh and Ekaterina Prasolova-Førland
Norwegian University of Science and Technology, Norway
Workshop on Off-The-Shelf Virtual Reality
IEEE Virtual Reality Conference
March 16, 2013 | Orlando, FL, USA
2. Goal
o A low-cost technological setup for translating real-life presentations and lectures into a 3D virtual environment
– Streaming real-life lectures into a 3D virtual environment
– Automatically creating immersive 3D recordings
3. Motivation: learning with VR
o Virtual worlds have recognized affordances for learning, but also many challenges
o Cost is a limiting factor for learning with virtual worlds and other VR
o Industry, military, and healthcare are the major areas where VR is currently used for educational purposes
o Exploring new ways of using 3D virtual worlds for learning: capturing lectures and creating asynchronous content out of synchronous learning activities
4. Motivation: capturing lectures
o 'Traditional' video recording of lectures and web conferences changes the context of learning and does not provide the immersion or sense of presence found in 3D virtual worlds
o '2D' recordings, including machinima, do not provide a possibility for collaborative work or a method for further developing the content
o Kinect was previously used to improve video recording of presentations by designing an automatic camera-control system
o => Combining 3D recording in vAcademia with Kinect for advanced, immersive capturing of lectures
5. First prototype: system implementation
[Architecture diagram: a Kinect plugin feeds the Cal3D animation library and a script-executing library; these drive the vAcademia graphics engine and scripts, shown in vAcademia's 'virtualizing real-life lectures' mode interface]
6. First prototype: system implementation
o Five body parts: left arm, right arm, left leg, right leg, and head
o Standing mode (all body parts) and sitting mode (only arms and head)
o "Adequately recognized" status for each part
o If a body part is not recognized adequately, the last adequate state is used for 0.2–0.5 s, and then the default state
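The per-part fallback described above can be sketched as follows. This is an illustrative sketch only; the vAcademia implementation is not public, so the class, method names, and the exact hold duration (chosen inside the 0.2–0.5 s window from the slides) are assumptions.

```python
HOLD_SECONDS = 0.35  # assumed value inside the 0.2-0.5 s window from the slides

class BodyPartTracker:
    """Hypothetical tracker for one body part (e.g. left arm)."""

    def __init__(self, default_state, hold_seconds=HOLD_SECONDS):
        self.default_state = default_state
        self.hold_seconds = hold_seconds
        self.last_good_state = default_state
        self.last_good_time = None

    def update(self, state, recognized, now):
        """Return the state to animate: the fresh state if adequately
        recognized, the last adequate state while the hold window lasts,
        and the default pose after the window expires."""
        if recognized:
            self.last_good_state = state
            self.last_good_time = now
            return state
        if self.last_good_time is not None and now - self.last_good_time <= self.hold_seconds:
            return self.last_good_state
        return self.default_state
```

Passing the timestamp in explicitly keeps the logic testable; a real plugin would use the Kinect frame timestamps.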
7. First prototype: system performance
o Requirements of the components
– vAcademia requires and actively uses one CPU core.
– Kinect requires a dual-core CPU, but uses only one core, as the second is reserved for the application that uses Kinect data.
o The process of animating the lecturer's avatar based on the data from Kinect is not computationally complex.
o The system's performance is satisfactory if the component requirements are satisfied, which was confirmed during the evaluation.
8. First prototype: system evaluation
o Non-systematic evaluation during an iterative development process
– Several evaluation sessions in two to three different courses
– Auditoriums of different configurations and lighting
– Involving different teachers
o Data
– Short interviews with the lecturer while watching the 3D recording created in vAcademia
o Most common feedback
– Too many restrictions on the lecturer's movements
– Suggestions on how to increase the educational value
9. Applying Kinect Motion Capture in vAcademia: Challenges
1. Low accuracy in capturing gestures
– We could not build a reliable avatar model that moves without unnatural poses
2. Kinect does not recognize the turn of the lecturer
– Left and right arms are mixed up, and an unnatural pose is returned
3. Kinect cannot capture parts of the body that are covered by other body parts or foreign objects
– Additional requirements on the setup
– Lower recognition accuracy
10. Applying Kinect Motion Capture in vAcademia: Solutions for 1
o Positioning the Kinect device and the lecturer
– < 1.8 m for standing mode
– < 1.3 m for sitting mode
– Kinect device at 0.5 m from the floor
– Software-based turn into a zero-degree position about the vertical axis
o Additional filtration mechanism for sorting out unnatural positions of the body parts
– Limited the acceptable values of Euler angles between the bones
– Separated hands as distinct body parts
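The Euler-angle filtration above can be sketched as a simple range check per joint. The limits below are invented for illustration (the slides do not publish the actual values); a pose failing the check would be rejected, and the caller would fall back to the last adequate state.

```python
# Assumed, illustrative limits for one joint (degrees per rotation axis).
ELBOW_LIMITS = {"x": (-10.0, 150.0), "y": (-90.0, 90.0), "z": (-45.0, 45.0)}

def is_natural(euler_angles, limits):
    """Return True if every Euler angle of the bone lies within its
    allowed range; False flags an unnatural pose to be filtered out."""
    return all(limits[axis][0] <= angle <= limits[axis][1]
               for axis, angle in euler_angles.items())
```

A full filter would hold one such table per bone pair, including the separately tracked hands mentioned in the slides.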
11. Applying Kinect Motion Capture in vAcademia: Solutions for 2
o The turn is recognized relatively, as a function of the position of the pelvis end points
– The resultant value is valid within the range from -110 to 110 degrees against the "facing the Kinect device" direction.
o Colored markers
– Two markers are placed on the body of the lecturer, on the left and on the right side, facing the Kinect device.
– The colors should differ from the lecturer's clothing, and the material should not shimmer.
– If the markers are recognized, the system considers the lecturer to be in the acceptable turn range. If not, it falls back to the last correctly recognized state, and then to the default state.
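The turn estimate from the pelvis end points can be sketched with a quadrant-aware arctangent over the hip-to-hip vector. Joint positions are assumed to be in Kinect camera space (x right, y up, z away from the sensor); the ±110-degree validity range comes from the slides, the rest is an assumption.

```python
import math

MAX_TURN_DEG = 110.0  # valid range from the slides

def estimate_turn_deg(hip_left, hip_right):
    """Turn of the hips about the vertical axis; 0 = facing the sensor.
    Returns None when the value falls outside the valid +-110 deg range."""
    dx = hip_right[0] - hip_left[0]
    dz = hip_right[2] - hip_left[2]
    angle = math.degrees(math.atan2(dz, dx))
    return angle if abs(angle) <= MAX_TURN_DEG else None
```

Near ±180 degrees the left and right hips swap in the depth image, which is exactly the failure mode the colored markers guard against.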
12. Applying Kinect Motion Capture in vAcademia: Solutions for 2
o Testing colored markers
13. Applying Kinect Motion Capture in vAcademia: Proposal for 3
o Multiple Kinect devices
– Three Kinect devices: to the left, to the right, and in front of the lecturer
o New challenges:
– Increased price of the system
– Data from the multiple Kinect devices must be adjusted to a single coordinate system => increased requirements for the accuracy of locating the Kinect devices
– Additional requirements on the auditorium (> 7 m across)
– Merging the data from multiple Kinect devices
14. Supporting Slide Presentations: Challenges
1. Matching relative positions in the real and virtual worlds
– The position of the lecturer against the whiteboard should match the position of the avatar against the virtual whiteboard.
2. Capturing a physical pointer
– It is an important part of the lecture experience, but Kinect cannot capture it.
3. The gestures for switching slides in the real world do not have the same meaning in the 3D virtual world
15. Supporting Slide Presentations: Solutions for 1
o Precise match between the physical whiteboard and the virtual one
– Performed once, after installing the physical whiteboard and the Kinect device in the classroom.
– Capturing the left and right edges of the physical whiteboard in the Kinect coordinate system.
– Installing the Kinect device and the physical whiteboard at a specified distance from the floor.
o Further improvement
– Recognizing the borders of the physical whiteboard and automatically creating a replica in the 3D virtual world, keeping the proportions.
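The one-time calibration above reduces to a linear map once the two edges are captured. A minimal sketch, assuming the board's edges are measured as Kinect x coordinates and the virtual whiteboard uses a normalized 0..1 horizontal coordinate (both conventions are ours, not from the slides):

```python
def make_whiteboard_mapper(left_edge_x, right_edge_x):
    """Return a function mapping a Kinect x coordinate (meters) to a
    normalized position on the virtual whiteboard (0 = left, 1 = right).
    Calibrated once, when the board and the Kinect device are installed."""
    width = right_edge_x - left_edge_x

    def to_virtual(x):
        return (x - left_edge_x) / width

    return to_virtual
```

The vertical axis works the same way, anchored by the specified mounting heights of the board and the sensor.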
16. Supporting Slide Presentations: Solutions for 2
o Directing the virtual pointer based on the position of the lecturer's hand
– If the half line that extends from the lecturer's hand towards the physical whiteboard crosses it, the avatar in the 3D virtual world directs a virtual pointer to the same point.
– To keep the lecturer aware that his or her hand is being captured, we display a semi-transparent yellow area on the physical whiteboard, on top of the slides.
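The half-line test above is a ray-plane intersection. A sketch under the assumption that the whiteboard is a flat rectangle at a fixed depth z in Kinect coordinates (the planar model and all names are assumptions, not the published vAcademia code):

```python
def pointer_target(hand, direction, board_z, x_range, y_range):
    """Intersect the half line from `hand` along `direction` with the
    board plane z = board_z. Return the (x, y) hit point on the board,
    or None if the half line does not cross the board's rectangle."""
    if direction[2] == 0:
        return None  # ray parallel to the board plane
    t = (board_z - hand[2]) / direction[2]
    if t <= 0:
        return None  # board lies behind the hand, not in front of it
    x = hand[0] + t * direction[0]
    y = hand[1] + t * direction[1]
    if x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]:
        return (x, y)
    return None
```

The hit point would then be passed through the whiteboard calibration to position both the virtual pointer and the yellow feedback area.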
18. Supporting Slide Presentations: Solutions for 3
o Switching slides in PowerPoint by recognizing the standard gestures Swipe Left and Swipe Right
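The Kinect SDK of that era shipped swipe gestures, but the idea can be approximated by watching the hand's horizontal travel over a short window. The thresholds and the mapping of swipe direction to slide action below are guesses for illustration only:

```python
SWIPE_DISTANCE = 0.3  # assumed: meters of horizontal hand travel
MAX_FRAMES = 15       # assumed: ~0.5 s window at 30 fps

def detect_swipe(hand_x_history):
    """Return 'next', 'previous', or None from recent hand x positions
    (Kinect camera space, x increasing to the sensor's right)."""
    window = hand_x_history[-MAX_FRAMES:]
    if len(window) < 2:
        return None
    travel = window[-1] - window[0]
    if travel <= -SWIPE_DISTANCE:
        return "next"      # hand moved left: advance the slide
    if travel >= SWIPE_DISTANCE:
        return "previous"  # hand moved right: go back
    return None
```

A production recognizer would also gate on speed and on the hand staying roughly level, to avoid firing on ordinary pointing gestures.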
19. Learning Scenarios
o Scenario 1: Lecturing as a synchronous mixed reality activity
– Interactions between students in the physical and virtual classrooms
– Recording student and lecturer activities in the same context
o Scenario 2: Round-table discussion as a synchronous mixed reality activity
– Participants joining through the 3D virtual world or captured from the real world
– Multiple Kinect-based systems can be installed in remote locations, each capturing two participants
– The designed system provides a significant advantage over pure 3D virtual worlds in non-verbal communication support
20. Learning Scenarios (2)
o Scenario 3: Motion capture for synchronous mixed reality educational role plays
– Taking turns in the physical classroom, or letting the users captured by Kinect play the roles of facilitators
o Scenario 4: Creating immersive 3D recordings out of live lectures
– Easy and low-cost creation of educational content for later (asynchronous) use, such as lectures and simulations
– Any activity in the 3D virtual world, including streaming Kinect-captured lectures, can be easily saved and revisited later
– The resultant 3D recordings combine the convenience of video with the immersive qualities of 3D virtual worlds
21. Questions? Feedback?
Andrey Smorkalov: smorkalovAY@volgatech.net
Mikhail Fominykh: mikhail.fominykh@ntnu.no
Ekaterina Prasolova-Førland: ekaterip@ntnu.no
Acknowledgments
Mikhail Morozov: morozovMN@volgatech.net
Multimedia Systems Laboratory, Volga State University of Technology (http://mmlab.ru)
Virtual Spaces LLC, vAcademia (http://vacademia.com)