The document summarizes Project IVY, which uses virtual reality to support interpreter training. It describes the design and implementation of the IVY Virtual Environment prototype, including its use of Second Life, multimedia content from previous projects, and a web application. Evaluations showed positive feedback but also limitations around sound, aesthetics, and scalability due to Second Life restrictions. Future work includes additional evaluations and designing a new system to overcome the current limitations.
Project IVY - Using Virtual reality for interpreter-mediated communication and training – Panagiotis Ritsos
Virtual Learning Technologies 2012, The Management Centre, Bangor, Wales, UK
The document discusses networked art and Brazilian experiences with technological poetics from the Technological Poetics Research Group. It includes information about the author Dra. Ivani Santana, who is a researcher in dance and technology, and is presenting at the INDICATE Final Conference in Ankara, Turkey on October 15-16, 2012 on this topic. Different terms used in art related to technological poetics and networked art are also listed.
This document summarizes a study that examined how the visual content of news videos influences comprehension. The study analyzed 5 BBC news videos using a four-category coding system to classify shots based on the semantic relationship between visuals and audio. Results showed that talking head shots provided little benefit to comprehension, while direct shots facilitated understanding. Indirect and divergent shots sometimes helped or hindered comprehension depending on how related they were to the audio. The study provides implications for how news videos can be used pedagogically.
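The four-category coding scheme described above can be sketched as a simple tally over coded shots. This is an illustrative assumption based on the summary: the category labels and the shot data are made up here, not taken from the study's actual coding manual.

```python
from collections import Counter

# Hypothetical category labels inferred from the summary (assumption).
CATEGORIES = {"talking_head", "direct", "indirect", "divergent"}

def code_shots(shots):
    """Tally coded shots per category, rejecting unknown labels."""
    counts = Counter()
    for shot_id, category in shots:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category for shot {shot_id}: {category}")
        counts[category] += 1
    return counts

# Example coding of one short news item (made-up data).
shots = [(1, "talking_head"), (2, "direct"), (3, "direct"), (4, "indirect")]
print(code_shots(shots))  # Counter({'direct': 2, 'talking_head': 1, 'indirect': 1})
```

Per-video tallies like this make it straightforward to compare how often each shot type appears across the five videos analyzed.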
The document discusses two projects, VidiVideo and IM3I, that aimed to develop automatic metadata extraction and semantic video search engines. VidiVideo created a system that automatically annotates videos with over 1000 semantic concepts and provides desktop and web-based search interfaces. IM3I provided tools for audio-visual annotation, indexing services, and specialized search interfaces to enable new ways of interacting with multimedia archives. Both projects achieved state-of-the-art performance in object and concept recognition contests.
Evolutionary Togetherness: How to Manage Coupled Evolution in Metamodeling Ec... – Alfonso Pierantonio
The document discusses model-driven engineering and metamodeling ecosystems. It notes that in MDE, metamodels are cornerstones that define related artifacts like models, transformations, and editors. When a metamodel changes, it can invalidate these other artifacts in the ecosystem. The document examines challenges in co-evolving all artifacts when a metamodel changes, such as manually adapting models which is tedious and error-prone. It proposes that an ecosystem needs infrastructure to consistently co-evolve artifacts, such as by defining relationships between elements and detecting change impacts to determine necessary adaptations. A megamodel is proposed as a way to formally specify an ecosystem and the dependencies between its elements.
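The impact-detection idea can be sketched as a small dependency graph: record which artifacts depend on which, and when a metamodel element changes, walk the inverted edges to find every artifact that needs co-evolution. The `Megamodel` class and artifact names below are hypothetical illustrations of the concept, not the paper's actual API.

```python
from collections import defaultdict, deque

class Megamodel:
    """Toy megamodel: artifacts and their dependencies (assumption, for illustration)."""

    def __init__(self):
        self.depends_on = defaultdict(set)  # artifact -> artifacts it depends on

    def add_dependency(self, artifact, target):
        self.depends_on[artifact].add(target)

    def impacted_by(self, changed):
        """Return all artifacts that transitively depend on `changed`."""
        # Invert the edges, then walk breadth-first from the changed element.
        dependents = defaultdict(set)
        for art, targets in self.depends_on.items():
            for t in targets:
                dependents[t].add(art)
        seen, queue = set(), deque([changed])
        while queue:
            node = queue.popleft()
            for dep in dependents[node]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

mm = Megamodel()
mm.add_dependency("model_A", "metamodel")
mm.add_dependency("transformation_T", "metamodel")
mm.add_dependency("editor_E", "model_A")
print(sorted(mm.impacted_by("metamodel")))  # ['editor_E', 'model_A', 'transformation_T']
```

The transitive closure is the key point: a metamodel change invalidates not only models that conform to it, but also editors and transformations built on those models.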
The document summarizes the activities and research projects of the Integrated Media Systems Center (IMSC) at the University of Southern California. IMSC is an NSF Engineering Research Center that conducts research in multimedia and immersive technologies through partnerships with industry, government agencies, and other universities. Its research focuses on areas like immersive audio, computer vision, graphics & animation, and virtual reality simulations. IMSC also supports education programs and has graduated over 200 students. Its application projects include ImmersiNet for entertainment, InterAct for communication, and 2020Classroom for education.
Miriam Esteve is a technical computer engineer and creative multimedia artist from Spain. She has expertise in programming, 3D modeling, video editing, and interactive installations. Some of her projects include creating interactive particle simulations and projection mapping for a performance, developing augmented reality videos, and programming audiovisual scripts to react to music. Miriam has worked on artistic residencies in Barcelona and has over 10 years of experience in technical fields and multimedia creation.
The document provides a timeline of digital technologies from 1859 to the present. It begins in the pre-microcomputer era when John Dewey advocated for hands-on, experiential learning. The microcomputer era began in the 1970s-80s with the development of microprocessors. Mainframes were large computers used by organizations. Word processing software allowed easy editing. The internet era enabled email, video conferencing, online learning, and social networking. Mobile technologies now include e-books, podcasts, and ubiquitous connections.
Virtual Reality: Stereoscopic Imaging for Educational Institutions – Rodrigo Arnaut
Virtual reality (VR) in education is an active research subject in institutions across many countries. This paper discusses the application of VR techniques, including computer graphics and three-dimensional (3D) video production. Stereoscopy is a key element in visualizing these applications. The system developed uses a 3D lens, a home camera, common video-editing software, two low-cost projectors, polarized light filters and inexpensive 3D eyeglasses. During the 3D video production, the aim was to evaluate the whole process, from script writing through video capture and projection to the cost of building the system. This is important for demonstrating to educational institutions the advantages of adopting VR resources to improve learning.
Full article at: http://periodicos.ifsc.edu.br/index.php/rtc/article/view/108
Event-driven Model Transformations in Domain-specific Modeling Languages – István Ráth
This PhD thesis by István Ráth focuses on event-driven model transformations in domain-specific modeling languages. The thesis contains 3 parts: 1) developing concepts for event-driven graph transformations based on incremental pattern matching, 2) applying these concepts to provide advanced language engineering features like simulation, and 3) integrating modeling tools using change-driven transformations. The research aims to address challenges in scalability, usability and tool integration for model-driven software engineering.
The document discusses the history and development of ontologies. It begins with definitions of key terms like ontology, vocabulary, and taxonomy. It then provides a brief history of ontologies dating back to ancient Greek philosophers. The document also discusses how ontologies are used in computer science to formally represent domain knowledge. It provides examples of ontologies in fields like medicine, commerce, and the semantic web. Finally, it discusses best practices for building ontologies, such as reusing existing terms and collaborating with domain experts and end users.
This document provides an overview of the iTILT project, which aims to explore effective uses of interactive whiteboards (IWBs) for communicative language teaching. It discusses general tips for using IWBs, including classroom organization, organizing materials, and additional devices. It also covers criteria for designing and evaluating IWB-based materials, including ensuring tasks are communicative, interactive, and focus on meaning over form. Examples of using IWBs for teaching the four skills - speaking, listening, reading, and writing - as well as vocabulary and grammar are also provided.
Rahul Budhiraja is seeking a master's program to further his research interests in augmented reality. He has a B.Tech in information technology from Indian Institute of Information Technology with research experience including augmented reality applications for military and education. His skills include C++, OpenGL and experience with augmented reality, computer vision and human computer interaction projects. He provides three academic references and one industrial reference.
The document invites stakeholders to attend a series of talks on topics related to ICT during National ICT Month. The talks will cover ICT in education, free and open source software, e-governance, PC maintenance and recycling, and web programming. The activities will be held from June 21-25, 2010 at the CICT-NCC office and will include exhibits showcasing ICT projects for education. Attendees are asked to RSVP by emailing the contact persons by the deadline since space is limited. The attached program details the schedule of talks and sessions across the five days.
The document provides an overview of the IVY Virtual Environment project. The project aims to create a 3D virtual environment to support the acquisition of interpreting skills through simulated scenarios. It discusses the technical design of the virtual environment, including using Second Life, managing dialogues and audio files, and developing various scenario locations. Future plans include integrating avatar gestures and directional sound into the dialogue simulations. The research goals are to evaluate how the virtual environment compares to traditional interpreting training methods and how the sense of immersion impacts the user experience.
The document summarizes Project IVY, which aims to create a 3D virtual environment to support interpreter training. The virtual environment will include:
1) A range of virtual interpreting scenarios like business meetings that can be used for simulation, exercises, and live interaction.
2) Audio and video materials from previous projects to use in the scenarios.
3) Features like freely navigable areas, quick scenario switching, and audio controls for training interpreting skills.
4) Initial scenarios include classrooms, meeting rooms, and courtrooms to prepare interpreters for different settings.
The document summarizes Project IVY, which aims to create a 3D virtual environment to support interpreter training. It discusses the requirements and technical aspects of the IVY Virtual Environment (IVY-VE) being developed, including using existing audio materials and scenarios like classrooms and meeting rooms. Early trials with interpreter students showed promise. Future work includes tighter integration with the virtual world and evaluating the research questions around how immersion impacts the user experience versus traditional training methods.
The document describes the creation of a 3D virtual world based on Sun Microsystems' environment "Wonderland". We designed (in Archicad) a close reconstruction of Cattid, an actual lab of Sapienza - University of Rome. The first slides give an overview of Cattid's overall activities.
The document discusses MPEG-V, a new standard for representing multi-sensorial and immersive experiences that combines both physical and informational worlds. It proposes using sensors to capture real-world stimuli and control virtual environments, with MPEG-V defining architectures and data formats to allow bidirectional exchange of information. Example use cases are presented where real-world motions or environmental data could influence and control virtual simulations.
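The bidirectional exchange MPEG-V enables can be sketched as two mapping functions: real-world sensor readings drive virtual-environment parameters, and virtual-world events drive real-world actuators. The field names and scaling factors below are assumptions for illustration only; MPEG-V itself defines XML-based data formats and architectures, not this Python API.

```python
def sensed_to_virtual(sensed):
    """Map real-world sensed data onto virtual-environment parameters (illustrative)."""
    return {
        "avatar_speed": sensed["motion_m_per_s"] * 1.0,       # assumed 1:1 motion mapping
        "scene_brightness": min(sensed["lux"] / 1000.0, 1.0),  # assumed 1000 lux = full brightness
    }

def virtual_to_actuation(event):
    """Map a virtual-world event back to a real-world actuator command (illustrative)."""
    return {"fan_rpm": int(event["virtual_wind"] * 1200)}  # assumed wind-to-RPM scale

reading = {"motion_m_per_s": 1.4, "lux": 500}
print(sensed_to_virtual(reading))                    # {'avatar_speed': 1.4, 'scene_brightness': 0.5}
print(virtual_to_actuation({"virtual_wind": 0.5}))   # {'fan_rpm': 600}
```

The point of the standard is that both directions use agreed data formats, so a sensor vendor and a virtual-world vendor can interoperate without bespoke glue code.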
Challenges and requirements for a next generation service for video content s... – Wesley De Neve
- The document discusses challenges and requirements for next-generation video content sharing services. It summarizes the research activities of the Multimedia Lab and Image and Video Systems Lab, including their work on video coding, content adaptation, and multimedia search/retrieval.
- It observes that user-generated content will increase in quality and duration, requiring support for high-definition and content adaptation across devices. The amount of user content is growing exponentially, requiring more efficient annotation and retrieval.
- Potential areas for cooperation include developing an adaptation/delivery framework for personalized content retrieval using semantic web tools, and long-term, developing immersive multimedia experiences.
The document discusses two European educational projects that used virtual worlds - ST.ART and AVATAR. ST.ART aimed to teach secondary students about street art using OpenSim, running from 2009-2011. AVATAR taught teachers to use virtual worlds like Second Life, running a 24 month online course from 2009-2011. Both projects found virtual worlds can help overcome classroom limitations and engage students through collaborative and experiential learning. The document promotes joining the new Euroversity network to continue sharing knowledge about teaching and learning in virtual environments.
The document describes an implementation of a virtual worlds platform for educators in Second Life:
The platform brings together educators from different universities in Turkey to share experiences and improve skills regarding pedagogy and virtual worlds through weekly academic meetings on the Infolit iSchool island in Second Life. Educators participate in presentations, open forums, and brainstorming to discuss using virtual worlds for education. The goal is to create a community for educators to learn from each other and find solutions for challenges of implementing virtual worlds.
This document discusses an integrated personalized e-learning system called SUMA & T-Maestro. The system allows for multimodal access to e-learning content through different devices. It separates the base e-learning platform from new applications and functions. One new feature is multimodal access through devices like interactive digital TV, web, and mobile. The system provides personalized and adaptive learning content to users based on their profile and device. It was tested with a pilot cooking course and showed potential for increasing learning opportunities through interactive digital TV.
AVATAR – The Course: Recommendations for Using 3D Virtual Environments for Te... – eLearning Papers
The first case study involved 30 Danish students aged 16-17 using Second Life to learn English. Students visited virtual locations related to Berlin and solved puzzles in the Robin Hood Quest. While students gained skills in navigating Second Life, the tasks proved too complex. Future projects could involve collaboration across countries and be scheduled earlier. Mastering Second Life required a steep learning curve. The second case study involved Italian high school students using Second Life over 30 hours to create a virtual bazaar and gain communication, cooperation, and problem-solving skills.
Exploiting metadata, ontologies and semantics to design/enhance new end-user ... – Ahmet Soylu
The document discusses research on enabling end-user involvement in adaptive technologies. It aims to provide abstract development approaches, allow users to access context and participate in adaptation, and enable users to create personal environments using applications and devices. The research involves using ontologies for modeling at the individual and collective level, developing a widget-based approach, and mining behaviors to automate orchestration of widgets. While conceptual frameworks and methods are proposed, practical challenges remain in realizing a uniform approach and improving automated techniques.
VISIONAIR project: Learning in 3D Virtual Worlds, enabling vAcademia in CAVE – Mikhail Fominykh
My invited presentation "Learning in 3D Virtual Worlds, enabling vAcademia in CAVE" at the VISIONAIR General Assembly and Open Forum. VISIONAIR is an EU project that provides Trans National Access (TNA) to visualization and virtual reality facilities in European universities.
The document describes an online 3D virtual environment called the Arts Metaverse being developed at the University of British Columbia. It aims to provide an immersive collaborative space for students to reconstruct and experience ancient civilizations. The environment uses Open Croquet, an open-source platform, allowing students to build virtual models piece-by-piece and review each other's work. The goal is to enhance experiential and visual learning about history, culture, and artifacts through a participatory virtual community.
Repositories of community memory as visualized activities in 3D virtual worlds – Mikhail Fominykh
Paper presentation: Mikhail Fominykh, Ekaterina Prasolova-Førland, Leif Martin Hokstad, and Mikhail Morozov: "Repositories of Community Memory as Visualized Activities in 3D Virtual Worlds," in the 47th Hawaii International Conference on System Sciences (HICSS), Waikoloa, HI, USA, January 6–9, 2014, IEEE, ISBN: 978-1-4799-2504-9/14, pp. 678–687. doi: 10.1109/HICSS.2014.90
The document presents a mobile virtual network classroom system. It introduces mobile learning (m-learning) and how it allows interactive virtual classes through mobile devices. The system architecture has layers for streaming/encoding, a broadcasting/management server, and client-side mobile apps. Key features include real-time audio/video, whiteboarding, polling, and print/save. Limitations include technical requirements, limited instructor assistance, and some subjects not being suitable for online learning. The conclusion is that the system provides an accessible way for distance learning and mobile learners.
The document summarizes a workshop on digital ecosystems for collaborative learning that aims to help educators deploy CSCL scripts into mainstream virtual learning environments that integrate third-party web and augmented reality tools. Specifically, it seeks to integrate augmented reality into distributed learning environments along with VLEs and other web tools to help teachers sustainably use these tools in authentic collaborative learning classrooms. It presents a prototype and proof of concept using a jigsaw-based collaborative script, web and AR browsers, and geo-located web resources and 3D models to support the lifecycle of CSCL scripts and orchestrate reflected spaces across ubiquitous learning tools and devices.
Things you should_know_about_future_trendsCheryl Todd
This document summarizes emerging technology trends in higher education, as presented at a Stone Soup Seminar on May 11, 2010. It outlines several virtual technologies used in education, including Second Life for virtual worlds and VoIP, visual understanding environments, IBM's Many Eyes for data visualization, and e-readers for digital textbooks. Potential benefits include increased accessibility, collaboration, and customizable course materials. Challenges include the need to modify lesson plans and issues around content ownership and distribution.
Second Life in Education especially in MinnesotaAnn Treacy
The document discusses applications of immersive technologies like virtual worlds and 3D environments. It summarizes a breakout session that introduced Second Life and demonstrated how it can be used for education. Specific examples discussed include using virtual worlds to recreate a college campus, hold international meetings, and create a virtual tutoring center.
Cnie Projet Enjeux S Diaporama Banff 2008 Vaguest7e67ab
ENJEUX-S is a synchronous teaching environment for distance education developed by researchers at SAVIE and funded by CANARIE. It allows for real-time collaboration between users through webcams, voice chat, and screen sharing. The goals of the ENJEUX-S project are to develop an integrated synchronous platform for collaborative work and explore new approaches to pedagogy and distance education. ENJEUX-S provides a way for students and teachers to interact in real-time from different locations through its multimedia communication features.
Similar to Using Virtual Reality for Interpreter-mediated Communication and Training (20)
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
Using Virtual Reality for Interpreter-mediated Communication and Training
1. Cyberworlds International Conference – September 2012
Ysgol Gwyddorau Cyfrifiadurol
School of Computer Science
Using Virtual Reality for Interpreter-mediated
Communication and Training
Panagiotis D. Ritsos1, Robert Gittins1, Sabine Braun2, Catherine Slater2 and Jonathan C. Roberts1
[1] School of Computer Science, Bangor University, UK - {p.ritsos, r.gittins, j.c.roberts}@bangor.ac.uk
[2] Centre for Translation Studies, University of Surrey, UK – {s.braun, c.slater}@surrey.ac.uk
Lifelong Learning Programme – Project 511862-LLP-2010-1-UK-KA3-KA3MP. The IVY project has been funded with support from the European Commission. This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
2. Presentation Outline
To present the purpose of project IVY (Interpreting in Virtual Reality) and the IVY Virtual Environment (IVY-VE)
To present the strategic decisions, resulting design and implementation progress to date, towards the creation of a prototype
To provide an overview of the main features of our prototype
To comment on the preliminary evaluation and pinpoint identified limitations
To allow for discussion on future development
3. The Project IVY Consortium
University of Surrey (UK)
Uniwersytet im. Adama Mickiewicza (Poland)
University of Cyprus (Cyprus)
Steinbeis GmbH & Co. KG für Technologietransfer (Germany)
Bangor University (UK)
Eberhard Karls Universität Tübingen (Germany)
Bar Ilan University (Israel)
4. Project IVY – Scope
The rise of migration and multilingualism in Europe requires professional interpreters in business, legal, medical and many other settings.
Future interpreters need to master an ever-broadening range of interpreting skills and scenarios – training for which is often difficult to achieve with traditional teaching methods.
Project IVY employs 3D virtual world technology to create an educational space that supports the acquisition and application of skills required in interpreter-mediated communication.
Project IVY uses existing interpreter resources – audio and video material from previous video conferencing research.
5. IVY VE – Requirements
To provide an intuitive, easy-to-use interface within a Virtual World for accessing multimedia material created for interpreting training and simulation.
To allow easy scenario management by users who often are not experts in computing (i.e., cannot/should not write code)…
… meaning the creation and modification of existing scenarios in terms of their multimedia content, requiring basic CRUD functionality.
To enable limited dialogue and monologue synthesis, resulting in the enrichment of the corpora with different language combinations of existing scenarios.
6. IVY Virtual Environment (IVY-VE) in a nutshell
A dedicated, adaptive 3D virtual environment for
o interpreting students and
o future clients of interpreters
Supports a range of virtual interpreting scenarios (e.g., ‘business meeting’) that can be run in different modes:
o Interpreting (& Learning Activity) mode, where students can practice using dialogues and monologues
o Exploration mode, where clients can learn about interpreting
o Live interaction mode, where both groups can engage in role plays
Uses multilingual video/audio-based content for interpreting scenarios, by adapting existing multimedia corpora from the LLP project BACKBONE and the ELISA corpus, and creating three new corpora in Greek, Russian and Hebrew.
Supported by two sets of pedagogical material for interpreter students and (future) ‘clients’, e.g. awareness-raising and interpreting exercises.
7. Project IVY – Scenario forms
Dialogue
Monologue
8. IVY Virtual World – Second Life
Second Life was chosen as the Virtual World for our first prototype.
Exploration of alternatives, such as OpenSim, WebGL and Unity, will follow in the future.
Second Life, compared to alternatives (OpenSim, ActiveWorlds etc.), offers:
o A large community, various add-ons, plugins and examples of customisations.
o A platform for social interaction and education, used by numerous institutions, colleges and universities – thus increasing chances of exposure.
o Accessibility via public servers – it does not require that you run the virtual world yourself.
9. IVY VE – Lack of Instancing & Scalability
Due to limits on the number of primitives available to the IVY Island and the lack of instancing mechanisms in SL, IVY-VE uses a collection of unique, in-world locations for each type of scenario (e.g., Classroom, Meeting Office).
Therefore, a scenario may share its location with another that is carried out in a similar setting.
In order to maintain consistency in the virtual world, only one scenario can be played per location at a given time.
Upon a scenario launch by a user, all scenarios sharing the same location become unavailable to other users.
Once the user exits the selected scenario, all scenarios sharing the same location become available again.
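The per-location locking described above can be sketched as a small reservation table. This is an illustrative sketch only; the scenario and location names are invented, not taken from the actual IVY-VE database:

```python
class LocationLock:
    """Tracks which in-world location each scenario uses and locks the
    whole location while any one of its scenarios is being played."""

    def __init__(self, scenario_locations):
        # scenario name -> location name, e.g. {"Job interview": "Meeting Office"}
        self.scenario_locations = scenario_locations
        self.busy_locations = set()

    def launch(self, scenario):
        """Reserve the scenario's location; fail if it is already in use."""
        location = self.scenario_locations[scenario]
        if location in self.busy_locations:
            return False  # another scenario at this location is running
        self.busy_locations.add(location)
        return True

    def exit(self, scenario):
        # Releasing the location makes every scenario that shares it available again
        self.busy_locations.discard(self.scenario_locations[scenario])


locks = LocationLock({"Job interview": "Meeting Office",
                      "Sales pitch": "Meeting Office",
                      "Tutorial": "Classroom"})
assert locks.launch("Job interview") is True
assert locks.launch("Sales pitch") is False   # shares the Meeting Office
locks.exit("Job interview")
assert locks.launch("Sales pitch") is True
```

The key design point is that the lock is held per location, not per scenario, which is why launching one scenario makes every sibling scenario at the same location unavailable.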
10. IVY VE – Design Strategies
Script things in Second Life using LSL:
+ Comparable aesthetics and better integration with the native SL GUI
+ Graphics appear crisper, clearer
- Scripting can be fairly static and requires coding
- Communication with DBs is fairly limited in terms of the size of information that can be transmitted.
Rely on web application technologies, as interface creation, database connectivity and overall flexibility surpass LSL. Therefore…
…either embed SL in a web application:
o A popular notion, but no web viewer is supported by Linden Lab.
o Only Canvas from Tipodean Technologies appears to exist at the moment.
…or ‘embed’ the web application within SL, using HTML on a Prim.
11. IVY VE – Implementation
Our chosen implementation strategy aspires to merge useful features from both alternatives, resulting in a hybrid solution.
One module consists of a web application with two entry points. One entry point remains independent of SL and is used by content managers to manage scenarios and users’ information, offering basic CRUD functionality.
The other entry point is viewable within Second Life, in the form of a ‘Heads-up Display’ (HUD), populated from a database, showing available scenarios to users, offering player functionality and initiating in-world teleport events.
12. IVY VE – User Classes and Characteristics
Separating user roles allows controlled access to different parts of the application, as well as a means of monitoring scenario selection and execution.
The system’s user classes and their respective role descriptions are:
o Interpreters/Users, whose purpose is to explore, participate and exercise with the scenarios in Second Life.
o Observers, whose purpose is to observe other users in Second Life.
o Content Managers, who are responsible for user and scenario management.
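A minimal sketch of the role-based access this separation implies. The role and permission names below are hypothetical illustrations, not the actual IVY-VE implementation:

```python
# role -> set of permitted actions; all names here are illustrative only
PERMISSIONS = {
    "interpreter": {"explore", "play_scenario"},
    "observer": {"explore", "observe"},
    "content_manager": {"explore", "manage_users", "manage_scenarios", "monitor"},
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

assert can("interpreter", "play_scenario")
assert not can("observer", "manage_scenarios")
assert can("content_manager", "monitor")
```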
13. IVY VE – The IVY Island
Locations are created according to the corpus requirements, trying to keep the prim count to a minimum.
The Reception serves as a central hub and a HUD pick-up point.
14. IVY VE – Scenario Actors
We populate the scenarios with additional ‘actors’. Currently these actors are manually placed in each location, using Second Life ‘robots’ (bots) from PikkuBot and Thoys.
Bots are controlled either by using in-world chat to issue commands directly to the bots, or through a dedicated server’s telnet prompt.
We currently use animation overrides to make them appear life-like.
Our aspiration is to create a service that talks to the bots and relays scenario-specific information and teleport commands.
However… sound does not appear to originate from the bots.
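The envisaged relay service, issuing commands to a bot controller over its telnet-style prompt, might look roughly like this. This is a sketch only: the host, port and command syntax are assumptions for illustration, not the actual PikkuBot command set:

```python
import socket

def send_bot_command(command, host="127.0.0.1", port=2323):
    """Open a plain TCP connection to a (hypothetical) bot-control prompt,
    send one command line and return the first line of the reply."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(command.encode("utf-8") + b"\r\n")
        reply = conn.makefile("r", encoding="utf-8", newline="\n").readline()
    return reply.strip()

# e.g. send_bot_command("teleport MeetingOffice 128 64 25")  # command syntax assumed
```

Wrapping the raw prompt in a function like this is what would let a scenario-management service drive actor placement automatically instead of manual in-world chat.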
15. IVY VE – Web Application
The IVY web application is built using the AppFuse 2 open source project (appfuse.org).
AppFuse, built on the Java platform, uses industry-standard features, such as Apache Maven integration, JPA support for database operations, and popular web frameworks such as Spring and Apache Struts, employed in this instance.
AppFuse comes with out-of-the-box features needed in IVY-VE, such as:
o A generic CRUD backend
o Authentication and authorization
o User management
o Strong internationalization support
Our prototype is deployed using Apache Tomcat 6.x and uses the MySQL 5.x database.
16. IVY VE – Audio File Management
IVY-VE uses audio extracts (segments), in MPEG-2 Audio Layer III (MP3) format, from the LLP project BACKBONE, wrapped in XSPF (XML Shareable Playlist Format) playlists (scripts) and played within pre-fabricated scenes.
Audio segments are uniquely named and can be interchanged, within each script, to derive further language combinations of that scenario.
Actors may speak for more than one consecutive segment, allowing greater flexibility in creating scripts where one actor talks for extended periods.
However, it is assumed that only one actor talks per audio segment and there is no overlap between actors’ speech.
Each script has textual information associated with it, such as brief content information, a scene description and domain keywords.
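Assembling such a playlist can be sketched with the XSPF track-list structure and the standard library. The segment file names and URLs below are invented examples, not actual BACKBONE material:

```python
import xml.etree.ElementTree as ET

XSPF_NS = "http://xspf.org/ns/0/"

def build_xspf(title, segment_urls):
    """Build an XSPF playlist wrapping an ordered list of audio segments."""
    ET.register_namespace("", XSPF_NS)
    playlist = ET.Element(f"{{{XSPF_NS}}}playlist", version="1")
    ET.SubElement(playlist, f"{{{XSPF_NS}}}title").text = title
    tracklist = ET.SubElement(playlist, f"{{{XSPF_NS}}}trackList")
    for url in segment_urls:
        # one <track> per audio segment, in playback order
        track = ET.SubElement(tracklist, f"{{{XSPF_NS}}}track")
        ET.SubElement(track, f"{{{XSPF_NS}}}location").text = url
    return ET.tostring(playlist, encoding="unicode")

xml_text = build_xspf("Business meeting (EN-DE)",
                      ["http://example.org/seg_001_en.mp3",
                       "http://example.org/seg_002_de.mp3"])
```

Because segments are uniquely named, swapping the URLs in `segment_urls` for their counterparts in another language derives a new language combination of the same scenario without touching the scene itself.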
17. IVY VE – Heads Up Display I
The HUD is built using the jQuery JavaScript library, displaying the list of scenarios in the database as a drill-down menu.
It is normally attached to the bottom-left corner of the user’s viewport.
Audio is played by means of a Flash player, which parses the XSPF playlists upon scenario selection.
Navigation through the island is performed using SLurls, which provide direct URL-like teleport links to locations within the virtual world.
Each SLurl is called upon scenario launch, triggering the native SL-client teleport interface.
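Building such a teleport link is essentially string formatting over the region name and in-world coordinates. A sketch, with a made-up region name for illustration:

```python
from urllib.parse import quote

def make_slurl(region, x, y, z):
    """Return a maps.secondlife.com SLurl pointing at in-world coordinates."""
    # Region names may contain spaces, so they must be percent-encoded
    return f"http://maps.secondlife.com/secondlife/{quote(region)}/{x}/{y}/{z}"

url = make_slurl("IVY Island", 128, 96, 24)
# -> "http://maps.secondlife.com/secondlife/IVY%20Island/128/96/24"
```

Opening such a link from the HUD hands control to the SL client, which presents its native teleport confirmation dialogue.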
18. IVY VE – Heads Up Display II
Login, form selection and language combination selection views
Free Scenarios views – Scenario Info & Player
Locked Scenarios views
19. IVY VE – Administrator’s Panel I
The management console allows Content Administrators to easily populate the scenario database and create XSPF playlists.
New scenarios are created through a form, where administrators provide textual information (title, language combinations, participants’ gender, etc.) as well as an ordered list of the audio tracks in each scenario.
Scenario textual info is stored in the DB, and XSPF playlists in a separate playlist inventory.
A separate listing of all the scenarios in the system allows administrators to see which user is currently working on each scenario, and also to fire up their SL client and teleport to that location.
20. IVY VE – Administrator’s Panel II
Scenario Upload Form
Scenario Listing (with Teleport functionality)
21. IVY VE – Preliminary Functional Evaluation
A preliminary evaluation of the current prototype was conducted by nine interpreting experts and two virtual world experts, using talk-aloud, try-out sessions in Second Life, followed by a discussion with the assessor.
The evaluation focused on the HUD functionality and the in-world locations.
The admin panel was not evaluated at this stage; however, it has been successfully used by content administrators for the past four months to upload scenarios into our system.
Overall opinion was quite positive, and users with very limited experience in virtual worlds, gaming or similar environments felt comfortable using IVY-VE.
However, a series of limitations of the current system were pinpointed…
22. IVY VE – Current Limitations
Some users focused too much on the HUD, not paying attention to the world.
No sound directionality and no visual cues as to who is speaking.
Interpreters were enthusiastic to have a tool – but does that generate bias?
Many comments regarding the aesthetics of locations – all participants noticed and commented on the quality of locations that use photorealistic graphics.
No zoning, instancing or replication, as in games.
The limit on available ‘prims’ affects world scalability…
… and consequently scenario management and execution, and actor placement.
Currently bots are manually placed in the required scenarios – we now use many bots and manage locations to cover all gender combinations in our corpus.
Sound is heard only from the player controller and is not broadcast back to the world – hence observers do not hear it.
23. IVY VE – Future Work
Two evaluation cycles will follow, focused on interpreting students and potential clients of interpreters, during the autumn of 2012, and will attempt to gather feedback on IVY-VE usability and the resulting sense of immersion.
The HUD is currently being updated to accommodate the additional modes, namely exploration and live interaction.
Interpreting mode will also be enhanced with a series of exercises, both generic to dialogues and monologues and specific to particular scenarios which present an inherent challenge in interpreting practice.
Enhancement of the current system with dedicated service components to allow puppeteering of bots.
Use student/client feedback and experience from using IVY-VE to design a new bespoke system…
… while exploring alternative technologies that allow tighter integration with the current web-based scenario system, e.g., Unity and WebGL.
24. IVY VE – Forthcoming Events
Virtual Learning Technologies (VLT) 2012, Bangor, Wales, 31st October 2012
Exploiting Emerging Technologies to Prepare Interpreters and their Clients
for Professional Practice, London, 23rd November 2012
For more info visit:
Consortium website http://www.virtual-interpreting.net/Seminar.html
…or the Bangor IVY partner website http://www.vmg.cs.bangor.ac.uk/IVY/