StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery, by ivaderivader
The paper presents three methods for text-driven manipulation of StyleGAN imagery using CLIP:
1. Direct optimization of the latent w vector to match a text prompt
2. Training a mapping function to map text to changes in the latent space
3. Finding global directions in the latent space corresponding to attributes by measuring distances between text embeddings
The methods allow editing StyleGAN images based on natural language instructions and demonstrate CLIP's ability to provide fine-grained controls, but rely on pretrained StyleGAN and CLIP models and may struggle with unseen text or image domains.
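Method (1) above, direct latent optimization, can be sketched as gradient descent on a latent code to maximize the cosine similarity between the generated image's CLIP embedding and the text prompt's CLIP embedding. The toy below is an assumption-laden stand-in: a fixed linear map plays the role of CLIP's image encoder composed with the StyleGAN generator, and a fixed unit vector plays the text embedding; the real method backpropagates through the pretrained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the pretrained models (assumptions, not the real networks):
# a fixed linear map plays the role of CLIP's image encoder composed with
# the StyleGAN generator; a fixed unit vector plays the CLIP text embedding.
D_LATENT, D_EMB = 64, 32
encode = rng.normal(size=(D_EMB, D_LATENT)) / np.sqrt(D_LATENT)
text_emb = rng.normal(size=D_EMB)
text_emb /= np.linalg.norm(text_emb)

def clip_loss(w):
    """1 - cosine similarity between the (stand-in) image and text embeddings."""
    img_emb = encode @ w
    return 1.0 - img_emb @ text_emb / np.linalg.norm(img_emb)

def grad(w, eps=1e-5):
    """Numerical gradient; the real method backpropagates through CLIP/StyleGAN."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (clip_loss(w + e) - clip_loss(w - e)) / (2 * eps)
    return g

w = rng.normal(size=D_LATENT)      # initial latent code
loss_start = clip_loss(w)
for _ in range(100):               # method (1): optimize the latent directly
    w -= 1.0 * grad(w)
print(loss_start, clip_loss(w))    # the loss should decrease
```

The paper's actual loss also includes an L2 term keeping w near its starting point and an identity-preservation term, omitted here for brevity.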
Deep learning for natural language embeddings, by Roelof Pieters
This document discusses approaches to understanding natural language through deep learning techniques. It begins by outlining some of the challenges of language understanding, such as ambiguity and productivity. It then discusses using neural networks for natural language processing tasks like language modeling, sentiment analysis and machine translation. Recurrent and recursive neural networks are presented as approaches to model the compositionality of language. Different methods for obtaining word embeddings like Word2Vec, GloVe and earlier distributional semantic models are also summarized.
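The word-embedding idea the slides cover can be illustrated with the classic analogy test. The vectors below are tiny, hand-picked stand-ins (an assumption for illustration); real Word2Vec or GloVe vectors are learned from large corpora and typically have 100-300 dimensions.

```python
import numpy as np

# Toy 3-d "embeddings", hand-picked so the analogy works; real Word2Vec or
# GloVe vectors are learned from corpora, not chosen by hand.
vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy test: king - man + woman should land nearest to queen.
target = vec["king"] - vec["man"] + vec["woman"]
best = max((word for word in vec if word not in {"king", "man", "woman"}),
           key=lambda word: cosine(target, vec[word]))
print(best)  # queen
```

In practice one would use a library such as gensim's `KeyedVectors.most_similar` on trained vectors rather than hand-built ones.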
ICCV 2009: recognition and learning object categories p2 c03 - objects and an..., by zukun
The document discusses research at the intersection of vision and language processing in the human brain. It describes how different areas of the brain are involved in processing vision and language, including areas responsible for object recognition (LOC) and face recognition (FFA). It also discusses early work using simple images to understand how humans can quickly summarize a visual scene in a sentence after only brief exposures.
Yurii Pashchenko: Zero-shot learning capabilities of CLIP model from OpenAI, by Lviv Startup Club
AI & BigData Online Day 2021
Website - https://aiconf.com.ua/
Youtube - https://www.youtube.com/startuplviv
FB - https://www.facebook.com/aiconf
Speaker: Kyunghyun Cho (Professor, NYU)
Kyunghyun Cho is an assistant professor of computer science and data science at New York University.
He was a postdoctoral fellow at the University of Montreal until summer 2015, and received his PhD and MSc degrees from Aalto University in early 2014.
He tries his best to find a balance among machine learning, natural language processing, and life, but often fails to do so.
Overview:
There are three axes along which advances in machine learning and deep learning happen. They are (1) network architectures, (2) learning algorithms and (3) spatio-temporal abstraction.
In this talk, I will describe a set of research topics I’ve pursued in each of these axes.
- For network architectures, I will describe how recurrent neural networks, which were largely forgotten during the '90s and early 2000s, have evolved over time and have finally become the de facto standard in machine translation.
- I continue by discussing various learning paradigms, how they relate to each other, and how they can be combined to build a strong learning system. Along this line, I briefly discuss my latest research on designing a query-efficient imitation learning algorithm for autonomous driving.
- Lastly, I present my view on what it means to be a higher-level learning system. Under this view, each end-to-end trainable neural network serves as a module, regardless of how it was trained, and interacts with the others to solve a higher-level task.
I will describe my latest research on trainable decoding algorithms as a first step toward building such a framework.
Video: https://youtu.be/soZXAH3leeQ (This talk will be given in English.)
- Project name: HomeNavi
- Talk title: 3D Environment HOMENavi
- Speakers: 이의령 (RL Korea) / 양홍선 (Korea University)
- Summary: An introduction to recent research directions and the outlook for reinforcement-learning-based navigation in 3D environments. We cover the advantages and disadvantages of a reinforcement learning approach compared with the SLAM-based navigation methods traditional in robotics, survey recently released 3D reinforcement learning environments, briefly explain the baseline papers, and share lessons learned from running the experiments ourselves.
This document summarizes an information session about the Boston University Developer Student Club (BU DSC). The BU DSC is a student-run community that empowers students through technology to solve local problems. It introduces the BU DSC team and leaders. It also outlines upcoming BU DSC events and workshops on topics like Git, GitHub, Flutter and career development. Students are encouraged to join events and provide feedback on what else they want to learn through a short Google form.
This document provides an introduction and overview of the National Taipei University of Education Developer Student Club (DSC NTUE), led by Oscar Chen. It outlines Oscar's background and experience, then discusses the structure and roles of the DSC core team and subteams. The document outlines the DSC's planned workshop series covering topics like Python, App Inventor, Dialogflow, Firebase, and Android Study Jams. It provides a timeline for the workshops from November to December and discusses opportunities for students, including career certificates and Google Developer badges. It encourages students to use Google technologies to solve real-world problems and stresses that, as part of the global DSC community, students have opportunities to impact the world.
This document provides an overview of a workshop to build a web app using JavaScript and jQuery. The agenda includes going over starter code, learning key concepts, building the app, reviewing solutions, and discussing next steps. Suggestions for learning are given, such as not getting discouraged and taking advantage of support. Participants will work on grabbing elements, attaching event listeners, and using resources like Google.
This document discusses expert systems, transfer learning, and their impact on future projects. It summarizes building an image recognition model in 20 minutes using transfer learning. It also discusses the limitations for small teams in machine learning and how those limitations are being lifted, such as through reduced data needs and increased processing power availability.
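The transfer-learning recipe mentioned above (reuse a pretrained model, train only a small new head) can be sketched in miniature. The "backbone" here is a frozen random projection standing in for pretrained convolutional features (an assumption for illustration); only the linear head is trained, by plain logistic regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained backbone: a frozen random feature map. In the
# setup described above you would instead reuse, say, an ImageNet-trained
# CNN's convolutional layers and train only the new classification head.
def backbone(x, W=rng.normal(size=(2, 16))):
    return np.tanh(x @ W)          # frozen features, never updated

# Tiny two-class toy dataset.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

feats = backbone(X)                # extract features once (backbone is frozen)
w, b = np.zeros(16), 0.0           # the only trainable parameters: the head

for _ in range(300):               # train the linear head by logistic regression
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    g = p - y                      # gradient of the log-loss w.r.t. logits
    w -= 0.1 * feats.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = (((feats @ w + b) > 0) == (y > 0.5)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

This is why small teams can get results quickly: the expensive part (the backbone) is already trained, and only a tiny head needs data and compute.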
This document provides an introduction to JavaScript. It outlines the agenda which includes learning key JavaScript concepts, completing assignments with support from TAs, and reviewing the answer key. It then covers topics like variables, functions, if/else statements, comparing values, and using parameters in functions. Examples are provided. The document also gives information about the instructor, TJ Stalcup, and Thinkful's approach to project-based learning. It concludes by discussing ways to continue learning JavaScript after the intro workshop.
The document summarizes how the neuros team develops web applications. They follow an agile methodology involving planning, prototyping, and development phases. In planning, they identify business processes and requirements. They prototype UIs with Figma and design application architecture with UML. In development, they use Trello, Planning Poker and version control tools. They develop frontends with Vue.js and backends with Django. Vue.js and Django are preferred for their flexibility, performance, security and large community support. Neuros aims to deliver custom, responsive web applications using best practices.
Dinesh Saivarma Rudraraju has over 2 years of experience as a Software Development Engineer at Amazon and a Technology Analyst at Infosys. He received his B.Tech from the International Institute of Information Technology, Hyderabad, India in 2013. His experience includes developing platforms for customer feedback at Amazon and a floor planning tool for a location-based services application at Infosys. In academics, he worked on projects involving video surveillance systems and an interpreter for the BASIC programming language.
Kyle Morrison has a Master of Fine Arts in Dramatic Media Production from the University of Georgia and a Bachelor of Science in Information Technology from the University of Missouri. He has experience programming virtual reality experiments and environments using C#, Unity, and various VR SDKs. His technical skills also include 3D modeling, animation, game development, and web development. He is currently a 3D Modeler and Programmer at the Games and Virtual Environments Lab at UGA.
This document provides information about an Android Academy community event on Material Design concepts and implementation. It begins with introducing Jonathan Yarkoni from IronSource who will be presenting. It then provides details about the event, including that it will cover Material Design concepts, properties, and components like navigation. The document discusses Material Design principles and how they were developed. It provides examples of Material components like cards, toolbars, and tabs. Finally, it discusses implementing Material Design on older Android versions using the Android Support Library.
Richa Gupta has over 2 years of experience designing, developing, implementing, and supporting SAP PI integrations. She has worked extensively with SAP PI 7.1 and 7.3, using various adapters like IDoc, FTP, SOAP, and JDBC. Her skills include graphical mappings, parameterized mappings, monitoring, object configuration, and migration. She has contributed to several projects for clients in various industries.
The Developer Student Club at Boston University is a student-run community that empowers students through technology. It offers workshops on topics like Android and iOS app development, machine learning, and cloud infrastructure. Students of all experience levels are welcome. The club aims to help students learn new skills, build real solutions, and meet new people. Upcoming events include a resume/cover letter workshop and solution challenge where students build projects to solve local problems.
DevConf 2018: DesOps - Prepare Today for the Future of Design, by Samir Dash
The deck presented at DevConf 2018, on 5th August, at Christ University, Bengaluru.
More info at: http://desops.io/2018/07/04/talk-at-devconf18-designops-prepare-today-for-future-of-design/
What Problem is Your Organization Looking to Solve? by Float
Building the business case for immersive technology requires you to be able to address key questions about the benefits of the solutions, as well as the costs, hardware, the software required, and the overall skills needed to move forward. This session will give you targeted answers to the questions your organization may have about implementing AR and VR strategies.
Building the Neo4j Sandbox: AWS, ECS, Docker, Python, Neo4j, ++, by Ryan Boyd
Video: https://www.youtube.com/watch?v=2XbNhAJ9wh0
Try Neo4j Sandbox today: https://neo4j.com/sandbox-v2/
The Neo4j Sandbox is an environment where anyone can get their own instance of Neo4j up and running in seconds, with tutorials and datasets related to their use case.
Since Ryan wanted people to have the freedom to interact with Neo4j with full privileges, he decided to use Docker to provide container-level isolation for the Sandbox.
This session will talk about some of the design decisions made and how the architecture was achieved using Docker, EC2 Container Service, Auth0, EC2, Elastic Load Balancers, EC2 AutoScaling Group, AWS Lambda functions, Python, S3, IAM, CloudWatch, SES, FullContact, MaxMind and more.
Ryan will also discuss future features and get feedback from users during Q&A.
The INFO SESSION PowerPoint presentation by Google Developer Student Clubs (GDSC) at SDIET offers an exciting glimpse into the innovative world of technology and community collaboration. This dynamic presentation introduces attendees to the club's mission, activities, and the opportunities it provides to foster learning and skill development. Through engaging visuals and informative content, the presentation highlights past successful projects, upcoming events, and the ways in which students can actively participate in coding challenges, workshops, and tech talks. Whether you're a tech enthusiast, aspiring developer, or simply curious about the intersection of technology and creativity, this INFO SESSION offers valuable insights and a pathway to join a vibrant community of like-minded individuals.
The world around us is full of connected information. Neo4j was originally developed to solve two complex "network" problems in a document management system, as it was too hard to manage rich connection information efficiently in traditional and newer "NoSQL" databases. During this meetup, we will talk about the technology, and about the journey that a couple of technologists from Malmö took. You will learn:
- how Neo Technology grew from just the three founders into a global database company with use cases in every domain imaginable;
- how focusing on customer and community feedback allows us to provide a solution for managing connected data to everyone, not just the large internet companies.
Of course we will also introduce the graph model, its whiteboard friendliness, and how you get started with Neo4j and its easy and powerful query language, Cypher. We'll also compare the graph and relational data models to see how they differ in shape and capabilities. Then we discuss the foundations that enable graph databases to provide higher join performance, faster development processes, and more inclusive software for all stakeholders. With use cases from gaming, dating, and finance, we'll see how to apply graph capabilities to these domains to realize new functionality or opportunities that were not possible before.
Finally, if there's a question you've always wanted to ask/discuss, we'll have plenty of time for that at the end of Michael's presentation.
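The property-graph model and the "join performance" claim can be made concrete with a toy in plain Python. Neo4j stores each node's relationships adjacently ("index-free adjacency"), so a traversal step is a pointer hop rather than a table join; the node/edge data below is invented for illustration, and the Cypher shown is only an equivalent query, not executed here.

```python
# A property graph in miniature: nodes with labels and properties, plus an
# adjacency list per node. Traversing a relationship is a direct lookup,
# which is the intuition behind graph databases' join performance.
nodes = {
    1: {"label": "Person", "name": "Ann"},
    2: {"label": "Person", "name": "Bob"},
    3: {"label": "Game",   "title": "Chess"},
}
edges = {  # node id -> list of (relationship type, target node id)
    1: [("FRIEND", 2), ("PLAYS", 3)],
    2: [("PLAYS", 3)],
}

def expand(node_id, rel_type):
    """Follow all relationships of one type from a node (one traversal step)."""
    return [dst for rel, dst in edges.get(node_id, []) if rel == rel_type]

# Equivalent Cypher (illustrative, not executed here):
#   MATCH (p:Person {name: 'Ann'})-[:FRIEND]->(f)-[:PLAYS]->(g) RETURN g.title
ann = next(i for i, n in nodes.items() if n.get("name") == "Ann")
games = [nodes[g]["title"] for f in expand(ann, "FRIEND")
                           for g in expand(f, "PLAYS")]
print(games)  # ['Chess']
```

A relational database would answer the same question with two joins through a relationship table; the graph answers it with two pointer hops per row.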
Diversity Is All You Need (DIAYN): Learning Skills without a Reward Function, by YeChan (Paul) Kim
DIAYN is an unsupervised reinforcement learning method that learns diverse skills without a reward function. It works by maximizing the mutual information between skills and the states visited, so that different skills lead to different states, while minimizing the mutual information between skills and actions given the state, so that skills are distinguished by states rather than actions. It also maximizes the entropy of the policy to encourage exploration and skill diversity. Experiments show DIAYN discovers locomotion skills in complex environments and sometimes learns skills that solve benchmark tasks. The learned skills can then be adapted to maximize rewards, used for hierarchical RL, and used to imitate experts.
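The mutual-information objective above reduces to a simple intrinsic reward: a discriminator q(z|s) tries to infer the active skill z from the state s, and the agent is rewarded when its states make the skill easy to identify, r(s, z) = log q(z|s) - log p(z). A minimal sketch, with the discriminator outputs hard-coded for illustration:

```python
import numpy as np

# DIAYN's intrinsic reward (a sketch). With p(z) uniform over K skills:
#   r(s, z) = log q(z|s) - log p(z)
K = 4                                   # number of skills
p_z = np.full(K, 1.0 / K)               # fixed uniform skill prior

def diayn_reward(q_z_given_s, z):
    """Intrinsic reward for skill z given the discriminator's posterior."""
    return np.log(q_z_given_s[z]) - np.log(p_z[z])

# If the discriminator confidently identifies the skill, reward is positive;
# if the state reveals nothing (posterior equals prior), reward is exactly 0.
confident = np.array([0.85, 0.05, 0.05, 0.05])
uninformative = p_z.copy()
print(diayn_reward(confident, 0))       # > 0: state is skill-discriminable
print(diayn_reward(uninformative, 0))   # 0.0
```

In the full method, the discriminator is a trained network and this reward replaces the environment reward in an off-the-shelf max-entropy RL algorithm.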
Slides from the PyCon Korea 2018 tutorial session "RL Adventure: From DQN to Rainbow DQN".
A tutorial to help understand Rainbow, the value-based reinforcement learning model published by DeepMind in 2017; it walks sequentially from DQN to Rainbow, summarizing only the key points.
Part 1: DQN, Double & Dueling DQN - 성태경
Part 2: PER and NoisyNet - 양홍선
Part 3: Distributed RL - 이의령
Part 4: RAINBOW - 김예찬
The accompanying code and implementations are available at
https://github.com/hongdam/pycon2018-RL_Adventure
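The value-based updates the tutorial walks through can be sketched in tabular form. DQN's target uses one network to both select and evaluate the next action; Double DQN decouples the two to reduce overestimation. Two small Q-value arrays stand in for the online and target networks (the numbers are invented for illustration):

```python
import numpy as np

gamma = 0.99  # discount factor

def dqn_target(r, q_target_next):
    """DQN: select and evaluate the next action with the target network."""
    return r + gamma * np.max(q_target_next)

def double_dqn_target(r, q_online_next, q_target_next):
    """Double DQN: select with the online net, evaluate with the target net."""
    a_star = int(np.argmax(q_online_next))
    return r + gamma * q_target_next[a_star]

q_online_next = np.array([1.0, 5.0])   # online net overestimates action 1
q_target_next = np.array([4.0, 3.0])
print(dqn_target(0.0, q_target_next))                       # 0 + 0.99 * 4.0
print(double_dqn_target(0.0, q_online_next, q_target_next)) # 0 + 0.99 * 3.0
```

The remaining Rainbow ingredients (dueling heads, PER, NoisyNet, distributional RL, n-step returns) modify the network, the replay sampling, or the target distribution, but this target computation is the common core.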
ESR Spectroscopy in Liquid Food and Beverages, by Priyanka Patel
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the essential micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin trapping technique is useful for detecting highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin trapping technique.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub..., by Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
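The structured artifact the paper proposes, the Immersive Learning Case Sheet, is essentially a record that tags a case at the taxonomy's three levels plus a refined free-text description. A minimal sketch, with field names and example values that are assumptions for illustration rather than the authors' schema:

```python
from dataclasses import dataclass, field

# A minimal sketch of an "Immersive Learning Case Sheet": one record tagging
# a case at the three taxonomy levels (uses, practices, strategies) plus a
# refined description. Field names are illustrative assumptions.
@dataclass
class ImmersiveLearningCaseSheet:
    title: str
    description: str
    uses: list = field(default_factory=list)
    practices: list = field(default_factory=list)
    strategies: list = field(default_factory=list)

case = ImmersiveLearningCaseSheet(
    title="VR training for wind turbine maintenance",
    description="Trainees rehearse maintenance procedures in a virtual nacelle.",
    uses=["simulation of dangerous equipment"],
    practices=["procedural task rehearsal"],
    strategies=["learning by doing"],
)
print(case.title)
```

Structuring cases this way is what enables the comparisons the paper argues for: two sheets can be diffed field by field instead of comparing free text.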
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu..., by Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
The debris of the 'last major merger' is dynamically young, by Sérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
3. Project Introduction
Home Navigation based on a 3D Environment
• House (indoor) 3D dataset
• Reinforcement learning environment
• Carrying out instruction-based tasks such as 'Go to Kitchen'
9. Mobile Robot
A mobile robot is a robot that is capable of locomotion. (Wikipedia)
Navigation technology breakdown:
• Driving: path planning, obstacle avoidance, recognizing the surroundings
• Localization & Mapping: dead reckoning, landmark-based localization, SLAM
Credit : Machine Learning & Robotics / Geonhee Lee
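Dead reckoning, listed under Localization & Mapping above, simply integrates odometry over time. A minimal sketch, assuming a unicycle motion model with forward speed `v` and turn rate `omega` (the function name and interface are illustrative, not from the slides):

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Integrate unicycle-model odometry for one time step.

    pose: (x, y, theta); v: forward speed; omega: turn rate; dt: step length.
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight for 1 m, then turn 90 degrees in place.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, v=1.0, omega=0.0, dt=1.0)          # (1.0, 0.0, 0.0)
pose = dead_reckon(pose, v=0.0, omega=math.pi / 2, dt=1.0)  # heading now pi/2
```

Because every step adds sensor noise in a real robot, dead-reckoning error grows without bound, which is why it is combined with landmark observations or SLAM.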
10. Path Planning
• Generates a trajectory from the robot's current position to a goal position designated on the map
• The robot's path is generated in two stages: global path planning and local path planning on the map
• Algorithms: A*, D*, RRT (Rapidly-exploring Random Tree), Probabilistic Roadmap, etc.
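Of the algorithms listed, A* is the most common starting point. A minimal sketch of A* on a 4-connected occupancy grid with a Manhattan-distance heuristic (the grid encoding and function interface are illustrative choices, not from the slides):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns the path from start to goal as a list of (row, col) cells,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]          # entries are (f, g, cell)
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                        # reconstruct path by backtracking
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours right and around the obstacle row
```

In the global/local split described above, A* typically produces the global path on the map, while a local planner handles obstacle avoidance along it.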
11. SLAM
Simultaneous Localization and Mapping
• The computational problem of constructing a map of an environment while simultaneously keeping track of the robot's location within it
Credit : Fast Campus SLAM Workshop 2018 / Dong-Won Shin
13. SLAM
Mapping
• Scenarios in which a prior map is not available and needs to be built
• The map can inform path planning or provide an intuitive visualization for a human or robot
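The mapping half of SLAM is often represented as an occupancy grid. A minimal sketch of the idea, assuming the robot's poses are already known (so this is pure mapping, not full SLAM) and using simple hit/miss counters instead of a proper log-odds filter; all names here are illustrative:

```python
def update_grid(counts, hits, misses):
    """Update per-cell occupancy counters from one sensor sweep.

    counts: dict mapping cell -> (occupied_hits, total_observations)
    hits:   cells the range sensor reported as occupied
    misses: cells the sensor beam passed through (observed free)
    """
    for cell in hits:
        occ, tot = counts.get(cell, (0, 0))
        counts[cell] = (occ + 1, tot + 1)
    for cell in misses:
        occ, tot = counts.get(cell, (0, 0))
        counts[cell] = (occ, tot + 1)
    return counts

def occupancy(counts, cell):
    """Estimated occupancy probability; unobserved cells stay at 0.5."""
    occ, tot = counts.get(cell, (0, 0))
    return occ / tot if tot else 0.5

counts = {}
update_grid(counts, hits=[(2, 3)], misses=[(1, 3), (0, 3)])
update_grid(counts, hits=[(2, 3)], misses=[(1, 3)])
# (2, 3) was observed occupied in both sweeps -> probability 1.0
```

A full SLAM system would estimate the poses jointly with the map; here the hit/miss bookkeeping only illustrates how repeated observations firm up the map that path planning then consumes.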
18. Vision - Language
Vision + Language Application
• Image Captioning
Input: The man at bat readies to swing at the pitch while the umpire looks on.
Desired Output: A large bus sitting next to a very tall building.
19. Vision - Language
Vision + Language Deep Learning Architecture
• Image Captioning
Credit : https://www.analyticsvidhya.com/blog/2018/04/solving-an-image-captioning-task-using-deep-learning/
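The captioning architecture credited above pairs a CNN image encoder with a recurrent language decoder that emits one word at a time. A toy sketch of just the greedy decoding loop, where a canned feature tag and a lookup table stand in for the (hypothetical) CNN encoder and trained decoder:

```python
def encode_image(image_id):
    # A real system would run a CNN here; we return a canned feature tag.
    return {"img_bus": "bus"}.get(image_id, "unknown")

TRANSITIONS = {  # (image feature, previous word) -> next word
    ("bus", "<start>"): "a",
    ("bus", "a"): "large",
    ("bus", "large"): "bus",
    ("bus", "bus"): "<end>",
}

def caption(image_id, max_len=10):
    """Greedily emit words until <end>, conditioning on the image feature."""
    feat, word, words = encode_image(image_id), "<start>", []
    for _ in range(max_len):
        word = TRANSITIONS.get((feat, word), "<end>")
        if word == "<end>":
            break
        words.append(word)
    return " ".join(words)

caption("img_bus")  # -> "a large bus"
```

In the real architecture the lookup table is replaced by an RNN (or transformer) whose next-word distribution is conditioned on the CNN feature vector, but the word-by-word generation loop has this same shape.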
20. Vision - Language
Vision + Language Application
• Visual Question Answering (VQA)
Input: Q: What is the mustache made of? / Q: Is this a vegetarian pizza?
Desired Output: A: Bananas / A: No
21. Vision - Language
Vision + Language Deep Learning Architecture
• Visual Question Answering (VQA)
Credit : https://arxiv.org/pdf/1505.00468v6.pdf
22. Vision - Language Navigation
Evolution of Language and Vision datasets towards Actions
Credit : https://lvatutorial.github.io/
33. Vision - Language Navigation
Three capability areas combine into a 'Complete' Agent:
• Vision: image / video understanding; 3D environment perception; camera motion
• Language: instruction following; question answering; dialog
• Actions: robotics / manipulation; APIs
35-38. 3D Environment
Datasets: SUNCG (Song et al., 2017); Matterport3D (Chang et al., 2017); Stanford 2D-3D-S (Armeni et al., 2017)
Environments: AI2-THOR (Kolve et al., 2017); MINOS (Savva et al., 2017); Gibson (Zamir et al., 2018); CHALET (Yan et al., 2018); House3D (Wu et al., 2017); HoME (Brodeur et al., 2018); VirtualHome (Puig et al., 2018); AdobeIndoorNav (Mo et al., 2018); Matterport3DSim (Anderson et al., 2018)
Tasks & Metrics: EmbodiedQA; Interactive QA (Gordon et al., 2018); Vision-Language Navigation (Anderson et al., 2018); Language grounding (Chaplot et al., 2017; Hermann & Hill et al., 2017); Visual Navigation (Zhu & Gordon et al., 2017; Savva et al., 2017; Wu et al., 2017)
All of the above date from 2017 or later (!)
Credit : Connecting Language and Vision to Actions ACL2018 Tutorial / Abhishek Das
39. Paper (in project)
• House3D (Yi Wu et al., 2017): built the House3D environment; RoomNav learning model.
• Gated Attention (Chaplot et al., 2017): gated attention module; reference model for House3D RoomNav.
• Embodied QA (Abhishek Das et al., 2017): first VQA + RL approach; built the Embodied QA dataset; hierarchical model; PACMAN learning model; CVPR 2018.
• FollowNet (P. Shah et al., 2017): conditioned attention model; uses long (language) instructions; ICRA 2018.

40. Paper
• Target Driven Visual Navi (Yuke Zhu et al., 2017): target-driven visual navigation in indoor scenes; Siamese-style RL-based navigation learning model; ICRA 2017.
• CMP (Gupta et al., 2017): Cognitive Mapping and Planning for visual navigation; Value Iteration Network; CVPR 2017.
• IQA (Gordon et al., 2018): Visual Question Answering in Interactive Environments; CVPR 2018.
• VLN (Anderson et al., 2018): Vision and Language Navigation; CVPR 2018 spotlight.