Virtual and augmented realities will continue to advance, allowing for more immersive experiences. Technologies like high-resolution scanning, sophisticated virtual displays, and advanced artificial intelligence could enable the creation of highly realistic virtual humans and environments. However, issues around authenticity, data ownership, and the responsibilities of technology creators will need to be addressed as these virtual worlds become more integrated with physical reality. Future generations may experience reality in ways very different from today, as emerging technologies continue to blur the line between virtual and physical.
Dan Faggella - TEDx Slides 2015 - Artificial Intelligence and Consciousness (Daniel Faggella)
URL of the original TEDx Talk: https://www.youtube.com/watch?v=PjiZbMhqqTM
Notes from my 2015 TEDx presentation, titled: "We Should Wake Up Before The Machines Do," on the topic of artificial intelligence and consciousness.
Speaker: Daniel Faggella
Location: Southern New Hampshire University
Grady Booch, IBM Fellow and IBM’s Chief Scientist for Watson, presented “Embodied Cognition with Project Intu” as part of the Cognitive Systems Institute Speaker Series on December 8, 2016
The Future of Education, the Spatial Web and Self-Organizing Systems (Zenka Caro)
Learn about advances in citizen science, virtual reality, consciousness, and the spatial web. How can self-organizing systems support a global renaissance? This talk was given at CSUN for the distinguished speakers program and covers the future of curiosity. Video can be found here: https://youtu.be/iRgd6shlolA
"Today's Tech versus Brave Machines' Tomorrow!"
What kind of machine philosophy do we need to build a safe future? This talk discusses solutions in which computer vision will be a keystone, examining today's techniques, their pain points, and their limits in order to sketch tomorrow's technological boundaries.
My talk from Playful 11 in London where I argue we all might be cyborgs already. I talk about how we cognitively project ourselves to our surroundings and possessions, and why everything will be about software, designed behaviour and superpowers.
Artificial Intelligence, or the Brainization of the Economy (Willy Braun)
Sixty years ago, John McCarthy used the term "Artificial Intelligence" for the first time. What does it mean, and how has it evolved since 1956?
This is what daphni tried to answer in this in-depth report about AI. We've interviewed some of the brightest minds in the field: Bruno Maisonnier (founder of Aldebaran Robotics), Massimiliano Versace (CEO of Neurala), Alexandre Lebrun (co-founder of wit.ai), and Luc Julia (VP of Innovation at Samsung).
By Paul Bazin and Pierre-Eric Leibovici
Harry Collins - Testing Machines as Social Prostheses - EuroSTAR 2013 (TEST Huddle)
EuroSTAR Software Testing Conference 2013 presentation on Testing Machines as Social Prostheses by Harry Collins.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
"Click to Continue" by Sam Otis, from Content+Design Meetup, Oct. 4, 2017 (Blend Interactive)
Graphical interfaces help make powerful technology intuitive and accessible. They give us super powers. Join Sam Otis, Lead Designer at Blend Interactive, as the Sioux Falls Content + Design Group joins up with Sioux Falls Design Week for a fun look at how GUIs (Graphical User Interfaces) have developed, what makes an interface good today, and what challenges the future holds.
Singularity-Proof Yourself (Sage Franch)
Will your job exist in the future? How will your skills fit into the landscape of artificial intelligence and quantum computing? Explore the path from today to the singularity and how you can continue to be an active participant in the tech workforce of tomorrow. We look at some of the top emerging careers in technology and the skills that will be demanded in tomorrow’s job market. Learn how blockchain, AI, mixed reality, and quantum computing will transform the tech sector, and how you can prepare to be a part of building this future.
Humanity will change more in the next 20 years than in the previous 300 years. What if …robots replaced the world’s workforce?
This is the presentation delivered by Gerd Leonhard at London Business School's 2015 Global Leadership Summit.
The Future of HCI: Intelligent User Interfaces as Agents of Change (Chris Khalil)
The predominant interaction paradigm for the last 30 years has been Direct Manipulation. This metaphor is starting to crack under the weight of the information it has to deal with. The Indirect Management approach taken by systems such as Intelligent Agents aims to alleviate the cognitive load on users.
This presentation shows the constraints we face in the user experience field and some future opportunities and threats.
This presentation aims to help IP owners assess how children of today want to experience heritage brands in the digital space. Using models developed by Dubit we look at how children are consuming heritage IPs and how this can influence digital adaptations.
The presentation was presented by Dubit in 2013 at the iKids conference in New York, Sheffield's Children's Media Conference and Digital Kids in San Francisco where we were joined by Brad Jashinsky, Director of Digital Media for Summertime Entertainment - the team behind the forthcoming film Legends of Oz: Dorothy's Return.
Good news for Oculus VR and Facebook! New research from Dubit shows kids not only love to use Oculus Rift but they want to see it used in schools and other areas outside of gaming.
This document is a summary of the findings from a series of focus groups conducted with children on their experiences and expectations for Oculus Rift and virtual reality.
The Shape of Robots to Come - Robolift - March 2011 (Dominique Sciamma)
This is the presentation made during ROBOLIFT, a conference organized by LIFT in the context of INNOROBO in Lyon (March 2011), the first European exhibition devoted to robotics and services.
This presentation gives an introduction to the history and subjectiveness of Artificial Intelligence. Its primary goal is to provide a deep enough understanding of Artificial Narrow Intelligence and Artificial General Intelligence that people can appreciate the strengths and weaknesses of AI. The presentation also includes a classification (the main domains of AI) and the most relevant examples from the past decades. In the second part it provides some statistics, possible future applications, and forecasts.
Please download this SlideShare PPT, as it will give you access to all the YouTube and SlideShare streams embedded in this presentation. In this narrative PowerPoint, which connects to the work of others, I envision the future of humanity as influenced by technology.
Design Careers in the Science Fiction Future (Bill DeRouchey)
What design challenges may exist when today's junior designers are tomorrow's design leaders? This talk extrapolates from current technology trends to speculate on what new design challenges may develop over the next 20 years, and how we can prepare ourselves for the unknown future.
We’re entering a new world of virtual, mixed, and augmented reality — what some are even calling “the 4th design evolution.” This new medium comes with a fresh set of interaction challenges. Simple things, like organizing or retrieving files, placing screens, and activity switching need to be revisited. Just as the shift from desktop to mobile, and mobile to smart objects required us to rethink interaction patterns, this coming shift presents similar challenges.
We can react to these challenges or approach them in a thoughtful, structured way, considering how we live, build, and work in these immersive computing spaces. To this end, speakers Anderson and McCauley will share the framework they're developing, one that critically examines emerging mixed-reality design patterns in light of the timeless things we know about biology, cognition, and how our bodies use physical space. Attendees will see firsthand what's happening in these new mediums, from games to business applications, while also walking away with a thoughtful way to approach interactions that will prepare them for this next design evolution.
Implications of the Near and Far Future (Jon McMillan)
Speech delivered by MCCM Jon McMillan, Master Chief for Navy Public Affairs at the Navy Mass Communication Specialist 10 Year Anniversary. The near and far future will dramatically change how Navy communicators perform their job.
AI + Labor Markets - Presentation to CSM, 16 May 2024 (Joaquim Jorge)
Presentation Title: AI & Labor Markets
Presenter: Joaquim Jorge
Description:
Explore the transformative impact of Artificial Intelligence (AI) on labor markets in this comprehensive presentation by Joaquim Jorge. This insightful slideset delves into the opportunities and challenges that AI integration brings to various industries, highlighting key AI techniques and their real-world applications.
Bias in Hiring and Firing:
The presentation critically examines biases in AI systems used for hiring and firing decisions:
Hiring Bias: Instances where AI systems, like LinkedIn’s recommendation system and OpenAI's GPT, have shown biases in résumé ranking and job advertisements, including gender bias and cost-efficiency algorithms inadvertently favoring male candidates.
Firing Bias: AI's role in monitoring productivity and making termination decisions, with examples from Amazon’s “Time off Task” system and Uber’s driver performance metrics, highlighting unfair terminations affecting minority groups.
Mitigation Strategies:
Bias Audits: Regularly auditing AI systems to identify and mitigate biases.
Diverse Training Data: Ensuring training data are diverse and representative of all demographic groups.
Human Oversight: Implementing human oversight to review and validate AI decisions.
Explainable AI (XAI): Making AI decisions transparent and accountable to detect and correct biases.
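The "bias audits" strategy above can be made concrete with a small sketch. The following Python example is purely illustrative and not from the presentation: the function names, toy hiring log, and the 0.8 threshold (the common "four-fifths rule" heuristic for disparate impact) are assumptions chosen to show one simple audit metric, comparing selection rates across demographic groups.

```python
# Minimal bias-audit sketch: compare per-group selection rates in a hiring log.
# The data and the 0.8 threshold (four-fifths rule heuristic) are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs -> {group: hire rate}."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def disparate_impact(decisions, threshold=0.8):
    """Return (ratio, flagged): lowest/highest selection rate, and
    whether the ratio falls below the audit threshold."""
    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Toy audit log of (group, hired?) decisions:
log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% hired
       ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% hired
ratio, flagged = disparate_impact(log)
print(round(ratio, 2), flagged)  # 0.33 True -> audit flags likely bias
```

A real audit would of course use richer fairness metrics and statistical tests, but even this ratio check illustrates how regularly re-running a simple script over decision logs can surface the kinds of disparities described above.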
Future of Labor Markets:
The presentation explores potential futures of labor markets with AI, presenting both utopian and dystopian scenarios:
Utopian Scenario: AI could lead to increased worker satisfaction by automating repetitive tasks, creating new career opportunities, and reducing physical labor demands, resulting in better work-life balance and economic opportunities.
Dystopian Scenario: AI could widen the economic divide, increase job precarity, and erode worker rights. Risks include increased surveillance, loss of autonomy, and the social and psychological impacts of job displacement.
Key Takeaways:
Understand the role and impact of different AI technologies in various sectors.
Recognize and address biases in AI systems, especially in hiring and firing decisions.
Explore potential futures of labor markets with AI integration.
Learn strategies for ensuring ethical and fair AI applications.
This presentation is essential for professionals, researchers, and policymakers interested in the intersection of AI and labor markets, providing a detailed analysis of current trends, challenges, and future possibilities.
Artificial Intelligence: Facts and Myths (Joaquim Jorge)
The presentation explores the development and application of artificial intelligence (AI) from its inception to its current status in the modern world. The term "artificial intelligence" was first coined by John McCarthy in 1956 to describe efforts to develop computer programs capable of performing tasks that typically require human intelligence. This concept was first introduced at a conference held at Dartmouth College, where programs demonstrated capabilities such as playing chess, proving theorems, and interpreting texts.
In the early stages, Alan Turing contributed to the field by defining intelligence as the ability of a being to respond to certain questions intelligently, proposing what is now known as the Turing Test to evaluate the presence of intelligent behavior in machines. As the decades progressed, AI evolved significantly. The 1980s focused on machine learning, teaching computers to learn from data, leading to the development of models that could improve their performance based on their experiences.
The 1990s and 2000s saw further advances in algorithms and computational power, which allowed for more sophisticated data analysis techniques, including data mining. By the 2010s, the proliferation of big data and the refinement of deep learning techniques enabled AI to become mainstream. Notable milestones included the success of Google's AlphaGo and advancements in autonomous vehicles by companies like Tesla and Waymo.
A major theme of the presentation is the application of generative AI, which has been used for tasks such as natural language text generation, translation, and question answering. Generative AI uses large datasets to train models that can then produce new, coherent pieces of text or other media.
The presentation also discusses the ethical implications and the need for regulation in AI, highlighting issues such as privacy, bias, and the potential for misuse. These concerns have prompted calls for comprehensive regulations to ensure the safe and equitable use of AI technologies.
Artificial intelligence has also played a significant role in healthcare, particularly highlighted during the COVID-19 pandemic, where it was used in drug discovery, vaccine development, and analyzing the spread of the virus. The capabilities of AI in healthcare are vast, ranging from medical diagnostics to personalized medicine, demonstrating the technology's potential to revolutionize fields beyond just technical or consumer applications.
In conclusion, AI continues to be a rapidly evolving field with significant implications for various aspects of society. The development from theoretical concepts to real-world applications illustrates both the potential benefits and the challenges that come with integrating advanced technologies into everyday life. The ongoing discussion about AI ethics and regulation underscores the importance of managing these technologies responsibly to maximize their benefits while minimizing potential harms.
Recent advances in VR and AR technology have enabled interactive graphics applications to support healthcare professionals in training, diagnosis, planning, and treatment. This field has progressed enough to warrant a course that can inspire new ideas within the graphics community. Medical images are used to create virtual human anatomy models, allowing for natural interaction and visualization in healthcare scenarios. VR and AR are conceptually different and suited to different types of problems. VR is immersive and suitable for learning anatomy, practicing surgical skills, and analyzing 3D medical data. AR, on the other hand, overlays helpful information onto the physical environment, making it useful for tasks such as communication with patients and training assistants. However, several challenges, such as nonstandard equipment and disorientation, still limit the widespread use of these technologies. This course covers current advances and challenges in this area, including integrating AI techniques.
About the Speaker: Joaquim Jorge holds the UNESCO Chair of Artificial Intelligence & Extended Reality at the University of Lisboa, Portugal. He joined Eurographics in 1986 and ACM/SIGGRAPH in 1989. He is Editor-in-Chief of the Computers and Graphics Journal, Eurographics Fellow, ACM Distinguished Member, and member of IEEE Computer Society Board of Governors. He organized 50+ conferences, including Eurographics 2016 (IPC CO-Chair), IEEE VR 2020/21/22 as co-(papers)chair, and ACM IUI 2012 (IPC co-chair). He served on 210+ program committees and (co)authored over 360 peer-reviewed publications and five books. His research interests include graphics, virtual reality, and advanced HCI techniques applied to health technologies.
Websites:
https://en.wikipedia.org/wiki/Joaquim_Jorge_(computer_scientist)
Google Scholar:
https://scholar.google.com/citations?user=RgiMdpAAAAAJ&hl=en
D.S. Lopes, D. Medeiros, S.F. Paulo, P.B. Borges, V. Nunes, V. Mascarenhas, M. Veiga, J.A. Jorge, Interaction Techniques for Immersive CT Colonography: A Professional Assessment, In: Frangi A., Schnabel J., Davatzikos C., Alberola-López C., Fichtinger G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, vol 11071, Pages 629–637, Springer, Cham, 2018. DOI: 10.1007/978-3-030-00934-2_70
M. Sousa, D. Mendes, S. Paulo, N. Matela, J. Jorge, D.S. Lopes, VRRRRoom: Virtual Reality for radiologists in the reading room, Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems (CHI 2017), New York: ACM Press, 2017. DOI: 10.1145/3025453.3025566
D.S. Lopes, P.F. Parreira, S.F. Paulo, V. Nunes, P.A. Rego, M.C. Neves, P.S. Rodrigues, J.A. Jorge, On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface, Journal of Biomedical Informatics, 72, Pages 140–149, 2017. DOI: 10.1016/j.jbi.2017.07.009
Invited paper from the Computers & Graphics Journal, presented at the SMI 2023 Conference in Genoa, Italy, on July 4th, 2023. Abstract: Highly complex and dense models of 3D objects have recently become indispensable in digital industries. Mesh decimation then plays a crucial role in the production pipeline to efficiently get visually convincing yet compact expressions of complex meshes. However, the current pipeline typically does not allow artists to control the decimation process beyond a simplification rate. Thus a preferred approach in production settings splits the process into a first pass of saliency detection, highlighting areas of greater detail and allowing artists to iterate until satisfied before simplifying the model. We propose a novel, efficient multi-scale method to compute mesh saliency at coarse and finer scales, based on fast mesh entropy of local surface measurements. Unlike previous approaches, we ensure a robust and straightforward calculation of mesh saliency even for densely tessellated models with millions of polygons. Moreover, we introduce a new adaptive subsampling and interpolation algorithm for saliency estimation. Our implementation achieves speedups of up to three orders of magnitude over prior approaches. Experimental results showcase its resilience in problem scenarios and show that it scales efficiently to multi-million-vertex meshes. Our evaluation with artists in the entertainment industry also demonstrates its applicability to real use-case scenarios.
Authors:
Rafael Kuffner dos Anjos, Leeds, UK
Richard Andrew Roberts and Benjamin Allen, VUW, NZ,
Joaquim Jorge, INESC-ID, U Lisboa, PT
Ken Anjyo, OLM, JP
How to Craft and Deliver Winning Presentations (Joaquim Jorge)
Public speaking is often referred to as a soft skill. However, your career, your job opportunities, promotion, and tenure (for professors) are intrinsically tied to being able to speak in public and persuade others of your ideas.
Being able to "sell" your work, opinions, ideas, and points of view can be as important as being good at Math, Coding, and doing good science.
Many tend to ignore this (at their peril). This talk will present ways to make your ideas, thoughts, and theses look appealing and engaging to others. While I am using PowerPoint, these practices and techniques will work equally well if you have PDF slides, Keynote presentations, or even (gasp!) Prezi...
I will also cover slide preparation techniques, practical multimedia advice, and delivery how-tos (and not-tos).
Bio: Joaquim Jorge heads the Graphics and Interaction Research Line at INESC-ID and is a Full Professor of Computer Science at Instituto Superior Técnico (IST), the School of Engineering of the University of Lisboa, Portugal, and is Editor-in-Chief of the Computers and Graphics Journal. He received Ph.D. and MSc degrees in Computer Science from Rensselaer Polytechnic Institute, Troy, NY, in 1995 and a BSEE from IST in 1984. An organizer of 50+ scientific conferences, Jorge has delivered over 42 invited talks and public presentations. He is a Distinguished Speaker and Member of the Association for Computing Machinery (ACM) and a Distinguished Visitor of the Institute of Electrical and Electronics Engineers (IEEE). He received the IFIP Silver Core in 2014 and is a Fellow of the Eurographics Association.
The growing interest in Augmented Reality (AR), together with the renaissance of Virtual Reality (VR), has opened new approaches and techniques for how professionals interact with medical imagery, plan, train for, and perform surgeries, and also help people with special needs in rehabilitation tasks. Indeed, many medical specialties already rely on 2D and 3D image data for diagnosis, surgical planning, surgical navigation, medical education, or patient-clinician communication.
However, the vast majority of current medical interfaces and interaction techniques continue unchanged, while the most innovative solutions have not unleashed the full potential of VR and AR. This is probably because extending conventional workstations to accommodate VR and AR interaction paradigms is not free of challenges. Notably, VR- and AR-based workstations, besides having to render complex anatomical data at interactive frame rates, must promote proper anatomical insight, boost visual memory through seamless visual collaboration between professionals, free interaction from being seated at a desk (e.g., using mouse and keyboard) so users can adopt non-stationary postures and walk freely within a workspace, and must also support a fluid exchange of image data and 3D models, as this fosters interesting discussions to solve clinical cases. Moreover, VR- and AR-based techniques must also be designed according to good human-computer interaction principles, since it is well known that medical professionals can be resistant to changes in their workflow. In this course, we will survey recent approaches to healthcare, including diagnosis, surgical training, planning, and follow-up, as well as AR/MR/VR tools for patient rehabilitation. We discuss challenges, techniques, and principles in applying Extended Reality in these contexts and outline opportunities for future research. References: https://dl.acm.org/citation.cfm?id=3359418 This course was also presented in Toronto in two additional talks.
Anatomy Studio: a Tool for Virtual Dissection Through Augmented 3D Reconstruc... (Joaquim Jorge)
3D reconstruction from anatomical slices allows anatomists to reconstruct real structures by tracing organs from a lengthy series of cryosections.
Notwithstanding, conventional interfaces rely on isolated single-user experiences using mouse-based input for tracing.
In this work, we present Anatomy Studio, a collaborative mixed-reality approach, combined with tablets and styli, to assist anatomists by easing manual image segmentation and exploration tasks.
We contribute novel interaction techniques intended to promote spatial understanding and expedite manual segmentation.
By using mid-air interactions and interactive surfaces, anatomists can easily access any cryosection and edit contours while following other users' contributions.
A user study including experienced anatomists and medical professionals, conducted in real working sessions, demonstrates that Anatomy Studio is appropriate and useful for 3D reconstruction.
Results indicate that our approach encourages closely-coupled collaborations and group discussion.
We also discuss the implications of our work and provide domain insights.
Top 10 Ways to Get Your Paper Rejected at the Computers and Graphics Journal (Joaquim Jorge)
How NOT to get your paper published in the Computers and Graphics Journal: the top 10 mistakes that help Editors and Reviewers reject your paper. Also includes some career advice for Computer Graphics and Interactive Techniques researchers, covering traps and pitfalls for students and researchers to avoid.
Virtual Reality for Health Applications - SIGGRAPH Asia 2018 (Joaquim Jorge)
VRRRRoom: Reading room conditions such as illumination, ambient light, human factors, and display luminance play an important role in how radiologists analyze and interpret images. Indeed, serious diagnostic errors can appear when studying images through everyday monitors, namely whenever professionals are ill-positioned with respect to the display or visualize images under improper light and luminance conditions. In this work, we show that virtual reality head-mounted displays can assist radiodiagnostics by considerably diminishing the effects of unsuitable ambient conditions. Our approach combines virtual reality displays with interactive surfaces to support professional radiologists in analyzing medical images and formulating diagnostics. We evaluated our prototype with senior radiologists and radiology residents, and results indicate that our approach constitutes a viable, flexible, portable and cost-efficient alternative to traditional radiology reading rooms.
iCOLONIC: CT Colonography (CTC) is considered the leading imaging technique for colorectal cancer (CRC) screening. However, conventional CTC systems rely on clumsy 2D input devices and stationary displays that make it hard to perceive the colon structure in 3D. To visualize such anatomically complex data, the immersion and freedom of movement afforded by Virtual Reality (VR) systems bear the promise of assisting clinicians to improve 3D reading, thus enabling more expedite diagnoses. To this end, we propose iCOLONIC, a set of interaction techniques using VR to perform CTC reading. iCOLONIC combines immersive fly-through navigation with positional tracking, multi-scale representations, and mini-maps to guide radiologists and surgeons while navigating throughout the colon. Contrary to stationary VR solutions, iCOLONIC allows users to freely walk within a workspace to analyze both local and global 3D features.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
2. What I want from my tech future
Widely available high-res scanning
Increasingly sophisticated virtual spaces to inhabit
Next-gen dimensional displays supporting this
Advancing tech like AI and more for avatars & friends
Addressing issues
What if…?
Back to the future
14. State-of-the-art virtual characters
AI programming can help create Virtual Humans that:
are very conversational and responsive
can see/perceive what the human with whom they interact is feeling
17. Avatars
An avatar is a three-dimensional construct that represents a human person
• Currently used for gameplay
• Or for social VR
• Or for immersive video conferencing
Avatars allow us to essentially be in two or more places at once.
18. Avatar use is increasing
Avatar use continues to grow – especially among the upcoming generations.
1.5 billion kids with avatars!
Kids are used to presenting themselves in this way, and to experiencing others as avatars. (free social time)
What do avatars do for them – for us? < huge topic
We project some part of our self through our avatar(s)
20. High Fidelity expression tracking
How it does this
No need for markers
More sophisticated computer vision looks at facial features instead
Eyes, nostrils, mouth lines, etc.
Or estimates facial movements from voice intonation
Body behaviors can be brought in via a video/depth-sensing device (Kinect)
These will be integrated into our devices soon
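A markerless pipeline like the one above ends with detected landmark positions being mapped onto an avatar's face. A minimal sketch of that last step, assuming hypothetical landmark names and illustrative calibration values (in a real system, the neutral lip gap and face height would come from the tracker itself):

```python
# Hypothetical sketch: map 2D facial landmarks to an avatar blendshape weight.
# Landmark names and calibration numbers are illustrative assumptions, not a
# real tracker's API.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_open_weight(landmarks, neutral_gap, face_height):
    """Normalize the lip gap by face height so the weight is scale-invariant."""
    gap = dist(landmarks["upper_lip"], landmarks["lower_lip"])
    # 0.0 = neutral face, 1.0 = fully open; clamp to the valid blendshape range.
    w = (gap - neutral_gap) / (0.25 * face_height)
    return max(0.0, min(1.0, w))

landmarks = {"upper_lip": (100, 150), "lower_lip": (100, 170)}
print(mouth_open_weight(landmarks, neutral_gap=5.0, face_height=200.0))  # → 0.3
```

A full rig would compute dozens of such weights (brow raise, eye blink, smile) and stream them to the avatar every frame.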
21. What if we cannot actually pilot or inhabit our avatar?
30. Future displays
• What if the display could control actual molecules?
• A nano-molecular display?
• The molecules form solid objects, like chairs, good enough to physically sit in
• The realization of the Star Trek Holodeck!
• Ivan Sutherland's "Wonderland" from The Ultimate Display, 1965!
31. The Next Wave of AI
• AI research is on its second wave & is becoming increasingly useful
• Virtual human AI can be expanded to include
• New models like AI based on neural mechanism models
• New architectures that support better learning
• Extensions to create new behaviors interpolated from known behaviors
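Interpolating new behaviors from known ones, as the last bullet suggests, can be sketched in its simplest form as a linear blend between two behavior parameter vectors. The gesture names and joint values below are illustrative assumptions:

```python
# Hypothetical sketch: synthesize a new avatar behavior by interpolating
# between two known behaviors, each encoded as a vector of pose parameters.
def interpolate_behavior(a, b, t):
    """Linear blend: t=0 returns behavior a, t=1 returns behavior b."""
    if len(a) != len(b):
        raise ValueError("behavior vectors must have the same length")
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

wave = [0.0, 1.0, 0.5]    # illustrative joint angles for a "wave" gesture
point = [1.0, 0.0, 0.25]  # illustrative joint angles for a "point" gesture
print(interpolate_behavior(wave, point, 0.5))  # → [0.5, 0.5, 0.375]
```

Real systems interpolate in learned latent spaces rather than raw joint angles, but the principle is the same: new behavior emerges between examples.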
32. Can my avatar learn from me?
I can inhabit my avatar but it doesn't know I am there
In games, the other characters might have some form of AI but they are NOT us – avatars need this too.
Why? I really want my avatar to learn FROM ME while I use it!
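An avatar that learns from its pilot could, at its simplest, record what its user does in each context and imitate the most frequent response later. A toy sketch of that idea (class and method names are illustrative assumptions):

```python
# Hypothetical sketch: an avatar that learns a user's responses while piloted,
# then replays the most common one when the user is away.
from collections import Counter, defaultdict

class LearningAvatar:
    def __init__(self):
        self.memory = defaultdict(Counter)  # context -> response frequencies

    def observe(self, context, response):
        """Called while the user is piloting: record what they actually did."""
        self.memory[context][response] += 1

    def autopilot(self, context, fallback="..."):
        """Called when the user is absent: imitate their most frequent response."""
        seen = self.memory.get(context)
        return seen.most_common(1)[0][0] if seen else fallback

a = LearningAvatar()
a.observe("greeting", "Hey there!")
a.observe("greeting", "Hey there!")
a.observe("greeting", "Hello.")
print(a.autopilot("greeting"))  # → Hey there!
```

A believable stand-in would of course need far richer models of speech, movement, and personality, but the loop is the same: observe while inhabited, generate while empty.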
33. What will we want from our future AI assistants?
Films are starting to show the future we might expect – Her, Ex Machina, S1m0ne, etc.
But what if AI can really understand us or know what is best for us? Who decides?
What if they really get emotions?
What does that even mean for an AI?
Can AI get too real??
34. Can AI get too real??
https://www.youtube.com/watch?v=txSOaY-je-o
35. Issues
We can’t get there without a number of issues:
How real is too real? And whose reality?
Authenticity: Who is really there?
Who owns the data? Who owns our digital lives?
What is the responsibility of the keeper?
36. Issues
Authenticity
How do we know who is whom, who is inhabiting, or if there is anyone home besides the AI?
How will others know that construct is us, whether digital or robotic?
SSI (self-sovereign identity) will be an important part of our digital futures.
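The authenticity question above is, at bottom, a cryptographic one: prove that a known person, not just the AI, is piloting the construct. A minimal challenge-response sketch, using a shared secret as a stand-in for a real self-sovereign identity credential (production SSI systems use public-key pairs and verifiable credentials, not shared secrets):

```python
# Hypothetical sketch: verify an avatar's pilot with a challenge-response.
# The shared secret is an illustrative stand-in for a real SSI credential.
import hmac, hashlib, os

def sign_challenge(secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify_pilot(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = sign_challenge(secret, challenge)
    # Constant-time comparison avoids leaking the expected value via timing.
    return hmac.compare_digest(expected, response)

secret = b"credential-issued-to-jane"  # illustrative only
challenge = os.urandom(16)             # the verifier picks a fresh nonce
response = sign_challenge(secret, challenge)
print(verify_pilot(secret, challenge, response))  # → True
```

The fresh nonce per session is what prevents a recorded response from being replayed by an uninhabited avatar.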
38. More issues…
Who owns the data? Who owns our digital lives?
Do you own any photos of your great grandfather?
A diary from an ancestor?
A library of fully interactive scans from something like the Shoah Foundation?
What is the responsibility of the keeper of that data? Profit or preservation?
39. New generations will know very different realities from what we know because of what we are inventing today.
It will be their task to live them, fully, to THINK about not only what constitutes reality, but how all realities can, do and will affect us as human beings.
The potential of fluid immersive realities
It's going to be a wild ride!
The Future
@skydeas1
jfmorie at gmail
Editor's Notes
What are the technologies coming in the near future that could change the way we live, connect and interact on a global scale?
It is a distributed future, and we can be sure that it will include ways to meet face-to-face instantly with anyone around the globe via digital and even physical avatars.
To set up my thinking -- cover some recent trends
Not really trying to predict the future -- so many factors, big disruptors that happen and change the trajectories.
That being said, it is still fun to explore what might be possible, given what we know today.
Sophisticated ways to capture not only our 3D data in minute detail, but even the reflectance of our skin,
the subsurface scattering that our unique layers of dermis and epidermis create to form the visual form others see when they look at us.
This happens a lot now for actors in films – so that the visual effects artists can use their digital double in scenes long after primary shooting is done.
And there are scanning booths in major cities where in less than 30 seconds your digital data can be captured. Most of these places will then make a physical 3D print of your scan.
The perfect Valentine’s day or Christmas gift for your loved ones.
This could be the first baby picture of your offspring not in a few years, but now.
How often will they get ”scanned” throughout their lives?
Well, consumer grade 3D “depthie” cameras are already available and have been for a few years.
Not as good quality as professional scanning systems, but great for the average person.
Cloud maps with snapshot truth textures
AND they can also do a depth map of the environments, AND you can then bring that into VR!
We have this tech in an enterprise form NOW but soon on consumer cameras!
This leads to a short digression into how amazingly well we can capture the environments around us. Beyond consumer grade ..
Gigapixel imagery, 360, and new techniques like those of Simon Che de Boer
Incredible photogrammetry but more -
Further work by Simon includes deep machine learning to extract ground truth images for relightable texture maps
But just around the corner – think of all the sensors being placed everywhere today. Smart homes and smart cities have a lot of knowledge about you – but how they use it is beyond our control.
Virtual spaces can be just as smart – we can expect to see the rise of not only smart immersive space, but some will eventually be considered sentient too. Knowing so much about us that they can adjust to our needs and desires.
No 2 sense VR
Include as many senses as possible! Scent collar at
http://alltheseworldsllc.com/solutions/a-deep-inhale-scent-in-virtual-spaces/
Back to virtual characters. We have been able to capture much of our unique behavioral motions with full body motion capture suits
Many techniques – some require markers WITH markerless techniques being perfected
This data can be transferred to any similar character
Facial expressions for example, which embody so many of our human emotions, are extremely complex and require as many or more sensors than does the rest of the body!
VolCap or volumetric capture is now also widely available. Actors’ performances are being captured for replay in 360 video and other immersive media forms
Can do faithful movements WITHOUT a ton of sensors on our person.
VolCap studios are springing up globally. Muki showed a map of these in her keynote Thursday.
Describe Reggie’s performance for NASA study; more on NASA later
To David Bowie, who was actually created from 2D images and videos for the fabrication of his digital data
(and has no AI but is performed by an actor..…)
Virtual Humans, as they prefer to be called, are becoming more sophisticated
And they can be very convincing -
I'm going to show you a couple of brief examples of these types of Virtual Humans.
Or these two girl guides from the BMOS 10 years ago now. I was part of this project.
The kids visiting the computer center could talk to these virtual humans in natural language
These virtual humans can see/perceive what the human with whom they interact is feeling, but they still cannot learn; what an AI agent knows must be put in by people – a constrained range of responses that makes it seem to be intelligent.
Avatars are a special breed of virtual humans – in that they are meant to be inhabited, driven, used by an actual human
They are therefore unlike other virtual humans, virtual influencers, or even the AI-driven characters we have just seen.
Data from 2013. No, these kids are inhabiting virtual worlds instead …
… with an avatar! Why?
The only free time the kids have to test their social skills, to be with their friends without supervision, is within these socially connected virtual worlds.
We can do this now in many social VR applications with digital avatars. But what is missing?
Well, actual expressions maybe, but High Fidelity has a solution figured out
Consider future astronauts going to MARS. The ANSIBLE project for NASA did.
Their social interactions are going to be asynchronous because of the communication lag,
We gave them worlds to ease social isolation & sensory monotony
Social interaction will be asynchronous (like email or correspondence chess), with some sort of seemingly real-time interactions…
Such avatars can embody the family members' movements, expressions, speech patterns and more. An embodied "recording."
Not real-time, but it doesn't leave the social interaction totally hanging, waiting for that immediate response.
Example of use of recorded/asynchronous avatars
So – for this practical use the question is still – How can they become true representations of US? What will it take?
We need avatars that know how to learn from us WHILE WE USE them.
Then they can operate without us. More later
We can interact now in many social VR applications with digital avatars. But what is missing?
The physical component of human to human connection…
Many technologies must be invented; existing ones must be merged. From haptics to VR to AR to better UI….
Better robots: safer, more humanoid.
But when this happens – YOU CAN hug your grandma every night.
Robotic avatars – useful for many things, but also for closer human connections
Imagine when we have better haptics! When a handshake feels like a hand to the operator at some remote location.
Here I am getting set up with a high end haptics hand that will allow me to feel remotely what my robot hand is actually touching.
These are the kinds of Tech XPRIZE wants to encourage
This will happen, in many ways. I call these blended or fluid realities
It is happening now, with AR overlays, Pokémon popping up all over the neighborhood.
But this really means we need new and better display technologies.
Some next gen display are being developed… but most are not scalable or consumer useful
But we need displays that are even more radical to blend realities
Future displays are going to seem as radically different to us as Raster displays were from vector-based ones, as plasma and OLEDs are from rasters.
Light field displays will allow us to focus on different distances in one display. Imagine that!
MAYBE they will become part of us, our bionic, transhumanist future selves.
DARPA has been working on contact lens displays, aka bionic eyes.
Sony recently patented one of these, as has Google.
But what if we didn’t have to wear the display device?
The best user interface of the future will be tightly aligned with the one we live in now – reality
No learning curve
As we will see later, I am not the first person to propose this.
We have the IBM Watson AI making better medical decisions than highly trained doctors.
We have Baby X being developed in New Zealand by Soul Machines – a new kind of AI to support better VHs – but we need even more.
David Bowie will never be who I want until these advances happen
And they need to be able to learn. There are people working on this-- new architectures needed
If an avatar doesn’t learn, it doesn’t change, and it doesn’t change – it is significantly less useful.
It seems contradictory at first – I am the one who pilots, or inhabits my avatar – so I am always there to tell it what to say, how to behave.
I can control my avatar but ONLY when I am logged into it.
My friends in the virtual world cannot see me, or interact with me. I am NOT THERE for them.
But for now we don’t have those avatars – we have more and more sophisticated virtual assistants – our siris and alexas
Personalised? like the Young Victorian Girl’s Primer from ….
Who decides what we will learn, what social values we will have?
At some point we have to ask ourselves:
https://www.youtube.com/watch?v=txSOaY-je-o
We have not begun to discover all the issues this may take. Here are only a few.
We have seen the first one as the subject of a new comedy film, but there are serious ramifications.
Like Microsoft’s AI bot Tay, that had to be shut down because it devolved into being so racist.
Who makes the rules, the social expectations, the ethical decisions making algorithms?
How much control can we really have if AIs are programmed to evolve on their own?
David Bowie shown earlier. James Dean and more dead actors going digitally ”live”
What does SSI cover? Crucible is calling this a “digital soul” and links this to your digital reputation.
Several other companies are also working to make this happen.
Do you own any photos of your great grandfather?
A diary from an ancestor?
A library of fully interactive scans from something like the Shoah Foundation?
What is your responsibility to these artifacts of someone’s life?
Back to the future. We opened the treasure chest of a few possibilities the future could hold,
especially as it relates to how we might experience it as human beings.
Moving towards a seamless blending of the physical and the virtual,
the imaginary and the yet-to-be-discovered – maybe the spiritual, the metaphysical. These are the coming fluid realities.