Malik Majette of North Carolina State University is researching data-driven simulation of virtual social environments under the supervision of Dr. Stephen Guy. Specifically, Majette aims to use virtual reality, facial animation, and architectural evaluation to recreate gatherings in the ancient Pnyx in Athens with intelligent virtual humans. By basing agent behaviors on a universal law of pedestrian movement and incorporating realistic facial expressions collected in medical studies, Majette's model simulates large crowds interacting naturally in the deteriorated structure and reproduces genuine human emotion, providing an immersive experience of ancient democratic assemblies.
Paper title: Syncretic Social Agency: Deterritorialised Robotics and Mixed Reality Data Transfer Systems. Apologies for formatting issues from this being a .doc!
Ubiquitous Commons workshop at transmediale 2015, Capture All - Salvatore Iaconesi
Here are the slides from the workshop, with a framing of the concept of Ubiquitous Commons, a series of examples and links, and an update on how development of the toolkits (legal, technological, philosophical, aesthetic) is going, together with some source code and prototypes.
More info can also be gathered here:
human-ecosystems.com/home/ubiquitous-commons-the-slides-from-the-workshop-at-transmediale-festival-in-berlin
Imageability today. Telling stories in images.
In the context of this conference, my talk will not be about the representation of the image, but about the imageability of digital images. I’m particularly interested in what actually takes place inside the image and how this affects the value of the image – so not what is the story of the image, but what is the story in images. Storytelling here is no longer telling stories in a narrative way, but rather storytelling as an abstracted form that creates shifts in agency, which I will argue is constructed by human-machine relationships. It is clear that today’s images are no longer made through light and chemical processes, and while even those materials could be used and manipulated in various ways to show or hide certain things, what happens when more and more images are made by webcams, satellites, security cameras, traffic cops, eBay sellers, Google StreetView cars, and tourists on a quest for the exact same photograph? Or, as Trevor Paglen asked when referring to machine vision, what happens when “the overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop” [Invisible Images (Your Pictures Are Looking at You), 2016]?
In this new ecology of images, the actual taking of a photograph – if that is still the case – is merely one step in a long chain of abstractions in which the image is manipulated and recontextualized, sometimes in combination with other images, and at times in unpredictable or irreverent ways. In other words, where does the image begin and end? While there is an over-abundance of photos and images around today, I will highlight three positions that I think are crucial when discussing these specific aspects of contemporary images, and show how they relate to storytelling. This is an abstracted sense of storytelling taking place below the surface, while different narratives start to emerge. First, the digital as a tool in which traditional models of institutional cultural authority and disciplinary expertise still rule; here a digital image emphasizes but also questions the power of the original image through different modes of circulation. Second, the effect of optimization or automatic evaluation of image content in semi-automated algorithms. And related to that, third, the construction of value through machine vision [obscure algorithmic processes].
Slides from a series of talks for the IET's IoT India Congress and some associated events - SRM Chennai, PES Bengaluru, Srishti Bengaluru. I used different subsets of the slides in each talk - this is the whole deck.
Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense - Boston Global Forum
Recent progress in deep learning is essentially based on a "big data for small tasks" paradigm, under which massive amounts of data are used to train a classifier for a single narrow task. In this paper, we call for a shift that flips this paradigm upside down. Specifically, we propose a "small data for big tasks" paradigm, wherein a single artificial intelligence (AI) system is challenged to develop "common sense", enabling it to solve a wide range of tasks with little training data. We illustrate the potential power of this new paradigm by reviewing models of common sense that synthesize recent breakthroughs in both machine and human vision. We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense. When taken as a unified concept, FPICU is concerned with the questions of "why" and "how", beyond the dominant "what" and "where" framework for understanding vision. They are invisible in terms of pixels but nevertheless drive the creation, maintenance, and development of visual scenes. We therefore coin them the "dark matter" of vision. Just as our universe cannot be understood by merely studying observable matter, we argue that vision cannot be understood without studying FPICU. We demonstrate the power of this perspective to develop cognitive AI systems with humanlike common sense by showing how to observe and apply FPICU with little training data to solve a wide range of challenging tasks, including tool use, planning, utility inference, and social learning. In summary, we argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
This is the official biography of Kristine Deray from her website, KristineDeray.com. More information can be found by exploring the pages of the site.
Smart Data - How you and I will exploit Big Data for personalized digital hea... - Amit Sheth
Amit Sheth's keynote at IEEE BigData 2014, Oct 29, 2014.
Abstract from:
http://cci.drexel.edu/bigdata/bigdata2014/keynotespeech.htm
Big Data has captured a lot of interest in industry, with the emphasis on the challenges of the four Vs of Big Data: Volume, Variety, Velocity, and Veracity, and their applications to drive value for businesses. Recently, there has been rapid growth in situations where a big data challenge relates to making individually relevant decisions. A key example is personalized digital health, which relates to making better decisions about our health, fitness, and well-being. Consider, for instance, understanding the reasons for and avoiding an asthma attack based on Big Data in the form of personal health signals (e.g., physiological data measured by devices/sensors or the Internet of Things around, on, and inside humans), public health signals (e.g., information coming from the healthcare system, such as hospital admissions), and population health signals (such as tweets by people related to asthma occurrences and allergens, or Web services providing pollen and smog information). However, no individual has the ability to process all these data without the help of appropriate technology, and each human has a different set of relevant data!
In this talk, I will describe Smart Data that is realized by extracting value from Big Data, to benefit not just large companies but each individual. If my child is an asthma patient, then for all the data relevant to my child with the four V-challenges, what I care about is simply, “How is her current health, and what is the risk of an asthma attack in her current situation (now and today), especially if that risk has changed?” As I will show, Smart Data that gives such personalized and actionable information will need to utilize metadata, use domain-specific knowledge, employ semantics and intelligent processing, and go beyond traditional reliance on ML and NLP. I will motivate the need for a synergistic combination of techniques similar to the close interworking of the top brain and the bottom brain in cognitive models.
For harnessing volume, I will discuss the concept of Semantic Perception, that is, how to convert massive amounts of data into information, meaning, and insight useful for human decision-making. For dealing with Variety, I will discuss experience in using agreement represented in the form of ontologies, domain models, or vocabularies, to support semantic interoperability and integration. For Velocity, I will discuss somewhat more recent work on Continuous Semantics, which seeks to use dynamically created models of new objects, concepts, and relationships, using them to better understand new cues in the data that capture rapidly evolving events and situations.
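The Semantic Perception idea described above – converting raw observations into higher-level abstractions using background knowledge – can be illustrated with a toy sketch. Everything here is hypothetical: the knowledge base, the property names, and the `explain` ranking are illustrative assumptions, not the actual Kno.e.sis implementation.

```python
# Hypothetical background knowledge base: each candidate condition is
# linked to the observable properties it can explain.
KB = {
    "asthma_exacerbation": {"wheezing", "coughing", "shortness_of_breath"},
    "common_cold": {"coughing", "sneezing"},
    "high_pollen_exposure": {"sneezing", "itchy_eyes"},
}

def explain(observed):
    """Return candidate conditions that account for at least one observed
    property, ranked by how many of the observations they explain."""
    scored = [(len(props & observed), cond)
              for cond, props in KB.items() if props & observed]
    return [cond for _, cond in sorted(scored, reverse=True)]

# Example: sensor/self-reported observations are abstracted into a ranked
# list of explanations, turning low-level data into actionable information.
print(explain({"coughing", "wheezing"}))  # asthma_exacerbation ranks first
```

A real system would replace this dictionary with an ontology and use abductive reasoning over streaming sensor data, but the data-to-abstraction step it performs is the same in spirit.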
Smart Data applications in development at Kno.e.sis come from the domains of personalized health, energy, disaster response, and smart city.
Sticky Data and Superstitious Patterns: Visualization beyond Cognitivism - Dietmar Offenhuber
Visualization is often exclusively treated as an affair between the eye and the mind, based on the idea that perceiving and thinking are forms of pattern recognition and computation. But patterns can be misleading, and visual languages play a much larger role in mediating our interactions, facilitating, and constraining our awareness of the systems we are embedded in. My work deals with the roles of visual representations for understanding and governing large urban systems. Using examples from remote sensing, waste systems, street lighting and others, I will discuss critical issues of working with data in the context of socio-technical systems.
Talk at the Data Visualization program at the New School, NY, Nov. 3, 2015
A series of graphics from integralMENTORS integral UrbanHub work on IMP and Thriveable Cities
This work shows the graphics from a dynamic deck that accompany a presentation on Visions & WorldViews and Thriveable Cities. The history of the co-evolution of cities, evolving WorldViews, Visions & Mindsets in urban Habitats and technology is presented in an integral framework.
Integral theory is simply explained as it relates to these themes.
This volume is part of an ongoing series of guides to integrally inform practitioners.
1. Malik Majette
Data-driven Simulation and Evaluation of Virtual Social Environments
Advisor: Dr. Stephen Guy
Home Institution: North Carolina State University
Abstract: Modeling virtual humans that intelligently interact and instinctively gesture is an active research area with broad applications. Our research combines virtual reality, facial animation, and architectural evaluation to present a realistic immersion in ancient structures that have deteriorated and can no longer support their original occupancy. Specifically, the Pnyx, an ancient Athenian structure said to be the birthplace of democratic assembly, is currently too decayed to convey these ancient events to visitors. We developed a virtual program to represent gatherings in the Pnyx in which crowds naturally condense toward a speaker’s podium and make a variety of facial expressions in response to one another. Constructing this model posed a computational challenge for large simulations, since the authenticity of human emotion is difficult to evoke in graphical simulations, and there is no prior data on how the Athenians organized, or to support the claim that the structure had capacity for 16,000 men. We address these issues with a data-driven approach that simulates intelligent agents in the Pnyx based on the concept of a universal power law governing pedestrian behavior. In our model, large crowds can enter the structure while conforming to natural behaviors such as collision avoidance, natural dispersion, and social spacing. Furthermore, in collaboration with doctors from the UMN Otolaryngology Department, we collect and evaluate facial features across a large sample, which are used to reproduce genuine emotions in virtual agents. With real-world pedestrian interactions and data-driven facial animations, our model provides a convincing and accurate representation of democratic assemblies from the earliest recorded history.
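The universal power law referenced in the abstract is the finding (associated with Guy and colleagues) that the interaction energy between pedestrians scales with the inverse square of their projected time-to-collision. A minimal sketch of that idea follows; the parameter values `k` and `tau0` are illustrative assumptions, not values from the model above.

```python
import math

def time_to_collision(p1, v1, p2, v2, radius_sum):
    """Smallest positive time at which two disc-shaped agents, moving at
    constant velocity, would first touch; math.inf if they never collide."""
    dx = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    a = dv[0] ** 2 + dv[1] ** 2
    b = dx[0] * dv[0] + dx[1] * dv[1]
    c = dx[0] ** 2 + dx[1] ** 2 - radius_sum ** 2
    if a == 0:
        return math.inf          # identical velocities: gap never closes
    disc = b * b - a * c
    if disc < 0:
        return math.inf          # paths never bring the discs into contact
    t = (-b - math.sqrt(disc)) / a
    return t if t > 0 else math.inf

def interaction_energy(tau, k=1.5, tau0=3.0):
    """Power-law interaction energy E(tau) = (k / tau^2) * exp(-tau / tau0):
    imminent collisions (small tau) dominate; distant ones decay away."""
    if math.isinf(tau):
        return 0.0
    return k / tau ** 2 * math.exp(-tau / tau0)

# Two agents walking head-on: a collision 1.7 s away produces a far
# stronger repulsive energy than one 5 s away, so avoidance kicks in early
# but gently, matching observed pedestrian behavior.
tau = time_to_collision((0, 0), (1, 0), (4, 0), (-1, 0), 0.6)
print(tau, interaction_energy(tau))
```

In a full crowd simulator, each agent would steer along the negative gradient of the summed interaction energy over its neighbors; this sketch only shows the pairwise quantity that the power law governs.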