A computer system automatically organizes, retrieves, annotates, and/or presents media data files as collections of media data files associated with one or more entities, such as individuals, groups of individuals, or other objects, using context captured in real time from a viewing environment. The computer system presents media data from selected media data files on presentation devices in the viewing environment, and receives and processes signals from sensors in that environment. The processed signals provide context, which can be used to select and retrieve media data files, and to further annotate the media data files and/or other data structures representing collections of media data files and/or entities. In some implementations, the computer system can be configured to continually process signals from the sensors in the viewing environment so that it continuously identifies and uses the context from that environment.
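The flow described here — present media, sense the viewing environment, derive context, then use that context to select, retrieve, and annotate media files — can be sketched roughly as follows. This is a minimal illustrative sketch, not the patented implementation; every class, method, and field name is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Context:
    """Context derived from sensor signals in the viewing environment (hypothetical fields)."""
    entities: List[str] = field(default_factory=list)   # e.g., individuals recognized by the sensors
    keywords: List[str] = field(default_factory=list)   # e.g., terms extracted from captured speech

class ContextDrivenMediaLoop:
    """Minimal sketch of the present/sense/select/annotate cycle described in the abstract."""

    def __init__(self, media_index, sensors, presenter):
        self.media_index = media_index   # assumed index mapping entities/annotations to media data files
        self.sensors = sensors           # assumed wrapper over cameras, microphones, etc.
        self.presenter = presenter       # assumed wrapper over displays/speakers in the environment

    def derive_context(self) -> Context:
        # Process raw sensor signals (e.g., face recognition, speech-to-text) into context.
        signals = self.sensors.read()
        return Context(entities=signals.get("people", []),
                       keywords=signals.get("speech_terms", []))

    def step(self) -> None:
        # One iteration of the cycle; in the "continual" configuration this runs repeatedly.
        context = self.derive_context()
        selected = self.media_index.query(entities=context.entities,
                                          keywords=context.keywords)
        self.presenter.show(selected)
        for media_file in selected:
            # Feed the captured context back as annotations on the selected media files
            # and on the collections/entities they belong to.
            self.media_index.annotate(media_file, context)
```

In the continual-processing configuration mentioned above, `step()` would simply be invoked in a loop as new sensor signals arrive.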
ORGANIZATION, RETRIEVAL, ANNOTATION AND PRESENTATION OF MEDIA DATA FILES USING SIGNALS CAPTURED FROM A VIEWING ENVIRONMENT
(19) United States
US 2017/0199872 A1
(12) Patent Application Publication    (10) Pub. No.: US 2017/0199872 A1
Krasadakis    (43) Pub. Date: Jul. 13, 2017
(54) ORGANIZATION, RETRIEVAL, ANNOTATION AND PRESENTATION OF MEDIA DATA FILES USING SIGNALS CAPTURED FROM A VIEWING ENVIRONMENT
(71) Applicant: Microsoft Technology Licensing, LLC, Redmond, WA (US)
(72) Inventor: Georgios Krasadakis, Dublin (IE)
(21) Appl. No.: 14/993,035
(22) Filed: Jan. 11, 2016
Publication Classification
(51) Int. Cl.
G06F 17/30 (2006.01)
(52) U.S. Cl.
CPC .... G06F 17/30038 (2013.01); G06F 17/30525 (2013.01); G06F 17/30498 (2013.01); G06F 17/30289 (2013.01)
(57) ABSTRACT
A computer system automatically organizes, retrieves, annotates and/or presents media data files as collections of media data files associated with one or more entities, such as individuals, groups of individuals or other objects, using context captured in real time from a viewing environment. The computer system presents media data from selected media data files on presentation devices in the viewing environment and receives and processes signals from sensors in that viewing environment. The processed signals provide context, which can be used to select and retrieve media data files, and can be used to further annotate the media data files and/or other data structures representing collections of media data files and/or entities. In some implementations, the computer system can be configured to be continually processing signals from sensors in the viewing environment to continuously identify and use the context from the viewing environment.
[FIG. 1 (Sheet 1 of 7): combined block diagram and data flow diagram of the computer system. Labeled elements include the viewing environment 100, presentation device(s) 102 with display 104 and speakers 106, input devices 180, sensors 170, data processing component(s) 174, context 176, story processing component 190, selected media data 112, the virtualized media storage system 120 with media index 124 and media data files 122, the story database 140 and the entity database 160.]
[FIG. 2 (Sheet 2 of 7): example database schemas. Entity database 200 with entities 201 (entity ID 202, name 203, other data) and groups 211 (group ID 212, name 213, other data). Media index 230 with media data files 231 (file ID 232, file name 233 comprising storage ID 234 and file name 235, other metadata) and storage systems 241 (storage ID 242, optional name 243, access information 244 such as a network address and login credentials). Story database 250 with stories 251 (story ID 252, title 253, text 254, tag data 255). Join tables: entity-group table 260 (entries 261 with entry ID 262, entity ID 263, group ID 264, additional data), entity-media file table 270 (entries 271 with entry ID 272, entity ID 273, media file ID 274, additional data), entity-story table 280 (entries 281 with entry ID 282, entity ID 283, story ID 284, additional data), and story-media file table 290 (entries 291 with entry ID 292, story ID 293, media file ID 294, additional data).]
[FIG. 3 (Sheet 3 of 7): flowchart 300 of a first mode of operation: select a set of media data files (300); initiate presentation of the media data files (302); capture context from the viewing environment (304); generate and store data for a story associated with the media data files (306).]
[FIG. 4 (Sheet 4 of 7): flowchart of a second mode of operation: capture context from the viewing environment (400); select a story from the story database based on the context (402); access media data files associated with the selected story (404); initiate transmission of media data to the presentation device (406).]
[FIG. 5 (Sheet 5 of 7): flowchart of associating data extracted from a sensor with selected media files: receive input data (500); create a new story in the story database (502); associate selected media data files with the story (504); store keywords extracted from recognized speech as tags of the story (506); store text of recognized speech as text of the story (508); store associations of entities with the story (510); store associations of extracted data with media data files of the story (512).]
[FIG. 6 (Sheet 6 of 7): flowchart of selecting media files using data extracted from a sensor: receive context as input data (600); build a query (602); apply the query to the databases (604); receive selection of a story or media data files (606); access and present media data for the media data files of the selected story (608).]
ORGANIZATION, RETRIEVAL, ANNOTATION AND PRESENTATION OF MEDIA DATA FILES USING SIGNALS CAPTURED FROM A VIEWING ENVIRONMENT
BACKGROUND
0001. Individuals, as well as families and other groups of individuals, are increasingly generating and storing large collections of media data files, such as data files of media data including but not limited to photos, videos and audio and related rich media data, in digital form. These media data files are captured using multiple computer devices and are stored in multiple computer storage systems, including but not limited to non-removable storage devices in computers, removable storage devices, online storage systems accessible by computers over computer networks, and online services, such as social media accounts. Such media data are also being transmitted and shared among individuals through multiple transmission and distribution channels. The large volume of media data, and the distribution of media data files among multiple different storage systems and multiple transmission and distribution channels, can make overall management, administration, retrieval and use of media data files both difficult and time consuming for individuals or groups of individuals. While some systems can index large volumes of media data, such systems generally are limited to processing the media data itself or to responding to explicit user instructions so as to generate metadata about the media data or about the media data files. As a result, management, administration, retrieval and use of such media data files also is limited generally to the metadata available for the media data and the media data files.
SUMMARY
0002 This Summary is provided to introduce a selection
of concepts in a simplified form that are further described
below in the Detailed Description. This Summary is
intended neither to identify key or essential features, nor to
limit the scope, of the claimed subject matter.
0003. A computer system automatically organizes,
retrieves, annotates and/or presents media data files as
collections of media data files associated with one or more
entities, such as individuals, groups of individuals or other
objects, using context derived from signals captured in real
time from a viewing environment. Media data files are data
files of media data including but not limited to photos,
Videos and audio and related rich media data, including
combinations ofthese, in digital form. The computersystem
presents media data from selected media data files on
presentation devices in the viewing environment and
receives and processes signals from sensors in that viewing
environment. The processed signals provide context, which
can be used to select and retrieve media data files, and can
be used to further annotate the media data files and/or other
data structures representing collections of media data files
and/or entities. The context is any information derived by
processing the signals from the sensors in the viewing
environment. This information can have one or more asso
ciated times or time frames of reference, which allows the
context to be correlated in time with media data being
presented in the viewing environment. In some implemen
tations, the computer system can be configured to be con
tinually processing signals from sensors in the viewing
environment to continuously identify and use the context
from the viewing environment. The continuously identified
context can be used to further retrieve and present media
files related to that context, and to continuously annotatethe
media data files and/or other data structures representing
collections of media data files and/or entities.
0004. The context can include, for example, information
extracted from an audio signal from a microphone in the
viewing environment, using speech recognition and natural
language processing and other techniques. The audio signal
can include, for example, discussions among individuals
within the viewing environment. The information extracted
from Such an audio signal can include, for example but not
limited to, keywords, phrases, descriptions, references to
entities, text, identities ofindividuals present in the viewing
environment, time information, location information, occa
sion information, other entity information, emotion infor
mation and sentiment information.
0005. As another example, context can include informa
tion extracted from an image signal from a camera in the
viewing environment, using facial recognition, gesture rec
ognition, facial expression recognition, gaze analysis and
other image processing techniques. Such image data can be
processed to identify, for example, individuals in the view
ing environment, as well as their gestures and expressions,
and reactions to, Such as interest in, the presented media
data.
0006. Using such image data, the data processing com
ponent can, for example, extract gaze statistics from one or
more persons in the viewing environment looking at par
ticular media data being displayed on a display device in the
viewing environment. The computer system can map Such
gazestatisticsto correspondingparts ofthe media data being
displayed for a specific amount oftime. Yet other data can
be mapped to specific parts of media data based on their
correspondence to gaze statistics.
0007. As another example, the data processing component can process the image data to identify objects in the viewing environment, from which the computer system can derive additional information about an occurrence of an event. For example, if an object recognition module identifies a birthday cake in an image, the computer system can infer that the event is a birthday.
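As a rough illustration of that kind of inference, the sketch below maps recognized object labels to an inferred occasion; the labels and occasion names are hypothetical, and a real system would draw on whatever object recognition module and entity data it actually has.

OCCASION_HINTS = {
    # Hypothetical object label -> inferred occasion.
    "birthday cake": "birthday",
    "christmas tree": "christmas",
    "wedding dress": "wedding",
}

def infer_occasion(recognized_objects):
    """Return the first occasion implied by the recognized object labels, if any."""
    for label in recognized_objects:
        occasion = OCCASION_HINTS.get(label.lower())
        if occasion:
            return occasion
    return None

print(infer_occasion(["balloon", "Birthday cake"]))  # birthday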
0008. The context can be used to further annotate col
lections of media data files and/or individual media data
files. Annotation means associating metadata, Such as the
context information, with other data, such as a media file.
Such association occurs by Storing the metadata, Such as
tags, scores, text (natural language), Sound, references, etc.,
in the database or in another format in a manner that
associates the metadata with the data. The context can be
used to organize a selected set of media data files being
presented into a collection and to annotate the collection.
The context also can be used to retrieve and present a set of
media data files and organize them into a collection, and/or
to retrieve and present a previously organized collection of
media data files. Such annotations on media data files and
collections of media data files can include feedback, reac
tions and other engagement information determined from
the context. By annotating specific media data with infor
mation Such as feedback and reactions of individuals, the
computer system develops an interaction history for the
specific media, thus enriching the knowledge by the com
puter system for the particular media item. The computer
system can then use this knowledge to retrieve and present
media data for an audience.
0009. The context can be used to associate entities with
individual media data files and/or collections of media data
files. The context can be used to associate information with
individuals and/or groups ofindividuals, such as implicitly
and explicitly identified preferences, viewing patterns, spe
cific locations, events and dates, relationships with other
entities.
0010. By processing signals from a viewing environment
in various ways to extract context, the computer system,
using that context, can dynamically select media data to
retrieve and present, can continually organize and annotate
media data, and, atthe sametime, can associateentities with
media data, associate other information with entities, and
track reactions to the media data from people in the viewing
environment, all with minimal or no direction and interven
tion by the individuals. Such retrieval, presentation and
annotation of the media data and entities thus can happen
seamlessly. Using Such context, the computer system also
annotates media data files with additional information and
statistics each time the media data files are presented in the
viewing environment. The processing of signals from the
viewing environment also can provide statistical informa
tion about behavioral patterns and implicitly stated prefer
ences for individual users or group of users and families
(what media files or stories each user prefers, time-related
patterns, session duration, effects and animation applied,
etc.).
0011. The computer system can use a set of data struc
tures to organize the metadata for media data files, entities
and context from the viewing environment. In particular, a
story data structure is a data structure that represents a
collection of one or more media data files, associated with
one or more entities, such as individuals or group of indi
viduals, and other data based on the context from the
viewing environment. The computer system can use the
context from the viewing environmentto define Such a story
data structure for a collection of media data files being
presented, and/or to select such a story data structure for
presentation of its collection of media data files.
0012. A story data structure also can include playback
parameter data specifying how the collection of media data
files is to be synthesized for presentation. Such playback
parameter data can specify, for example, video transitions,
audio cross-fades, animation, titles, captioning, background
music and other effects to be applied to media data when
presented. The computer system can dynamically adjust the
playback parameter data during presentation based on the
context.
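A minimal sketch of what such a story data structure with playback parameter data could look like in memory; the field names and defaults are illustrative assumptions, not the structure defined by this disclosure.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlaybackParameters:
    # Hypothetical presentation settings: transitions, cross-fades, music, captions.
    video_transition: str = "cross-dissolve"
    audio_crossfade_seconds: float = 1.5
    background_music_file_id: Optional[str] = None
    show_captions: bool = True

@dataclass
class Story:
    story_id: str
    title: str
    text: str
    tags: List[str] = field(default_factory=list)
    media_file_ids: List[str] = field(default_factory=list)
    entity_ids: List[str] = field(default_factory=list)
    playback: PlaybackParameters = field(default_factory=PlaybackParameters)

story = Story(story_id="s1", title="Graduation day", text="A short narrative.")
print(story.playback.video_transition)  # cross-dissolve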
0013 Thecomputersystem further manages access to the
underlying storage of media data, Such photos and videos
and music and other media data and associated metadata,
which are stored in diverse storage systems.
0014. In the following description, reference is made to
the accompanying drawings which form a part hereof, and
in which are shown, by way ofillustration, specific example
implementations.Otherimplementations maybe madewith
out departing from the scope ofthe disclosure.
DESCRIPTION OF THE DRAWINGS
0015 FIG. 1 is a combined block diagram and data flow
diagram of a computer system.
0016 FIG. 2 is a schematic diagram of an example
implementation of database schemas.
0017 FIG. 3 is a flowchart describing an example imple
mentation of a first mode of operation of the computer
system.
0018 FIG. 4 is a flowchart describing an example imple
mentation of a second mode of operation of the computer
system.
0019 FIG. 5 is a flowchart describing an example imple
mentation of associating data extracted from a sensor with
selected media files.
0020 FIG. 6 is a flowchart describing an example imple
mentation ofselecting media files using data extracted from
a SSO.
0021 FIG. 7 is a block diagram of an example general
purpose computer.
0022. In the data flow diagrams, a parallelogram indi
cates data, whereas a rectangle indicates a module of a
computer that performs processing on data.
DETAILED DESCRIPTION
0023. A computer system as described herein provides a
natural user interface to organize, retrieve, annotate and/or
present media data files as collections of media data files
associated with one or more entities, such as individuals,
groups ofindividuals or other objects, using context from a
viewing environment. Media data files are data files of
media data including but not limited to photos, videos and
audio and related rich media data, in digital form. The
computer system presents media data from selected media
data files on presentation devices in the viewing environ
ment and receives and processes signals from sensors in that
viewingenvironment, such as audio signals including verbal
communications ofindividuals in the viewing environment.
0024. Referring to FIG. 1, an illustrative example of a
computer system for organization, retrieval, annotation and/
or presentation of media data files and entity information
will now be described.
0025. In FIG. 1, a viewing environment 100 represents a
room or other physical location in which one or more
individuals can be present. In general, multiple individuals
may be present and interacting with each other in the
viewing environment 100. The computer system presents
media data in the viewing environment 100 through one or
more presentation devices 102. Apresentation device can be
any device or combination of devices that displays images
(e.g., photos or videos or computer generated animation)
and/or presents Sound (e.g., music, recorded dialog, Sound
effects).
0026. An example configuration ofsuch a viewing envi
ronment is a room, Such as in a house, apartment, or other
dwelling, which includes one or more displays 104. Such as
for a television or for a computer or game console, and
speakers 106 or integrated media systems. Anotherexample
viewing environment is an office, meeting room, conference
room orotherwork space, such as in a place ofemployment
or labor which includes one or more displays. Another
example of Such a viewing environment is any gathering
place for people which includes one or more displays.
Another example configuration of Such a viewing environ
ment is an individual’s personal space in which they use a
device. Such as a Smartphone, with a display 104 and
speakers 106.
0027. To present the visual and audio media data on the presentation devices 102, a host computer 110 is configured to cause media data 112 to be transmitted from storage to the presentation devices 102 over a media connection 114. The media connection can be implemented in a number of ways, such as wired (e.g., cable or computer network) or wireless (e.g., WiFi, Bluetooth, cellular, satellite) connections; a computer network (e.g., Ethernet, IP) or computer bus (e.g., USB, HDMI). The media data 112 can be transmitted, for example, as streaming data, analog signals or as one or more data files. The presentation devices 102 can be merely displays and/or speakers, or can be computers or other devices, such as a set-top box or game console or smartphone or virtual reality device or augmented reality device, that communicate with the host computer 110 and receive and process the media data for presentation on the presentation devices 102.
0028. The host computer accesses media data 112 from media data files 122 stored in a virtualized media storage system 120. A virtualized media storage system comprises a media index 124 of the media data files 122. The media index includes, for each file, at least data sufficient to provide the host computer access to where the media data file is actually stored. The media data files 122 may be stored in a large number of possible storage systems, including but not limited to, social media accounts online, cloud-based storage, local device storage, cache layers or other devices such as tablet computers or mobile phones connected to the host computer over a local network, local network attached storage, etc. Generally speaking, social media accounts and cloud-based storage are domains on a computer network which store media data files for users and are accessible using an application, such as a browser application or dedicated application for the domain, running on a client computer. The application accesses files on the domain through user accounts on the domain using usernames and passwords and/or security tokens and/or other methods of authentication and a specified communication protocol for the domain. In some configurations, the virtualized media storage system 120 maintains data to access the media data files from other locations; in other configurations, the virtualized media storage system can maintain local copies of the media data files in local storage. In the latter case, the virtualized media storage system can be configured to periodically synchronize with remote storage systems. Such a virtualized media storage system can provide access to multiple sources of media data transparently to the host computer.
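A minimal sketch of how such a media index lookup might work, using in-memory dictionaries in place of the media index and storage-system records; the field names and sample values are hypothetical and only illustrate the resolution step described above.

# Hypothetical stand-ins for the media index and the storage-system records.
MEDIA_INDEX = {
    "file-001": {"storage_id": "cloud-1", "file_name": "2016/beach.jpg"},
}
STORAGE_SYSTEMS = {
    "cloud-1": {"base_url": "https://media.example.com", "token": "example-token"},
}

def resolve_media_file(file_id: str) -> dict:
    """Map a media file ID to the location and credentials needed to fetch it."""
    entry = MEDIA_INDEX[file_id]
    storage = STORAGE_SYSTEMS[entry["storage_id"]]
    return {
        "url": f'{storage["base_url"]}/{entry["file_name"]}',
        "token": storage["token"],
    }

print(resolve_media_file("file-001")["url"])  # https://media.example.com/2016/beach.jpg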
0029. In some configurations, the host computer 110 can
read media data from the media data files 122 and then
transmit that media data over the media connection 114 to
the presentation devices 102. In such a configuration, the
host computer can be configured to transform the media data
file into a format that can be processed by the presentation
devices 102. In some configurations, the host computer 110
can instruct a storage device to transmit media data from a
media data file over the media connection 114 to the
presentation devices 102. In some configurations, the host
computer 110 can instruct a presentation device to request
the storage device to transmit the media data from a media
data file over the media connection 114 to the presentation
device.
0030 The host computer also accesses other data that is
used by the computer system to organize, retrieve, annotate
and present media data as collections ofmedia data associ
ated with one or more entities. In the example implemen
tation described herein, a story is a data structure stored in
computer storage, whether in persistent storage or in
memory, that represents a collection of one or more media
files, associated with one or more entities, which is anno
tated based on context from the viewing environment. A
story database 140 includes data representing a plurality of
stories. An entity database 160 includes data representing a
plurality ofentities, which can include but are not limited to
individuals, groups of individuals and objects, such as but
not limited to houses, cars, gardens, cities, roads, monu
ments, countries, continents, events, landscapes, toys, tools,
clothes, flowers, furniture, electronic devices, food, etc.
0031. The data stored in the media index 124, story
database 140 and entity database 160 also includes data
representing several many-to-many relationships: each story
is associated with one or more entities; each story also is
associated with one or more media files; each media file is
associated with one or more entities; each media file can be
associated with one or more stories; an entity, or group of
entities, can beassociated with one or more stories; an entity
can be associated with Zero or more groups of entities; a
group ofentities is associated with one or more entities; an
entity orgroup ofentities can beassociatedwith oneormore
media files. In the foregoing, a particularexample ofinterest
ofan entity or group ofentities is an individual or group of
individuals, particularly in their associations to stories,
media files and other entities.
0032. The media index, story database and entity data
base can be implemented using any of a set of storage
techniques and data management technologies, such as a
relational database, object-oriented database, graph data
bases, and structured data stored in data files, and the like.
Example implementations of these databases are described
in more detail below.
0033. Thecomputer system also can include one or more
input devices 180 in the environment 100 which provide
user inputs to the host computer or other computer in the
viewing environment that connects to the host computer.
The input devices 180 in combination with the presentation
devices 122 can also provide a graphical and mechanical
userinterface to the hostcomputer 110, orother computerin
the viewing environment that connects to the host computer
110. Such an interface can be provided to allow a user to
browse, search, select, edit, create, and delete entries or
objects in the story database, entity database and media file
database, for example.
0034. The computer system also includes one or more
sensors 170 in the viewing environment 100. The sensors
generate signals 172 in response to physical stimuli from the
viewingenvironment.These signals are inputto one ormore
data processing components 174, which extract data to be
provided to the host computer 110 as the context 176 from
the viewing environment. The kind of data that can be
extracted depends on the nature of the sensor and the kind
of data processing components that are used. A data pro
cessing component may be implemented as part ofthe host
computer, or as one or more separate computers or devices.
The data processing components can perform multiple
stages ofprocessing to extract multiple types ofdata. Data
from input devices 180 forming part of a natural user
interface to the host computer, such as described below in
connection with FIG. 7, also can be used as sensors 170.
0035. To receive the signals from the sensors 170, the signals can be transmitted from the sensors to a data processing component over a sensor connection 171. The sensor connection 171 can be implemented in a number of ways, such as a wired (e.g., cable or computer network) or wireless (e.g., WiFi, Bluetooth) connection; computer network (e.g., Ethernet, IP) or computer bus (e.g., USB, HDMI). Signals from a sensor can be transmitted, for example, as streaming data (for capture by the host computer or data processing component), analog signals (for capture using a peripheral of the data processing component or host computer) or a data file (transmitted to the data processing component or host computer).
0036. An example sensor 170 is a microphone, which generates an audio signal in response to sound waves in the environment 100. Such an audio signal can be an analog or digital signal. The data processing component 174 can process the audio signal, for example, to recognize the verbal descriptions of media being presented to the presentation devices 102 and extract text data. Such text data can be further processed to recognize and extract keywords, entities mentioned such as individuals, locations and other names, time information, and sentiment data. The audio signal can also be processed to extract other information, such as the identity of an individual, or characteristics of the individual, such as whether a speaker is a child or an adult, or the emotional state of an individual, based on tone of voice, pitch, voice pattern and other audio characteristics of voice. Such information can be correlated with the specific media file or the collection being presented on the presentation devices 102, thus enriching media files with user-generated metadata.
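As a rough illustration of this kind of processing, the sketch below assumes a transcript has already been produced by a speech recognizer (not shown) and pulls a few context items out of it; the entity list, the keyword heuristic and all names are hypothetical, and a real system would use proper speech recognition and natural language processing components.

import re
from datetime import datetime

# Hypothetical list of entities known to the entity database.
KNOWN_ENTITIES = {"anna", "george", "dublin"}

def extract_context(transcript: str, captured_at: datetime) -> dict:
    """Derive simple context items from recognized speech."""
    words = re.findall(r"[a-z']+", transcript.lower())
    entities = sorted(set(words) & KNOWN_ENTITIES)
    keywords = sorted({w for w in words if len(w) > 6})  # crude keyword proxy
    return {
        "captured_at": captured_at.isoformat(),  # time reference for correlation
        "entities": entities,
        "keywords": keywords,
        "text": transcript,
    }

ctx = extract_context("That was Anna at her graduation in Dublin", datetime.now())
print(ctx["entities"], ctx["keywords"])  # ['anna', 'dublin'] ['graduation']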
0037. Another example sensor 170 is a conventional camera, which generates image data in response to natural visible light in the environment. The image data can be in the form of an analog video signal, streaming digital video data, one or more image data files, or the like. A camera can generate one or more photos, i.e., still images. A camera can generate a sequence of such still images at a given frame rate to provide video. The camera can include a microphone and combine video and audio data together. Such image data can be processed to recognize individuals present in the viewing environment, detect gaze direction, identify a new user, to capture data describing emotions detected, etc. Such information can be correlated with the specific media file or the collection being presented in the presentation devices 102, thus enriching media files with user-generated metadata.
0038 Another example sensor is a multilens camera,
which can include one or more depth cameras, infrared
cameras and/or Stereoscopic cameras, Such as a KINECT
sensoravailable from Microsoft Corporation. Such a system
also generates image data, but from a different spectrum of
light than a conventional camera for video and photos. Such
a sensor can be combined with other data processing com
ponent executed by a data conversion device to provide data
indicative of gestures of individuals in the environment,
gaze direction, and the like.
0039. The inputs from input devices 180 and context
extracted from thesignals from the sensors 170 areprovided
to a story processing component 190 on the host computer.
The story processing component, in general, implements
several operations to organize, retrieve,annotateand present
media data files. Such operations are described in more
detail below. Ingeneral, the story processingcomponent 190
performs operations of using context to select and retrieve
media files and/or stories, using context to organize media
files into stories, and using context to annotate media files,
stories and entities.
0040. More particularly, the context can be used to fur
ther annotate collections of media data files and/or indi
vidual media data files. The context can be used to organize
a selected set of media data files being presented into a
collection, called the story, and to annotate the story, or one
or more specifics media files, with additional metadata. The
context can be used to retrieve and present a set of media
data files and organize them into a collection, and/or to
retrieve and present a previously organized collection of
media data files. The annotations on media data files and
collections of media data files can include feedback, reac
tions, descriptions of the occasion referenced by the media
files and otherengagement information determined from the
context. The context can be used to associate entities with
individual media data files and/or collections of media data
files as stories. The context can be used to associate entities
with each other. The context can be used to associate
information with individuals and/or groups of individuals,
Such as implicitly and explicitly identified preferences,
viewing patterns, specific locations, events and dates, rela
tionships with other entities and other metadata.
0041. The computer networks shown in FIG. 1 are
merely illustrative. The actual networktopology can be any
kind ofcomputer network topology. Some ofthe computers
may communicate with each otherovera local area network,
whereas others may communicate with each other over a
wide area network, and the computer network can include a
combination ofboth private networks, including both wired
and wireless networks, and publicly-accessible networks,
Such as the Internet.
0042. Several deployments ofsuch a system are feasible.
The following are a few illustrative examples that are not
intended to be limitingandalso may beused in combination.
0043. For example, the host computer 110 can be a server computer accessible over a computer network. The presentation devices 102 and sensors 170 can be part of a computer, smartphone, smart television, or game console, such as the XBOX game console from Microsoft Corporation, tablet computer, augmented reality device, or virtual reality device, such as the HOLOLENS device from Microsoft Corporation, or other wearable computer with sensors, or the like, which communicate with the host computer over the computer network.
0044 As another example, the host computer 110 can be
a server computer accessible over a computer network. The
sensors 170 can be part ofa device in the viewing environ
ment which is always on and continually transmitting sensor
signals over the computer network to the host computer 110
for processing. The presentation devices 102 can be con
nected to the hostcomputer 110 through another connection
such as over the computer network. The sensors 170 con
tinually transmit signals to the host computer 110 which in
response selects andtransmits media datatothepresentation
devices 102 and then captures context to Submit and possi
bly annotate.
0045. As another example, the host computer 110 can be
a computerand the data processing components can include
multiple server computers that allow remote processing of
the signals from sensors 170, and which communicate the
context information back to the host computer 110.
0046. As anotherexample, the host computer 110 can be
one or more server computers accessible over a computer
network, Such as the internet, from one or more computers
or devices in multiple viewing environments. The host
computer can be configured to provide shared storage in
which the media data files and other databases are provided
for a large number of users with access controls to permit,
limit ordeny sharing ofmedia data files and other databases
among users and collections of users.
0047 Turning now to FIG. 2, an example implementation
of a media index, entity database and story database will
now be described. It should be understood that the following
is an illustrative example and not intended to be limiting.
0048. An entity database 200 includes data representing
variousentities such as individualsandgroups ofindividuals
and other objects. An individual can be associated with zero
or more groups of individuals. A group of individuals can
include one or more individuals, although an actual imple
mentation may permit a user to define an empty group of
individuals. Similarly, any entity can be associated with one
or more other entities or group of entities.
0049. As shown in FIG. 2, data 201 for an individual, or
other singular entity, can includean entity identifier 202 for
the entity, and a name 203 for the entity. Examples ofother
data that can be stored for an individual include, but are not
limited to: contact information, location information, demo
graphic information, preferences, entity classification, usage
patterns, statistics, scores and the like.
0050 Data 211 for a group ofentities such as a group of
individuals (e.g., a family or team) can include a group
identifier 212 for the group and a name 213 for the group.
Examples of other data that can be stored for a group of
individuals include, but are not limited to: contact informa
tion, location information, group type, group size, access
patterns, session histories, classification, group usage pat
terns, statistics, scores and the like. The data stored for a
group of individuals can be aggregated from information
stored for each individual in the group. Similar data can be
stored for other types ofentities.
0051 Data structures that define relationships among
entities and groups ofentities, and relationships of entities
and groups ofentities with stories and media data files, are
described in more detail below.
0052 A media index 230 includes data representing
media data files that are stored in multiple data storage
systems, and data representing those storage systems. The
multiple data storage systems are accessible from a host
computer using the information obtained from the media
index.
0053. Data 231 representing a media data file includes a file identifier 232 for the media data file and a file name 233 for the media data file. The file name can include a storage identifier 234 of the data storage system that stores the media data file, and a file name 235 or other identifier for the media data file used by that data storage system to store the file. Examples of other metadata that can be stored for a media data file include, but are not limited to: a title or other text description or label for the media data file; technical data, such as resolution and format information, such as frame rate, raster size, color format, pixel bit depth, file format and the like; time information, such as a point in time or range in time at which the media data was captured by a device; location information, such as a geographical location at which the media data was captured by a device; device information, such as an identifier or other information about a device or devices used to capture the media data; tag data, such as keywords or other data to allow structured searching; use information, such as a number of times the media data file has been searched, accessed, shared or liked, or other comments made by users or other analytics, such as gaze analytics; emotional classification; access control information, such as permissions related to access and sharing; uniform resource locators (URLs), uniform resource identifiers (URIs), identification information in the form of identifiers, or globally unique identifiers (GUIDs).
0054 Data 241 representing a data storage system
includes a storage identifier 242 for the data storage system,
and optionally a name 243, or other description or label,
which describes the data storage system.Access information
244 also can be included. The access information 244 can
include, for example, but not limited to, a network address
through which thehost computer can access the data storage
system, URLs, URIs, identification information in the form
of IDs or GUIDs and authentication information such as a
user nameandpasswordand/orsecurity tokens foraccessing
media data files on the data storage system.
0055 Data structures that define relationships of the
media data files and/or data storage systems, with entities,
groups ofentities, and stories, are described in more detail
below.
0056. A story database 250 includes data representing
stories. Data 251 representing a story includes a story
identifier252 forthe story, a title 253, orother text label, for
the story, and story text 254. Story text 254 may be a
reference to a data file thatcontains the story text. Examples
ofother data that can be stored for a story include, but are
not limited to: one or more time references. Such as a time
when the story is created, or when events in the story
occurred; one or more location references, such as a location
where events in the story occurred, or where the story was
created; the identified occasion in reference to the story; tag
data 255, such as keywords orother data to allow structured
searching and media file classification; usage information,
Such as a number oftimes the story was accessed, shared or
liked, or other comments made by users; emotional classi
fication; access control information, such as permissions
related to access and sharing; media playback preferences,
Such as specifications for transitions between audio or video
files, volumecontrols, which can be further related to timing
for the story; pointers to other stories.
0057 Data structures that define relationships of the
stories with entities, groups ofentities, and media data files,
are described in more detail below.
0.058 As noted above, these example data structures can
be implemented in a relational database, object oriented
database, graph database, structured data stored in data files,
and the like, when stored in persistent storage. In some
implementations, during processing by the story processing
component, similar data structures can be stored in memory
for data currently being used by the host computer. Such
data structures can be created and filled with data read from
the databases, then can be modified and expanded by the
story processing component, and then can be persisted into
the database for permanent storage.
0059. Some of the data representing stories, entities and media data can involve many-to-many relationships, such as having multiple individuals as part of many groups of individuals, and so on. A number of data tables or files or data structures can be used to represent these many-to-many relationships as several one-to-many relationships. In one example implementation, using a relational database, this data can be represented using a join table. In another example implementation, object-oriented data structures, or an object-oriented database, can represent such relationships directly using different classes to represent different types of relationships and collections of objects. The example shown in FIG. 2 illustrates join tables.
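For concreteness, the following is a minimal relational sketch of the join-table approach using SQLite; the table and column names only loosely follow FIG. 2 and are illustrative rather than the schema defined by this disclosure.

import sqlite3

# In-memory database with an entity table, a group table and a join table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (entity_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE entity_group (group_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE entity_group_member (
    entry_id INTEGER PRIMARY KEY,
    entity_id TEXT REFERENCES entity(entity_id),
    group_id TEXT REFERENCES entity_group(group_id)
);
""")
conn.execute("INSERT INTO entity VALUES ('e1', 'Anna')")
conn.execute("INSERT INTO entity_group VALUES ('g1', 'Family')")
conn.execute("INSERT INTO entity_group_member (entity_id, group_id) VALUES ('e1', 'g1')")

# Groups to which entity e1 belongs, found via the join table.
rows = conn.execute("""
    SELECT g.name FROM entity_group g
    JOIN entity_group_member m ON m.group_id = g.group_id
    WHERE m.entity_id = 'e1'
""").fetchall()
print(rows)  # [('Family',)]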
0060 For example, regarding relationships among enti
ties and groups ofentities, the entity identifier is a primary
key ofthe entity table, and the group identifier is a primary
key of the group table, and a join table can be used to
represent the interrelationships. For example, an entity
group table 260 includes entries 261, where each entry has
an entry identifier 262 as its primary key, and includes an
entity identifier 263 and a group identifier 264 as foreign
keys. Such anentry indicates thattheentity havingthe entity
identifier 263 is a member of the group having the group
identifier 264. To find groups to which a selected entity
belongs, or entities belonging in a selected group, this
entity-group table 260 can be searched. The entries 261 can
include additional fields to store additional data about each
relationship. Such as when the relationship was stored, the
type of relationship between the entity and the group, the
role of the entity within the group, and the like. A similar
table can be used to track relationships between entities.
0061 Regarding relationships among entities and media
data files, the entity identifier is a primary key ofthe entity
table, and the media data file identifier is a primary key of
the media index, and a join table can be used to represent
their interrelationships. For example, an entity-media file
table 270 includes entries 271, whereeach entry has anentry
identifier 272 as its primary key, and includes an entity
identifier 273 and a media data file identifier 274 as foreign
keys. Such anentry indicates thattheentity havingthe entity
identifier 273 is associated with the media data file having
the media data file identifier 274. To find media data files
associated with a selected entity, or the entities associated
with a selected media data file, this entity-media file table
270 can be searched. The entries 271 can include additional
fields to storeadditional data about the relationship between
the entity and the media data file, such as whether the entity
is an individual that is a creator of the media data file, or
whether the entity is present in the media data of the media
data file, or the like.
0062 Regarding relationships amongentities and stories,
the entity identifier is a primary key ofthe entity table, and
the story identifierisaprimary key ofthestory database,and
ajoin table can be used to represent their interrelationships.
For example, an entity-story table 280 includes entries 281,
where each entry has an entry identifier 282 as its primary
key, and includes an entity identifier 283 and a story
identifier 284 as foreign keys. Such an entry indicates that
the entity having the entity identifier 283 is associated with
the story having the story identifier 284. To find stories
associated with a selected entity, or the entities associated
with a selected story, this entity-story table 280 can be
searched. The entries 281 can include additional fields to
store additional data about the relationship between the
entity and the story, Such as whether the entity is an
individual that is an authorofthe story, orwhether theentity
is present in the media data used in the story, oris mentioned
in the story text, or the like.
0063. Regarding relationships among stories and media data files, the story identifier is a primary key of the story database, and the media data file identifier is a primary key of the media index, and a join table can be used to represent their interrelationships. For example, a story-media file table 290 includes entries 291, where each entry has an entry identifier 292 as its primary key, and includes a story identifier 293 and a media data file identifier 294 as foreign keys. Such an entry indicates that the story having the story identifier 293 is associated with the media data file having the media data file identifier 294. To find stories associated with a selected media data file, or the media data files associated with a selected story, this story-media file table 290 can be searched. The entries 291 can include additional fields to store additional data about the relationship between the story and the media data file.
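In the same illustrative SQLite style as above, a story-media join table can be searched for the media data files associated with a selected story; the names and sample rows are again hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE media_file (file_id TEXT PRIMARY KEY, file_name TEXT);
CREATE TABLE story (story_id TEXT PRIMARY KEY, title TEXT);
CREATE TABLE story_media_file (
    entry_id INTEGER PRIMARY KEY,
    story_id TEXT REFERENCES story(story_id),
    file_id TEXT REFERENCES media_file(file_id)
);
""")
conn.execute("INSERT INTO story VALUES ('s1', 'Graduation day')")
conn.execute("INSERT INTO media_file VALUES ('f1', 'ceremony.jpg')")
conn.execute("INSERT INTO story_media_file (story_id, file_id) VALUES ('s1', 'f1')")

# Media data files associated with story s1.
files = conn.execute("""
    SELECT m.file_name FROM media_file m
    JOIN story_media_file sm ON sm.file_id = m.file_id
    WHERE sm.story_id = 's1'
""").fetchall()
print(files)  # [('ceremony.jpg',)]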
0064. While the example above described join tables in
the context ofa relational database, in other non-relational
database implementations, such data can be stored in the
form of files, in-document tags or other data Sufficient to
represent the relationship.
0065 Given such a computersystem such as described in
FIG. 1 using a data representation such as described in FIG.
2,thereare several ways in which the system can use context
from a viewing environment to organize, retrieve, annotate
and present media data.
0.066 Referring now to FIG. 3, a first example ofopera
tion of the computer system will now be described.
0067. In this example, a set ofmedia data files is selected
300. Such a selection can arise in a number of ways. For
example, the story processing component on the host com
puter can provide a user interface for the host computer
through the sensors, input devices and presentation devices
in the environment. Such a user interface can be a graphical
user interface, or can be a natural user interface based on
Voice commands, gestures, gaZe analysis and the like. As
another example, the story processing component on the
host computer can automatically select a set of media data
files based on a variety ofinputs or criteria orprocesses. As
another example, the story processing component can use
the context information from the viewing environment to
select a set of media data files.
0068 For example, through the user interface, an end
user can browse, search, select, edit, create, and delete
entries or objects in the story database, entity database and
media file database, with the result ofsuch operations being
a set of new, or altered, or selected media data files. As an
example, an end user can select all media data files in which
one or more entities are present in images. As another
example, an end usercan select an existing story. As another
example, an end user can select media data files captured
from a selected geographical location. As another example,
a set of media data files may be selected based on a
predetermined time period and a set of individuals. As a
specificexample, allofthe media data files createdin thelast
week by an individual or a group of individuals, such as a
family, can be identified.
0069 Given a selection of media data files, the host
computer initiates presentation 302 of the selected media
data files onthepresentation devices. Thehost computercan
maintain data structures to store data used in this presenta
tion process. Such as any currently presented media data file
and time ofpresentation. There can be multiple media data
files presented concurrently. If a particular media data file
includes time-based media data, i.e., media data that is a
sequence of samples over time, such as video or audio, a
current position with the time-based media data also can be
tracked by Such data structures.
0070 While the selected media files are being presented
on the presentation devices, context from the environmentis
captured 304. As described above, signals based on sensors
in the environment are received and Subjected to processing
by the host computer to derive the context from the signals.
The kinds ofdata extracted from the signals are based on the
nature of the signals provided by the sensors and kinds of
processing performed on them.
0071. For example, audio signals can be processed to extract text, associated with time information indicating when the text occurred in time, using speech recognition and natural language processing techniques. Extracted text can be further processed to identify or derive information which is relevant to the media presentation, such as references to individuals or other entities, sentiment information, time references, occasions, detailed descriptions of the media being presented and the like. The computer system uses such information to annotate the media files or collection being presented with metadata, which thereafter can be used for search and selection logic. When information is also provided by gaze analytics, described below, the computer system can combine the gaze analytics and captured text to more precisely annotate the media data or other data in the system.
0072. As another example, image data from camera
devices can be processed to identify individuals in the
viewingenvironment,associatedwith timeinformation indi
cating when the individuals were present in the viewing
environment, using facial recognition techniques. Image
data can also be processed to identify other objects in the
viewing environment and also the type of the viewing
environment itself, e.g., a room, open space, house, kids
party venue, etc., using object recognition techniques.
0073. Image data also can be processed to determine gaze information, which can be further processed to determine, for example, whether a particular individual is looking at the presentation device. Such information can be further processed to generate gaze statistics based on the one or more persons in the viewing environment looking at particular media data being displayed on a display device in the viewing environment. Such gaze statistics can be mapped to corresponding parts of the media data being displayed for a specific amount of time. Yet other data can be mapped to specific parts of media data based on their correspondence to gaze statistics. For example, such statistics might include data indicating that: X% of viewers of an image focused on entity A in the image; or, entity A, out of a total of X entities in an image, captures Y% of attention of viewers; or, the right-most part of an image is X times more likely to attract user attention. Such statistics can be used by the story processing component to more effectively select media. For example, the story processing component can select images which, based on gaze statistics, are expected to maximize engagement, interaction and feedback for the specific viewing environment and/or audience.
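One way to picture this mapping, as a rough sketch only: gaze samples (normalized screen coordinates with timestamps) are binned into named regions of the displayed image and converted into per-region attention shares. The region boundaries and names here are hypothetical.

from collections import Counter

# Hypothetical image regions, each defined by an x-range in normalized [0, 1] coordinates.
REGIONS = {
    "left": (0.0, 0.5),
    "right": (0.5, 1.0),
}

def gaze_statistics(samples):
    """samples: iterable of (timestamp_seconds, x, y) with x and y in [0, 1]."""
    hits = Counter()
    for _, x, _ in samples:
        for name, (lo, hi) in REGIONS.items():
            if lo <= x < hi or (hi == 1.0 and x == 1.0):
                hits[name] += 1
    total = sum(hits.values()) or 1
    return {name: count / total for name, count in hits.items()}

print(gaze_statistics([(0.0, 0.2, 0.5), (0.5, 0.8, 0.4), (1.0, 0.9, 0.6)]))
# {'left': 0.333..., 'right': 0.666...}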
0074 The host computer processes the context from the
viewing environment and the selected media files to gener
ate and store 306 data representing a story in the story
database. This step involves populatingthedata structures in
the entity database, story database, and media file database,
andassociated relationship data structures, based on the data
collected, the selected media files, and playback tracking
data structures. An example implementation ofthis process
is described in more detail below in connection with FIG. 5.
0075 Referring now to FIG. 4, a second example of
operation of the computer system will now be described.
0076. In this example, the host computer begins opera
tion by capturing 400 context from the viewing environ
ment. Similar to the capturing of context as described in
connection with the process in FIG. 3, the context is depen
dent on the sensors used in the environment and the appli
cations available to process those signals. In addition, the
context can be limited to one or more selected types ofdata,
either by an end user or through predetermined settings or
programming ofthe host computer. Such a limitation would
be to define how the host computer selects a story.
0077. Thestoryprocessingcomponentprocesses thecon
text to select 402 a story from thestory database. Astory can
be selected based on matching any ofa number of criteria
from the context to the data stored in the databases. For
example, a story related to individuals identified or men
tionedin theenvironmentandkeywords extracted from their
conversation canbe selected. An example implementation of
this process is provided in more detail below in connection
with FIG. 6.
0078. The host computer then accesses 404 the media
data files associated with the selected story. This access
involves using the story-media data file table to identify the
media data filesassociatedwith the story. Thehost computer
accessesthe media indexto identify, foreach media data file,
the data storage system on which the media data file is
stored, and a file name for the media data file. The host
computer accesses the data storage system information to
obtain information to enable access the data storage system
for each media data file.
007.9 The host computer then initiates transmission 406
ofthe media data from these media data files for the selected
story for presentation on the presentation devices. In this
mode ofoperation, data collected from the environment also
can be associated with the selected story using the tech
niques described above in connection with FIG. 3.
0080 FIG. 5 is a flowchart describing an example imple
mentation of using the context from the viewing environ
ment to package selected media data files into a story.
0081. In this process, the host computer receives 500, as input, the context from the viewing environment and presentation tracking data. The presentation tracking data is any timing data associated with the presentation process that indicates when the selected media data files are presented on the presentation devices. In some cases, the list of media data files and a presentation start time can be sufficient, if the presentation time of each of the media data files can be determined based on the media data files and/or the playback settings.
0082. The host computer creates 502 a new story in the story database, and associates 504 the selected media data files with the new story in the story-media file table. The extracted data is then processed to create further associations in the story database, entity database, media index and associated relationship data. For example, the host computer can store 506 keywords extracted from text based on recognized speech from the environment as the keywords or tags for the story in the story database. The host computer can store 508 a transcript of the recognized speech as the text for the story in the story database. The host computer can store 510 data indicative of the entities, whether an individual or group of individuals, either mentioned in the recognized speech or recognized as present in the environment, as entities associated with the story in the entity-story table. Additionally, time data associated with the collected data can be further processed 512 to associate data with media data files. As an example, speech recognition might provide data indicating that an individual is present in the media data of a particular media data file. Such an association can be recorded in the entity-media file relationship data.
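A compact sketch of this flow, assuming simple in-memory dictionaries and lists stand in for the story database and the join tables; the function name, field names and step comments are illustrative only.

import uuid

story_db = {}        # story database
story_media = []     # story-media file join records
entity_story = []    # entity-story join records

def create_story_from_context(selected_file_ids, context):
    """Create a story (502), associate media files (504), store tags, text and entities (506-510)."""
    story_id = str(uuid.uuid4())
    story_db[story_id] = {
        "tags": context.get("keywords", []),   # keywords extracted from recognized speech
        "text": context.get("text", ""),       # transcript of recognized speech
    }
    for file_id in selected_file_ids:
        story_media.append({"story_id": story_id, "file_id": file_id})
    for entity_id in context.get("entities", []):
        entity_story.append({"story_id": story_id, "entity_id": entity_id})
    return story_id

sid = create_story_from_context(["f1", "f2"],
                                {"keywords": ["graduation"], "entities": ["e1"], "text": "..."})
print(story_db[sid]["tags"])  # ['graduation']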
[0083] To be clear, the computer system uses two kinds of time references. First, the context has a time reference which corresponds to the time at which the context was captured. Similarly, the presentation tracking information includes a time reference which corresponds to the time at which a media file is presented. The first kind of time reference allows the system to associate particular context with a particular media file. Second, the context itself can include a time reference. For example, an individual in the viewing environment says "This happened yesterday", and the time reference is "yesterday". The computer system first recognizes the phrase "yesterday" from the audio signal, and then processes the phrase by converting it to an absolute date relative to the time at which it was mentioned in the context. A media file, entity, or story also may include a time reference indicating a time relevant to that data. In particular, media files typically have exact timestamps assigned at creation. For example, a media file may be marked as captured on a particular day at a particular time, or an event can be marked as having occurred on a particular day. Thus, as an example, if the context derived during presentation of a media file includes the phrase "This happened yesterday", the computer system uses the first kind of time reference to correlate the detected phrase with the currently displayed media data file, and then uses the second kind of time reference, "yesterday", to annotate that media data file with the exact date obtained by converting "yesterday" to an absolute date. Such annotation allows the media data file to then be retrieved or organized based on that time reference. Generally speaking, weak time references such as "summer", "winter", "last week", a special holiday, "when John was at University", and the like, can be converted to ranges of dates, including one or more days, weeks, months and/or years.
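As a non-limiting illustration of converting weak time references into absolute dates or date ranges anchored to the capture time of the context, the following Python sketch resolves a few example phrases. The phrase list and the northern-hemisphere season convention are assumptions of the example, not part of the described system.

from datetime import date, timedelta

def resolve_time_reference(phrase: str, spoken_on: date):
    # Return a (start, end) date range for a relative time phrase,
    # anchored to the date the phrase was captured.
    phrase = phrase.lower().strip()
    if phrase == "yesterday":
        d = spoken_on - timedelta(days=1)
        return (d, d)
    if phrase == "last week":
        start = spoken_on - timedelta(days=spoken_on.weekday() + 7)
        return (start, start + timedelta(days=6))
    if phrase == "summer":  # assumed northern-hemisphere convention
        year = spoken_on.year if spoken_on.month >= 6 else spoken_on.year - 1
        return (date(year, 6, 1), date(year, 8, 31))
    return None  # unrecognized phrases are left unresolved

# Example: "This happened yesterday" captured on 2017-07-13
print(resolve_time_reference("yesterday", date(2017, 7, 13)))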
[0084] FIG. 6 is a flowchart describing an example implementation of selecting media data files using the context.
[0085] In this process, the host computer receives 600 as an input the context from the viewing environment, and any further implied or derived selection criteria. The selection criteria can be based on user input, settings in the story processing component on the host computer, and/or predetermined criteria that are part of the story definition and/or metadata.
[0086] Based on the received context and the selection criteria, the host computer builds 602 a query for accessing the databases. The form of the query depends on the kinds of data extracted from the sensors and the selection criteria.
[0087] The host computer applies 604 the query to the databases and receives results of the query. The results may indicate, for example, one or more stories or a collection of media data files matching the selection criteria. The host
computer can rank the media files or stories using relevance and/or optimization statistics, such as gaze analytics and feedback and reactions captured from previous presentations of each media file, so as to include the more relevant media files and also those with the highest probability of receiving high levels of engagement for the viewing environment and/or audience. As part of this ranking and optimization process, specific media files also can be ruled out depending on the specific viewing environment and/or audience. From these results, a story, or a collection of media data files for which a story can be created, can be selected 606. The selection can be performed by an end user, based on presentation of the query results to the end user. The selection also can be made automatically by the computer, such as by selecting the story with the best match. The host computer then can access and present 608 the media data files for the selected story.
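The following sketch, offered only as one possible illustration of steps 602 through 606, builds a simple query from context terms and selection criteria and ranks candidate results by relevance and prior engagement while excluding items ruled out for the current audience. The field names and scoring weights are assumptions of the example.

def build_query(context_terms, selection_criteria):
    # Combine terms extracted from the context with any criteria from the
    # story definition, settings or user input (step 602).
    terms = set(context_terms) | set(selection_criteria.get("keywords", []))
    return {"keywords": sorted(terms),
            "entities": selection_criteria.get("entities", [])}

def rank_results(results, audience):
    # Rank candidate stories or media files by relevance and prior
    # engagement statistics, excluding any item ruled out for this audience.
    allowed = [r for r in results if audience not in r.get("excluded_for", [])]
    return sorted(
        allowed,
        key=lambda r: (r.get("relevance", 0.0)
                       + r.get("gaze_score", 0.0)
                       + r.get("feedback_score", 0.0)),
        reverse=True,
    )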
[0088] As used herein, the term "match" is intended to include any form of similarity or difference metric applied between a set of input terms, such as query terms, and a set of features associated with one or more entries in the stored data. While the examples above describe matching metadata from the context to stories, any portion of the context derived from the viewing environment can be processed into a set of terms that can be compared to entries in the database to retrieve items from the database using any of a variety of similarity, difference or exact matching techniques.
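One concrete instance of such a match is set-based Jaccard similarity between query terms and an entry's stored terms, shown below purely as an illustration; any other similarity, difference or exact-matching technique could be substituted.

def jaccard_match(query_terms, entry_terms):
    # Jaccard similarity: size of the intersection divided by the union.
    q, e = set(query_terms), set(entry_terms)
    if not q and not e:
        return 1.0
    return len(q & e) / len(q | e)

# Example usage with illustrative terms
print(jaccard_match({"beach", "2016", "anna"}, {"beach", "anna", "sunset"}))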
[0089] Such a computer system can support a wide variety of use cases in which context from the viewing environment can be used to organize, retrieve, annotate and/or present media data files as collections of media data files associated with one or more entities, such as individuals, groups of individuals or other objects.
[0090] For example, the context can include audio signals including verbal communications of individuals in the viewing environment, such as a conversation among individuals in the viewing environment occurring naturally and possibly independently from the viewing experience, which is then processed by the computer system to generate a media experience that accompanies the conversation.
[0091] For example, if the viewing environment is in a home of a family, the continually processed context can be used to intelligently serve, and continually annotate media data from, various media experiences, resulting in a system that adapts over time in how it presents media experiences tailored to individuals in the viewing environment. The context can be used to identify family and individual viewing patterns, significant dates, occasions, events and objects, and expressions (faces, verbal language elements, gestures), associate these with various tags or scores, and then annotate media files and other data with these tags or scores. The annotated media files and other data allow the system to retrieve and present media data based on insights about the family and about the individuals currently identified in the viewing environment. For example, words and phrases commonly used by some family members, or from a memorable moment, can be used to annotate the media data and then used as search entry points to retrieve that media data. These words and phrases may be obtained from the context or the media data or a combination of both, such as a voice command received during presentation of the media data.
[0092] As another example, the computer system can process audio signals from an individual in the viewing
environment, together with gaze and positioning information of the individual relative to a display. The gaze and positioning information can be used to determine whether any words, phrases or story extracted from the audio signal may be relevant to media data being presented in the viewing environment. Using the gaze analytics, the computer system determines whether the individual is talking about the presented media, using both natural language processing and information about whether, and where, the user is looking at the presented media, and if so, which specific entity and/or area of the presented media. If it is determined that the individual is talking about the presented media, then the computer system further processes the extracted words, phrases or longer text such as a description or story, and annotates the presented media data file in the databases with this information.
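A minimal sketch of this decision logic is shown below, assuming the gaze analytics supply a flag indicating whether the viewer is looking at the display and, optionally, the entity under the gaze point; the field names are assumptions of the example, not part of the described system.

def utterance_annotates_media(gaze, utterance, presented_entities):
    # Return an annotation target for the presented media data file,
    # or None if the speech appears unrelated to the presented media.
    if not gaze.get("on_display", False):
        return None
    region_entity = gaze.get("entity_in_region")      # entity under the gaze point, if any
    mentioned = set(utterance.get("entities", []))    # entities extracted by language processing
    if region_entity and region_entity in mentioned:
        return {"entity": region_entity, "text": utterance["text"]}
    if mentioned & set(presented_entities):
        return {"entity": None, "text": utterance["text"]}
    return None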
[0093] As another example, audio data, such as music, recorded sound, representations of melodies and the like, or other music data, such as lyrics, mood class, instrumentation or other properties, can be searchable metadata associated with a story, a collection of media data files, individual media data files, or entities in the database. The computer system can be configured to, for example in response to a voice command, process audio data (such as a whistled or hummed tune) to extract information to either search the database or annotate entries in the database. For example, music of a particular mood class can be associated with a story in the database. As another example, music of a particular mood class can be associated with a specific class of events, such as a birthday celebration or Christmas celebration. The stories or media files can then be retrieved by receiving a specific whistled or hummed tune, song or melody which is associated with entities, stories or isolated media files.
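As one illustrative (and not prescribed) way to match a hummed or whistled tune against stored melody metadata, the following sketch reduces a pitch sequence to its up/down/repeat contour (the Parsons code) and compares contours by substring containment; the encoding choice and stored contour are assumptions of the example.

def parsons_code(pitches):
    # Encode a pitch sequence as U (up), D (down), R (repeat).
    return "".join(
        "U" if b > a else "D" if b < a else "R"
        for a, b in zip(pitches, pitches[1:])
    )

def contour_match(query_pitches, stored_contour):
    # A crude match: one contour contained in the other.
    q = parsons_code(query_pitches)
    return q in stored_contour or stored_contour in q

# Example: a short hummed query against a stored melody contour
print(contour_match([60, 62, 64, 62], "UUDR"))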
[0094] As another example, using file sharing techniques in a shared storage system, stories, media file and/or entity information from the various databases can be shared with other users, such as an "extended family". The entity database can be used to track different collections of individuals as different "families" or "extended families" to enable such sharing.
[0095] As another example, natural language processing applied to audio signals from the viewing environment can be used in many ways. Given words, phrases and other information extracted from the audio signal, the databases can be searched by any individual with sufficient privileges. A user can thus ask the computer system for stories in many ways, both explicitly and implicitly. For example, a user can ask for photos of persons x, y, and z, a few years ago and/or in certain locations and/or in certain circumstances and/or with certain emotion tags or phrases. The computer system can then identify matching stories and/or media files, rank them, define presentation and display properties, and begin presenting the media files to the user. Natural language processing can be used, for example, to provide subtitles, comments and further annotations. The viewers' reactions to the presented media files can then be captured from the context of the viewing environment and used to further annotate the selected and presented media files and stories. For example, rating information, such as "like" or "don't like" or a value on a numerical scale, can also be provided, for example through a voice command and/or gesture. As another example, reactions of individuals can be implied through processing other context information, such as facial analysis, sounds and body movement.
This information can be used to associate a media data file
with an individual and the provided reaction or rating.
[0096] As another example, the computer system can provide some editing functionality in response to voice input and commands. For example, in response to a voice command, e.g., "annotate", a currently presented media data file can be annotated with keywords, subtitles, emotion labels, titles or other information. As another example, in response to a voice command, e.g., "edit", a media data file or story can be presented for editing by the user.
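A minimal sketch of dispatching such spoken editing commands is shown below; the command names "annotate" and "edit" come from the examples above, while the handler behavior and data shapes are assumptions of the example only.

def handle_voice_command(command, current_media_file, payload=None):
    # Dispatch a recognized voice command against the currently presented
    # media data file; unknown commands are ignored.
    command = command.lower().strip()
    if command == "annotate":
        # Attach keywords, subtitles, emotion labels or titles to the file.
        current_media_file.setdefault("annotations", []).append(payload)
        return current_media_file
    if command == "edit":
        # Hand the file (or its story) to an editing user interface.
        return {"action": "open_editor", "target": current_media_file}
    return {"action": "ignored", "command": command}

# Example usage with a hypothetical media data file record
photo = {"id": "img_0042"}
handle_voice_command("annotate", photo, {"keyword": "birthday"})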
[0097] A wide variety of other uses can be provided by such a computer system that is configured to use context from the viewing environment to organize, retrieve, annotate and/or present media data files as collections of media data files associated with one or more entities, such as individuals, groups of individuals or other objects.
[0098] Having now described an example implementation, FIG. 7 illustrates an example of a computer with which components of the computer system of the foregoing description can be implemented. This is only one example of a computer and is not intended to suggest any limitation as to the scope of use or functionality of such a computer.
[0099] The computer can be any of a variety of general purpose or special purpose computing hardware configurations. Some examples of types of computers that can be used include, but are not limited to, personal computers, game consoles, set top boxes, hand-held or laptop devices (for example, media players, notebook computers, tablet computers, cellular phones, personal data assistants, voice recorders), virtual reality and augmented reality devices, server computers, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, networked personal computers, minicomputers, mainframe computers, and distributed computing environments that include any of the above types of computers or devices, and the like.
[0100] With reference to FIG. 7, a computer 700 includes at least one processing unit 702 and memory 704. The computer can have multiple processing units 702 and multiple devices implementing the memory 704. A processing unit 702 can include one or more processing cores (not shown) that operate independently of each other. Additional co-processing units also can be present in the computer. The memory 704 may include volatile devices (such as dynamic random access memory (DRAM) or other random access memory device), and non-volatile devices (such as a read-only memory, flash memory, and the like) or some combination of the two. Other storage, such as dedicated memory or registers, also can be present in the one or more processors. The computer 700 can include additional storage, such as storage devices (whether removable or non-removable) including, but not limited to, magnetically-recorded or optically-recorded disks or tape. Such additional storage is illustrated in FIG. 7 by removable storage device 708 and non-removable storage device 710. The various components in FIG. 7 are generally interconnected by an interconnection mechanism, such as one or more buses 730.
[0101] A computer storage medium is any medium in which data can be stored in and retrieved from addressable physical storage locations by the computer. Computer storage media includes volatile and nonvolatile memory, and removable and non-removable storage devices. Memory 704, removable storage 708 and non-removable storage 710 are all examples of computer storage media. Some examples
of computer storage media are RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optically or magneto-optically recorded storage devices, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media and communication media are mutually exclusive categories of media.
[0102] Computer 700 may also include communications connection(s) 712 that allow the computer to communicate with other devices over a communication medium. Communication media typically transmit computer program instructions, data structures, program modules or other data over a wired or wireless substance by propagating a modulated data signal such as a carrier wave or other transport mechanism over the substance. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal, thereby changing the configuration or state of the receiving device of the signal. By way of example, and not limitation, communication media includes wired media, such as metal or other electrically conductive wire that propagates electrical signals or optical fibers that propagate optical signals, and wireless media, such as any non-wired communication media that allows propagation of signals, such as acoustic, electromagnetic, electrical, optical, infrared, radio frequency and other signals. Communications connections 712 are devices, such as a wired network interface, a wireless network interface, a radio frequency transceiver (e.g., Wi-Fi, cellular, long term evolution (LTE) or Bluetooth transceivers), or a navigation transceiver (e.g., global positioning system (GPS) or Global Navigation Satellite System (GLONASS) transceivers), that interface with the communication media to transmit data over, and receive data from, the communication media.
[0103] The computer 700 may have various input device(s) 714 such as a pointer device, keyboard, touch-based input device, pen, camera, microphone, sensors (such as accelerometers, thermometers, light sensors and the like), and so on. The computer 700 may have various output device(s) 716 such as a display, speakers, and so on. Such devices are well known in the art and need not be discussed at length here. Various input and output devices can implement a natural user interface (NUI), which is any interface technology that enables a user to interact with a device in a "natural" manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and the like.
[0104] Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence, and may include the use of touch sensitive displays, voice and speech recognition, intention and goal understanding, motion gesture detection using depth cameras (such as stereoscopic camera systems, infrared camera systems, and other camera systems and combinations of these), motion gesture detection using accelerometers or gyroscopes, facial recognition, three-dimensional displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems, all of which provide a more natural interface, as well as technologies for sensing brain activity using electric field sensing electrodes (EEG and related methods).
[0105] The various storage 710, communication connections 712, output devices 716 and input devices 714 can be integrated within a housing with the rest of the computer, or can be connected through various input/output interface devices on the computer, in which case the reference numbers 710, 712, 714 and 716 can indicate either the interface for connection to a device or the device itself as the case may be.
[0106] A computer generally includes an operating system, which is a computer program that manages access to the various resources of the computer by applications. There may be multiple applications. The various resources include the memory, storage, input devices and output devices, such as the display devices and input devices shown in FIG. 7.
[0107] The various modules, components, data structures and processes of FIGS. 1-6, as well as any operating system, file system and applications on a computer in FIG. 7, can be implemented using one or more processing units of one or more computers with one or more computer programs processed by the one or more processing units. A computer program includes computer-executable instructions and/or computer-interpreted instructions, such as program modules, which instructions are processed by one or more processing units in the computer. Generally, such instructions define routines, programs, objects, components, data structures, and so on, that, when processed by a processing unit, instruct or configure the computer to perform operations on data, or configure the computer to implement various components, modules or data structures.
[0108] Alternatively, or in addition, the functionality of one or more of the various components described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
[0109] Accordingly, in one aspect, a computer system includes computer-accessible storage configured to store entity data including data identifying a plurality of entities and data identifying relationships among the entities, media data files and media file metadata, wherein each media data file has associated media file metadata, the metadata including an indication of one or more entities associated with the media data file, and text data, and story data including a plurality of stories, each story including at least text data, an indication of a plurality of the entities associated with the story and an indication of a plurality of media data files associated with the story. The computer system also includes a sensor in an environment, the sensor generating a signal representing at least verbal communication by an individual in the environment. The computer system also includes a presentation device in the environment configured to receive media data and process the media data to present the media data in the environment. The computer system also includes a processing system configured to receive the signal from the sensor, process the signal to derive context, the context including at least data extracted from the received signal representing the verbal communication by the individual in the environment and data indicating an entity. The processing system is further configured to present media data from a selection of media data files on the presentation device in
the environment while receiving the signal from the sensor, and annotate a media data file among the selection of media data files based at least on the context derived from the signal from the sensor.
[0110] In another aspect, an article of manufacture includes a computer storage medium and computer program instructions stored on the computer storage medium, that when executed by a host computer configure the host computer to provide a computer system that includes computer-accessible storage configured to store entity data including data identifying a plurality of entities and data identifying relationships among the entities, media data files and media file metadata, wherein each media data file has associated media file metadata, the metadata including an indication of one or more entities associated with the media data file, and text data, and story data including a plurality of stories, each story including at least text data, an indication of a plurality of the entities associated with the story and an indication of a plurality of media data files associated with the story. The computer program instructions further configure the host computer to receive a signal from a sensor in an environment, the sensor generating a signal representing at least verbal communication by an individual in the environment, and to process the signal to derive context, the context including at least data extracted from the received signal representing the verbal communication by the individual in the environment and data indicating an entity, and to present media data from a selection of media data files on a presentation device in the environment while receiving the signal from the sensor, and to annotate a media data file among the selection of media data files based at least on the context derived from the signal from the sensor.
[0111] In another aspect, a computer-implemented process uses a computer system including computer-accessible storage configured to store entity data including data identifying a plurality of entities and data identifying collections of entities from the plurality of entities, media data files and media file metadata, wherein each media data file has associated media file metadata, the metadata including an indication of one or more entities of the plurality of entities associated with the media data file, and text data, and story data including a plurality of stories, each story including at least text data, an indication of a plurality of the entities associated with the story and an indication of a plurality of media data files associated with the story. The computer-implemented process includes receiving a signal from a sensor in an environment, the sensor generating a signal representing at least verbal communication by an individual in the environment, processing the signal to derive context, the context including at least data extracted from the received signal representing the verbal communication by the individual in the environment and data indicating an entity, presenting media data from a selection of media data files on the presentation device in the environment while receiving the signal from the sensor, and annotating a media data file among the selection of media data files based at least on the context derived from the signal from the sensor.
[0112] In another aspect, a computer system includes means for processing signals received from sensors in a viewing environment during presentation of media data on presentation devices in the viewing environment to extract context from the signals, and means for further processing media data using the extracted context.
[0113] In another aspect, a computer-implemented process includes processing signals received from sensors in a viewing environment during presentation of media data on presentation devices in the viewing environment to extract context from the signals, and further processing media data using the extracted context.
[0114] In any of the foregoing aspects, presented media data can be selected based on the context and/or presented media data can be annotated based on the context.
[0115] In any of the foregoing aspects, the selection of the media data files can be generated by matching the derived context to metadata associated with media data files in the storage.
[0116] In any of the foregoing aspects, a media data file can be annotated by defining a story associated with the selection of media data files, and annotating the defined story based at least on the derived context.
[0117] In any of the foregoing aspects, the selection of the media data files can be generated by matching the derived context to metadata associated with stories in the storage, selecting at least one matching story, and accessing a collection of media data files associated with the selected story as the selection of media data files to be presented.
[0118] In any of the foregoing aspects, a computer can further be configured to receive an input indicating the selection of the media data files, initiate the presentation of the selection of the media files, associate a story with the selection of the media data files, and associate the context with the story associated with the selected media data files.
[0119] In any of the foregoing aspects, the media data files can be stored on a plurality of data storage systems, wherein the metadata for media data files further includes data indicating the data storage systems on which the media data files are stored, and the computer-accessible storage is further configured to store data storage system data including, for each data storage system, data enabling the computer to access the data storage system. The data storage systems can include domains on a computer network which store media data files for users, accessible using a browser application through user accounts using usernames and passwords.
[0120] In any of the foregoing aspects, the computer-accessible storage can be configured as a database including an entity database including data about entities, a story database including data about stories, a media index including data about media data files, and a plurality of relationship data structures specifying the associations among the entities, the stories and the media data files.
[0121] In any of the foregoing aspects, a computer can be further configured to associate a time reference with the derived context, and associate the context with a media data file based at least on a time reference associated with media data from the media data file and the time reference associated with the derived context.
[0122] Any of the foregoing aspects may be embodied as a computer system, as any individual component of such a computer system, as a process performed by such a computer system or any individual component of such a computer system, or as an article of manufacture including computer storage in which computer program instructions are stored and which, when processed by one or more computers, configure the one or more computers to provide such a computer system or any individual component of such a computer system.
[0123] It should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific implementations described above. The specific implementations described above are disclosed as examples only.
What is claimed is:
1. A computer system comprising:
a. computer-accessible storage configured to store:
entity data comprising data identifying a plurality of
entities and data identifying relationships among the
entities;
media data files and media file metadata, wherein each
media data file has associated media file metadata,
the metadata comprising: an indication of one or
more entities associated with the media data file, and
text data; and
story data comprising a plurality of stories, each story comprising at least text data, an indication of a plurality of the entities associated with the story and an indication of a plurality of media data files associated with the story;
b. a sensor in an environment, the sensor generating a
signal representing at least verbal communication by an
individual in the environment;
c. a presentation device in the environment configured to
receive media data and process the media data to
present the media data in the environment; and
d. a processing system configured to:
receive the signal from the sensor;
process the signal to derive context, the context including at least data extracted from the received signal representing the verbal communication by the individual in the environment and data indicating an entity;
present media data from a selection of media data files on the presentation device in the environment while receiving the signal from the sensor; and
annotate a media data file among the selection of media data files based at least on the context derived from the signal from the sensor.
2. The computer system of claim 1 wherein the processing system is further configured to:
generate the selection of the media data files by matching
the derived context to metadata associated with media
data files in the storage.
3. The computer system of claim 2, wherein to annotate a media data file comprises:
define a story associated with the selection of media data files; and
annotate the defined story based at least on the derived context.
4. The computer system of claim 1 wherein the processing system is further configured to:
generate the selection of the media data files by matching the derived context to metadata associated with stories in the storage;
select at least one matching story; and
access a collection of media data files associated with the selected story as the selection of media data files to be presented.
5. The computer system of claim 1 wherein the processing system is further configured to:
receive an input indicating the selection of the media data files;
initiate the presentation of the selection of the media files;
associate a story with the selection of the media data files; and
associate the extracted text data and entity data with the story associated with the selected media data files.
6. The computer system of claim 1, wherein the media data files are stored on a plurality of data storage systems, and wherein the metadata for media data files further comprises data indicating data storage systems on which the media data files are stored, and wherein the computer-accessible storage is further configured to store data storage system data comprising, for each data storage system, data enabling the computer to access the data storage system.
7. The computer system of claim 6, wherein the data
storage systems include domains on a computer network
which store media data files for users accessible using a
browser application through user accounts using usernames
and passwords.
8. The computer system of claim 1, wherein the computer-accessible storage is configured as a database comprising:
an entity database comprising data about entities;
a story database comprising data about stories;
a media index comprising data about media data files; and
a plurality of relationship data structures specifying the
associations among the entities, the stories and the
media data files.
9. The computer system of claim 1, further comprising:
associating a time reference with the derived context;
associating the context with a media data file based at
least on a time reference associated with media data
from the media data file and the time reference asso
ciated with the derived context.
10. An article of manufacture, comprising:
a computer storage medium;
computer program instructions stored on the computer
storage medium, that when executed by a host com
puter configure the host computer to provide a com
puter system comprising:
a. computer-accessible storage configured to store:
entity data comprising data identifying a plurality of
entities and data identifying relationships among the
entities;
media data files and media file metadata, wherein each
media data file has associated media file metadata, the
metadata comprising: an indication of one or more
entities associated with the media data file, and text
data; and
story data comprising a plurality of stories, each story comprising at least text data, an indication of a plurality of the entities associated with the story and an indication of a plurality of media data files associated with the story; and
b. a processing system configured to:
receive the signal from a sensor in an environment, the
sensor generating a signal representing at least verbal
communication by an individual in the environment;
process the signal to derive context, the context including
at least data extracted from the received signal repre
senting the verbal communication by the individual in
the environment and data indicating an entity;
present media data from a selection of media data files on a presentation device in the environment while receiving the signal from the sensor; and
annotate a media data file among the selection of media
data files based at least on the context derived from the
signal from the sensor.
11. The article of manufacture of claim 10 wherein the
processing system is further configured to generate the
selection of the media data files by matching the derived
context to metadata associated with media data files in the
storage.
12. The article of manufacture of claim 11 wherein to
annotate a media data file comprises:
define a story associated with the selection of media data files; and
annotate the defined story based at least on the derived context.
13. The article of manufacture of claim 10, wherein the
processing system is further configured to:
generate the selection of the media data files by matching the derived context to metadata associated with stories in the storage;
select at least one matching story; and
access a collection of media data files associated with the selected story as the selection of media data files to be presented.
14. The article of manufacture of claim 10, wherein the
processing system is further configured to:
receive an input indicating the selection of the media data files;
initiate the presentation of the selection of the media files;
associate a story with the selection of the media data files; and
associate the extracted text data and entity data with the story associated with the selected media data files.
15. The article of manufacture of claim 10, wherein the media data files are stored on a plurality of data storage systems, and wherein the metadata for media data files further comprises data indicating data storage systems on which the media data files are stored, and wherein the computer-accessible storage is further configured to store data storage system data comprising, for each data storage system, data enabling the computer to access the data storage system.
16. The article of manufacture of claim 15, wherein the
data storage systems include domains on a computer net
work which store media data files for users accessible using
a browser application through user accounts using user
names and passwords.
17. The article of manufacture of claim 10, wherein the
computer-accessible storage is configured to store a database
comprising:
an entity database comprising data about entities;
a story database comprising data about stories;
a media index comprising data about media data files; and
a plurality of relationship data structures specifying the
associations among the entities, the stories and the
media data files.
18. The article of manufacture of claim 10, further com
prising:
determining a first time reference corresponding to time
of occurrence of the derived context;
based on a second time reference associated with presen
tation of media data from a media data file, associating
the derived context with media data from a media data
file having a second time reference matching the first
time reference.
19. A computer-implemented process using a computer
system comprising computer-accessible storage configured
to store: entity data comprising data identifying a plurality of entities and data identifying collections of entities from the plurality of entities; media data files and media file metadata, wherein each media data file has associated media file metadata, the metadata comprising an indication of one or more entities of the plurality of entities associated with the media data file, and text data; and story data comprising a plurality of stories, each story comprising at least text data, an indication of a plurality of the entities associated with the story and an indication of a plurality of media data files associated with the story; and a sensor in an environment, the sensor generating a signal representing at least verbal communication by an individual in the environment; and a presentation device in the environment configured to receive media data and process the media data to present the media data in the environment, the computer-implemented process,
performed by the computer system, comprising:
receiving the signal from the sensor,
processing the signal to derive context, the context including at least data extracted from the received signal representing the verbal communication by the individual in the environment and data indicating an entity;
presenting media data from a selection of media data files on the presentation device in the environment while receiving the signal from the sensor; and
annotating a media data file among the selection of media data files based at least on the context derived from the signal from the sensor.
20. The process of claim 19, further comprising:
generating the selection of the media data files by matching the derived context to metadata associated with media data files in the storage.
* * * * *