This document summarizes a study that used mixed methods to evaluate the visual exploration tool VisGets. 763 participants explored the tool through unmoderated remote usability testing, generating both quantitative interaction data and qualitative feedback. The study aimed to understand the benefits, challenges, and performance of visual exploration systems. Results pointed to potential benefits, such as support for multiple search dimensions and comparative information seeking, but also challenges, such as learning curves and empty result sets. Performance analysis found that the system could be slow. The mixed methods approach provided a multi-faceted view of users' needs and system performance, and helped explain unexpected findings. Limitations included a limited participant population and potential variations in participation levels. Overall, the study provided initial insights into visual search and directions for future research.
8. Research Questions
1. iii) Introduction: Research Questions
What are the possible benefits of visual exploration?
What are the possible challenges of visual exploration?
How does a visual exploration system perform during use?
9. Methodology
3. i, ii, iii) Research Methods
Pragmatism
Unmoderated usability evaluation
Convergent Parallel Mixed Methods Design
QUANT + QUAL
Mixed in Interpretation
11. Factors and Analysis Methods
3. v) Research Methods
Quantitative
Backend system data
Frontend UI data
Correlational analysis (see the sketch after this list)
Qualitative
Feedback form (comments & critiques)
Thematic analysis
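To make the quantitative strand concrete, here is a minimal sketch of one plausible correlational analysis, pairing per-session backend latency with frontend interaction counts. The file names and column names (backend_log.csv, session_id, response_ms, and so on) are assumptions for illustration; the study does not publish its log schema.

```python
# Minimal sketch of the quantitative strand: correlate backend response
# times with frontend interaction counts per session. All file names and
# column names here are hypothetical, not taken from the study.
import pandas as pd
from scipy import stats

# Hypothetical log exports: one row per logged event.
backend = pd.read_csv("backend_log.csv")    # columns: session_id, response_ms
frontend = pd.read_csv("frontend_log.csv")  # columns: session_id, event_type

# Aggregate each log to one row per session.
latency = backend.groupby("session_id")["response_ms"].mean()
activity = frontend.groupby("session_id").size().rename("n_interactions")

# Align the two measures on session_id, dropping sessions seen in only one log.
sessions = pd.concat([latency, activity], axis=1, join="inner")

# Pearson correlation: do slower sessions show fewer interactions?
r, p = stats.pearsonr(sessions["response_ms"], sessions["n_interactions"])
print(f"r = {r:.3f}, p = {p:.4f} (n = {len(sessions)} sessions)")
```

If response times are heavily skewed, as latencies often are, a rank-based measure such as stats.spearmanr would be the usual substitute for Pearson's r.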
15. Why Mixed Methods?
4. i, ii, iii) Analysis of Mixed Methods
User needs/perception · System performance
Approach · Data Collection · Data Analysis
16. Strengths of Mixed Methods
5. i, ii, iii) Analysis of Mixed Methods
Multi-faceted view of data
Simple to execute
Data in same phase
Explains unexpected results
Approach · Data Collection · Data Analysis
17. 6. Analysis of Mixed Methods
Limitations
Limited population & dataset
Single Coder?
Variation in participation levels
Approach · Data Collection · Data Analysis
18. • Soft claims / High-level findings
• Possible future research directions
• Did not include potential confounds
Quality of Interpretations
7. Analysis of Quality
19. • Detailed limitations provided
• Pathway to future research areas for visual search
Quality of Design Knowledge
8. Analysis of Design Knowledge
22. Key Terms
Multi-faceted
Having many definable aspects that make up a subject or object.
Visual exploration
High-level form of information seeking.
Search-and-Browse Paradigm
Traditional means of discovering text-based content on the Web.
3. iv) Research Methods
Web-based information visualization
A multifaceted interactive graphical representation of information that can be manipulated via the graphical elements or through traditional text queries.