Academic research on data visualization has grown explosively in the last 15 years. In this presentation I use Elsevier's Scopus to search for scholarly research on data visualization and to present visual summaries of the vast literature.
Using full-text data to create improved term maps, by Nees Jan van Eck
Presentation at the 16th International Conference on Scientometrics & Informetrics, Wuhan, China, October 19, 2017.
A term map offers a visualization of a network of terms that co-occur in scientific publications. Term maps are usually created based on the titles and abstracts of publications. In this paper, we explore the use of full-text data for creating term maps. We create and compare a series of term maps based on the full text of publications in Journal of Informetrics. We use our results to discuss the advantages and disadvantages of different approaches for creating term maps.
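The term maps described here are built from pairwise co-occurrence counts of terms across documents. A minimal sketch of that counting step, using invented toy documents and a hand-picked vocabulary rather than the paper's actual text-mining pipeline:

```python
from itertools import combinations
from collections import Counter

def term_cooccurrence(documents, vocabulary):
    """Count how often each pair of vocabulary terms appears in the same document."""
    pair_counts = Counter()
    for doc in documents:
        # Terms from the vocabulary present in this document (sorted for stable pair keys)
        present = sorted({t for t in vocabulary if t in doc.lower()})
        for a, b in combinations(present, 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# Toy inputs for illustration only
docs = [
    "Citation analysis of journal articles",
    "Term maps visualize citation networks",
    "Co-occurrence of terms in abstracts",
]
vocab = ["citation", "term", "co-occurrence", "journal"]
counts = term_cooccurrence(docs, vocab)
```

Tools such as VOSviewer then lay out the terms so that frequently co-occurring pairs sit close together in the map.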
Accuracy of citation data in Web of Science and Scopus, by Nees Jan van Eck
Presentation at the 16th International Conference on Scientometrics & Informetrics, Wuhan, China, October 19, 2017.
We present a large-scale analysis of the accuracy of citation data in the Web of Science and Scopus databases. The analysis is based on citations given in publications in Elsevier journals. We reveal significant data quality problems for both databases. Missing and incorrect references are important problems in Web of Science. Duplicate publications are a serious problem in Scopus.
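One ingredient in an analysis like this is detecting when two reference records are really the same item. A toy illustration of the idea, normalizing strings into crude keys so near-identical records collapse together; this is our own sketch, not the authors' actual matching method:

```python
import re

def normalize_ref(ref):
    """Crude reference key: lowercase and strip everything but letters/digits,
    so trivially different renderings of one reference map to the same key."""
    return re.sub(r"[^a-z0-9]", "", ref.lower())

def find_duplicates(refs):
    """Return (first_seen, duplicate) pairs of records sharing a key."""
    seen = {}
    dups = []
    for r in refs:
        key = normalize_ref(r)
        if key in seen:
            dups.append((seen[key], r))
        else:
            seen[key] = r
    return dups

# Two renderings of the same reference plus one distinct record
dups = find_duplicates(["Van Eck, N.J. (2017)", "van eck n j 2017", "Waltman, L. (2016)"])
```

Real citation matching is far more involved (fields, fuzzy matching, thresholds), but the key-collision idea is the same.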
Presentation at the Colloquium Research Information Systems and Science Classifications: Revisiting the NARCIS Classification, Museum Meermanno, The Hague, The Netherlands, September 28, 2018.
PR173: Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules, by Sunghoon Joo
Paper review slide.
Title: Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
Paper URL: https://pubs.acs.org/doi/full/10.1021/acscentsci.7b00572
Video URL: https://youtu.be/hk4e8ZCkNWg
Visual mining of science citation data for benchmarking scientific and technological competitiveness of world countries, by Gurdal Ertek
In this paper we present a study in which we visually analyzed science citation data to investigate the competitiveness of the world's countries in selected categories of science. The dataset used in our study includes the number of papers published and the number of citations made in the ESI (Essential Science Indicators) database in 2004, listed for practically every country in the world. In analyzing the data, we employ methods and software tools developed and used in the data mining and information visualization fields of computer science. Some of the questions we seek to answer are the following: (a) Which countries are most competitive in the selected categories of science (i.e., Engineering, Computer Science, and Economics & Business)? (b) What correlations exist between different categories of science? For example, do countries with many published papers in Engineering also publish many papers in Computer Science or Economics & Business? (c) Which countries produce the most influential papers? This analysis is needed because a country may have many published papers that are cited only rarely. (d) Can we gain useful and actionable insights by combining science citation data with socioeconomic and geographical data?
http://research.sabanciuniv.edu.
Visual Mining of Science Citation Data for Benchmarking Scientific and Techno...ertekg
Download Link > https://ertekprojects.com/gurdal-ertek-publications/blog/visual-mining-of-science-citation-data-for-benchmarking-scientific-and-technological-competitiveness-of-world-countries/
Data Science & Analytics (light overview), by Shalin Hai-Jew
This draft slideshow is a print version of an Adobe Spark presentation planned for the TILTed Event at Fort Hays State University (FHSU) in March 2019. The URL to the Spark presentation is https://spark.adobe.com/page/jaOglkNI9Jjp1/.
In the last decade, several Scientific Knowledge Graphs (SKGs) have been released, representing scientific knowledge in a structured, interlinked, and semantically rich manner. But what kind of information do they describe? How have they been built? What can we do with them? In this lecture, I will first provide an overview of well-known SKGs, such as Microsoft Academic Graph, Dimensions, and others. Then, I will present the Academia/Industry DynAmics (AIDA) Knowledge Graph, which describes 21M publications and 8M patents according to i) the research topics drawn from the Computer Science Ontology, ii) the type of the authors' affiliations (e.g., academia, industry), and iii) 66 industrial sectors (e.g., automotive, financial, energy, electronics) from the Industrial Sectors Ontology (INDUSO). Finally, I will showcase a number of tools and approaches that use such SKGs to support researchers, companies, and policymakers in making sense of research dynamics.
Decomposing Social and Semantic Networks in Emerging “Big Data” Research, by Han Woo Park
A paper that provides a good overview of the background against which big data emerged as a field of study.
http://www.sciencedirect.com/science/article/pii/S1751157713000473
Park, H.W., & Leydesdorff, L. (2013). Decomposing Social and Semantic Networks in Emerging “Big Data” Research. Journal of Informetrics, 7(3), 756-765. DOI: 10.1016/j.joi.2013.05.004
Big data is prevalent in our daily life. Not surprisingly, big data has become a hot topic discussed in the commercial world, the media, magazines, among the general public, and elsewhere. From an academic point of view, is it a research area worth exploring, or just another hype? Is big data research suitable only for computer or IS-related scholars because of its nature, or also for scholars from other research areas? This study aims to answer these questions through an informetric approach, with the SSCI journal database as the data source, leveraging informetrics' quantitative power to analyze information in any form against a representative data source. The research shows that big data research is in its growth phase, with an exponential growth pattern since 2012 and great potential for the years to come. Perhaps surprisingly, computer and IS-related disciplines are not among the top five research areas in the results. In fact, the top five research disciplines are more diversified than expected: Business Economics (#1), Government Law (#2), Information Science/Library Science (#3), Social Science (#4), and Computer Science (#5). Scholars from US universities are the most productive in this subject, while Asian countries, including Taiwan, are also visible. This study also finds that big data publications in the SSCI journal database during 2005-2015 fit Lotka's law. The study contributes to understanding current big data research trends and shows the way for researchers interested in conducting future research on big data, regardless of their research backgrounds.
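Lotka's law, which the study above tests, says that the number of authors who publish n papers falls off roughly as 1/n^a (classically a ≈ 2). A hedged sketch of how such a fit can be checked, estimating the exponent by least squares on a log-log scale; the counts below are synthetic and chosen to follow the law exactly, not the paper's SSCI data:

```python
import math
from collections import Counter

def lotka_exponent(pub_counts):
    """Estimate the exponent a in Lotka's law f(n) ~ C / n**a by a
    least-squares fit of log f(n) against log n."""
    freq = Counter(pub_counts)              # n -> number of authors with n papers
    xs = [math.log(n) for n in freq]
    ys = [math.log(f) for f in freq.values()]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return -slope                           # negate: f(n) decreases with n

# Synthetic author productivity following f(n) = 64 / n**2 exactly:
# 64 authors with 1 paper, 16 with 2, 4 with 4, 1 with 8
counts = [1] * 64 + [2] * 16 + [4] * 4 + [8] * 1
a = lotka_exponent(counts)
```

On real data the points scatter around the line, and goodness-of-fit tests (e.g. Kolmogorov-Smirnov) are used before claiming the law holds.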
Substantial amounts of digital data are produced in the scientific enterprise, and much of it is carefully analyzed and processed. Often resulting from a good deal of intellectual effort, many of these highly processed products are published in the scholarly literature. Many of these data - or more precisely, representations of these data - are committed to the scholarly record in the form of figures and tables that appear within articles: the AAS journals publish more than 30,000 figures and nearly 10,000 tables each year. For more than a decade, the AAS journals have accepted machine-readable tables that provide the data behind (some of) the tables, and recently the journals have started to encourage the submission of the data behind figures. (See the related poster by Greg Schwarz.) During this time, the journals have been refining techniques for acquiring and managing the digital data that underlie figures and tables. In 2012 the AAS was awarded a grant by the US NSF so that the journals can extend the methods for providing access to these data objects, through a deeper collaboration with the VO and with organizations like DataCite, and by spearheading discussions about the formats and metadata that will best facilitate long-term data management and access. An important component of these activities is educating scientists about the importance and benefits of making such data sets available.
Integration of research literature and data (InFoLiS), by Philipp Zumstein
Talk at CNI 2015 Spring Membership Meeting in Seattle on April 14th, 2015, see http://www.cni.org/events/membership-meetings/upcoming-meeting/spring-2015/
Abstract: The goal of the InFoLiS project is to connect research data and publications. Links between data and literature are created automatically by means of text mining and made available as Linked Open Data (LOD) for seamless integration into different retrieval systems. This enables scientists to directly access information about corresponding research data in a literature information system, and, vice versa, it is possible to directly find different interpretations and analyses in the literature of the same research data. In our talk, we will describe our methods for generating the links and give insight into the Linked Data infrastructure including the services we are currently building. Most importantly, we will detail how our solutions can be used by other institutions and invite all interested participants to discuss with us their ideas and thoughts on the requirements for these services to ensure broad interoperability with existing systems and infrastructures. InFoLiS is a joint project by the GESIS – Leibniz Institute for the Social Sciences, Cologne, Mannheim University Library, and Mannheim University supported by a grant from the DFG – German Research Foundation.
Tutors India offers assistance in quantitative studies and data analyses and explains the steps to conduct a bibliometric analysis.
https://www.tutorsindia.com/blog/how-to-conduct-bibliometric-analyses/
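Bibliometric analyses of the kind the post walks through typically end in computing indicators over citation counts. A minimal illustration of one common indicator, the h-index; the function and sample counts are ours, not taken from the Tutors India post:

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts have h-index 4
example = h_index([10, 8, 5, 4, 3])
```

The same sorted-counts pattern underlies related indicators such as the g-index.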
Contact Address:
UK
10 Park Place,
Manchester M4 4EY
+44-1143520021
INDIA
10, Kutty Street,
Nungambakkam,
Chennai – 600034
+91-4448137070
Mail ID: info@tutorsindia.com
Social media addresses
► Website: https://www.tutorsindia.com/
► Facebook: https://www.facebook.com/TutorsIndiaGlobalAcademia/
► YouTube: https://www.youtube.com/@tutorsindia9273
► Instagram: https://www.instagram.com/tutors_india/
► Twitter: https://twitter.com/tutorsindia/
► LinkedIn: https://www.linkedin.com/company/tutors-india/
► Pinterest: https://in.pinterest.com/TutorsIndia/
Applying machine learning techniques to big data in the scholarly domain, by Angelo Salatino
Slides of the lecture at the 5th International School on Applied Probability Theory, Communications Technologies & Data Science (APTCT-2020)
12 Nov 2020
Linked Open Data: Combining Data for the Social Sciences and Humanities (and ...), by Richard Zijdeman
A glimpse of how we are used to connecting datasets on our laptops and how, in my opinion, we need to move to the Web of Data, including a demo connecting various sources, all from your(!) machine.
3-D geospatial data for disaster management and development, by Keiko Ono
Japan is a high income country at an advanced stage of epidemiological transition. One of its remaining public health challenges is response to natural disasters. This presentation explores the potential of 3-D geospatial data in disaster response and management.
Similar to Visualizing data visualization using scopus (20)
A narrative review of NLP applications to political science
With the rapid development of artificial intelligence and machine learning, the range of "data" that can be used in such analyses is expanding. Natural language processing, in which artificial intelligence reads language spoken or written by people to translate and summarize it, and to perform more advanced analyses such as finding features and patterns, is already in practical use in many fields. This presentation reviews previous research in political science that uses natural language processing and examines future possibilities. Keywords: artificial intelligence, natural language processing (NLP), text mining, political science, data science
US presidential selection: the Electoral College challenged (again), by Keiko Ono
Constitutional design for selecting the chief executive
Historical evolution since 1789
The Electoral College
How it works today
Implications and criticism
Alternatives
Reapportionment and post-2020 projections
"Impact of front-end architecture on development cost", by Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front end. I have also often seen developers implement front-end features simply by following a framework's standard rules, thinking this is enough to launch the project successfully, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot), by Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova..., by Ramesh Iyer
In today's fast-changing business world, companies that do not adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Key Trends Shaping the Future of Infrastructure, by Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Connector Corner: Automate dynamic content and events by pushing a button, by DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
2. The two largest databases of academic literature: in this presentation I use Elsevier’s SciVerse Scopus.
3. Using “data visualization” (with quotation marks) as a search term in title, abstract, or keywords produces over 20,000 results.
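The same search can be issued programmatically through the Scopus Search API, where the TITLE-ABS-KEY field code mirrors the web UI’s title/abstract/keyword search. A minimal sketch that only assembles the request URL; the API-key placeholder and the `count` value are assumptions, and a real request needs an Elsevier developer key:

```python
from urllib.parse import urlencode

BASE = "https://api.elsevier.com/content/search/scopus"
params = {
    # TITLE-ABS-KEY searches title, abstract, and keywords
    "query": 'TITLE-ABS-KEY("data visualization")',
    "apiKey": "YOUR_API_KEY",  # placeholder, not a real key
    "count": 25,               # results per page
}
url = BASE + "?" + urlencode(params)
print(url)
```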
4. That’s a lot of information. To narrow the scope and find what we’re really looking for, we could use additional search terms. But sometimes we want to understand and make sense of the really big body of literature (in this case n = 20,262). Scopus has a built-in “Analyze Results” function that provides visual summaries of the bibliometric data. Each summary also comes with a frequency table, so one can create one’s own graphics.
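Those exported frequency tables are plain CSV, so ranking the rows for your own chart takes only a few lines of stdlib Python. A sketch under stated assumptions: the file layout and column headers are hypothetical, and all figures except the 2,217-for-2016 value quoted later are illustrative placeholders, not real Scopus numbers:

```python
import csv
import io

# Hypothetical "documents per year" export from Analyze Results;
# only the 2016 figure comes from the presentation itself.
sample = """YEAR,DOCUMENTS
2014,1900
2015,2050
2016,2217
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Rank years by document count, highest first
ranked = sorted(rows, key=lambda r: int(r["DOCUMENTS"]), reverse=True)
for r in ranked:
    print(r["YEAR"], r["DOCUMENTS"])
```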
5. One can use the “Analyze Results” function to see (1) how the number of publications has changed over the years. Until 1987, the number of publications per year was in the single digits. It reached 133 in 1996 and 1,168 in 2007. In 2016, there were 2,217 scholarly publications on data visualization!
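A quick sanity check on those figures: the implied compound annual growth rate can be computed with stdlib Python only, using the yearly counts quoted above:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two yearly counts."""
    return (end / start) ** (1 / years) - 1

# Figures quoted on the slide: 133 publications in 1996,
# 1,168 in 2007, and 2,217 in 2016.
print(f"1996-2007: {cagr(133, 1168, 2007 - 1996):.1%}")   # ~21.8% per year
print(f"2007-2016: {cagr(1168, 2217, 2016 - 2007):.1%}")  # ~7.4% per year
```

So the growth was fastest around the turn of the century and has since slowed, though the absolute output keeps climbing.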
6. Or to see (2) which journals have published the greatest number of articles on this topic. No. 1 on the list is Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
7. Or to see (3) who the leading authors are on this topic.
8. And (4) where they are affiliated. The leaders in data visualization research are the University of California Davis, Georgia Institute of Technology, the University of Utah, and so on. The University of Tokyo is ranked #18.
10. (6) Scopus also visually presents the breakdown by document type.
11. (7) And the breakdown by subject area. It is not clear how these percentages are calculated, however.
12. In summary, research on data visualization has seen explosive growth in the 21st century. The leading fields of research are computer science, engineering, and mathematics. The United States is the leader in this area of research, followed by China, Germany, and the UK. Alternatively, using “infographics” as a search term produces far fewer results (only about 200).