A Government Decision Analytics Framework Based on Citizen Opinion (Gov-DAF): Elaboration of the Knowledge Base Component

Mohamed Adel Rezk¹,², Adegboyega Ojo², Ghada A. El Khayat¹ and Safaa Hussein¹
¹Department of Information Systems and Computers, Alexandria University, Egypt.
²Insight Centre for Data Analytics, National University of Ireland Galway, Ireland.

2016 6th International Conference on ICT in our lives: Information Systems in a Connected World, 17-20 December 2016, Alexandria University, Alexandria, Egypt.
GOV-DAF PIPELINE
➢ Approach-independent Gov-DAF knowledge base building pipeline stages
  ○ Input Public Policy
  ○ Relate Public Policy Keywords to Aspects
  ○ Extend CPPV and Populate Gov-DAF Knowledge Base
CONCLUSION AND FUTURE WORK
➢ Gov-DAF knowledge base building pipeline implementation.
➢ Accuracy measures.
➢ Gov-DAF will adopt and enhance the Topic Modeling-based methodology.
ABSTRACT (summary of the problem and the solution)
  - Gov-DAF stages [12][13].
INTRODUCTION (with some background)
  1.1. Satisfaction measuring in the literature (references [1-10]).
  1.2. Bandari et al.'s work on news article popularity forecasting [11].
  1.3. Our satisfaction rate estimation formulas (Fig. 1).
  1.4. Background on the tools and vocabularies we used:
    A. CPPV [14] (Fig. 2) and CKAN [15].
    B. NER [16] and LDA [17].
    C. DISCO [19][20].
GOV-DAF pipeline
  Approach-independent Gov-DAF knowledge base building pipeline stages:
    - Input Public Policy
    - Relate Public Policy Keywords to Aspects
    - Extend CPPV and Populate Gov-DAF Knowledge Base
  Named Entity Recognition methodology-dependent pipeline stages:
    - Extract Origin Keywords
    - Generate Branch Keywords
  Topic Modeling methodology-dependent pipeline stages:
    - Detect Topic Clusters
Testing and Results
  - Testing Methodology and Measures
  - Results
CONCLUSION AND FUTURE WORK
What is the problem?
The citizens' satisfaction index towards public policies is a core political research question. Seeking agile and efficient public policies, policy makers perpetually investigate how to measure citizens' satisfaction with their policies, in order to overhaul the faulty public policy aspects or topics that produce a negative satisfaction index, bearing in mind that a correct calculation will grant the public policy and its makers significant success. Our perspective is that, whether or not the index is calculated correctly, it always comes too late: the public policy has already been issued and citizen reactions have already ensued. Hence, we previously proposed a public policy satisfaction prediction framework. This framework relies on a knowledge base that allows formulating the prediction formulae. To develop this knowledge base we extend the Core Public Policy Vocabulary (CPPV) and apply Named Entity Recognition and Topic Modeling, in parallel, for keyword extraction and semantic similarity measurement, in order to relate the detected keywords to the pre-defined public policy aspects. These methods allow automated population of the prediction knowledge base.
Our methodology to solve it (the Gov-DAF pipeline [12][13]).
Discuss the table as stages of solving the satisfaction measuring problem (noting that the first stage ignores relating keywords back to aspects and branching).
{
THE PROPOSED DECISION ANALYTICS FRAMEWORK
This section highlights the major elements of our framework, which tackles the second research question above (see Figure 1).
Policy - Policy documents are fed as text input into the proposed system, initiating the whole processing cycle. Contents on social media are assumed to be related to one or more public policies.
Keyword Extraction and Recognition - An Origin Keyword is a keyword extracted from the original policy text fed to the system. A Named Entity Recognition algorithm is used to produce the origin keywords.
Semantically Related Keyword Recognition (Branch Keywords) - A set of keywords generated by applying a semantic relatedness algorithm over the origin keywords.
Policy Aspects Detection - An aspect of the input policy is detected by applying the semantic relatedness algorithm over the origin and branch keywords against a set of previously gathered domain aspects.
Harvesting Citizens' Content Related to a Policy (Real-time Scenario only) - Using the origin and branch keywords, the system harvests content generated by citizens, starting with the Twitter platform.
Opinion Mining - Involves running a sentiment analysis algorithm on citizen content and aggregating the computed sentiments over the contents associated with a policy aspect. The output is used both in the knowledge acquisition phase and in the prediction phase.
Knowledge Base (KB) Construction (Real-time Scenario) - Constructed based on the Public Policy Ontology; it relates the origin and branch keywords to policy aspects, and policy aspects are associated with sentiment values. It provides the data for estimating sentiments for new policies.
Citizens' Satisfaction Rates Computation - Calculates the satisfaction rates on policy aspects using the KB.
Citizens' Satisfaction Insights - Gives the user an indication of the required action for each policy aspect, e.g. "needs revising" or "good".
In order to break a policy down into aspects, we propose a semantic relatedness approach, which calculates the distance between the keywords extracted from the policy text and the pre-defined aspects in the Policy Domain Model (see Figure 2).
There are two usage scenarios for the system. The first, the Real-time Scenario, involves continuous monitoring and generation of sentiments and opinions related to policy aspects and storing the information in the KB. The second, the Prediction Scenario, involves applying a Bayesian process to predict the likely opinion (or citizen satisfaction rate) on a new policy based on the opinions and sentiments associated with policy aspects and keywords in the KB. Predicting the likely citizen opinion on a new policy is based on estimating the opinion on its various policy aspects. The opinion on a policy aspect is computed as an aggregate of the sentiments on its related keywords (both origin and branch keywords), as shown in Figure 3.
Thus, in a government decision context, our system could be used as a real-time citizens' satisfaction rate calculator based on an input policy text.
}
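To make the aggregation step above concrete, here is a minimal Python sketch, not the authors' implementation: substring keyword matching, the [-1, 1] sentiment scale, and all names are simplifying assumptions for illustration.

```python
# Illustrative sketch only: aggregate micropost sentiments over
# origin/branch keywords into per-aspect satisfaction rates.
from collections import defaultdict

def aspect_satisfaction(posts, keyword_to_aspect, sentiment):
    """posts: iterable of micropost texts;
    keyword_to_aspect: dict mapping origin/branch keywords to a policy aspect;
    sentiment: callable scoring a text in [-1, 1]."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for post in posts:
        score = sentiment(post)
        for kw, aspect in keyword_to_aspect.items():
            if kw.lower() in post.lower():
                totals[aspect] += score
                counts[aspect] += 1
    # Mean sentiment per aspect over the posts that mention its keywords.
    return {a: totals[a] / counts[a] for a in totals}

# Toy usage with a stub sentiment function.
kb = {"housing tax": "Housing tax", "tax rate": "Tax rates"}
posts = ["The housing tax is unfair", "New tax rate seems reasonable"]
print(aspect_satisfaction(posts, kb, lambda t: -1.0 if "unfair" in t else 0.5))
```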
This section provides examples of two scenarios in which the proposed system could be used.
First Scenario - Real-time example: A government introduced a Housing Tax policy three months ago and would like to know the level of citizens' satisfaction based on comments and opinions expressed on Twitter. The information will be used to tune or adjust the policy towards better citizen satisfaction. Using our system in this case involves first feeding the policy text into the system, which is charged with extracting the origin and branch keywords, then detecting the policy aspects (Housing tax and Tax rates) and harvesting relevant content from Twitter. Next, the system applies opinion mining and calculates satisfaction rates towards the policy and its detected aspects. Finally, it gives insights based on the computed opinions to advise decision makers about which aspects of the policy may need revision.
Second Scenario - Prediction example: In this case a new Investment policy is under analysis before being introduced to citizens, and the decision maker seeks to predict the citizen satisfaction rate in advance. Used in prediction mode, the system extracts the origin and branch keywords, then detects the policy aspects using the NER and semantic relatedness algorithms (Tax rates and Duty-free areas). The final step uses Bayesian prediction to compute the probable citizen satisfaction based on the opinion and sentiment information already stored in the KB.
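The Bayesian process itself is described in [12], [13] and not reproduced in this transcript; as a heavily hedged stand-in, the sketch below simply averages the historical aspect-level sentiments stored in the KB to score a new policy.

```python
# Illustrative stub of the Prediction Scenario, NOT the Bayesian process
# from [12], [13]: average the stored per-aspect sentiments of the KB.
def predict_policy_satisfaction(detected_aspects, kb_aspect_sentiment, prior=0.0):
    """detected_aspects: aspects detected for the new policy text;
    kb_aspect_sentiment: dict of aspect -> historical mean sentiment in [-1, 1];
    prior: fallback score for aspects with no stored history."""
    scores = [kb_aspect_sentiment.get(a, prior) for a in detected_aspects]
    return sum(scores) / len(scores) if scores else prior

kb = {"Tax rates": -0.35, "Duty-free areas": 0.60}
print(predict_policy_satisfaction(["Tax rates", "Duty-free areas"], kb))
```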
Background on citizen satisfaction measuring [1-10].
To measure citizens' satisfaction, multiple automated and manual satisfaction index calculation methods have been applied [1]-[10].
(Some used surveys, but surveys cannot be compared with microposts for two reasons: (1) data volume and (2) unbiased data.)
News article popularity forecasting [11].
In 2012, Bandari et al. [11] introduced their news article popularity forecasting model. In this model, the news article popularity variable is measured by the number of times the article's URL is shared on Twitter. They used four independent variables to build the predictive model: news source, news category, article subjectivity, and the named entities mentioned in the article.
(They faced the same problem of forecasting how popular an article would be before it is published, in a different domain and with different analysis aspects.)
Proposed satisfaction rate estimation formulas (Fig. 1).
In our Government Decision Analytics Framework Based on Citizen Opinion (Gov-DAF), we are building two predictive models for forecasting the actual public policy acceptance rate as the dependent variable. This variable is quantified using the Actual Satisfaction Rate function (Fig. 1). The independent variable, Social Media Public Policy Acceptance, is quantified using the Micropost Satisfaction Rate function, which uses sentiment analysis of tweets as the scoring method (Fig. 1). Gov-DAF contributes to and extends multiple algorithms and tools to build a solution that analyzes tweet sentiments in order to solve the Micropost Satisfaction Rate equation (Table 1).
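Fig. 1 itself is not reproduced in this transcript. Purely as an illustration of the description above (an aggregate of tweet sentiment scores), a Micropost Satisfaction Rate could take the form:

```latex
\mathrm{MSR}(p) \;=\; \frac{1}{|T_p|} \sum_{t \in T_p} s(t), \qquad s(t) \in [-1, 1]
```

where \(T_p\) is the set of harvested microposts related to policy \(p\) and \(s(t)\) is the sentiment score assigned to micropost \(t\); the exact formula in Fig. 1 may differ.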
Background on technologies
CPPV [14] (Fig. 2) and CKAN [15].
Gov-DAF relies on the keywords mined from public policies as assets for its ultimate target: obtaining meaningful insights about public policies that help public policymakers. Thus, semantically structured collection and indexing of public policies (assets) for Gov-DAF analytical purposes was one of the main motivations for creating CPPV [14] as part of this research. CPPV offers semantic indexing of public policies' metadata (Fig. 2), while CKAN, the Comprehensive Knowledge Archive Network [15] and the world's leading open-source data portal platform, indexes the physical policy documents. Within the Gov-DAF knowledge base building pipeline, we extended CPPV with the public policy analytics classes (cppv-ext:AnalyticalAspect, :Keyword) and properties (cppv-ext:type, :occurrence_count, :extends, :composed_of) as the Gov-DAF knowledge base elements that enable opinion harvesting and analysis in later phases (Table 1).
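A hypothetical sketch of the cppv-ext extension expressed as RDF with rdflib follows. Only the class and property names come from the paper; the namespace URI and the example instance are invented for illustration.

```python
# Hypothetical RDF rendering of the cppv-ext extension using rdflib.
# The namespace URI and the keyword instance are assumptions.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CPPV_EXT = Namespace("http://example.org/cppv-ext#")  # assumed URI

g = Graph()
g.bind("cppv-ext", CPPV_EXT)

# Extension classes for public policy analytics.
g.add((CPPV_EXT.AnalyticalAspect, RDF.type, RDFS.Class))
g.add((CPPV_EXT.Keyword, RDF.type, RDFS.Class))

# Extension properties.
for prop in ("type", "occurrence_count", "extends", "composed_of"):
    g.add((CPPV_EXT[prop], RDF.type, RDF.Property))

# Example instance: one extracted keyword with its occurrence count.
kw = CPPV_EXT["kw_housing_tax"]
g.add((kw, RDF.type, CPPV_EXT.Keyword))
g.add((kw, CPPV_EXT.occurrence_count, Literal(17)))

print(g.serialize(format="turtle"))
```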
NER [16] and LDA [17]
To populate the Gov-DAF knowledge base with keywords extracted from public policies, we applied two text analysis methods and measured their accuracy in our usage domain (which does not necessarily indicate their accuracy in other domains); this is discussed in section 3. The first method is Named Entity Recognition using Stanford NER [16], which Gov-DAF uses to extract the persons, places, and organizations that compose the public policy. The second method is Topic Modeling using the Mallet implementation of LDA, Latent Dirichlet Allocation [17]. Here Gov-DAF applies Mallet LDA to cluster the keywords composing the public policy into a topic vector; NER is then applied to discover keyword types, with the possibility of applying the Stanford Entity Resolution Framework [18].
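The paper uses the Java Stanford NER; as a convenient stand-in, the sketch below uses stanza, Stanford NLP's Python package, to tag persons, places, and organizations in a policy sentence. The sentence is an invented example.

```python
# Entity extraction with stanza as a stand-in for the Java Stanford NER.
import stanza

stanza.download("en")                                  # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,ner")

policy_text = "The Ministry of Finance will apply the housing tax in Dublin."
doc = nlp(policy_text)
for ent in doc.ents:
    print(ent.text, ent.type)                          # e.g. "Dublin  GPE"
```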
DISCO [19], [20]
Semantic relatedness is used in two cases in this work. The first is when the NER methodology is applied: Gov-DAF uses DISCO (extracting DIStributionally related words using CO-occurrences) [19], [20] to generate branch keywords (Fig. 4). The second is just before opinion harvesting, sentiment analysis, and satisfaction estimation (Table 1): Gov-DAF first relates keywords back to particular public policy aspects for deeper policy analysis. In other words, for multi-level satisfaction estimation, Gov-DAF applies semantic relatedness via DISCO to relate extracted public policy keywords to public policy aspects, using the relatedness-based algorithm illustrated in [12], [13].
Gov-DAF follows both the initial approach, the Named Entity Recognition Based Methodology [12], [13], and the new enhanced approach, the Topic Modeling Based Methodology, for designing the Gov-DAF knowledge base building pipeline. The NER-based methodology, presented in our earlier work, uses Stanford NER for public policy text analysis (Fig. 4). The Topic Modeling based methodology alters the text analysis phases of the initial pipeline, using Mallet LDA topic modeling to recognize and cluster keywords (Fig. 5). The pipeline therefore has two implementations, whose components are presented below:
A. Approach-independent Gov-DAF knowledge base building pipeline stages
1) Input Public Policy
Gov-DAF's inputs, or assets, are the public policies to be analyzed, along with the public policy analytical aspects, i.e. the public policy objectives entered by domain experts. A policy can be either an old public policy under analysis or a new public policy under discussion before introduction. Policy documents can be input in many formats, e.g. PDF. The input policy is split into sentences to suit Mallet LDA analysis under the Topic Modeling methodology. Public policy aspects reflect the main components of a policy; they are defined by domain experts and confirmed by the user during the input phase. The raw policy text, its sentences, and the aspects vector are the outputs of this phase.
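A minimal sketch of this input phase follows: extract raw text from a PDF policy document and split it into sentences. The pypdf/nltk tooling and the file name are illustrative choices, not the paper's stated implementation.

```python
# Input phase sketch: PDF text extraction plus sentence splitting.
import nltk
from pypdf import PdfReader

nltk.download("punkt", quiet=True)  # sentence tokenizer model

def load_policy(path):
    raw_text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    sentences = nltk.sent_tokenize(raw_text)
    return raw_text, sentences

raw, sents = load_policy("uk_bioenergy_strategy.pdf")  # assumed file name
aspects = ["Housing tax", "Tax rates"]                 # entered by domain experts
```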
2) Relate Public Policy Keywords to Aspects
Public policy aspects are connected to a set of origin and branch keywords (or topic clusters) that is strongly descriptive of the aspect. The keywords are used for both citizen opinion collection and analysis. The process is automated using a semantic similarity score approach: the semantic relatedness score between each public policy aspect and each keyword is computed, and the top related keywords are nominated for every aspect. The quality of this process is measured and reported in section 3.
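The paper scores aspect-keyword pairs with DISCO; since DISCO is a Java library, the sketch below substitutes pretrained GloVe vectors via gensim (an assumption, not the paper's tooling) and nominates the top related keywords for each aspect.

```python
# Aspect-keyword relating via semantic relatedness, with gensim word vectors
# standing in for DISCO.
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-50")  # small pretrained vectors

def top_keywords_for_aspect(aspect, keywords, k=3):
    aspect_tokens = aspect.lower().split()
    scored = [
        (kw, kv.n_similarity(aspect_tokens, kw.lower().split()))
        for kw in keywords
        if all(t in kv for t in kw.lower().split())
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

keywords = ["housing tax", "duty free", "interest rates", "school meals"]
print(top_keywords_for_aspect("tax rates", keywords))
```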
3) Extend CPPV and Populate Gov-DAF Knowledge Base
The CPPV is extended with a public policy analytics extension (cppv-ext) containing the class cppv-ext:Keyword; keywords are extracted from the public policy using either the NER method or the Topic Modeling method. It is also extended with the properties cppv-ext:type and cppv-ext:occurrence_count, as shown in Fig. 3 and Table 2. The Gov-DAF knowledge base is then populated with the public policy aspects, i.e. the objectives defined by domain experts and Gov-DAF users.
B. Named Entity Recognition Methodology-dependent pipeline stages (Fig. 5)
1) Extract Origin Keywords
Public policy text contains places, persons, and organizations within its sentences; all of these entities are candidates for being the main public policy actors. Using Stanford NER, entities are recognized and tagged with their types for possible multidimensional correlation analysis. Redundant occurrences of each keyword are then filtered while the occurrence count is retained as a significance weight; keywords are sorted by this weight, and the top 20 are kept as candidate main policy keywords.
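A minimal sketch of this filtering step follows: deduplicate the NER-tagged entities, keep each entity's occurrence count as its significance weight, and retain the top 20. The tagged-entity list is an invented placeholder.

```python
# Origin-keyword filtering and ranking by occurrence count.
from collections import Counter

tagged_entities = [
    ("Ministry of Finance", "ORGANIZATION"),
    ("Dublin", "LOCATION"),
    ("Ministry of Finance", "ORGANIZATION"),
]

counts = Counter(tagged_entities)  # occurrence count per (entity, type) pair
origin_keywords = [
    {"keyword": text, "type": etype, "weight": count}
    for (text, etype), count in counts.most_common(20)
]
print(origin_keywords)
```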
2) Generate Branch Keywords
After the origin keyword recognition and filtering process, a keyword network exploration process starts, using the DISCO library to extract distributionally related words using co-occurrences. DISCO is founded on the similar-word clustering algorithm introduced in [21]. The keyword network is then relaxed and expanded by nominating the top 20 branch keywords related to every origin keyword, according to the semantic relatedness scores calculated by the DISCO library.
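Branch keywords in the paper come from DISCO's co-occurrence-based relatedness; the sketch below approximates that with gensim's most_similar over pretrained GloVe vectors (again a stand-in, not DISCO itself), taking the top 20 neighbours of each origin keyword.

```python
# Branch keyword generation: top-20 distributionally related words per
# origin keyword, with GloVe vectors standing in for DISCO.
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-50")

def branch_keywords(origin_keywords, topn=20):
    return {
        kw: [word for word, score in kv.most_similar(kw, topn=topn)]
        for kw in origin_keywords
        if kw in kv
    }

print(branch_keywords(["tax", "housing"]))
```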
C. Topic Modeling Methodology-dependent pipeline stages (Fig. 6)
1) Detect Topics Clusters
Applying Mallet LDA topic modeling [17] over the public policy sentences extracted in phase one allows for an enhanced approach to public policy text analysis, keyword extraction, and clustering.
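The paper uses Mallet's LDA implementation; gensim's LdaModel is used in the sketch below as an accessible stand-in to cluster policy sentences into topic keyword groups. The sentences and topic count are illustrative.

```python
# Topic clustering of policy sentences with gensim LDA, standing in for
# Mallet LDA.
from gensim import corpora
from gensim.models import LdaModel

sentences = [
    "the housing tax rate applies to rented properties",
    "duty free areas attract foreign investment",
    "investment incentives reduce the effective tax rate",
]
tokens = [s.split() for s in sentences]

dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(t) for t in tokens]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)

for topic_id, words in lda.show_topics(num_words=4):
    print(topic_id, words)  # each topic is a weighted keyword cluster
```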
To test the accuracy of the Gov-DAF knowledge base building pipeline, the aforementioned methodologies were applied to two public policy sample documents (the UK Bioenergy Strategy and the Irish National Plan for Equity of Access to Higher Education [22], [23]) for keyword extraction and keyword clustering around manually extracted public policy aspects, i.e. objectives. The keywords extracted by both methods were related back to objectives using human evaluation, to test both the entity recognition accuracy of Stanford NER and Mallet LDA and the classification accuracy of DISCO.
The following performance measures from the machine learning domain [24] were calculated to assess the results (Fig. 6): accuracy, precision, recall, F-measure, and error rate.
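These measures can be computed with scikit-learn, as in the minimal sketch below over a made-up human evaluation (1 = keyword correctly related to its objective, 0 = not). The label vectors are placeholders, not the paper's data.

```python
# Performance measures for the keyword-to-objective evaluation.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 1, 0, 1, 0, 1, 0, 0]  # human judgement per extracted keyword
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # pipeline's aspect assignment

accuracy = accuracy_score(y_true, y_pred)
print("accuracy :", accuracy)
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("error    :", 1 - accuracy)
```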
Based on those results, the Gov-DAF knowledge base pipeline will adopt and enhance the Topic Modeling Methodology in the coming phases of this research.
Gov-DAF was proposed in [12], [13]; it addresses the lack of tools to support critical government decision making in which citizen opinions expressed on social media constitute a critical input. In this paper, the Gov-DAF knowledge base component, the "Gov-DAF knowledge base building pipeline", was presented and implemented using two methods: the Named Entity Recognition Based Methodology and the Topic Modeling Based Methodology. The implementation details of both methods are reported and their accuracy measures presented. Based on the accuracy results, Gov-DAF will adopt and enhance the Topic Modeling Based Methodology for the next phases of implementation, namely opinion harvesting, opinion mining, and satisfaction estimation.