Being out of touch with a loved one is concerning, and not hearing from someone you care about is terrifying. Cases of missing people have been reported for many years, and most searches turn out unsuccessful. In order to quickly reunite families and friends with their missing loved ones, a solution for effectively searching for missing people is presented. To evaluate this solution, an F1 score test was simulated using 20 scenarios, attaining an impressive score of 0.72. The study concludes that we need to leverage mobile-based technology to devise a more efficient method of finding missing persons quickly and easily.
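For context, the F1 score combines precision and recall into a single figure. A minimal sketch of the computation follows; the counts are hypothetical, chosen only to illustrate how a score near 0.72 can arise over 20 scenarios, and are not the study's actual data:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw match counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical outcome: 13 correct matches, 6 false matches, 4 misses.
print(round(f1_score(tp=13, fp=6, fn=4), 2))  # → 0.72
```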
Online Data Preprocessing: A Case Study Approach – IJECEIAES
Besides Internet search and e-mail, social networking is now one of the three most popular uses of the Internet. A tremendous number of volunteers every day write articles and share photos, videos, and links at a scope and scale never imagined before. However, because social network data are huge and come from heterogeneous sources, the data are highly susceptible to inconsistency, redundancy, noise, and loss. For data scientists, preparing the data and getting it into a standard format is critical, because data quality directly affects the performance of the mining algorithms applied next. Low-quality data will certainly limit the analysis and lower the quality of mining results. To this end, the goal of this study is to provide an overview of the different phases involved in data preprocessing, with a focus on social network data. As a case study, we show how we applied preprocessing to the data that we collected for Malaysian Flight MH370, which disappeared in 2014.
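The preprocessing concerns the abstract names (noise, redundancy, inconsistent formats) can be sketched for short social-media posts. The normalization rules below are illustrative assumptions, not the paper's actual pipeline:

```python
import re

def preprocess(posts):
    """Clean raw posts: normalize case, strip noise, deduplicate."""
    seen, cleaned = set(), []
    for text in posts:
        text = text.lower()
        text = re.sub(r"http\S+", "", text)        # drop URLs (noise)
        text = re.sub(r"[^a-z0-9#@ ]", " ", text)  # drop punctuation
        text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
        if text and text not in seen:              # remove redundant copies
            seen.add(text)
            cleaned.append(text)
    return cleaned

print(preprocess(["RT: Any news on #MH370?? http://t.co/x",
                  "rt: any news on #mh370??"]))
# → ['rt any news on #mh370']
```

The two raw posts differ only in casing and an appended URL, so after normalization the second is detected as a duplicate and dropped.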
The Design of an Online Social Network Site for Emergency Management: A One-S... – guest636475b
Web 2.0 is creating new opportunities for communication and collaboration. Part of this explosion is the increase in popularity and use of Social Network Sites (SNSs) for general and domain-specific use. In the emergency domain there are a number of websites, wikis, SNSs, etc. but they stand as silos in the field, unable to allow for cross-site collaboration. In this paper we describe ongoing design science research to develop and refine guiding principles for developing an SNS that will bring together emergency domain professionals in a “one-stop-shop.” We surveyed emergency professionals who study crisis information systems, to ascertain potential functionalities of such an SNS. Preliminary results suggest that there is a need for the envisioned SNS. Future research will continue to explore possible solutions to issues addressed in this paper.
Chung-Jui LAI - Polarization of Political Opinion by News Media – REVULN
In the 2016 US election, social media played a vital role in shaping public opinion as expressed by the news media, creating the phenomenon of polarization in the United States. Because social media gives people the ability to follow, share, post, and comment on everything, political opinions spread easily and quickly by news agencies are producing a significantly polarized populace.
Consequently, it is very important to understand the language differences on Twitter and to figure out how propaganda spread by different political parties influences, or perhaps misleads, public opinion. This talk introduces the relationship among social media, public opinion, and news media, then suggests a method for collecting tweets from Twitter and conducting sentiment and logistic regression analysis on them. Furthermore, the talk points out the relationship between polarization and the topic of this conference (fake news, disinformation, and propaganda).
Main points:
- situation in Taiwan
- research on fake news
- methods for fighting fake news
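The tweet-analysis step described above can be sketched end to end. The toy bag-of-words logistic regression below (standard library only, with invented example tweets) stands in for whatever feature pipeline the talk actually uses:

```python
import math
from collections import Counter

def train_logreg(texts, labels, epochs=200, lr=0.5):
    """Train a tiny bag-of-words logistic regression with plain SGD."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    rows = [Counter(t.lower().split()) for t in texts]
    X = [[c[w] for w in vocab] for c in rows]
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for x, y in zip(X, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    index = {word: i for i, word in enumerate(vocab)}
    def predict(text):
        counts = Counter(text.lower().split())
        z = b + sum(w[index[t]] * n for t, n in counts.items() if t in index)
        return 1 if z > 0 else 0
    return predict

# Toy labeled tweets: 1 = party A, 0 = party B (invented examples).
predict = train_logreg(
    ["lower taxes now", "cut taxes and spending",
     "protect public healthcare", "fund public schools"],
    [1, 1, 0, 0])
```

In practice sentiment scores would typically be added as extra features alongside the word counts; the sketch keeps only the word counts for brevity.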
Social media visualization for crisis management – Mustafa Alkhunni
PhD proposal on the use of data mining and information visualization techniques to manage and guide people in times of crisis.
Under the supervision of Dr. Robert Johnathan of Bangor University.
MSc. Mustafa ALKHUNNI
How does fake news spread? Understanding pathways of disinformation spread thro... – Araz Taeihagh
What are the pathways for spreading disinformation on social media platforms? This article addresses this question by collecting, categorising, and situating an extensive body of research on how application programming interfaces (APIs) provided by social media platforms facilitate the spread of disinformation. We first examine the landscape of official social media APIs, then perform quantitative research on the open-source code repositories GitHub and GitLab to understand the usage patterns of these APIs. By inspecting the code repositories, we classify developers' usage of the APIs as official or unofficial, and further develop a four-stage framework characterising pathways for spreading disinformation on social media platforms. We then highlight how the stages in the framework were activated during the 2016 US Presidential Elections, before providing policy recommendations on issues relating to access to APIs, algorithmic content, and advertisements, and suggesting rapid responses to coordinated campaigns, the development of collaborative and participatory approaches, and government stewardship in the regulation of social media platforms.
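The official-versus-unofficial API usage classification could be approximated by inspecting a repository's imports. This is a heuristic sketch only: the SDK and library names below are illustrative assumptions, not the article's actual criteria:

```python
# Assumed names for illustration: tweepy/praw as official SDKs,
# selenium/bs4 as scraping (unofficial) libraries.
OFFICIAL_SDKS = {"tweepy", "facebook", "praw"}
UNOFFICIAL_LIBS = {"selenium", "bs4", "requests_html"}

def classify_usage(source_code):
    """Label a Python file's platform access as official/unofficial/unknown."""
    imports = {line.split()[1].split(".")[0]
               for line in source_code.splitlines()
               if line.startswith(("import ", "from "))}
    if imports & OFFICIAL_SDKS:
        return "official"
    if imports & UNOFFICIAL_LIBS:
        return "unofficial"
    return "unknown"

print(classify_usage("import tweepy\napi = tweepy.API(auth)"))  # → official
```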
Comprehensive Social Media Security Analysis & XKeyscore Espionage Technology – CSCJournals
Social networks offer many services to users for sharing activities, events, and ideas. Many attacks can happen to social networking websites because of the trust users place in them. This paper discusses cyber threats: we study the types of cyber threats, classify them, and give some suggestions to protect social networking websites from a variety of attacks. Moreover, we give some anti-threat strategies along with future trends.
Social Media, Crisis Communication and Emergency Management: Leveraging Web 2... – Connie White
Detailing guidelines and safe practices for using social media across a range of emergency management applications, Social Media, Crisis Communication, and Emergency Management: Leveraging Web 2.0 Technologies supplies cutting-edge methods to help you inform the public, reduce information overload, and, ultimately, save more lives.
Introduces collaborative mapping tools that can be customized to your needs
Explores free and open-source disaster management systems, such as Sahana and Ushahidi
Covers freely available social media technologies, including Facebook, Twitter, and YouTube
Privacy Perspectives, Requirements and Design trade-offs of Encounter-based ... – AM Publications
Encounter-based social networks link users who share a location at the same time, as opposed to the traditional social network model of linking users who have an offline friendship. Privacy is one of the friction points that emerge when communications are mediated in encounter-based social networks. Different communities of computer science researchers have framed the 'Online Social Network privacy problem' as one of surveillance, institutional, or social privacy. In this article, we first provide an introduction to the surveillance, social, and institutional privacy perspectives. We then explore the differences between these approaches in order to understand their complementarity. We also explore the privacy requirements for encounter-based social networks, and provide an overview of the privacy guarantees and feasibility of SMILE, as well as its drawbacks in meeting certain requirements.
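As a toy illustration of the encounter idea (and emphatically not SMILE's actual protocol, which provides stronger guarantees), two users who observe the same coarse location cell during the same time slot can derive a common key without exchanging identities:

```python
import hashlib

def encounter_key(cell_id, time_slot):
    """Derive a shared key from a coarse location cell and time slot.
    Simplified sketch of the encounter concept only; real protocols such as
    SMILE add secret material so outsiders cannot replay the derivation."""
    material = f"{cell_id}|{time_slot}".encode()
    return hashlib.sha256(material).hexdigest()

alice = encounter_key("cell-4821", "2024-05-01T14:00")
bob = encounter_key("cell-4821", "2024-05-01T14:00")
print(alice == bob)  # → True: same place and time, same key
```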
Understanding Online Social Harms: Examples of Harassment and Radicalization – Amit Sheth
https://dbsec2019.cse.sc.edu/Keynote.html
Abstract: As social media permeates our daily life, there has been a sharp rise in the misuse of social media, affecting our society at large. Specifically, harassment and radicalization have become two major problems on social media platforms, with significant implications for the well-being of individuals as well as communities. A 2017 Pew Research survey on online harassment found that 66% of adult Internet users have observed online harassment and 41% have personally experienced it. Nearly 18% of Americans have faced severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment, or stalking. Moreover, malicious organizations (e.g., terrorist groups, or white nationalists not classified legally as terrorists but as groups with extreme ideologies) have been using social media to share their propaganda and misinformation to persuade individuals and eventually recruit them to propagate their ideology. These communications related to harassment and radicalization are complex in their language and contextual characteristics, making recognition of such narratives challenging for researchers as well as social media companies. As most existing approaches fail to capture fundamental nuances in the language of these communications, two prominent challenges have emerged: ambiguity and sparsity. Bottom-up analysis at the data level alone has been unsuccessful in revealing the actual meaning of the content. Considering the significant sensitivity of these problems and their implications at individual and community levels, a potential solution requires reliable algorithms for modeling such communications.
Our approach to understanding communications between source and target requires deciphering the unique language, semantic and contextual characteristics, including sentiment, emotion, and intention. This context-aware and knowledge-enhanced computational approach to the analysis of these narratives breaks down this long-running and complex process into contextual building blocks that acknowledge inherent ambiguity and sparsity. Based on prior empirical and qualitative research in social sciences, particularly cognitive psychology, and political science, we model this process using a combination of contextual dimensions -- e.g., for Islamist radicalization: religion, ideology, and hate -- each elucidating a degree of radicalization and highlighting independent features to render them computationally accessible.
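A crude sketch of scoring text along such contextual dimensions follows. The lexicons here are tiny invented stand-ins, and the actual approach described above is knowledge-enhanced rather than simple keyword counting:

```python
# Hypothetical mini-lexicons for the three dimensions named in the abstract.
DIMENSIONS = {
    "religion": {"faith", "scripture", "believers"},
    "ideology": {"caliphate", "regime", "revolution"},
    "hate": {"enemy", "traitors", "destroy"},
}

def dimension_scores(text):
    """Fraction of tokens matching each contextual dimension's lexicon."""
    words = text.lower().split()
    return {dim: sum(w in lex for w in words) / max(len(words), 1)
            for dim, lex in DIMENSIONS.items()}

scores = dimension_scores("the believers must destroy the enemy regime")
```

A downstream model could then combine the per-dimension scores into an overall degree of radicalization, as the abstract suggests.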
DPSY 6121 Wk2 ASSGN: Electronic Media Influence Part 1 – eckchela
This is a Walden University course (DPSY 6121 and 8121), Electronic Media Influence Part 1 and 2. It is written in APA format, includes references, and has been graded (A) by Dr. Elizabeth Essel: "Nice job on Part 1 of this assignment, Orlanda. You nicely discussed how the media you chose impacted yourself and how it might impact you as a professional. You also did a very nice job highlighting some important milestones about the media you chose. For part 2, you did a great job discussing how some of the theories we learned about in our class this week could explain the behaviors you discussed in part 1. Overall, you included some really good sources to support your paper. Great job!" Note from Orlanda Haynes: higher-education assignments are usually submitted to Turnitin, so remember to paraphrase. Let us begin.
Untapped Potential: Evaluating State Emergency Management Agency Web Sites 2... – Dawn Dawson
The 2007-08 study and survey that sparked my interest in SMEM and my passion for preparedness and public safety. It was clear to me that communicating virtually through various platforms would open communication with the public and could reduce, if not eliminate, injuries and fatalities. Analysis began in January-March and the survey in May; as Marketing Coordinator for C.E.R.T. for the City of Independence/Eastern Jackson County (EJC/EOC, Fire Station #1, Independence, MO), I joined Twitter on 13 June 2008 to show my EM how it could be used . . .
Helping Crisis Responders Find the Informative Needle in the Tweet Haystack – COMRADES project
Leon Derczynski – University of Sheffield, Kenny Meesters – TU Delft, Kalina Bontcheva – University of Sheffield, Diana Maynard – University of Sheffield
WiPe Paper – Social Media Studies
Proceedings of the 15th ISCRAM Conference – Rochester, NY, USA May 2018
Public Health Crisis Analytics for Gender Violence – Hemant Purohit
Research-progress talk on the use of data analytics methods for one of the world's major public health crises, gender-based violence, and on campaign engagement in the initiatives of non-profit organizations.
Disasters 2.0: Real Time Collaboration: Documentation and Mapping – Connie White
Objective 1: Cover the free technologies that help EMs create real-time documents, spreadsheets, presentations, and forms online (Google Suite) for many people to use collaboratively and simultaneously, as well as offline in a traditional single-user sense (OpenWord)
Objective 2: Demonstrate the free, user-friendly, and very powerful mapping tools for response efforts: web-based collaborative mapping tools that can be used in advance or in an ad hoc fashion, including the geolocation devices that can be leveraged (WikiMapia, Open Street Maps, etc.)
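Collaborative response mapping ultimately rests on distances between reported geolocations. A self-contained haversine sketch (tool-agnostic, not specific to WikiMapia or Open Street Maps; the coordinates are invented):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Distance from a reported incident to a staging area (hypothetical points):
print(round(haversine_km(43.16, -77.61, 43.10, -77.50), 1))
```

Sorting candidate resources by this distance is the core of a "nearest responder/shelter" view on a collaborative map.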
GRBN Trust and Personal Data Survey Report - Part 2 - Regions and countries -... – Andrew Cannon
The report deep-dives into the results from GRBN's 24-country survey on Trust & Personal Data, detailing the findings by region (Americas, APAC, and Europe) and country.
GRBN Trust and Personal Data Survey report - Part 1 - Concern, familiarity, t... – Andrew Cannon
A detailed report on the results from GRBN's 24-country global survey on the issue of Trust & Personal Data. The report dives into how the level of familiarity with the issue, as well as the level of concern about the abuse of personal data, varies across the globe. It compares how trustworthy people consider different types of public and private organisations to be, and looks at how sensitive people consider different types of personal data to be.
1) With Modern Surveillance technologies the government has the .docx – teresehearn
1) With modern surveillance technologies, the government has the option to know its people by using the latest technology. The data relating to the activities and interests of people is gathered from information and communication technology, which has become people's closest partner, including their use of the internet. The source further explains how the government can keep a check on its people while maintaining their privacy level, along with contextual integrity. The government uses satellite monitoring to get locational data, and this information can be used by the police department in the investigation of crimes. The source then turns to the application of data in this modern era, where behavioral data can become a source of revenue and humans are considered part of this process rather than the endpoint of the revenue-making process. It first explains that Google, which is considered a safe and secure website, also holds users' data, and this data can be handed over to authorities if needed. The source explains how Google violates the privacy of millions of people for the collection of big data; for example, it retains browsing histories and takes photos of people's houses without even asking permission.
Annotated Bibliography
Source #1
Giroux, H. A. (2015). Totalitarian paranoia in the post-Orwellian surveillance state. Cultural Studies, 29(2), 108-140.
The source "Totalitarian paranoia in the post-Orwellian surveillance state" (Giroux, 2015) aims to uncover the threats to public privacy arising from government spying. The source then explains the impacts of surveillance, saying that people are deprived of their freedom of thinking and speech because they know they are being tracked. It says that the latest information and communication technology devices, like microphones, the internet, video, cameras, and text messages, provide more facility to surveillance organizations than to customers. Private and public space is easily violated, and even third parties keep records of a person's shopping choices and personal messages from social media. Finally, it says that we are living in a surveillance culture led by a surveillance state.
This source changed my research approach and provided me with multiple valid points that I used later in my paper, for instance, the use of biometric bracelets that track the attention of students sitting in a hall. It clarified my position that this surveillance is against freedom and democracy, which give people the right to keep their things private. It helped me explain that the NSA is a threat to privacy and freedom, and that companies can spy on any customer or individual because they run their own data-mining setups.
Source #2
Schaefer, A. T., & Claridge-Chang, A. (2012). The surveillance state of behavioral automation. Current Opinion in Neurobiology, 22(1), 170-176.
The source "The surveillance state of behavioral automation" aims to explain the latest uses of behavioral data. It says that even complex behaviors can be analyzed and observed.
1) With Modern Surveillance technologies the government has the .docxaulasnilda
1) With Modern Surveillance technologies the government has the option to know its people by using the latest technology. The data relating to activities and interests of people is gathered from Information and Communication Technology, which has become the closest partner of people, including the use of the internet. It further explains how the government can keep a check on its people, while maintaining their privacy level, along with contextual integrity. The government makes satellite monitoring to get locational data. This information can be used in the Police department for the investigation of crimes. the application of data in this modern era, where behavioral data can become a source of revenue, and humans are considered part of this process and not the endpoint of revenue making process. It first explains the fact that Google which is considered a safe and secure website also holds user’s data and this data can be handed over to authorities if needed. The source explains how Google is violating the privacy of millions of people for the collection of big data, like they retain browsing histories, and takes photos of people’s houses without even asking permission.
Annotated Bibliography
Source #1
Giroux, H. A. (2015). Totalitarian paranoia in the post-Orwellian surveillance state.
Cultural Studies
,
29
(2), 108-140.
The source “Totalitarian paranoia in the past-Orwellian surveillance state” (Giroux, 2015) objects to discover the public privacy threats arise by government spying. The source then explains the impacts of surveillance and says that people are deprived of their freedom of thinking and speech because of the institution of being tracked. It says that the latest information and communication technology devices like microphones, internet, videos, cameras, and text messages provide more facility to surveillance organizations than customers. Private and public space is easily violated and even a third party keeps records of a person’s shopping choices and personal messages from social media. Finally it says that we are leading a surveillance culture by a surveillance state.
This source changed my research approach and provided multiple valid points that I used later in my paper, for instance, the use of biometric bracelets that track the attention of students sitting in a lecture hall. It clarified my position that this surveillance is against freedom and democracy, which give people the right to keep their affairs private. It helped me explain that the NSA is a threat to privacy and freedom, and that companies can spy on any customer or individual because they run their own data-mining operations.
Source #2
Schaefer, A. T., & Claridge-Chang, A. (2012). The surveillance state of behavioral automation. Current Opinion in Neurobiology, 22(1), 170-176.
The source “The surveillance state of behavioral automation” aims to explain the latest uses of behavioral data. It says that even complex behaviors can be analyzed and observed.
ALBAY EMERGENCY RESPONSE AND REPORT TOOL (ALERRT)
Resilient public alert and warning tools are essential to save lives and protect property during national, regional, and local emergencies. Immediate emergency alerts have become a priority for both national and local government. The Provincial Government of Albay aims to become the most liveable province of the Philippines, known for good education, good healthcare, and a good environment where people are healthy, happy, employed, and live to their full potential. To achieve this goal, disaster risk reduction and climate change adaptation must be well anchored so that the province can move toward shared socioeconomic advancement. Supporting this vision, this study focuses on the design and development of a mobile-based Albay Emergency Reporting and Response Tool (ALERRT), a resilient form of emergency alert notification that lets concerned citizens report emergencies, accidents, and concerns requiring an immediate response from the relevant government sector.
Scenario: You are an employee at D&B Investigations, a firm that c.docx
Scenario
You are an employee at D&B Investigations, a firm that contracts with individuals, companies, and government agencies to conduct computer forensics investigations. D&B employees are expected to observe the following tenets, which the company views as the foundation for its success:
· Give concerted attention to clients’ needs and concerns.
· Follow proper procedures and stay informed about legal issues.
· Maintain the necessary skill set to apply effective investigative techniques using the latest technologies.
Your manager has just scheduled a meeting with an important prospective client, and she has asked you to be part of the team that is preparing for the meeting. The prospective client is Brendan Oliver, a well-known celebrity. Last night, Mr. Oliver’s public relations team discovered that someone obtained three photos that were shot on his smartphone, and tried to sell the photos to the media. Due to the sensitive nature of the photos, Mr. Oliver and his team have not yet contacted law enforcement. They would like to know if D&B can provide any guidance or support related to the investigation—or, at the very least, if D&B can help them prevent similar incidents from occurring in the future. At this time, they do not know how the photos were acquired. The public relations team is wondering if a friend, family member, or employee could have gained direct access to Mr. Oliver’s phone and obtained the photos that way, although the phone is usually locked with a passcode when Mr. Oliver is not using it. In addition, Mr. Oliver emailed the photos to one other person several months ago. He has not spoken with that person in the last few weeks, but he does not believe that person would have shared the photos with anyone else.
Your manager plans to use this initial meeting with Mr. Oliver and his public relations team to establish rapport, learn more about the case, and demonstrate the firm’s expertise. The company sees this as an opportunity to build future business, regardless of whether they are retained to help with the investigation of this case.
Tasks
To help the team prepare for the meeting, your manager asks you (and your colleagues) to consider and record your responses to the following questions:
· What is the nature of the alleged crime, and how does the nature of the crime influence a prospective investigation?
· Based on the limited information provided in the scenario, what is the rationale for launching an investigation that uses computer forensic activities? Would D&B and/or law enforcement need additional information in order to determine if they should proceed with an investigation? Why or why not?
· What would you share with the client about how investigators prepare for and conduct a computer forensics investigation? Identify three to five key points that are most relevant to this case.
· What sources of evidence would investigators likely examine in this case? Provide concrete examples and explain your rationale.
Ethical Implications of Social Media Data Mining by Police.docx
Ethical Implications of Social Media Data Mining by Police
University of Maryland University College (UMUC)
Group Epsilon
TABLE OF CONTENTS
Executive Summary
Introduction
History
Current Trends
Alternatives
Conclusion
References
EXECUTIVE SUMMARY
Social media is constantly inundated with posts that contain rich and timely information about events relevant to public safety
Social media can be used to track people as they move from one location to the next
Software can be deployed to identify key words like “gun”, “fight”, and “shoot” to flag posts that are indicative of danger and/or criminal activity
The goal of these programs and the partnership with law enforcement is to bypass the privacy options of the social media sites
Social media data mining has great potential to make policing more proactive – but is it legal? Ethical?
Authorities are already using social media, such as posts and comments, to store information and keep tabs on people
Facebook and Instagram oppose this effort
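The keyword-flagging idea described in these bullets can be sketched as a simple filter. The keyword set and the sample posts below are hypothetical illustrations, not any agency's actual software:

```python
# Minimal sketch of keyword-based post flagging. The keyword list and
# sample posts are hypothetical.
DANGER_KEYWORDS = {"gun", "fight", "shoot"}

def flag_post(text):
    """Return the danger keywords found in a post, if any."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & DANGER_KEYWORDS)

posts = [
    "big fight after the game tonight",
    "sunny day at the park",
]
# Keep only posts that triggered at least one keyword
flagged = [(p, hits) for p in posts if (hits := flag_post(p))]
```

Real deployments would of course need far more than exact-word matching (slang, misspellings, context), which is part of why the legality and ethics questions above are contested.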
INTRODUCTION
Is social media data mining ethical?
Should the government and law enforcement agencies be legally authorized to undermine resident privacy in an effort to prevent/deter crime?
What is the public’s reasonable expectation of privacy?
Is social data mining considered a warrantless search?
Social media posts are public – does that make it legal and ethical to monitor an individual’s posts over a period of time?
Law enforcement agencies should reveal what data they are collecting, how it is being collected, and what it is being used for
Public education/engagement about this effort should be mandatory!
Clear guidelines and regulations must be imposed on this process!
HISTORY
100 Representatives attended a Social Media networking workshop.
Included federal, state, and local governments, private sector, and news media (to share case studies for learning).
Two goals to help emergency management learn how to:
Better protect communities.
Improve communication during crisis situations.
Police have been increasingly using social media
76% to gain tips on crime
72% to monitor public
70% for intelligence gathering
HISTORY (cont.)
During the 2018 California fires, real-time updates on evacuations and affected areas were shared via social media outlets
This used to be done by TV and radio, which did not reach everyone
Amber Alerts are posted on Facebook and Twitter to increase exposure
The rate of jurors using social media during trials is astoundingly high
One juror was “friending” a female defendant and got out of jury duty
This results in numerous new trials and overturned verdicts
The Arkansas Supreme Court reversed a capital murder conviction because a juror repeatedly tweeted comments during the trial
Social Media posts can be loaded with useful data for policing. This data can assist law enforcement with:
Quicker Interventions – Crime Prevention, Incid.
To use Twitter to its fullest potential for public communications, emergency management, and other functions, law enforcement agencies must first understand the medium -- not only how citizens use it, but also how their peers use it both officially and unofficially. This study, a survey of 1,089 police and police-related Twitter accounts, used 25 different criteria to show how agencies and officers are using Twitter, where they can improve, and implications for their future use.
A large scale study of daily information needs captured in situ
The goal of this work is to provide a fundamental understanding of the daily information needs of people through a large-scale, in-depth, quantitative investigation. To this end, we have conducted one of the most comprehensive studies of information needs to date, spanning a 3-month period and involving more than 100 users. The study employed a contextual experience sampling method, a snippet-based diary technique using SMS technology, and an online Web diary to gather in situ insights into the types of needs that occur from day to day, how those needs are addressed, and how contextual, technological, and demographic factors impact on those needs. Our results not only complement earlier studies but also provide a new understanding of the intricacies of people’s daily information needs.
This web-based survey and theoretical research focuses mainly on the hazards that children are exposed to while surfing the digital world. It addresses the problem from the parents'/caregivers' perspective and tries to shed light on the best means of understanding and precaution. It is important for families to take all preventive measures to protect their kids from such hazards.
GIVING UP PRIVACY FOR SECURITY: A SURVEY ON PRIVACY TRADE-OFF DURING PANDEMIC...
While the COVID-19 pandemic continues to be as complex as ever, the collection and exchange of data in the light of fighting coronavirus poses a major challenge for privacy systems around the globe. The disease's size and magnitude are not uncommon, but it appears to be at the point of hysteria surrounding it. Consequently, in a very short time, extreme measures for dealing with the situation appear to have become the norm. Any such actions affect the privacy of individuals in particular. In some cases, there is intensive monitoring of the whole population, while the medical data of those diagnosed with the virus is commonly circulated through institutions and nations. This may well be in the interest of saving the world from a deadly disease, but is it appropriate and right? Although creative solutions have been implemented in many countries to address the issue, proponents of privacy are concerned that technologies will eventually erode privacy, while regulators and privacy supporters are worried about what kind of impact this could bring. While that tension has always been present, privacy has been thrown into sharp relief by the sheer urgency of containing an exponentially spreading virus. The essence of this dilemma indicates that establishing the right equilibrium will be the best solution. The jurisprudence concerning cases regarding the willingness of public officials to interfere with the constitutional right to privacy in the interests of national security or public health has repeatedly proven that a reasonable balance can be reached.
Text Mining in Digital Libraries using OKAPI BM25 Model
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not apply relevance ranking to retrieved results. This paper proposed the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study research design was used, and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval. Relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
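The probability-based ranking this abstract refers to can be sketched as follows. The toy corpus, the k1/b defaults, and the particular idf smoothing used here are illustrative assumptions, not the paper's implementation:

```python
import math

# Sketch of Okapi BM25 scoring over a toy tokenized corpus. k1 and b are the
# usual free parameters; the idf uses a common non-negative smoothing variant.
def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N          # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)     # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)                          # term frequency in doc
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [["library", "digital", "search"],
          ["search", "engine", "ranking"],
          ["cooking", "recipes"]]
ranked = sorted(corpus,
                key=lambda d: bm25_score(["search", "ranking"], d, corpus),
                reverse=True)
```

Documents matching more query terms score higher, with rarer terms weighted more heavily, which is what distinguishes BM25 ranking from an unranked Boolean match.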
Green Computing, eco trends, climate change, e-waste and eco-friendly
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in Nigeria
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life, and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and a more environment-friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for our country, and propose a series of action steps to develop these areas further. It is possible for Nigeria to achieve an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, in terms of size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility, the topology changes very frequently. This raises a number of technical challenges, including the stability of the network, so there is a need for cluster configurations that lead to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the more stable configuration in a realistic road scenario.
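The k-means clustering step mentioned above can be sketched in a few lines on 2D vehicle positions. The positions, the choice of k, and the plain Lloyd-style iteration are illustrative assumptions, not the paper's simulation setup:

```python
import random

# Toy sketch of grouping vehicle positions with k-means (Lloyd's algorithm).
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                            (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: recompute centroids (keep old one if cluster emptied)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups of vehicles on a road segment
vehicles = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centroids, clusters = kmeans(vehicles, k=2)
```

Varying k, as the paper describes, trades off cluster size against stability: fewer, larger clusters change membership more often as vehicles move.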
Optimum Location of DG Units Considering Operation Conditions
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. Among the various classification methods in common use, three were compared in this study on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three methods based on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, respectively. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
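The accuracy/precision/recall comparison used in this study can be sketched from a confusion matrix. The label vectors below are hypothetical, not the Pima data:

```python
# Sketch of computing accuracy, precision, and recall from predicted vs.
# true labels (1 = positive diabetes diagnosis). Labels are hypothetical.
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged, how many right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of positives, how many found
    return accuracy, precision, recall

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec = evaluate(y_true, y_pred)  # 0.75, 0.75, 0.75
```

In a medical screening setting, recall (missed positives) usually matters more than raw accuracy, which is why the study reports all three metrics rather than accuracy alone.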
Web Scraping for Estimating new Record from Source Site
Studies in the field of competitive intelligence and studies in the field of web scraping have a mutually symbiotic relationship. In today's information age, websites serve as a main data source. The research focus is on how to get data from websites and how to slow down the intensity of downloads. One problem is that website sources are autonomous, so the structure of their content is vulnerable to change at any time. Another is that the Snort intrusion detection system installed on the server can detect the crawler bot. The researchers therefore propose using the Mining Data Records (MDR) method together with exponential smoothing, so that the crawler adapts to changes in content structure and browses or fetches automatically, following the pattern of news occurrences. In tests with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, recall and precision values produced an average f-measure of 92.6%. The exponential smoothing estimation with α = 0.5 produced an MAE of 18.2 duplicate data records, slowing the fixed download/fetch schedule from 21.8 to 3.6 duplicate data records in the average time between news occurrences.
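Simple exponential smoothing, the estimator named above, can be sketched in a few lines. The series of new-records-per-fetch below is a hypothetical illustration of how a crawler might estimate the next fetch's yield; only the α = 0.5 setting comes from the abstract:

```python
# Sketch of simple exponential smoothing: each new observation is blended
# with the previous forecast, weighted by alpha.
def exponential_smoothing(series, alpha=0.5):
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

# Hypothetical counts of new records seen on recent fetches
new_records_per_fetch = [20, 24, 18, 22, 30]
next_estimate = exponential_smoothing(new_records_per_fetch)
```

A higher α reacts faster to recent bursts of news; a lower α smooths them out, which is the knob a polite crawler can use to pace its fetch schedule.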
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
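Two of the structure-based measures compared above, path length and Wu & Palmer, can be sketched over a tiny is-a hierarchy. The toy medical taxonomy below is a hypothetical stand-in for ICD-10, not data from the paper:

```python
# Toy sketch of path-length and Wu & Palmer similarity over a small
# hypothetical is-a hierarchy (child -> parent).
parent = {"fracture": "injury", "burn": "injury",
          "injury": "condition", "diabetes": "condition",
          "condition": "root"}

def ancestors(c):
    """Path from a concept up to the root, inclusive."""
    path = [c]
    while c in parent:
        c = parent[c]
        path.append(c)
    return path

def path_length(a, b):
    """Number of is-a edges between two concepts."""
    pa, pb = ancestors(a), ancestors(b)
    lcs = next(x for x in pa if x in pb)   # lowest common subsumer
    return pa.index(lcs) + pb.index(lcs)

def wu_palmer(a, b):
    """2*depth(LCS) / (depth(a) + depth(b)); root has depth 0."""
    pa, pb = ancestors(a), ancestors(b)
    lcs = next(x for x in pa if x in pb)
    depth = lambda c: len(ancestors(c)) - 1
    return 2 * depth(lcs) / (depth(a) + depth(b))
```

Path length rewards nearby concepts regardless of where they sit, while Wu & Palmer also rewards depth, so siblings deep in the hierarchy score higher than siblings near the root.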
Semantic Similarity Measures between Terms in the Biomedical Domain within f...
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery, and most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we measure the semantic similarity between two terms within a single ontology or across multiple ontologies, using ICD-10 “V1.0” as the primary source, and compare our results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in OpenStack Swift. Because Swift incurs heavy disk operations when querying metadata, the transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and then stores objects in volumes; these volumes are large files that are actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
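The merge-storage/index-storage split can be sketched as appending small objects into large volumes while keeping a key-value index of where each object landed. The class, names, and volume size below are hypothetical illustrations, not Swift's or the paper's actual code:

```python
# Minimal in-memory sketch of aggregated small-object storage: small writes
# are appended into large "volumes" and located via a key-value index.
VOLUME_LIMIT = 4 * 1024  # bytes per volume in this toy example

class AggregatedStore:
    def __init__(self):
        self.volumes = [bytearray()]   # the large files a real system stores
        self.index = {}                # object name -> (volume, offset, length)

    def put(self, name, data):
        if len(self.volumes[-1]) + len(data) > VOLUME_LIMIT:
            self.volumes.append(bytearray())   # roll over to a new volume
        vol = len(self.volumes) - 1
        self.index[name] = (vol, len(self.volumes[vol]), len(data))
        self.volumes[vol].extend(data)

    def get(self, name):
        vol, off, length = self.index[name]
        return bytes(self.volumes[vol][off:off + length])

store = AggregatedStore()
store.put("a.txt", b"hello")
store.put("b.txt", b"world")
```

The point of the design is that reading or writing many small objects touches a handful of large files plus one index lookup each, instead of one metadata-heavy file operation per object.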
Integrated System for Vehicle Clearance and Registration
Efficient management and control of a government's cash resources rely on its banking arrangements. Nigeria, like many low-income countries, employed fragmented systems for handling government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account; this would reduce borrowing costs, extend credit, and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy and enables proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) charged with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) was used, along with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and licence renewal. Vehicle owner information, customs duty information, plate number registration details, and so on can be efficiently retrieved from the system by any of the agencies without contacting another agency. Also, the number plate will no longer be the only means of vehicle identification, as is presently the case in Nigeria: on payment of duty, the unified system will automatically generate and assign a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle, and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...
The Supermarket Management System deals with the automation of buying and selling goods and services, including both sales and purchases of items. The Supermarket Management System project is to be developed with the objective of making the system reliable, easier, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*
Energy is a key component in a Wireless Sensor Network (WSN) [1]: the system cannot function without adequate power, and limited energy is one of the defining characteristics of wireless sensor networks [2]. A lot of research has been done on strategies to overcome this problem, one of which is clustering. A popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH) [3], in which clustering determines the Cluster Head (CH) that is then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique, which applies the Betweenness Centrality (BC) measure from Social Network Analysis in the setup phase. In the steady-state phase, a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment deployed 100 static nodes in a 100x100 area, with one Base Station at coordinates (50,50), and ran for 5000 rounds to establish the reliability of the system. The performance of the designed routing protocol was tested on network lifetime, throughput, and residual energy. The results show that BC-MBDA* outperforms LEACH. This is influenced by the way LEACH determines the CH dynamically: the CH changes in every data transmission process, which costs energy because a computation to determine the CH must be performed for every transmission. In contrast, in BC-MBDA* the CH is determined statically, which decreases energy usage.
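Betweenness centrality, the measure used above to pick cluster heads, counts how often a node lies on shortest paths between other nodes. A brute-force sketch on a tiny hypothetical sensor topology (not the paper's 100-node deployment, and far slower than Brandes' algorithm used in practice):

```python
from collections import deque
from itertools import combinations

# Brute-force betweenness centrality for small undirected graphs.
def shortest_paths(graph, s, t):
    """Enumerate all shortest paths from s to t via breadth-first search."""
    paths, best, queue = [], None, deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                      # all remaining paths are longer
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nb in graph[node]:
            if nb not in path:
                queue.append(path + [nb])
    return paths

def betweenness(graph):
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        for p in paths:
            for v in p[1:-1]:          # interior nodes only
                score[v] += 1 / len(paths)
    return score

# A star-like sensor field: node "c" relays for everyone else
graph = {"a": ["c"], "b": ["c"], "d": ["c"], "e": ["c"],
         "c": ["a", "b", "d", "e"]}
bc = betweenness(graph)
```

The hub "c" scores highest, which is the intuition behind choosing high-BC nodes as cluster heads: they already sit on most communication paths.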
Security in Software Defined Networks (SDN): Challenges and Research Opportun...
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data, and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of SDN's benefits over traditional network architectures, its security concerns and how they can be addressed in future research, and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...
Report handling in the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on a system administrator who manually reads every incoming report [3]. Manual reading can lead to errors in handling complaints [4]; when the data flow is huge and grows rapidly, it takes at least three days to prepare a confirmation, and the process is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity of an incoming query against archived documents. The authors employed a class-based indexing term weighting scheme and cosine similarity to analyse document similarity. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values were used as features for a K-Nearest Neighbour (K-NN) classifier. The optimum result was obtained with a 75%/25% training/test data split and the CoSimTFIDF feature, delivering a high accuracy of 84%; with k = 5, accuracy reached 84.12%.
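The core matching step, cosine similarity over TF-IDF vectors, can be sketched as follows. The complaint texts are hypothetical, and this uses plain TF-IDF rather than the paper's class-based CoSim variants:

```python
import math
from collections import Counter

# Sketch of matching an incoming complaint against an archive with
# cosine similarity over TF-IDF weights. Texts are hypothetical.
def tfidf_vectors(docs):
    N = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequencies
    # tf * (idf + 1), so terms present in every document still count
    return [{t: c * (math.log(N / df[t]) + 1)
             for t, c in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

archive = [["road", "broken", "pothole"], ["water", "supply", "cut"]]
query = ["pothole", "road", "damage"]
vecs = tfidf_vectors(archive + [query])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = scores.index(max(scores))   # most similar archived complaint
```

In the paper's pipeline these similarity scores then become the features a K-NN classifier votes over, rather than being used directly.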
Hangul Recognition Using Support Vector Machine
Recognizing Hangul images is more difficult than recognizing Latin script, as can be seen from the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. The current research creates a system to convert Hangul images into Latin text for use as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first is preprocessing, which includes binarization, segmentation through a connected-component labeling method, and thinning with the Zhang-Suen algorithm to reduce pattern information. The second extracts features from every image, with identification done through the chain code method. The third is recognition using a Support Vector Machine (SVM) with several kernels, applied to both letter images and Hangul word recognition. The letter set consists of 34 letters, each with 15 different patterns, giving 510 patterns in total, divided into 3 data scenarios. The highest result achieved is 94.7%, using the SVM polynomial and radial basis function kernels; the recognition rate is influenced by the amount of training data. The Hangul word recognition process applies to type 2 Hangul words with 6 different patterns, where the differences arise from changes of font type. The fonts chosen for training data are Batang, Dotum, Gaeul, Gulim, and Malgun Gothic, with Arial Unicode MS used for test data. The lowest accuracy, 69%, is achieved with the SVM radial basis function kernel, while the linear and polynomial kernels both give 72%.
Application of 3D Printing in Education
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...
In the underwater environment, routing mechanisms are used to retrieve information. A routing mechanism uses three to four types of nodes: sink nodes, deployed on the water surface to collect information; courier/super/AUV or dolphin nodes, powerful nodes deployed in mid-water to forward packets; ordinary nodes, also forwarders, deployed from the bottom to the surface of the water; and source nodes, deployed on the seabed to extract valuable information from the bottom of the sea. Underwater, the battery power of the nodes is limited, and it can be conserved through better selection of the routing algorithm. This paper surveys energy-efficient routing algorithms and their routing mechanisms for prolonging node battery life, and analyzes their performance to identify the route selection mechanisms that best prolong the battery power of the nodes.
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...
The design of routing algorithms faces many challenges in the underwater environment: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rates, limited battery power, underwater pressure, node mobility, 3D deployment and localization, and underwater obstacles (voids). This paper focuses on underwater voids, which affect the overall performance of the entire network. Most researchers have approached void removal through alternate path selection mechanisms, but the research still needs improvement. This paper also examines the architecture and operation of existing algorithms through their merits and demerits, and analytically evaluates their performance to identify the better approach for removing voids.
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Securing your Kubernetes cluster_ a step-by-step guide to success !
International Journal of Computer Applications Technology and Research
Volume 4, Issue 7, 507-511, 2015, ISSN: 2319-8656
www.ijcat.com
Using a Mobile Based Web Service to Search for Missing People – A Case Study Of Kenya
Thomas M. Omweri
School of Computing and Informatics,
University of Nairobi, Kenya
Andrew M. Kahonge
School of Computing and Informatics,
University of Nairobi, Kenya
Abstract: Being out of touch with a loved one is concerning, and not hearing from someone you care about is terrifying. Cases of missing people have been reported for many years, and most of the searches turn out unsuccessful. In order to quickly reunite families and friends with their missing loved ones, a solution for effectively searching for missing people is presented. In evaluating this solution, an F1 score test was simulated using 20 scenarios, from which a score of 0.72 was attained. The study concludes that we need to leverage mobile-based technology to devise a more efficient method of finding missing persons more easily and quickly.
Keywords: Mobile Application; Emergency Communication System (ECS); National Disaster Operation Centre (NDOC); United Nations (UN); Missing Persons Community of Interest (MPCI); International Committee of the Red Cross (ICRC)
1. INTRODUCTION
Reports of missing persons worldwide have increased significantly in recent years, from roughly 450,000 in 1990 to about 10,000,000 this year [1]. The increase was driven in part by the ever-growing population. The numbers indicate that more people are becoming victims each day: an astounding 2,300 Americans are reported missing every day, including both adults and children. Kenya, on the other hand, has at least 20,000 missing people on record every year. Of the reported number, 40% are located after a long period of search, while 30% are never traced. Only 30% of the reported victims are found within a reasonably short period of up to 3 months.
More recently, the abductions of children and adults have reawakened public concern about missing people. In most parts of the world, the police and non-governmental organizations working with missing people have recently reviewed their policies and are planning to improve coordination of their work [2]. People end up missing in different scenarios [3]. The circumstances that may lead adults or children to become missing are often complex and multi-layered. The missing phenomenon is best understood as a continuum in which a break in contact may be either intentional or unintentional. Some people make a conscious decision to leave, albeit often not in circumstances of their own choosing, while others may drift apart from family members over time. Some may never have intended to be missing, and indeed may not conceptualise their experience in these terms, while others may be forced apart through the actions of others. Causes include natural disasters, psychological complications, abduction and domestic conflicts [4].
2. PREVIOUS EFFORTS IN THE SEARCH FOR MISSING PEOPLE
Research concerning missing persons has been done in the past. A few of these research efforts have been successfully implemented, while others did not see the light of day for a number of reasons. These past efforts provide a lens through which we can view the phenomenon under study [5].
Advances in technology have had a major impact on tracing,
mainly by speeding up the transmission of information to
huge numbers of people, according to the International
Committee of the Red Cross (ICRC) Central Tracing Agency.
The ICRC started tracing in the late 1800s to alert families to the whereabouts and well-being of detained relatives. It currently relays hundreds of thousands of messages linking families back together and providing the peace of mind and closure so often absent in times of crisis. In 2009 alone, more than 253,000 messages were collected and delivered. Tracing assisted the repatriation of Congolese prisoners of war, and enabled nearly 200 video calls between detainees and their families [6].
Following Haiti’s earthquake in January 2010, Google
developed an open source web application, Person Finder,
which is a registry and message board for survivors, family
and friends to post and search for information about one
another's whereabouts following a natural disaster. Up until
now, following five natural disasters, the registry has
collected more than 200,000 victim names [7].
The Dutch government has also adopted a mobile phone
danger alert system that sends text messages to people who
could be affected by natural disasters or terrorist attacks. The
system, called Cell Broadcast, uses GSM technology to
identify cell phone users in a particular area [8]. If a disaster
occurs, a message is sent to all phones in the area, warning of
the danger.
3. METHODOLOGY
The goal was to develop a prototype of a solution for finding missing persons fast enough to find them safe and sound. The solution is an innovation leveraging the readily available mobile phone devices and the internet, an approach that has shown success in the past [9]. It is also intended to be a solution that takes into consideration privacy and the other legal constraints that surround missing people [10]. The block diagram in figure 1 below represents the conceptual model of the solution.
Figure 1. The Conceptual model
3.1 Reporting a Missing Person
In case a person goes missing, family members, friends or acquaintances should be able to report the case on the system by registering the person's details such as name, age, tribe, place of origin and description. This should give an exhaustive description of the missing person, to increase the chances of a reader spotting and reporting them.
3.2 Reporting a Found Person
A person who has been reported as missing can be reported as found if spotted anywhere. Anyone with leading information about the person in question should be able to post it on the system.
Even the missing people themselves can report their own cases if they are able to access the system and are in a condition that allows them to do so. The current location and contact details of the missing person should be provided, as well as those of the person reporting the case. The reporter may need to be contacted for further details.
3.3 Sending Email Notification Alerts
In case there is a match between a missing and a found person, the system sends email alerts to those who have reported missing cases and have subscribed to the service. The matching algorithm compares major details such as the names, gender and tribe of the missing person.
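As a sketch of this matching step, the logic can be expressed in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: the function names, the record fields, and the use of case-insensitive exact comparison are all assumptions.

```python
def normalize(value):
    # Trim and lower-case a field so that "Mary " and "mary" compare equal.
    return value.strip().lower()

def is_match(missing, found):
    # A match requires the major details (name, gender, tribe) to agree.
    return all(
        normalize(missing[field]) == normalize(found[field])
        for field in ("name", "gender", "tribe")
    )

def pending_alerts(missing_reports, found_reports):
    # Pair every missing report with each found report that matches it,
    # yielding (reporter_email, found_record) tuples to be emailed out.
    alerts = []
    for missing in missing_reports:
        for found in found_reports:
            if is_match(missing, found):
                alerts.append((missing["reporter_email"], found))
    return alerts
```

In a deployed system the comparison would likely need to tolerate spelling variants and partial names; an exact match is used here only to keep the sketch readable.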
3.4 Search for a Missing Person
Users should be able to search for their missing loved ones on the system. The system provides search criteria to make the search easier and more relevant, e.g. name of the missing person, tribe, age and gender.
The user is then able to see feeds or updates about the missing person they reported. There may be multiple entries about a single missing person, reported by different people at different times. All these updates should appear if they are associated with the missing person, as they may provide quality leading information for finding the missing person. Figure 2 below is a screenshot that captures the search results.
Figure 2. Screenshot showing search results of missing people
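The search criteria above can be sketched as a simple in-memory filter; the function and field names below are assumptions for illustration, not the system's actual code:

```python
def search_missing(records, name=None, tribe=None, gender=None, age=None):
    # Filter reported missing-person records on the criteria the system
    # exposes (name, tribe, age, gender); criteria left as None are ignored.
    results = []
    for record in records:
        if name is not None and name.lower() not in record["name"].lower():
            continue  # name is matched as a case-insensitive substring
        if tribe is not None and record["tribe"].lower() != tribe.lower():
            continue
        if gender is not None and record["gender"].lower() != gender.lower():
            continue
        if age is not None and record["age"] != age:
            continue
        results.append(record)
    return results
```

In production this filtering would more likely be pushed into the database query rather than done in application memory.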
3.5 General Procedure
The general procedure followed by the system is illustrated in figure 3 below. A family member or friend reports a missing person case. Anyone with recovery details about the missing person updates the records by providing leading information. The missing case reporter keeps searching the system for any leading information.
Fig 3. General procedure flow chart
The stakeholders of this emergency communication system include the lost case reporter, the found case reporter and the missing person. The lost case reporter may be a friend, a family member, or a person acquainted with the missing person. The interactions of the stakeholders with the system are depicted in the use case diagram in figure 4 below.
4. RESULTS
After full implementation and testing of the system, the prototype was evaluated with the aim of determining whether the developed system delivers the expected results. The following areas were evaluated to provide answers to the research questions set at the feasibility study of the project, which are in line with the project objectives and requirements. This information was realized through both qualitative and quantitative methods during the collection of data [11].
4.1 Determining the search success rate
To determine this rate, 20 people were reported as missing in the prototype. The success rate was recorded in the database, clearly showing matches between people reported as missing and those reported as found. Whenever there is a match, a notification alert is sent to the reporter of the missing person together with the leading information concerning the victim's current whereabouts. Table 1 below is a summary of the evaluation results.
Table 1. Summary of the evaluation results of finding missing persons

Cases reported   Search success   Search failure
20               16               4

The pie chart in figure 5 illustrates the proportion of the success rate versus the failure rate. The success rate is significant enough to qualify the prototype as having satisfied the objectives and expectations of the study.
Figure 5. Pie chart showing the success rate of finding missing people by using the mobile application
During the tests carried out in evaluation, it was determined that the reasons why a missing person may not be identified are:
• The person is not reported as missing in the system.
• The person may be reported with names and other details different from those used in the search.
The prototype was tested under each of the following scenarios:
• Unit testing - Each functional module was tested during and after development to ensure that it meets the requirements. Additionally, basic validation was done to ensure correct input data in each module.
• Integration testing - This testing was done before, during and after integration of all the modules. It therefore checks that the system has the required flow, from the point of reporting lost cases to missing cases and outputting notification alerts.
• Acceptance testing - We conducted beta testing of the system on a sample of users. Some of the users posed as lost case reporters while others posed as found case reporters. Wherever there is a match of records, email notifications are sent to the lost case reporters, providing them with the leading information about their missing people. The search function was also included in the test.
Besides, an F1 score test was conducted to measure the system's accuracy. Being a statistical analysis [12], the F1 score considers both the precision p and the recall r of the test to compute the score. Precision (p) is the number of correct results divided by the number of all returned results, while recall (r) is the number of correct results divided by the number of correct results that should have been returned. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and its worst at 0.
After considering 20 search test cases in the system, the following results were yielded:
Total scenarios = 20
Successful searches = 16
Correct searches = 13
Precision = 13/16 = 0.8125
Recall = 13/20 = 0.65
F1 = 2 × (0.8125 × 0.65)/(0.8125 + 0.65) = 2 × (0.528125/1.4625)
F1 ≈ 0.72
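The computation above can be reproduced directly from the reported counts; the function below is our own sketch of the standard F1 formula, not part of the system:

```python
def f1_score(correct, returned, expected):
    # Precision: correct results out of all returned results.
    precision = correct / returned
    # Recall: correct results out of all results that should have been returned.
    recall = correct / expected
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

score = f1_score(correct=13, returned=16, expected=20)
print(round(score, 2))  # 0.72
```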
4.2 Measuring access of the web database
The measures of efficiency considered were:
• Successful lost case report
• Successful found case report
• Successful search for a reported case
• Success in receiving notification alert emails
Twenty entries were made, and data on the above measures was taken and recorded in a MySQL server database table. Table 2 below is a summary of the client access efficiency data as analyzed.
Table 2. Database client access success measure

Access Efficiency Measures    Success   Failure
Lost case report              20        0
Found case report             20        0
Search for reported person    16        4
Notification alerts           16        4
4.3 Measuring the rate of success in identifying missing people
The users suggested that the system should include photos of the missing people, to make them easier to identify. The use of a photo is significant, as some people may recognize the missing person from their photo even when they do not have additional descriptive details such as the name.
It was also suggested that the identification details need to be more flexible and exhaustive in order to increase the chances of identifying a missing person. For instance, the missing person's age should be a range of numbers rather than an absolute number, because it is not always easy to tell the exact age of a person. An age bracket consisting of a range of years, for instance 25-30 years, may work better in this case.
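The age-bracket suggestion amounts to replacing an exact comparison with a range check; a minimal sketch, assuming each report stores the estimate as a (low, high) pair:

```python
def age_matches(reported_bracket, candidate_age):
    # True when the candidate's age falls inside the reported estimate,
    # e.g. a bracket of (25, 30) matches an age of 27 but not 33.
    low, high = reported_bracket
    return low <= candidate_age <= high

print(age_matches((25, 30), 27))  # True
print(age_matches((25, 30), 33))  # False
```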
4.4 Discussion of results
The results indicate that the system's accuracy is high when reporting the lost and found cases. This is simply because all that occurs at this point is filling in the respective forms and submitting them.
However, when it comes to searching for missing persons, the accuracy drops by 20% because this step involves a search algorithm that takes many parameters into consideration. In the event that some parameters used during the search do not match those used during the reporting of the lost or found case, the search returns null even when it should have retrieved the record. This results in a false negative, a common phenomenon in social research methods [13].
Consequently, the email alerts or notifications are affected by the result of the preceding step. The alerts step is equally affected by 20% and does not send notifications to all the recipients as it should.
In the general overview, the system achieves an accuracy level of approximately 80%, which is impressive. Even in cases where a false positive or false negative is returned, repeated searches with different parameters may increase the chances of returning the desired true results.
From the results we can compare the performance of the new system with pre-existing systems in the same domain. Being a mobile-based application, it is more convenient and accessible than web-based solutions like Google's Person Finder. The new Patanisha application leverages the readily available and accessible mobile devices and internet technology, as opposed to depending on desktop computers.
Additionally, unlike some of the legacy systems, the new application gives the public, and in some cases even the lost person, a chance to report themselves if they are in a position to do so. Some legacy systems only allow the administrator
to enter the records of missing persons, and this level of bureaucracy and restricted access rights becomes a hindrance to the reporting of some missing cases.
The new system is also cost-effective to develop and maintain, as it does not involve many resources. It does not require expensive hardware installation. Since it is also based on goodwill from the public in the reporting and updating of missing cases, it does not require many administrative resources, e.g. human resources.
5. CONCLUSION
These findings are consistent with other studies. A significant number of missing people have been traced in developed countries like the United States in the last 3 years by using various technology-based solutions such as social media and personal phone location applications. Kenya is slowly adopting this strategy, but there is a need to do better. These results should be a wake-up call for us to embrace the readily available technology resources in solving our own problems.
The project was indeed a good opportunity to unveil what an innovation using the readily available and widely accepted mobile technology and the internet can achieve. The literature cited suggests that there exists a gap in the prompt reporting, location and identification of missing people in this country. Indeed, this research comes in handy as a technology that allows for the timely reporting and identification of missing people.
Results from the evaluations carried out indicate that once a
case has been reported, there is a 72% chance that the victim
will be found. This is a significant improvement over the 30%
probability observed with the old manual system.
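The abstract reports that this evaluation was an F1-score test over 20 simulated scenarios yielding 0.72. A minimal sketch of how such a score is computed is shown below; the confusion counts used are purely illustrative (the paper does not publish the raw counts), chosen only so that they yield an F1 of approximately 0.72.

```python
# Hypothetical reconstruction of the F1-score evaluation: the counts
# below are illustrative only and are NOT taken from the paper.
def f1_score(tp, fp, fn):
    """Compute F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)   # fraction of reported matches that were correct
    recall = tp / (tp + fn)      # fraction of actual cases that were matched
    return 2 * precision * recall / (precision + recall)

# Example counts from 20 simulated search scenarios (hypothetical)
tp, fp, fn = 13, 4, 6
print(round(f1_score(tp, fp, fn), 2))  # -> 0.72
```

Note that an F1 score is a harmonic mean of precision and recall, not a probability of success, so the 72% figure should be read as a summary of retrieval quality rather than a per-case chance of recovery.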
The ultimate objective is to reunite friends and family who
have been separated by natural disasters or other reasons.
Evaluations carried out to measure how successfully users could
access the database returned positive results: users were able
to access the mobile application, register missing persons,
report found persons, and search for their loved ones.
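The register/report/search workflow described above can be sketched as a small in-memory registry. All class, method, and field names here are hypothetical; the paper does not specify the application's actual schema or API.

```python
# Minimal sketch of the reporting and search workflow; every name and
# field below is an assumption, not the application's actual design.
from dataclasses import dataclass, field

@dataclass
class MissingPersonRecord:
    name: str
    last_seen: str
    status: str = "missing"              # "missing" or "found"
    leads: list = field(default_factory=list)

class MissingPersonRegistry:
    def __init__(self):
        self._records = {}

    def register(self, case_id, name, last_seen):
        """Anyone -- not only an administrator -- can open a case."""
        self._records[case_id] = MissingPersonRecord(name, last_seen)

    def report_found(self, case_id, details):
        """The public (or the person themselves) reports a sighting."""
        record = self._records[case_id]
        record.leads.append(details)
        record.status = "found"

    def search(self, name_fragment):
        """Family and friends search cases by (partial) name."""
        return [r for r in self._records.values()
                if name_fragment.lower() in r.name.lower()]

registry = MissingPersonRegistry()
registry.register("c1", "Jane Doe", "Nairobi CBD")
registry.report_found("c1", "Seen near Uhuru Park")
print([r.status for r in registry.search("jane")])  # -> ['found']
```

The key design point mirrored from the text is that `register` and `report_found` are open to the public rather than gated behind an administrator role.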
It is recommended that the mobile-phone-based application
developed as a prototype be further optimized and adopted to
locate actual missing persons. The application will then give
friends and family members the opportunity to report missing
cases, which the public can view and respond to with leads
regarding the missing people. In this manner, more families
will be reunited with their missing loved ones.
State humanitarian agencies should embrace and promote this
system. Doing so would increase the publication of information
about unidentified people and remains, enlisting the public to
help maximise the chances of identification. In this way,
families and friends of missing people will be empowered to
play an active part in searching for their loved ones and to
gain vital closure if they are identified. The application will
also feature general information about missing-person
investigations and is intended to be a valuable resource if
well utilized.
6. ACKNOWLEDGMENTS
I express my sincere gratitude to everyone who supported me
throughout this MSc project. I am thankful to God for using
them to grant me inspiring guidance during the project work. I
am sincerely grateful to them for sharing their truthful and
illuminating views on a number of issues related to the
research project.
7. REFERENCES
[1] Paulides, D. (2014). The Missing Cases: 411 Series. 1st
ed. New York: International Publishers.
[2] Smith, W. (2000). Review of National Missing Persons
Agencies. Compass Partnership.
[3] Nina, A. and Fiona, D. (2011). Handbook to Practical
Disaster Preparedness for the Family. 2nd ed. London:
CreateSpace Independent Publishing Platform.
[4] Skinner, R. (2010). The Missing Link to Missing People.
1st ed. New York: HarperCollins Publishers.
[5] Lundin, C. (2007). When All Hell Breaks Loose: Stuff
You Need To Survive When Disaster Strikes. 1st ed.
London: Gibbs Smith.
[6] Damon, P. (2006). Introduction to International Disaster
Management. 1st ed. London: Butterworth-Heinemann.
[7] Andy, C. (2010). Using Google's Haiti Missing Persons
Widget. National Public Radio.
[8] Samarajiva, R. (2005). National Early Warning
System. LIRNEasia, [Online]. 2, 2. Available at:
http://lirneasia.net/2005/03/national-early-warning-system/
[Accessed 03 July 2014].
[9] Acharya, M. (2005). Amateur Radio: A Potential Tool in
Emergency Operations. 1st ed. New Delhi: A.P.H.
Publishing Corporation.
[10] Levinson, J. and Domb, A. (2013). Disaster Victim
Identification & Privacy. 1st ed. Jerusalem: The Hebrew
University of Jerusalem.
[11] Creswell, J.W. (2003). Research Design: Qualitative,
Quantitative and Mixed Methods Approaches. 2nd ed.
London: Sage Publications.
[12] Powers, D.M. (2011). Evaluation: From Precision, Recall
and F-Measure to ROC, Informedness, Markedness &
Correlation. Journal of Machine Learning Technologies,
2(1), 37–63.
[13] Bryman, A. (2008). Social Research Methods. 3rd ed.
Oxford: Oxford University Press.