This content describes the Call Detail Record (CDR) data format, the data acquisition method, visualization in Mobmap, and applications for disaster management.
Analysing Transportation Data with Open Source Big Data Analytic Tools (ijeei-iaes)
Big data analytics allows a vast amount of structured and unstructured data to be effectively processed so that correlations, hidden patterns, and other useful information can be mined from the data. Several open source big data analytic tools that can perform tasks such as dimensionality reduction, feature extraction, transformation, and optimization are now available. One interesting area where such tools can provide effective solutions is transportation. Big data analytics can be used to efficiently manage transport infrastructure assets such as roads, airports, bus stations, and ports. In this paper, an overview of two open source big data analytic tools is first provided, followed by a simple demonstration of their application to a transport dataset.
HIGH SPEED DATA RETRIEVAL FROM NATIONAL DATA CENTER (NDC) REDUCING TIME AND ... (IJCSEA Journal)
Fast and efficient data management is one of the most demanded technologies today. This paper proposes a system that automates the working procedures of the present manual system for storing and retrieving Bangladesh's huge volume of citizen information, and increases its effectiveness. The implemented search methodology is user friendly and fast enough for high-speed data retrieval, tolerating spelling errors in the input keywords used to search for a particular citizen. The main concern of this research is minimizing the total search time for a given keyword. This can be done by pre-establishing an idea of where the data belonging to the search keyword resides. The primary and secondary key codes generated by the Double Metaphone algorithm for each word are used to establish that idea about the word. The algorithm is used to create a map of the original database, through which the keyword is matched against the data.
Service Level Comparison for Online Shopping using Data Mining (IIRindia)
The term knowledge discovery in databases (KDD) refers to the analysis step of data mining. The goal of data mining is to extract knowledge and patterns from large data sets, not the data extraction itself. Big-data computing is a critical challenge for the ICT industry, as engineers and researchers deal with petabyte data sets in the cloud computing paradigm. Thus the demand for building a service stack to distribute, manage, and process massive data sets has risen drastically. We investigate the problem of a single source node broadcasting a big chunk of data to a set of nodes so as to minimize the maximum completion time. These nodes may be located in the same datacenter or across geo-distributed data centers. The big-data broadcasting problem is modeled as a LockStep Broadcast Tree (LSBT) problem. The main idea of LSBT is to define a basic unit of upload bandwidth r, such that a node with capacity c broadcasts data to a set of ⌊c/r⌋ children at rate r. Note that r is a parameter to be optimized as part of the LSBT problem. The broadcast data are further divided into m chunks, which can then be broadcast down the LSBT in a pipelined manner. In a homogeneous network environment in which each node has the same upload capacity c, the optimal uplink rate r of the LSBT is either c/2 or c/3, whichever gives the smaller maximum completion time. For heterogeneous environments, an O(n log² n) algorithm is presented to select an optimal uplink rate r and to construct an optimal LSBT. The numerical results show better performance, with lower computational complexity and low maximum completion time. The methodology includes building and broadcasting various web applications, followed by the gateway application and batch processing over the TSV data, after which web crawling for resources and the MapReduce process take place, and finally picking products from recommendations and purchasing them.
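The c/2-versus-c/3 trade-off in the abstract can be illustrated with a back-of-the-envelope cost model. This is a simplified sketch (uniform chunk size, ideal pipelining, tree depth approximated as log base ⌊c/r⌋ of n), not the paper's actual algorithm:

```python
import math

def lsbt_completion_time(n, c, r, size, m):
    """Rough completion time for pipelining m chunks of `size` total data
    down a lock-step tree over n receivers with uplink capacity c.
    Simplified model: fan-out k = floor(c/r), depth = ceil(log_k n)."""
    k = max(1, int(c / r + 1e-9))          # children each node can feed at rate r
    depth = n if k == 1 else max(1, math.ceil(math.log(n) / math.log(k)))
    chunk_time = (size / m) / r            # time to push one chunk one hop
    return (depth + m - 1) * chunk_time    # last chunk arrives after depth+m-1 hops

def best_homogeneous_rate(n, c, size, m):
    """Per the abstract, the homogeneous optimum is c/2 or c/3."""
    return min((c / 2, c / 3),
               key=lambda r: lsbt_completion_time(n, c, r, size, m))
```

In this toy model, a larger r means faster per-hop transfers but a narrower (deeper) tree; for example, with n = 1000 nodes, c = 100, and 1000 units of data in 10 chunks, r = c/2 wins despite the deeper tree.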
Due to the arrival of new technologies, devices, and communication means, the amount of data produced by mankind is growing rapidly every year, giving rise to the era of big data. The term big data comes with new challenges in inputting, processing, and outputting data. The paper focuses on the limitations of the traditional approach to managing data and on the components that are useful in handling big data. One approach used in processing big data is the Hadoop framework; the paper presents the major components of the framework and the working process within it.
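Hadoop itself is a Java framework; as a minimal illustration of the MapReduce programming model at its core (not Hadoop's actual API), the following pure-Python sketch simulates the map, shuffle, and reduce phases on a word-count job:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as Hadoop does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the list of values for each key.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big deal"])))
```

In real Hadoop the three phases run distributed across a cluster over HDFS blocks, but the data flow is exactly this key-grouping pattern.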
A Survey of Agent Based Pre-Processing and Knowledge Retrieval (IOSR Journals)
Abstract: Information retrieval is a major task in the present scenario, as the quantum of data is increasing at tremendous speed. Managing and mining knowledge for different users according to their interests is therefore the goal of every organization, whether related to grid computing, business intelligence, distributed databases, or any other field. To achieve this goal of extracting quality information from large databases, software agents have proved to be a strong pillar. Over the decades, researchers have implemented the concept of multi-agents to carry out the data mining process, focusing on its various steps. Among these, data pre-processing is found to be the most sensitive and crucial step, as the quality of the knowledge to be retrieved depends entirely on the quality of the raw data. Many methods and tools are available to pre-process data in an automated fashion using intelligent (self-learning) mobile agents, effective in distributed as well as centralized databases, but various quality factors still need attention to improve the quality of the retrieved knowledge. This article provides a review of the integration of these two emerging fields, software agents and the knowledge retrieval process, with a focus on the data pre-processing step.
Keywords: Data Mining, Multi Agents, Mobile Agents, Preprocessing, Software Agents
CarStream: An Industrial System of Big Data Processing for Internet of Vehicles (ijtsrd)
As Internet-of-Vehicles (IoV) technology becomes an increasingly important trend for future transportation, designing large-scale IoV systems has become a critical task that aims to process big data uploaded by fleet vehicles and to provide data-driven services. IoV data, especially high-frequency vehicle statuses (e.g., location, engine parameters), are characterized by large volume, a low density of value, and low data quality. Such characteristics pose challenges for developing real-time applications on top of such data. In this paper, we address the challenges of designing a scalable IoV system by describing CarStream, an industrial big data processing system for chauffeured car services. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience. Rakshitha K. S | Radhika K. R, "CarStream: An Industrial System of Big Data Processing for Internet of Vehicles", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 2, Issue 4, June 2018. URL: http://www.ijtsrd.com/papers/ijtsrd14408.pdf http://www.ijtsrd.com/computer-science/database/14408/carstream-an-industrial-system-of-big-data-processing-for-internet-of-vehicles/rakshitha-k-s
A Big Data Telco Solution by Dr. Laura Wynter (wkwsci-research)
Presented during the WKWSCI Symposium 2014
21 March 2014
Marina Bay Sands Expo and Convention Centre
Organized by the Wee Kim Wee School of Communication and Information at Nanyang Technological University
Advanced Analytics and Machine Learning with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/32c6TnG
Advanced data science techniques, like machine learning, have proven to be extremely useful tools for deriving valuable insights from existing data. Platforms like Spark, and complex libraries for R, Python, and Scala, put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative that addresses these issues in a more efficient and agile way.
Attend this webinar and learn:
- How data virtualization can accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- How popular tools from the data science ecosystem (Spark, Python, Zeppelin, Jupyter, etc.) integrate with Denodo
- How you can use the Denodo Platform with large data volumes in an efficient way
- About the success McCormick has had as a result of seasoning the Machine Learning and Blockchain Landscape with data virtualization
Are ubiquitous technologies the future vehicle for transportation planning ... (ijasuc)
Origin-Destination (OD) estimation has become a crucial aspect of long-term transportation planning. A wide variety of methods can be used for origin-destination estimation. Conventional methods like home surveys and roadside monitoring are slow and less effective. Bluetooth and CCTV cameras are also feasible methods for OD studies, but have their own downsides. At present, this information contributes only a very small percentage of data collection. Ubiquitous technologies like the mobile phones deployed in the proposed research are expected to enhance data collection and provide quick and effective OD estimation. In this paper we discuss how technology becomes the future vehicle for OD.
Real World Application of Big Data In Data Mining Tools (ijsrd.com)
The main aim of this paper is to study the notion of big data and its application in data mining tools such as R, Weka, RapidMiner, KNIME, and Mahout. We are awash in a flood of data today. In a broad range of application areas, data is being collected at unmatched scale. Decisions that previously were based on surmise, or on painstakingly constructed models of reality, can now be made based on the data itself. Such big data analysis now drives nearly every aspect of modern society, including mobile services, retail, manufacturing, financial services, life sciences, and the physical sciences. The paper mainly focuses on different types of data mining tools and their usage in big data knowledge discovery.
Traffic Data Analysis and Prediction using Big Data (Jongwook Woo)
- Denser traffic on Freeways 101, 405, and 10
- Rush hours from 7 am to 9 am produce a lot of traffic; the heaviest traffic starts around 3 pm and eases after 6 pm
- Major traffic areas: DTLA, Santa Monica, and Hollywood
- More insights can be found with a bigger dataset, using this framework for traffic analysis
- Such data and platform also provide an opportunity to predict traffic congestion; prediction can be performed with a machine learning algorithm, Decision Forest, achieving 83% accuracy in predicting the heaviest traffic jams
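The slides do not include the model code; as an illustrative sketch of the decision-forest idea (bootstrap-sampled decision trees voting by majority, here implemented from scratch on a made-up hour-of-day feature, not the authors' data or implementation):

```python
import random
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y, features):
    best = None  # (impurity, feature, threshold)
    for f in features:
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(X, y, depth, n_features):
    if depth == 0 or len(set(y)) == 1:
        return Counter(y).most_common(1)[0][0]
    split = best_split(X, y, random.sample(range(len(X[0])), n_features))
    if split is None:
        return Counter(y).most_common(1)[0][0]
    _, f, t = split
    left = [(r, lab) for r, lab in zip(X, y) if r[f] <= t]
    right = [(r, lab) for r, lab in zip(X, y) if r[f] > t]
    return (f, t,
            build_tree([r for r, _ in left], [l for _, l in left], depth - 1, n_features),
            build_tree([r for r, _ in right], [l for _, l in right], depth - 1, n_features))

def predict_tree(node, row):
    while isinstance(node, tuple):
        f, t, lo, hi = node
        node = lo if row[f] <= t else hi
    return node

def forest(X, y, n_trees=25, depth=3):
    trees = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        trees.append(build_tree([X[i] for i in idx], [y[i] for i in idx],
                                depth, max(1, len(X[0]) // 2)))
    return trees

def predict(trees, row):
    return Counter(predict_tree(t, row) for t in trees).most_common(1)[0][0]

random.seed(0)  # deterministic illustration
X = [[h] for h in range(24)]                                  # feature: hour of day
y = [1 if 7 <= h <= 9 or 15 <= h < 18 else 0 for h in range(24)]  # 1 = heavy traffic
trees = forest(X, y)
accuracy = sum(predict(trees, r) == lab for r, lab in zip(X, y)) / len(X)
```

A production Decision Forest (e.g., in Azure ML or scikit-learn) adds pruning, feature subsampling per split, and proper train/test evaluation; the voting-ensemble structure is the same.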
Application of OpenStreetMap in Disaster Risk Management (NopphawanTamkuan)
This content presents four procedures, investigated in detail with an emphasis on simplicity for application to disaster management: downloading from the OSM website, downloading using a QGIS plugin, downloading a file converted to a universal file format (shapefile), and adding a rendered map in the background. The use of these data for resilient urban planning is demonstrated, including setting a hazard layer (flood model), setting an exposure layer (population), and exposure analysis using the InaSAFE plugin.
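InaSAFE performs the exposure overlay inside QGIS; as a minimal illustration of what that analysis computes (hypothetical coordinates and populations, not data from the content), here is a point-in-polygon count of population exposed to a flood extent:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def exposed_population(flood_polygon, settlements):
    """Sum the population of settlement points that fall inside the hazard area."""
    return sum(pop for (x, y, pop) in settlements
               if point_in_polygon(x, y, flood_polygon))

flood = [(0, 0), (4, 0), (4, 4), (0, 4)]          # hypothetical flood extent
people = [(1, 1, 120), (3, 2, 80), (9, 9, 500)]   # (x, y, population) points
total_exposed = exposed_population(flood, people)
```

InaSAFE does the same kind of spatial join between hazard and exposure layers, with raster support, impact functions, and reporting on top.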
This content presents a guide to accessing satellite (Landsat-8) and microsatellite (Diwata) data, and shows how to use GDAL and AROSICS (Python-based open-source software) for co-registration.
Disaster Damage Assessment and Recovery Monitoring Using Night-Time Light on GEE (NopphawanTamkuan)
This content shows the possibilities and useful cases of night-time light data for assessing disaster damage and recovery in post-disaster situations, such as the Hokkaido earthquake, the dam collapse in Laos, and the Kerala flood in India. Moreover, how to browse and profile night-time light on GEE is demonstrated here.
This content presents the basics of Synthetic Aperture Radar (SAR), including its geometry, how the image is created, essential parameters, interpretation, SAR sensor specifications, and advantages and disadvantages.
Differential SAR Interferometry Using Sentinel-1 Data for Kumamoto Earthquake (NopphawanTamkuan)
This content presents, step by step, Differential SAR Interferometry (DInSAR) analysis in SNAP. The case study is the Kumamoto earthquake, using Sentinel-1.
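The deck walks through SNAP's GUI; as a sketch of the final DInSAR conversion it performs (unwrapped differential phase to line-of-sight displacement), assuming the Sentinel-1 C-band wavelength of about 5.55 cm and noting that the sign convention varies between processors:

```python
import math

SENTINEL1_WAVELENGTH_M = 0.0555  # C-band, approx. 5.55 cm

def los_displacement(delta_phi_rad, wavelength=SENTINEL1_WAVELENGTH_M):
    """Convert unwrapped differential phase (radians) to line-of-sight
    displacement in metres. One full fringe (2*pi of phase) corresponds to
    half a wavelength of motion; the sign convention is processor-dependent."""
    return -wavelength * delta_phi_rad / (4 * math.pi)

# One interferometric fringe in a Sentinel-1 interferogram is ~2.8 cm of LOS motion:
fringe_m = abs(los_displacement(2 * math.pi))
```

This is why counting fringes in a Kumamoto interferogram directly reads off centimetres of coseismic ground motion along the radar line of sight.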
Earthquake Damage Detection Using SAR Interferometric Coherence (NopphawanTamkuan)
This content presents how to apply interferometric analysis for damage detection. The case study is the 2016 Kumamoto earthquake. ALOS-2 images are used to calculate interferometric coherence, and the coherence change between the pre-event and co-event image pairs is estimated to map the likely degree of damage.
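The quantity at the heart of this method is the sample coherence of two co-registered complex SAR images over a small estimation window. A minimal sketch of the standard estimator (not code from the content):

```python
import math

def coherence(s1, s2):
    """Sample coherence of two co-registered complex SAR images, given as
    flattened lists of complex pixel values over an estimation window.
    Returns a value in [0, 1]: 1 = perfectly correlated scatterers."""
    cross = sum(a * b.conjugate() for a, b in zip(s1, s2))
    p1 = sum(abs(a) ** 2 for a in s1)
    p2 = sum(abs(b) ** 2 for b in s2)
    return abs(cross) / math.sqrt(p1 * p2)

def coherence_change(pre_event, co_event):
    """Damage proxy: drop from pre-event to co-event coherence."""
    return pre_event - co_event

window = [1 + 1j, 2 + 0j, 0 + 1j]       # toy window of complex pixels
identical = coherence(window, window)    # unchanged ground -> coherence 1
decorrelated = coherence(window, [1 + 0j, 0 + 1j, 1 + 1j])
```

Collapsed buildings rearrange the scatterers between acquisitions, so damaged blocks show a large positive `coherence_change` while intact areas stay near zero.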
How to better understand SAR, interpret SAR products and realize the limitations (NopphawanTamkuan)
This content shows how to better understand SAR (how to interpret SAR images and read a SAR interferogram). Moreover, the capacities and limitations of SAR are discussed for each type of disaster emergency mapping (flood, landslide, and earthquake).
This content presents how to detect water or flood areas using ALOS-2 images acquired before and during floods. First, it shows how to calibrate intensity to dB, then how to find a threshold value and apply it to the images.
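The calibrate-then-threshold step can be sketched in a few lines. The −83 dB constant below is JAXA's published calibration factor for PALSAR-2 products (confirm against the product documentation for your processing level), and the −20 dB water threshold is a scene-dependent assumption, not a value from the content:

```python
import math

CF_DB = -83.0  # JAXA calibration factor for ALOS-2/PALSAR-2 amplitude products

def dn_to_sigma0_db(dn, cf=CF_DB):
    """Calibrate a PALSAR-2 digital number (amplitude) to sigma-naught in dB."""
    return 10 * math.log10(dn ** 2) + cf

def flood_mask(dns, threshold_db=-20.0):
    """Open water is smooth and backscatters weakly toward the sensor, so
    pixels below the (scene-dependent) dB threshold are flagged as water."""
    return [dn_to_sigma0_db(dn) < threshold_db for dn in dns]

mask = flood_mask([200, 8000])  # low-amplitude pixel vs bright land pixel
```

Comparing the masks from the pre-flood and during-flood images then isolates newly inundated areas from permanent water bodies.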
Differential SAR Interferometry Using ALOS-2 Data for Nepal Earthquake (NopphawanTamkuan)
This content presents Differential SAR Interferometry (DInSAR) analysis with GMTSAR (on a Linux-based OS: downloading a DEM and preparing directories for processing). The case study is the 2015 Nepal earthquake, using ALOS-2.
This content shows geospatial data sources for Japan and global data, coordinate reference systems, and how to create a map of population density (vector analysis: dissolve vectors, join tables, and calculate area and population density).
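The dissolve-and-calculate workflow done in QGIS reduces to a group-by with a derived field. A minimal sketch with hypothetical attribute values (QGIS computes the areas from the geometries; here they are given directly):

```python
from collections import defaultdict

def dissolve_and_density(features):
    """Group polygon features by an attribute and compute population density,
    mimicking QGIS's dissolve + field-calculator steps.
    Each feature is (region, area_km2, population)."""
    area = defaultdict(float)
    pop = defaultdict(int)
    for region, a, p in features:
        area[region] += a     # dissolve: merge areas sharing the attribute
        pop[region] += p      # ...and sum their joined population values
    return {r: pop[r] / area[r] for r in area}  # people per square km

density = dissolve_and_density([
    ("Tokyo", 300.0, 3_000_000),   # hypothetical numbers for illustration
    ("Tokyo", 200.0, 1_000_000),
    ("Chiba", 500.0, 1_000_000),
])
```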
Raster Analysis (Color Composite and Remote Sensing Indices) (NopphawanTamkuan)
This content shows how to download data from USGS EarthExplorer, create color composites for Landsat-8 and Sentinel-2, extract a specific area, and compute remote sensing indices (NDVI and NDWI) using the raster calculator.
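The two indices entered in the raster calculator are simple band ratios; for Landsat-8, Green is band 3, Red is band 4, and NIR is band 5. A per-pixel sketch (shown here on scalar reflectances rather than full rasters):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    High for healthy vegetation, near zero or negative for water and soil."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """McFeeters NDWI for surface water: (Green - NIR) / (Green + NIR).
    Positive over open water, negative over vegetation and dry land."""
    return (green - nir) / (green + nir)

veg = ndvi(nir=0.5, red=0.1)      # vegetated pixel: strong NIR reflectance
water = ndwi(green=0.4, nir=0.1)  # water pixel: NIR strongly absorbed
```

In the QGIS raster calculator the same expressions are written band-wise, e.g. `("B5" - "B4") / ("B5" + "B4")` for Landsat-8 NDVI.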
This content presents how to classify a satellite image with the QGIS Semi-Automatic Classification Plugin. It includes pre-processing, creating regions of interest (ROIs), and applying classification methods.
This content provides basic Python before starting geospatial analysis. It starts from data types, variables, basic coding, conditional statements, for and while loops, and how to read a file.
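The topics listed above can be condensed into a few lines; an illustrative snippet (not taken from the content itself, which uses its own examples):

```python
import io

# Variables and data types
name = "Sentinel-1"
pixels = [12, 0, 7, 0, 25]

# Conditional statement
sensor_type = "SAR" if name.startswith("Sentinel-1") else "optical"

# for loop: count non-zero pixels
nonzero = 0
for value in pixels:
    if value > 0:
        nonzero += 1

# while loop: simple countdown
count = 3
while count > 0:
    count -= 1

# Reading a file line by line (an in-memory file stands in for a real one)
for line in io.StringIO("lat,lon\n35.0,135.0\n"):
    fields = line.strip().split(",")
```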
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
1. Center for Research and Application for Satellite Remote Sensing
Yamaguchi University
Visualizing CDR Data
2. By associating each record with the location of the base station (Cell ID) that was used, CDR data makes it possible to estimate a mobile phone's position at the time of communication. Through such data processing, the movement trajectory of a phone's user can be traced.
The format of CDR data differs by provider, but it basically includes:
● ID
● Time / timestamp
● Geographical coordinates (latitude and longitude)
Data Format
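As a minimal sketch, assuming a comma-separated layout with the three fields listed above (actual CDR layouts vary by provider, so the field order and the sample record here are invented for illustration), one record could be parsed like this:

```python
import csv
from collections import namedtuple
from datetime import datetime

# Hypothetical record layout: id, timestamp, latitude, longitude
CDRRecord = namedtuple("CDRRecord", ["user_id", "timestamp", "lat", "lon"])

def parse_cdr_line(line):
    """Parse one comma-separated CDR line into a typed record."""
    user_id, ts, lat, lon = next(csv.reader([line]))
    return CDRRecord(user_id,
                     datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"),
                     float(lat), float(lon))

rec = parse_cdr_line("u001,2017-06-15 08:30:00,13.7563,100.5018")
print(rec.user_id, rec.timestamp, rec.lat, rec.lon)
```

Typed timestamps and coordinates make the later sorting and joining steps straightforward.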
3. Even if personal information is excluded, CDR data remains commercially confidential for mobile phone companies, and if personal information related to call records were ever leaked, it could have a serious social impact. Obtaining CDR data is therefore not easy: it requires careful negotiation and agreements between mobile phone companies and government ministries.
As an alternative, the project investigated open data that has a format similar to CDR data and represents people's movement trajectories.
Data Acquisition Method
4. Reality Mining Dataset
Data collected by the MIT Human Dynamics Lab in 2004, showing the trajectories of 100 individuals over a 9-month period.
The dataset was produced by a smartphone application for data collection that recorded the Cell ID and Bluetooth transmissions/receptions and linked them to location information. You can register on the project website, then download and use the data.
Data Acquisition Method
5. How to download data?
1. Go to the website:
http://realitycommons.media.mit.edu/realitymining4.html
2. Fill out the information requested by the website.
3. After submitting this section, you will receive an email with a link to the requested dataset.
NOTE:
● The data from Reality Commons cannot be used immediately; its file format must be converted first.
● https://opencellid.org/ is "The world's largest Open Database of Cell Towers", a dataset of cell tower locations.
Data Acquisition Method
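Because datasets such as Reality Mining record only Cell IDs, trajectories can only be mapped after joining those IDs against a tower-location table such as an OpenCellID export. A minimal sketch of that join, with an invented tower table and phone log:

```python
# Hypothetical cell-tower table: Cell ID -> (lat, lon), as might be
# extracted from an OpenCellID export. Values are invented.
towers = {
    "310-26-1234": (42.3601, -71.0589),
    "310-26-5678": (42.3736, -71.1097),
}

# Observations of (timestamp, cell_id) from a phone's log.
log = [("2004-09-01 09:00", "310-26-1234"),
       ("2004-09-01 09:30", "310-26-5678")]

# Join each observation to its tower position; drop unknown cells.
trajectory = [(ts, *towers[cid]) for ts, cid in log if cid in towers]
print(trajectory)
```

The result is a time-ordered list of (timestamp, lat, lon) points, i.e. the same shape as CDR-style trajectory data.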
6. iTIC Open Data Archive
● Taxi probe data published by the Intelligent Traffic Information Center Foundation,
a group of Thailand's automobile and traffic-related operators.
● It contains 1-2 second frequency GPS logs of approximately 4,000 vehicles over the period June to December 2017.
Although the method, accuracy, and frequency of location acquisition differ from those of CDR data, it is considered useful in terms of handling large amounts of movement trajectories.
Data Acquisition Method
7. How to download data?
1. Go to the website: https://www.iticfoundation.org/download
2. Fill out the form in the red box.
Data Acquisition Method
8. 3. After pressing the "Confirm" button on the first page, the iTIC Open Data Archives page will appear.
4. Click the information that you want to download.
5. The Index of /data/probe-data page will appear.
6. Click the data to download it.
Data Acquisition Method
9. Because of the large amount of data, it is difficult to process and visualize it with general-purpose spreadsheet software. The following software is useful:
● PostgreSQL/PostGIS, Spatialite
● MobMap
Data Analysis Methods
10. PostgreSQL/PostGIS, Spatialite
PostGIS extends PostgreSQL, a widely used relational database management system, with functions for handling spatial data, making the aggregation and weighting of large amounts of movement trajectory data more efficient. Spatialite is the corresponding spatial extension of SQLite, a lightweight database management system.
Movement trajectory data is analyzed mainly by sorting the records by ID and time/timestamp.
Data Analysis Methods
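The sorting by ID and timestamp described above can be sketched with SQLite, here driven from Python's built-in sqlite3 module; the table layout and sample rows are invented for illustration (a production setup would use PostgreSQL/PostGIS on the full dataset):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tracks (id TEXT, t TEXT, lat REAL, lon REAL)")
con.executemany("INSERT INTO tracks VALUES (?, ?, ?, ?)", [
    ("b", "2017-06-15 08:05:00", 13.75, 100.51),
    ("a", "2017-06-15 08:10:00", 13.76, 100.50),
    ("a", "2017-06-15 08:00:00", 13.74, 100.49),
])

# Sorting by ID, then timestamp, groups each device's points together
# and puts them in chronological order, reconstructing its trajectory.
rows = con.execute(
    "SELECT id, t, lat, lon FROM tracks ORDER BY id, t").fetchall()
for row in rows:
    print(row)
```

Lexicographic ordering of "YYYY-MM-DD HH:MM:SS" strings matches chronological order, which is why text timestamps sort correctly here.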
11. MobMap
MobMap is software that specializes in visualizing such trajectory data and provides functions for animating individual movements. It runs in a web browser and can be used free of charge.
Data Analysis Methods
12. Note: To visualize data in MobMap, use the data from iTIC.
To prepare a subset of the data, use the following method:
1. Download "A bundle of command-line tools for managing SQLite database files..." from https://sqlite.org/download.html
2. Extract the executable binary files.
Data Analysis Methods
13. 3. Prepare a SQL script in Notepad (or another text editor) with the code in (a). Replace [input file name], [date], and [output file name]; the clause tbl.id_str LIKE 'a%' filters the data to reduce its size, keeping only id_str values that start with 'a'. Save the script with the ".sql" extension.
NOTE: Here is the code for preparing the SQL script.
CREATE TABLE tbl (id_str varchar, valid integer, lat double, lon double, t
timestamp without time zone, speed integer, heading integer, for_hire_light
integer, engine_acc integer);
.separator ,
.import [input file name] tbl
CREATE TABLE hash (id_int INTEGER PRIMARY KEY, id_str varchar);
INSERT INTO hash (id_str) SELECT DISTINCT id_str FROM tbl;
.output [output file name]
SELECT id_int, tbl.id_str, t, lat, lon FROM tbl
LEFT JOIN hash ON tbl.id_str = hash.id_str
WHERE tbl.t LIKE '[date YYYY-MM-DD] %'
AND tbl.id_str LIKE 'a%'
;
Data Analysis Methods
14. 4. Execute sqlite3.exe: open the Command Prompt and enter the command to change to the program's directory (a). Then type the command to run sqlite3 with the script and write the output (b).
5. Load the output into MobMap.
NOTE: The input file and SQL script should be located in the same directory as sqlite3.exe.
Data Analysis Methods
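As an alternative to the sqlite3 command-line shell, the same date and ID-prefix filtering can be sketched with Python's built-in sqlite3 module; the table layout and sample rows below are illustrative, not actual iTIC data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id_str TEXT, t TEXT, lat REAL, lon REAL)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?)", [
    ("a123", "2017-06-15 08:00:01", 13.74, 100.49),
    ("b456", "2017-06-15 08:00:02", 13.75, 100.50),
    ("a123", "2017-06-16 09:00:00", 13.76, 100.51),
])

# Keep only rows from one day whose id_str starts with 'a',
# mirroring the WHERE clause of the SQL script shown earlier.
rows = con.execute(
    "SELECT id_str, t, lat, lon FROM tbl "
    "WHERE t LIKE ? AND id_str LIKE 'a%'",
    ("2017-06-15 %",)).fetchall()
print(rows)
```

The resulting subset can be written out as CSV and loaded into MobMap in the same way as the command-line output.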
15. How to launch MobMap?
1. Go to the website: https://shiba.iis.u-tokyo.ac.jp/member/ueyama/mm/
2. Press the "Launch" button.
Data Analysis Methods
16. How to load data into MobMap?
3. After pressing the "Launch" button, this page will appear.
4. Press the "Add moving data" icon (at the position of the "Start from here" arrowhead) to import a data file (.csv).
5. Select the data that you want to visualize.
6. Specify the columns containing the ID, XY coordinates, and time.
7. Click the "Start loading" button.
Data Analysis Methods
17. a. Use the Play and Stop buttons to animate your trajectory data.
b. You can choose which type of background to display with your data by pressing this button, which offers the options shown.
c. You can edit the properties of your data by pressing the "Open configuration" button.
Data Analysis Methods
18. Exposure population analysis using CDR data
A long-term analysis of CDR data allows us to observe the migration situation. With such data, the state of evacuation and return after a disaster can be accurately grasped, and coordination with infrastructure development at evacuation destinations, administrative services, and reconstruction activities in the stricken area can be promoted efficiently and effectively.
Use Case
19. Exposure population analysis in the 2015 Nepal earthquake
Analysis of anonymized CDR data from 12 million people after the 2015 Nepal earthquake revealed the shifting migration patterns of the affected population. In particular, an estimated 390,000 people migrated out of the Kathmandu Valley to the surrounding areas and south-central Nepal. These data are useful for planning humanitarian assistance activities.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4779046/
Use Case
20. Flood Disaster Management
UN Global Pulse attempted to manage flood disasters by observing people's response to
flood disasters based on the frequency of calls from CDR data. The results are as follows.
● CDR data is extremely useful as a proxy indicator of population distribution.
● Public alerts are not always effective in alerting people.
● The trajectories of people read from the CDR data are useful for understanding the
process of flood impacts.
● Most of the calls made during the disaster were in the most affected areas.
These results indicate that CDR data can be useful for measuring the impacts of floods on people and infrastructure, as well as people's attention to disasters.
Use Case