This document is the master's thesis of Nguyen Minh Hoa from Vietnam National University, Hanoi, supervised by Dr. Do Van Nguyen and Dr. Tran Quoc Long. The thesis proposes a method to detect moving objects directly from encoded video bitstreams, without requiring decoding. It first segments video frames into regions based on macroblock properties available in the encoded domain, such as motion vectors and block size. It then groups regions into objects and refines them to produce the final detection results. The method aims to enable efficient motion analysis of large video datasets for applications such as video surveillance.
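The segmentation step can be sketched in a deliberately simplified form: flag macroblocks whose motion vector magnitude exceeds a threshold, then flood-fill flagged neighbors into connected regions. This is an illustrative approximation, not the thesis's actual algorithm; the threshold value, 4-connectivity, and grid data layout are all assumptions.

```python
import math

def segment_moving_blocks(motion_vectors, threshold=1.0):
    """Label connected regions of 'moving' macroblocks.

    motion_vectors: 2D grid [row][col] of (dx, dy) motion vectors.
    A block is 'moving' if its vector magnitude exceeds the threshold;
    adjacent moving blocks (4-connectivity) share a region label.
    Returns (label grid, number of regions); label 0 means static.
    """
    rows, cols = len(motion_vectors), len(motion_vectors[0])
    moving = [[math.hypot(*motion_vectors[r][c]) > threshold
               for c in range(cols)] for r in range(rows)]
    labels = [[0] * cols for _ in range(rows)]
    region = 0
    for r in range(rows):
        for c in range(cols):
            if moving[r][c] and labels[r][c] == 0:
                region += 1
                stack = [(r, c)]          # flood-fill one connected region
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and moving[y][x] and labels[y][x] == 0):
                        labels[y][x] = region
                        stack += [(y - 1, x), (y + 1, x),
                                  (y, x - 1), (y, x + 1)]
    return labels, region
```

In a real encoded-domain pipeline the motion vectors would come straight from the parsed bitstream, which is what makes the approach cheap compared with pixel-domain detection.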
This PhD thesis examines theoretical and practical aspects of typestate modeling in object-oriented languages. It presents the Hanoi modeling language for representing typestate constraints and describes a dynamic checker for Hanoi models implemented using AspectJ. The thesis also reports on a user study that evaluated whether programmers can effectively reason about typestate models. The study found that programmers were generally able to answer questions about typestate models, suggesting typestate is a comprehensible concept for developers. Overall, the thesis provides insights into making typestate modeling practical and usable in real-world programming.
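As an illustration of what a dynamic typestate checker does (this is not Hanoi's syntax or the thesis's AspectJ implementation, just a hypothetical Python sketch), a runtime checker can intercept method calls and reject any call made in a state the protocol forbids:

```python
class TypestateError(Exception):
    """Raised when a method is called in a state that forbids it."""
    pass

def typestate(transitions, initial):
    """Class decorator enforcing a typestate protocol at runtime.

    transitions: {method_name: (allowed_states, next_state)}.
    Each listed method is wrapped so it checks the object's current
    state before running and advances the state afterward.
    """
    def wrap(cls):
        for name, (allowed, nxt) in transitions.items():
            original = getattr(cls, name)
            def checked(self, *a, _orig=original, _name=name,
                        _allowed=allowed, _next=nxt, **kw):
                if self._state not in _allowed:
                    raise TypestateError(
                        f"{_name}() not allowed in state {self._state!r}")
                result = _orig(self, *a, **kw)
                self._state = _next
                return result
            setattr(cls, name, checked)
        orig_init = cls.__init__
        def init(self, *a, **kw):
            self._state = initial          # every instance starts here
            orig_init(self, *a, **kw)
        cls.__init__ = init
        return cls
    return wrap

@typestate({"open": ({"closed"}, "open"),
            "read": ({"open"}, "open"),
            "close": ({"open"}, "closed")}, initial="closed")
class Stream:
    def __init__(self):
        self.data = "hello"
    def open(self):
        pass
    def read(self):
        return self.data
    def close(self):
        pass
```

The thesis's checker does the equivalent interception with AspectJ advice woven around Java method calls; the decorator above is only meant to make the concept concrete.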
The document describes a research study that developed an automated billing system with a touch screen interface for the Office of the Treasurer of Municipal Government of Nasugbu, Batangas. The system automates tax billing and allows users to view income reports through a touch screen. The researchers used the waterfall model and tested the system to determine if it improved efficiency, reliability, security and other factors compared to the existing manual system. Results found the proposed automated system was rated as excellent or very satisfactory across all evaluation criteria.
Opinion Formation about Childhood Immunization and Disease Spread on Networks, by Zhao Shanshan
This thesis examines opinion formation about childhood immunization and disease spread on networks. The author develops an agent-based model in MATLAB that simulates disease spread on a biological network of households and information diffusion on an overlapping social network. Households are connected via two overlapping Erdos-Renyi networks representing biological contacts and social information sharing. A disease spreads via the SIR model on the biological network, while opinions about vaccination spread on the social network through an information cascade process. The results examine how disease incidence, outbreak length, and vaccination rates are affected by parameters such as the infection rate and social influence. The model aims to illuminate the relationship between disease spread and socially transmitted views on immunization.
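The epidemic half of such a model, SIR dynamics on an Erdos-Renyi random graph, can be sketched in a few lines. The discrete-time update rule and the parameter values are illustrative assumptions, not the thesis's MATLAB implementation, and the opinion-cascade layer is omitted.

```python
import random

def simulate_sir(n, p_edge, p_infect, p_recover, seed=1, steps=200):
    """Discrete-time SIR epidemic on an Erdos-Renyi G(n, p_edge) graph.

    Each step, every infected node infects each susceptible neighbor
    with probability p_infect, then recovers with probability p_recover.
    Returns the final number of recovered (ever-infected) nodes.
    """
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]          # build the random graph
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                adj[i].append(j)
                adj[j].append(i)
    state = ["S"] * n
    state[0] = "I"                        # a single index case
    for _ in range(steps):
        infected = [i for i in range(n) if state[i] == "I"]
        if not infected:
            break                         # epidemic has died out
        for i in infected:
            for j in adj[i]:
                if state[j] == "S" and rng.random() < p_infect:
                    state[j] = "I"
            if rng.random() < p_recover:
                state[i] = "R"
    return state.count("R")
```

Coupling this to a second network, on which vaccination opinions cascade and change who is susceptible, is the step the thesis investigates.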
PROPOSED NEW BRIDGE IN NAPUYAN ASENIERO DAPITAN CIT1.docx, uploaded by IreneCarbonilla2
The document proposes a new bridge in Napuyan, Aseniero, Dapitan City, Zamboanga del Norte, Philippines. It will be presented to the faculty of the College of Engineering at Jose Rizal Memorial State University as a research proposal. The proposal includes inspecting the existing bridge, collecting data on river conditions and traffic patterns, and producing a design for the new bridge that meets safety and structural requirements. A timeline schedules the study for completion by September 2023.
This document discusses the development of a Flow Assurance Tool (FAT1) for simulating flow through subsea pipelines. It acknowledges those who helped in developing the tool, including the advisor Dr. Robert Randall. The tool aims to predict flow patterns, pressures, velocities and temperatures for single-phase and two-phase flow, including through valves, pumps and chokes. It also aims to predict cooldown times during shutdown. The document outlines the development of a black oil flow model for single-phase and two-phase flow, heat transfer calculations, and cooldown time estimation. It then compares results from FAT1 to the commercial software PipeSIM to validate FAT1's accuracy.
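Tools of this kind are built on standard pipe-flow relations. As a hedged illustration (this is a textbook single-phase calculation, not FAT1's black-oil or two-phase model), pressure drop along a pipeline segment can be computed with the Darcy-Weisbach equation, using the Swamee-Jain explicit approximation to the Colebrook friction factor for turbulent flow:

```python
import math

def pressure_drop_pa(flow_m3s, diameter_m, length_m,
                     density, viscosity, roughness=4.5e-5):
    """Single-phase pressure drop (Pa) via Darcy-Weisbach.

    Friction factor: 64/Re for laminar flow (Re < 2300), otherwise
    the Swamee-Jain explicit approximation to the Colebrook equation.
    roughness is the absolute pipe roughness in meters.
    """
    area = math.pi * diameter_m ** 2 / 4
    velocity = flow_m3s / area
    reynolds = density * velocity * diameter_m / viscosity
    if reynolds < 2300:
        f = 64 / reynolds
    else:
        f = 0.25 / math.log10(roughness / (3.7 * diameter_m)
                              + 5.74 / reynolds ** 0.9) ** 2
    return f * (length_m / diameter_m) * density * velocity ** 2 / 2
```

For example, 0.01 m3/s of water (density 1000 kg/m3, viscosity 0.001 Pa*s) through 100 m of 0.1 m pipe gives a drop on the order of 16 kPa. Two-phase black-oil models layer flow-pattern maps and phase-behavior correlations on top of relations like this one.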
This thesis presents a centralized fault location (CFL) system for an MVDC shipboard power system (SPS). The CFL system uses ultrafast communication and processing of sensor data to identify and locate faults. Mathematical models are developed to analyze the CFL system's performance based on factors like frame size, decision time, and operating time. These models are verified using an implemented CFL system on a controller hardware-in-the-loop testbed interfacing RTDS and industrial automation hardware. The testbed demonstrates the CFL system's ultrafast fault detection and precise fault location for different SPS operating conditions and system configurations. Factors affecting practical CFL implementation performance and scaling are also identified and analyzed.
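The thesis's CFL algorithms are not detailed in this summary, but a standard two-ended traveling-wave formula illustrates the kind of time-synchronized sensor comparison a centralized locator performs; treating this particular formula as the thesis's method would be an assumption. If a fault surge reaches terminal A at time t_a and terminal B at time t_b on a line of length L with wave speed v, the distance from A is d = (L + v*(t_a - t_b)) / 2.

```python
def fault_distance_from_a(line_length, wave_speed, t_a, t_b):
    """Two-ended traveling-wave fault location.

    line_length: line length in meters
    wave_speed:  surge propagation speed in m/s
    t_a, t_b:    time-synchronized arrival times at terminals A and B
    Returns the fault distance from terminal A in meters.
    """
    return (line_length + wave_speed * (t_a - t_b)) / 2
```

The accuracy of any such scheme hinges on time synchronization and communication latency, which is exactly why the thesis models frame size, decision time, and operating time.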
The document is an abstract for a PhD dissertation titled "Approximation Schemes for Euclidean Vehicle Routing Problems" by Aparna Das from Brown University in 2011. The dissertation studies two vehicle routing problems: the unit demand problem and the unsplittable demand problem. For the unit demand problem in constant dimensions, the dissertation provides a quasi-polynomial time approximation scheme. For the unsplittable demand problem in one dimension, it provides asymptotic polynomial time approximation schemes. The techniques involve exploiting the Euclidean structure of the input to design approximation algorithms with arbitrarily good approximations.
This document is the thesis submitted by Doğan Ulus to Boğaziçi University in partial fulfillment of the requirements for a Master of Science degree in Electrical and Electronics Engineering. The thesis studies assertion-based verification methodology for analog and mixed-signal designs and improves the analog expressiveness of assertions. It introduces the halo concept to formally express analog signals and their tolerances in assertions. It also integrates measurements and circuit analyses into assertions to provide a complete verification methodology for analog and mixed-signal designs. Finally, it develops the AMS-Verify framework to verify properties on simulations using the proposed solutions.
Research of the Current Status of Vinyl Records in Context of the Internet, by Sarah Steffen
This document provides a history of sound recording and playback technologies. It discusses early analogue formats like Edison's phonograph from 1877 which used wax cylinders, and Berliner's flat disc records from 1887 which used a lateral recording technique. Standardization occurred around 1902 with mechanical duplication of cylinders and a rotation speed of 160 rpm. Shellac became the primary material for flat discs from 1896. Developments in the 1920s included electrical recording and playback techniques which improved audio fidelity. The 78 rpm disc format was established, while composers criticized mechanical reproduction of music. Later sections will likely analyze the current status of vinyl records in the digital age and internet.
This thesis examines the demand for microinsurance against fire risk in Ghana using a mixed logit model. A survey was conducted at the Kumasi Central Market, where traders were presented with hypothetical insurance options varying in coverage levels and premiums. A mixed logit model is estimated using a hierarchical Bayesian method to account for heterogeneity in preferences. The results show that traders prefer options with higher coverage and lower premiums, and that traders facing higher fire risk are more likely to take up insurance. Willingness-to-pay estimates from the mixed logit model suggest microinsurance could be profitable in Ghana if offered to traders. The study recommends that stakeholders in the insurance industry introduce separate microinsurance policies for traders to help address the fire risks facing markets.
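The core quantities behind such an analysis can be illustrated with a plain conditional logit, the fixed-coefficient special case of mixed logit: choice probabilities come from a softmax over linear utilities, and marginal willingness to pay is the ratio of an attribute coefficient to the (negated) price coefficient. The attribute names and coefficient values below are invented for illustration.

```python
import math

def logit_choice_probs(options, beta):
    """Conditional logit choice probabilities.

    options: list of attribute dicts, one per alternative.
    beta:    dict of taste coefficients (fixed, i.e. no mixing).
    Utility is linear: V_i = sum_k beta[k] * x_i[k];
    P(i) = exp(V_i) / sum_j exp(V_j).
    """
    utilities = [sum(beta[k] * x[k] for k in beta) for x in options]
    m = max(utilities)                    # stabilize the softmax
    expu = [math.exp(u - m) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

def willingness_to_pay(beta_attr, beta_price):
    """Marginal WTP for one unit of an attribute: -beta_attr / beta_price."""
    return -beta_attr / beta_price
```

A mixed logit generalizes this by drawing beta from a population distribution (here, estimated with hierarchical Bayes) and averaging the probabilities over those draws, which is what captures preference heterogeneity across traders.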
This document provides an introduction to the book "Computer Science from the Bottom Up" which aims to teach computer science concepts from a low-level perspective. It explains that the book will cover topics like Unix, C programming, binary representation, computer architecture, operating systems, processes, virtual memory, toolchains and more with the goal of explaining how computers work under the hood rather than just how to use them. It notes that modern operating systems are very complex but this book aims to break them down concept by concept to improve understanding.
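As a small taste of the binary-representation material such a book covers (an illustration, not an excerpt from the book), signed integers are almost universally stored in two's complement, which can be encoded and decoded like this:

```python
def to_twos_complement(value, bits=8):
    """Return the two's-complement bit pattern of a signed integer."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    # Masking with 2^bits - 1 yields the wrapped unsigned pattern.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(pattern):
    """Decode a two's-complement bit string back to a signed integer."""
    bits = len(pattern)
    raw = int(pattern, 2)
    # A set top bit means the value is negative: subtract 2^bits.
    return raw - (1 << bits) if pattern[0] == "1" else raw
```

For example, -1 in 8 bits is all ones, which is why incrementing it wraps to zero in hardware.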
Tomaszewski, Mark - Thesis: Application of Consumer-Off-The-Shelf (COTS) Devi..., by Mark Tomaszewski
This is the full text of my master's thesis.
Contributions include:
1. Development of original software tools to enable use of Myo and Sphero in MATLAB
2. A theoretical mathematical framework for modeling the human upper limb using Myo and Sphero, including intrinsic and extrinsic model calibration and methods for analyzing model assumptions and accuracy
3. Implementation of experiments that apply the upper-limb model (2) using Myo and Sphero with the software tools (1) to validate the model's correctness (i.e., satisfaction of modeling assumptions) and performance (i.e., accuracy)
1) Alice Ciccone's doctoral thesis examines decision making in environmental dilemmas through natural and laboratory experiments.
2) The thesis contains three chapters, with the first analyzing the environmental effects of a vehicle tax reform in Norway using registry data. It finds the reform led to reductions in CO2 emissions from new vehicles.
3) The second chapter uses a bilateral trade experiment to study fairness preferences, finding offers and outcomes tend to be fair but self-interest also plays a role.
4) The third chapter develops a model of sequential bargaining with reference points and loss aversion, then compares predictions to results from a laboratory experiment on bargaining with outside options.
This document is a thesis submitted by Gary Hopkins to the University of Cape Town in partial fulfillment of a Bachelor of Science degree in Civil Engineering. The thesis investigates the post-buckling behavior of shell structures through highly non-linear analysis. It implements an elasto-plastic constitutive law within the framework of SESKA, a C++ analysis code, to model shell structures using three-dimensional continuum mechanics while avoiding simplifications of shell geometry and behavior. Simple shell structures are analyzed to gain preliminary understanding of post-buckling behavior and determine the feasibility of the methods employed for further analyses. Results will be benchmarked against other verified analyses that used specialized shell elements and visco-plastic material laws.
Cybersecurity is a constant and, by all accounts, growing challenge. Although software products are gradually becoming more secure and novel approaches to cybersecurity are being developed, hackers are becoming more adept, their tools are better, and their markets are flourishing. The rising tide of network intrusions has focused organizations' attention on how to protect themselves better. This report, the second in a multiphase study on the future of cybersecurity, reveals perspectives and perceptions from chief information security officers; examines the development of network defense measures and the countermeasures that attackers create to subvert them; and explores the role of software vulnerabilities and inherent weaknesses. A heuristic model was developed to demonstrate the various cybersecurity levers that organizations can control, as well as exogenous factors that they cannot. Among the report's findings was that cybersecurity experts are at least as focused on preserving their organizations' reputations as on protecting actual property. Researchers also found that organizational size and software quality play significant roles in the strategies that defenders may adopt. Finally, those who secure networks will have to pay increasing attention to the role that smart devices can play in letting hackers in. Organizations could benefit from better understanding their risk posture with respect to threats (actors), vulnerabilities (protection needs), and impact (assets). Policy recommendations include better defining the role of government and exploring information-sharing responsibilities.
This report examines news consumption patterns in the United States. It analyzes survey data to identify different profiles of how people get their news. Four main news consumption profiles are identified: cable news watchers, social media users, print/NPR listeners, and broadcast television viewers. The report also finds associations between demographic characteristics, political views, and perceptions of the reliability of different news sources and platforms. For example, social media users are younger and more likely to perceive online platforms as reliable sources of news. The analysis aims to provide insights into how attitudes toward media vary and implications for public discourse.
This document summarizes Edwin Hernandez-Mondragon's dissertation, which proposes improvements to networking protocols for rapidly moving environments. The dissertation presents two contributions: 1) the Rapid Mobility Network Emulator (RAMON), which combines emulation and simulation to facilitate analysis of wireless protocol performance under high speed and mobility, allowing control of factors like attenuation and latency; and 2) a predictive extension of Mobile IP that uses Kalman filtering to forecast speed and trajectory, enabling preemptive actions and improving performance at speeds up to 80 m/s. Experiments show the predictive Mobile IP improves performance by at least 30% over the standard protocol.
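The prediction idea can be illustrated with a scalar Kalman filter that tracks a mobile node's speed from noisy measurements. This is a simplified sketch under a constant-speed process model with made-up noise variances, not the dissertation's actual filter design.

```python
def kalman_1d(measurements, process_var=1.0, meas_var=4.0):
    """Scalar Kalman filter over a sequence of noisy speed measurements.

    Process model: x_k = x_{k-1} + w, w ~ N(0, process_var)
    Measurement:   z_k = x_k + v,     v ~ N(0, meas_var)
    Returns the filtered speed estimate after each measurement.
    """
    x, p = measurements[0], 1.0          # initial estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + process_var              # predict: uncertainty grows
        k = p / (p + meas_var)           # Kalman gain
        x = x + k * (z - x)              # update toward the measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With a forecast of speed (and, with a vector state, trajectory) in hand, a mobility protocol can trigger handoff signaling before the link actually degrades, which is the preemptive behavior the dissertation exploits.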
This document is John Reed Richards' doctoral dissertation from the University of Delaware submitted in 1994. It examines the fluid mechanics of liquid-liquid systems through both numerical modeling and experimental analysis. The dissertation contains 6 chapters that study various phenomena involving liquid-liquid interfaces, including static interface shapes with volume constraints, steady laminar liquid-liquid jet flows at high Reynolds numbers, dynamic breakup of liquid-liquid jets, and drop formation in liquid-liquid systems before and after jetting. It was approved by Richards' dissertation committee as meeting the requirements for a PhD in Chemical Engineering.
This document provides an overview of a dissertation on applying operations research techniques in constraint programming. The dissertation contains an introduction and several chapters. The introduction motivates the research by discussing the benefits of combining operations research and constraint programming. The remaining chapters present contributions in the areas of propagation and search, with a focus on combining techniques from both fields.
This document provides information about analytical chemistry concepts and terminology. It begins with an introduction to units of measurement and expressions of concentration commonly used in analytical chemistry. It then discusses the basic equipment and techniques used to measure mass and volume, prepare standard solutions, and record experimental work in a laboratory notebook. The document emphasizes the importance of careful measurements and calculations in analytical chemistry. It aims to establish a foundation of terminology, concepts, and procedures that are fundamental to quantitative chemical analysis.
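The kinds of concentration calculations such a text builds toward can be sketched very simply; the solute and numbers in the usage note are illustrative examples, not taken from the document.

```python
def molarity(mass_g, molar_mass_g_per_mol, volume_l):
    """Molar concentration (mol/L) of a solution made from a weighed solid:
    moles = mass / molar mass, then divide by the solution volume."""
    return mass_g / molar_mass_g_per_mol / volume_l

def dilution_volume(c_stock, c_target, v_target):
    """Stock volume needed for a dilution, from C1*V1 = C2*V2."""
    return c_target * v_target / c_stock
```

For example, dissolving 5.844 g of NaCl (molar mass 58.44 g/mol) to a final volume of 1.000 L gives a 0.100 M standard solution, and preparing 0.500 L of 0.100 M solution from a 1.00 M stock requires 0.050 L of stock.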
"Mobile Advertising" is the title of my thesis, which was submitted in partial fulfillment of the requirements of the Bachelor Degree of Arts in Media Management.
This degree is offered by Department of Media and Communication at Royal University of Phnom Penh.
CC BY-NC-SA license.
This thesis examines the potential incentives a leader may have to initiate war in order to consolidate domestic power and implement demanding policies. The model proposes that threatening the population with the consequences of an external defeat during a time of war can deter them from attempting a revolution. This allows the leader to extract more from the population, who choose to voluntarily relinquish some liberty in order to avoid the costs of war. The thesis then extends the model to consider how generating popular support for a belligerent foreign policy can similarly commit the leader in the short term when other commitment devices may be imperfect. Historical examples from the Franco-Prussian War and Cold War are discussed in relation to the incentives proposed in the models.
This document summarizes a study on the factors affecting the quality of production information when using building information modeling (BIM) based design. It describes a literature review on the problems with existing 2D drawing-based design documentation and potential solutions offered by BIM. It then presents a theoretical framework identifying principal factors influencing production information quality and strategic countermeasures under the PAS 1192-2 standard. The research design section outlines a qualitative study using interviews to analyze these factors for a BIM-based design project.
This document is an after-action assessment report by the U.S. Department of Justice of the police response to demonstrations in Ferguson, Missouri in August 2014 following the shooting of Michael Brown. The report analyzes the police response over the first 17 days through document review and interviews. It finds issues with incident command, use of force, militarization of the police response, orders to protesters to "keep moving", and need for improved training and policies. The report provides lessons learned to help law enforcement improve response to mass demonstrations.
This document evaluates the Strategic Decision Support Centers (SDSCs) implemented by the Chicago Police Department.
The SDSCs are real-time crime centers located in each police district that bring together staff, technologies, and data to support policing operations and strategic decision-making. The evaluation assessed SDSC operations, technologies, and the impact on crime rates.
The evaluation found that the SDSCs functioned as intended by facilitating communication and information sharing. Technologies like ShotSpotter, police cameras, and mapping tools supported response to crimes and monitoring of areas. Statistical analyses estimated that SDSCs were associated with moderate reductions in total crime rates of 5-10% in their respective districts.
In addition, technologies like gunshot detection systems and video feeds provided timely data to police, and crime analysis supported strategic planning. However, opportunities for improvement were identified, such as better integrating technologies and standardizing processes across districts.
*CC-BY-NC-SA License
This thesis examines the potential incentives a leader may have to initiate war in order to consolidate domestic power and implement demanding policies. The model proposes that threatening the population with the consequences of an external defeat during a time of war can deter them from attempting a revolution. This allows the leader to extract more from the population, who choose to voluntarily relinquish some liberty in order to avoid the costs of war. The thesis then extends the model to consider how generating popular support for a belligerent foreign policy can similarly commit the leader in the short term when other commitment devices may be imperfect. Historical examples from the Franco-Prussian War and Cold War are discussed in relation to the incentives proposed in the models.
This document summarizes a study on the factors affecting the quality of production information when using building information modeling (BIM) based design. It describes a literature review on the problems with existing 2D drawing-based design documentation and potential solutions offered by BIM. It then presents a theoretical framework identifying principal factors influencing production information quality and strategic countermeasures under the PAS 1192-2 standard. The research design section outlines a qualitative study using interviews to analyze these factors for a BIM-based design project.
This document is an after-action assessment report by the U.S. Department of Justice of the police response to demonstrations in Ferguson, Missouri in August 2014 following the shooting of Michael Brown. The report analyzes the police response over the first 17 days through document review and interviews. It finds issues with incident command, use of force, militarization of the police response, orders to protesters to "keep moving", and need for improved training and policies. The report provides lessons learned to help law enforcement improve response to mass demonstrations.
This document evaluates the Strategic Decision Support Centers (SDSCs) implemented by the Chicago Police Department.
The SDSCs are real-time crime centers located in each police district that bring together staff, technologies, and data to support policing operations and strategic decision-making. The evaluation assessed SDSC operations, technologies, and the impact on crime rates.
The evaluation found that the SDSCs functioned as intended by facilitating communication and information sharing. Technologies like ShotSpotter, police cameras, and mapping tools supported response to crimes and monitoring of areas. Statistical analyses estimated that SDSCs were associated with moderate reductions in total crime rates of 5-10% in their respective districts.
This document evaluates the Strategic Decision Support Centers (SDSCs) implemented by the Chicago Police Department.
The SDSCs are real-time crime centers located in each police district that bring together staff, technologies, and data to support policing operations and strategic decision-making. The evaluation assessed SDSC operations, technologies, and the impact on crime rates.
The evaluation found that the SDSCs functioned as intended by facilitating communication and information sharing. Technologies like gunshot detection systems and video feeds provided timely data to police. Crime analysis supported strategic planning. However, opportunities for improvement were identified, such as better integrating technologies and standardizing processes across districts.
Statistical analysis found that monthly crime counts, including homic
VIETNAM NATIONAL UNIVERSITY, HANOI
UNIVERSITY OF ENGINEERING AND TECHNOLOGY
NGUYEN MINH HOA
MOTION ANALYSIS FROM ENCODED VIDEO
BITSTREAM
MASTER’S THESIS
HA NOI – 2018
VIETNAM NATIONAL UNIVERSITY, HANOI
UNIVERSITY OF ENGINEERING AND TECHNOLOGY
NGUYEN MINH HOA
MOTION ANALYSIS FROM ENCODED VIDEO
BITSTREAM
Major: Computer Science
MASTER’S THESIS
Supervisor: Dr. Do Van Nguyen
Co-Supervisor: Dr. Tran Quoc Long
HA NOI - 2018
AUTHORSHIP
“I hereby declare that the work contained in this thesis is my own and that I have
not submitted this thesis at any other institution in order to obtain a degree. To
the best of my knowledge and belief, the thesis contains no material previously
published or written by another person other than that listed in the bibliography
and identified as references.”
Signature: ………………………………………………
SUPERVISOR’S APPROVAL
“I hereby approve that the thesis in its current form is ready for committee
examination as a requirement for the Master of Computer Science degree at the
University of Engineering and Technology.”
Signature: ………………………………………………
Signature: ………………………………………………
ACKNOWLEDGMENTS
First of all, I would like to express special gratitude to my supervisors, Dr. Do
Van Nguyen and Dr. Tran Quoc Long, for their enthusiastic instruction, technical
explanations, and advice throughout this project.
I also want to give sincere thanks to Assoc. Prof. Dr. Ha Le Thanh and Assoc. Prof.
Dr. Nguyen Thi Thuy for their instruction and for the background knowledge behind
this thesis. I would also like to thank my teachers and my friends in the Human
Machine Interaction Lab for their support.
I thank my friends and colleagues in the project "Nghiên Cứu Công Nghệ Tóm Tắt
Video" and in the project "Multimedia application tools for intangible cultural
heritage conservation and promotion", project number ĐTDL.CN-34/16, for their
work and support.
Last but not least, I want to thank my family and all of my friends for their
motivation and support. They stand by me and inspire me whenever I face tough
times.
TABLE OF CONTENTS
AUTHORSHIP.......................................................................................................i
SUPERVISOR’S APPROVAL.............................................................................ii
ACKNOWLEDGMENTS....................................................................................iii
TABLE OF CONTENTS......................................................................................1
ABBREVIATIONS...............................................................................................3
List of Figures .......................................................................................................4
List of Tables.........................................................................................................5
INTRODUCTION.................................................................................................6
CHAPTER 1. LITERATURE REVIEW ............................................................9
Moving object detection in the pixel domain..........................................9
Moving object detection in the compressed domain.............................10
1.2.1. Motion vector approaches.............................................................11
1.2.2. Size of Macroblock approaches ....................................................13
Chapter Summarization.........................................................................14
CHAPTER 2. METHODOLOGY ....................................................................15
Video compression standard h264 ........................................................15
2.1.1. H264 file structure.........................................................................15
2.1.2. Macroblock....................................................................................18
2.1.3. Motion vector................................................................................19
Proposed method ...................................................................................21
2.2.1. Process video bitstream.................................................................21
2.2.2. Macroblock-based Segmentation..................................................22
2.2.3. Object-based Segmentation...........................................................24
2.2.4. Object Refinement ........................................................................28
Chapter Summarization.........................................................................28
CHAPTER 3. RESULTS ..................................................................................30
The moving object detection application ..............................................30
3.1.1. The process of application ............................................................31
3.1.2. The motion information ................................................................34
3.1.3. Synthesizing movement information ............................................35
3.1.4. Storing Movement Information ....................................................36
Experiments...........................................................................................36
3.2.1. Dataset...........................................................................................36
3.2.2. Evaluation methods.......................................................................40
3.2.3. Implementations............................................................................41
3.2.4. Experimental results......................................................................41
Chapter Summarization.........................................................................44
CONCLUSIONS.................................................................................................45
List of author’s publications related to the thesis..............................................46
REFERENCES....................................................................................................47
List of Figures
Figure 1.1. The process of moving object detection with data in the pixel domain
.............................................................................................................................10
Figure 1.2. The process of moving object detection with data in the compressed
domain.................................................................................................................11
Figure 2.1. The structure of a H264 file..............................................................15
Figure 2.2. RBSP structure..................................................................................16
Figure 2.3. Slice structure ...................................................................................18
Figure 2.4. Macroblock structure........................................................................18
Figure 2.5. The motion vector of a Macroblock .................................................20
Figure 2.6. The process of moving object detection method..............................22
Figure 2.7. Skipped Macroblock.........................................................................23
Figure 2.8. (a) Outdoor and indoor frames, (b) the "size-map" of the frames, (c)
the "motion-map" of the frames...........................................................................24
Figure 2.9. Example of the “consistency” of motion vectors.............................26
Figure 3.1. The implementation process of the approach...................................33
Figure 3.2. Data structure for storing motion information..................................35
Figure 3.3. Example frames of test videos..........................................................37
Figure 3.4. Example frames and their ground truth............................................39
Figure 3.5. An example frame of Pedestrians (a) and ground truth image (b)...40
List of Tables
Table 2.1. NALU types .......................................................................................16
Table 2.2. Slice types ..........................................................................................17
Table 3.1. The information of test videos ...........................................................38
Table 3.2. The information of test sequences in group 1....................................39
Table 3.3. The performance of two approaches with Pedestrians, PETS2006,
Highway, and Office ...........................................................................................42
Table 3.4. The experimental result of Poppe’s approach on the 2nd group........42
Table 3.5. The experimental result of the proposed method on the 2nd group ..43
INTRODUCTION
Today, video content is used extensively in many areas of life, such as indoor
monitoring, traffic monitoring, etc. The number of videos shared over the
Internet at any given time is also extremely large. According to statistics,
hundreds of hours of video are uploaded to Youtube every minute [1]. Moreover,
the general trend today is to install surveillance cameras in homes for
surveillance and security purposes. These cameras normally operate and store the
surveillance videos automatically. Only when special situations or events occur
do people revisit the stored video data. The problem is: how can such a large
volume of video be evaluated in a short amount of time? For example, when a
burglary or an intrusion occurs, we cannot spend hours checking each previously
stored video. A tool that determines the moments when an object is moving in a
long video is therefore essential to reducing the time and effort of searching.
Normally, in order to reduce the size of videos for transmission or storage, a
video compression procedure is performed at the surveillance camera. The
compressed information, in the form of a bitstream, is then stored or transmitted
to a server for analysis. The video analysis process needs many features to
describe different aspects of vision. Typically, these features are extracted
from the pixel values of each video frame by fully decompressing the bitstream.
The decompression procedure requires a device with high computational capacity.
However, with the trend of the "Internet of Things", there are many devices with
low processing capacity that are not capable of performing this full video
decompression at high speed. It is therefore difficult to run an approach that
requires a lot of computing power in real time.
Another way to extract features from a video is to use the data in the compressed
video itself. These data can include transform coefficients, motion vectors,
quantization steps, quantization parameters, etc. By processing and analyzing
these data, we can handle several important computer vision tasks, including
moving object detection, human action detection, face recognition, and moving
object tracking.
This thesis proposes a new method to detect moving objects by exploring and
applying motion estimation techniques in the compressed domain of video. The
method is then used to build an application that supports searching for movement
in home surveillance videos. The compression format of the videos in this thesis
is the H.264 compression standard (MPEG-4 Part 10), a popular video compression
standard today.
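An H.264 Annex B bitstream is organized as NAL units delimited by byte-aligned start codes, with the NAL unit type carried in the low five bits of the byte that follows each start code. As a minimal, self-contained illustration of working directly with such a bitstream (the payload bytes below are synthetic, not a real encoded video):

```python
# Illustrative sketch: locate NAL units in an H.264 Annex B bitstream.
# NAL units are delimited by 3- or 4-byte start codes (0x000001 / 0x00000001);
# the byte after the start code carries the NAL unit type in its low 5 bits.

def find_nal_units(data: bytes):
    """Return (offset, nal_type) pairs for each NAL unit in `data`."""
    units = []
    i = 0
    while i < len(data) - 3:
        if data[i:i + 4] == b"\x00\x00\x00\x01":      # 4-byte start code
            units.append((i, data[i + 4] & 0x1F))
            i += 4
        elif data[i:i + 3] == b"\x00\x00\x01":        # 3-byte start code
            units.append((i, data[i + 3] & 0x1F))
            i += 3
        else:
            i += 1
    return units

# Synthetic stream: SPS (type 7), PPS (type 8), IDR slice (type 5).
stream = (b"\x00\x00\x00\x01\x67" + b"\x42" * 8 +
          b"\x00\x00\x00\x01\x68" + b"\xce" * 4 +
          b"\x00\x00\x01\x65" + b"\x88" * 16)
print(find_nal_units(stream))  # [(0, 7), (13, 8), (22, 5)]
```

A real parser would additionally undo emulation-prevention bytes inside each NAL unit before interpreting its payload; this sketch only shows the framing.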
Aims
The goal of the thesis is to propose a method for detecting moving objects in
the compressed domain of a video. I then build an application that uses the
method to support searching for the moments that contain moving objects in a
video.
Object and Scope of the study
Within the framework of the thesis, I study algorithms for detecting moving
objects in video, especially algorithms that detect moving objects in the
compressed domain. The video compression standard used in the thesis is
H.264/AVC.
The theory of video compression and computer vision is taken from scientific
articles related to video analysis in the compressed domain, in particular
determining motion in the compressed domain of a video.
The videos for testing and experiments are obtained from surveillance cameras
both indoor and outdoor.
Method and procedures
- Literature research: Study existing systems for motion analysis and evaluation
on compressed video, and scientific articles related to the analysis and
evaluation of motion on compressed video.
- Experimental research: Conduct experimental setups for each theoretical part,
such as extracting video data, compiling data, and evaluating motion based on
the obtained data.
- Experimental evaluation: Each experiment is conducted independently on each
module and then integrated and deployed.
Contributions
The thesis proposes a new moving object detection method for surveillance video
encoded with the H.264 compression standard, using the motion vectors and the
sizes of macroblocks.
Thesis structure
Apart from the introduction, the conclusion, and the references, this thesis is
organized into three chapters with the following main contents:
Chapter 1 is the literature review. This chapter presents the related work of
the thesis, including moving object detection methods in the pixel domain and
moving object detection methods in the compressed domain.
Chapter 2 presents the basic knowledge about the H.264 video compression
standard, such as the H.264 file structure, macroblocks, and motion vectors, and
describes the proposed moving object detection method in detail, including
processing the video bitstream, the macroblock-based segmentation phase, the
object-based segmentation phase, and the object refinement phase.
Chapter 3 presents the results of the method, including an application using the
proposed method and the experimental results.
CHAPTER 1.
LITERATURE REVIEW
Today, surveillance cameras are used extensively around the world, and the
volume of surveillance video has grown tremendously. Problems often encountered
with surveillance video include event searching, motion tracking, abnormal
behavior detection, etc. In order to handle these tasks, a method is needed that
can determine at which moments in each video movement occurs.
Usually, video is compressed for storage and transmission. Previous moving
object detection methods usually use data from the pixel images, such as color
values, edges, etc. To obtain images that can be displayed or processed, the
system must fully decode the video. This consumes a large amount of computing
resources, time, and memory. I propose a method that can quickly detect moving
objects in high-resolution videos. The data used in the method are taken from
the compressed domain of the video, including the motion vector and the size of
each macroblock (in bits) after encoding. This reduces the processing time
considerably compared to methods implemented with data in the pixel domain.
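The core idea can be sketched as follows. This is an illustrative toy, not the thesis's actual algorithm: the threshold values and the simple OR-combination of the two cues are arbitrary choices for demonstration only.

```python
import numpy as np

# Illustrative sketch: classify each macroblock as "moving" from its motion
# vector magnitude and its encoded size in bits. Thresholds are hypothetical.

def mark_moving_macroblocks(mv_x, mv_y, mb_bits, mv_thresh=2.0, bits_thresh=64):
    """Return a boolean grid: True where a macroblock likely covers motion.

    mv_x, mv_y : per-macroblock motion vector components (2-D arrays)
    mb_bits    : encoded size of each macroblock in bits (2-D array)
    """
    magnitude = np.hypot(mv_x, mv_y)
    # Flag a macroblock if its motion vector is large enough OR it needed many
    # bits to encode (skipped/static macroblocks cost almost nothing).
    return (magnitude >= mv_thresh) | (mb_bits >= bits_thresh)

# Toy 4x4 macroblock grid: motion in the centre, static elsewhere.
mv_x = np.zeros((4, 4)); mv_y = np.zeros((4, 4))
mv_x[1:3, 1:3] = 3.0                      # moving region
mb_bits = np.full((4, 4), 8)              # cheap, mostly-skipped blocks
mb_bits[1:3, 1:3] = 120                   # expensive, detailed blocks
mask = mark_moving_macroblocks(mv_x, mv_y, mb_bits)
print(int(mask.sum()))  # 4 macroblocks flagged as moving
```

In the thesis's actual pipeline these per-macroblock decisions are only the first step; flagged macroblocks are then grouped into regions and objects and refined, as Chapter 2 describes.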
The problem of motion detection in video has long been studied. It is the first
step in a series of computer vision problems such as object tracking, object
detection, abnormal movement detection, etc. There are usually two approaches to
this problem: using fully decoded video data (pixel domain data) or using data
taken directly from the undecoded video (compressed domain data). The following
sections outline the studies based on these two approaches.
Moving object detection in the pixel domain
Typically, to reduce the size of the video for transmission, a video encoding process is performed inside the surveillance camera and the compressed information is transmitted as a bit stream to a server for video analysis. Common video compression standards used today include MP4, H264, and H265. To be viewable, these compressed videos must be decoded into image frames. We call these image frames the pixel domain, and the data obtained from them the data in the pixel domain. Fig. 1.1 describes the process of moving object detection methods in the pixel domain. The data in the pixel domain include the color values of the pixels, the number of color channels of each pixel, the edges, etc.
Figure 1.1. The process of moving object detection with data in the pixel domain
To determine moving objects in the pixel domain, background subtraction algorithms are commonly used. Many such methods have been introduced over the years. They usually exploit the relationships between frames in a time series.
Background subtraction is defined in [2] as: “Background subtraction is a widely used approach for detecting moving objects in videos from static cameras. The rationale in the approach is that of detecting the moving objects from the difference between the current frame and a reference frame, often called the “background image”, or “background model”. As a basic, the background image must be a representation of the scene with no moving objects and must be kept regularly updated so as to adapt to the varying luminance conditions and geometry settings.”
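The idea in this definition can be sketched in a few lines. The snippet below is a minimal illustration, not any of the surveyed methods: it thresholds the absolute difference between the current frame and a background model, and updates the model with a running average so it adapts to slow illumination changes. The function names and the threshold value are my own choices for the example.

```python
import numpy as np

def detect_motion(frame, background, threshold=25):
    """Mark pixels whose absolute difference from the background
    model exceeds a threshold as foreground (moving)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean foreground mask

def update_background(background, frame, alpha=0.05):
    """Running (exponential) average update, so the background model
    slowly adapts to changing illumination."""
    return (1 - alpha) * background + alpha * frame

# toy example: static dark scene with one small bright "moving object"
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200
mask = detect_motion(frame, background)
```

Real systems replace the single background image with richer models (Gaussian averages, temporal medians, mixtures of Gaussians), as surveyed below, but the frame-versus-reference comparison stays the same.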
Notable results include methods using a Gaussian average, such as those of Wren et al. [3] and Koller et al. [4]; methods using a temporal median filter, such as those of Lo and Velastin [5] and Cucchiara et al. [6]; and methods using a mixture of Gaussians, such as those of Stauffer and Grimson [7] and of Power and Schoonees [8]; etc.
The above methods share a common characteristic: the processed data are obtained by fully decompressing the compressed bitstream, and this decompression procedure requires a computationally powerful device. However, with the trend of the "Internet of Things," most low-end devices are not capable of performing high-speed decompression. Therefore, there should be a video analysis mechanism that works directly on the compressed video.
Moving object detection in the compressed domain
Normally, videos are encoded using some compression standard. Each compression standard specifies how to shrink the video size via a certain structure, so the compressed videos contain less data. For example, with the H264 compression standard, the data contained in the compressed video include information about macroblocks, motion vectors, frame information, etc. We call these data the data in the compressed domain, or the video compression domain. Fig. 1.2 shows the process of moving object detection methods using the data in the compressed domain.
Figure 1.2. The process of moving object detection with data in the compressed
domain
In general, the amount of data in the compressed domain is much smaller than in the pixel domain. The idea of using data in the compressed domain of the H264 compression standard for video analysis has also been investigated by a number of researchers around the world. To detect motion in the compressed domain, two types of data are usually used: the motion vectors and the sizes (in bits) of the macroblocks.
1.2.1. Motion vector approaches
A number of algorithms with good performance have been proposed to analyze video content in the H264 compressed domain [9] [10]. Zeng et al. [11] proposed a method to detect moving objects in H264 compressed videos based on motion vectors. Motion vectors are extracted from the motion field and classified into several types. Then, they are grouped into blocks through a Markov Random Field (MRF) classification process. Liu et al.
[12] recognized the shape of an object by using a map for each object. This
approach is based on a binary partition tree created by macroblocks. Cipres et al.
[13] presented a moving object detection approach in the H264 compressed
domain based on fuzzy logic. The motion vectors are used to remove the noise that appears during the encoding process and to represent the concepts that describe
the detected regions. Then, the valid motion vectors are grouped into blocks. Each
of them could be identified as a moving object in the video scene. The moving
objects of each frame are described with common terms like shape, size, position,
and velocity. Mak et al. [14] used the length, angle, and direction of motion
vectors to track objects by applying the MRF. Bruyne et al. [15] estimated the reliability of motion vectors by comparing them with projected motion vectors from surrounding frames. Then, they combined this information with the magnitude of the motion vectors to distinguish foreground objects from the background. This method can localize noisy motion vectors so that their effect during classification is diminished. Wang et al. [16] proposed a
background modeling method using the motion vector and local binary pattern
(LBP) to detect the moving object. When a background block was similar to a
foreground block, a noisy motion vector would appear. To obtain a more reliable
and dense motion vector field, the initial motion vector fields were preprocessed
by a temporal accumulation within three inter frames and a 3×3 median filtering.
After that, the LBP feature was introduced to describe the spatial correlation
among neighboring blocks. This approach can reduce the time of extracting
moving objects while also performing an effective synopsis analysis. Marcus
Laumer [17] proposed an approach to segment video frames into the foreground
and background and, according to this segmentation, to identify regions
containing moving objects. The approach uses a map to indicate the "weight" of
each (sub-)macroblock for the presence of a moving object. This map is the input
of a new spatiotemporal detection algorithm that refines the weights indicating the level of motion for each block. Then, the quantization parameters of
macroblocks are used to apply individual thresholds to the block weights to
segment the video frames. The accuracy of the approach was approximately 50%.
To identify human actions, Tom et al. [18] proposed a fast action identification algorithm. The algorithm uses the quantization parameter gradient image (QGI) and motion vectors with support vector machines (SVM) to classify the types of actions. The algorithm can also handle lighting, scale, and some other environmental variables, with an accuracy rate of 85% on videos with resolution 176×144. It can identify walking, running, etc. Similarly, Tom, Rangarajan, and colleagues also used the QGI and motion vectors to propose a new method to classify human actions, the Projection Based Learning of the Meta-cognitive Radial Basis Functional Network (PBL-McRBFN).
For the motion tracking problem, Biswas et al. [19] proposed a method for detecting abnormal actions by analyzing motion vectors. This method mainly relies on observing the motion vectors to find the difference between abnormal actions and normal situations. The classifier used here is the Gaussian Mixture Model (GMM). This approach is based on their earlier approach [20], improved by using the direction of the motion vector. The speed of the approach in experiments is about 70 fps. Thilak et al. [21] proposed a Probabilistic Data Association Filter that detects multiple target clusters. This method can handle cases in which a target splits into multiple clusters or multiple clusters should be classified as a single target. Similarly, You et al. [22] used probabilistic spatio-temporal MB filtering to mark macroblocks as objects and then separate them from the noise. The algorithm can track many objects with real-time accuracy but can only be applied with a fixed camera, and objects must span at least two
macroblocks. Kas et al. [23] overcame the fixed camera problem using Global
Motion Estimation and Object History Images to handle background movement.
However, the number of moving objects needs to be small, and the moving objects must not occupy most of the frame area.
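Several of the surveyed methods preprocess the motion vector field before classification, for instance the 3×3 median filtering used by Wang et al. [16] to obtain a more reliable field. The sketch below is a generic illustration of that preprocessing step, not the authors' code; the H×W×2 array layout and the function name are my own assumptions.

```python
import numpy as np

def median_filter_mv(mv_field):
    """Apply a 3x3 median filter independently to the x and y
    components of a motion vector field (an H x W x 2 array).
    Isolated spurious vectors are replaced by the local median."""
    h, w, _ = mv_field.shape
    # replicate edge vectors so border blocks also get a full window
    padded = np.pad(mv_field, ((1, 1), (1, 1), (0, 0)), mode='edge')
    out = np.empty_like(mv_field)
    for i in range(h):
        for j in range(w):
            window = padded[i:i+3, j:j+3, :].reshape(9, 2)
            out[i, j] = np.median(window, axis=0)
    return out

# a mostly-static field with one spurious vector is smoothed away
field = np.zeros((5, 5, 2))
field[2, 2] = [10, -10]          # isolated noisy motion vector
clean = median_filter_mv(field)
```

Because a single noisy vector is always outvoted by its eight neighbors, the filtered field better reflects coherent motion, which is exactly why it is used as a cleanup step before MRF or LBP classification.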
1.2.2. Size of Macroblock approaches
The methods mentioned above share the trait of using motion vectors to detect moving objects. However, since motion vectors are usually created at the video encoder to optimize the video compression ratio, they do not always represent the real motion in the video sequence. As such, due to their coding-oriented nature, the motion vector fields must be preprocessed and refined to remove noise before moving objects can be detected.
Therefore, Poppe et al. [24] proposed an approach to detect moving objects in H264 video by using the sizes of the macroblocks after encoding (in bits). To achieve sub-macroblock-level (4×4) precision, information from the transform coefficients was also utilized. The system achieved high execution speeds, up to 20 times faster than the motion vector-based related works. The analysis was restricted to Predicted (P) frames, and a simple interpolation technique was employed to handle Intra (I) frames. The whole algorithm was based on the assumption that macroblocks containing an edge of a moving object are more difficult to compress, since it is hard to find a good match for those macroblocks in the reference frame(s).
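Poppe's assumption can be illustrated with a trivial sketch: treat macroblocks whose encoded size exceeds a threshold as candidate moving-object blocks. The sizes and the threshold below are invented for the illustration; the real method additionally uses transform coefficients for sub-macroblock precision and interpolates across I frames.

```python
import numpy as np

def foreground_from_mb_sizes(mb_sizes, threshold):
    """Mark macroblocks whose encoded size (in bits) exceeds a
    threshold as candidate moving-object (edge) blocks: blocks that
    are hard to predict from the reference frame need many bits."""
    return np.asarray(mb_sizes) > threshold

# background blocks compress to a few bits; blocks on a moving
# object's edge need many residual bits after prediction
sizes = np.array([[12,  10,  11],
                  [ 9, 340, 300],
                  [14, 280,  8]])
mask = foreground_from_mb_sizes(sizes, threshold=50)
```

Note how the interior of a large uniform object would also compress to near-zero bits, which is precisely the failure case for high-resolution videos discussed below.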
Based on Poppe’s idea, Vacavant et al. [25] used the macroblock size to detect moving objects by applying a Gaussian mixture model (GMM). The approach can represent the distribution of macroblock sizes well.
Although the methods of Poppe and Vacavant are good at removing background motion noise, they cannot produce good motion detection results for videos of high spatial resolution (such as 1920×1080 or 1280×720). In cases where the moving objects are large and contain a uniform color region (such as a black car), the sizes of the macroblocks corresponding to the interior of the moving object will be very small (normally around zero), and using a filtering threshold or parameter (however small) will not be effective. In those cases, the algorithm will determine these regions to be background.
Chapter Summarization
This chapter reviewed research on moving object detection in both the pixel domain and the compressed domain. Approaches using data from the pixel domain usually have high accuracy but take a large amount of computing resources and time. Approaches using data in the compressed domain have lower accuracy because the data in the compressed domain usually contain less information. In the next chapters, I will propose a method that can efficiently detect moving objects, especially in high-spatial-resolution video streams. The method uses data taken from the compressed video domain, including the sizes of the macroblocks to detect the skeleton of a moving object and the motion vectors to detect the details of the moving object.
CHAPTER 2.
METHODOLOGY
Video compression standard H264
Before the moving object detection method is proposed, this chapter presents some information about H264, a popular video compression standard, which is used to encode and decode the surveillance videos in this thesis.
Nowadays, the installation of surveillance cameras in houses has become quite common. Video data from a surveillance camera over a long period of time is usually very large. Consequently, videos need to be preprocessed and encoded before being used and transmitted over the network. There are many compression standards that are recognized and widely used. One of these is H264, or MPEG-4 Part 10 [26], a compression standard recognized by the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group.
2.1.1. H264 file structure
Normally, the video captured from the camera is compressed using a common video compression standard such as H261, H263, MP4, H264/AVC, or H265/HEVC. In this thesis, I encode and decode the videos using H264/AVC.
Typically, an H264 file is split into packets called Network Abstraction Layer Units (NALUs) [27], as shown in Fig. 2.1.
Figure 2.1. The structure of a H264 file
The first byte of a NALU indicates the NALU type, which determines the NALU's structure: it can be a slice or a parameter set for decompression. The meanings of the NALU types are listed in Table 2.1.
Table 2.1. NALU types
Type Definition
0 Undefined
1 Slice layer without partitioning non IDR
2 Slice data partition A layer
3 Slice data partition B layer
4 Slice data partition C layer
5 Slice layer without partitioning IDR
6 Additional information (SEI)
7 Sequence parameter set
8 Picture parameter set
9 Access unit delimiter
10 End of sequence
11 End of stream
12 Filler data
13..23 Reserved
24..31 Undefined
Apart from the header, the rest of the NALU is called the RBSP (Raw Byte Sequence Payload). The RBSP contains the data of the SODB (String Of Data Bits). According to the H264 specification (ISO/IEC 14496-10), if the SODB is empty (no bits are present), the RBSP is also empty. The first byte of the RBSP (left side) contains 8 bits of the SODB; each subsequent byte of the RBSP contains up to 8 bits of the SODB, continuing until fewer than 8 bits of the SODB remain.
Figure 2.2. RBSP structure
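As a sketch of how this structure is consumed in practice, the snippet below splits an Annex B byte stream on start codes (0x000001 or 0x00000001) and reads the NALU type from the low five bits of the first NALU byte, matching Table 2.1. This is a deliberately simplified illustration (it does not undo the emulation-prevention bytes inside the RBSP); the function names are mine.

```python
def split_nalus(bitstream: bytes):
    """Split an Annex B H264 byte stream on start codes
    (0x000001 or 0x00000001) and return the NALU payloads."""
    nalus, i, start = [], 0, None
    while i < len(bitstream) - 2:
        if bitstream[i:i+3] == b'\x00\x00\x01':
            if start is not None:
                # a 4-byte start code contributes one trailing zero byte
                end = i - 1 if bitstream[i-1] == 0 else i
                nalus.append(bitstream[start:end])
            i += 3
            start = i
        else:
            i += 1
    if start is not None:
        nalus.append(bitstream[start:])
    return nalus

def nalu_type(nalu: bytes) -> int:
    """Low 5 bits of the first NALU byte give the type (Table 2.1);
    bit 7 is forbidden_zero_bit, bits 5-6 are nal_ref_idc."""
    return nalu[0] & 0x1F

# two NALUs: a sequence parameter set (type 7) and an IDR slice (type 5)
stream = b'\x00\x00\x00\x01\x67\xAA' + b'\x00\x00\x01\x65\xBB'
```

In this way a parser can locate the parameter sets and slices it needs (e.g. P slices carrying motion vectors) without decoding pixel data.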
A video is normally divided into frames, and the encoder encodes them one by one. Each frame is encoded into slices, and each slice is divided into macroblocks (MBs). Typically, each frame corresponds to one slice, but sometimes a frame can be split into multiple slices. The slice categories are shown in Table 2.2. A slice consists of a header and a data section (Fig. 2.3). The header of the slice contains information about the type of the slice, the types of MBs in the slice, and the number of the frame the slice belongs to. The header also contains information about the reference frames and quantization parameters. The data portion of the slice contains the information about the macroblocks.
Table 2.2. Slice types
Type Description
0 P-slice. Consists of P-macroblocks (each macroblock is predicted using
one reference frame) and/or I-macroblocks.
1 B-slice. Consists of B-macroblocks (each macroblock is predicted
using one or two reference frames) and/or I-macroblocks.
2 I-slice. Contains only I-macroblocks. Each macroblock is predicted
from previously coded blocks of the same slice.
3 SP-slice. Consists of P and/or I-macroblocks and lets you switch
between encoded streams.
4 SI-slice. It consists of a special type of SI-macroblocks and lets you
switch between encoded streams.
5 P-slice.
6 B-slice.
7 I-slice.
8 SP-slice.
9 SI-slice.
Figure 2.3. Slice structure
2.1.2. Macroblock
The basic principle of a compression standard is to split the video into groups of frames. Each frame is divided into basic processing units (for example, in the H264/AVC standard, this unit is the Macroblock (MB), a region of 16×16 pixels). In regions carrying more detail, the MBs are subdivided into smaller sub-macroblocks (4×4 or 8×8 pixels). After compression, each MB contains the information used to recover the video later, including the motion vector, residual values, quantization parameter, etc., as in Fig. 2.4, where:
• ADDR is the position of the Macroblock in the frame;
• TYPE is the Macroblock type;
• QUANT is the quantization parameter;
• VECTOR is the motion vector;
• CBP (Coded Block Pattern) shows how the MB is split into smaller blocks;
• bN is the encoded residual data of the color channels (4 Y, 1 Cr, 1 Cb).
Figure 2.4. Macroblock structure
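The fields of Fig. 2.4 can be summarized as a simple record. The sketch below is only an illustrative container for parsed MB data; the field types and names are my own simplification of the actual bitstream syntax.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Macroblock:
    """One parsed macroblock; field names follow Fig. 2.4."""
    addr: int                  # ADDR: position of the MB in the frame
    mb_type: str               # TYPE: macroblock type (I, P or B)
    quant: int                 # QUANT: quantization parameter
    vector: Tuple[int, int]    # VECTOR: motion vector (x, y)
    cbp: int                   # CBP: coded block pattern (sub-block split)
    residuals: List[bytes]     # b0..b5: residual data (4 Y, 1 Cr, 1 Cb)

mb = Macroblock(addr=0, mb_type='P', quant=28,
                vector=(3, -1), cbp=0b101010,
                residuals=[b''] * 6)
```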
During decompression, the video decoder receives the compressed video data as a stream of binary data, decodes the elements to extract the encoded information, including the transform coefficients, the sizes of the MBs (in bits), the motion prediction information, and so on, and performs the inverse transformation to restore the original image data.
2.1.3. Motion vector
With H264 compression, the macroblocks of a frame are predicted based on information that has been transferred from the encoder to the decoder. There are usually two prediction modes: intra-frame prediction and inter-frame prediction. Intra-frame prediction uses compressed image data from the same frame as the macroblock being compressed, while inter-frame prediction uses previously compressed frames. Inter-frame prediction is accomplished through a motion prediction and compensation process in which the motion predictor retrieves the macroblock in the reference frame closest to the new macroblock and calculates the motion vector; this vector characterizes the shift of the new macroblock being encoded relative to the reference frame.
The referenced macroblock is sent to the subtractor together with the new macroblock to be coded in order to find the prediction error, or residual signal, which characterizes the difference between the predicted macroblock and the actual macroblock. The residual signal is transformed with the Discrete Cosine Transform and quantized to reduce the number of bits to be stored or transmitted. These coefficients, together with the motion vectors, are passed to the entropy coder to produce the bit stream. The binary video stream includes the transform coefficients, motion prediction information, compressed data structure information, and more. To perform motion estimation, the values of two frames are compared, with one frame used as a reference. When we want to compress an MB at position i of a frame, the video compression algorithm tries to find in the reference frame the MB with the smallest difference from the MB at position i. If such an MB is found in the reference frame at position j, the displacement between i and j is called the motion vector (MV) of the MB at position i (Fig. 2.5). Normally, an MV consists of two values: x (the column displacement of the MB) and y (the row displacement of the MB).
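The block-matching search described above can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD). This is a toy illustration of the principle, not the actual (much more elaborate) motion search of an H264 encoder; the block size, search range, and function name are chosen for the example.

```python
import numpy as np

def find_motion_vector(ref, cur, i, j, block=4, search=2):
    """Exhaustive block matching: find the offset (dx, dy) whose block
    in the reference frame has the smallest sum of absolute
    differences (SAD) from the current block at (i, j)."""
    target = cur[i:i+block, j:j+block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = i + dy, j + dx
            # skip candidates that fall outside the reference frame
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y+block, x:x+block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv  # (x, y) displacement, as in Fig. 2.5

# an object shifted right by 1 and down by 1 between frames
ref = np.zeros((8, 8), dtype=np.uint8)
ref[2:6, 2:6] = 100
cur = np.zeros((8, 8), dtype=np.uint8)
cur[3:7, 3:7] = 100
mv = find_motion_vector(ref, cur, 3, 3)
```

The vector returned points from the current block to its best match in the reference frame; a static block yields (0, 0), which is why near-zero vectors and small residuals dominate the background regions exploited in Chapter 1.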