Today most developers build on source code written by other parties. Because that code is modified frequently, developers must repeatedly assess the impact of each modification. A call graph, and in particular its specialized form, the call path, helps developers comprehend a modification. Third-party source code, however, is often too large for its parsed representation, needed to build a call graph or call path, to be held in memory. This paper presents a bidirectional search algorithm for call graphs of code bases too large for all parse results to be stored in memory. The algorithm consults the method definition in the source code corresponding to each visited node in the call graph. Its distinctive feature is that this information is used not to prioritize which node to visit next, but to decide which node to postpone visiting. The algorithm reduces path-extraction time by 8% in a case where ordinary path-search algorithms achieve no reduction.
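The paper's postponement heuristic is not spelled out here, but the underlying idea of searching a call graph from both ends can be sketched as follows. This is a minimal Python illustration with made-up method names; the forward frontier expands along callees, the backward frontier along callers, and a path is returned where they meet:

```python
from collections import deque

def bidirectional_call_path(calls, src, dst):
    """Find a call path from src to dst in a call graph.

    calls: dict mapping each method to the methods it calls.
    Searches forward from src and backward from dst, expanding
    the smaller frontier first, and joins the two half-paths
    when the frontiers meet.
    """
    if src == dst:
        return [src]
    # build reverse edges for the backward search
    callers = {}
    for caller, callees in calls.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)

    fwd = {src: [src]}          # node -> path from src to node
    bwd = {dst: [dst]}          # node -> path from node to dst
    fq, bq = deque([src]), deque([dst])

    while fq and bq:
        # expand the smaller frontier (the classic bidirectional heuristic)
        if len(fq) <= len(bq):
            node = fq.popleft()
            for nxt in calls.get(node, []):
                if nxt in bwd:                      # frontiers meet
                    return fwd[node] + bwd[nxt]
                if nxt not in fwd:
                    fwd[nxt] = fwd[node] + [nxt]
                    fq.append(nxt)
        else:
            node = bq.popleft()
            for prv in callers.get(node, []):
                if prv in fwd:
                    return fwd[prv] + bwd[node]
                if prv not in bwd:
                    bwd[prv] = [prv] + bwd[node]
                    bq.append(prv)
    return None                                     # no path exists

graph = {"main": ["parse", "run"], "run": ["step"], "step": ["log"]}
print(bidirectional_call_path(graph, "main", "log"))  # ['main', 'run', 'step', 'log']
```

The paper's contribution, deferring the visit of certain nodes based on the referenced method definitions, would slot into the frontier-selection step above.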
Rejuvenate Pointcut: A Tool for Pointcut Expression Recovery in Evolving Aspe... - Raffi Khatchadourian
Invited tool demonstration at the 8th International Conference on Aspect-Oriented Software Development (AOSD '09), Charlottesville, VA, USA, March 2-6, 2009.
This document contains a 25 question multiple choice quiz about computer science topics. The questions cover topics like Boolean algebra, graphs, probability, algorithms, automata theory, programming languages, operating systems, computer networks, databases and software engineering. For each question there are 4 possible answer choices, with one being marked as the correct answer. Explanations are provided for some of the questions.
The document is a sample paper for Class XII Subject Informatics Practices. It consists of 3 sections - Section A with 30 marks, Section B with 20 marks each, and Section C with 20 marks each. Section A contains short answer questions, Section B contains case studies and questions, and Section C contains SQL queries and output questions. The document provides answer keys for all questions.
The document provides a blue print and model question paper for Informatics Practices class 12 exam. The blue print outlines the topics, number of questions and marks distribution. It allocates 38 questions worth 70 marks across topics like Networking, Java Programming, RDBMS and IT Applications. The model question paper has 7 sections with 1 to 6 questions in each worth a total of 70 marks to be attempted in 3 hours. It covers different concepts like HTML, XML, Database, Java etc.
The document provides an interview questions and answers guide for C programming language. It includes questions on topics such as the definition of C language, differences between functions like printf and sprintf, static variables, unions, linked lists, storage classes in C, and hashing. For each question, it provides multiple detailed answers explaining concepts in C programming such as memory allocation, strings, pointers, macros and more.
Automatic Task-based Code Generation for High Performance DSEL - Joel Falcou
Providing high-level tools for parallel programming while sustaining a high level of performance has been a challenge that techniques like Domain Specific Embedded Languages try to solve. In previous work, we investigated the design of such a DSEL, NT2, providing a MATLAB-like syntax for parallel numerical computations inside a C++ library.
The main issue addressed here is how the limitations of classical DSEL generation and multithreaded code generation can be overcome.
Welcome to International Journal of Engineering Research and Development (IJERD) - IJERD Editor
The document presents a novel approach for forward error correction (FEC) decoding based on the belief propagation (BP) algorithm in LTE and WiMAX systems. Specifically, it proposes representing tail-biting convolutional codes and turbo codes using parity check matrices, which allows both code types to be decoded using a unified BP algorithm. This provides a lower complexity decoding architecture compared to traditional approaches. Simulation results show the BP algorithm achieves near-identical performance to maximum a posteriori (MAP) decoding for turbo codes, while being less complex. Representing codes with parity check matrices thus enables a universal decoder for LTE and WiMAX using a single BP algorithm.
The document discusses network layer protocols and IPv4 specifically. It provides three key points:
1) IPv4 is the main network layer protocol in the Internet that provides "best effort" delivery of packets called datagrams from source to destination through various networks in a connectionless manner.
2) IPv4 packets, or datagrams, contain a header with fields that provide routing information and a payload section for data. The header fields include source and destination addresses, identification information, flags for fragmentation, and more.
3) IPv4 supports fragmentation of large datagrams into smaller pieces to accommodate the size constraints of different networks. The fragmentation process and header fields related to fragmentation are described.
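The fragmentation arithmetic summarized in point 3 can be made concrete. A minimal sketch, assuming a standard 20-byte header and the rule that the fragment-offset field counts 8-byte units (so every fragment except the last must carry a multiple of 8 payload bytes):

```python
def fragment(payload_len, mtu, header_len=20):
    """Split an IPv4 payload into fragments that fit a link MTU.

    Each fragment carries its own copy of the header. Returns a list of
    (offset_in_8_byte_units, payload_bytes, more_fragments_flag) tuples.
    """
    # usable payload per fragment, rounded down to a multiple of 8
    max_data = (mtu - header_len) // 8 * 8
    frags, offset = [], 0
    while payload_len > 0:
        take = min(max_data, payload_len)
        payload_len -= take
        # MF flag is 1 on every fragment except the last
        frags.append((offset // 8, take, 1 if payload_len > 0 else 0))
        offset += take
    return frags

# a 4000-byte payload crossing a 1500-byte-MTU link
for off, size, mf in fragment(4000, 1500):
    print(off, size, mf)
# 0 1480 1
# 185 1480 1
# 370 1040 0
```

Note how the second fragment's offset is 185, i.e. 1480 / 8, exactly as in the textbook example this arithmetic comes from.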
Radical Data Compression Algorithm Using Factorization - CSCJournals
This work presents an encoding algorithm that transforms a message into a "compressed" form with fewer characters, which can only be understood by decoding the encoded data to reconstruct the original message. The proposed factorization techniques, combined with a lossless method, are used to compress and decompress data, reducing memory usage and thereby the cost of communication. The proposed algorithms also shield the data from cryptanalysts during storage or transmission.
This document summarizes research on optimizing the LZ77 data compression algorithm. The researchers present a method to improve LZ77 compression by using variable triplet sizes instead of fixed sizes. Specifically, when the offset is equal to the match length, they represent it with a "doublet" of just length and next symbol, saving bits compared to the standard triplet representation. They show this optimization achieves higher compression ratios than the conventional LZ77 algorithm in experiments. The decoding process is made slightly more complex to identify doublets versus triplets but still allows decompressing the encoded data back to the original.
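The doublet idea described above can be sketched in a few lines. This is a simplified illustration, not the researchers' implementation: a toy LZ77 coder that emits a 2-field token whenever the match offset equals the match length, and a decoder that tells the two token shapes apart by their arity:

```python
def lz77_encode(data, window=16):
    """Minimal LZ77 encoder with the doublet optimization:
    when offset == length, the offset is dropped and a 2-field
    token (length, next_symbol) replaces the usual triplet."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # search the sliding window
            k = 0
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        nxt = data[i + best_len]                 # literal after the match
        if best_len and best_off == best_len:
            out.append((best_len, nxt))          # doublet: offset implied
        else:
            out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decode(tokens):
    buf = []
    for t in tokens:
        # a 2-field token means offset == length
        off, length, nxt = (t[0], t[0], t[1]) if len(t) == 2 else t
        for _ in range(length):                  # byte-wise copy handles overlap
            buf.append(buf[-off])
        buf.append(nxt)
    return "".join(buf)

msg = "ababcababc"
enc = lz77_encode(msg)
assert lz77_decode(enc) == msg
```

On this input the repeated "ab" produces a doublet, saving the offset field exactly as the summarized optimization intends.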
This document describes a new algorithm for optimally formatting source code. It uses dynamic programming to choose between alternative layouts of code specified by combinators. The layouts are scored based on a cost function that accounts for line breaks and characters beyond a right margin. By considering the interaction of layout choices across a whole program, the algorithm can pick the globally optimal formatting.
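The layout-scoring idea can be illustrated with a toy model. This sketch enumerates all layouts exhaustively rather than using the summarized dynamic program, and its "concat"/"choice" nodes are a stand-in for the combinators; the cost function follows the description above: characters beyond the margin dominate, then fewer line breaks win:

```python
from itertools import product

def alternatives(doc):
    """Enumerate every concrete layout of a document.
    A doc is a str, a ("concat", [parts]) node, or a ("choice", [parts]) node."""
    if isinstance(doc, str):
        return [doc]
    kind, parts = doc
    if kind == "choice":
        return [lay for p in parts for lay in alternatives(p)]
    # concat: cartesian product of the parts' layouts
    return ["".join(combo) for combo in product(*map(alternatives, parts))]

def cost(layout, margin):
    """Overflow past the margin dominates; then prefer fewer line breaks."""
    lines = layout.split("\n")
    overflow = sum(max(0, len(line) - margin) for line in lines)
    return 100 * overflow + (len(lines) - 1)

def best(doc, margin):
    return min(alternatives(doc), key=lambda lay: cost(lay, margin))

call = ("choice", ["f(a, b, c)", "f(\n  a,\n  b,\n  c)"])
print(best(call, margin=10))   # the one-line form fits, so it wins
print(best(call, margin=8))    # it overflows, so the broken layout wins
```

A real implementation would memoize scores over subdocuments instead of enumerating, which is what makes the global optimum tractable for whole programs.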
The Web service composition based on colored Petri net - IJRESJOURNAL
ABSTRACT: This paper puts forward an automatic approach to obtaining a colored Petri net model of a web service composition. Each web service composition has a Petri net described in PNML (Petri Net Markup Language) plus OWL (Web Ontology Language), where the PNML part describes the Petri net structure and the OWL part gives the domain-ontology definitions (semantic markups) of the net's place elements. By fusing the places related through input/output interactions among the web services' colored Petri nets and merging the sub-service nets, the colored Petri net model of the composition is obtained. This work establishes a necessary precondition for the practical application of colored Petri nets in service computing.
This document provides an introduction to coalesced hashing, a technique for storing and retrieving records from a database. Coalesced hashing stores records in two areas: an address region and a cellar. It is sensitive to the relative sizes of these areas, represented by an address factor. The document analyzes how to optimize this factor to minimize search time. It finds the compromise address factor of 0.86 works well. Coalesced hashing with this optimized factor outperforms other methods like separate chaining and linear probing.
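The mechanism can be sketched in a few lines of illustrative Python (not the document's analysis code): keys hash into the address region only, and a colliding key is placed in the highest-numbered free slot, so the cellar above the address region fills first and probe chains coalesce:

```python
class CoalescedHash:
    """Toy coalesced hash table with an address region and a cellar."""

    def __init__(self, size=11, address_factor=0.86):
        self.size = size
        # keys hash only into the first `region` slots;
        # the remaining slots form the cellar
        self.region = max(1, int(size * address_factor))
        self.key = [None] * size
        self.link = [None] * size          # next slot in the probe chain

    def insert(self, k):
        i = hash(k) % self.region
        if self.key[i] is None:
            self.key[i] = k
            return
        while True:                        # walk to the end of the chain
            if self.key[i] == k:
                return                     # already present
            if self.link[i] is None:
                break
            i = self.link[i]
        j = self.size - 1                  # highest free slot: cellar first
        while self.key[j] is not None:
            j -= 1
            if j < 0:
                raise OverflowError("table full")
        self.key[j] = k
        self.link[i] = j                   # append to the coalesced chain

    def contains(self, k):
        i = hash(k) % self.region
        while i is not None:
            if self.key[i] == k:
                return True
            i = self.link[i]
        return False
```

With `address_factor=0.86`, the compromise value the document arrives at, collisions land in the cellar first and chains in the address region coalesce only after the cellar is exhausted.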
Towards Practical Homomorphic Encryption with Efficient Public key Generation - IDES Editor
With the advent of cloud computing, several security and privacy challenges have arisen. To address many of these privacy issues, 'processing the encrypted data' has been identified as a potential solution, which requires a Fully Homomorphic Encryption (FHE) scheme. After the breakthrough work of Craig Gentry in devising an FHE scheme, several new homomorphic encryption schemes and variants have been proposed. However, these theoretically feasible schemes are not viable for practical deployment due to their high computational complexity. In this work, a variant of DGHV's integer-based Somewhat Homomorphic Encryption (SHE) scheme with an efficient public-key generation method is presented. The complexities of the various algorithms involved in the scheme are significantly lower. The semantic security of the variant is based on the two-element Partial Approximate Greatest Common Divisors (PAGCD) problem. Experimental results show that the proposed scheme is more efficient than any other existing integer-based SHE scheme, and hence practical.
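The integer-based construction the abstract refers to can be illustrated with a toy version of the DGHV idea. The parameters here are far too small to be secure, and this is a sketch of the general scheme, not the paper's variant: a bit m is encrypted as c = p*q + 2*r + m under a secret odd integer p, and adding or multiplying ciphertexts acts as XOR or AND on the plaintext bits, as long as the noise 2*r + m stays well below p:

```python
import random

def keygen(bits=32):
    """Secret key: a random odd integer p with its top bit set."""
    return random.getrandbits(bits) | 1 | (1 << (bits - 1))

def encrypt(p, m, noise_bits=8, q_bits=64):
    """DGHV-style ciphertext c = p*q + 2*r + m for a single bit m."""
    q = random.getrandbits(q_bits) + 1
    r = random.getrandbits(noise_bits)       # small noise
    return p * q + 2 * r + m

def decrypt(p, c):
    """Recover the bit: reduce mod p (removes p*q), then mod 2 (removes 2*r)."""
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1              # ciphertext addition = XOR
assert decrypt(p, c0 * c1) == 0              # ciphertext multiplication = AND
```

Multiplication squares the noise term, which is why such schemes are only "somewhat" homomorphic: after a bounded number of multiplications the noise exceeds p and decryption fails.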
This document discusses various tokens and language elements in C++ including keywords, identifiers, constants, strings, punctuators, operators, data types, and control structures. It provides details on the different types of constants, keywords, identifiers and their rules. It also explains basic concepts like tokens, expressions, operators, data types and control structures in C++.
(Costless) Software Abstractions for Parallel Architectures - Joel Falcou
Performing large, intensive or non-trivial computations on array-like data structures is one of the most common tasks in scientific computing, video game development and other fields. This is backed up by the large number of tools, languages and libraries for performing such tasks. If we restrict ourselves to C++-based solutions, more than a dozen such libraries exist, from BLAS/LAPACK C++ bindings to the template-metaprogramming-based Blitz++ or Eigen. While all of these libraries provide good performance or good abstraction, none of them seems to fit the needs of so many different user types.
Moreover, as parallel system complexity grows, maintaining all those components quickly becomes unwieldy. This talk explores various software design techniques - like Generative Programming, Metaprogramming and Generic Programming - and their application to the implementation of a parallel computing library in such a way that:
- abstraction and expressiveness are maximized
- efficiency overhead is minimized
We'll skim over various applications and see how they can benefit from such tools. We will conclude by discussing the lessons learnt from this kind of implementation and how those lessons can translate into new directions for the language itself.
This document discusses object-oriented programming concepts like classes, objects, encapsulation, inheritance, polymorphism, and more. It provides examples of defining classes with data members and member functions, creating objects, passing objects as arguments, and using constructors and destructors. Key points include how memory is allocated for classes and objects, characteristics of constructors, constructor overloading, and examples of programs using constructors and destructors to print student details.
Research Inventy : International Journal of Engineering and Science is publis... - researchinventy
This document provides an informal introduction to modeling the Resource Reservation Protocol (RSVP) using Coloured Petri Nets (CPNs). It describes the key components of CPNs including places, transitions, types, markings, and dynamic behavior. An example CPN model of a simplified RSVP service specification is presented to illustrate these concepts. The model includes places representing network nodes and protocol states, transitions representing events, and arc expressions defining token movement between places during transition firing. The example demonstrates how CPNs can be used to formally specify and analyze communication protocols like RSVP.
The document provides an introduction to C++ programming. It outlines key learning outcomes which include data types, input/output operators, control statements, arrays, functions, structures, and object-oriented programming concepts. It then discusses some of these concepts in more detail, including an overview of C++, its characteristics as both a high-level and low-level language, object-oriented programming principles, and basic data types.
This document provides an overview of object-oriented concepts and C++ programming. It discusses procedural programming, structured programming, and object-oriented programming approaches. It describes the key characteristics of OOP including modularity, abstraction, encapsulation, inheritance, polymorphism, and dynamic binding. The document also discusses the history and characteristics of the C++ programming language, and covers C++ tokens, identifiers, keywords, and constants.
This document provides programming design guidelines for naming conventions, code documentation standards, and structuring code. It recommends using descriptive names for variables and objects with unique prefixes based on their type. Functions and code sections should be well documented with comments. Code structure is important for readability; related code blocks should be indented consistently.
HDR Defence - Software Abstractions for Parallel Architectures - Joel Falcou
Performing large, intensive or non-trivial computations on array-like data structures is one of the most common tasks in scientific computing, video game development and other fields. This is backed up by the large number of tools, languages and libraries for performing such tasks. If we restrict ourselves to C++-based solutions, more than a dozen such libraries exist, from BLAS/LAPACK C++ bindings to the template-metaprogramming-based Blitz++ or Eigen.
While all of these libraries provide good performance or good abstraction, none of them seems to fit the needs of so many different user types. Moreover, as parallel system complexity grows, maintaining all those components quickly becomes unwieldy. This thesis explores various software design techniques - like Generative Programming, Metaprogramming and Generic Programming - and their application to the implementation of various parallel computing libraries in such a way that abstraction and expressiveness are maximized while efficiency overhead is minimized.
This document provides an overview of arrays and strings in C programming. It discusses:
- Arrays can store a collection of like-typed data and each element is accessed via an index. Common array types include one-dimensional and multi-dimensional arrays.
- Strings in C are arrays of characters that are null-terminated. Functions like printf, scanf, gets and puts can be used to output and input strings.
- Linear and binary search algorithms are described for finding a value within an array. Sorting techniques like bubble, insertion and selection sorts are also mentioned.
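The searches and the simplest of the sorts mentioned above are shown here in Python for compactness (the document's own context is C, but the logic is identical):

```python
def linear_search(a, x):
    """Scan every element in turn; O(n), works on unsorted data."""
    for i, v in enumerate(a):
        if v == x:
            return i
    return -1

def binary_search(a, x):
    """Repeatedly halve a *sorted* array; O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def bubble_sort(a):
    """Adjacent swaps bubble the largest element to the end each pass."""
    a = list(a)
    for n in range(len(a) - 1, 0, -1):
        for i in range(n):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

data = [42, 7, 19, 3, 25]
assert linear_search(data, 19) == 2
s = bubble_sort(data)          # [3, 7, 19, 25, 42]
assert binary_search(s, 25) == 3
```

Binary search's requirement of sorted input is why the two topics are taught together: sort once, then search repeatedly in logarithmic time.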
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
SIMILARITY SEARCH FOR TRAJECTORIES OF RFID TAGS IN SUPPLY CHAIN TRAFFIC - ijdms
In this fast-developing period, the use of RFID has become more significant in many application domains due to the drastic cut in the price of RFID tags. This technology is evolving as a means of tracking objects and inventory items. One such diversified application domain is Supply Chain Management, where RFID is applied because manufacturers and distributors need to analyse product and logistics information in order to get the right quantity of products arriving at the right time at the right locations. Usually the RFID tag information collected from RFID readers is stored in a remote database, and the RFID data is analyzed by querying this database using a path-encoding method based on the properties of prime numbers. In this paper we propose an improved encoding scheme that encodes the flows of objects in RFID tag movement. A trajectory of moving RFID tags consists of a sequence of tags that changes over time. With the integration of wireless communications and positioning technologies, the concept of the trajectory database has become increasingly important and has posed great challenges to the data-mining community. The support of efficient trajectory-similarity techniques is indisputably very important for the quality of data-analysis tasks in supply chain traffic, enabling the detection of similar product movements.
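The prime-number path encoding that this line of work builds on can be sketched as follows. This is a hypothetical example with made-up location names: a plain product of primes records which locations a tag visited, and divisibility answers containment queries; order-preserving variants (as in the encoding schemes the abstract discusses) need exponents or a companion number on top of this:

```python
# each location in the supply chain is assigned a distinct prime
PRIMES = {"factory": 2, "warehouse": 3, "distributor": 5, "store": 7}

def encode_path(path):
    """Encode the set of locations a tag visited as a product of primes."""
    code = 1
    for loc in path:
        code *= PRIMES[loc]
    return code

def visited(code, loc):
    """Containment query: did the tag pass through loc?"""
    return code % PRIMES[loc] == 0

trip = encode_path(["factory", "warehouse", "store"])   # 2 * 3 * 7 = 42
assert visited(trip, "warehouse")
assert not visited(trip, "distributor")
```

The appeal for RFID databases is that a whole path compresses to one integer, and "did this product pass through the warehouse?" becomes a single modulo operation instead of a sequence scan.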
1) SystemCV is a tool that combines SystemC hardware/software simulation with visualization capabilities. It allows for visualization of modules and the overall system as well as incremental development of communication and computation models.
2) SystemCV uses the SystemC architecture standard and adds a Visualization Tool Kit (VTK) for visualization. It supports hardware/software co-simulation, concurrent implementation, architecture exploration, and hierarchical modular design.
3) An example case study called Jigsaw is presented which uses SystemCV for designing an imaging system with 2D visualization of data flow and communications between modules.
The document outlines the teaching schedule for February 2016 for the Forensic Medicine Department's 3rd semester students. Various topics related to forensic medicine such as impotency, sterility, virginity, pregnancy, delivery, and sexual offenses will be covered on different dates by teachers including Dr. Pramod Kumar, Dr. Prasanna, Dr. Dhritiman Nath, and Dr. Satish Kumar. An integrated teaching session on drug abuse will also be held on February 15th for both student batches.
Marina Jardine is seeking a housekeeping position and has over 10 years of relevant experience. She has demonstrated cleaning experience, current WHMIS and First Aid/CPR certifications, and is detail-oriented with excellent communication skills. Her experience includes operating vacuums, dusting, washing, sweeping, mopping, scrubbing, making beds and changing linens for various hotels and facilities in Winnipeg between 2008-2015.
The document describes the concepts and levels of interculturality in Costa Rican education. It explains that intercultural education is necessary because of the country's cultural diversity. It addresses interculturality at the attitudinal, epistemological, methodological and management levels. This implies changes in attitudes, content, teaching strategies and the management of educational institutions so as to complement the knowledge and learning processes of the different cultures.
Radical Data Compression Algorithm Using FactorizationCSCJournals
This work deals with encoding algorithm that conveys a message that generates a “compressed” form with fewer characters only understood through decoding the encoded data which reconstructs the original message. The proposed factorization techniques in conjunction with lossless method were adopted in compression and decompression of data for exploiting the size of memory, thereby decreasing the cost of the communications. The proposed algorithms shade the data from the eyes of the cryptanalysts during the data storage or transmission.
This document summarizes research on optimizing the LZ77 data compression algorithm. The researchers present a method to improve LZ77 compression by using variable triplet sizes instead of fixed sizes. Specifically, when the offset is equal to the match length, they represent it with a "doublet" of just length and next symbol, saving bits compared to the standard triplet representation. They show this optimization achieves higher compression ratios than the conventional LZ77 algorithm in experiments. The decoding process is made slightly more complex to identify doublets versus triplets but still allows decompressing the encoded data back to the original.
This document describes a new algorithm for optimally formatting source code. It uses dynamic programming to choose between alternative layouts of code specified by combinators. The layouts are scored based on a cost function that accounts for line breaks and characters beyond a right margin. By considering the interaction of layout choices across a whole program, the algorithm can pick the globally optimal formatting.
The Web service composition based on colored Petri netIJRESJOURNAL
ABSTRACT: An automatic approach to get colored Petri net model of web service composition was put forward in this paper. Each web service composition had its Petri net which described in PNML( Petri net Markup Language) +OWL(Ontology Web Language) ,where PNML part describes the Petri net structure and OWL part shows out their domain ontology definitions (semantic markups) of place elements of the Petri net. Based on the input / output interaction related places elements among web service colored Petri nets, through place fusion operation, merging sub-web service Petri nets, we can get the colored Petri net model of web service composition, this work proposes the necessary precondition for practical application of colored Petri net in the context of service computing.
This document provides an introduction to coalesced hashing, a technique for storing and retrieving records from a database. Coalesced hashing stores records in two areas: an address region and a cellar. It is sensitive to the relative sizes of these areas, represented by an address factor. The document analyzes how to optimize this factor to minimize search time. It finds the compromise address factor of 0.86 works well. Coalesced hashing with this optimized factor outperforms other methods like separate chaining and linear probing.
Towards Practical Homomorphic Encryption with Efficient Public key GenerationIDES Editor
With the advent of cloud computing several security
and privacy challenges are put forth. To deal with many of
these privacy issues, ‘processing the encrypted data’ has been
identified as a potential solution, which requires a Fully
Homomorphic Encryption (FHE) scheme. After the
breakthrough work of Craig Gentry in devising an FHE,
several new homomorphic encryption schemes and variants
have been proposed. However, all those theoretically feasible
schemes are not viable for practical deployment due to their
high computational complexities. In this work, a variant of
the DGHV’s integer based Somewhat Homomorphic
Encryption (SHE) scheme with an efficient public key
generation method is presented. The complexities of various
algorithms involved in the scheme are significantly low. The
semantic security of the variant is based on the two-element
Partial Approximate Greatest Common Divisors (PAGCD)
problem. Experimental results prove that the proposed scheme
is very much efficient than any other integer based SHE
scheme existing today and hence practical.
This document discusses various tokens and language elements in C++ including keywords, identifiers, constants, strings, punctuators, operators, data types, and control structures. It provides details on the different types of constants, keywords, identifiers and their rules. It also explains basic concepts like tokens, expressions, operators, data types and control structures in C++.
(Costless) Software Abstractions for Parallel ArchitecturesJoel Falcou
Performing large, intensive or non-trivial computing on array like data structures is one of the most common task in scientific computing, video game development and other fields. This matter of fact is backed up by the large number of tools, languages and libraries to perform such tasks. If we restrict ourselves to C++ based solutions, more than a dozen such libraries exists from BLAS/LAPACK C++ binding to template meta-programming based Blitz++ or Eigen. If all of these libraries provide good performance or good abstraction, none of them seems to fit the need of so many different user types.
Moreover, as parallel system complexity grows, the need to maintain all those components quickly become unwieldy. This talk explores various software design techniques - like Generative Programming, MetaProgramming and Generic Programming - and their application to the implementation of a parallel computing librariy in such a way that:
- abstraction and expressiveness are maximized - cost over efficiency is minimized
We'll skim over various applications and see how they can benefit from such tools. We will conclude by discussing what lessons were learnt from this kind of implementation and how those lessons can translate into new directions for the language itself.
This document discusses object-oriented programming concepts like classes, objects, encapsulation, inheritance, polymorphism, and more. It provides examples of defining classes with data members and member functions, creating objects, passing objects as arguments, and using constructors and destructors. Key points include how memory is allocated for classes and objects, characteristics of constructors, constructor overloading, and examples of programs using constructors and destructors to print student details.
Research Inventy : International Journal of Engineering and Science is publis...researchinventy
This document provides an informal introduction to modeling the Resource Reservation Protocol (RSVP) using Coloured Petri Nets (CPNs). It describes the key components of CPNs including places, transitions, types, markings, and dynamic behavior. An example CPN model of a simplified RSVP service specification is presented to illustrate these concepts. The model includes places representing network nodes and protocol states, transitions representing events, and arc expressions defining token movement between places during transition firing. The example demonstrates how CPNs can be used to formally specify and analyze communication protocols like RSVP.
The document provides an introduction to C++ programming. It outlines key learning outcomes which include data types, input/output operators, control statements, arrays, functions, structures, and object-oriented programming concepts. It then discusses some of these concepts in more detail, including an overview of C++, its characteristics as both a high-level and low-level language, object-oriented programming principles, and basic data types.
This document provides an overview of object-oriented concepts and C++ programming. It discusses procedural programming, structured programming, and object-oriented programming approaches. It describes the key characteristics of OOP including modularity, abstraction, encapsulation, inheritance, polymorphism, and dynamic binding. The document also discusses the history and characteristics of the C++ programming language, and covers C++ tokens, identifiers, keywords, and constants.
This document provides programming design guidelines for naming conventions, code documentation standards, and structuring code. It recommends using descriptive names for variables and objects with unique prefixes based on their type. Functions and code sections should be well documented with comments. Code structure is important for readability; related code blocks should be indented consistently.
HDR Defence - Software Abstractions for Parallel Architectures - Joel Falcou
Performing large, intensive, or non-trivial computations on array-like data structures is one of the most common tasks in scientific computing, video game development, and other fields. This is borne out by the large number of tools, languages, and libraries built to perform such tasks. If we restrict ourselves to C++-based solutions, more than a dozen such libraries exist, from BLAS/LAPACK C++ bindings to the template metaprogramming-based Blitz++ or Eigen.
While all of these libraries provide good performance or good abstraction, none of them seems to fit the needs of so many different user types. Moreover, as parallel system complexity grows, maintaining all those components quickly becomes unwieldy. This thesis explores various software design techniques - Generative Programming, Metaprogramming, and Generic Programming - and their application to the implementation of various parallel computing libraries in such a way that abstraction and expressiveness are maximized while efficiency overhead is minimized.
This document provides an overview of arrays and strings in C programming. It discusses:
- Arrays can store a collection of like-typed data and each element is accessed via an index. Common array types include one-dimensional and multi-dimensional arrays.
- Strings in C are arrays of characters that are null-terminated. Functions like printf, scanf, gets and puts can be used to output and input strings.
- Linear and binary search algorithms are described for finding a value within an array. Sorting techniques like bubble, insertion and selection sorts are also mentioned.
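The binary search mentioned above is compact enough to sketch directly; a minimal version over a sorted array (Python here purely for illustration, since that document's own examples are in C):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent (O(log n))."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1

nums = [2, 5, 8, 12, 16, 23]
print(binary_search(nums, 12))  # 3
print(binary_search(nums, 7))   # -1
```

Linear search, by contrast, scans every element in turn and needs no ordering, which is why the sorting techniques are discussed alongside it.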
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
SIMILARITY SEARCH FOR TRAJECTORIES OF RFID TAGS IN SUPPLY CHAIN TRAFFIC - ijdms
In this fast-developing period, the use of RFID has become more significant in many application domains due to the drastic drop in the price of RFID tags. This technology is evolving as a means of tracking objects and inventory items. One such application domain is Supply Chain Management, where RFID is applied because manufacturers and distributors need to analyse product and logistics information in order to get the right quantity of products arriving at the right time at the right locations. Usually the RFID tag information collected from RFID readers is stored in a remote database, and the RFID data is analyzed by querying this database using a path encoding method based on the properties of prime numbers. In this paper we propose an improved encoding scheme that encodes the flows of objects in RFID tag movement. A trajectory of moving RFID tags consists of a sequence of tags that changes over time. With the integration of wireless communications and positioning technologies, the concept of the trajectory database has become increasingly important and has posed great challenges to the data mining community. The support of efficient trajectory similarity techniques is indisputably very important for the quality of data analysis tasks in supply chain traffic, enabling the discovery of similar product movements.
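The prime-number path encoding that the abstract builds on can be illustrated in a few lines: assign each location a distinct prime, encode a path as the product of its locations' primes, and answer "was location X visited?" by divisibility. The location names below are hypothetical, and a plain product deliberately ignores visit order:

```python
from math import prod

# Hypothetical supply-chain locations, each assigned a distinct prime.
PRIMES = {"factory": 2, "warehouse": 3, "distributor": 5, "store": 7}

def encode_path(path):
    """Encode a path (sequence of locations) as the product of its primes."""
    return prod(PRIMES[loc] for loc in path)

def visited(code, loc):
    """A location was visited iff its prime divides the path code."""
    return code % PRIMES[loc] == 0

code = encode_path(["factory", "warehouse", "store"])
print(visited(code, "warehouse"))    # True
print(visited(code, "distributor"))  # False
```

A repeated location simply raises its prime's exponent, so divisibility still works; recovering the order of visits requires the richer encodings the paper is concerned with.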
1) SystemCV is a tool that combines SystemC hardware/software simulation with visualization capabilities. It allows for visualization of modules and the overall system as well as incremental development of communication and computation models.
2) SystemCV uses the SystemC architecture standard and adds a Visualization Tool Kit (VTK) for visualization. It supports hardware/software co-simulation, concurrent implementation, architecture exploration, and hierarchical modular design.
3) An example case study called Jigsaw is presented which uses SystemCV for designing an imaging system with 2D visualization of data flow and communications between modules.
The document outlines the teaching schedule for February 2016 for the Forensic Medicine Department's 3rd semester students. Various topics related to forensic medicine such as impotency, sterility, virginity, pregnancy, delivery, and sexual offenses will be covered on different dates by teachers including Dr. Pramod Kumar, Dr. Prasanna, Dr. Dhritiman Nath, and Dr. Satish Kumar. An integrated teaching session on drug abuse will also be held on February 15th for both student batches.
Marina Jardine is seeking a housekeeping position and has over 10 years of relevant experience. She has demonstrated cleaning experience, current WHMIS and First Aid/CPR certifications, and is detail-oriented with excellent communication skills. Her experience includes operating vacuums, dusting, washing, sweeping, mopping, scrubbing, making beds and changing linens for various hotels and facilities in Winnipeg between 2008-2015.
The document describes the concepts and levels of interculturality in Costa Rican education. It explains that intercultural education is necessary given the country's cultural diversity. It addresses interculturality at the attitudinal, epistemological, methodological, and management levels. This implies changes in attitudes, content, teaching strategies, and the management of educational institutions in order to complement the knowledge and learning processes of the different cultures.
The document describes several new technologies such as the free market, Google, APIs, Java, and data storage. It explains that the free market allows prices to be determined by supply and demand and that Google generates sales through advertising. It also describes APIs as sets of tools that facilitate application development, and Java as an object-oriented, cross-platform programming language.
Designing a Prezi presentation is not complicated. You just have to want to tell a story differently, relying on the zoom and rotation effects of the chosen theme's template along its predefined path.
This document summarizes research on childhood obesity among children aged 5 to 11 in Mexico. It explores the causes and effects of childhood obesity, including risk factors, resulting diseases, and treatments such as balanced diets and healthy lifestyles. The objective is to inform about childhood obesity and ways to prevent it, since obesity levels among Mexican children are rising.
Auguste and Gustave Perret were pioneers in the use of reinforced concrete as a constructive and ornamental material. Together with Tony Garnier, they proposed a new classicism that used modern materials while keeping classical proportions. Later, Le Corbusier and Mies van der Rohe developed emblematic works of the modern movement using steel and glass alongside concrete. The Bauhaus also promoted the use of new industrialized materials and functionality in architecture.
Corazón verde (Identificar y caracterizar las fuentes hídricas presentes en l...) - CTeI Putumayo
The problem arises from students observing daily that the water sources around the institution have diminished in flow. This has raised countless questions about why this is happening and what its causes are. We therefore want, first of all, to get to know our context and the water sources present in the surroundings of the institution.
The document discusses evidence that supports the theory that Vikings discovered America before Columbus.
Archaeological evidence found in locations like Newfoundland, such as remains of horses dated to around 1000 AD that match Norwegian breeds, and buildings resembling Norse architecture from the same time period at sites like L'Anse aux Meadows provide strong physical proof of Viking presence. Additionally, discovered Viking artifacts like nails, woodworking, and 11th century coins not native to Native Americans indicate Norse exploration.
Written accounts like the Icelandic Sagas of Erik the Red, though sometimes dismissed for lacking validation, describe voyages that historians have found plausible given Viking seafaring practices and place names, providing early documentation of their travels to North America.
People Incorporated offers microenterprise loans up to $50,000 and small business capital loans up to $200,000 for starting or expanding businesses. They also provide training and technical assistance for individuals interested in entrepreneurship through workshops listed on their website. Potential applicants can contact Jenny Knox at the provided address and phone number for more information and to receive a loan application.
Finding the shortest path in a graph and its visualization using C# and WPF - IJECEIAES
This document summarizes a study that implemented Dijkstra's algorithm to find the shortest path between two vertices in an undirected graph using C# and WPF. It describes Dijkstra's algorithm and how it was programmed using .NET 4.0, Visual Studio 2010, and WPF. The program allows drawing graphs, finding the shortest path between two vertices by highlighting the path, and displaying the path length. Screenshots show example outputs of the program finding shortest paths in sample graphs.
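For reference, Dijkstra's algorithm itself is compact; below is a sketch using a binary heap (Python rather than the study's C#/WPF, and an illustrative adjacency-list layout rather than the program's own data structures):

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path on a weighted graph via Dijkstra's algorithm.

    graph: {node: [(neighbor, weight), ...]} adjacency list; an undirected
    graph lists each edge in both directions.
    Returns (distance, path), or (inf, []) if target is unreachable.
    """
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter route was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float("inf"), []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Undirected triangle: going A->B->C (1 + 2) beats the direct A->C edge (4).
g = {"A": [("B", 1), ("C", 4)],
     "B": [("A", 1), ("C", 2)],
     "C": [("A", 4), ("B", 2)]}
print(dijkstra(g, "A", "C"))  # (3, ['A', 'B', 'C'])
```

The highlighted path and path length shown in the program's screenshots correspond to the returned `path` and `distance` here.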
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio... - RSIS International
In this paper, we have designed the VLSI hardware for a novel RS decoding algorithm suitable for multi-Gb/s communication systems. We show that the performance benefit of the algorithm is truly realized when it is implemented in hardware, thus avoiding the extra processing time of the fetch-decode-execute cycle of traditional microprocessor-based computing systems. The new algorithm, with its lower time complexity, combined with an application-specific hardware implementation, makes it suitable for high-speed real-time systems with hard timing constraints. The design is implemented as digital hardware using VHDL.
Introduction to HDLs: overview of digital design with Verilog HDL, basic concepts, data types, system tasks, and compiler directives. Hierarchical modeling, concepts of modules and ports. Gate-level modeling; dataflow modeling - continuous assignments, timing, and delays. Programming Language Interface.
Design of arithmetic circuits using gate-level/dataflow modeling - adders, subtractors, 4-bit binary and BCD adders, and 8-bit comparators.
Verification: functional verification, simulation types, design of the stimulus block.
This PPT is intended to provide thorough coverage of Verilog HDL concepts based on the fundamental principles of digital design. These are the foundational concepts for programming digital electronics.
Performance Analysis of DRA Based OFDM Data Transmission With Respect to Nove... - IJERA Editor
In this paper, we have analyzed the performance characteristics of OFDM data transmission with regard to a new high-speed RS decoding algorithm. The main characteristics identified are the speed and accuracy of the transmission irrespective of channel behaviour. We consider two cases: data transmission without error control and data transmission with error control. Each case is duly analyzed, and it is shown that high-speed RS decoding algorithms can actually benefit OFDM data transmission for advanced communication systems only if implemented at the hardware (VLSI) level, because of the significant processing overhead involved in a software-based implementation, even when the algorithm has lower computational complexity.
Bidirectional search routing protocol for mobile ad hoc networks - IAEME Publication
This document summarizes a research paper that proposes a new reactive routing protocol called Bidirectional Search Routing (BSR) for mobile ad hoc networks. BSR uses a bidirectional search algorithm where the source and destination nodes simultaneously search for each other to discover routes. This reduces the route discovery time compared to other reactive protocols by up to 53% for small and medium networks. The paper describes the bidirectional search algorithm and how it is applied in BSR. It then compares the performance of BSR to Dynamic Source Routing (DSR) and Ad-Hoc On-Demand Distance Vector (AODV) routing in terms of route discovery time and delay, showing improvements over these existing protocols.
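The route-discovery idea - two searches expanding toward each other until they meet - can be illustrated outside any routing stack with a plain bidirectional breadth-first search; this is a sketch of the general technique, not the BSR protocol itself:

```python
from collections import deque

def bidirectional_bfs(adj, source, dest):
    """Breadth-first search from both endpoints at once.

    adj: {node: set(neighbors)} for an undirected graph.
    Returns the number of frontier expansions before the two searches
    meet (i.e. a route exists), or -1 if the endpoints are disconnected.
    """
    if source == dest:
        return 0
    seen_s, seen_d = {source}, {dest}
    front_s, front_d = deque([source]), deque([dest])
    rounds = 0
    while front_s and front_d:
        rounds += 1
        for _ in range(len(front_s)):
            u = front_s.popleft()
            for v in adj.get(u, ()):
                if v in seen_d:
                    return rounds  # frontiers met: a route exists
                if v not in seen_s:
                    seen_s.add(v)
                    front_s.append(v)
        # Alternate: expand from the other endpoint next round.
        front_s, front_d = front_d, front_s
        seen_s, seen_d = seen_d, seen_s
    return -1

# 5-node chain a-b-c-d-e: the two searches meet in the middle.
chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"},
         "d": {"c", "e"}, "e": {"d"}}
print(bidirectional_bfs(chain, "a", "e"))  # 4
```

On branchy topologies each side explores far fewer nodes than a one-sided search would, which is the intuition behind BSR's reduced route discovery time.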
This document discusses code obfuscation techniques for protecting software from reverse engineering. It begins with an abstract discussing the use of code obfuscation to protect proprietary algorithms and keys from extraction during reverse engineering. It then provides definitions of code obfuscation and discusses classifications of obfuscation techniques including layout, data, control, and preventive obfuscations. The document surveys various code obfuscation techniques from literature and evaluates them based on criteria like potency, resilience, cost, and resistance to static and dynamic attacks. It concludes with a discussion of empirical evaluation of obfuscation techniques.
HIGH-LEVEL LANGUAGE EXTENSIONS FOR FAST EXECUTION OF PIPELINE-PARALLELIZED CO... - ijpla
The last few years have seen multicore architectures emerge as the defining technology shaping the future of high-performance computing. Although multicore architectures present tremendous performance potential, software needs to play a key role to realize the true potential of these systems. In particular, high-level language abstractions, the compiler, and the operating system should be able to exploit the on-chip parallelism and utilize underlying hardware resources on these emerging platforms. This paper presents a set of high-level abstractions that allow the programmer to specify, at the source-code level, a variety of parameters related to parallelism and inter-thread data locality. These abstractions are implemented as extensions to both C and Fortran. We present the syntax of these directives and also discuss their implementation in the context of a source-to-source transformation framework and autotuning system. The abstractions are particularly applicable to pipeline-parallelized code. We demonstrate the effectiveness of these strategies on a set of pipeline-parallel benchmarks on three different multicore platforms.
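The paper's directives extend C and Fortran; as a language-neutral illustration of the pipeline parallelism they target (not the paper's syntax), here is a two-stage pipeline where each stage runs in its own thread and hands items downstream through a queue:

```python
import queue
import threading

def worker(fn, q_in, q_out):
    # One pipeline stage: transform each item and pass it downstream.
    # A None sentinel flows through the pipeline to shut it down.
    while (item := q_in.get()) is not None:
        q_out.put(fn(item))
    q_out.put(None)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
t1 = threading.Thread(target=worker, args=(lambda x: x * 2, q1, q2))
t2 = threading.Thread(target=worker, args=(lambda x: x + 1, q2, q3))
t1.start()
t2.start()

for i in range(5):          # feed the pipeline
    q1.put(i)
q1.put(None)                # signal end of stream

out = []
while (item := q3.get()) is not None:
    out.append(item)
t1.join()
t2.join()
print(out)  # [1, 3, 5, 9][:0] or rather: [1, 3, 5, 7, 9]
```

Both stages run concurrently on different items, which is exactly the overlap the paper's abstractions let the programmer express at the source-code level.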
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
De-virtualizing virtual Function Calls using various Type Analysis Technique... - IOSR Journals
This document discusses techniques for optimizing virtual function calls in object-oriented programming languages. Virtual function calls are indirect calls that involve lookup through a virtual function table (VFT) at runtime, which has performance overhead compared to direct calls. Various static analysis techniques like Class Hierarchy Analysis (CHA) and Rapid Type Analysis (RTA) aim to resolve some virtual calls by determining the possible target types and replacing indirect calls with direct calls if a single target is possible. CHA uses the class hierarchy and declared types to determine possible target types, while RTA also considers instantiated types in the program to further reduce possible targets. The document analyzes examples to demonstrate how CHA and RTA can optimize some virtual calls.
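The CHA/RTA contrast can be modeled in a few lines: CHA admits every subtype of the declared receiver type as a possible target, while RTA further prunes to classes the program actually instantiates; if a single target remains, the indirect call can be replaced with a direct one. The hierarchy below is a toy example, not the document's:

```python
# Toy class hierarchy: class -> direct superclass (None for the root).
hierarchy = {"Shape": None, "Circle": "Shape", "Square": "Shape", "Rect": "Shape"}

def subtypes(cls):
    """cls together with all of its transitive subclasses."""
    out = {cls}
    changed = True
    while changed:  # fixpoint over the subclass relation
        changed = False
        for c, sup in hierarchy.items():
            if sup in out and c not in out:
                out.add(c)
                changed = True
    return out

def cha_targets(declared):
    # Class Hierarchy Analysis: any subtype of the declared receiver type.
    return subtypes(declared)

def rta_targets(declared, instantiated):
    # Rapid Type Analysis: additionally prune to classes the program
    # actually instantiates.
    return cha_targets(declared) & set(instantiated)

print(sorted(cha_targets("Shape")))
print(sorted(rta_targets("Shape", ["Circle"])))  # ['Circle']
```

Here RTA narrows the call `shape.draw()` to a single possible receiver class, so the virtual call could be rewritten as a direct call to `Circle`'s implementation.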
This document describes a software project for an automated help care center with message storage capabilities. The software allows users to call an emergency number and provide location information via speech recognition, then sends their call to the appropriate emergency services. It also includes a database to store service provider and location information, DTMF decoding circuits to receive calls, and a message storage system to save voice messages when users are unavailable.
A novel deep-learning based approach to DNS over HTTPS network traffic detection - IJECEIAES
Domain name system (DNS) over hypertext transfer protocol secure (HTTPS) (DoH) is currently a new standard for secure communication between DNS servers and end-users. Secure sockets layer (SSL)/transport layer security (TLS) encryption should guarantee the user a high level of privacy regarding the impossibility of data content decryption and protocol identification. Our team created a DoH data set from captured real network traffic and proposed novel deep-learning-based detection models allowing encrypted DoH traffic identification. Our detection models were trained on network traffic from the Czech top-level domain maintainer, Czech network interchange center (CZ.NIC), and successfully applied to the identification of DoH traffic from Cloudflare. The detection accuracy reached nearly 95%, and it is clear that the encryption does not prohibit DoH protocol identification.
IRJET- Estimating Various DHT ProtocolsIRJET Journal
This document compares three distributed hash table (DHT) protocols: Tapestry, Chord, and Kademlia. It analyzes their performance using a simulator under varying parameters like stabilization interval, number of backup nodes, number of successors, and number of parallel lookups. The analysis seeks to determine the optimal cost-performance tradeoff for each protocol based on metrics like lookup latency and number of messages sent. Key differences between the protocols are described, such as Tapestry using a 160-bit identifier space, Chord arranging nodes in a circular identifier space and using a finger table for routing, and Kademlia storing contacts in buckets and finding closer nodes through iterative lookups. Simulation results are used to compare the protocols.
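Chord's circular identifier space is easy to sketch: node and key names hash onto a ring, and a key is owned by the first node at or after its identifier. This toy uses a 16-bit ring and omits finger tables (which give Chord its O(log n) routing); the node names are hypothetical:

```python
import hashlib

RING_BITS = 16  # tiny ring for illustration; Chord itself uses 160-bit SHA-1 IDs

def ring_id(name):
    """Hash a node or key name onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:2], "big") % (1 << RING_BITS)

def successor(key, nodes):
    """The node owning a key is the first node at or after the key's ID."""
    k = ring_id(key)
    ids = sorted(ring_id(n) for n in nodes)
    for nid in ids:
        if nid >= k:
            return nid
    return ids[0]  # wrap around the ring

nodes = ["node-a", "node-b", "node-c"]
owner = successor("user:42", nodes)
print(owner in {ring_id(n) for n in nodes})  # True
```

When a node joins or leaves, only the keys between it and its predecessor change owner, which is the property the stabilization interval in the simulation exercises.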
Optimization of Latency of Temporal Key Integrity Protocol (TKIP) Using Graph... - ijcseit
Temporal Key Integrity Protocol (TKIP) [1] encapsulation consists of multiple hardware and software blocks, which can be implemented in software, in hardware, or as a combination of both. This paper aims to design the TKIP technique using graph theory and hardware/software co-design in order to minimize latency. Simulation results show the effectiveness of the presented technique compared with a hardware-only implementation.
Optimization of latency of temporal key Integrity protocol (tkip) using graph... - ijcseit
The document discusses optimization of latency in the Temporal Key Integrity Protocol (TKIP) using hardware-software co-design and graph theory. It presents a mathematical model to partition TKIP algorithms between hardware and software blocks to minimize latency. Simulation results showed the proposed technique achieved lower latency than a hardware-only implementation, reducing latency from 10 µs to 8 µs. The technique models TKIP modules as a graph and uses algorithms to assign modules to hardware or software based on latency calculations.
Abstracting Strings For Model Checking Of C Programs - Martha Brown
This document summarizes a research paper that introduces a new abstract domain called M-String for abstracting strings in C programs. M-String approximates null-terminated character arrays and tracks both the shape of the array structure and the values of characters. It is parameterized based on the representation of array indices and the abstraction of element values. The paper formalizes the concrete and abstract semantics of basic string operations, proves soundness, and describes an implementation within a model checker to automatically analyze C programs and detect bugs related to string handling.
A Novel Design For Generating Dynamic Length Message Digest To Ensure Integri... - IRJET Journal
This document proposes a novel algorithm for generating dynamic length message digests to ensure data integrity. It summarizes existing hash algorithms like MD5 and SHA-1 that generate fixed-length digests but have been proven breakable or inefficient. The proposed algorithm generates variable-length digests using a block cipher approach on message chunks and pseudo-random numbers, making it more robust against attacks. Implementation results show the proposed algorithm has better timing performance and avalanche effect than MD5, SHA-1 and a modified MD5 algorithm, demonstrating its efficiency and security for ensuring integrity.
Rapid Development and Performance By Transitioning from RDBMSs to MongoDB
Modern day application requirements demand rich & dynamic data structures, fast response times, easy scaling, and low TCO to match the rapidly changing customer & business requirements plus the powerful programming languages used in today's software landscape.
Traditional approaches to solutions development with RDBMSs increasingly expose the gap between the modern development languages and the relational data model, and between scaling up vs. scaling horizontally on commodity hardware. Development time is wasted as the bulk of the work has shifted from adding business features to struggling with the RDBMSs.
MongoDB, the premier NoSQL database, offers a flexible and scalable solution to focus on quickly adding business value again.
In this session, we will provide:
- Overview of MongoDB's capabilities
- Code-level exploration of the MongoDB programming model and APIs and how they transform the way developers interact with a database
- Update of the exciting features in MongoDB 3.0
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique - IRJET Journal
This document describes a lossless data compression technique called the Rice algorithm and a modification of the Rice algorithm using curve fitting. The Rice algorithm uses a two-stage approach of preprocessing data to decorrelate it followed by entropy coding to assign variable-length codes. The modified Rice algorithm uses curve fitting to generate a function to predict future data points for improved compression performance compared to the original Rice algorithm. Simulation results show the modified algorithm achieves better compression by selecting an encoding option that uses fewer bits than the original Rice algorithm on a sample data set.
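The entropy-coding stage of the original (unmodified) Rice algorithm can be sketched directly: for a parameter k, each non-negative value is split into a quotient coded in unary and a k-bit binary remainder. A minimal round-trip sketch, not the paper's curve-fitting variant:

```python
def rice_encode(values, k):
    """Rice-code non-negative integers with parameter k: the quotient
    n >> k is written in unary (that many 1s, then a 0), followed by
    the remainder in k binary bits."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

def rice_decode(bitstring, k, count):
    """Invert rice_encode for `count` values."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":  # unary quotient
            q += 1
            i += 1
        i += 1                      # skip the terminating 0
        r = int(bitstring[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

data = [3, 7, 12, 0]
code = rice_encode(data, k=2)
print(code)
print(rice_decode(code, 2, len(data)) == data)  # True
```

Small values produce short codewords, which is why the preprocessing (decorrelation, or the paper's curve-fitting predictor) matters: better prediction means smaller residuals and fewer bits.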
The document summarizes a proposed architecture that uses Butterfly Weight Accumulators (BWAs) to improve the latency and complexity of computing the Hamming distance when comparing data encoded with error correcting codes. Specifically, it recognizes that error correcting codes are often represented in a systematic form where the data and parity parts are separated. The proposed architecture leverages this by starting the data comparison before fully encoding the parity bits, reducing latency. It also uses BWAs as processing elements that can compute Hamming distance faster than previous saturating adder-based approaches. The architecture is analyzed theoretically and shown to reduce latency and complexity compared to prior direct compare methods.
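The quantity being accelerated - Hamming distance over systematically encoded words - has a one-line software definition; the sketch below separates the data and parity halves as the abstract describes, though the paper's contribution is the BWA hardware architecture, not this computation:

```python
def hamming(a, b):
    """Number of differing bit positions between two equal-width words."""
    return bin(a ^ b).count("1")

def systematic_distance(data_a, parity_a, data_b, parity_b):
    # Systematic codes keep data and parity separate, so the data halves
    # can be compared before parity encoding finishes - the latency trick
    # the proposed architecture exploits.
    return hamming(data_a, data_b) + hamming(parity_a, parity_b)

print(hamming(0b1011, 0b0010))                            # 2
print(systematic_distance(0b1011, 0b01, 0b1011, 0b11))    # 1
```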
Similar to Efficient call path detection for android os size of huge source code (20)
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol, based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme; the difficulty is ensuring that extracted witnesses are low-norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low-norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin... - Tatiana Kojar
Computer Science & Information Technology (CS & IT)
The size of open source code keeps growing; the open source version of the Android OS [2], for instance, consists of 50 to 100 million lines of code. At the same time, the time required for static analysis of source code has dropped drastically. Understand™ [3] by Scientific Toolworks, Inc., for example, takes less than one tenth of the static analysis time of earlier tools such as Doxygen [4] in our experience. Static analysis of huge-scale source code written by other parties is therefore now a realistic first step toward comprehending it, provided the following problem is resolved.
The huge volume of analysis results produced from such source code, however, remains a barrier to understanding it in an ordinary development environment. A developer usually has a common laptop or desktop computer with small memory, at most 16 GB. Such memory cannot hold all the results when the target source code is the Android OS (50-100 million lines of code) or code of a similar size. In fact, even a server with far more memory cannot process the results efficiently: extracting a call path takes much longer, which hinders developers' comprehension of the source code.
Our contribution is a bidirectional search algorithm that extracts a call path from the call graph of source code too huge for all of its parse results to be stored in memory. The algorithm reduces path extraction time by 8% in a case where ordinary path search algorithms do not reduce the time. Its first characteristic feature is that it refers to the method definition in the source code corresponding to the node being visited in the call graph. The second significant feature is that the referenced information is used not to select a prioritized node to visit next but to select a node whose visit should be “postponed.” Both features contribute to the reduction in search time. In addition, the algorithm halves the required time for the aforementioned case if all the data is stored in memory, though that is far from a realistic situation.
In the rest of this paper, we explain call graphs and graph search algorithms. After that, we introduce our bidirectional search algorithm for call graphs and its evaluation. We then provide some discussion, followed by concluding remarks.
2. DEFINITIONS
2.1. Call Graph
A call graph is a directed graph whose nodes are the methods (in Java; the functions in C++) of the classes of the program; each edge represents one or more invocations of a method by another method [1]. The invoked method is referred to as the callee method, and the invoking method as the caller method. The starting node of an edge corresponds to the caller method; the ending node stands for the callee method.
An example of a call graph is depicted in Figure 1, which is retrieved from the source code shown in Figure 2. Each bold part of Figure 2 corresponds to the node with the same text in Figure 1. Information on methods and their invocations is retrieved from source code using static analysis tools such as [3] and [4]. In practice the graph can be more complicated: each method might be invoked by methods other than “transmit()”, and “transmit()” itself might be invoked by other methods.
Figure 1. An example of call graphs
public class Tranceiver implements ITranceiver {
    private Transformer transformer;
    private Protocol protocol;
    private boolean send(byte[] header, byte[] body, Host destination) { ... }
    ...
    public boolean transmit(String data, Host destination) {
        byte[] encodedData = transformer.encode(data);
        byte[] header = protocol.makeHeader();
        return send(header, encodedData, destination);
    }
}
Figure 2. Source code corresponding to the call graph in Figure 1
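To make the definition concrete, the call graph of Figure 1 can be modeled with a plain adjacency-list structure. This is a minimal illustrative sketch with hypothetical names, not a representation the paper prescribes:

```java
import java.util.*;

// Minimal call-graph model: each node is a method name and each directed
// edge points from a caller method to a callee method.
public class CallGraph {
    private final Map<String, List<String>> callees = new HashMap<>();

    public void addCall(String caller, String callee) {
        callees.computeIfAbsent(caller, k -> new ArrayList<>()).add(callee);
    }

    public List<String> calleesOf(String caller) {
        return callees.getOrDefault(caller, Collections.emptyList());
    }

    // The three edges retrieved from the bold parts of Figure 2.
    public static CallGraph figure1() {
        CallGraph g = new CallGraph();
        g.addCall("Tranceiver.transmit", "Transformer.encode");
        g.addCall("Tranceiver.transmit", "Protocol.makeHeader");
        g.addCall("Tranceiver.transmit", "Tranceiver.send");
        return g;
    }
}
```

Here “Tranceiver.transmit” is the caller on all three edges; in a real system each callee would in turn carry outgoing edges of its own, as in the more complicated graph of Figure 3.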
Figure 3. A complicated example of call graphs
2.2. Call Path
A call path is a sequence of edges in which consecutive edges are connected and each edge is connected to at most one incoming edge and at most one outgoing edge. The node having no incoming edge is called the “initial node.” The node with no outgoing edge is called the “final node.” The methods corresponding to the initial node and the final node are called the “initial method” and the “final method,” respectively.
Figure 4 shows an example of a call path, extracted from the slightly more complicated call graph shown in Figure 3. In most cases, if developers know the names of the initial and final methods of interest, an extracted call path lets them grasp the caller-callee relationships more easily than a general call graph does.
Figure 4. An example of call paths, which is extracted from the graph in Figure 3
2.3. Bidirectional Search Algorithm for Call Graphs
A bidirectional graph search algorithm is a graph search algorithm that finds a shortest path from an initial node to a final node in a directed graph. It traverses nodes forward and backward in the graph simultaneously [5]. Recent bidirectional algorithms use a heuristic distance estimate function to select the node to visit next [6][7][8]. An example of such a heuristic function is the Euclidean distance between a pair of nodes when the corresponding real distance is the Manhattan distance.
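As a quick check of the “not overestimating” intuition behind such estimates, the Euclidean distance between two grid points never exceeds their Manhattan distance. A minimal sketch with hypothetical coordinates (not from the paper):

```java
// Euclidean distance as an admissible estimate of Manhattan distance:
// sqrt(dx^2 + dy^2) <= |dx| + |dy| holds for every pair of grid points.
public class Heuristic {
    public static double euclidean(int x1, int y1, int x2, int y2) {
        double dx = x1 - x2, dy = y1 - y2;
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static int manhattan(int x1, int y1, int x2, int y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }
}
```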
To our knowledge, no heuristic estimate function for a call graph has been found yet.
3. BIDIRECTIONAL SEARCH ALGORITHM FOR CALL GRAPHS

We present an algorithm for bidirectional search in a call graph. The algorithm uses properties of the source code corresponding to the visited node in order to select the next node to visit, while other bidirectional search algorithms use a heuristic estimate value between the visited node and the initial/final node [6] or frontier nodes [7][8]. In our algorithm 'bidir_postpone', the next node to visit is decided using the type of the method corresponding to the visited node, in addition to the outgoing or incoming degree (outdegree or indegree) of the node.
procedure bidir_postpone(E, V, initialNode, finalNode):
 1: todoF := {initialNode}
 2: todoB := {finalNode}
 3: /* Let prevF and prevB be dictionaries of type V to V.
       Let delay, distF, and distB be dictionaries of type V to integer. */
 4: for v in V do
 5:   prevF[v] := None; prevB[v] := None
 6:   delay[v] := 0
 7:   distF[v] := ∞; distB[v] := ∞ /* the distance to v along the
         forward (F)/backward (B) path */
 8: endfor
 9: intermed := None; distF[initialNode] := 0; distB[finalNode] := 0
10: while |todoB| > 0 or |todoF| > 0 do
11:   if |todoB| < |todoF| then
12:     frwd := True; todo := todoF; frontiers := todoB; dist := distF; prev := prevF
13:   else
14:     frwd := False; todo := todoB; frontiers := todoF; dist := distB; prev := prevB
15:   endif
16:   todo2 := { }
17:   for node u in todo; do
18:     if delay[u] > 0 then
19:       delay[u] := delay[u] - 1
20:       add u to todo2
21:       continue for-loop with next node u
22:     else if the type of the class of the method corresponding to u
              is interface, u has not been postponed before, and not frwd then
23:       add u to todo2
24:       delay[u] := 3 - 1
25:       continue for-loop with next node u
26:     endif
27:     if frwd then
28:       next := {v | (u->v) in E}
29:     else
30:       next := {v | (v->u) in E}
31:     endif
32:     for node v in next; do
33:       alt := dist[u] + 1 /* every edge weighs one in a call graph */
34:       if dist[v] > alt then
35:         prev[v] := u
36:         dist[v] := alt
37:         if v in frontiers then
38:           intermed := v
39:           exit from while
40:         endif
41:         add v to todo2
42:       endif
43:     endfor
44:   endfor
45:   if frwd then todoF := todo2
46:   else todoB := todo2 endif
47: endwhile
48: if intermed is None then
49:   output ERROR
50: else
51:   v := intermed
52:   while prevF[v] is not None do
53:     output (prevF[v] -> v) as a path constituent
54:     v := prevF[v]
55:   endwhile
56:   v := intermed
57:   while prevB[v] is not None do
58:     output (v -> prevB[v]) as a path constituent
59:     v := prevB[v]
60:   endwhile
61: endif

Figure 5. Algorithm ‘bidir_postpone’
Another difference from the previous algorithms is that the node selected by our algorithm is not a prioritized node to visit next but a node that is costly to visit. Such a node is scheduled to be visited some steps later instead of immediately.
Figure 5 shows our algorithm ‘bidir_postpone.’ The parameters E and V in the figure stand for the set of edges and the set of nodes of the call graph, respectively.

At line 22, the algorithm determines whether the next node to visit should be processed immediately or postponed. If the class type corresponding to the visited node is an interface in Java (or an abstract class in C++), processing is postponed: methods of this type can be called by many methods, so the indegree of the corresponding node is greater than that of usual nodes. That is why visits to such nodes should be postponed. In this case, processing of the node is suspended for 3 steps (see line 24 and lines 18-20).
Note that the postponement cannot be achieved by assigning heavy weights to the edges adjacent to the nodes satisfying the condition at line 22: an algorithm using such heavy edges may output a longer path than algorithm bidir_postpone.
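The postponement idea of Figure 5 can be sketched as a small runnable program. This is our reading of the algorithm under simplifying assumptions (unit edge weights, the set of interface-typed nodes passed in explicitly, the smaller frontier expanded first, and each node postponed at most once), not the authors' implementation:

```java
import java.util.*;

// Sketch of bidirectional search with postponement over a call graph.
// Nodes are integers; edges map a caller to its callees.
public class BidirPostpone {
    static final int POSTPONE_STEPS = 3;

    /** Returns the number of edges on a discovered path, or -1 if none. */
    public static int pathLength(Map<Integer, List<Integer>> edges,
                                 Set<Integer> interfaceNodes,
                                 int initial, int goal) {
        if (initial == goal) return 0;
        // Reverse adjacency for the backward sweep (callee -> callers).
        Map<Integer, List<Integer>> rev = new HashMap<>();
        for (Map.Entry<Integer, List<Integer>> e : edges.entrySet())
            for (int v : e.getValue())
                rev.computeIfAbsent(v, k -> new ArrayList<>()).add(e.getKey());

        Map<Integer, Integer> distF = new HashMap<>(Map.of(initial, 0));
        Map<Integer, Integer> distB = new HashMap<>(Map.of(goal, 0));
        Map<Integer, Integer> delay = new HashMap<>(); // postponement counters
        List<Integer> todoF = new ArrayList<>(List.of(initial));
        List<Integer> todoB = new ArrayList<>(List.of(goal));

        while (!todoF.isEmpty() || !todoB.isEmpty()) {
            boolean frwd = todoB.isEmpty()
                    || (!todoF.isEmpty() && todoF.size() <= todoB.size());
            Map<Integer, List<Integer>> adj = frwd ? edges : rev;
            Map<Integer, Integer> dist = frwd ? distF : distB;
            Map<Integer, Integer> other = frwd ? distB : distF;
            List<Integer> todo = frwd ? todoF : todoB;
            List<Integer> todo2 = new ArrayList<>();

            for (int u : todo) {
                if (delay.getOrDefault(u, 0) > 0) {    // lines 18-21: still postponed
                    delay.merge(u, -1, Integer::sum);
                    todo2.add(u);
                    continue;
                }
                if (!frwd && interfaceNodes.contains(u) && !delay.containsKey(u)) {
                    delay.put(u, POSTPONE_STEPS - 1);  // lines 22-25: postpone once
                    todo2.add(u);
                    continue;
                }
                for (int v : adj.getOrDefault(u, List.of())) {
                    if (dist.containsKey(v)) continue;
                    dist.put(v, dist.get(u) + 1);      // every edge weighs one
                    if (other.containsKey(v))          // the two sweeps met
                        return dist.get(v) + other.get(v);
                    todo2.add(v);
                }
            }
            if (frwd) todoF = todo2; else todoB = todo2;
        }
        return -1; // no path
    }
}
```

On a graph whose forward frontier fans out quickly, the backward sweep does most of the expanding; when it reaches an interface-typed node, that node merely rides along in the frontier for three rounds before being expanded, which is the behavior lines 18-26 describe.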
4. EVALUATION
We compare the results of applying four algorithms to four pairs of initial and final nodes (both referred to below as starting-point nodes) under two kinds of conditions. The results show that (1) if all the data is stored in memory, our algorithm reduces the duration for the significant case in which the original bidirectional search does not reduce time compared with the more naïve unidirectional search, and (2) even if the data is stored on an HDD, our algorithm reduces the path extraction time by 8% for that case. The details are shown hereafter.
4.1. Algorithms, data, and conditions to compare
The algorithms are the following four. The second and third are close variants of our algorithm:
A1 ‘Bidir_3postpone’: It is the algorithm shown in Figure 5.
A2 ‘Bidir_6postpone’: A slightly modified version of A1 ‘Bidir_3postpone’ with delay 6: it delays node visiting for 6 steps at line 24 in Figure 5 instead of 3 steps. Comparing A1 with A2 checks whether the postponement step count of A1 is adequate.
A3 ‘Bidir_0postpone’: Almost the same algorithm as A1 and A2, except that the condition at line 22 in Figure 5 is always false, while the source code property (the type of the corresponding method) is still retrieved from the parse results stored on the HDD. This algorithm evaluates the overhead of parse-result retrieval.
A4 ‘Bidir_balanced’: Almost the original version (appearing in [9]) of bidirectional search with no heuristic estimate function, owing to the absence of estimate functions for call graphs. It differs from A1 in Figure 5 in that lines 18 through 26 are omitted and the rest of the loop, including the for-loop starting from line 32, is always executed.
The starting-point node pairs are as follows:

P1 ‘A->C’: The number of nodes reachable by traversing forward from the initial node is much greater than the number reachable by traversing backward from the final node.

P2 ‘C->N’: The numbers of forward nodes from the initial node and backward nodes from the final node are both small.

P3 ‘N->R’: The opposite pattern of P1 ‘A->C’: the number of forward nodes from the initial node is much smaller than the number of backward nodes from the final node.

P4 ‘A->R’: The numbers of forward nodes from the initial node and backward nodes from the final node are both large.
Figure 6 shows the numbers of nodes reachable from the initial node and from the final node. For P2 and P4 the two numbers are almost the same, while they differ for P1 and P3. Note that the scale is logarithmic; for instance, the number for the final node of P3 is 5 times greater than that for the initial node.
Figure 6. The numbers of nodes reachable from starting points
The conditions are the following two patterns; the latter is closer to the practical use case:

C1 ‘In memory’: All graph data is stored in memory.

C2 ‘In HDD’: A part of the graph data is constructed on demand from the source code parse results stored on a hard disk drive. Some graph edges and all graph nodes correspond straightforwardly to parse results; other edges must be built up from syntactic and lexical analysis results.
C1 measures the performance of the algorithms themselves. C2 is almost the same as the actual execution environment, with one exception. The environment contains parse results of source code stored on a hard disk drive, and an extraction program that converts a specified part of the parse results into graph edges and nodes. The exception is that the data stored on the hard disk drive is not cached in memory (i.e., in the disk cache) at the initial time, which makes the measurement variance due to the disk cache very small.
The source code for the evaluation, from which the call graph is constructed, is a subset of the source files of the practical Android OS source code for smartphones, version 4.4.4_r2 [2]. The number of methods in this subset is about 2% of the number in the whole source set, yet the parsed data for even this 2% occupies 2 GB or more. For the ordinary laptop computers of developers, even this partial set of source files is too huge to be held in memory.
4.2. Results

The measurements were executed 3 times. The averaged values are shown in Figures 7 to 9, where a logarithmic scale is used on all the Y axes. Figure 7 shows the traversal time under condition C1 ‘In memory’. Figure 8 shows the traversal time under condition C2 ‘In HDD’. Figure 9 shows the numbers of visited nodes. The I-shaped mark at the top of each plotted box in Figures 7 and 8 indicates the range of the sample standard deviation, +σ and -σ. Three measurements appear sufficient because the deviations are small.
Figure 7 shows that algorithm A1 ‘Bidir_3postpone’ halves the traversal time for the case in which the numbers of nodes reachable from both starting-point nodes are large (P4). The precise figures are 1.07 seconds for our algorithm (A1) versus 2.71 seconds for the original algorithm (A4), a 60.5% reduction. In the other cases, P1, P2, and P3, the naïve bidirectional search (A4) already runs in much shorter time than a unidirectional search (the directed-edge version of Dijkstra's algorithm, which is even more naïve than A4). Thus the time reduction in case P4 is the one most desired by developers.
The traversal time in the in-memory case (C1, Figure 7) is almost proportional to the number of nodes visited by each algorithm, shown in Figure 9. Therefore, the fewer nodes an algorithm visits, the shorter its traversal time.
In contrast to the in-memory case, the results with the data stored on the HDD (C2, Figure 8) show that our algorithm takes more time than the original algorithm (A4) for case P3. In case P4, the one for which developers most want a reduction, our algorithm (A1) spends 281.9 seconds while the original algorithm (A4) consumes 307.8 seconds. The difference of 25.9 seconds means ours (A1) achieves an 8.4% reduction relative to the original (A4).
Figure 7. Time to traverse (in memory)
Figure 8. Time to traverse (in HDD)
Figure 9. The numbers of visited nodes
4.3. Barriers to time reduction
We assume the overhead in the in-HDD case (C2, Figure 8) comes from the extra disk accesses at line 22 in Figure 5 that retrieve the properties of methods. The disk access method in our algorithm is still naïve, so the effect of these accesses could be reduced by further investigation.
Figure 10. The numbers of nodes that the algorithms visited forward
Figure 11. The numbers of nodes that the algorithms visited backward
Algorithm A1 visits a remarkably large number of nodes for starting-point pair P3, in which the number of nodes reachable forward from the initial node is much smaller than the number reachable backward from the final node. A1 traverses forward the same number of nodes from the initial node of P3 as the other algorithms (see Figure 10), but it visits backward five times as many nodes from the final node of P3 as algorithms A3 and A4, as shown in Figure 11. In both A1 and A4, the forward search and the backward search meet at the same node; this meeting-point node is stored in 'intermed' at line 38 in Figure 5. The node adjacent to the meeting-point node ('MP node' hereafter) in the forward direction satisfies the postponement condition at line 22 in Figure 5, and it lies on the path between the MP node and the final node. The backward visit to the MP node is therefore postponed for 3 steps, during which 127 extra nodes were visited. P3 shows that our algorithm can be improved using information other than the type of the class corresponding to the visited node.
5. DISCUSSION
5.1. Intermediate Nodes
The resulting paths for our evaluation data P1 through P3 can in fact be concatenated, because the final node of Pn is the initial node of P(n+1) for n = 1, 2. In addition, the concatenated path starts from the initial node of P4 and ends with the final node of P4.

It is interesting that the sum of the numbers of visited nodes for P1 through P3 is much less than the number of visited nodes for P4 (see Figure 9). This suggests the existence of somewhat “efficient” intermediate nodes for path search in a source code call graph. If a bidirectional search algorithm could start traversing from such efficient intermediate nodes in addition to the starting-point nodes, it would take less time to extract a call path than our current algorithm. We will try to identify the features of such intermediate nodes.

Note: The resulting path for P4, however, does not include the initial node and the final node of P2. Therefore we treat P1 through P4 as example data that are almost independent of each other.
5.2. More Aggressive Use of Features in Source Code
Our algorithm in Figure 5 uses only the class type corresponding to the visited node as a property retrieved from source code. As discussed in the last section, there might be more kinds of properties that can reduce the number of nodes to be visited.

Further investigation into which source code properties are useful will also affect how the source code itself is parsed, changing the requirements on the source code parser. For example, some optional, high-cost parsing features may have to be enabled by default. It will also change the database schema for the parse results so that the useful properties can be retrieved at lower cost.
6. RELATED WORKS
The bidirectional search algorithm in [6] and similar ones are called front-to-back algorithms. They use a heuristic function to estimate the distance from the current visited node (the front) to the goal node (the back). The function is similar to the heuristic function of the A* search method [10]. The algorithm is executed under the assumption that the heuristic function estimates a value equal to or less than the real value (i.e., it does not overestimate); a function with this property is said to be admissible.
Bidirectional search algorithms called front-to-front [7][8] use a heuristic estimate function that calculates the distance from the current visited node (a front) to a frontier node of the opposite-direction search (another front). Such an algorithm achieves its best performance when the function is admissible and consistent; that is, given nodes x, y, and z, where x and z are the end nodes of a path and y is on the path, the estimated distance between x and z is at most the sum of the real distance between x and y and the estimated distance between y and z.
The former algorithm has been verified by experimental evaluations [6], at least on Fifteen-puzzle problems. The latter has been proved theoretically in [8].
For call graphs, however, neither type of heuristic function has been found yet, to the best of our knowledge. Hence bidirectional search algorithms with heuristic estimate functions cannot be applied to call graphs.

Although we have unfortunately not found previous work on call graph traversal related specifically to bidirectional search, the authors of a call graph visualizer commented on it [11]. Based on the experiences of participants in a lab study evaluating the visualizer, they noted: “A significant barrier to static traversal were event listeners, implemented using the Observer Pattern. To determine which methods were actually called, participants would have to determine which classes implemented the interface and then begin new traversals from these methods.” Our approach could resolve this difficulty and make such traversal processes easier.
7. CONCLUSION
We have offered a bidirectional search algorithm for the call graph of source code too huge for all of its parse results to be stored in memory. It reduces path extraction time by 8% in a case where ordinary path search algorithms do not reduce the time. The algorithm refers to the method definition in source code corresponding to the node visited in the call graph. Its significant feature is that the referenced information is used not to select a prioritized node to visit next but to select a node whose visit is postponed. These features contribute to the reduction in search time.
ACKNOWLEDGEMENTS
We would like to thank all our colleagues for their help and the referees for their feedback.
REFERENCES
[1] Ryder, Barbara G., (1979) “Constructing the Call Graph of a Program”, Software Engineering, IEEE
Transactions on, vol. SE-5, no. 3, pp216–226.
[2] “Android Open Source Project”, https://source.android.com/
12. 12 Computer Science & Information
[3] Scientific Tools, Inc., “Understand ™ Static Code Analysis Tool”, https://scitools.com/
[4] “Doxygen”, http://www.stack.nl/~dimitri/doxygen/
[5] Pohl, Ira, (1969) “Bi-directional and heuristic search in path problems”, Diss. Dept. of Computer Science, Stanford University.
[6] Auer, Andreas & Kaindl, Hermann, (2004) “A case study of revisiting best-first vs. depth-first search”, ECAI. Vol. 16.
[7] de Champeaux, Dennis & Sint, Lenie, (1977) “An improved bidirectional heuristic search algorithm”, Journal of the ACM 24 (2), pp177–191, doi:10.1145/322003.322004.
[8] de Champeaux, Dennis, (1983) “Bidirectional heuristic search again”, Journal of the ACM 30 (1), pp22–32, doi:10.1145/322358.322360.
[9] Kwa, James B. H, (1989) “BS∗: An admissible bidirectional staged heuristic search algorithm”, Artificial Intelligence 38.1, pp95–109.
[10] Hart, Peter E. & Nilsson, Nils J. & Raphael, Bertram, (1968) “A Formal Basis for the Heuristic Determination of Minimum Cost Paths”, IEEE Transactions on Systems Science and Cybernetics SSC4 4 (2), pp100–107, doi:10.1109/TSSC.1968.300136.
[11] LaToza, Thomas D. & Myers, Brad A, (2011) “Visualizing Call Graphs”, 2011 IEEE Symposium on Visual Languages and Human-Centric Computing, pp117–124, doi:10.1109/VLHCC.2011.6070388.
AUTHORS
Koji Yamamoto has worked at Fujitsu Laboratories, Japan, since 2000. He is interested in software engineering, especially using program analysis, formal methods, and (semi-)automatic theorem proving and its systems. He received his doctoral degree in engineering from Tokyo Institute of Technology, Japan, in 2000. He is a member of ACM.
Taka Matsutsuka received his M.S. degree in Computer Science from Tokyo Institute of Technology. He works for Fujitsu Laboratories Ltd., and is engaged in R&D of OSS analysis. He is a visiting professor at Japan Advanced Institute of Science and Technology and a member of the Information Processing Society of Japan.