This document discusses utilizing spatiotemporal data for future mobility services. It proposes using Redis to store and query this type of data. The key challenges are performing fast range queries over location and time, and efficiently distributing data insertion load across multiple Redis servers. The document proposes addressing this by encoding location, time and ID as a single "ST-code", and splitting it to query a prefix while avoiding expensive Redis KEYS commands. This allows fast ST range queries in a single Redis command. However, it notes load concentration during data insertion still needs to be addressed.
This document describes MIST, a system for large-scale IoT stream processing. MIST uses a cluster of machines to efficiently handle billions of IoT stream queries. It provides query APIs that allow users to define dataflow and complex event processing queries. MIST optimizes processing by sharing code, exploiting locality of code references through query grouping, and merging queries to reuse system resources.
Presentation by Stefan Dziembowski, associate professor and leader of the Cryptology and Data Security Group at the University of Warsaw, given at the BIU workshop on Bitcoin. Covered exclusively by vpnMentor.com.
#PR12 #6th_paper #Neural_Turing_Machine
The paper I presented is Neural Turing Machine. It is an architecture in which the neural network and the memory are separated, and it learns algorithms.
Video in Korean: https://www.youtube.com/watch?v=2wbDiZCWQtY&t=1071s
[251] Implementing deep learning using cuDNN (NAVER D2)
This document provides an overview of deep learning and implementation on GPU using cuDNN. It begins with a brief history of neural networks and an introduction to common deep learning models like convolutional neural networks. It then discusses implementing deep learning models using cuDNN, including initialization, forward and backward passes for layers like convolution, pooling and fully connected. It covers optimization issues like initialization and speeding up training. Finally, it introduces VUNO-Net, the company's deep learning framework, and discusses its performance, applications and visualization.
Optimizing Parallel Reduction in CUDA: NOTES (Subhajit Sahu)
Highlighted notes on Optimizing Parallel Reduction in CUDA
Prepared while doing research work under Prof. Dip Banerjee and Prof. Kishore Kothapalli.
Interesting optimizations; I should try these soon, as PageRank is basically lots of sums.
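The central trick in those notes, tree-based reduction with sequential addressing, can be sketched in plain Python; the inner loop below stands in for what each CUDA thread does in one step against shared memory (this is an illustrative model, not the CUDA kernel itself):

```python
def tree_reduce(values):
    """Sequential-addressing tree reduction: at each step the first
    half of the active elements accumulates the second half, halving
    the stride until one value remains."""
    data = list(values)
    # pad to a power of two with the additive identity
    n = 1
    while n < len(data):
        n *= 2
    data += [0] * (n - len(data))
    stride = n // 2
    while stride > 0:
        # in CUDA, each iteration of this inner loop is one thread
        for i in range(stride):
            data[i] += data[i + stride]
        stride //= 2
    return data[0]
```

Sequential addressing keeps the active elements contiguous, which is exactly what avoids shared-memory bank conflicts in the CUDA version.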
This document discusses homomorphic encryption and provides an example implementation of the Paillier cryptosystem in Java. It introduces homomorphic encryption and classifications like partially and fully homomorphic. It then explains the key details of the Paillier cryptosystem like key generation, encryption, decryption, and its homomorphic properties of addition and multiplication. The document outlines an open source Java implementation of Paillier on GitHub that uses BigInteger for the cryptographic operations. It walks through the code for key generation, encryption, decryption, and addition/multiplication of ciphertexts. Finally, it briefly mentions applications like electronic voting and electronic cash.
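As a rough illustration of the steps the document walks through, here is a minimal Paillier sketch in Python rather than Java (toy key sizes for readability; the function names are invented for this sketch, and multiplying ciphertexts corresponds to adding plaintexts):

```python
import math
import secrets

def paillier_keygen(p, q):
    # A real system uses large random primes; tiny ones keep the sketch readable.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu)

def paillier_encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = secrets.randbelow(n - 1) + 1   # random r in [1, n-1], coprime to n
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def paillier_decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = lambda x: (x - 1) // n
    return (L(pow(c, lam, n2)) * mu) % n

def paillier_add(pub, c1, c2):
    # Homomorphic addition: multiplying ciphertexts adds the plaintexts.
    n, _ = pub
    return (c1 * c2) % (n * n)
```

Raising a ciphertext to a power k likewise multiplies the plaintext by k, which is the "multiplication" property the abstract mentions.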
• Developed standard library cells using IBM 130nm technology in Cadence Virtuoso Layout editor for inverter, nand2, nor2, xnor2, mux2:1, oai2221, aoi22, oai121 and a master-slave negative edge triggered D-flip-flop with minimum area and diffusion breaks. Constructed the schematic, performed DRC-LVS closure of layout and generated a SPICE netlist with Calibre PEX extraction of all the standard cells.
• Simulated the netlists with HSPICE, verified their functional correctness, and performed timing analysis of the D-flip-flop setup and hold times. Generated a new Synopsys cell library using SiliconSmart ACE and a new Cadence cell library from all the standard cells.
The document describes the process of generating voxelized shadows using a voxel DAG representation. It involves capturing shadow maps from the GPU and transmitting them to system memory. Min/max mip levels are also captured and transmitted. The shadow data is then used to build a voxel DAG from SVO or DAG representations, with nodes marked as lit or shadowed.
Precomputed Voxelized-Shadows for Large-scale Scene and Many lightsSeongdae Kim
The document describes the process of building a voxel directional-occlusion graph (voxel DAG) from a shadow map captured on the GPU. It involves capturing the shadow map on the GPU and transmitting it to system memory, then computing minimum and maximum depth values at each mip level. A voxel DAG is constructed from the shadow data to represent lit and shadowed regions of the scene. Pseudocode is provided for building the root and subnodes of the voxel DAG in C#.
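The essential step of a voxel DAG build is deduplicating identical subtrees through a node pool, so large uniform lit or shadowed regions collapse to shared nodes. A simplified 1-D binary analogue in Python (the slides' pseudocode is C# and operates on octree nodes; this sketch only shows the sharing idea and assumes a power-of-two leaf count):

```python
def build_dag(leaves, pool=None):
    """Bottom-up DAG construction over a power-of-two list of leaf
    values (e.g. 1 = lit, 0 = shadowed): identical subtrees are
    interned in a pool so they are stored only once."""
    if pool is None:
        pool = {}

    def intern(node):
        # return the canonical copy of an identical, already-seen node
        return pool.setdefault(node, node)

    level = [intern(("leaf", v)) for v in leaves]
    while len(level) > 1:
        level = [intern(("inner", level[i], level[i + 1]))
                 for i in range(0, len(level), 2)]
    return level[0], pool
```

Because children are interned before parents, two parents over identical children become the same tuple and are themselves shared, which is where the DAG's compression over a plain SVO comes from.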
This document discusses various techniques for timing logic in digital design, including reducing logic delay through Boolean algebra, Karnaugh maps, identifying and resolving long and critical paths, pipelining, simulation, and static timing analysis. It defines important timing terms like setup time, hold time, and clock skew, and describes challenges like meeting timing constraints while avoiding race conditions. The goal is to optimize the logic so that it meets the timing requirements with positive slack at the specified clock period.
GDC2011 DirectX 11 rendering in Battlefield 3 (drandom)
The document discusses rendering techniques used in the Frostbite 2 game engine for Battlefield 3, including deferred shading with tile-based lighting computed using compute shaders. It describes how this approach reduces overdraw and bandwidth compared to traditional deferred rendering. It also discusses techniques for displacement mapping terrains, adaptive multi-sample anti-aliasing, and direct stereo 3D rendering support.
This document summarizes random number generation using OpenCL. It discusses the Marsaglia polar method for generating random numbers and Gaussian pairs. It presents pseudocode for the Gaussian pair generation algorithm. Profiling results show that 54% of time is spent generating Gaussian pairs while 46% is for random numbers. The document also discusses optimization techniques like using local memory, coalesced global memory access, and choosing an optimal work group size. Performance results show near linear speedup from 1 to 8 GPUs.
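For reference, the Marsaglia polar method the summary mentions can be sketched in a few lines of pure Python (this is the scalar algorithm, not the OpenCL kernel from the slides):

```python
import math
import random

def gaussian_pair(rng=random):
    """Marsaglia polar method: rejection-sample a point uniformly in
    the unit disc, then transform it into two independent standard
    normal samples."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:          # reject points outside the disc
            break
    factor = math.sqrt(-2.0 * math.log(s) / s)
    return u * factor, v * factor
```

The rejection loop is what makes a naive GPU port divergent, which is one reason the profiling in the slides shows Gaussian-pair generation dominating the runtime.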
Processing Reachability Queries with Realistic Constraints on Massive Network... (BigMine)
Massive graphs are ubiquitous in various application domains, such as social networks, road networks, communication networks, biological networks, RDF graphs, and so on. Such graphs are massive (for example, with hundreds of millions of nodes and edges or even more) and contain rich information (for example, node/edge weights, labels and textual contents). In such massive graphs, an important class of problems is to process various graph structure related queries. Graph reachability, as an example, asks whether a node can reach another in a graph. However, the large graph scale presents new challenges for efficient query processing.
In this talk, I will introduce two new yet important types of graph reachability queries: weight-constraint reachability, which imposes an edge-weight constraint on the answer path, and k-hop reachability, which imposes a length constraint on the answer path. With such realistic constraints, we can find more meaningful and practically feasible answers. These two reachability queries have wide applications in many real-world problems, such as QoS routing and trip planning.
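Conceptually, a k-hop reachability query is breadth-first search truncated at depth k; a minimal sketch over an adjacency-dict graph (the node names are invented for illustration, and the talk's indexing techniques for massive graphs are not shown):

```python
from collections import deque

def k_hop_reachable(adj, src, dst, k):
    """Return True iff dst is reachable from src along a path of at
    most k edges. BFS visits nodes in order of minimum hop count, so
    truncating the frontier at depth k is sufficient."""
    if src == dst:
        return True
    frontier = deque([(src, 0)])
    seen = {src}
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue               # cannot extend past the hop budget
        for nxt in adj.get(node, ()):
            if nxt == dst:
                return True
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return False
```

The weight-constraint variant is analogous: edges failing the weight predicate are simply skipped during traversal. The point of the talk is that plain BFS does not scale to hundreds of millions of edges per query, hence the specialized indexes.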
K-Means clustering is a popular clustering algorithm in data mining. Clustering large data sets can be time consuming, and in an attempt to minimize this time, our project is a parallel implementation of K-Means clustering algorithm on CUDA using C. We present the performance analysis and implementation of our approach to parallelizing K-Means clustering.
Accelerating microbiome research with OpenACC (Igor Sfiligoi)
Presented at OpenACC Summit 2020.
UniFrac is a commonly used metric in microbiome research for comparing microbiome profiles to one another. Computing UniFrac on modest sample sizes used to take a workday on a server class CPU-only node, while modern datasets would require a large compute cluster to be feasible. After porting to GPUs using OpenACC, the compute of the same modest sample size now takes only a few minutes on a single NVIDIA V100 GPU, while modern datasets can be processed on a single GPU in hours. The OpenACC programming model made the porting of the code to GPUs extremely simple; the first prototype was completed in just over a day. Getting full performance did however take much longer, since proper memory access is fundamental for this application.
This document summarizes Rajesh Gandham's PhD thesis defense on high-order numerical methods for ocean modeling applications. The thesis goals are to develop accurate PDE models, leverage many-core hardware architectures, and use efficient algorithm techniques. The document outlines work on two-dimensional shallow water modeling using discontinuous Galerkin methods, a pasiDG simulator implementation, and preliminary work on three-dimensional oceanic modeling. Performance results are shown for the pasiDG simulator running on GPUs and CPUs for the 2004 Indian Ocean tsunami test case.
As the leap second approaches, there is no better time to reflect on our misconceptions about time and numerals, past catastrophes and possible mitigation techniques.
The document describes a scenario to analyze access between a constellation of 40 low-Earth orbit satellites and a ground station located at MathWorks Natick. A satellite scenario is created in MATLAB and the constellation satellites are added along with their orbital parameters. Each satellite is equipped with a conical sensor camera with a 90-degree field of view. The ground station representing MathWorks Natick is also added with a minimum elevation angle of 30 degrees. Access analysis is performed between each camera and the ground station to determine the times each camera can photograph the site. The results show the start and end times of access intervals for each camera over the 6-hour period from 1:00 PM to 7:00 PM UTC on May 12,
The document proposes a new symmetric key cryptographic algorithm called RASS that uses bitwise operations for encryption and decryption. Some key features include using two 16-bit keys to encrypt different parts of the plaintext concurrently via separate threads for improved security and performance. The algorithm uses both linear and "crisscross" XOR operations on 16-bit blocks. Performance analysis shows the algorithm has linear time complexity and outperforms previous similar bit-level algorithms.
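The abstract does not give RASS's exact construction, but the flavor of keying two halves of the plaintext independently over 16-bit blocks can be sketched as follows. This is an illustrative XOR toy, not the actual RASS algorithm; in RASS the two independently keyed halves are what allows encryption by two concurrent threads:

```python
def xor_crypt(data, key1, key2):
    """Illustrative two-key XOR over 16-bit blocks: the first half of
    the blocks is XORed with key1 and the second half with key2. XOR
    is an involution, so the same function encrypts and decrypts."""
    if len(data) % 2:
        data += b"\x00"            # pad to whole 16-bit blocks
    blocks = [int.from_bytes(data[i:i + 2], "big")
              for i in range(0, len(data), 2)]
    half = len(blocks) // 2
    out = [b ^ (key1 if i < half else key2)
           for i, b in enumerate(blocks)]
    return b"".join(b.to_bytes(2, "big") for b in out)
```

Each block is touched exactly once, which is where the linear time complexity claimed in the abstract comes from.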
Introduction to PyTorch
Explains PyTorch usage through a CNN example.
Describes the PyTorch modules (torch, torch.nn, torch.optim, etc.) and the usage of multi-GPU processing.
Also gives examples of Recurrent Neural Networks and Transfer Learning.
This document is an update of the one from “fpgax February 2, 2019”:
https://www.slideshare.net/ryuz88/lut-network-fpgx201902
Japanese Version
https://www.slideshare.net/ryuz88/lutnetwork-revision2
BinaryBrain
https://github.com/ryuz/BinaryBrain
FCN-Based 6D Robotic Grasping for Arbitrary Placed Objects (Kusano Hitoshi)
This is the slide used for IEEE International Conference on Robotics and Automation (ICRA) 2017, Workshop on Learning and Control for Autonomous Manipulation Systems on June 2nd, 2017.
The document analyzes and compares implementations of Dijkstra's algorithm for finding shortest paths in graphs using logic programming and standard graph theory. It finds that while the logic programming implementation uses less memory, it is slower than the graph theory implementation, especially for large graphs. For most applications, the graph theory implementation is superior due to its faster speed and greater capabilities, though logic programming may be preferable for working with ontological knowledge bases. Testing on real airline route data showed the graph theory version was several times faster.
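For reference, the graph-theory side of the comparison amounts to the textbook heap-based Dijkstra; a minimal Python sketch (not either of the document's two implementations, and the example graph is invented):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps each node to a
    list of (neighbor, weight) pairs with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue               # stale heap entry, already improved
        for nbr, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist
```

The binary heap gives O((V + E) log V) behavior, which is the asymptotic edge the graph-theory implementation holds over naive logic-programming search on large graphs.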
Parallel Implementation of K-Means Clustering on CUDA (prithan)
K-Means clustering is a popular clustering algorithm in data mining. Clustering large data sets can be time consuming, and in an attempt to minimize this time, our project is a parallel implementation of the K-Means clustering algorithm on CUDA using C. We present the performance analysis and implementation of our approach to parallelizing K-Means clustering.
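The serial algorithm being parallelized is Lloyd's k-means; a minimal Python sketch on 2-D points (the deterministic seeding from the first k points is a simplification for this sketch, not the project's initialization):

```python
import math

def kmeans(points, k, iters=50):
    """Lloyd's algorithm on 2-D points. The assignment step, nearest
    centroid per point, is the part a CUDA version typically runs
    with one thread per point."""
    # simple deterministic seeding for the sketch; real code would
    # use k-means++ or random restarts
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids, clusters
```

The per-point distance computations are independent, which is what makes the assignment step embarrassingly parallel on a GPU; the centroid update then needs a reduction per cluster.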
The document discusses utilizing spatiotemporal data from IoT devices in Redis. It proposes using a technique called "ST-coding" to encode location and timestamp data into a single code. This addresses two problems: 1) ST range queries were slow due to searching many keys; and 2) data insertion was inefficient due to load concentration on a single Redis server. By splitting the ST-code into a "PRE-code" and "SUF-code", ST range queries can be performed on a single key, avoiding use of the slow KEYS command. This improves query performance and distributes load across Redis servers.
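The summary does not spell out the bit layout, but the idea can be sketched as follows, assuming a Morton-style interleave of quantized latitude, longitude, and timestamp; the quantization ranges and the 24-bit PRE-code split below are illustrative assumptions, not the deck's actual parameters:

```python
def interleave_bits(values, nbits):
    """Interleave the bits of several integers (Morton/Z-order style),
    so values close in every dimension share a long common prefix."""
    code = 0
    for bit in range(nbits - 1, -1, -1):
        for v in values:
            code = (code << 1) | ((v >> bit) & 1)
    return code

def st_encode(lat, lon, ts, nbits=16):
    """Quantize lat/lon/time and interleave into one ST-code.
    The quantization ranges here are illustrative assumptions."""
    qlat = int((lat + 90.0) / 180.0 * ((1 << nbits) - 1))
    qlon = int((lon + 180.0) / 360.0 * ((1 << nbits) - 1))
    qts = ts & ((1 << nbits) - 1)
    return interleave_bits((qlat, qlon, qts), nbits)

def split_st_code(code, total_bits=48, prefix_bits=24):
    """Split an ST-code into a PRE-code (used as the Redis key) and a
    SUF-code (stored as a member under that key)."""
    suffix_bits = total_bits - prefix_bits
    pre = code >> suffix_bits
    suf = code & ((1 << suffix_bits) - 1)
    return pre, suf
```

With this split, an ST range query only has to touch the handful of keys whose PRE-code falls in the queried range, rather than scanning the keyspace with KEYS, and the PRE-code also determines which Redis server in a cluster holds the data.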
Exploring Parallel Merging in GPU-Based Systems Using CUDA C (Rakib Hossain)
We present a program implemented to execute the adaptive merge sort algorithm in parallel on a GPU-based system. The parallel implementation is used to get better runtime performance than the serial implementation: it executes independent operations in parallel using the large number of cores in a GPU-based system. Results from the parallel implementation of the algorithm are given and compared with its serial implementation on a runtime basis. The parallel version is implemented with the CUDA platform on a system based on an NVIDIA GPU (GTX 650).
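The overall pattern, sorting independent runs and then merging pairs of runs level by level, can be sketched with CPU threads standing in for GPU cores (a CPU illustration of the structure, not the paper's CUDA code):

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    """Standard two-way merge of sorted runs."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def parallel_merge_sort(data, workers=4):
    """Split the input into runs, sort each run, then merge pairs of
    runs level by level; the merges within one level are independent,
    which is what maps onto parallel hardware."""
    if not data:
        return []
    chunk = max(1, (len(data) + workers - 1) // workers)
    runs = [sorted(data[i:i + chunk]) for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(runs) > 1:
            pairs = [(runs[i], runs[i + 1]) for i in range(0, len(runs) - 1, 2)]
            merged = list(pool.map(lambda p: merge(*p), pairs))
            if len(runs) % 2:
                merged.append(runs[-1])   # odd run carried up unchanged
            runs = merged
    return runs[0]
```

On a GPU the interesting part is further splitting each individual merge across threads; this sketch only shows the coarse-grained level-by-level structure.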
Splunk has helped MetroPCS speed up troubleshooting and launch new products faster by allowing them to ingest and analyze call detail records and other network data. It was deployed within 2 weeks and has provided unexpected benefits like subpoena compliance and understanding overall system health. Splunk insights have helped MetroPCS optimize call routing to save hundreds of thousands in costs.
How Do ‘Things’ Talk? - An Overview of the IoT/M2M Protocol Landscape at IoT ... (Christian Götz)
The document discusses several protocols for Internet of Things (IoT) communication including CoAP, HTTP, XMPP, and MQTT. It provides overviews of each protocol, including key features such as message formats, flows, implementations, and example usage scenarios. While each protocol has advantages for different IoT applications, the document concludes that there is no single solution and protocols need to coexist based on factors like device constraints, network reliability, data rates, and processing needs.
The document summarizes the use of the Sector and Sphere cloud computing software on the Open Cloud Testbed for the SC08 Bandwidth Challenge. Key points include:
- Sector is a distributed storage system and Sphere simplifies distributed data processing using a map-reduce model.
- The Open Cloud Testbed provided 101 nodes across 4 locations for running applications like TeraSort (sorting 1TB of data) and CreditStone (analyzing 3TB of credit card transactions).
- Sector/Sphere applications achieved transfer rates of up to 20Gbps for TeraSort and 7.2Gbps for CreditStone, utilizing the distributed resources for large-scale data processing.
DTrace was used to diagnose and address performance problems with an NFS server running OpenZFS. DTrace probes were added to measure NFS operation latency and identify where CPU time was being spent off-CPU. This revealed that sync writes were taking over 1 second in some cases due to throttling, and the ZFS write lock was a bottleneck. The write throttle was re-written and inefficiencies removed from the locking, dramatically improving performance. The key lessons were to identify the real problem, not just reproductions, iterate with the right tools and questions, and don't hide problems from customers.
Accelerating analytics on Sensor and IoT Data (Keshav Murthy)
Informix Warehouse Accelerator (IWA) has dramatically improved traditional data warehousing performance. Now IWA accelerates analytics over sensor data stored in relational and time-series form.
Oracle Exadata is the equivalent of an F1 car in terms of performance, but are you sure your application is driving it at its full potential? A simple "lift & shift" approach to Exadata migration might miss significant opportunities for improvement. This session highlights a few examples where small changes dramatically improved application performance.
Beyond PHP - it's not (just) about the code (Wim Godden)
Most PHP developers focus on writing code. But creating Web applications is about much more than just writing PHP. Take a step outside the PHP cocoon and into the big PHP ecosphere to find out how small code changes can make a world of difference on servers and network. This talk is an eye-opener for developers who spend over 80% of their time coding, debugging and testing.
Strata 2014 Talk: Tracking a Soccer Game with Big Data (Srinath Perera)
Mobile devices, sensors and GPS are driving the demand to handle big data in both batch and real time. This presentation discusses how we used complex event processing (CEP) and MapReduce-based technologies to track and process data from a soccer match as part of the annual DEBS event processing challenge. In 2013, the challenge included a data set generated by a real soccer match in which sensors were placed in the soccer ball and players' shoes. This session will review how we used CEP to implement the DEBS challenge and achieved throughput in excess of 100,000 events/sec. It also will examine how we extended the solution to conduct batch processing using business activity monitoring (BAM) within the same framework, enabling users to obtain both instant analytics and more detailed batch-processing-based results.
This paper proposes SC-GlowTTS, a zero-shot multi-speaker text-to-speech model based on flow-based generative models. SC-GlowTTS uses a speaker encoder trained on a large multi-speaker dataset to condition the model on speaker embeddings. It explores different encoder architectures and fine-tunes a GAN-based vocoder with predicted mel-spectrograms. Evaluation shows the model achieves promising results using only 11 speakers for training, comparable to a Tacotron 2 baseline trained with 98 speakers. This demonstrates potential for zero-shot TTS in low-resource languages.
This is a talk given by Badrish Chandramouli at Portland State University on May 30, 2017, and overviews his recent and ongoing research directions in the space of stream processing and big data analytics.
Apache con 2020 use cases and optimizations of iotdbZhangZhengming
This document summarizes a presentation about IoTDB, an open source time series database optimized for IoT data. It discusses IoTDB's architecture, use cases, optimizations, and common questions. Key points include that IoTDB uses a time-oriented storage engine and tree-structured schema to efficiently store and query IoT sensor data, and that optimizations like schema design, memory allocation, and handling out-of-order data can improve performance. Common issues addressed relate to version compatibility, system load, and error conditions.
How do Things talk? IoT Application Protocols 101Christian Götz
Analysts predict that in 2020 50 billion devices are connected to the internet. Together with the fact that more and more of these "things" are connected over the cellular network, new challenges are introduced to the communication of Internet of Things (IoT) and machine-to-machine (M2M) scenarios. There are a lot of protocols which claim to be ideal for these use cases, for example MQTT and COAP. In this talk you will get an overview of commonly used protocols and their underlying architectural styles. We will also look at advantages/disadvantages, use cases and the eco-system around them for Java developers.
Transport SDN & OpenDaylight Use Cases in KoreaJustin Park
- The document summarizes a presentation on Transport SDN and use cases in Korea. It introduces the speaker and their research team working on Transport SDN. It describes problems with current transport networks and requirements for SDN solutions. It provides an overview of the OpenDaylight platform and how it is being used to develop a Transport SDN controller in Korea called Calamari. It briefly describes implementations with MPLS-TP and testbeds involving multiple vendors. It outlines use cases at two Korean telecom companies, SKT and KT, and concludes with future plans to expand the SDN research.
Abstract:
Many machine learning algorithms can be implemented to run parallel operations on graphics cards. Deeplearning4j is a Java-based machine learning library, which includes implementations of many popular neural-network algorithms. Deeplearning4j uses uses a library called Nd4j to run matrix algebra operations on either CPUs or GPUs with NVIDIA’s CUDA API.
In this talk, I will show how to get a simple machine learning algorithm running on the GPU. I will also cover how to get started with CUDA development: how to get your code to run on the GPU, how to monitor the device, and how to write code to make effective use of parralelization.
Bio: Gary Sieling is a Lead Software Engineer at IQVIA, in Blue Bell, PA, with an interests in database technologies, machine learning, and software engineering practices. He has been involved in curating talks for a company lunch and learn program and the organizing committee for a tech conference. Building on these experiences, he built a search engine called FindLectures.com to help find great talks and speakers.
Paul Dix [InfluxData] The Journey of InfluxDB | InfluxDays 2022InfluxData
The document summarizes the evolution of InfluxDB from its initial version 1.0 in 2013 to the current version 2.0 called IOx. It started as a time series database that stored time series data and associated metadata. Over time it incorporated features like tags, line protocol, TSM storage engine, and an inverted index to improve querying capabilities. Version 2.0 refocused it as an all-in-one platform with a new query language called Flux, and aims to be cloud-first. The latest version IOx leverages a columnar database and federated architecture to solve challenges of scale, providing SQL support and the ability to deploy on cloud or edge environments.
Similar to Real-Time Spatiotemporal Data Utilization For Future Mobility Services: Atsushi Isomura (20)
Redis Day Bangalore 2020 - Session state caching with redisRedis Labs
This document discusses using Redis caching to improve performance for the DBS Paylah mobile wallet application. Paylah aims to significantly increase its user base which will increase load on its backend systems. Caching application data and session state in Redis can reduce latency, improve responsiveness for users, and reduce costs by lowering load on legacy backend databases and mainframes. The document outlines some key Paylah use cases where caching transaction histories and account details in Redis would accelerate retrieval and improve the mobile experience by avoiding the need to access slower backend systems on each request.
Protecting Your API with Redis by Jane Paek - Redis Day Seattle 2020Redis Labs
The document discusses rate limiting and metering using Redis. It begins by introducing rate limiting and metering and why Redis is well-suited for these tasks. It then covers different Redis data structures that can be used, such as lists, hashes, sorted sets and strings. Common Redis commands for counting, setting keys and checking time to live are also presented. Different rate limiting design patterns and anti-patterns are described, including fixed window, sliding window and token bucket approaches. Finally, resources for further information are provided.
The Happy Marriage of Redis and Protobuf by Scott Haines of Twilio - Redis Da...Redis Labs
The document summarizes a presentation about using Protocol Buffers and Redis together. It discusses how Protocol Buffers provide strict data types, versioning, and serialization/deserialization benefits. It then outlines Redis key patterns using namespaces, versions, data categories and identifiers. Examples are provided to show how Protocol Buffers messages can be stored in Redis using these key patterns, including storing connections data in sets, sorted sets and individual messages. Benefits discussed include structure, readability, testing and abstraction.
SQL, Redis and Kubernetes by Paul Stanton of Windocks - Redis Day Seattle 2020Redis Labs
The document discusses common use cases for combining SQL, Redis, and Kubernetes including caching, session management, rate limiting, and data ingestion. It outlines how Kubernetes can be used for scaling microservices while Redis is used for data service scaling. The presentation proposes combining Redis, SQL Server, and Kubernetes with a proxy service, and describes using Redis for caching, session storage, and rate limiting of SQL data. It also discusses running Redis and front-end apps on Kubernetes and deploying SQL as a Kubernetes service through a proxy.
Rust and Redis - Solving Problems for Kubernetes by Ravi Jagannathan of VMwar...Redis Labs
This document discusses using Rust and Redis to build cloud native platforms. It first provides context about devops and the need to do more with less. It then discusses how platforms are becoming more distributed and Kubernetes upends distribution paradigms. The document dives into how Rust addresses issues like concurrency and systems programming. It also discusses how Redis can be used for caching, queues, streams and more. Finally, it mentions that Rust and Redis will be demonstrated.
Redis for Data Science and Engineering by Dmitry Polyakovsky of OracleRedis Labs
This document contains a presentation about using Redis for data science and engineering. It introduces the presenter and provides an agenda that covers using Redis for data science and data engineering. The presentation notes that Redis can be used as both a data store and job queue, has flexible data structures and is fast, though it uses RAM and cannot query by value. It also lists Python Pandas and includes a demo and links for further information.
Practical Use Cases for ACLs in Redis 6 by Jamie Scott - Redis Day Seattle 2020Redis Labs
Jamie Scott from RedisLabs presented on practical use cases for access control lists (ACLs) in Redis 6. The presentation covered new security features in Redis 6 including encryption in transit, key space and command restrictions, and multiple access control list users. It demonstrated how ACLs allow users to define access based on key labels and restrictions. ACLs can facilitate discretionary and mandatory access controls. The presentation showed examples of using ACLs to restrict user access by key labels and commands to enhance operational security.
Moving Beyond Cache by Yiftach Shoolman Redis Labs - Redis Day Seattle 2020Redis Labs
This document summarizes a presentation about Redis version 6 and beyond. Some key points include:
- Redis version 6 includes new features like ACL for security, client-side caching, diskless replication, and multi-threaded I/O.
- Redis is positioned as both a cache and a database due to its speed, data structures, and ability to handle complex data models through modules.
- Redis Enterprise provides additional capabilities like durability, high availability, geo-distribution, security and multi-tenancy.
- Modern data models in Redis modules include Streams, RediSearch, RedisGraph, RedisTimeSeries, RedisAI, RedisJSON and RedisBloom.
- RedisInsight is
Leveraging Redis for System Monitoring by Adam McCormick of SBG - Redis Day S...Redis Labs
The document discusses how Sinclair Broadcast Group leverages Redis for system monitoring of its content delivery network. It operates 193 news stations with 10,000 active pages daily and millions in archive. New stories are posted every 15 seconds and must be visible across its 1,000+ targets within 1 minute. Redis is used to track performance across the multi-level CDN and ensure service level agreements are met with real-time resolution and alerting. It provides a black box view of the audience experience and can scale monitoring to all relevant pages within 30 seconds. Redis acts as a distributed data store to parallelize the monitoring task across the large scale of the network.
JSON in Redis - When to use RedisJSON by Jay Won of Coupang - Redis Day Seatt...Redis Labs
The document summarizes a presentation about when to use the RedisJSON data type. It discusses how Coupang uses Redis extensively for their ad platform. It then compares the performance and memory usage of storing JSON data as strings, hashes, or using the RedisJSON data type. Benchmark results show RedisJSON can provide better performance for retrieving and updating JSON fields compared to strings and hashes, though it uses more memory. The document recommends using RedisJSON for smaller JSON payloads after benchmarking and memory monitoring.
Highly Available Persistent Session Management Service by Mohamed Elmergawi o...Redis Labs
The document discusses the challenges of building a highly available persistent session management service. It describes Zulily's legacy architecture which lacked high availability and required manual intervention. A new architecture is proposed using Redis for persistent storage, Dynomite for real-time replication across data centers, and a connection pooling proxy to improve efficiency and distribute load. The architecture provides high availability through replication, reduces overhead through connection pooling, and handles failures through consistent hashing and health checks. It was tested through simulations and showed a failure rate of only 0.42% during outages.
Anatomy of a Redis Command by Madelyn Olson of Amazon Web Services - Redis Da...Redis Labs
The document describes the process that a Redis command follows from the client side to the server side. On the client side, the command is sent over the network to the Redis server. On the server side, the command is read from the kernel buffers, validated, executed by calling the relevant command handler, and the response is written back to the client over the network. The core functions involved on the server side are ReadQueryFromClient(), ProcessInputBuffer(), ProcessCommand(), Call(), and handleClientsWithPendingWrites(). Redis 6.0 introduced I/O threads to handle reads and writes in parallel for improved performance while still maintaining Redis' single-threaded processing model.
Building a Multi-dimensional Analytics Engine with RedisGraph by Matthew Goos...Redis Labs
This document discusses MDmetrix, a healthcare data intelligence company that uses RedisGraph to provide flexible analysis of hospital data. RedisGraph is a graph database that represents data as nodes and relationships and uses an adjacency matrix and linear algebra to query the graph. MDmetrix models its healthcare data as a property graph in RedisGraph to allow for complex queries across different data dimensions like patients, facilities, procedures and drugs. RedisGraph allows MDmetrix to query the data more easily than traditional OLAP cubes or relational databases due to the semi-structured and flexible nature of the graph model.
RediSearch 1.6 by Pieter Cailliau - Redis Day Bangalore 2020Redis Labs
RediSearch 1.6 includes a new low-level API that allows other Redis modules to embed RediSearch indexing capabilities. It also introduces index aliasing and several performance improvements such as forked thread garbage collection. Based on benchmarks, RediSearch 1.6 shows 48-73% better performance than version 1.4, particularly during high update rates where it maintains more stable read latencies.
RedisGraph 2.0 by Pieter Cailliau - Redis Day Bangalore 2020Redis Labs
RedisGraph 2.0 provides significant improvements including:
- Full text search support through embedded RediSearch 1.6 enabling graph-aided search.
- Support for returning full graph responses to enable better visualization.
- Broad support for Cypher including triadic closure and new graph-aided search capabilities.
- Performance improvements of up to 3.7x faster operations per second and 3.9x faster query times compared to RedisGraph v1.2.
- Support for benchmarking including the LDBC benchmark.
RedisTimeSeries 1.2 by Pieter Cailliau - Redis Day Bangalore 2020Redis Labs
RedisTimeSeries is a time-series database that provides compression to reduce memory usage by up to 98% and improve performance. The RedisTimeSeries 1.2 release includes compression algorithms based on a Facebook paper that provide stable ingestion times independent of the number of data points. It also includes a reviewed API with performance improvements and clearer functionality. Performance testing showed ingestion throughput improved by 2-3% and query performance increased from 15-70% with the new release compared to the previous version.
RedisAI 0.9 by Sherin Thomas of Tensorwerk - Redis Day Bangalore 2020Redis Labs
This document summarizes RedisAI 0.9 and its capabilities for model deployment and benchmarking. It introduces RedisAI's new tensor data type and ability to deploy models to CPU and GPU. It then discusses AIBench, a tool developed to benchmark AI serving solutions like RedisAI, TensorFlow Serving, and REST APIs. The benchmarks show RedisAI providing 5.5x and 2.5x more inferences than REST APIs and TensorFlow Serving respectively, due to its data locality. The document concludes by mentioning RedisAI's integration with MLFlow for model deployment with a single command.
Rate-Limiting 30 Million requests by Vijay Lakshminarayanan and Girish Koundi...Redis Labs
The document discusses how Freshworks uses Redis Labs to rate limit 30 million API requests per day through their API gateway called Fluffy. Fluffy stores rate limit policies and maintains counters to track requests. Redis Labs allows Fluffy to easily scale to handle the high volume of requests by providing a fast, in-memory data store for managing rate limiting counters. The system was able to successfully rate limit 30 million requests per day with Redis Labs.
Solving Complex Scaling Problems by Prashant Kumar and Abhishek Jain of Myntr...Redis Labs
Redis was used by Myntra to solve several complex scaling problems. It was used to build a scalable user segment service to support high read throughput of up to 5 million requests per minute with low latency. Redis allowed the service to scale beyond a single instance and included features like automatic backups and memory management. Redis also helped build a scalable mobile verification platform to reliably handle 100,000 requests per minute and scale to support higher future volumes. It was used as both a transient store and persistent backend. Finally, Redis locks helped build a scalable A/B testing platform by allowing experiments to be created and updated in an orderly concurrent fashion.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
13. 1-3. Requirements and current technology
1. Insert bunches of ST-data in real time (<10 ms): over 20M rec/s[1]
2. Search by ST-range query in real time (<100 ms): lng: x1~x2, lat: y1~y2, time: t1~t2
3. Distribute data equally regardless of density changes
- All requirements must be satisfied
No matured technology could satisfy all of these requirements.
[1] : Fuji Keizai Marketing Research "Connected car related markets and telematics strategy 2017" (estimation only in Japan)
(Diagram: Cars → Data store → Apps; apps issue ST-range queries and receive Values)
14. 1-4. Which data store to use?
We searched for…
- blazingly fast performance
- geo features
- secondary indexing
- data distribution
Of course we selected "Redis".
We studied talks from RedisConf (redisconf17): using "Geohash-encoding" & "Sorted-set" enables ST-data management in Redis.
18. 2-2. What's "Geohash"?
Useful feature: prefix match = range query of longitude & latitude.
(Diagram: the map is split recursively into a binary grid; each cell's code extends its parent's prefix, e.g. 1 → 10… → 1001… → 100110…)
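The prefix-match property above can be sketched with a minimal Geohash-style bit interleaver (an illustration, not the talk's implementation): longitude and latitude are each binary-subdivided, the bits are interleaved, and nearby points end up sharing a code prefix.

```python
# Minimal Geohash-style encoder: interleave longitude/latitude bits so
# that a shared prefix corresponds to a lng/lat bounding box.
def encode(lng, lat, bits=12):
    """Return `bits` interleaved bits (longitude bit first, as in Geohash)."""
    lng_lo, lng_hi = -180.0, 180.0
    lat_lo, lat_hi = -90.0, 90.0
    out = []
    for i in range(bits):
        if i % 2 == 0:                      # even positions: longitude bit
            mid = (lng_lo + lng_hi) / 2
            if lng >= mid:
                out.append("1"); lng_lo = mid
            else:
                out.append("0"); lng_hi = mid
        else:                               # odd positions: latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                out.append("1"); lat_lo = mid
            else:
                out.append("0"); lat_hi = mid
    return "".join(out)

# Two nearby points share a long prefix; a distant point does not.
sf1 = encode(-122.402, 37.798)
sf2 = encode(-122.410, 37.790)
tokyo = encode(139.767, 35.681)
assert sf1[:8] == sf2[:8]        # prefix match = same lng/lat cell
assert sf1[:2] != tokyo[:2]
```

Because the prefix of the code pins down a rectangle on the map, a range query over longitude and latitude becomes a plain string-prefix (or integer-range) match.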
19. 2-3. Insert/Search requirements
- Insert : longitude(x), latitude(y), time(t), and value
- Search : range query of location and time

x | y | t | value
37.798° | -122.402° | April 2nd 2019 14:10:15 | 30 km/h
… | … | … | …

Query : Search all values of…
- GEOHASH with prefix of 'x1y1…xqyq'
- TIMESTAMP between t1 and t2
(q : length of each dimension for prefix search)
20. 2-4. Possible Key-Value design

Pattern 1. Time key sorted by Geohash
- Key: Timestamp (string), Score: Geohash (int), Value: "ID, …"
>ZADD time_a geohash_a "ID, …"
(integer) 1
>GEOADD time_a geohash_a "ID, …"
(integer) 1

Pattern 2. Geohash key sorted by Time
- Key: Geohash (string), Score: Timestamp (int), Value: "ID, …"
>ZADD geohash_a time_a "ID, …"
(integer) 1

Either of them works fine.
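The two designs can be sketched with a plain dict of {key: {member: score}} standing in for Redis sorted sets (the key names time_a / geohash_a follow the slide; the car ID and values are made up for illustration):

```python
# In-memory stand-in for Redis sorted sets.
store = {}

def zadd(key, score, member):
    """Mimic ZADD key score member against the stand-in store."""
    store.setdefault(key, {})[member] = score

geohash_a = 0b010011      # integer Geohash used as a sorted-set score
time_a = 1554214215       # Unix timestamp used as a sorted-set score

# Pattern 1: Key = Timestamp (string), Score = Geohash (int)
zadd("time_a", geohash_a, "car42, 30km/h")

# Pattern 2: Key = Geohash (string), Score = Timestamp (int)
zadd("geohash_a", time_a, "car42, 30km/h")

assert store["time_a"]["car42, 30km/h"] == geohash_a
assert store["geohash_a"]["car42, 30km/h"] == time_a
```

In both patterns the same record is stored; what differs is which dimension becomes the key (and thus decides how many keys a range query must touch) and which becomes the score (and thus is searchable with a single ZRANGEBYSCORE per key).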
21. 2-5. How to search by range

Query : Search all values of…
- GEOHASH with prefix of 'x1y1…xqyq'
- TIMESTAMP between t1 and t2
(q : length of each dimension for query)

Pattern 1 (Key: Timestamp (string), Score: Geohash (int)):
>ZRANGEBYSCORE t1 x1y1…xqyq…00 x1y1…xqyq…11
>ZRANGEBYSCORE t1+1 x1y1…xqyq…00 x1y1…xqyq…11
>ZRANGEBYSCORE t1+2 x1y1…xqyq…00 x1y1…xqyq…11
>ZRANGEBYSCORE t1+3 x1y1…xqyq…00 x1y1…xqyq…11
…
>ZRANGEBYSCORE t2 x1y1…xqyq…00 x1y1…xqyq…11
(query by circle : GEORADIUS instead of ZRANGEBYSCORE)

Pattern 2 (Key: Geohash (string), Score: Timestamp (int)):
>KEYS x1y1…xqyq*
(return list[i] of all keys that start with x1y1…xqyq)
>ZRANGEBYSCORE list[0] t1 t2
>ZRANGEBYSCORE list[1] t1 t2
…
>ZRANGEBYSCORE list[i] t1 t2
2-6. Range query takes time

Simple test using 5 redis-servers
(concurrent connections : 256, number of values : 10 million, search only)

                         Pattern 1    Pattern 2
Turnaround time/query    1.3 s        535 s

Pattern 1 searches too many Keys : slow!
Pattern 2 relies on the KEYS command, which is dangerous[1] : far too slow!

[1] https://redis.io/commands/KEYS
2-7. Range query takes time

Even Pattern 1 issues one ZRANGEBYSCORE per timestamp key between t1 and t2:
>ZRANGEBYSCORE t1 x1y1…xqyq…00 x1y1…xqyq…11
>ZRANGEBYSCORE t1+1 x1y1…xqyq…00 x1y1…xqyq…11
>ZRANGEBYSCORE t1+2 x1y1…xqyq…00 x1y1…xqyq…11
…
>ZRANGEBYSCORE t2 x1y1…xqyq…00 x1y1…xqyq…11

Turnaround time/query (Pattern 1) : 1.3 s
(simple test using 5 redis-servers; concurrent connections : 256, number of values : 10 million, search only)

Searching too many Keys is slow: it takes more than 1 s.
Let’s reduce the Keys.
Wait! One more problem remains!
2-8. Another problem?

Suppose that…
- Tons of cars send data continuously
- Applications require current data
- Multiple Redis servers (redis1 … redisN) are available

With Pattern 1 (Key = Timestamp), what will happen?
2-8. Load concentration (intensive access)

Cars say “We send current data!” and apps say “We need current data!”, so every request goes to the one Redis server holding the current-timestamp key: that server is busy while all the other servers sit idle.
2-8. Load concentration

Simple test using 24 redis-servers
(concurrent connections : 256, data insertion only)

(Chart: CPU usage (%) per server, servers 1 to 24; user/system usage spikes on a single server while the rest stay idle.)

We cannot use CPU resources efficiently.
2-9. Problems we need to solve
Problem 1.
- ST-range query is slow due to
- searching too many Keys
- using the “KEYS” command
Problem 2.
- ST-data insert is inefficient due to
- load concentration
3-1. Applying “ST-code”

Morton-curve transform of longitude, latitude, and timestamp (the timestamp axis spans from a minimum timestamp, through the current time, up to a maximum timestamp).

ST-code[1] : x1y1t1 x2y2t2 x3y3t3 … xnyntn
prefix match = range query

(Figure: the Morton curve threads the quadtree cells in Z order; cell labels such as 0, 1, 00…11, 100, 101, 1110, 101100 show the interleaving, with time as a third interleaved dimension.)

[1] Jan Jezek, “STCode: The Text Encoding Algorithm for Latitude/Longitude/Time”,
Springer International Publishing Switzerland, 2014
3-1. Applying “ST-code”

ST-code : x1y1t1 x2y2t2 x3y3t3 … xnyntn
Split it at position s (s : where you split):
- PRE-code : x1y1t1 … xsysts (expresses a WIDE ST range)
- SUF-code : xs+1ys+1ts+1 … xnyntn (expresses a NARROW ST range)

Key = PRE-code, Score = SUF-code, Value = “ID, …”:

>ZADD PRE-code_a SUF-code_a “ID5, …”
(integer) 1

Don’t make me use the KEYS command!
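A minimal sketch of the split, assuming the ST-code is held as a plain bit string of (x, y, t) triplets; the sample code and split point are illustrative.

```python
def split_st_code(st_code, s):
    """Split an interleaved x1y1t1...xnyntn bit string into
    PRE-code (first s triplets) and SUF-code (the rest)."""
    return st_code[:3 * s], st_code[3 * s:]

st_code = "101100011010"          # n = 4 triplets (illustrative)
pre, suf = split_st_code(st_code, 2)
print(pre, suf)                    # '101100' '011010'

# The insert then becomes a single ZADD, e.g. with redis-py:
#   r.zadd(pre, {"ID5, ...": int(suf, 2)})   # SUF-code as integer score
```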
3-1. Applying “ST-code”

Query : Search all values of…
- GEOHASH with prefix of ‘x1y1…xqyq’
- TIMESTAMP between t1 and t2

With Key = PRE-code and Score = SUF-code, the whole ST-range query is only one command:

>ZRANGEBYSCORE PRE-code_a
xs+1ys+1ts+1…xqyqtq…000 xs+1ys+1ts+1…xqyqtq…111

Very fast! Problem solved!?
(restriction : s < q ; s : where you split, q : length of each dimension for prefix search)
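The one-command query can be sketched by padding the query bits with 0s (lower bound) and 1s (upper bound) out to the stored SUF-code length; the bit strings and lengths here are illustrative.

```python
def suffix_bounds(query_bits, suf_len):
    """Min/max integer scores covering every SUF-code that starts
    with `query_bits` (pad with 0s for the floor, 1s for the ceiling)."""
    lo = int(query_bits.ljust(suf_len, '0'), 2)
    hi = int(query_bits.ljust(suf_len, '1'), 2)
    return lo, hi

lo, hi = suffix_bounds("0110", 8)   # query prefix inside one PRE-code key
print(lo, hi)                        # 96 111

# With redis-py this is a single command per key:
#   r.zrangebyscore(pre_code, lo, hi)
```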
Problems we need to solve

Problem 1. (Solved by ST-code: “ZRANGEBYSCORE” searches only 1 Key)
- ST-range query is slow due to
- searching too many Keys
- using the “KEYS” command

Problem 2. (not yet)
- ST-data insert is inefficient due to
- load concentration
3-2. Limited node distribution

• Select multiple nodes based on the hashed value of the ST-code (PRE-code).
• Insert to “one” of the selected nodes.
• Search from “all” of the selected nodes.

Example: successive records for “San Francisco, 7:00” (7:00, 7:01, 7:02, 7:03, …) are spread over the selected nodes at insert time, and an ST-range query for “San Francisco, 7:00~7:01” reads exactly those nodes.

This avoids load concentration while keeping the search efficient.
# Works as above when applying the ST-code (PRE-code) as Key.
Problems we need to solve

Problem 1. (Solved by ST-code: “ZRANGEBYSCORE” searches only 1 Key)
- ST-range query is slow due to
- searching too many Keys
- using the “KEYS” command

Problem 2. (Solved by Limited node distribution: load is distributed)
- ST-data insert is inefficient due to
- load concentration
3-3. Architecture Overview

(A) ST-code & (B) Limited node distribution are applied.

1. calculate the ST-code from (time, lat, lng) … (A)
2. split the ST-code into PRE-code & SUF-code
3. calculate the hashed value of the PRE-code … (B)
4. calculate the insert/search node number
5. issue the Redis command on that node:
   - Cars (insert) : PRE-code ⇒ “Key”, SUF-code ⇒ “Score”, plus the value
   - Application (search) : PRE-code ⇒ “Key”, SUF-code ⇒ range query of “Score”
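The five steps can be sketched end to end; MD5 as the hash, the split point, and the node count are illustrative stand-ins, not necessarily the authors' choices.

```python
import hashlib

def route(st_code, s, n_nodes):
    """Steps 2-5: split the ST-code, hash the PRE-code, and derive
    the node number; PRE-code becomes the Key, SUF-code the Score."""
    pre, suf = st_code[:3 * s], st_code[3 * s:]            # (2) split
    h = int.from_bytes(hashlib.md5(pre.encode()).digest(), "big")  # (3) hash
    node = h % n_nodes                                      # (4) node number
    return node, pre, suf                                   # (5) Key / Score

node, pre, suf = route("101100011010", 2, 24)
print(node, pre, suf)
```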
4-2. Experimental conditions

          Concurrency (max)   Data size (KB)   Redis server nodes   “selected nodes” for proposed method
insert    640                 10               24                   8
search    320                 10               24                   8

Data inserted (10 million records) : NY Taxi open data(1)
- longitude / latitude : dense/sparse depending on area(2)
- time : current timestamp
- value : ID, speed, etc.

Data searched (100,000 queries) : time range : 15 min, area : 3 km²

(1) http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml
(2) https://toddwschneider.com/posts/analyzing-1-1-billion-nyc-taxi-and-uber-trips-with-a-vengeance/
5-1. ST-code generation ( stencode/stencode_fast.py)

# Normalize each of (lon, lat, time) to its min/max range, quantize
# to precision/3 bits, then interleave one bit per dimension.
input = [lon_input, lat_input, time_input]
maxmin = [(-180.0, 180.0), (-90.0, 90.0), (0.0, 2018304000.0)]

def st_encode_FAST(input, maxmin, precision=96):
    bins = []
    precision = precision // 3                 # bits per dimension
    for (i, m) in zip(input, maxmin):
        tmp = (i - m[0]) / (m[1] - m[0]) * (2 ** precision)
        tmp = format(int(tmp), 'b')
        bins.append('0' * (precision - len(tmp)) + tmp)   # zero-pad
    # x1y1t1 x2y2t2 ... : take one bit from each dimension in turn
    return ''.join(b1 + b2 + b3 for b1, b2, b3 in zip(bins[0], bins[1], bins[2]))

Much faster
5-2. Demo (console)

- Data insert (st_insert.py)
- Data search (st_search.py)

Client : st_insert.py / st_search.py on top of the redis client (PyPI “redis”), MW, OS
Server : redis server (redis, OS)

Both scripts use:
- key : PRE-code
- score : SUF-code
- value : ID, lat, lng, time
Limited node distribution (insert)

- Insert
1. Calculate the candidate nodes from the PRE-code
2. Insert to one of those nodes, selected randomly

N (number of nodes) = 5, D (number of distribution) = 3

combination_table
#     node1    node2    node3
1     redis1   redis2   redis3
2     redis1   redis2   redis4
…     …        …        …
NCD   redis3   redis4   redis5

Longitude(x), Latitude(y), Time(t)
→ ST-code = 3d_morton(x, y, t, precision=max_length)
→ i = hash(ST-code.PRE) mod NCD
→ candidates = combination_table(i)
→ random.choose(candidates) : one of redis1 … redis5

Define “max_length” beforehand:
- 63 bit : ~20 m × 10 m × 20 s cells
- 96 bit : ~20 cm × 10 cm × 0.2 s cells
Limited node distribution (search)

- Search
1. Calculate the candidate nodes from the PRE-code
2. Search all of those nodes

range of Longitude(x), Latitude(y), Time(t)
→ ST-code = 3d_morton(x, y, t, precision=search_length)
→ i = hash(ST-code.PRE) mod NCD
→ candidates = combination_table(i)

N, D, and the combination_table are the same as for insert.
“search_length” is calculated depending on the range width.
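The insert/search node selection above can be sketched as follows, with N = 5 and D = 3 as on the slides; MD5 as the hash function and the node names are assumptions.

```python
import hashlib
import itertools
import random

N, D = 5, 3
nodes = ["redis{}".format(i) for i in range(1, N + 1)]
# All N-choose-D rows of the combination table.
combination_table = list(itertools.combinations(nodes, D))

def candidates(pre_code):
    """Pick one row of the combination table from hash(PRE-code),
    so the same PRE-code always maps to the same D nodes."""
    h = int.from_bytes(hashlib.md5(pre_code.encode()).digest(), "big")
    return combination_table[h % len(combination_table)]

cand = candidates("101100")
insert_node = random.choice(cand)   # insert to ONE of the selected nodes
search_nodes = cand                 # search ALL of the selected nodes
print(cand, insert_node)
```

Because the mapping is deterministic, a search only has to fan out to the same D nodes the inserts were spread over, never to all N.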
Research background
Related Technology
2-1. Research Background
2-2. Research Positioning
(Diagram labels: array databases Rasdaman, SciDB, TileDB; Geo databases.)
2-3. Geospatial Database
2-4. JTS Topology Suite
(Calculation examples for polygons: polygon/polygon and polygon/face distance.)
[1] https://github.com/locationtech/jts
[2] https://www.gridgain.com/technology/apache-ignite
[3] http://www.geomesa.org/
[4] http://geoserver.org/
2-6. Spatio-temporal Database
3-1. Overview
3-2. Spatiotemporal index
3-4. Proposed Method
4-1. Verification : Data structure
4-2. System configuration
(Comparison data store: Apache Ignite.)
4-3. Conditions
4-4. Results : insert performance
(Chart: proposed vs. current.)
4-5. Results : search performance
(Chart: proposed vs. current.)
5-1. Space filling curves (SFC)