This document discusses optimizations for data loading in HPCC Systems using a new approach called iNFORMER. iNFORMER aims to eliminate bottlenecks in the current landing zone approach by allowing Thor nodes to directly read data records in their original format and location. It consists of modules like MATE to replace MapReduce, the Virtual Data Integrator (VDI) to provide a unified access API, and Data Storage Adapters (DSA) to interface with different storage formats. An initial feasibility study examines using this approach to efficiently load compressed files from Amazon S3 into Thor nodes on AWS for distributed decompression and spraying to DFS in parallel.
HPCC Systems Engineering Summit Presentation - Improving Thor Data Loading using Parallel Format-agnostic Direct Spraying
1. Improving Thor Data Loading using Parallel Format-agnostic Direct Spraying
Mohammad Rashti – RNET Technologies
2. Outline
Data Loading in HPCC Systems
Potential Optimizations
iNFORMER - The Big Picture of the SBIR project
iNFORMER Modules
The HPCC on AWS Case – S3 Parallel Data Access
Feasibility study and experiments
Ongoing and Future Work
4. Data Loading in HPCC Systems
In Thor, data first needs to be transferred to a single drop zone (the landing zone)
The landing zone node then sprays the data over the DFS on the Thor nodes
This imposes overhead on data loading
Landing zone and spraying can become a bottleneck
Especially for large data sets or partial loads
5. Optimizations for Spraying Process
Eliminate the landing zone bottleneck
Read data records directly from their original data format in their original place
Use distributed data handling
Use multiple Thor nodes to load (and potentially decompress) the data
Each Thor node can read its own data
Load the data directly into destination files on DFS
Optimize partial data loads
Add support for in-memory processing
Skip trips to disk for iterative computing
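The "each Thor node can read its own data" idea can be sketched as follows. This is an illustrative Python fragment, not HPCC code: each node derives its own contiguous byte range of a shared source file from the file size, the node count, and its own rank, so no central node has to hand out work. A real sprayer would then extend each range to the next record boundary.

```python
# Illustrative sketch: each of N nodes computes its own byte range of a
# shared source file locally, so every node can read its portion in
# parallel instead of funneling all data through a landing zone.

def byte_range(file_size, num_nodes, rank):
    """Contiguous, non-overlapping [start, end) byte range for node `rank`."""
    base, extra = divmod(file_size, num_nodes)
    # The first `extra` nodes take one extra byte each.
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return start, end
```

For a 10-byte file on 4 nodes this yields (0, 3), (3, 6), (6, 8), (8, 10): the ranges are disjoint and together cover the whole file.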
7. SBIR Project
This work is part of an SBIR project from the Department of Energy
Carried out in two phases
Phase I (9 months)
Feasibility study of the ideas and commercial potential
Design of the prototype
Phase II (2 years)
Prototype implementation
Commercialization of the product
Competitive renewal proposal submission
Phase I is currently in progress (ending in December)
Phase II may start in Q2 2015
8. iNFORMER’s General Architecture
iNFORMER is the ultimate product developed in this project
It is designed to provide:
Efficient batch and in-situ data analytics
Low-overhead native-format data access through a unified API
iNFORMER components can be integrated into HPCC Systems
Next slide shows a big-picture diagram of:
iNFORMER components (the dark boxes)
Where it can fit in HPCC Systems
9. iNFORMER in HPCC – The Big Picture
[Diagram: big-picture view of the iNFORMER components (shown as dark boxes) and their potential integration with HPCC. An ECL program runs on the Thor cluster; iNFORMER MATE handles data transformation, and the iNFORMER Virtual Data Integrator provides the access API over the DFS, any other file system, and various data stores, bypassing the landing zone. New modules and new links are distinguished from existing links.]
10. iNFORMER MATE
MATE is meant to replace MapReduce
Using the concept of Reduction Object
Replace map and reduce with generalized reduction
Avoid intermediate shuffle/sort of key/value pairs
Main benefits of the Reduction Object
Not tied to the limitations of the key/value model
More efficient data processing with less data movement
Lower memory requirements
Performance improvement: 1.5x to 20x
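The generalized-reduction idea can be sketched in Python. The names below (`ReductionObject`, `accumulate`, `merge`) are illustrative, not the actual MATE API: each task folds records directly into a local accumulator, and accumulators are merged at the end, so no intermediate key/value pairs are materialized, shuffled, or sorted. Word counting stands in for a generic reduction.

```python
# Sketch of a "reduction object" replacing map/reduce: tasks update a
# compact per-task accumulator in place instead of emitting key/value
# pairs, avoiding the intermediate shuffle/sort and lowering memory use.

class ReductionObject:
    """Per-task accumulator: word -> count."""
    def __init__(self):
        self.counts = {}

    def accumulate(self, record):
        # Generalized reduction: fold the record straight into the object.
        for word in record.split():
            self.counts[word] = self.counts.get(word, 0) + 1

    def merge(self, other):
        # Combine another task's accumulator into this one.
        for word, n in other.counts.items():
            self.counts[word] = self.counts.get(word, 0) + n
        return self

def generalized_reduction(partitions):
    """One ReductionObject per data partition (task), then a final merge."""
    result = ReductionObject()
    for part in partitions:
        local = ReductionObject()
        for rec in part:
            local.accumulate(rec)
        result.merge(local)
    return result.counts
```

For example, `generalized_reduction([["a b a"], ["b c"]])` returns `{"a": 2, "b": 2, "c": 1}` without ever building a list of (word, 1) pairs.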
11. Virtual Data Integrator (VDI)
VDI offers a unified API for native-format data access
Access the data in their original storage format
Improves the data access experience over various file systems and data formats
Parallel File Systems: Thor DFS, HDFS, PVFS, GPFS, etc.
Data Formats: Plain Text, XML, NetCDF, HDF5, etc.
Eliminates the need to transform the entire data into a platform-specific format (e.g., DFS)
Especially for in-memory processing
VDI has two parts:
Data Storage Adapter (DSA)
Virtual Storage API (VSA)
12. iNFORMER Virtual Data Integrator (VDI) with HPCC
[Diagram: the iNFORMER VDI and its potential integration with HPCC Systems. HPCC Systems' Thor calls the fixed Virtual Storage API (VSA), which is configured with FS/format specifics and dispatches to FS/format-customized Data Storage Adapter (DSA) modules. Data records flow directly from any file system (e.g., S3) or the HPCC DFS through the VSA (new data flow), replacing the existing data flow of data copy and data transformation through the drop (landing) zone.]
13. Data Storage Adapter (DSA)
A set of adapters that interface with various storage formats
Access the respective data records in their original format
Consolidates data integration process
Allows for partial data load/access
Eliminates the need for excessive data transformation
Data records on the distributed FS are extracted and forwarded to the analysis engine and back.
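As an illustration of what one such adapter might look like (a hypothetical sketch, not the actual DSA interface), the fragment below extracts records from XML data in its original format and hands them to the caller as plain dictionaries, so the analysis engine never sees format specifics:

```python
# Hypothetical DSA-style adapter for one format (XML): it knows the
# record layout and yields records directly from the raw bytes, with no
# prior transformation of the whole data set into a platform format.

import io
import xml.etree.ElementTree as ET

class XmlRecordAdapter:
    """Extracts <record>-like elements from XML data in situ."""
    def __init__(self, record_tag="record"):
        self.record_tag = record_tag

    def records(self, raw_bytes):
        # Stream-parse so a large file need not fit in memory at once.
        for event, elem in ET.iterparse(io.BytesIO(raw_bytes)):
            if elem.tag == self.record_tag:
                yield {child.tag: child.text for child in elem}
                elem.clear()  # free the subtree we just consumed
```

For instance, feeding it `b"<root><record><id>1</id></record></root>"` yields the single record `{"id": "1"}`.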
14. Some Internals of the DSA Component
[Diagram: internal architecture of the DSA file format adapter]
15. Virtual Storage API (VSA)
VSA is a unified FS/format agnostic interface:
The data access calls are not bound to a format
The storage format specifics will guide the VSA implementation to choose the right DSA module
VSA will have well-defined hooks for easy implementation of modules for new formats
A set of pre-implemented FS/format adapters will be developed.
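The "well-defined hooks" idea can be sketched as a small registry behind one fixed access call. All names here (`register_adapter`, `read_records`) are made up for illustration; the real VSA API may look quite different:

```python
# Sketch of a fixed, format-agnostic access API: callers use one
# read_records() call, and FS/format-customized adapter modules plug in
# behind it via a registration hook.

_ADAPTERS = {}

def register_adapter(fmt):
    """Hook for plugging in a new FS/format adapter module."""
    def wrap(cls):
        _ADAPTERS[fmt] = cls
        return cls
    return wrap

def read_records(raw_bytes, fmt, **config):
    """Fixed API call: the format specifics only select the adapter."""
    if fmt not in _ADAPTERS:
        raise ValueError("no adapter registered for format %r" % fmt)
    adapter = _ADAPTERS[fmt](**config)
    return list(adapter.records(raw_bytes))

@register_adapter("csv")
class CsvAdapter:
    """One pre-implemented adapter; others register the same way."""
    def __init__(self, delimiter=","):
        self.delimiter = delimiter

    def records(self, raw_bytes):
        for line in raw_bytes.decode().splitlines():
            if line:
                yield line.split(self.delimiter)
```

A caller then writes `read_records(data, "csv")` regardless of where the bytes came from; swapping the format string is the only change needed for a different adapter.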
16. The HPCC on AWS Case
S3 Parallel Data Access
17. Phase I – HPCC Feasibility Study
Efficient loading of AWS S3-resident compressed files into Thor
Data loading / spraying optimizations in this project
Distributed data load / decompress / read from S3 into the memory of Thor nodes
Distributed data spray to DFS
Each node should load its own portion of data to avoid shuffling (communication)
Use a zip index file to facilitate this
Study the potential for in-memory processing
Avoid multiple disk accesses
18. S3 to EC2 Distributed Data Load
Two potential plans for the proposed distributed S3 loading module
Test data: Elsevier sample data sets hosted on S3
XML-based records
Multi-entry zip files
[Diagram: two potential plans for moving data from Amazon S3 into Thor on EC2. Plan 1: S3 parallel load/decompress feeds a Kafka producer, followed by a Kafka-to-DFS spray. Plan 2: S3 parallel load/decompress followed by a memory-to-DFS spray. The current approach, shown for comparison, loads from S3 to the landing zone, then decompresses and sprays.]
19. Development Steps
Step I: Distributed Spraying (currently implemented)
Parallel loading, decompression and spraying of XML files from an S3 archive
Each process extracts an XML file data directly from the archive
One process per node
Load balancing applied among the processes
Message Passing Interface (MPI) used for inter-node communication and shuffling
Step II: Skipping the Shuffling and the Landing Zone (next step)
Each process reads its own relevant portion of the file directly from the archive and decompresses it on the fly.
Allows for partial data load from S3
May need full compression index for the archives
A separate module will be developed to do this
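The "load balancing applied among the processes" step in Step I could, for example, be a greedy assignment of archive members by uncompressed size. The sketch below is one reasonable policy, assumed for illustration rather than taken from the project's actual scheduler:

```python
# Sketch of greedy load balancing: assign each archive member (largest
# first) to the currently least-loaded process, so the one process per
# node ends up with roughly equal decompression work.

import heapq

def balance(sizes, num_procs):
    """sizes: {member_name: uncompressed_size}.

    Returns one member list per process, balanced greedily.
    """
    # Min-heap of (current_load, rank, members); rank breaks ties.
    heap = [(0, rank, []) for rank in range(num_procs)]
    heapq.heapify(heap)
    for name, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        load, rank, members = heapq.heappop(heap)
        members.append(name)
        heapq.heappush(heap, (load + size, rank, members))
    return [members for _, _, members in sorted(heap, key=lambda t: t[1])]
```

For instance, `balance({"a": 10, "b": 9, "c": 2, "d": 1}, 2)` gives process 0 the members `["a", "d"]` and process 1 `["b", "c"]`, with both loads equal to 11.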
20. Step I – Distributed Spraying
[Diagram: a multi-file zip archive on Amazon S3 is processed on four (4) EC2-based Thor nodes; each node extracts an XML file, decompresses it, and sprays (shuffles) the records to the DFS.]
21. Step I Results
[Chart: comparing sequential and parallel load/spray from S3 into Thor]
23. Future Development Steps
Next immediate steps (SBIR Phase I)
Complete spraying into Thor DFS
Evaluation using more data sets
Fully eliminate the scattering process by partial file reading
Design the zip indexing module
Full VDI module prototype design
Future (SBIR Phase II) Plans
S3 adapter module as part of the VDI
Expansion of the adapter module to support general non-S3 cases
Full integration with HPCC to bypass the landing zone
Support for in-memory spraying and data processing
24. Contact Info
Mohammad Rashti
240 W. Elmwood Dr., Dayton, OH
(937) 433-2886
mrashti@RNET-Tech.com