API Software and Development Toolkit as presented by Andy Newton at ARIN's Public Policy and Members Meeting in April 2014. All ARIN 33 presentations are posted online at: https://www.arin.net/ARIN33_materials
The document discusses various strategies for achieving high availability of web applications and databases. It covers evaluating business requirements, DNS configuration, using cloud infrastructure or owning hardware, basic setups with application and database servers, database replication and clustering options, load balancing tools for Linux and cloud environments, auto scaling features, and monitoring. The key strategies presented include replicating databases, load balancing web traffic, auto-scaling cloud resources, and configuring failover between redundant application and database servers.
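The failover strategy mentioned in the summary can be sketched as a health check that prefers the primary but falls back to a replica. This is an illustrative sketch, not code from the talk; the endpoint names and the health table are invented stand-ins for real TCP or SQL probes.

```python
# Hypothetical failover sketch: pick the first healthy database endpoint,
# preferring earlier (primary) entries over replicas.

def choose_endpoint(endpoints, check_fn):
    """Return the first endpoint for which check_fn(endpoint) is truthy."""
    for ep in endpoints:
        if check_fn(ep):
            return ep
    raise RuntimeError("no healthy database endpoint available")

# Simulated health table standing in for real connectivity probes.
health = {"db-primary": False, "db-replica-1": True}
active = choose_endpoint(["db-primary", "db-replica-1"], health.get)
# The primary failed its check, so traffic fails over to db-replica-1.
```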
IBM Aspera - Moving the world's data at maximum speed (Mohamed Morsi)
Is there anywhere in your business where delays in moving data are impacting a key business or IT process?
IBM® Aspera® solutions enable organizations to move, share and synchronize large files and data sets, digital assets and media quickly and securely. These highly scalable solutions are built to handle the largest data requirements at maximum speed, regardless of data size, type, distance or network conditions.
Hadoop Summit San Jose 2014 - Analyzing Historical Data of Applications on Ha... (Zhijie Shen)
Apache Hadoop YARN is the default platform for running distributed apps - batch and interactive apps as well as long-running services. A YARN cluster may run many apps of different frameworks, from different users, groups and organizations. It's of significant value to monitor and visualize what has happened to these apps, i.e., their application history, to glean important insights - how their performance changes over time, how queues get utilized, changes in workload patterns, etc. It's also useful to keep application history accessible whether apps have finished or failed for some reason, such as a master restart, crash or memory pressure. In this talk, we'll describe how YARN enables storage of all sorts of historical information, both generic and framework-specific, for any kind of app, and how YARN exposes the historical information and provides users with the tools to view it, conduct analysis, and understand various dimensions of YARN clusters over time. We'll cover a number of technical highlights, such as persisting information into a pluggable, reliable store like HDFS, establishing a history server that users can access securely via command-line tools, web and REST interfaces, and enabling apps to define and publish framework-specific information. Moreover, the talk will also brief developers and administrators on how to make use of this new YARN feature.
Amazon RDS makes it easy to set up, operate, and scale Oracle Database deployments in the cloud. In this webinar, we'll discuss practical ways of migrating applications to Amazon RDS for Oracle. Customer case studies will illustrate how customers moved to Amazon RDS for Oracle and how they benefited.
This document discusses improvements to Hive performance and functionality in the Stinger initiative. Stinger includes changes to Hive and a new project called Tez, with two main goals: improve Hive performance by 100x and extend Hive SQL for analytics. Stinger is divided into three phases, with phase 1 focusing on optimizations, phase 2 adding YARN resource management and Hive on Tez, and phase 3 adding a buffer cache and a cost-based optimizer. Hive 0.11 delivers performance gains through optimizations like improved map joins and collapsing jobs. It also introduces new technologies like Tez, ORC files, and vectorization. Standard queries now run much faster, with some seeing over 50x speedup. Future work will further reduce query startup time.
Migrating and Running DBs on Amazon RDS for Oracle (Maris Elsins)
The process of migrating Oracle DBs to Amazon RDS is quite complex. Some of the challenges are capacity planning, efficient loading of data, dealing with the limitations of RDS, provisioning instance configurations, and the lack of SYSDBA access to the database. The author has migrated over 20 databases to Amazon RDS and will provide insight into how these challenges can be addressed. Supporting the databases after migration is very different too, because SYSDBA access is not provided. The author will talk about his experience migrating to and supporting databases on Amazon RDS for Oracle from an Oracle DBA's perspective, and will reveal the problems encountered as well as the solutions applied.
AWS Database Services - Philadelphia AWS User Group, 4-17-2018 (Bert Zahniser)
The document summarizes a presentation on Amazon Web Services (AWS) database services. It provides an overview of AWS Relational Database Service (RDS) and other database offerings, including benefits of RDS like scalability and availability features. Specific RDS configurations, security options, monitoring, and pricing are also discussed. Non-relational database services and migration tools are briefly mentioned.
The Aspera Solution provides telecommunication companies with a complete portfolio of file transfer, distribution, synchronization, and automation software to:
- Systematically achieve maximum transfer speeds for big data, regardless of network conditions and transfer distance
- Centrally manage, monitor and control transfer activity, server infrastructure and bandwidth utilisation
- Automate file transfer workflows and schedule transfer activity and bandwidth availability
- Deliver uncompromising security and reliability
This document discusses Amazon Redshift, a fully managed data warehousing service. It provides petabyte-scale data warehousing capabilities with performance up to 3x faster and 80% lower cost than traditional data warehousing solutions. The document outlines use cases, architecture details, pricing and total cost of ownership, security features, integration options and best practices. It also shares customer examples and an ecosystem of partners building solutions on Amazon Redshift.
This document summarizes Aspera's solutions for enabling high-speed data transport over wide area networks (WANs) and directly to cloud object storage like Amazon S3. It discusses the challenges of moving big data files over WANs and to the cloud using standard TCP and HTTP. Aspera addresses these challenges with its fasp transport technology, which can achieve near line-rate throughput for any file size over any distance or network conditions. It describes Aspera on Demand which provides Aspera software on AWS for high-speed direct-to-S3 transfers at scale.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
HBaseConAsia2018 Track1-5: Improving HBase reliability at Pinterest with geo ... (Michael Stack)
This document discusses techniques used by Pinterest to improve the reliability of HBase and reduce the cost and complexity of backing up HBase data. It describes how Pinterest uses geo-replication across data centers to provide high availability of HBase clusters. It also details Pinterest's upgrade to their backup pipeline to allow direct export of HBase snapshots and write-ahead logs to Amazon S3, avoiding the need for an intermediate HDFS backup cluster. Additionally, it covers their use of an offline deduplication tool called PinDedup to further reduce S3 storage usage by identifying and replacing duplicate files across backup cycles. This combination of techniques significantly reduced infrastructure costs and backup times for Pinterest's critical HBase data.
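The offline-deduplication idea the abstract attributes to PinDedup can be illustrated with content hashing: files whose bytes hash to the same digest across backup cycles are duplicates, and all but one copy can be replaced with a reference. This is a minimal sketch of the general technique, not Pinterest's actual tool; the paths and contents are made up.

```python
import hashlib

def find_duplicates(files):
    """Group paths by content hash; groups with more than one path are
    duplicates. `files` maps path -> bytes, standing in for backup objects."""
    by_hash = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        by_hash.setdefault(digest, []).append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

backups = {
    "cycle1/region-a.hfile": b"rows-0001",
    "cycle2/region-a.hfile": b"rows-0001",   # unchanged since cycle1
    "cycle2/region-b.hfile": b"rows-0002",
}
dupes = find_duplicates(backups)
# One duplicate group: the region-a file repeated across two backup cycles.
```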
(MBL314) Build World-Class Cloud-Connected Products: Sonos (Amazon Web Services)
Sonos is a smart system of hi-fi wireless speakers and audio components. It unites your digital music collection in one app that you control from any device. Sonos leverages the Amazon Kinesis stream-processing platform to run near real-time streaming analytics on device data logs from connected Sonos hi-fi audio equipment. It analyzes usage, performance, quality logs, and other data feeds collected from Sonos-connected devices in near real-time to better understand its customer experience. In this session, Sonos will focus on the design and architecture considerations that drove their selection of AWS services for their platform, diving deep on Amazon Kinesis and Amazon DynamoDB. They will discuss architecture tradeoffs, such as Kinesis vs. Kafka, and how Sonos uses its device data to gain insights that differentiate it in the music industry.
Oracle Databases on AWS - Getting the Best Out of RDS and EC2 (Maris Elsins)
More and more companies consider moving all IT infrastructure to the cloud to reduce running costs and simplify management of IT assets. I've been involved in one such migration project to Amazon AWS. Multiple databases were successfully moved to Amazon RDS and a few to Amazon EC2. This presentation will help you understand the capabilities of Amazon RDS and EC2 when it comes to running Oracle databases, make the right choice between the two services, and size the target instances and storage volumes according to your needs.
DCPython: Architecture at PBS (Jun 7, 2011) (Drew Engelson)
Drew Engelson and Edgar Roman present on how PBS uses Python, Django, Celery, Solr and autoscales Amazon EC2 to power the highly trafficked http://www.pbs.org/ and related sites (such as http://video.pbs.org/).
Big Data Day LA 2015 - What's new and next in Apache Tez by Bikas Saha of Hor... (Data Con LA)
Apache Tez is a library to build data processing engines in Hadoop/YARN. It takes care of many common building blocks like scheduling, fault tolerance, speculation, security etc. so that the engine can focus on its core features. E.g. Apache Hive can focus on SQL optimization. There has been rapid adoption in projects like Hive, Pig, Flink, Cascading, Scalding and commercial products like Datameer and Syncsort. We will provide a brief overview of Tez and then look at new features for job monitoring in the Tez UI and performance debugging tools for Tez applications. Finally we will explore upcoming features like hybrid scheduling that open up new areas of performance and functionality.
Speaker:
Shaun Pearce, AWS Solutions Architect
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. We’ll discuss Amazon RDS fundamentals, learn about the six available database engines (with the seventh on the way), and examine customer success stories.
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M... (Amazon Web Services)
This document provides an overview of database migration using AWS Database Migration Service (DMS). It discusses how DMS helps provision databases quickly in the cloud with minimal downtime for scaling and patching. It also summarizes how DMS can automate migrations between on-premises and cloud databases with change data capture and replication. The document shares examples of how Expedia and Thomas Publishing have used DMS to migrate databases to Amazon Aurora. It concludes by listing resources available to customers for DMS and the AWS Schema Conversion Tool.
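The change-data-capture replication that the DMS abstract describes boils down to applying an ordered stream of insert/update/delete events to a target keyed by primary key. The sketch below shows that core idea only; the event shape is invented for illustration and is not the DMS wire format.

```python
# Hypothetical CDC apply loop: replay ordered change events onto an
# in-memory "table" (a dict keyed by primary key).

def apply_changes(table, events):
    for ev in events:
        if ev["op"] in ("insert", "update"):
            table[ev["key"]] = ev["row"]      # upsert the new row image
        elif ev["op"] == "delete":
            table.pop(ev["key"], None)        # tolerate already-gone rows
    return table

target = {1: {"name": "alpha"}}
events = [
    {"op": "insert", "key": 2, "row": {"name": "beta"}},
    {"op": "update", "key": 1, "row": {"name": "alpha-v2"}},
    {"op": "delete", "key": 2},
]
apply_changes(target, events)
# target is {1: {"name": "alpha-v2"}}: the insert was superseded by the delete.
```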
Cloud storage is a critical component of cloud computing, holding the information that applications use. Big data analytics, data warehouses, the Internet of Things, databases, and backup and archive applications all depend on some form of data storage architecture. Cloud storage is generally more reliable, scalable and secure than traditional on-premises storage systems.
AWS offers a complete range of cloud storage services to support application and archival compliance requirements. Choose from object, file and block storage services, as well as cloud data migration options, to start designing the foundation of your cloud IT environment.
Aspera on Cloud provides a secure file sharing and delivery solution that enables customers to automate file movement workflows across hybrid cloud environments. Key capabilities include high-speed transfer of large files between on-premises and cloud storage, real-time monitoring and reporting, centralized administration of hybrid infrastructures, and an automation application to schedule complex transfer workflows. The solution offers desktop, browser and mobile access along with APIs to integrate transfers and leverage cloud object storage.
Arrow Flight is a proposed RPC layer for Apache Arrow that allows for efficient transfer of Arrow record batches between systems. It uses GRPC as the foundation to define streams of Arrow data that can be consumed in parallel across locations. Arrow Flight supports custom actions that can be used to build services on top of the generic API. By extending GRPC, Arrow Flight aims to simplify the creation of data applications while enabling high performance data transfer and locality awareness.
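The "streams consumed in parallel across locations" pattern described above can be sketched without Arrow itself: a planner call returns several stream endpoints, and the client fetches them concurrently. All names and data here are illustrative stand-ins; the real API lives in `pyarrow.flight` (e.g. `FlightClient.get_flight_info` and `do_get`).

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for servers each holding some Arrow record batches.
DATASETS = {
    "grpc://node-1": [[1, 2], [3]],
    "grpc://node-2": [[4, 5]],
}

def get_flight_info(descriptor):
    """Planner step: which endpoints serve streams for this dataset?"""
    return list(DATASETS)

def do_get(endpoint):
    """Reader step: consume one endpoint's stream of record batches."""
    return [row for batch in DATASETS[endpoint] for row in batch]

# The client reads all streams in parallel, then combines the results.
with ThreadPoolExecutor() as pool:
    parts = pool.map(do_get, get_flight_info("sales"))
    rows = sorted(row for part in parts for row in part)
# rows == [1, 2, 3, 4, 5]
```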
For more training on AWS, visit: https://www.qa.com/amazon
AWS Loft | London - Deep Dive: Amazon RDS by Toby Knight, Manager Solutions Architecture, 18 April 2016
Hive & HBase for Transaction Processing, Hadoop Summit EU, Apr 2015 (alanfgates)
The document discusses using Hive, HBase, Phoenix, and Calcite to build a single data store for both analytics and transaction processing. It describes some recent improvements to Hive like LLAP (Live Long and Process) that aim to achieve sub-second query response times, as well as using HBase as the Hive metastore to improve performance.
Advanced data migration techniques for Amazon RDS (Tom Laszewski)
Migrating on-premises data from Oracle and MySQL databases to Amazon RDS for Oracle and MySQL. These techniques work for AWS EC2 as well. Scripts are included in the slides.
The document provides information about AWS services including EC2, S3, and CloudFront. It discusses EC2 instance types, pricing models, and storage options. It describes S3's 99.999999999% durability, its storage tiers (Standard, Infrequent Access, and Glacier), and encryption options. CloudFront is introduced as a CDN that caches content at edge locations to improve distribution.
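The eleven-nines durability figure can be made concrete with a little arithmetic: an annual loss probability of about 1e-11 per object means the expected number of objects lost per year is N * 1e-11 for N objects stored.

```python
# Back-of-the-envelope reading of 99.999999999% annual durability.
annual_loss_prob = 1 - 0.99999999999          # about 1e-11 per object
expected_losses = 10_000_000 * annual_loss_prob
# ~0.0001 objects/year: storing ten million objects, you would expect to
# lose a single object roughly once every 10,000 years on average.
```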
APNIC Infrastructure and Development Director Che-Hoo Cheng gives an overview of RPKI as another security consideration for peering at Peering Asia 2.0, held in Hong Kong from 24 to 25 October 2018.
The Next Generation Internet Number Registry ServicesMyNOG
This document provides an overview of registry services, including the Registration Data Access Protocol (RDAP) and the Resource Public Key Infrastructure (RPKI). RDAP is designed to replace the aging WHOIS protocol by providing structured query and response formats to enable automation. RDAP also supports access control, internationalization, redirection and extensibility. RPKI is a PKI framework that adds Internet number resource information to certificates to cryptographically validate resource ownership and authorization of routing announcements. It enables applications like route origin validation to secure the routing system. The document discusses how RDAP and RPKI work and provide benefits like improved security, automation and verification of registry data.
This document discusses API design fundamentals including REST constraints, developer experience, scalability, sustainability, and consistency. It reviews REST constraints like statelessness and uniform interfaces. It emphasizes designing for developer experience by making APIs easy to use and well documented. Other topics covered include resource modeling, collections, filtering, versioning, and using hypermedia to link related resources.
by Avijit Goswami, Sr. Solutions Architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
ThaiNOG Day 2019: Internet Number Registry Services, the Next GenerationAPNIC
APNIC Director General Paul Wilson gives a presentation on Internet number registry services - the next generation at ThaiNOG 2019, held with BKNIX 2019 in Bangkok, Thailand from 7 to 8 May 2019.
by Sid Chauhan, Solutions architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Building Scalable Big Data Infrastructure Using Open Source Software Presenta...ssuserd3a367
1) StumbleUpon uses open source tools like Kafka, HBase, Hive and Pig to build a scalable big data infrastructure to process large amounts of data from its services in real-time and batch.
2) Data is collected from various services using Kafka and stored in HBase for real-time analytics. Batch processing is done using Pig and data is loaded into Hive for ad-hoc querying.
3) The infrastructure powers various applications like recommendations, ads and business intelligence dashboards.
by Mamoon Chowdry, Solutions Architect
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll will learn how to get started, how to support applications, and how to scale.
APAN 50: RPKI industry trends and initiatives APNIC
APNIC Infrastructure and Development Director Che-Hoo Cheng gives an overview of the RPKI, why it is important, and how to create ROAs and ROVs to secure routing announcements.
This document discusses RESTful microservices and best practices for designing REST APIs. It covers topics like why REST is important for API design, common REST principles, naming conventions, resource relationships, security, versioning, documentation, and management of REST APIs. It also provides examples of how various companies implement practices like filtering, searching, paging, and error handling in their REST APIs. Finally, it discusses how the WebSphere Liberty application server supports REST APIs through features like API discovery and collective APIs.
Internet Routing Registry Tutorial, by Nurul Islam Roman [APRICOT 2015]APNIC
The document provides an introduction to the APNIC Routing Registry and Routing Policy Specification Language (RPSL). It discusses (1) the objectives of explaining concepts of the global Internet Routing Registry (IRR), outlining benefits of APNIC's registry, and discussing RPSL; (2) an overview of what an IRR is, how routing policies are represented, and how objects in the APNIC database are related and protected; and (3) how routing information is integrated into the APNIC database and distributed through the global IRR system.
Using Familiar BI Tools and Hadoop to Analyze Enterprise NetworksDataWorks Summit
This document discusses using Apache Drill and business intelligence (BI) tools to analyze network data stored in Hadoop. It provides examples of querying network packet captures and APIs directly using SQL without needing to transform or structure the data first. This allows gaining insights into issues like dropped sensor readings by analyzing packets alongside other data sources. The document concludes that SQL-on-Hadoop technologies allow network analysis to be done in a BI context more quickly than traditional specialized tools.
APNIC Director General Paul Wilson presents on the next generation of Internet number registry services, namely RDAP and RPKI at the 31st TWNIC OPM and TWNOG in Taipei, Taiwan from 27 to 28 November 2018.
- Apache Thrift is a cross-language services framework that allows for the easy definition of data types and remote procedure calls (RPCs).
- It uses an interface definition language (IDL) to define data types and services, and generates code in various languages to implement clients and servers.
- Apache Thrift supports a wide range of languages and transports, making it useful for building high-performance, scalable distributed applications and microservices.
This document provides an overview of the SESAM project, which aims to increase the usage and quality of an archive system for an energy company by automatically enriching document metadata and connecting documents to structured business data. It describes how metadata is extracted from source systems into a triple store using separate ontologies for each system. Documents can then be searched across systems and metadata can be translated between them. When archiving documents, additional metadata is automatically attached based on information from the triple store.
The document discusses the history and technical components of the World Wide Web. It describes how Tim Berners-Lee invented the World Wide Web in 1989-1990 at CERN as a system for simultaneously transferring text and graphics. In 1994, Mark Andreesen developed Mosaic, the first graphical web browser, which helped popularize the web. The core technical components that enable the web are discussed, including clients/browsers, servers, HTTP, HTML, URIs, and how they interact.
Using Familiar BI Tools and Hadoop to Analyze Enterprise NetworksMapR Technologies
This document discusses using Apache Drill and business intelligence (BI) tools to analyze network data stored in Hadoop. It provides examples of querying network packet captures, OpenStack data, and TCP metrics using SQL with tools like Tableau and SAP Lumira. The key benefits are interacting with diverse network data sources like JSON and CSV files without preprocessing, and gaining insights by combining network data with other data sources in the BI tools.
Robust WordPress Installation using L2MP StackAlex Bertens
This presentation talks about the benefits and performance improvements achieved when running WordPress in a L2MP stack. The presentation also covers the additional performance gains when adding Redis Database Caching and Security practices to use when running a Wordpress Instance.
2. Progress(ion)
• ARIN has limited Engineering resources
• Creating featureful APIs enables others to create good tools instead of relying on ARIN
– ARIN is dedicated to keeping these APIs stable and highly available so as to empower the community
• http://projects.arin.net
• arin-tech-discuss@arin.net
Legacy / Inherited vs. Programmatic / REST
6. Provisioning (Classic)
• Email templates are not going away
- usage is up
• Hand-editing of SWIP templates happens every day
• Templates can cheat by associating an email address
* Deactivate API Keys if you no longer need them.
7. Reg-RWS
• Very popular – usage greater than templates and continuing to grow
• XML over RESTful HTTP
• The only programmatic way to
– do simple reassigns of IPv6
– manage reverse DNS
– access ARIN X-* tickets
– manage Hosted CA ROAs in RPKI (new)
• https://www.arin.net/resources/restful-interfaces.html
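Since Reg-RWS is XML over RESTful HTTP authenticated by an API key, a client can be little more than URL construction plus XML parsing. A minimal sketch, assuming the production base URL https://reg.arin.net/rest and an apikey query parameter; the handle, key, namespace, and payload below are placeholders for illustration, not live data:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Assumed production base URL for Reg-RWS.
BASE = "https://reg.arin.net/rest"

def reg_rws_url(path, api_key):
    """Build a Reg-RWS request URL; the API key travels as a query parameter."""
    return f"{BASE}/{path}?{urlencode({'apikey': api_key})}"

url = reg_rws_url("net/NET-192-0-2-0-1", "API-0000-0000-0000-0000")

# A trimmed, illustrative response in the Reg-RWS XML style (the namespace
# and element names here are assumptions for this sketch):
sample = """<net xmlns="http://www.arin.net/regrws/core/v1">
  <handle>NET-192-0-2-0-1</handle>
  <netBlocks>
    <netBlock><startAddress>192.0.2.0</startAddress></netBlock>
  </netBlocks>
</net>"""

ns = {"r": "http://www.arin.net/regrws/core/v1"}
root = ET.fromstring(sample)
handle = root.find("r:handle", ns).text
```

In a real client the same URL would be fetched (or PUT/POST for updates) over HTTPS and the response parsed the same way.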
8. Testing Your Reg-RWS Code
• We offer an Operational Test & Evaluation environment for Reg-RWS
• Your real data, but isolated
– Helps you develop against a real system without the worry that real data could get corrupted
• https://www.arin.net/resources/ote.html
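One practical way to keep test code away from production is to make the endpoint a configuration choice that defaults to the sandbox. A small sketch; the OT&E hostname shown is an assumption for illustration, so check ARIN's OT&E page for the actual endpoint:

```python
import os

# Assumed hostnames: the production endpoint and its OT&E counterpart.
# Defaulting to OT&E keeps accidental runs away from real registry data.
ENDPOINTS = {
    "prod": "https://reg.arin.net/rest",
    "ote": "https://reg.ote.arin.net/rest",
}

def base_url(env=None):
    """Pick the Reg-RWS endpoint: explicit arg, then ARIN_ENV, then OT&E."""
    return ENDPOINTS[env or os.environ.get("ARIN_ENV", "ote")]
```

With this shape, promoting tested code to production is a one-variable change rather than an edit to every request.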
10. Bulk Whois
• You must first sign an AUP
– ARIN staff will review your need to access bulk Whois data
• Also requires an API Key
• More information
– https://www.arin.net/resources/request/bulkwhois.html
• Can be accessed RESTfully via www.arin.net
11. Whois & Whois-RWS
• Port 43
– Classic, but not formally structured/standardized, and everybody does it differently
• Whois-RWS
– XML and/or JSON over RESTful HTTP
– Only an ARIN “standard”
– Higher query load than Port 43
– https://www.arin.net/resources/whoisrws/index.html
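Unlike port 43 output, a Whois-RWS response is structured, so a client can request JSON and pick out fields directly. A minimal sketch: the URL pattern and Accept-header content negotiation follow Whois-RWS conventions, while the sample payload and its exact field names are illustrative assumptions (Whois-RWS JSON typically wraps text values in a "$" member):

```python
import json
from urllib.request import Request

def whois_rws_request(ip):
    """Build a Whois-RWS request; the Accept header selects JSON
    instead of the default XML representation."""
    return Request("https://whois.arin.net/rest/ip/" + ip,
                   headers={"Accept": "application/json"})

req = whois_rws_request("192.0.2.1")

# Trimmed, illustrative response shape for this sketch:
sample = '{"net": {"handle": {"$": "NET-192-0-2-0-1"}, "name": {"$": "TEST-NET-1"}}}'
net = json.loads(sample)["net"]
handle = net["handle"]["$"]
```

A real client would pass `req` to `urllib.request.urlopen` and decode the body before parsing.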
17. rdns – Manage Reverse DNS
$TTL 86400 ; 24 hours could have been written as 24h or 1d
$ORIGIN 136.136.192.IN-ADDR.ARPA.
@ 1D IN SOA ns1.example.com. mymail.example.com. (
2002022401 ; serial
3H ; refresh
15 ; retry
1w ; expire
3h ; minimum
)
IN NS ns1.example.com.
IN NS ns2.example.com.
; server host definitions
1 IN PTR ns1.example.com.
2 IN PTR www.example.com.
; non server domain hosts
3 IN PTR bill.example.com.
4 IN PTR fred.example.com.
19. RDAP
• Registration Data Access Protocol
– Upcoming IETF standard from the WEIRDS working group
• http://datatracker.ietf.org/wg/weirds/
– JSON over RESTful HTTP
– All 5 RIRs have RDAP pilots (and VeriSign, Afilias, & NeuStar)
• http://rdappilot.arin.net/rdapbootstrap
– ICANN is requiring it in new TLD contracts
• ICANN has also contracted with CNNIC to create an open source server for DNRs and RIRs and an open source client. Not yet available.
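Because RDAP is plain JSON over RESTful HTTP, consuming a response needs no special tooling. A minimal sketch of handling an IP-network object; the field names follow the JSON format the WEIRDS working group was defining, and the payload values are placeholders, not real registry data:

```python
import json

# A trimmed, illustrative RDAP IP-network response for this sketch:
sample = """{
  "objectClassName": "ip network",
  "handle": "NET-192-0-2-0-1",
  "startAddress": "192.0.2.0",
  "endAddress": "192.0.2.255",
  "ipVersion": "v4"
}"""

net = json.loads(sample)
# One-line summary built from the structured fields:
summary = f'{net["handle"]}: {net["startAddress"]}-{net["endAddress"]} ({net["ipVersion"]})'
```

This is the contrast with port 43 Whois: the same lookup against any RDAP server yields machine-readable fields rather than free-form text to scrape.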
20. ARIN’s RDAP Pilot & Code
• ARIN Registry Pilot
– http://rdappilot.arin.net/restfulwhois/rdap
• A pilot bootstrap server
– http://rdappilot.arin.net/rdapbootstrap
– Aim your RDAP client here and it will refer you to the proper RIR or DNR
• Code is open sourced @projects.arin.net
• NicInfo
– Command-line RDAP client
– The only RDAP client currently available
– Open sourced @projects.arin.net