ION Tokyo, 17 November 2014: slides presented by Kaname Nishizuka during the "IPv6 in Asia Pacific: Untangling the Web" panel.
IPv6 has been available from the Regional Internet Registries for over 15 years. How do different types of organizations formulate their plans to deploy IPv6, and what’s taking so long? Will reliance on Carrier Grade NATs (CGNs) affect the development and accessibility of the Internet in Asia Pacific?
Panelists will discuss IPv6 vs. CGNs: issues, problems, and solutions. The discussion will also encompass panelists' experiences deploying IPv6 in Asia Pacific: the technical, organizational, and political challenges they face, and the current status of their deployments.
The Internet has been evolving. One of the major reasons IPv6 should be deployed now is to restore the end-to-end principle of the Internet. However, as the Internet has changed dramatically in the last decade, returning to its original form is very difficult. In this presentation, I will discuss what is happening today and how we can best sustain and improve the Internet.
We have published a document, "A Global Data Infrastructure for Data Sharing Between Businesses".
This document introduces current trends in the implementation of digital management tools that support cross-border data sharing between businesses, which will be indispensable for future business transformations and pandemic responses. Today we find ourselves at the confluence of multiple evolving global trends: the emergence of new data-driven business models, the expansion of B2B platform businesses, the accelerating pace of digital transformation, growing expectations for the fulfillment of the Sustainable Development Goals (SDGs) and other social needs, the rise of New Glocalism, the growth of stakeholder capitalism, and the Great Reset. In this article, we discuss the challenges of establishing a global data infrastructure for data sharing between businesses as a key ICT infrastructure for building a next-generation society, and the efforts being made to address those challenges.
NTT Laboratories
J. Arai, S. Yagi, H. Uchiyama, T. Honjo, T. Inagaki, K. Inaba, T. Ikuta, H. Takesue, K. Horikawa
This material is a poster exhibited at the ITBL community booth at SC19 (the International Conference for High Performance Computing, Networking, Storage, and Analysis, 2019).
NTT Software Innovation Center
Hiroki Miura, Kota Tsuyuzaki, Junya Arai, Kohei Yamaguchi, Kengo Okitsu, Shinji Morishita
This material is a poster exhibited at the ITBL community booth at SC19 (the International Conference for High Performance Computing, Networking, Storage, and Analysis, 2019).
NTT is developing a hybrid sourcing approach to address Japan's projected shortage of 430,000 IT engineers by 2025, known as the "Digital Cliff 2025". The approach combines crowdsourcing, using platforms such as Topcoder, with innersourcing, by decomposing projects into microtasks that can be completed by both internal and external workers. In a case study, a B2B application was developed with this hybrid model, with crowdsourced and innersourced workers completing 49% and 39% of the code, respectively. The aim is a framework that promotes hybrid sourcing within NTT and helps organizations overcome skills shortages and achieve digital transformation.
1) The document proposes a method for layer-level pruning of ResNet models to reduce computation costs during inference.
2) It attaches a weight to each Residual Unit to determine its importance, allowing less important units to be erased. Units whose nonlinear maps carry small absolute weight values can be erased with little impact on accuracy.
3) The method alternates training and erasing: after each training pass, it erases the least important units and retrains, repeating until accuracy starts to drop, pruning the model while maintaining performance (see the sketch below).
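As a rough illustration of how such weighted Residual Units and layer erasure might look, here is a minimal PyTorch-style sketch, not the document's actual implementation; the module names, branch structure, and threshold value are assumptions:

```python
import torch
import torch.nn as nn

class WeightedResidualUnit(nn.Module):
    """Residual unit whose nonlinear branch F(x) is scaled by a learnable weight w.
    (Illustrative; the document's exact formulation may differ.)"""
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.w = nn.Parameter(torch.ones(1))  # importance weight on the nonlinear map

    def forward(self, x):
        return x + self.w * self.branch(x)  # y = x + w * F(x)

def erase_unimportant_units(module, threshold=0.05):
    """Replace units whose |w| is below the threshold with identity maps (layer erasure)."""
    for name, child in module.named_children():
        if isinstance(child, WeightedResidualUnit) and child.w.abs().item() < threshold:
            setattr(module, name, nn.Identity())  # erased unit: y = x
        else:
            erase_unimportant_units(child, threshold)
```

The pruning loop would then alternate between training the network and calling erase_unimportant_units, stopping once validation accuracy begins to drop.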
Edge computing addresses IoT deployment issues such as data privacy and data volume. It also lets companies capture valuable customer and product data themselves rather than relying on the web giants. For CIOs, edge computing influences strategy around data infrastructure, organization, and IT architecture: shifting from offline to real-time analytics, from human-readable to machine-readable formats, and from app-centric to data-centric designs.
BuildKit is a next-generation build system that provides efficient caching, multi-stage builds, and secure access to private assets, without requiring root privileges. It can be deployed on Kubernetes as a DaemonSet, or as a StatefulSet to benefit from caching. Build definitions can be supplied via Dockerfiles, Buildpacks, or CRDs such as Tekton's to build images on Kubernetes nodes and push them to a remote registry. With a StatefulSet, consistent hashing routes repeated builds to the same pod, so they always hit the fast daemon-local cache.
The document discusses utilizing spatiotemporal data from IoT devices in Redis. It proposes using a technique called "ST-coding" to encode location and timestamp data into a single code. This addresses two problems: 1) ST range queries were slow due to searching many keys; and 2) data insertion was inefficient due to load concentration on a single Redis server. By splitting the ST-code into a "PRE-code" and "SUF-code", ST range queries can be performed on a single key, avoiding use of the slow KEYS command. This improves query performance and distributes load across Redis servers.
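As an illustration of the idea, here is a minimal sketch using the redis-py client; the actual bit layout of the ST-code and the PRE/SUF split in the document may differ, and the grid resolution, bit widths, and key names here are assumptions:

```python
import redis

BITS = 16      # bits per dimension (assumption)
PRE_BITS = 24  # high-order bits used as the key prefix (assumption)

def interleave3(a, b, c, bits=BITS):
    """Bit-interleave three integers into a single spatiotemporal code."""
    code = 0
    for i in range(bits):
        code |= ((a >> i) & 1) << (3 * i)
        code |= ((b >> i) & 1) << (3 * i + 1)
        code |= ((c >> i) & 1) << (3 * i + 2)
    return code

def st_code(lon, lat, ts, bits=BITS):
    # Quantize each dimension to `bits` bits (value ranges are illustrative).
    x = int((lon + 180) / 360 * ((1 << bits) - 1))
    y = int((lat + 90) / 180 * ((1 << bits) - 1))
    t = ts & ((1 << bits) - 1)
    return interleave3(x, y, t, bits)

def split(code, total_bits=3 * BITS, pre_bits=PRE_BITS):
    suf_bits = total_bits - pre_bits
    return code >> suf_bits, code & ((1 << suf_bits) - 1)  # (PRE-code, SUF-code)

r = redis.Redis()

def insert(device_id, lon, lat, ts):
    pre, suf = split(st_code(lon, lat, ts))
    # PRE-code selects the key (and hence the server in a cluster);
    # SUF-code orders entries within that key.
    r.zadd(f"st:{pre}", {f"{device_id}:{ts}": suf})

def range_query(pre, suf_lo, suf_hi):
    # One ZRANGEBYSCORE on a single key replaces a slow KEYS scan.
    return r.zrangebyscore(f"st:{pre}", suf_lo, suf_hi)
```

Because distinct PRE-codes map to distinct keys, a Redis cluster hashes them to different servers, spreading insertion load, while each range query touches a single key instead of scanning the keyspace with KEYS.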
The document discusses challenges in implementing persistent memory (PMEM) aware applications using the Persistent Memory Development Kit (PMDK). It describes how to use PMDK with Direct Access (DAX) filesystems and outlines challenges in rewriting PostgreSQL to use PMEM, including resizing checkpoint files and selecting appropriate sync functions for write-ahead log (WAL) files. Performance evaluation challenges are also discussed.
The document discusses applying RDMA (Remote Direct Memory Access) to improve the performance of distributed deep learning frameworks. It describes implementing RDMA in MXNet, a distributed deep learning framework built on a parameter-server model, reducing memory copies and network overhead. An optimized version achieved a 1.5x speedup over the initial RDMA implementation, but the existing ZeroMQ-based implementation was still faster; further RDMA optimizations are needed to fully realize its performance benefits.
This document summarizes a presentation on introducing the Persistent Memory Development Kit (PMDK) into PostgreSQL to utilize persistent memory (PMEM). The presentation covers: (1) hacking the PostgreSQL write-ahead log (WAL) and relation files to memory-copy directly to PMEM, (2) an evaluation of the hacks, which showed a 3% improvement in transaction performance and a 30% reduction in checkpoint time, and (3) tips for PMEM programming, such as cache flushing and avoiding volatile layers.
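As a purely conceptual sketch of what "avoiding the volatile layer" means for WAL writes: instead of appending records through write() and fsync(), records are memory-copied straight into a file mapped from a DAX filesystem. The actual hacks use PMDK's libpmem (e.g. pmem_map_file and pmem_memcpy_persist with CPU cache-flush instructions rather than msync); the Python mmap code below only mirrors the control flow, and the path and segment size are assumptions:

```python
import mmap
import os

WAL_PATH = "/mnt/pmem/wal"   # file on a DAX-mounted PMEM filesystem (assumption)
WAL_SIZE = 16 * 1024 * 1024  # fixed-size WAL segment (assumption)

fd = os.open(WAL_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, WAL_SIZE)
wal = mmap.mmap(fd, WAL_SIZE)  # with DAX this maps PMEM directly, bypassing the page cache
offset = 0

def append_record(rec: bytes):
    """Insert a WAL record by memory copy instead of write() + fsync()."""
    global offset
    wal[offset:offset + len(rec)] = rec  # direct store into the mapped region
    # PMDK would call pmem_persist() here (cache flush + fence);
    # msync via mmap.flush() is the portable but slower stand-in.
    wal.flush()
    offset += len(rec)

append_record(b"\x01example-wal-record")
```

The point is that the WAL insert becomes a memory copy plus a flush, replacing the usual buffered write path through the volatile layer.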