Decentralized storage systems like IPFS and Swarm allow users to store and access files in a decentralized peer-to-peer manner without relying on centralized servers. IPFS in particular aims to build a better web by making files addressable through content hashes rather than locations and improving availability, security, and cost efficiency compared to HTTP. It works by breaking files into chunks that are distributed across the network and retrieved by hash rather than location. Basic IPFS commands demonstrated include adding files, pinning for local access, and downloading content from the decentralized network.
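The chunk-and-hash model described above can be sketched in a few lines of Python. This is purely illustrative: real IPFS uses multihash-encoded CIDs and a configurable chunker, not raw SHA-256 hex digests, but the principle — content is addressed and verified by its hash, not its location — is the same.

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # IPFS's default chunker uses 256 KiB blocks

def chunk_and_address(data: bytes, chunk_size: int = CHUNK_SIZE) -> dict:
    """Split data into fixed-size chunks, each addressed by its SHA-256 digest.
    Identical chunks collapse to one entry: content addressing deduplicates."""
    chunks = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        chunks[hashlib.sha256(chunk).hexdigest()] = chunk
    return chunks

def retrieve(chunks: dict, digest: str) -> bytes:
    """Fetch a chunk by content hash and verify it on arrival:
    the address itself proves the data has not been tampered with."""
    chunk = chunks[digest]
    assert hashlib.sha256(chunk).hexdigest() == digest
    return chunk
```

Because the hash is the address, any node holding the chunk can serve it, and the requester can verify it without trusting that node.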
IPFS is a distribution protocol that enables the creation of completely distributed applications through content addressing. A very ambitious open source project in Go, IPFS adopts a peer-to-peer hypermedia protocol to protect against a single point of failure. This presentation aims to highlight the design and ideas of IPFS and also touches upon a real world use case.
IPFS is a protocol designed to create a permanent, decentralized way of storing and sharing files that is faster, safer and more open. It aims to replace HTTP and build a better web for all. IPFS uses a fully distributed network in which each client acts as both client and server. Unlike HTTP, which downloads from a single source, it can distribute high volumes of data with high efficiency. Any data structure can be represented as a directed acyclic graph (DAG) in IPFS.
This presentation covers the hybrid cloud and the steps to implement it, starting from what cloud and hybrid cloud are and moving on to implementation. Many organisations now run hybrid clouds, and transitioning a traditional IT setup to a hybrid cloud model is no small undertaking, so it is worth understanding what it involves and how it is implemented.
IPFS is a peer-to-peer hypermedia protocol designed to preserve and grow humanity's knowledge by making the web more resilient and open. It uses content addressing to uniquely identify each file in a global namespace, connecting IPFS hosts to transfer data in a decentralized way. Data is stored in IPFS as chunks that are cryptographically hashed and given a content identifier to allow for permanent storage and versioning of files. While IPFS promises advantages over the current HTTP system like bandwidth savings and preservation of data, challenges include a lack of economic incentives, unreliability for private data, and inability to verify data integrity without a solution like Filecoin.
The document provides an overview of the InterPlanetary File System (IPFS) and its key components. IPFS aims to create a distributed file system that addresses issues with the existing internet such as bandwidth, latency, offline support, and data security. It utilizes various technologies including distributed hash tables (DHTs), BitTorrent exchanges, and a Merkle directed acyclic graph (DAG) to store and retrieve versioned files in a decentralized manner. The document discusses IPFS concepts like content identifiers (CIDs), IPNS for mutable links, pinning for long-term data retention, and UnixFS for file representation. It also outlines several potential use cases for IPFS and challenges around automatic data replication.
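The Merkle DAG mentioned above can be illustrated with a toy model (an assumed structure for illustration; real IPFS nodes are dag-pb/protobuf encoded and identified by multihash CIDs): each node's identifier hashes its own payload together with its children's identifiers, so the root hash pins the entire graph and any change below the root changes the root.

```python
import hashlib
import json

def node_id(data: bytes, links: list) -> str:
    """A node's identifier covers its payload and its children's identifiers,
    so a change anywhere beneath the root changes the root hash."""
    payload = json.dumps({"data": data.hex(), "links": sorted(links)}).encode()
    return hashlib.sha256(payload).hexdigest()

# Two leaf chunks linked under one root, as a file split into blocks would be.
leaf_a = node_id(b"hello, ", [])
leaf_b = node_id(b"world", [])
root = node_id(b"", [leaf_a, leaf_b])
```

This is also why IPFS gets versioning almost for free: a new version of one block yields a new root hash, while unchanged subtrees are shared between the old and new roots.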
This document discusses decentralized storage systems. It begins by outlining the goals of storage systems, including availability, reliability, and scalability. It then notes that most existing systems use a centralized storage server, which presents a single point of failure. Decentralized storage systems aim to address this by distributing data across multiple nodes for redundancy. The rest of the document outlines key design issues for decentralized storage systems, such as fault tolerance and load balancing. It provides examples of decentralized systems like Cassandra and Glacier, and concludes by arguing that decentralized storage will be increasingly important as data volumes continue rising.
The document discusses several new trends in cloud computing including cloud as an innovation platform for mobile, social, and big data applications. It also discusses the growth of Platform-as-a-Service (PaaS), software-defined hardware, big data analytics in the cloud, security in the cloud, and cloud-based collaboration across generations in the workplace. A survey found that cloud adoption is now strategic for many companies and SaaS adoption has grown significantly while IaaS and PaaS are reaching a tipping point. The amount of data residing in the cloud is also expected to grow significantly in the next two years.
Dynamic Routing with FRR - pfSense Hangout, December 2017 (Netgate)
This document provides an overview and instructions for configuring dynamic routing on pfSense using the FRR routing daemon. It discusses the key differences between interior routing protocols like OSPF and exterior protocols like BGP. It then provides step-by-step instructions for installing the FRR package, configuring the global settings, and setting up OSPF and BGP configurations including necessary preparations and neighbor/interface definitions. Tips are also provided for converting from OpenBGPD or Quagga configurations to FRR format.
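For orientation, a minimal FRR configuration of the kind the slides walk through might look like the fragment below. The router IDs, networks, and AS numbers are hypothetical, and on pfSense these values are entered through the FRR package GUI rather than written into a config file by hand.

```
router ospf
 ospf router-id 10.0.0.1
 network 10.0.0.0/24 area 0
!
router bgp 65001
 bgp router-id 10.0.0.1
 neighbor 192.0.2.2 remote-as 65002
 address-family ipv4 unicast
  network 10.0.0.0/24
 exit-address-family
```

The split mirrors the interior/exterior distinction the document draws: the `router ospf` stanza handles routing within the site, while the `router bgp` stanza peers with an external autonomous system.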
Here are the key steps:
1. Kill any existing controllers running on the system
2. Clear out any existing Mininet topology using mn -c
3. Start the Ryu OpenFlow controller by running:
ryu-manager --verbose ./simple_switch_13.py
This starts the Ryu controller with the simple_switch_13.py application, which provides basic OpenFlow switch functionality. The --verbose flag prints debug information from the controller. We have now initialized the SDN environment with Ryu acting as the controller.
The document discusses various models of parallel and distributed computing including symmetric multiprocessing (SMP), cluster computing, distributed computing, grid computing, and cloud computing. It provides definitions and examples of each model. It also covers parallel processing techniques like vector processing and pipelined processing, and differences between shared memory and distributed memory MIMD (multiple instruction multiple data) architectures.
Overview of VPN protocols.
VPNs (Virtual Private Networks) are often viewed from the perspective of security with the goal of providing authentication and confidentiality.
However, the primary purpose of VPNs is to connect two topologically separated private networks over a public network (typically the Internet).
VPNs logically attach one network to another so that both appear as a single private local network.
Security is a possible add-on to VPNs. In many cases it makes perfect sense to secure the VPN's communication over the insecure public network.
VPN protocols typically employ a tunnel where data packets of the local network are encapsulated in an outer protocol for transmission over the public network.
The most important VPN protocols are IPSec, PPTP and L2TP. In recent years SSL/TLS based VPNs such as OpenVPN have gained widespread adoption.
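The tunneling described above is, at its core, encapsulation: the inner packet of the local network becomes the payload of an outer packet addressed across the public network, and the far tunnel endpoint strips the outer layer and forwards the inner packet. A schematic sketch — not a real protocol implementation; the header layout here is invented for illustration, and real tunnels carry binary IP headers and usually encrypt the payload:

```python
def encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    """Wrap a private-network packet in a made-up outer header for transit
    over the public network between the two tunnel endpoints."""
    header = f"{outer_src}>{outer_dst}|".encode()
    return header + inner_packet

def decapsulate(outer_packet: bytes) -> bytes:
    """The receiving tunnel endpoint strips the outer header and
    forwards the untouched inner packet into the private network."""
    _, inner = outer_packet.split(b"|", 1)
    return inner
```

The inner packet's private addresses survive the trip unchanged, which is exactly what makes the two sites appear as one local network.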
This document provides instructions for setting up and attending an eBPF workshop. It includes links for setting up the workshop platform, background slides, and code repository. It also lists an agenda with topics that will be covered, including setting up the eBPF lab, an introduction, eBPF 101, writing eBPF programs, BCC, and a tutorial. Attendees are asked to let the presenter know if they have any problems setting up.
Rapid Miner is an open source platform for data mining that was first released in 2006. It has over 250,000 users including large companies like eBay, Intel, and PepsiCo. Rapid Miner offers different versions including Rapid Miner Studio, Rapid Miner Server, and Rapid Miner Cloud. It provides an integrated environment for all steps of data mining with features like loading data from various sources, preprocessing, modeling, and evaluation.
This document discusses the limitations of existing networks and introduces the concept of software-defined networking (SDN) as a solution. It outlines that current networks have separate control and data planes, making them difficult to program and innovate on. SDN is proposed to separate the control and data planes, making the network programmable through open interfaces and allowing for centralized control. This enables experimentation, flexibility, and easier integration of new applications and services. The key aspects of SDN architecture include the infrastructure, control, and application layers that communicate through the OpenFlow protocol to enable remote programming of forwarding rules in switches.
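The programmable-forwarding idea can be sketched as a match-action table that a controller installs into switches remotely. This is a toy model: real OpenFlow matches on many header fields and supports priorities, timeouts, and per-rule counters, but the separation — controller writes rules, data plane only looks them up — is the essence of SDN.

```python
class FlowTable:
    """Minimal match-action table: the controller installs rules remotely;
    the switch's data plane only performs lookups."""
    FLOOD = 0  # stand-in port number meaning "flood to all ports"

    def __init__(self):
        self.rules = {}  # destination MAC -> output port

    def install(self, dst_mac: str, out_port: int):
        """Called by the (remote) controller to program forwarding behavior."""
        self.rules[dst_mac] = out_port

    def forward(self, dst_mac: str) -> int:
        """Data-plane decision: known destinations go out one port,
        unknown destinations are flooded."""
        return self.rules.get(dst_mac, self.FLOOD)
```

Swapping the control logic (learning switch, firewall, load balancer) changes only what gets installed, never the switch hardware — which is the flexibility the document argues for.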
The slides define IoT and show the difference between the M2M and IoT visions. They then describe the layers of the IoT functional architecture, standards organizations and bodies and other IoT technology alliances, low-power IoT protocols, and IoT platform components, and finally give a short description of one of the low-power IoT application protocols (MQTT).
This document provides guidance and assets for creating Google Cloud Platform (GCP) architectural diagrams. It includes examples of diagram elements like user cards, product cards, zones, and expanded product cards. Specifications are given for formatting zones, titles, and other elements. Icons for GCP products, services, and Material Design are also referenced. The goal is to help users accurately diagram GCP architectures in a consistent visual style.
This document provides an overview of Open vSwitch, including what it is, its main components, features, and how it can be used to build virtual network topologies. Open vSwitch is a software-defined networking switch that can be used to create virtual networks and handle network traffic between virtual machines and tunnels. It uses a distributed database, ovsdb-server, and a userspace daemon, ovs-vswitchd, to implement features like virtual switching, tunneling protocols, and OpenFlow support. Examples are provided for using Open vSwitch with KVM virtual machines and GRE tunnels to create virtual network topologies.
1. Virtual Private Networks (VPNs) allow employees to securely access a company's private network from remote locations over the public Internet rather than using a private leased line.
2. VPNs use encryption, authentication, and tunneling protocols to create a secure connection between a user's device and the private network. This allows employees to work remotely while maintaining the security of the private network.
3. There are different types of VPN implementations including intranet VPNs within an organization, extranet VPNs for connections outside an organization, and remote access VPNs for individual employees to connect to the business network remotely. Common protocols used include PPTP, L2TP, and IPsec.
A virtual private network (VPN) allows users to securely send and receive data across shared or public networks as if they are directly connected to a private network. VPNs use authentication and encryption to allow employees to access a company's private network remotely. There are three main types of VPNs: remote access VPNs for employees to connect from various locations, intranet VPNs to connect locations within an organization, and extranet VPNs to securely connect organizations. Common VPN protocols include PPTP, L2TP/IPSec, and OpenVPN. VPNs provide security benefits like authentication, access control, confidentiality and data integrity while allowing remote access and mobility.
Istio is an open platform to connect, manage, and secure microservices.
This is presented at Bangalore Docker meetup #35.
https://www.meetup.com/Docker-Bangalore/events/244197013/
NAT (Network Address Translation) & PAT (Port Address Translation) (Netwax Lab)
NAT (Network Address Translation) allows private IP networks to connect to the Internet by translating private IP addresses to public IP addresses. It operates on a router, connecting internal and external networks. NAT provides security by hiding internal network addresses and conserving IP addresses. There are various NAT types, including static NAT for one-to-one address mapping, dynamic NAT for mapping private addresses to public addresses from a pool, and NAT overload/PAT for mapping multiple private addresses to a single public address using ports.
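NAT overload (PAT) as described can be sketched as a translation table keyed by private source address and port. This is illustrative only — a real NAT also tracks the protocol, the remote endpoint, and entry timeouts — but it shows how many private hosts share one public address by being told apart via ports.

```python
class PAT:
    """Map many private (ip, port) pairs onto one public IP, one port each."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000        # arbitrary start of the translation range
        self.out = {}                 # (private_ip, private_port) -> public_port
        self.back = {}                # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        """Outbound packet: allocate (or reuse) a public port for this flow."""
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out[key]

    def translate_in(self, public_port: int):
        """Inbound reply: the public port identifies the private host and port."""
        return self.back[public_port]
```

The reverse table is also why PAT "hides" the internal network: from outside, every flow appears to originate from the single public address.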
Video and slides synchronized, mp3 and slide download available at http://bit.ly/1RJcfss.
Juan Batiz-Benet makes a short intro of IPFS (the InterPlanetary File System), a new hypermedia distribution protocol, addressed by content and identities. He also discusses the IPLD data model and example data structures (unixfs, keychain, post). Filmed at qconsf.com.
Juan Batiz-Benet is an Independent Scientist.
This document provides an overview of task scheduling algorithms for load balancing in cloud computing. It begins with introductions to cloud computing and load balancing. It then surveys several existing task scheduling algorithms, including Min-Min, Max-Min, Resource Awareness Scheduling Algorithm, QoS Guided Min-Min, and others. It discusses the goals, workings, results and problems of each algorithm. It identifies the need for an optimized task scheduling algorithm. It also discusses tools like CloudSim that can be used to simulate scheduling algorithms and evaluate performance.
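Of the algorithms surveyed, Min-Min is easy to sketch: repeatedly pick the task whose earliest possible completion time across all machines is smallest, assign it there, and update that machine's ready time. A simplified version (the completion-time matrix values in the test are invented for illustration):

```python
def min_min(exec_times):
    """Min-Min task scheduling.
    exec_times[t][m] = execution time of task t on machine m.
    Returns (schedule: task -> machine, ready: per-machine finish times)."""
    n_machines = len(exec_times[0])
    ready = [0.0] * n_machines
    unassigned = set(range(len(exec_times)))
    schedule = {}
    while unassigned:
        # For every unassigned task/machine pair, compute the completion time,
        # then take the globally smallest one (the "min of mins").
        finish, t, m = min(
            (ready[m] + exec_times[t][m], t, m)
            for t in unassigned for m in range(n_machines)
        )
        schedule[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return schedule, ready
```

The known drawback the survey points at is visible here: short tasks are scheduled first, so a few long tasks can end up late and machines can be left unevenly loaded — the motivation for Max-Min and the QoS-guided variants.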
Cloud load balancing distributes workloads and network traffic across computing resources in a cloud environment to improve performance and availability. It routes incoming traffic to multiple servers or other resources while balancing the load. Load balancing in the cloud is typically software-based and offers benefits like scalability, reliability, reduced costs, and flexibility compared to traditional hardware-based load balancing. Common cloud providers like AWS, Google Cloud, and Microsoft Azure offer multiple load balancing options that vary based on needs and network layers.
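A software load balancer of the kind described ultimately makes one routing decision per request; round-robin is the simplest policy. A sketch with hypothetical backend names (real cloud balancers add health checks, weights, and layer-4 vs layer-7 awareness):

```python
import itertools

class RoundRobinBalancer:
    """Cycle incoming requests evenly across a fixed set of backends."""

    def __init__(self, backends):
        self.backends = list(backends)
        self._cycle = itertools.cycle(self.backends)

    def route(self) -> str:
        """Return the backend that should handle the next request."""
        return next(self._cycle)
```

Because the policy lives in software, swapping round-robin for least-connections or latency-based routing is a code change, not a hardware change — the flexibility advantage the summary claims for cloud load balancing.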
The document introduces Pixeom, a personal cloud device that allows users to store and share content privately or as part of a global exchange network without subscription fees. The Pixeom X1 device facilitates file sharing, social networking, and e-commerce through apps. It uses an ARM processor and Linux operating system. Users can indefinitely increase storage by connecting multiple Pixeom devices. The device aims to offer an alternative to corporate cloud services by giving users more control over their data.
Exploring Universal API Management And Flex Gateway (shyamraj55)
The document summarizes an upcoming Patna MuleSoft Meetup event on exploring universal API management and Flex Gateway. The agenda includes an introduction to universal API management, the Anypoint API Catalog CLI, Anypoint API Governance, and Anypoint Flex Gateway. It provides overviews of each topic, including the purpose of universal API management in addressing API sprawl challenges, how the API Catalog CLI can be used to catalog APIs, and how Flex Gateway delivers performance and security for APIs running anywhere. The event will conclude with a Q&A session.
This document provides an overview of Interplanetary File System (IPFS) and how to connect to and use it with Python. IPFS is a decentralized file storage and distribution system where data is addressed by a cryptographic hash rather than IP addresses. It breaks files into chunks and stores them across nodes. To retrieve data, a client requests it from IPFS by hash. The document demonstrates installing IPFS, connecting to it from Python, uploading a file, viewing its metadata, and fetching the file content. Potential use cases of IPFS include distributed storage for blockchains and more robust content delivery.
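The Python workflow described talks to the local daemon's HTTP RPC API. A minimal standard-library sketch is below; the endpoint and port (`127.0.0.1:5001`, `/api/v0/cat`) follow the go-ipfs/Kubo RPC API defaults, which is an assumption about the reader's setup, and the request is only actually sent when a daemon is running.

```python
from urllib import parse, request

API = "http://127.0.0.1:5001/api/v0"  # default Kubo (go-ipfs) RPC endpoint

def cat_url(cid: str) -> str:
    """Build the RPC call that fetches content by its content identifier."""
    return f"{API}/cat?" + parse.urlencode({"arg": cid})

def cat(cid: str) -> bytes:
    """Fetch content by hash from the local daemon (requires `ipfs daemon`
    to be running; the RPC API expects POST requests)."""
    with request.urlopen(request.Request(cat_url(cid), method="POST")) as r:
        return r.read()
```

Note that the CID in the request is all the client needs: whichever peer ends up supplying the blocks, the daemon verifies them against the hash before returning the bytes.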
IPFS is a protocol designed to create a permanent, decentralized way of storing and sharing files that is faster, safer and more open. It aims to replace HTTP and build a better web for all. IPFS uses a fully distributed network where all nodes act as both clients and servers. Unlike HTTP, which downloads from a single source, it can distribute high volumes of data with high efficiency. IPFS also provides versioning of data, guarding against the ongoing loss of humanity's history, and makes it simple to set up resilient networks.
This document provides an introduction to Filecoin, including:
1) Core concepts of Filecoin such as using IPFS for data retrieval and Filecoin for data persistence and verifiability on a decentralized storage network.
2) Examples of how storage helpers can simplify storing and retrieving data on Filecoin by handling dealmaking and verification.
3) An overview of the different layers that make up a Web3-enabled architecture using Filecoin and IPFS for decentralized storage.
iSCSI provides a standard way to access Ceph block storage remotely over TCP/IP. SUSE Enterprise Storage 3 includes an iSCSI target driver that allows any iSCSI initiator to connect to Ceph storage. This provides multiple platforms with standardized access to Ceph without needing to join the cluster. Optimizations are made in iSCSI to efficiently handle SCER operations by offloading work to OSDs.
openATTIC provides a web-based interface for managing Ceph and other storage. It currently allows pool, OSD, and RBD management along with cluster monitoring. Future plans include extended pool and OSD management, CephFS and RGW integration, and deployment/configuration of Ceph nodes via Salt.
Open System SnapVault (OSSV) is a disk-to-disk backup solution that uses block-level incremental backups to efficiently backup data from non-NetApp systems to NetApp storage. OSSV can also be used for data migrations by performing block-level incremental transfers, which significantly reduces transfer times compared to file-level tools when migrating large frequently changed files. During a restore, the DFM server uses the NDMP protocol to communicate with both the OSSV host and NetApp filer to initiate and manage the restore process.
Spectrum Scale Unified File and Object with WAN CachingSandeep Patil
This document provides an overview of IBM Spectrum Scale's Active File Management (AFM) capabilities and use cases. AFM uses a home-and-cache model to cache data from a home site at local clusters for low-latency access. It expands GPFS' global namespace across geographical distances and provides automated namespace management. The document discusses AFM caching basics, global sharing, use cases like content distribution and disaster recovery. It also provides details on Spectrum Scale's protocol support, unified file and object access, using AFM with object storage, and configuration.
Software Defined Analytics with File and Object Access Plus Geographically Di...Trishali Nayar
Introduction to Spectrum Scale Active File Management (AFM)
and its use cases. Spectrum Scale Protocols - Unified File & Object Access (UFO) Feature Details
AFM + Object : Unique Wan Caching for Object Store
The document describes an experiment to use OpenStack Swift as the storage system for Apache Hadoop instead of HDFS. Key points covered include:
1) An overview of OpenStack Swift and how it works as a distributed object storage system.
2) A brief introduction to Apache Hadoop and HDFS, its native distributed file system.
3) The concept and software setup of the experiment, which involved installing Swift, packaging the Java cloudfiles library for Hadoop, and copying Swift file system code into Hadoop source code.
4) Configuration details for connecting Hadoop to Swift, such as settings in core-site.xml and testing the new Swift file system.
IPFS is a protocol designed to store and share files in a decentralized manner without a central authority. The document provides instructions for installing IPFS and adding a sample image file to demonstrate how it works. It describes downloading the IPFS software, extracting and moving the executable, adding an image file which generates a hash identifier, starting the daemon, and viewing the image in a browser using the hash as the URL.
OpenStack Project Freezer is a backup and restore service that automates the data backup and restore process. It supports backing up and restoring various platforms and storage types. Freezer has a low memory footprint and supports incremental backups, old backup removal, restoring from a specific date, bandwidth limiting, and clustering of backups across multiple servers. The document then describes Freezer's architecture, components, experiences optimizing backups to Swift storage, and concludes with an overview of demonstrations of Freezer's backup and restore of Nova VMs, Cinder volumes, file systems, and MySQL databases.
Introduction to IPFS & Filecoin - longer versionTinaBregovi
The document provides an overview of an introduction to IPFS and Filecoin. It discusses key concepts like IPFS being a peer-to-peer protocol for a decentralized web and Filecoin being an Airbnb-like network for data storage. It outlines the agenda which includes IPFS and Filecoin concepts, use cases, social good initiatives, opportunities, and resources. It then dives deeper into explaining technical aspects of IPFS, Filecoin, and related development tools.
NameNode Analytics - Querying HDFS Namespace in Real TimePlamen Jeliazkov
An isolated, read-only NameNode called NameNode Analytics (NNA) is described that embeds a custom query engine to enable near real-time analysis of HDFS metadata. NNA keeps its metadata set up-to-date by tailing the edit logs from JournalNodes on the live HDFS cluster. It can respond to filter and histogram queries over dimensions like users, filesizes, and modification times to provide insights like top small file creators, largest unused datasets, and files being actively written. Examples are given of how NNA has helped issues like slow operations, pushing NameNode limits, and small file prevention. Future work ideas are discussed like integrating TTL management and improving query performance.
Apache Spark and Object Stores —for London Spark User GroupSteve Loughran
The March 2017 version of the "Apache Spark and Object Stores", includes coverage of the Staging Committer. If you'd been at the talk you'd have seen the projector fail just before the demo. It worked earlier! Honest!
Suse Enterprise Storage 3 provides iSCSI access to connect to ceph storage remotely over TCP/IP, allowing clients to access ceph storage using the iSCSI protocol. The iSCSI target driver in SES3 provides access to RADOS block devices. This allows any iSCSI initiator to connect to SES3 over the network. SES3 also includes optimizations for iSCSI gateways like offloading operations to object storage devices to reduce locking on gateway nodes.
Peter Tiernan - Ceph at the Digital Repository of Irelanddri_ireland
The DRI has a need for vastly scalable and dynamic storage. In this presentation we explore 4 storage solutions and describe how we made the choice to use Ceph.
Ceph at the Digital Repository of Ireland - Ceph Day Frankfurt Ceph Community
Peter Tiernan presents on using Ceph storage at the Digital Repository of Ireland (DRI). The DRI follows the Open Archival Information System model and requires its storage to meet standards for long-term preservation of digital assets. Several solutions were tested, but Ceph was chosen for its distributed architecture, high availability, scalability, data security features, and rich APIs. Performance testing showed improvements after adding more object storage devices. Future goals include leveraging more of Ceph's erasure coding, tiering, and replication capabilities.
AIDevWorld 23 Apache NiFi 101 Introduction and Best Practices
https://sched.co/1RoAO
Timothy Spann, Cloudera, Principal Developer Advocate
In this talk, we will walk step by step through Apache NiFi from the first load to first application. I will include slides, articles and examples to take away as a Quick Start to utilizing Apache NiFi in your real-time dataflows. I will help you get up and running locally on your laptop, Docker or in CDP Public Cloud.
Wednesday November 1, 2023 12:00pm - 12:25pm PDT
VIRTUAL AI DevWorld -- Main Stage https://app.hopin.com/events/api-world-2023-ai-devworld/stages
Retail & E-Commerce AI (Industry AI Conference)
Session Type OPEN TALK
Track or Conference Retail & E-Commerce AI (Industry AI Conference), Industry AI Conference, VIRTUAL, Tensorflow & PyTorch & Open Source Frameworks (AI/ML Engineering Conference), AI/ML Engineering Conference, AI DevWorld
In-Person/Virtual Virtual, Virtual Exclusive
apache nifi
Timothy Spann
Cloudera
Principal Developer Advocate for Data in Motion
Tim Spann is the Principal Developer Advocate for Data in Motion @ Cloudera where he works with Apache Kafka, Apache Flink, Apache NiFi, Apache Iceberg, TensorFlow, Apache Spark, big data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, Principal Field Engineer at Cloudera, a Senior Solutions Architect at AirisData and a senior field engineer at Pivotal. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton on big data, the IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as IoT Fusion, Strata, ApacheCon, Data Works Summit Berlin, DataWorks Summit Sydney, and Oracle Code NYC. He holds a BS and MS in computer science.
cloudera dataflow
Introduction to Stacki - World's fastest Linux server provisioning ToolSuresh Paulraj
Stacki is an open source tool for provisioning and managing Linux servers at scale. It provides fast, reliable provisioning of servers from bare metal to a fully configured system. PayPal uses Stacki to manage their Hadoop infrastructure, which includes over 3,000 nodes spread across multiple datacenters. Stacki automates tasks like disk formatting, partitioning, OS installation, and integration with other tools to quickly provision new servers. It helped PayPal reduce provisioning time from hours to just 14 minutes for 288 servers.
Ceph Day Beijing: Big Data Analytics on Ceph Object Store Ceph Community
Big Data Analytics on Ceph* Object Storage
The document discusses using Ceph* object storage for big data analytics workloads on OpenStack. It covers deployment considerations for analytics clusters using options like VMs, containers, or bare metal. It details the design of using Ceph* RADOS Gateway (RGW) with an SSD cache tier for storage, and developing an RGW file system adapter and proxy for scheduling. Sample performance testing showed container overhead of 1.46x and VM overhead of 2.19x compared to bare metal. The next steps are to complete development and performance testing of the Ceph*/RGW solution.
Global Internet is not ready for IPv6 only
Cisco support NAT64 above ASR Series
User bandwidth management is way to complex
464 XLAT does not support general WiFi routers
Existing server system support
Operation & Security Policy
We have almost 50k active Customer and planning for 500k
Overhead cost of deployment (NAT64 & DNS64)
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
2. Agenda
What is Decentralisation + Storage?
What is Decentralised Storage?
Why Decentralised Storage?
Decentralised Storage - Current Projects
Swarm
IPFS - Deep Dive
Basic Commands
Advanced Commands
Demo
Appendix
3. What is Decentralisation + Storage?
Decentralisation is the transfer of authority and control from a single central entity to a more localised, distributed system.
Storage is the retention of retrievable data on a computer or other electronic system.
4. What is Decentralised Storage?
Decentralised storage lets you store your files without relying on the large, centralised data silos that can undermine important values such as privacy and freedom of information.
It is content-addressable, rather than location-addressable: every file is identified by a unique hash of its content.
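Content addressing can be sketched in a few lines of Python. This is a simplified illustration using a plain SHA-256 hex digest; real IPFS wraps the digest in a multihash-encoded content identifier (CID).

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself, not from its location."""
    return hashlib.sha256(data).hexdigest()

# The same bytes always map to the same address, wherever they are stored.
assert content_address(b"hello world") == content_address(b"hello world")

# Changing even one byte yields a completely different address.
assert content_address(b"hello world") != content_address(b"hello world!")
```

Because the address depends only on the bytes, any node holding the content can serve it, and the requester can verify what it received.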
5. Why Decentralised Storage?
● Availability
○ Censorship Resistant
○ Data geographically spread
○ No "404 Page Not found" error
● Security & Privacy
○ No central server to target, making it much harder to hack and breach data
○ Files are not stored directly but as chunks of data spread across multiple nodes
● Cost reduction through more efficient use of storage and bandwidth
6. Decentralised Storage - Current Projects
● Swarm
● IPFS (Inter Planetary File System)
● Sia
● Storj
7. Swarm
Swarm is a distributed storage platform and content distribution service, a native base layer service of the Ethereum web3 stack.
The primary objective of Swarm is to provide a sufficiently decentralized and redundant store of Ethereum's public record, in particular to store and distribute dapp code and data as well as blockchain data.
Refer to the Swarm documentation for further details.
9. What is IPFS?
IPFS is a distributed peer-to-peer (p2p) file sharing system for storing and
accessing files, websites, applications, and data.
IPFS aims to replace HTTP and build a better web for all of us.
10. HTTP v/s IPFS
Today, the Internet is based on the HyperText Transfer Protocol (HTTP). HTTP relies on location addressing, which uses IP addresses to identify the specific server hosting the requested information. This means the information has to be fetched from the origin server.
IPFS is meant to be a replacement for HTTP. Most notably, IPFS has no single point of failure. It is a peer-to-peer distributed file system that would decentralize the Internet and make it much more difficult for a service provider or hosting network to pull the plug and make published information suddenly disappear.
HTTP vs. IPFS [Source: https://www.maxcdn.com/one/visual-glossary/interplanetary-file-system/]
11. How IPFS works?
IPFS works by connecting all computing devices with the same system of files via a system of nodes. It uses a "distributed hash table, an incentivized block exchange, and a self-certifying namespace."
In simpler terms, it acts similarly to a torrent system, except that instead of sharing and exchanging media, IPFS exchanges git objects. This means the whole system is based around a simple key-value data store: any type of content can be inserted, and it will give back a key that can be used to retrieve the content again at any time. This is what allows for content addressing instead of location addressing: the key is completely independent of the origin of the information and can be hosted anywhere.
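The key-value model described above can be sketched as a toy store (an illustration only, not the real IPFS block store API): insert any content, get back a key derived from it, and later retrieve the content by that key.

```python
import hashlib

class ContentStore:
    """Toy key-value store: content in, hash key out; retrieval by key."""

    def __init__(self) -> None:
        self._blocks: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blocks[key]
        # The key is self-certifying: anyone can re-hash the returned
        # bytes to verify they received exactly what the key names.
        assert hashlib.sha256(data).hexdigest() == key
        return data

store = ContentStore()
key = store.put(b"any type of content")
assert store.get(key) == b"any type of content"
```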
12. How IPFS stores data?
When you add content to the IPFS network, the data is split into chunks of 256 KB. Each chunk is identified by its own hash. These chunks are then distributed to the nodes on the network whose peer IDs are closest to the chunk hashes.
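The chunking step can be sketched in Python. This is a simplified illustration: real IPFS encodes chunk hashes as multihash CIDs and links them in a Merkle DAG.

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # the 256 KB chunk size mentioned above

def chunk(data: bytes) -> list[tuple[str, bytes]]:
    """Split content into fixed-size chunks, each identified by its own hash."""
    pieces = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(p).hexdigest(), p) for p in pieces]

data = bytes(1024 * 1024)   # a 1 MB file
chunks = chunk(data)
assert len(chunks) == 4     # 1 MB splits into four 256 KB chunks
assert all(len(p) == CHUNK_SIZE for _, p in chunks)
```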
13. How IPFS stores data? ...Continued
1. Let us assume that there are 4 nodes with peer IDs 6789,
789a, 89ab and 9abc respectively.
2. We try to add a file something.mp4 (size = 1 MB). Your
node first calculates the hash of the file, say 7abc.
Additionally, the file is broken into 4 chunks of 256 KB each.
Your node then calculates the hash of each chunk, say
7aaa, 8abc, 9a23 and 5bcd.
3. The node then sends each chunk to the node whose peer ID
is numerically closest to the chunk's hash. In our example,
the chunk with hash 7aaa is closest to peer ID 789a, so this
chunk is sent to the node with peer ID 789a.
4. Similarly, all chunks are sent, and their addresses are
recorded in the DHT.
5. Lastly, the root hash of the object, 7abc, is stored along
with the hashes it links to: 7abc → [7aaa, 8abc, 9a23, 5bcd].
(The root hash can be stored anywhere; in this example it is
assumed to be stored on our own node.)
[Source: https://medium.com/@akshay_111meher/how-ipfs-works-545e1c890437]
How data is divided and stored. The root hash is assumed to be stored on your node, but it is stored the same way the chunks are: it could end up on any node, including yours.
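The placement rule in the example above can be sketched with the slide's hypothetical 4-hex-digit IDs. This is a simplification: Kademlia-style DHTs measure closeness by XOR distance between full-length hashes, not by plain numeric difference.

```python
peers = ["6789", "789a", "89ab", "9abc"]           # peer IDs from the example
chunk_hashes = ["7aaa", "8abc", "9a23", "5bcd"]    # chunk hashes from the example

def closest_peer(chunk_hash: str, peers: list[str]) -> str:
    """Pick the peer whose ID is numerically closest to the chunk's hash."""
    return min(peers, key=lambda p: abs(int(p, 16) - int(chunk_hash, 16)))

placement = {h: closest_peer(h, peers) for h in chunk_hashes}
assert placement["7aaa"] == "789a"   # matches step 3 of the example
print(placement)
# → {'7aaa': '789a', '8abc': '89ab', '9a23': '9abc', '5bcd': '6789'}
```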
15. How data is retrieved?
On the IPFS network, a file is identified solely by its root hash, in our case
7abc. When a user requests the file, the request is routed via the DHT to the
nodes holding that hash. If the object points to other chunks (as in our case),
they are looked up the same way. Once all chunks are obtained, they are simply
concatenated to reconstruct the original object.
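Retrieval reverses the storage steps; here is a minimal sketch in which a single dict stands in for both the DHT lookup and the peers' block stores.

```python
import hashlib

def sha(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy network: every chunk reachable by its hash (finding which peer holds
# each chunk is normally a DHT lookup; one dict stands in for that here).
chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
network = {sha(c): c for c in chunks}

# The root object simply lists the hashes of the chunks it links to.
root = "\n".join(sha(c) for c in chunks).encode()
root_hash = sha(root)
network[root_hash] = root

def retrieve(root_hash: str) -> bytes:
    """Fetch the root object, then each linked chunk, and concatenate in order."""
    link_hashes = network[root_hash].decode().split("\n")
    return b"".join(network[h] for h in link_hashes)

assert retrieve(root_hash) == b"chunk-0chunk-1chunk-2chunk-3"
```

The root object's link list fixes the concatenation order, which is why the root hash alone is enough to reconstruct the file.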
16. Basic Commands
● Install IPFS
● Get IPFS version
● Initialize the IPFS repository
● Get IPFS node id
● Start IPFS node
● Check Peer Nodes
17. Get IPFS version
ipfs version
ipfs version 0.4.18
18. Initialize the IPFS repository
ipfs init
initializing ipfs node at /Users/jbenet/.go-ipfs
generating 2048-bit RSA keypair...done
peer identity: Qmcpo2iLBikrdf1d6QU6vXuNb6P7hwrbNPW9kLAH8eG67z
to get started, enter:
ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme
19. Get IPFS node id
ipfs id
"ID": "QmP7JssmhNTpayGoK5ZhBt78hRRBi3VBYQyqwMsBsSZBSW"
20. Start IPFS node
ipfs daemon
Initializing daemon...
go-ipfs version: 0.4.18-
Repo version: 7
System version: amd64/darwin
Golang version: go1.11.1
Successfully raised file descriptor limit to 2048.
Swarm listening on /ip4/127.0.0.1/tcp/4001
.
Swarm announcing /ip4/127.0.0.1/tcp/4001
.
Daemon is ready
21. Check peer nodes
ipfs swarm peers
22. Advanced Commands
● Check IPFS repository statistics
● Add a file to IPFS
● Pin objects to local storage
● Remove pinned objects from local storage
● Download IPFS objects
● Show IPFS object data
● List objects pinned to local storage
23. Check IPFS repository statistics
ipfs stats repo
NumObjects: 4817
RepoSize: 127963949
StorageMax: 10000000000
RepoPath: /Users/anuragd/.ipfs
Version: fs-repo@7
24. Add a file to IPFS
ipfs add temp
added QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u temp
12 B / 12 B [=========================================] 100.00%
25. Pin objects to local storage
ipfs pin add QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
pinned QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u recursively
26. Remove pinned objects from local storage
ipfs pin rm QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
unpinned QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
27. Download IPFS objects
ipfs get QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
Saving file(s) to QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
20 B / 20 B [=========================================] 100.00% 0s
28. Show IPFS object data
ipfs cat QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
Hello World
ipfs cat QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u > temp1
cat temp1
Hello World
29. List objects pinned to local storage
ipfs pin ls
Show http://localhost:5001/webui
http://localhost:8080/ipfs/<<hash of content>>
ipfs add automatically pins the files.
Check without the ipfs daemon - you can check files locally.
Check with the ipfs daemon - your files are served to the network while you are online.
ipfs.io/ipfs/QmZcR3AvZHxjh8hDxLxmECDzjiSDXLgJnfVogsKw2kJiUY or use the webui
https://itnext.io/build-a-simple-ethereum-interplanetary-file-system-ipfs-react-js-dapp-23ff4914ce4e
Execute these commands:
ipfs add
ipfs get
ipfs cat