Enhancing the Data Collection in Tree-Based Wireless Sensor Networks (ijsrd.com)
A number of techniques are used in wireless sensor networks to improve data collection from sensor nodes. This is achieved by minimizing the schedule length and by dynamic channel assignment. The schedule length is minimized with a BFS-based algorithm that avoids interfering links; interfering links can be eliminated through transmission power control and multi-frequency operation. Power can be saved by using beacon signals. Data collection is also constrained by the network topology, so the nodes are arranged in tree form: capacitated minimal spanning trees and degree-constrained spanning trees give a significant improvement in scheduling. Finally, data collection is enhanced in terms of security by using the T-Hash Chain algorithm.
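The BFS-based slot assignment mentioned above can be sketched as a toy example. This is an illustrative assumption, not the paper's actual algorithm: nodes are visited breadth-first from the sink, and siblings (whose uplinks to the shared parent would interfere) are given distinct slots.

```python
from collections import deque

def bfs_slot_assignment(tree, root):
    """Visit the collection tree breadth-first and give each group of
    siblings distinct slots, since their uplinks to the shared parent
    would interfere. (Illustrative rule only.)"""
    slots = {root: None}          # the sink only receives, never transmits up
    queue = deque([root])
    while queue:
        parent = queue.popleft()
        for i, child in enumerate(tree.get(parent, [])):
            slots[child] = i      # i-th child transmits in slot i
            queue.append(child)
    return slots

# hypothetical topology: sink with two subtrees
tree = {"sink": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
print(bfs_slot_assignment(tree, "sink"))
```

A real scheduler would also have to avoid inter-subtree interference, which is where the power control and multi-frequency techniques come in.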
Distributed Dynamic Replication Management Mechanism Based on Accessing Frequ... (May Sit Hman)
A Master's thesis in Computer Science, based on distributed systems and database systems; it concerns replica management in a distributed database system.
The document discusses analyzing climate data over fast networks and parallel mesh refinement. It describes two climate analysis applications that are either computationally or data intensive. It then discusses accessing netcdf climate data files from remote repositories over networks, distributing the input files across processes, and using batch processing or clouds to retrieve the remote data. It also describes adaptive mesh refinement used to process large climate data in parallel by distributing the mesh and synchronizing propagation paths between processes.
This document discusses traditional communication architectures for multiprocessor systems and proposes that Active Messages is a better communication architecture. It analyzes three traditional low-level communication layers - message passing, message driven, and shared memory - and argues that they are best viewed as communication models implemented on top of a general-purpose communication architecture like Active Messages, rather than as architectures themselves. The document provides an example implementation of the send and receive communication model using Active Messages on the CM-5 to demonstrate how it can be implemented efficiently while gaining flexibility.
PACK: Prediction-Based Cloud Bandwidth and Cost Reduction System
Quality of service (QoS) refers to a network's ability to deliver the required bandwidth while managing factors like latency, error rate, and uptime. QoS involves controlling network resources by prioritizing specific data types; its four key characteristics are reliability, delay, jitter, and bandwidth. Techniques like scheduling, traffic shaping, resource reservation, and admission control help improve QoS.
These slides cover virtual-circuit and message switching in data communication. All the slides are explained in a simple manner, making them useful for engineering students and for candidates who want to master data communication and computer networking.
Multiplexing and Switching (TDM, FDM, Datagram, Circuit Switching) (Adil Mehmoood)
Multiplexing techniques such as time division multiplexing (TDM) and frequency division multiplexing (FDM) allow multiple users to share network links. TDM divides time into slots that are allocated to users on a fixed or dynamic basis. FDM assigns each user a unique frequency band to transmit in simultaneously. Switching networks move data packets through intermediate nodes using either circuit switching, which establishes a dedicated path, or packet switching, which breaks messages into packets that travel independently through the network.
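The synchronous TDM idea above can be illustrated with a toy interleaver. The function name, inputs, and the `"idle"` filler are illustrative assumptions: each frame carries one fixed-size slot per input stream, and a stream with nothing to send wastes its slot.

```python
def tdm_multiplex(streams, slot_size=1):
    """Interleave fixed-size slots from each input stream into a sequence
    of frames -- a toy model of synchronous TDM. Empty slots are marked
    "idle", showing the bandwidth wasted when a user has no data."""
    n = max(len(s) for s in streams)
    frames = []
    for t in range(0, n, slot_size):
        frame = [s[t:t + slot_size] or "idle" for s in streams]
        frames.append(frame)
    return frames

# three users sharing one link; user B finishes early
print(tdm_multiplex(["AAAA", "BB", "CCC"]))
```

Statistical (dynamic) TDM would skip the idle slots instead, which is precisely the efficiency argument packet switching makes later in these summaries.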
The document discusses digital data communication techniques, including asynchronous and synchronous transmission, error detection using parity and cyclic redundancy checks, error correction using block error-correction codes, and line configurations such as point-to-point and multipoint topologies with half-duplex or full-duplex operation. Asynchronous transmission uses fewer overhead bits but clocks may drift, while synchronous transmission embeds a clock signal and uses frames for more efficient transmission but requires clock synchronization. Error detection identifies errors using techniques like parity checks, while error correction deduces the original message despite some errors by using redundancy.
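The parity and CRC checks mentioned above are small enough to sketch directly. This is a minimal bit-list implementation (the function names and the 4-bit generator are illustrative choices): parity counts ones, and the CRC remainder comes from modulo-2 (XOR) long division of the zero-padded message by the generator.

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

def crc_remainder(data_bits, divisor_bits):
    """CRC remainder via modulo-2 (XOR) long division.
    The sender appends this remainder; the receiver repeats the division
    and expects an all-zero remainder if no error occurred."""
    pad = len(divisor_bits) - 1
    data = data_bits + [0] * pad          # append zeros for the CRC field
    for i in range(len(data_bits)):
        if data[i]:                        # leading bit set: subtract (XOR)
            for j, d in enumerate(divisor_bits):
                data[i + j] ^= d
    return data[-pad:]

# generator x^3 + x + 1 -> bits 1011 (an illustrative CRC-3)
print(even_parity_bit([1, 0, 1, 1]))             # -> 1 (makes four 1s)
print(crc_remainder([1, 1, 0, 1], [1, 0, 1, 1]))
```

Real links use standardized generators (e.g. CRC-32) and bit-twiddling rather than lists, but the division step is the same.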
A distributed system is a collection of independent computers that appear as a single coherent system to users. Middleware acts as a bridge between operating systems and applications, especially over a network. Examples of distributed systems include the World Wide Web, the internet, and intranets within organizations. Distributed systems provide benefits like increased reliability, scalability, performance, and flexibility compared to centralized systems. However, they also present challenges around security, software complexity, and system failures.
Packet-switching networks transfer information as packets that may experience random delays and loss. There are two main approaches: connectionless datagram service which routes packets independently, and connection-oriented virtual circuits which establish paths for packets belonging to a connection. Routing determines the best paths for packets using distributed algorithms that adapt to network changes. Large packet switches use techniques like self-routing, shared memory, and crossbar switches to efficiently route high volumes of packets.
This document summarizes circuit switching and packet switching approaches in computer networks. It discusses how circuit switching establishes a dedicated path but wastes bandwidth when no data is being sent. Packet switching breaks messages into packets that are transmitted independently and can make more efficient use of bandwidth. The document also describes protocols like X.25 that were used for packet switched networks and Frame Relay, which was designed to reduce overhead and improve performance compared to X.25.
Balman dissertation, Copyright © 2010 Mehmet Balman (balmanme)
This document discusses scheduling data transfer operations with advance reservation and provisioning. It proposes dividing time into windows where network bandwidth availability is stable. When a data transfer request is received, the scheduler checks all possible time windows to see if the request can fit within bandwidth constraints. If no window is available, it tries shifting existing transfers to earlier windows if they have less "desire" based on number of occupied time slots and order of the window. This allows requests to be scheduled in advance while minimizing disruption to existing transfers.
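The window-search step described above can be sketched as a first-fit scan. The data layout and function name are illustrative assumptions (the dissertation's actual scheduler also handles the "desire"-based shifting, which is omitted here): the scheduler walks the time windows in order and returns the first run of consecutive windows with enough spare bandwidth.

```python
def find_window(windows, request_bw, duration):
    """First-fit advance reservation: scan time windows in order and
    return the indices of the first `duration` consecutive windows whose
    spare bandwidth can hold the request, or None if no run fits."""
    run = []
    for i, (capacity, used) in enumerate(windows):
        if capacity - used >= request_bw:
            run.append(i)
            if len(run) == duration:
                return run
        else:
            run = []              # a full window breaks the consecutive run
    return None

# windows as (capacity, already_reserved); request: 4 units for 2 windows
windows = [(10, 8), (10, 5), (10, 3), (10, 9)]
print(find_window(windows, 4, 2))   # -> [1, 2]
```

When this scan fails, the scheduler in the summary tries to shift lower-"desire" existing transfers to earlier windows before rejecting the request.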
Circuit switching directly connects the sender and receiver through a dedicated physical path. Message switching transmits entire messages from node to node without establishing a dedicated path. Packet switching breaks messages into packets that can take different routes to the destination and are reassembled, allowing for more efficient use of bandwidth but introducing complexity.
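The packetize-and-reassemble behavior contrasted above is easy to demonstrate. This toy sketch (the function names and the 5-byte MTU are illustrative) tags each fragment with a sequence number so the receiver can rebuild the message even when packets arrive out of order:

```python
import random

def packetize(message, mtu):
    """Split a message into (sequence_number, payload) packets,
    each payload at most `mtu` characters."""
    return [(seq, message[i:i + mtu])
            for seq, i in enumerate(range(0, len(message), mtu))]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the message --
    the per-packet complexity that circuit switching avoids."""
    return "".join(data for _, data in sorted(packets))

pkts = packetize("hello, packet-switched world", 5)
random.shuffle(pkts)                     # packets may take different routes
print(reassemble(pkts))                  # -> hello, packet-switched world
```

The sequence numbers are exactly the "control information" overhead that buys packet switching its efficient bandwidth sharing.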
This document discusses data placement scheduling between distributed repositories. It introduces Stork, a batch scheduler for data placement activities that supports plug-in data transfer modules and scheduling of data movement jobs. The document discusses techniques used by Stork such as throttling concurrent transfers, fault tolerance, job aggregation, and adaptive tuning of data transfer protocols. It also covers topics like network reservation, failure awareness, and directions for future work including priority-based scheduling and advance resource reservation.
2. Distributed Systems Hardware & Software Concepts (Prajakta Rane)
This document discusses distributed system software and middleware. It describes three types of operating systems used in distributed systems - distributed operating systems, network operating systems, and middleware operating systems. Middleware operating systems provide a common set of services for local applications and independent services for remote applications. Common middleware models include remote procedure call, remote method invocation, CORBA, and message-oriented middleware. Middleware offers services like naming, persistence, messaging, querying, concurrency control, and security.
Circuit switching and packet switching are two methods for transferring data across networks. Circuit switching establishes a dedicated communication path between two stations by reserving bandwidth for the duration of the call. Packet switching breaks messages into packets that are transmitted independently across the network and reassembled at the destination. It allows for more efficient use of bandwidth by allowing packets from multiple messages to share transmission resources.
Switching Techniques (Lecture #2 ET3003 Sem1 2014/2015) (Tutun Juhana)
The slides discuss the techniques used to switch (transfer) information in a network. Circuit switching, datagram packet switching, and virtual-circuit packet switching are discussed.
This document discusses different methods for allowing one-to-one communication between nodes in large networks, including direct connections, central controllers, and common buses. It focuses on switching networks, which consist of interlinked switches that can create temporary connections between devices. There are three main types of switching networks: circuit switching, packet switching, and message switching. Packet switching breaks messages into small packets that contain user data and control information and are briefly stored at nodes before being passed to the next node.
Circuit switching and packet switching are the two main switching technologies used in communications networks. Circuit switching establishes a dedicated communication path between two stations for the duration of the connection. Packet switching breaks messages into packets that are transmitted individually over a network and reassembled at the destination. It provides more efficient use of network bandwidth than circuit switching.
The document discusses congestion in data networks. It defines congestion as occurring when the number of packets being transmitted approaches the network's handling capacity. This can cause packets to be lost if buffers fill up. Mechanisms for congestion control include backpressure from congested nodes to slow down incoming traffic, choke packets sent to sources to cut back transmission, and implicit or explicit signaling of congestion levels to sources. Frame relay and ATM networks employ various techniques for traffic management, policing, and scheduling to control congestion and meet quality of service guarantees for different connections.
Gurpinder Singh Ghuman has an M.S. in Electrical Engineering from USC and a B.E. in Electronics and Communication from Punjab Technical University. He has experience as an intern at Infosys and Access Point and currently works as a graduate assistant at USC. His technical skills include programming languages like C, C++, Python, and MATLAB, as well as networking protocols like TCP/IP, UDP, and Ethernet. Some of his academic projects involve software-defined networking using Ryu and Mininet, building an operating system kernel in C called Weenix, and designing algorithms for power allocation in MIMO systems and CSMA/CA networks.
This document discusses distributed data center architectures and disaster recovery strategies. It begins by providing background on the evolution of data centers and then covers key aspects of distributed data center design like replication, high availability, and disaster recovery plans. The objectives of disaster recovery plans, such as recovery point and recovery time objectives, are explained. Different disaster recovery architectures like warm and hot standbys are also summarized.
Datacenter 101 provides an overview of key concepts related to data centers including:
1) Data centers are facilities used to house large amounts of electronic equipment like computers and communication hardware.
2) Reasons for data center consolidation include safety during disasters and efficient data storage and hardware virtualization.
3) Physical infrastructure of data centers includes thick walls, HVAC, racks, UPS/generators, and security cameras. Network infrastructure consists of routers, switches, firewalls, peering, bandwidth, and carrier services.
The document discusses high availability and fault tolerance using Novell Cluster Services. It defines key concepts like availability, mean time between failures, and mean time to repair. It then covers best practices for deploying Novell Cluster Services, including hardware and software setup, connectivity rules, naming and addressing, and testing the cluster. It also discusses which types of resources can be clustered, like file sharing, iPrint, iFolder, and DHCP.
This document presents application layer anycasting as a server selection architecture for replicated web services. It discusses problems with existing server selection methods and outlines an anycasting communication paradigm where a client connects to the best server in an anycast group. The proposed architecture uses anycast domain names and filters at clients and resolvers to select the optimal server. Experimental results show this approach improves response times over other selection methods and balances load more effectively as more clients are added.
This document provides an overview of networking concepts including different types of networks and the OSI model. It begins with an introduction to networking courses and then covers local area networks (LANs), personal area networks (PANs), metropolitan area networks (MANs), wide area networks (WANs), campus area networks (CANs), storage area networks (SANs), and peer-to-peer and client-server models. It also provides details on the seven layers of the OSI model and examples of protocols and functions for each layer.
This document provides an overview of networking concepts including different types of networks, the OSI model, transmission media, and common networking devices and protocols. It begins with an introduction to networking courses and their benefits. It then covers local area networks (LANs), personal area networks (PANs), metropolitan area networks (MANs), wide area networks (WANs), campus area networks (CANs), and storage area networks (SANs). The document also explains the seven layer OSI model and compares it to the TCP/IP model. Finally, it discusses different transmission media including twisted pair cable, coaxial cable, fiber optic cable, radio waves, microwaves, and infrared.
This document provides an overview of a course on broadband and TCP/IP fundamentals. It discusses the topics that will be covered in each of the four sessions, including basics of TCP/IP networks, switching and scheduling, routing and transport, and applications and security. It also lists some recommended textbooks and references for the course.
This document outlines the syllabus for the 15-744 Computer Networking course. It introduces the professor, TAs, and course objectives. The course will cover networking from the network layer to application layer, focusing on protocol rules, algorithms, and tradeoffs. Topics will include routing, transport, naming systems, and recent areas like multicast, mobility, and security. Assignments include problem sets, reading responses, a class project, and exams. The next lecture will discuss design considerations for splitting functionality across layers and nodes.
This networking course document discusses the key topics covered in a networking course including why networking is important, common network types, the OSI model, and networking devices. The document provides details on local area networks (LANs), personal area networks (PANs), metropolitan area networks (MANs), wide area networks (WANs), campus area networks (CANs), and storage area networks (SANs). It also summarizes the seven layers of the OSI model and their functions.
- Clustering involves connecting multiple independent systems together to achieve reliability, scalability, and availability. The systems appear as a single machine to external users.
- There are different types of clustering including high performance computing (HPC), batch processing, and high availability (HA). HPC focuses on performance for parallelizable applications. Batch processing distributes jobs like rendering frames. HA aims to provide continuous availability.
- Achieving high availability involves techniques like heartbeat monitoring, failover configurations, shared storage, and RAID configurations to ensure redundancy in the event of failures.
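The heartbeat-and-failover idea in the last bullet can be sketched minimally. The class name, timeout value, and priority rule are illustrative assumptions, not any particular cluster product: a node that misses heartbeats for longer than the timeout is treated as dead, and service falls over to the next node in priority order.

```python
import time

class HeartbeatMonitor:
    """Declare a node dead if no heartbeat arrives within `timeout`
    seconds, then fail over to the next node in priority order."""

    def __init__(self, nodes, timeout=3.0):
        self.timeout = timeout
        # nodes listed in priority order; insertion order is preserved
        self.last_seen = {n: time.monotonic() for n in nodes}

    def beat(self, node):
        """Record a heartbeat from `node`."""
        self.last_seen[node] = time.monotonic()

    def alive(self, node):
        return time.monotonic() - self.last_seen[node] < self.timeout

    def active_node(self):
        """Return the highest-priority node still considered alive."""
        for node in self.last_seen:
            if self.alive(node):
                return node
        return None                      # total outage

mon = HeartbeatMonitor(["primary", "standby"])
mon.last_seen["primary"] -= 10           # simulate missed heartbeats
print(mon.active_node())                 # -> standby
```

Production HA stacks add fencing and quorum on top of this, so that a merely partitioned primary cannot keep writing to shared storage.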
The document provides an overview of storage concepts including:
1) It defines online, nearline and offline storage and their characteristics.
2) It discusses the evolution of storage technologies from DAS to SAN and some advantages of SAN such as increased performance and scalability.
Designing High Availability Networks, Systems, and Software for the University Environment
1. Designing High Availability
Networks, Systems, and Software
for the University Environment
Deke Kassabian and Shumon Huque
The University of Pennsylvania
January 14, 2004
2. About Penn
The University of Pennsylvania was founded by Ben Franklin in 1751
Penn is part of the Ivy League
Located in West Philadelphia
Community of more than 30,000 people
3. General Goals
Networked services available as expected by our users
Minimized time to repair (TTR) when outages do occur
Ability to perform maintenance and upgrades (planned downtime) non-disruptively
Cost effectiveness in meeting these goals
6. Definitions
Basic System - a {Network, System, Service} with only the most basic protections against outages
Examples:
A network recoverable using spare parts
A single computer system with RAID disk
A service recoverable from tape backups
7. Definitions
Availability - the percentage of total time that a {Network, System, Service} is available for use
Related points:
Advertised periods of availability
Availability as advertised
Absolute availability
8. Definitions
High Availability (HA) - a {Network, System, Service} with specific design elements intended to keep availability above a high threshold (e.g., 99.99%)
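To make a threshold like 99.99% concrete, a short calculation (ours, not from the slides) converts an availability target into the downtime it permits per year:

```python
# Convert an availability fraction into an annual downtime budget.
# Illustrative numbers only; none of these figures come from the deck.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted by an availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for a in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{a:.3%} availability -> {downtime_minutes_per_year(a):8.1f} min/year")
```

At 99.99% the budget is roughly 53 minutes of downtime per year, which is why that target forces redundant designs rather than spare-parts recovery.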
9. Definitions
Rapid Recovery (RR) - a {Network, System, Service} with specific design elements intended to recover from downtime very quickly (e.g., 15 minutes)
10. Metrics
Economics of high availability (the cost of non-availability)
Calculating availability
How availability measurements are performed
11. Economics of high availability
What is the cost of an outage in your:
Student courseware and student record systems
Financial systems
Primary campus web site and e-mail servers
DNS, DHCP, and AuthN systems
Internet connection(s)
Development/Gifts systems
How much should you be willing to spend to minimize downtime of any or all of these?
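The "how much to spend" question can be framed as a simple expected-cost comparison. A minimal sketch, with entirely made-up dollar figures:

```python
def expected_annual_outage_cost(availability: float, cost_per_hour: float) -> float:
    """Expected yearly outage cost: downtime hours per year times hourly cost."""
    hours_per_year = 365 * 24
    return (1.0 - availability) * hours_per_year * cost_per_hour

# Hypothetical: a service at 99.5% availability costing $2,000 per outage hour.
baseline = expected_annual_outage_cost(0.995, 2000)
improved = expected_annual_outage_cost(0.9999, 2000)
print(f"HA spending up to ${baseline - improved:,.0f}/year breaks even.")
```

The difference between the two expected costs is an upper bound on what the availability improvement is worth per year.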
12. Calculating availability
Availability can be measured directly through periodic polling (e.g., SNMP, Mon, Nagios)
A formula for predicting availability of a single component:
A = MTBF / (MTBF + TTR) = 1 - TTR / (MTBF + TTR)
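The single-component formula is easy to check numerically; this small sketch (example MTBF/TTR values are ours) also shows that the two forms on the slide are equivalent:

```python
def availability(mtbf: float, ttr: float) -> float:
    """Predicted availability of one component: MTBF / (MTBF + TTR)."""
    return mtbf / (mtbf + ttr)

# Example: a component that fails every 2000 hours and takes 4 hours to repair.
a = availability(2000, 4)
# The equivalent second form from the slide: 1 - TTR / (MTBF + TTR)
b = 1 - 4 / (2000 + 4)
print(round(a, 6), round(b, 6))  # both ~0.998004
```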
13. Design Principles
Towards HA
Minimize points of catastrophic failure
Maximize redundancy
Minimize fault zones
Minimize complexity and cost
Applying the above principles to
Networks
Systems
Services
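"Maximize redundancy" has a simple quantitative backing: with independent replicas, unavailabilities multiply. A sketch of the standard parallel-availability formula (independence is an idealization; correlated failures in a shared fault zone violate it, which is exactly why the principles above also call for minimizing fault zones):

```python
def parallel_availability(a: float, n: int) -> float:
    """Availability of n independent replicas where any one suffices: 1 - (1-a)^n."""
    return 1.0 - (1.0 - a) ** n

# Three replicated 99% servers, assuming fully independent failures:
print(parallel_availability(0.99, 3))  # ~0.999999 (six nines)
```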
14. Specific examples at Penn
High Availability Services
Rapid Recovery Services
15. High Availability Design
Strategies employed to achieve HA:
Server redundancy
Hardware component redundancy
Storage redundancy (RAID)
Network redundancy
Redundant power, A/C, cooling, etc.
Application protocols that can transparently fail over to alternate servers
Secondary offsite hosting (of some services, like DNS)
16. Rapid Recovery Design
Strategies employed to achieve RR:
Standby servers and storage
Some HA design elements:
Hardware redundancy, storage redundancy, network redundancy, power/A-C redundancy, etc.
Note: services deployed in the RR model typically don't have an easy way to transparently fail over to alternate servers (e.g., e-mail, web)
17. Network Aggregation Point
Abbreviation: NAP
Machine rooms in separate campus locations that house critical network electronics and servers
Good environmentals and extensive connectivity to the campus fiber-optic cable plant
Both HA and RR services utilize multiple NAPs
18. Central Infra. Networks
AKA “NOC Networks” (historical name)
3 highly redundant IP networks that house systems providing critical infrastructure services
Each network is triply connected to the campus routing core via distinct NAP locations
Network wiring traverses physically diverse fiber conduit pathways
Use of router redundancy protocols (VRRP) & Layer-2 path redundancy (802.1D) for high availability
19. HA Server Platforms
Two sets of three replicated servers:
3 KDC servers: central authentication
3 NOC servers: everything else
Kerberos runs on separate systems mainly for security reasons.
20. High Availability: KDCs
KDCs (3):
3 distinct machines (kdc1, kdc2, kdc3)
Run only Kerberos AS and TGS
Each located in a different campus machine room
Each connected to a distinct IP network, via a distinct IP core router
Additionally, each network is triply connected to the campus routing core via 3 NAPs
21. High Availability: NOCs
3 “NOC” systems (a historical name)
Provide DNS, DHCP, NTP, RADIUS, plus a few homegrown services
Same physical and network connectivity as the KDCs
In addition, some servers have a secondary interface on a different NOC network (for reasons to be explained later)
22.-26. (diagram slides; no extracted text)
27. HA Application Failover
Kerberos
DNS
RADIUS
NTP
DHCP (the current failover spec supports only 2 systems)
Non-HA homegrown services: PennNames
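Protocol-level failover of the kind listed above usually amounts to "try each configured server in order", as in a multi-server resolv.conf or RADIUS client config. A generic sketch of that client-side pattern (not Penn's code; the port and payload are hypothetical):

```python
import socket

def query_with_failover(servers, payload, port=1812, timeout=2.0):
    """Send a UDP request to each redundant server in turn; return the first reply.

    Generic illustration of transparent client failover; the service,
    port, and wire format are placeholders, not a real protocol.
    """
    last_error = None
    for host in servers:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                s.sendto(payload, (host, port))
                reply, _ = s.recvfrom(4096)
                return reply  # first responsive server wins
        except OSError as exc:   # timeout, refused, unreachable...
            last_error = exc     # fall through to the next server
    raise RuntimeError(f"all servers failed: {last_error}")
```

The slide's DHCP caveat fits this picture: where the protocol spec caps the number of failover peers, the client-side list cannot grow past it.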
28. Rapid Recovery service
Example: E-mail and Web service
A set of servers and storage is replicated at two sites: primary and standby
Primary site: active servers and storage
Secondary site: standby servers and replicated storage
Data from the primary site is synchronously replicated to the secondary
Two separate fibrechannel networks interconnect systems and storage at both sites
Catastrophic failure event: the system can be manually reconfigured to use the standby servers and/or secondary storage (~30 minutes)
Servers are located on the HA primary infrastructure network
29. (diagram slide; no extracted text)
30. Experiences at Penn
Where these approaches have been helpful
Higher availability, non-disruptive maintenance
Where they have not
Complexity can be hard to manage!
Where cost has been high
Replicated systems and networks, high-end storage solutions
Real availability experience
DNS, a critical service, went from 99.0% to 99.999% availability!
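Expressed as downtime, the DNS improvement quoted above is dramatic; a quick back-of-the-envelope check (ours, not the slide's):

```python
def downtime_minutes(availability: float) -> float:
    """Annual downtime in minutes implied by an availability fraction."""
    return (1.0 - availability) * 365 * 24 * 60

before = downtime_minutes(0.99)     # ~5256 min/year (~3.7 days)
after = downtime_minutes(0.99999)   # ~5.3 min/year
print(f"{before:.0f} min/yr -> {after:.1f} min/yr "
      f"({before / after:.0f}x improvement)")
```

Going from two nines to five nines cuts the implied annual downtime by a factor of a thousand, from days to minutes.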
31. Future Enhancements
Making RR services highly available: "clustering", IETF rserpool, etc.
Metropolitan area DR (or better)
Rolling disaster protection
Others:
IP multipathing
Trunking links to servers: 802.3ad, SMLT, DMLT, or similar
Rapid Spanning Tree (IEEE 802.1w)
Multi-master KADM service
Improved management and monitoring infrastructure