From Jisc's campus network engineering for data-intensive science workshop on 19 October 2016.
https://www.jisc.ac.uk/events/campus-network-engineering-for-data-intensive-science-workshop-19-oct-2016
This document summarizes a presentation on supporting data-intensive applications. It discusses the Janet end-to-end performance initiative, which engages with data-intensive research communities to help optimize performance. Key points include:
- Data-intensive science applications and remote-computation scenarios requiring high bandwidth are increasingly common.
- Understanding researcher requirements and setting expectations about practical throughput limits is important.
- perfSONAR is used to measure network characteristics and identify performance issues between sites on the Janet network.
- The "Science DMZ" model of separating research and campus traffic is adopted to avoid bottlenecks and optimize data transfer performance.
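The point about practical throughput limits can be made concrete with two standard rules of thumb, the bandwidth-delay product and the Mathis equation, which perfSONAR measurements help validate. A minimal sketch (the link rate, RTT, and loss figures are illustrative assumptions, not numbers from the presentation):

```python
import math

def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the TCP buffer needed to fill the pipe."""
    return link_bps * rtt_s / 8

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: rate <= (MSS / RTT) * (1 / sqrt(loss))."""
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate)

# A hypothetical 10 Gbit/s path with a 20 ms round-trip time:
buffer_needed = bdp_bytes(10e9, 0.020)              # 25 MB of buffer
ceiling = mathis_throughput_bps(1460, 0.020, 1e-7)  # ~1.85 Gbit/s

print(f"TCP buffer to fill the pipe: {buffer_needed / 1e6:.0f} MB")
print(f"Ceiling at 1-in-10-million packet loss: {ceiling / 1e9:.2f} Gbit/s")
```

Even one lost packet in ten million caps a single TCP flow well below the 10 Gbit/s link rate on this path, which is why end-to-end loss, not raw bandwidth, usually sets the practical limit.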
Hyper efficient data centres – key ingredient: intelligence (Networkshop44) – Jisc
In this presentation Willie O'Connell will examine how Finland's Centre for Science (CSC) has implemented a modular data centre solution to support high-performance computing for the Finnish University and Research Network (FUNET), while achieving an annualised power usage effectiveness (PUE) of less than 1.1. In addition, he will outline how CSC has used data centre infrastructure management (DCIM) to manage the facilities in fine detail, ensuring reliability while maximising asset utilisation and return on investment.
Challenges and Issues of Next Cloud Computing Platforms – Frederic Desprez
Cloud computing has now crossed the frontiers of research to reach industry. It is used every day, whether to exchange emails or make reservations on websites. However, much research remains to be done to improve the performance and functionality of these platforms of tomorrow. In this talk, I will give an overview of some of the theoretical and applied research done at INRIA, particularly around cloud distribution, energy monitoring and management, massive data processing and exchange, and resource management.
The document provides an overview of networking and storage concepts, including:
- Contrasting networking and storage considerations: moving information vs. holding it in a repository, and carrying data over distance vs. preserving it over time.
- Explaining basic storage network technologies like direct-attached storage (DAS), network-attached storage (NAS), and storage area networks (SAN).
- Describing how virtual SCSI cables work over different connection types like SCSI, Fibre Channel, SAS, and iSCSI to connect initiators and targets.
Cloud present, future and trajectory (Amazon Web Services) – Jisc Digifest 2016 – Jisc
In Jisc's future of cloud computing horizon scan report, we identified three strategic areas where Jisc could support universities and colleges in moving to the cloud – cloud as a utility, app as a service, and working to build capability in cloud technologies.
Come along to this session to hear more about this work from Jisc futurist Martin Hamilton, and find out how you can get involved.
This document discusses the convergence of IoT devices, edge computing, fog computing, and cloud computing infrastructures. It notes the exponential growth in connected devices and generated data, and the need for distributed computing resources closer to users to address latency, bandwidth and other constraints. Key research issues discussed include locality-aware resource management, deployment and reconfiguration of edge sites, energy monitoring and optimization, and resilience across distributed infrastructures.
Analyzing Big Data in Medicine with Virtual Research Environments and Microservices – Ola Spjuth
This document discusses analyzing big data in medicine using virtual research environments and microservices. It notes the vast amount of data being generated and challenges of data management, analysis and scaling. The European Open Science Cloud aims to enable access to shared scientific data across borders. Contemporary analysis uses high-performance computing but has limitations. Cloud computing, virtual machines, containers and microservices can help address these challenges by providing on-demand resources and decomposing functionality into independent services. The PhenoMeNal project is building a standardized e-infrastructure using these approaches to enable users to access tools and data. This improves sustainability, reliability, scalability and enables agile development and science.
This is Part III of a workshop presented by ICPSR at IASSIST 2011. This section focuses on data management including data management plans, secure computing environments, and restricted data contract management.
Grid computing is a distributed computing approach that allows users to access networked computer systems and resources located across different areas. It provides computing resources like processors, storage, and applications to users regardless of where those resources are located, similar to how the electrical power grid provides electricity to users without knowledge of its source. Key benefits of grid computing include solving large-scale problems through parallel processing and optimally utilizing idle computing resources. Security, licensing, and performance monitoring present challenges to grid computing's adoption.
This document summarizes a workshop on developing standards profiles for cloud computing. The workshop agenda included welcoming remarks, a summary of previous discussions and points of contention, and breakout groups to discuss specific topics like policy and technical standardization. The breakout groups provided feedback on initial standards profiles created through clustering projects based on their characteristics. Key discussion points included refining definitions of characteristics like "advanced security" and ensuring standards profiles allow for conformance with incorporated standards. The workshop aimed to further develop the methodology for creating profiles and move the process forward.
The document discusses Internet2, an advanced networking consortium that operates a 15,000 mile fiber optic network for research and education. It provides very high speed connectivity and collaboration technologies to facilitate large data sharing and frictionless research. Examples are given of life sciences projects utilizing Internet2's high-speed network for genomic research and agricultural applications involving terabytes of satellite and sensor data. The network is expanding to include cloud computing resources and supercomputing centers to enable global-scale distributed scientific computing and collaboration.
This document discusses cloud computing and identifies top technical and non-technical obstacles and opportunities. It begins with definitions of cloud computing, software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), public clouds, and private clouds. It then examines 10 major obstacles for cloud computing, including issues with business continuity, data lock-in, data security, performance unpredictability, and software licensing. Each obstacle is accompanied by potential opportunities to address the challenge through approaches like standardization, virtualization, reputation services, and flexible licensing models.
Cloud Computing: Security, Privacy and Trust Aspects across Public and Privat... – Marco Casassa Mont
This document discusses cloud computing and related security issues. It provides background on cloud computing models and services. It discusses how cloud computing impacts enterprise security lifecycle management and control. Current trends of increasing cloud services adoption and consumerization of enterprise IT are described. Requirements for cloud computing like identity management, assurance, compliance and privacy are outlined. Initiatives to develop best practices for cloud security are also mentioned. Potential future research directions around trusted infrastructure, security analytics, economics of cloud stewardship and privacy management are proposed.
This document outlines a presentation on policy-based validation of SAN (storage area network) configurations. It introduces SANs and compares them to NAS (network-attached storage). It then discusses factors like global access, economics, issues, and challenges in SAN management. It covers relevant data structures, protocols, components like HBAs. The future work section outlines an architecture for policy-based validation including a policy evaluator, request generator, and action handler.
ENDA - Presentation - MCC workshop - v1.11 – Jiwei Li
This document presents ENDA, a proposed solution for embracing network inconsistency in mobile cloud computing. ENDA is a three-tier architecture that aims to make the most energy efficient offloading decisions for smartphones by selecting the optimal Wi-Fi access point based on predicted user trajectory, workload balancing among cloudlets, and minimizing communication overhead. Preliminary results from a GUI-based simulation show ENDA's ability to choose the most energy efficient Wi-Fi network path according to a user's predicted movement. The solution seeks to address issues with current offloading approaches and overcome constraints of limited cloudlet resources and Wi-Fi coverage.
This Presentation was prepared by Abdussamad Muntahi for the Seminar on High Performance Computing on 11/7/13 (Thursday) Organized by BRAC University Computer Club (BUCC) in collaboration with BRAC University Electronics and Electrical Club (BUEEC).
Running Enterprise Workloads with an open source Hybrid Cloud Data Architecture – DataWorks Summit
The document discusses Hortonworks DataPlane Service (DPS), a platform that provides consistent security, governance, and management of data across hybrid cloud environments. Key capabilities of DPS include data lifecycle management using Data Lifecycle Manager (DLM), data discovery and profiling through Data Steward Studio (DSS), and self-service analytics with Data Analytics Studio (DAS). DPS provides a global data fabric to address challenges of securing, governing, and delivering data across multiple data sources and locations.
This document defines storage area networks (SANs) and discusses their architecture, technologies, management, security and benefits. A SAN consists of storage devices connected via a dedicated network that allows servers to access storage independently. Fibre Channel is the most widely used technology but iSCSI and FCIP allow block storage over IP networks. Effective SAN management requires coordination across storage, network and system levels. Security measures like authentication, authorization and encryption help protect data in this shared storage environment.
The title of this talk is a crass attempt to be catchy and topical, by referring to the recent victory of Watson in Jeopardy.
My point (perhaps confusingly) is not that new computer capabilities are a bad thing. On the contrary, these capabilities represent a tremendous opportunity for science. The challenge that I speak to is how we leverage these capabilities without computers and computation overwhelming the research community in terms of both human and financial resources. The solution, I suggest, is to get computation out of the lab—to outsource it to third party providers.
Abstract follows:
We have made much progress over the past decade toward effective distributed cyberinfrastructure. In big-science fields such as high energy physics, astronomy, and climate, thousands benefit daily from tools that enable the distributed management and analysis of vast quantities of data. But we now face a far greater challenge. Exploding data volumes and new research methodologies mean that many more--ultimately most?--researchers will soon require similar capabilities. How can we possibly supply information technology (IT) at this scale, given constrained budgets? Must every lab become filled with computers, and every researcher an IT specialist?
I propose that the answer is to take a leaf from industry, which is slashing both the costs and complexity of consumer and business IT by moving it out of homes and offices to so-called cloud providers. I suggest that by similarly moving research IT out of the lab, we can realize comparable economies of scale and reductions in complexity, empowering investigators with new capabilities and freeing them to focus on their research.
I describe work we are doing to realize this approach, focusing initially on research data lifecycle management. I present promising results obtained to date, and suggest a path towards large-scale delivery of these capabilities. I also suggest that these developments are part of a larger "revolution in scientific affairs," as profound in its implications as the much-discussed "revolution in military affairs" resulting from more capable, low-cost IT. I conclude with some thoughts on how researchers, educators, and institutions may want to prepare for this revolution.
Mellanox has a worldwide presence with sales offices across North America, Europe, Asia, and other regions. It employs a push/pull sales strategy working with OEMs, distributors, solution providers, and directly with end users in markets like HPC, government, finance, and cloud. Key growth drivers include increased adoption of high-speed InfiniBand in hyperscale and HPC, new storage solutions and appliances, and opportunities in big data, virtualized environments, and government infrastructure investment. Case studies provide examples of Mellanox solutions for an OpenStack cloud, Asian webscale provider, and European scientific compute facility.
Enabling efficient movement of data into & out of a high-performance analysis... – Jisc
Solving Network Throughput Problems at the Diamond Light Source – Jisc
Archiving data from Durham to RAL using the File Transfer Service (FTS) – Jisc
This document discusses the concept of a Science DMZ, which consists of three key components: 1) a dedicated "friction-free" network path with high-performance networking devices located near the site perimeter to facilitate science data transfer, 2) dedicated high-performance data transfer nodes optimized for data transfer tools, and 3) a performance measurement/test node. It contrasts this approach with the typical ad-hoc deployment of a data transfer node wherever space allows, which often fails to provide necessary performance. Details of an example Science DMZ deployment at Lawrence Berkeley National Laboratory are provided.
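A back-of-envelope calculation shows why the friction-free path and dedicated DTNs matter. The rates below are illustrative assumptions, not figures from the LBNL deployment: a stateful campus firewall often limits a single flow to a few hundred Mbit/s, while a tuned DTN can sustain most of a 10 Gbit/s link.

```python
def transfer_hours(dataset_bytes: float, rate_bps: float) -> float:
    """Hours to move a dataset at a sustained transfer rate."""
    return dataset_bytes * 8 / rate_bps / 3600

dataset = 50e12  # a hypothetical 50 TB instrument dataset

for label, rate_bps in [("via campus firewall at 300 Mbit/s", 300e6),
                        ("via Science DMZ DTN at 9 Gbit/s", 9e9)]:
    print(f"{label}: {transfer_hours(dataset, rate_bps):.1f} hours")
```

The same dataset takes roughly fifteen days through the constrained path but about half a day over the dedicated one, which is the gap the Science DMZ design is meant to close.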
Electron Microscopy Between OPIC, Oxford and eBIC – Jisc
BT Security protects customers by monitoring for potential security incidents and threats. It proactively reviews BTID operations to prevent incidents from occurring. The talk covered reactive monitoring, temporarily blocking IP addresses due to reallocation issues, and intelligence scanning to identify ways to improve security processes. BT Security recommends choosing strong, unique passwords and changing them regularly to help protect customer accounts and information.
Data and information governance: getting this right to support an information... - Jisc
This document discusses establishing data and information governance to support an information security program. It outlines establishing frameworks for information security and data management with defined roles, policies, procedures and tools. This includes classifying data, establishing data management principles, oversight groups and governance bodies to define strategies, manage risks and ensure compliance. The goal is to understand and promote the value of data assets while protecting confidentiality, integrity and availability. It also describes applying these frameworks and changing roles and responsibilities to better manage information assets.
Cyber crime is increasing in sophistication, impact, and frequency according to a presentation by Charlie McMurdie of PwC. A wide range of threat actors carry out attacks, including organized criminals, nation states, hackers, and insiders. Common motivations include financial gain, hacktivism, and espionage. High-profile breaches have stolen personal and payment details impacting millions. Companies face direct costs like investigation, indirect costs like loss of customers, and intangible costs like damage to brand. Cyber attacks are now conducted on an industrial scale by organized criminal networks. Recent news reports highlight teenage hackers operating underground forums and groups like Anonymous targeting financial institutions. McMurdie argues that a networked approach is needed to counter these industrial-scale threats.
The document discusses the role of the Chief Information Security Officer (CISO) at the University of Edinburgh. It outlines that the CISO was appointed to provide central leadership on information security risks across the university. The CISO's main responsibilities include leading the information security strategy, managing information security risks from internal and external threats, advising on security threats, and developing security policies and governance. Initial priorities for the CISO included recruiting a security team, focusing on users, overhauling risk governance, and supporting strategic projects. Keys to success are aligning with the university's digital transformation strategy, gaining buy-in from colleges, ensuring business areas own their risks, and providing supporting services through collaboration.
The document discusses cyber incident handling and reporting. It notes that 65% of large firms and 1 in 4 businesses experienced a cyber breach or attack in the past year. It outlines steps for businesses to take to prepare for and handle cyber incidents, including having an incident response plan, understanding network topology, and ensuring key points of contact. It provides details on where to report historic or ongoing cyber incidents and crimes. It also describes the Cyber Information Sharing Partnership (CiSP), a platform for sharing cyber threat information between government and industry.
Certifying and Securing a Trusted Environment for Health Informatics Research... - Jisc
The document discusses the certification and securing of a trusted environment for health informatics research data at the University of Dundee. It provides an overview of the Health Informatics Centre, its research data management platform, safe haven architecture, and ISO27001 certification. The platform standardizes data extraction and release, and adds metadata and quality checks. A safe haven uses pseudonymized data, and virtual environments prevent data from leaving. ISO27001 certification provides governance and reduces documentation through standardized information security practices.
Nick Moore discusses working with students at the University of Gloucestershire on ISO27001, an international information security standard. He proposes involving computing students who are now in the industry to provide a real-life scenario that builds links between students and staff while developing IT Services' defensive capabilities with a managed risk profile. The key is maintaining balance between business goals, student expectations, and quantified risks.
Network Engineering for High Speed Data Sharing - Globus
The document discusses modernizing network architecture to improve data sharing performance for science. It proposes separating portal logic from data handling by placing data on dedicated high-performance infrastructure in science DMZs. This allows data to be efficiently transferred between facilities while portals focus on search and access. The Petascale DTN project achieved over 50Gbps transfers between HPC sites using this model. Long-term, interconnected science DMZs could create a global high-performance network enabling efficient data movement for discovery.
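To give a sense of scale for the Petascale DTN figure quoted above, here is a back-of-the-envelope sketch; the petabyte dataset size is an illustrative assumption, and only the 50 Gbps rate comes from the summary:

```python
# Rough transfer-time estimate: how long a sustained rate takes to move a dataset.
def transfer_time_seconds(size_bytes: float, rate_bps: float) -> float:
    """Seconds to move size_bytes at a sustained rate_bps (bits per second)."""
    return size_bytes * 8 / rate_bps

PETABYTE = 1e15  # bytes, decimal definition

days = transfer_time_seconds(PETABYTE, 50e9) / 86400  # 50 Gbps sustained
print(f"1 PB at 50 Gbps takes about {days:.1f} days")  # just under 2 days
```

Even at rates only achievable with dedicated DTN infrastructure, petabyte-scale movement is measured in days, which is why the end-to-end path has to be friction-free.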
Common Design Elements for Data Movement, Eli Dart - Ed Dodds
Eli Dart, Network Engineer, ESnet Science Engagement, Lawrence Berkeley National Laboratory. Cosmology CrossConnects Workshop, Berkeley, CA, February 11, 2015.
- James Blessing is the Deputy Director of Network Architecture at Future Services. He discussed Ciena's MCP network management software, the need for automation of network provisioning through APIs, and the JiscMail NETWORK-AUTOMATION mailing list as a resource.
- The document then covered topics like Netpath services, layer 2 and 3 VPNs, network function virtualization, IPv6 adoption, the Janet end-to-end performance initiative, science DMZ principles, network performance monitoring with perfSONAR, and working with the GÉANT project.
PROnet is an NSF-supported research project being conducted by researchers at the University of Texas at Dallas. PROnet is dedicated to enabling the design, development, demonstration and deployment of innovative ultrahigh-speed low-latency applications being created in and across North Texas and beyond.
IAETSD survey on big data analytics for SDN (software defined networks) - IAETSD
This document discusses using software-defined networking and OpenFlow to improve network architectures for scientific data sharing. It proposes exploring a virtual switch network abstraction combined with SDN concepts to provide a simple, adaptable framework for science users. The challenges of current campus networks not being optimized for large data flows are outlined. Leveraging SDN could help build end-to-end network services with traffic isolation to meet the needs of data-intensive science applications and collaborations.
Opening Keynote Lecture, 15th Annual ON*VECTOR International Photonics Workshop, Calit2's Qualcomm Institute, University of California, San Diego, February 29, 2016
A Platform for Large-Scale Grid Data Service on Dynamic High-Performance Netw... - Tal Lavian Ph.D.
Data intensive Grid applications often deal with multiple terabytes and even petabytes of data. For them to be effectively deployed over distances, it is crucial that Grid infrastructures learn how to best exploit high-performance networks
(such as agile optical networks). The network footprint of these Grid applications shows pronounced peaks and valleys in utilization, prompting a radical overhaul of traditional network provisioning styles such as peak-provisioning, point-and-click or operator-assisted provisioning. A Grid stack must become capable of dynamically orchestrating a complex set of variables related to application requirements, data services, and network provisioning services, all within a rapidly and continually changing environment. Presented here is a platform that addresses some of these issues. This service platform closely integrates a set of large-scale data services with those for dynamic bandwidth allocation, through a network resource middleware service, using an OGSA-compliant interface allowing direct access by external applications. Recently, this platform has been implemented as an experimental research prototype on a unique wide area optical networking testbed incorporating state-of-the-art photonic
components. The paper, which presents initial results of research conducted on this prototype, indicates that these methods have the potential to address multiple major challenges related to data intensive applications. Given the complexities of this topic, especially where scheduling is required, only selected aspects of this platform are considered in this paper.
Enhancing Performance with Globus and the Science DMZ - Globus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
The document discusses the Energy Sciences Network (ESnet), which provides networking infrastructure for the U.S. Department of Energy's Office of Science. ESnet enables large-scale collaborative science by supporting massive data sharing, thousands of collaborators worldwide, and distributed data processing and management. The network has evolved over time and now connects multiple research institutions both within the U.S. and internationally to support large scientific projects producing enormous amounts of data.
The Science DMZ
1. Eli Dart, Campus network engineering workshop
19/10/2016 The Science DMZ
2. The Science DMZ
Eli Dart
Network Engineer
ESnet Science Engagement
Lawrence Berkeley National Laboratory
JISC Campus Network Engineering for Data Intensive
Science Workshop
October 19, 2016
31. Say NO to SCP (2016)
• Using the right data transfer tool is very important
• Sample Results: Berkeley, CA to Argonne, IL (near Chicago). RTT = 53 ms, network capacity = 10 Gbps.
• Notes
– scp is 24x slower than GridFTP on this path!!
– To get more than 1 Gbps (125 MB/s) disk to disk requires a RAID array.
– (Assumes host TCP buffers are set correctly for the RTT)

Tool                            Throughput
scp                             330 Mbps
wget, GridFTP, FDT (1 stream)   6 Gbps
GridFTP and FDT (4 streams)     8 Gbps (disk limited)
31 – ESnet Science Engagement (engage@es.net) - 11/1/2016
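The note about TCP buffers can be made concrete with the bandwidth-delay product: to keep a path full, TCP must be able to hold one round-trip's worth of data in flight. A minimal sketch using the figures on this slide (10 Gbps, 53 ms RTT):

```python
# Bandwidth-delay product: the minimum TCP buffer needed to saturate a path.
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bytes in flight required to fill the path at the given rate and RTT."""
    return bandwidth_bps * rtt_seconds / 8  # divide by 8: bits -> bytes

buf = bdp_bytes(10e9, 0.053)  # 10 Gbps, 53 ms RTT (from the slide)
print(f"Required TCP buffer: {buf / 1e6:.2f} MB")  # ~66 MB, well above typical OS defaults
```

With default buffer sizes of a few megabytes, a single stream simply cannot keep 66 MB in flight, which is one reason untuned hosts fall so far short of line rate on long paths.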
72. Sample Data Transfer Results (2005)
• Using the right tool is very important
• Sample Results: Berkeley, CA to Argonne, IL (near Chicago). RTT = 53 ms, network capacity = 10 Gbps.

Tool                    Throughput
scp                     140 Mbps
HPN-patched scp         1.2 Gbps
ftp                     1.4 Gbps
GridFTP, 4 streams      5.4 Gbps
GridFTP, 8 streams      6.6 Gbps

• Note that to get more than 1 Gbps (125 MB/s) disk to disk requires RAID.
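The stream counts in these tables reflect a general property of TCP rather than anything specific to GridFTP: steady-state single-stream throughput is roughly bounded by the Mathis approximation, MSS / (RTT × sqrt(p)), so even tiny packet loss caps one stream on a long path, and parallel streams recover the lost headroom. A sketch; the loss rates below are hypothetical examples, and only the 53 ms RTT matches the slides:

```python
import math

# Mathis et al. approximation of steady-state single-stream TCP throughput.
def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
    """Approximate throughput of one TCP stream, in bits per second."""
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_prob)

# 1460-byte MSS, 53 ms RTT; loss rates are illustrative assumptions.
for p in (1e-4, 1e-6, 1e-8):
    gbps = mathis_throughput_bps(1460, 0.053, p) / 1e9
    print(f"loss {p:.0e}: ~{gbps:.2f} Gbps per stream")
```

Every factor-of-100 reduction in loss buys only a factor of 10 in throughput, which is why a Science DMZ aims for essentially loss-free paths rather than relying on more streams alone.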