This document discusses using grid technology for distributed media processing tasks like video transcoding. It presents the MediaGrid concept of sharing heterogeneous storage and computational resources across organizations. Test results show distributing video transcoding across multiple servers can significantly reduce processing time. Simulation results indicate total job time is highly dependent on available WAN bandwidth when outsourcing to remote resource providers. The conclusions are that grid technology is viable for media production tasks by enabling parallelism, but technical limitations exist when using remote resources over insufficient network connections.
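The bandwidth dependence noted above can be illustrated with a back-of-the-envelope model (a hypothetical sketch; the function name and figures are illustrative, not taken from the study itself):

```python
def outsourced_job_time(data_gb, wan_gbps, compute_hours_serial, n_servers):
    """Rough model: total job time = WAN transfer time + parallelized compute time.

    Transfer is serial over the WAN link; transcoding is assumed to
    parallelize near-linearly across the remote servers.
    """
    transfer_h = (data_gb * 8) / (wan_gbps * 3600)  # GB -> Gb, seconds -> hours
    compute_h = compute_hours_serial / n_servers
    return transfer_h + compute_h

# A 100 GB source, 10 h of serial transcoding, 8 remote servers:
fast = outsourced_job_time(100, 1.0, 10, 8)  # 1 Gbps WAN link
slow = outsourced_job_time(100, 0.1, 10, 8)  # 100 Mbps WAN link
# With the slow link, transfer time dominates and much of the
# parallel speedup is lost -- the effect the simulations describe.
```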
Tungsten University: Setup and Operate Tungsten Replicators (Continuent)
Do you have the background necessary to take full advantage of Tungsten Replicator in your environments? Tungsten offers enterprise-quality replication features in an open source package hosted on Google Code. This virtual course teaches you how to set up innovative topologies that solve complex replication problems. We start with single MySQL servers running MySQL replication and show a simple migration path to Tungsten.
Course Topics
- Checking host and MySQL prerequisites
- Downloading code from http://code.google.com/p/tungsten-replicator/
- Installation using the tungsten-installer utility
- Transaction filtering using standard filters as well as customized filters you write yourself
- Enabling and managing parallel replication
- Configuring multi-master and fan-in using multiple replication services
- Backup and restore integration
- Troubleshooting replication problems
- Logging bugs and participating in the Tungsten Replicator community
Replication is a powerful technology that takes knowledge and planning to use effectively. We give you the background that makes replication easier to set up and lets you take full advantage of Tungsten Replicator's benefits. Learn how to configure and use it more effectively for your projects, both in the cloud and on on-premises hardware.
Tim Bell from CERN presented on how they are using Puppet and other configuration management tools to manage their large infrastructure. CERN operates the Large Hadron Collider and worldwide computing grid. They are moving to adopt open source tools to better manage scaling their infrastructure from 7,000 to 15,000 servers. Puppet is helping CERN provision OpenStack cloud resources and automate configuration of complex applications and systems.
The document discusses smart grid technology. It begins with an introduction, then covers related work; smart grid components such as connectivity networks and access networks; how smart grids work using two-way communication; features; comparisons to traditional grids; advantages such as reduced losses and a smaller carbon footprint; and disadvantages such as intermittent renewable sources. It concludes that smart grids will modernize energy supply and enable smart homes and cities. The future scope includes improved infrastructure and adoption as widespread as the Internet's. References are provided.
This document discusses how grid technologies can be used for disaster management and infrastructure protection. It describes how a grid of grids approach allows linking of various resources like commanders, troops, data, simulations and more. It also discusses using grids for crisis management and response by linking sensors, managers and first responders to decision support systems. Various examples of infrastructure grids for floods, earthquakes are also presented.
Grid computing is the application of several computers to a single problem at the same time. This presentation deals with the idea of grid computing, its design considerations, how a grid works, and some of the existing grids in the world today.
The document provides an overview of grid computing, including:
1) Grid computing involves sharing distributed computational resources over a network and providing single login access for users. Resources may be owned by different organizations.
2) Examples of current grids discussed include the NSF PACI/NCSA Alliance Grid, the NSF PACI/SDSC NPACI Grid, and the NASA Information Power Grid.
3) The document also discusses various grid middleware tools and projects for using grid resources, such as Globus, Condor, Legion, Harness, and the Internet Backplane Protocol.
This document provides an overview and introduction to grid computing concepts. It discusses the benefits of grid computing such as exploiting underutilized resources and enabling collaboration. It also describes some key computational grid projects including a national fusion grid pilot project. The document outlines the layered architecture of grid systems and references some foundational projects and standards like Globus Toolkit and Global Grid Forum. Finally, it introduces the concepts of OGSA and OGSI which provide standard interfaces and behaviors for distributed system management in grid environments.
The document discusses Grid Computing, which uses distributed computing resources like computer clusters connected via high-speed networks to provide high computational power. It describes the Globus Toolkit, an open-source software toolkit that provides basic services for building Grids. Key components of the Globus Toolkit allow for resource management, security, data management, and communication. The document also discusses parallel programming using MPI (Message Passing Interface) and potential applications of Grid Computing such as distributed supercomputing, real-time systems, and data-intensive processing.
The document discusses grid computing and provides examples. It begins with an introduction to supercomputers and provides Param Padma as an example. It then defines grid computing, discussing its evolution and advantages over supercomputers. Design considerations for grid computing include assigning work randomly to nodes to check for accurate results due to lack of central control. Implementation involves using middleware like BOINC and Alchemi, which are described. The document outlines service-oriented grid architecture and challenges. It provides examples of grid initiatives worldwide like TeraGrid in the US and Garuda in India.
The document discusses the grid, which allows for integrated and collaborative use of geographically separated computing resources. Grid computing enables sharing and aggregation of distributed autonomous resources dynamically based on availability, capability, performance, cost and user requirements. Key characteristics of grid systems include coordinating resources not controlled by a central authority, using open standards, and providing quality of service.
Energy efficiency optimization in the oil and gas industry (Saeed Alipour)
Energy integration is a key solution in chemical process and crude refining industries to minimize external fuel consumption and to face the impact of growing energy crises. Typical energy integration projects can reach a reduction of heating fuels and cold utilities by 10%-30% compared with original designs or existing installations. Pinch Analysis is a leading tool and regarded as an efficient method to increase energy efficiency and minimize fuel flow consumption. It can practically be applied to synthesize a HEN (heat exchange network) or modify an existing preheat train for minimum energy consumption.
Grid computing involves linking together distributed computer resources from multiple administrative domains to achieve a common goal. Resources in a grid are heterogeneous and geographically dispersed. A grid differs from a cluster in that it provides a consistent, dependable, and transparent collection of computing resources across wide distances. Grid infrastructure must respect local autonomy, handle heterogeneous hardware, and be resilient and dynamic.
The document discusses smart grids as a modernization of existing power systems. It describes smart grids as using information technology and communication networks to create a more decentralized, efficient and renewable-based electric grid. Some key benefits of smart grids include improved energy efficiency, higher power reliability, lower costs for consumers, and better integration of renewable energy sources. However, smart grids also face challenges such as high installation costs and potential cybersecurity and privacy issues. The document provides an overview of smart grid components and technologies as well as examples of smart grid pilot projects being implemented in India.
This document discusses smart grid technology. It defines smart grid as an electric grid that uses information and communication technology to gather data and act on information about supplier and consumer behavior. The key components of a smart grid are smart meters, phasor measurement, information transfer, and distributed generation. A smart grid offers benefits like reduced carbon footprint, improved distribution management, self-healing capabilities, and increased efficiency. Specific ideas presented for a smart grid include a power management app that provides household electricity usage insights and allows selling regenerative power back to the grid.
This document is a lecture on grid systems and modular design. It discusses the history and uses of grid systems in graphic design, architecture, and page layout. Some key points include:
- Grid systems provide order, consistency and flexibility in design by establishing a set of guidelines.
- Early uses of grids can be seen in manuscripts and Greek temples, while graphic designers like Wim Crouwel and Josef Müller-Brockmann popularized grids in the mid-20th century.
- Effective grids divide space into columns and rows to form a modular structure. Common module sizes are based on factors of 12 to allow for flexibility.
- Negative space and variation within the grid help make designs visually interesting.
This document discusses ZooKeeper deployment, management, and client use pitfalls. It provides guidance on ideal ZooKeeper cluster sizing, topology design, and server role selection for deployment. For management, it covers dynamic reconfiguration and high failure expectations. For clients, it discusses herd effects, limiting child nodes, and handling high write loads. The presentation aims to help users avoid common ZooKeeper issues.
The document discusses lessons learned from a project to rollout WiFi connectivity and a new digital workflow for 5000 rail employees using mobile devices over 5 months. Key lessons included:
1) Rolling out features frequently through iterations and pilots to demonstrate value, react to problems early, and migrate users gradually.
2) Keeping the core project team small and focused while coordinating with multiple suppliers and stakeholders.
3) Leveraging new technologies like mobile middleware to enable robust data distribution, easy migration between systems, and self-healing capabilities to reduce support costs.
The project was completed on time and within budget despite challenges, and established foundations for future digital initiatives through its use of agile principles and innovative technologies.
This document discusses inter-process communication and synchronization in operating systems. It covers topics like mutual exclusion, solutions to the mutual exclusion problem using software approaches like Dekker's and Peterson's algorithms, hardware support using test-and-set operations, and operating system solutions using semaphores. It also discusses principles of concurrency and interactions between processes like competing processes and cooperating processes.
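The semaphore-based solution this summary mentions can be sketched in a few lines, with Python's threading module standing in for the OS-level primitives (an illustrative sketch; the variable names are not from the original lecture):

```python
import threading

counter = 0
mutex = threading.Semaphore(1)  # binary semaphore guarding the shared counter

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()   # entry section: wait(mutex)
        counter += 1      # critical section: only one thread at a time
        mutex.release()   # exit section: signal(mutex)

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- without mutual exclusion, increments could be lost
```

The acquire/release pair plays the role of the entry and exit protocols that Dekker's and Peterson's algorithms implement purely in software.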
NetSupport School is classroom management software that allows teachers to monitor and control student computers. It has features like remote viewing and control of student screens, file distribution, messaging, surveys, and a whiteboard. The software connects via TCP/IP and uses ports to communicate between computers on a network. It has options for connectivity, security, audio control, and customizing the user interface.
SCADA Software or Swiss Cheese Software? by Celil Unuver (CODE BLUE)
The talk is about SCADA vulnerabilities and their exploitation. We will answer some specific questions about SCADA software vulnerabilities with technical details.
The questions are:
- Why are SCADA applications buggy?
- What is the status and impact of the threat?
- How do researchers or hackers discover these vulnerabilities?
In this talk we will also look at some SCADA vulnerabilities that affect well-known SCADA/HMI vendors, and will show how easy it is to hunt these vulnerabilities via reverse engineering, fuzzing, etc.
Celil UNUVER
Celil Unuver is co-founder and security researcher of SignalSEC Ltd. He is also the founder of the NOPcon Security Conference. His areas of expertise include vulnerability research and discovery, exploit development, penetration testing, and reverse engineering. He has been a speaker at CONFidence, Swiss Cyber Storm, c0c0n, IstSec, and the Kuwait Info Security Forum. He enjoys hunting bugs and has discovered critical vulnerabilities affecting well-known vendors such as Adobe, IBM, Microsoft, and Novell.
1) JustRunIt is an experiment-based infrastructure for managing virtualized data centers that uses VM cloning and workload replay to conduct management experiments in a sandbox.
2) Case studies show JustRunIt can determine optimal resource allocations to meet performance targets with minimal resources, outperforming highly accurate modeling.
3) JustRunIt can also evaluate hardware upgrades by running experiments on upgraded sandbox hardware.
1) The document proposes a Byzantine fault-tolerant MapReduce approach for cloud-of-clouds environments to guarantee integrity and availability of data despite task corruptions or cloud outages.
2) A basic scheme replicates MapReduce tasks across clouds but has problems with computation, communication, and job execution control.
3) The proposed approach improves on the basic scheme by using deferred execution to address computation problems, digest communication to reduce inter-cloud communication, and a distributed job tracker to improve job execution control.
4) An evaluation of the approach on word count jobs across 3 clouds showed it added minimal overhead while providing fault tolerance.
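The replicated word-count evaluation described above can be illustrated with a toy model (pure Python with hypothetical names; the real system replicates MapReduce tasks across clouds rather than local function calls):

```python
from collections import Counter

def word_count(text):
    """The task itself: a plain word-count job."""
    return Counter(text.split())

def faulty_word_count(text):
    """A Byzantine replica that silently corrupts its result."""
    result = word_count(text)
    result["grid"] += 99  # arbitrary corruption
    return result

def majority_result(replica_outputs):
    """Accept a result returned by a strict majority of replicas."""
    for candidate in replica_outputs:
        matches = sum(1 for other in replica_outputs if other == candidate)
        if matches > len(replica_outputs) // 2:
            return candidate
    raise RuntimeError("no majority -- too many faulty replicas to mask")

text = "grid computing links grid resources"
outputs = [word_count(text), word_count(text), faulty_word_count(text)]
result = majority_result(outputs)
print(result["grid"])  # 2: the corrupted replica is outvoted
```

With three replicas, one Byzantine result is masked; the paper's deferred-execution optimization avoids even running the third replica unless the first two disagree.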
Automation of Discovery Technology Lab Workflows (Avetis Ghukasyan)
This presentation is a quick overview of all of my projects for Cubist Pharmaceuticals while I was working as an intern. I worked on five projects automating discovery technology lab workflows, and all of them are described in full detail in this presentation.
When DevOps and Networking Intersect by Brent Salisbury of socketplane.io (DevOps4Networks)
The document discusses the intersection of networks and DevOps. It covers challenges with traditional network operations including lack of programmability. It proposes distributed and software-defined networking approaches but notes hard problems remain. It emphasizes lessons learned around prototyping, understanding user needs, reliability, testing changes, and building a collaborative team culture.
The document discusses 2D viewing and simple animation techniques in computer graphics, including how to define a viewing region, perform viewing transformations, construct basic animations using techniques like double buffering and periodic motion, and manage frame rates for smooth animation playback. It also provides OpenGL code examples for tasks like setting the viewport and scaling images.
Talk on the upcoming Mahout nearest-neighbor framework, focusing particularly on the k-means acceleration provided by the streaming k-means implementation.
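The core streaming k-means idea — a single pass that compresses the data into a bounded set of weighted centroids, which a batch algorithm can then cluster exactly — can be sketched as follows (a simplified illustration, not Mahout's actual implementation; the fixed distance cutoff here stands in for its adaptive threshold):

```python
import math

def streaming_sketch(points, cutoff):
    """Single pass: merge each point into the nearest centroid if it lies
    within `cutoff`, otherwise open a new weighted centroid.  The sketch
    (far fewer centroids than points) is what gets clustered afterwards."""
    centroids = []  # each entry: [x, y, weight]
    for x, y in points:
        nearest = min(centroids, key=lambda c: math.dist((x, y), c[:2]),
                      default=None)
        if nearest is None or math.dist((x, y), nearest[:2]) > cutoff:
            centroids.append([x, y, 1.0])
        else:
            # fold the point into the nearest centroid as a weighted average
            w = nearest[2]
            nearest[0] = (nearest[0] * w + x) / (w + 1)
            nearest[1] = (nearest[1] * w + y) / (w + 1)
            nearest[2] = w + 1
    return centroids

# Two tight clouds of points collapse into two weighted centroids:
cloud = [(0.0, 0.1), (0.1, 0.0), (10.0, 10.1), (10.1, 10.0), (0.05, 0.05)]
sketch = streaming_sketch(cloud, cutoff=1.0)
print(len(sketch))  # 2
```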
This document provides an overview of using Scalding on Tez. It begins with introducing the presenter and how Scalding was adopted. The document then covers:
1. Setting up Scalding to run on Tez including specifying fabric in build.sbt and job configuration flags.
2. An example job ("wc plus") that computes word frequencies from text is presented.
3. Tips are provided like visualizing the Tez DAG using dot files and load balancing using forceToDisk.
4. Outstanding issues discussed include upgrading Scalding for Cascading 3.0 and resolving Guava dependency conflicts across the stack. Overall, Tez is described as easy for YARN shops to use.
%w(map reduce).first - A Tale About Rabbits, Latency, and Slim CrontabsPaolo Negri
Slide of the RailsConf 2009 session
Discover how parallel execution can batch-process large amounts of data, learn how to use queues to distribute workload and coordinate processes, and increase throughput on systems with high latency. Have fun with EventMachine, AMQP, and RabbitMQ, and get rid of that every-five-minutes cron job.
How to Troubleshoot OpenStack Without Losing SleepSadique Puthen
OpenStack's complex architecture and design, and the difficulty of troubleshooting it, amplify the effort of debugging a problem in an OpenStack environment. This can give administrators and support associates sleepless nights if OpenStack's native and supporting components are not configured properly and tuned for optimum performance, especially in large deployments involving high availability and load balancing.
The document discusses building a prototype Infrastructure as a Service (IaaS) cloud using OpenStack. It describes configuring the cloud networking using bridges, installing and configuring image storage using Glance, installing various Nova components to manage compute resources and networking, and configuring the single Nova configuration file. It also briefly mentions installing the Dashboard GUI and encourages contributions to the open source project.
This document outlines various high performance computing (HPC) applications and technologies. It discusses computational areas like computational fluid dynamics, quantum mechanics, and climate simulation. It also mentions HPC systems from SGI including Altix, ICE, and UV architectures. These systems provide scalable shared memory and distributed memory computing utilizing technologies like NUMAlink interconnect and InfiniBand fabrics.
Tremashark is a tool for network debugging that collects event logs from multiple sources like packet captures, syslog outputs, and console logs. It combines these logs into a single timeline of events and allows users to analyze the logs using Wireshark. Tremashark is useful for debugging Trema-based OpenFlow controllers as it can collect packet data, system logs, and internal IPC messages between Trema modules.
Similar to Grid technology for next gen media processing (20)
@kaosbeat explains how we took a second-screen app from idea to implementation, then took it further by building tools on top of it for use in the TV studio, using the same technology. Node.js, redis, EC2, RoR, ...
The document discusses innovation in file-based media production workflows. It begins by describing the transition from linear tape-based workflows to centralized file-based workflows using IT technology. This introduces new challenges around interoperability, asset management, automation and standardization. The rest of the document outlines a CHAMP platform that aims to address these challenges and provide fit-for-purpose tools. It then provides a use case example of using such tools for a tour production. Finally, it discusses future trends like cloud-based production workflows and transmedia storytelling across multiple platforms. The conclusion emphasizes focusing on both operational excellence and differentiating technology.
This document discusses various methods for obtaining and structuring metadata to help organize media content. It outlines both automated and manual approaches, including using extraction tools to generate low-level metadata, crowd-sourcing metadata from user interactions, and enhancing existing metadata by linking it to external knowledge bases using Linked Open Data. The key message is that no single approach is perfect, and the best solution involves trying multiple techniques like manual annotation, repurposing existing metadata, and leveraging automated tools.
MediaCRM is a platform that allows broadcasters to better manage customer relationships by combining television viewing with second screen experiences on devices like tablets, smartphones, and laptops. A survey found that while only 2.5% of respondents currently use tablets, many people regularly use additional devices while watching TV. The MediaCRM platform includes tools for real-time and offline analysis of second screen interactions like polls, messages, and sentiment to provide broadcasters with insights. A trial with a reality TV show saw around 11,000 viewers actively engaged across second screen platforms. Ongoing challenges include improving time synchronization between screens and providing more personalized content and feedback to consumers.
This document discusses research on interactive television viewing behaviors and platforms. It profiles different types of viewers called "The Watchowskys", "Edward Speakoutsky", "Edda Findoutsky", and "Willa & Wilbur Notsky". It also discusses the future of television being diverse and interactive in real-time rather than solely social. Social media influences viewing choices for 53% of viewers. Quizzes, comedy, music and sitcoms are most suited for interactive viewing. Platforms are chosen mostly for their specific interactive features.
Villa Square, one of the latest exploits of the VRT-medialab MediaSquare and MediaCRM teams, is a purpose-built platform for “Villa Vanthilt”, a popular live television program on één. This platform is in many ways a pioneering second-screen project and has received much positive feedback. We will present the stepping stones towards the Villa Square use cases and discuss the underlying technologies we have put to work. Come and see the mayhem HTML5 caused in the broadcast world.
Exploring your media with the Semantic Webvrt-medialab
VRT has a large audiovisual archive that is used daily. However, the contents of videos cannot be automatically interpreted by machines. Textual metadata is needed to help systems find the correct videos. By linking metadata tags to external open data sources like DBpedia, GeoNames, and MusicBrainz using semantic triples, a network of linked knowledge can be created. This allows machines to deduce extra information and provides context about resources mentioned in the metadata. Enhancing metadata with Linked Open Data technology renders the media collection more transparent by enabling innovative exploration features.
BDMA workshop presentation - Using the Second Screen - MediaSquare - MediaCRMvrt-medialab
Applying the Customer Relationship Management (CRM) methodology at a TV broadcaster has so far been rather difficult, because no direct, personal relationship with the media consumer was possible. With the arrival of synchronized second-screen applications for smartphones and tablets, that is changing.
These applications give consumers the ability to interact directly with programs broadcast by the network (e.g. in primetime).
This yields detailed insight into the viewing and listening behavior of media consumers.
As a result, broadcasters can optimize their programs and offerings to match consumer expectations, while advertisers in turn can measure the effectiveness of their campaigns and launch targeted, interactive advertisements in a simple and cost-efficient way.
Using VillaSquare, the second-screen application of Villa Vanthilt, we show some first results of this concept.
The document discusses the CHAMP platform, a cloud-hosted media production platform. It aims to address new challenges in file-based media production by putting the program maker at the center and providing flexible, collaborative tools. CHAMP will feature fit-for-purpose web apps, a story-centric data model, and integrate existing solutions and media workflows in the cloud. A use case of a television program is described to demonstrate how CHAMP could support location-independent, collaborative production workflows in the cloud.
CHAMP is a cloud-hosted platform for autonomous media production. It provides flexible, integrated, and collaborative workflows in the cloud using web applications accessible from multiple devices. The platform aims to support both professional and semi-professional media producers. It functions as an open technology ecosystem where providers can offer applications and services to users.
This document describes the MediaLoep project, which aims to improve media search through the use of linked open data. It discusses how MediaLoep gathers existing metadata from sources like subtitles, news production systems and EPG data. This information is combined, linked and indexed to enable enhanced search capabilities like information pop-ups, structured search filters and multilingual search. The document provides examples of how keywords and concepts are linked to external data sources to integrate additional context and make search results more intelligent.
This document discusses HTML5 features like video playback and summarizes supported video formats across browsers. It also provides details on H.264 video codec licensing and royalty costs for different types of video distribution like subscription or title-by-title. The document concludes with links to demo pages showcasing HTML5 features and a question and answer section.
The document provides an overview of HTML5 and its new features, including sections on semantics, multimedia, 2D/3D drawing, real-time communication and CSS3. It highlights new HTML5 elements like <header>, <footer>, <video>, <audio>, input types and canvas. It also discusses JavaScript APIs, web sockets and browser support for HTML5.
Boost your search with semantic technologyvrt-medialab
MediaLoep combines documents readily available within the broadcasting company (subtitles, news preparation, ...) with semantic web technology to create a powerful media search application.
Presented at EBU Production Technology Seminar 2011
Media Square : platform for second screen experiencesvrt-medialab
MediaSquare is a second screen app that allows users to interact digitally with primetime TV content by talking about it, having dialogues, voting in polls, recommending content to others, and playing games related to what they are watching. The app tests showed that 61% of 30,307 users participated in a diabetes screening poll, with 25% unaware they had the condition.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of a talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of their features, but many of those features trade security for convenience and capability. This best-practices guide outlines steps users can take to better protect personal devices and information.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
4. Originating problems
Tape-based media to file-based media
Multitude of file-based media transfers and processing
Storage / retrieval / transfer of media
Conforming
Transcoding
Upscaling
Editing
Geographically dispersed facilities / resources / media storage
5. Grid technology as solution?
Grid technology
A Grid is a distributed processing architecture in which heterogeneous resources are shared between different participating organizations, across an interconnecting network
Resources
Storage (media archive, temporary storage, etc.)
Computational (rendering farm, work stations, etc.)
Specialized (broadcasting, ingesting, etc.)
High-speed interconnecting network (1-10 Gbit/s)
10. Grid technology proof-of-concept
Investigated the viability of Grid technology for processing tasks
in media production / distribution companies
Transcoding of media
Upscaling of media
Video transcoding converts a video signal into another one with a different format, such as a different bit rate, frame rate, frame size, or even compression standard
Video transcoding is a resource-intensive process
I/O
Processing needs
11. Need for transcoded / rescaled video
VRT online media: http://www.deredactie.be
YouTube: http://www.youtube.com
12. Distributed video transcoding
How can we accelerate this process?
The source video (00:00:00-00:51:53) is split into four segments, each transcoded in parallel on its own server:
Server 1: 00:00:00-00:13:15
Server 2: 00:13:15-00:26:30
Server 3: 00:26:30-00:39:45
Server 4: 00:39:45-00:51:53
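The splitting step above can be sketched in a few lines. This is an illustrative helper, not the deck's actual Java front-end, and it cuts the clip into near-equal segments (the real system's cut points may differ):

```python
def hms(seconds: int) -> str:
    """Format a second count as HH:MM:SS."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

def split_segments(total_seconds: int, n: int):
    """Split a clip of total_seconds into n near-equal segments.

    Returns (start, end) timestamp pairs, one per transcoding server.
    """
    bounds = [i * total_seconds // n for i in range(n + 1)]
    return [(hms(a), hms(b)) for a, b in zip(bounds, bounds[1:])]

# A 51:53 source split across 4 servers:
print(split_segments(51 * 60 + 53, 4))
```

Each (start, end) pair can then be handed to a separate server, which transcodes only its own slice of the clip.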
20. Setup overview
… TORQUE as resource manager
… with a GPFS cluster as media storage
… a Java distributed transcoding front-end
… transcode libraries on each computational resource
… and the will to transcode in a distributed fashion
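Since the setup uses TORQUE as the resource manager, each segment can be submitted as a batch job. The sketch below renders a minimal TORQUE/PBS job script; the `transcode_segment.sh` wrapper name and the resource line are hypothetical, not taken from the deck:

```python
def pbs_job(segment_id: int, start: str, end: str, src: str = "input.vob") -> str:
    """Render a minimal TORQUE/PBS job script for one video segment.

    transcode_segment.sh is a hypothetical wrapper around the
    transcode libraries installed on each computational resource.
    """
    return "\n".join([
        "#!/bin/sh",
        f"#PBS -N transcode-{segment_id}",   # job name, one per segment
        "#PBS -l nodes=1:ppn=1",             # one core on one node
        f"transcode_segment.sh {src} {start} {end}",
    ])

print(pbs_job(1, "00:00:00", "00:13:15"))
```

In practice such a script would be handed to `qsub`, with one job per segment of the source clip.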
24. Discussion
Old version
Video files were physically split
Split / merge step could introduce artifacts
Current version
File is inspected and a navigation file is created, allowing for easy frame-addressing
Audio ripped and transcoded in separate step
No artifacts
Fewer media transfers than in previous versions
Future version
Pre-fetching / replication of media to remote sites
27. Test results
Input media
Vob file
MPEG-2 video encoding
AC3 audio encoding
Size: 1.64 GB
Output media
Avi file
Xvid video encoding
MP3 audio encoding
Size: 700 MB
Currently no HD video input modules!
Not the most optimized video transcoders
Focus on measuring benefits of distributing
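As a back-of-the-envelope check on these numbers, the average bitrate implied by a target file size follows from size and duration alone. This helper is an illustration, not part of the test setup:

```python
def avg_bitrate_mbit(size_mib: float, duration_s: float) -> float:
    """Average bitrate (Mbit/s) implied by a target file size.

    size_mib is the file size in MiB; duration_s the clip length
    in seconds.
    """
    bits = size_mib * 1024 * 1024 * 8
    return bits / duration_s / 1e6

# A 700 MB output for the 51:53 (3113 s) source works out to
# roughly 1.89 Mbit/s of combined video + audio:
print(round(avg_bitrate_mbit(700, 51 * 60 + 53), 2))  # ≈ 1.89
```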
34. Video (up)scaling
Video scaling converts video signals from one size or resolution to another: usually "upscaling" or "upconverting" a video signal from a lower resolution (e.g. standard definition) to a higher one (e.g. high-definition television).
The full 00:00:00-00:51:53 clip is upscaled from 720x576 to 984x752.
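One rough way to reason about the cost of such an upscaling pass is the output-to-input pixel ratio per frame. This is an illustrative metric of my own, not one used in the deck:

```python
def upscale_cost_ratio(src: tuple, dst: tuple) -> float:
    """Ratio of output pixels to input pixels per frame.

    A crude proxy for the extra work (and output I/O) an
    upscaling pass adds compared to the source resolution.
    """
    (sw, sh), (dw, dh) = src, dst
    return (dw * dh) / (sw * sh)

# SD 720x576 upscaled to 984x752, as in the example above:
print(round(upscale_cost_ratio((720, 576), (984, 752)), 2))  # ≈ 1.78
```

So this particular upscale produces roughly 1.8x as many pixels per frame as the SD source.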
42. Simulation results
Simulations provide very accurate total job turnaround times
Real-life transcoding behaves erratically when GPFS is interconnected with the computational resource provider over a WAN link below 35 Mbit/s
Simulation results show what would happen to job turnaround time for lower WAN interconnections
[Diagram: a Click router carries control traffic and data between GPFS and the remote computational resources]
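A toy model makes the WAN dependence concrete: if input and output media must cross the WAN link, transfer time alone dominates at low bandwidths. This is a deliberately simplified sketch of my own, not the MediaGrid simulator's model, and the 600 s compute figure below is an assumed placeholder:

```python
def turnaround_s(in_mib: float, out_mib: float,
                 wan_mbit_s: float, compute_s: float) -> float:
    """Toy job turnaround model: WAN transfer of the input and
    output media, plus the compute time of the transcode itself.

    Ignores overlap of transfer and compute, protocol overhead,
    and GPFS behavior - illustrative only.
    """
    bits = (in_mib + out_mib) * 1024 * 1024 * 8
    transfer = bits / (wan_mbit_s * 1e6)
    return transfer + compute_s

# 1.64 GiB in, 700 MiB out, assuming 600 s of compute:
for bw in (100, 35, 10):
    print(bw, "Mbit/s ->", round(turnaround_s(1.64 * 1024, 700, bw, 600)), "s")
```

Even this crude model shows turnaround more than tripling between a 100 Mbit/s and a 10 Mbit/s WAN link, which is consistent with the deck's point that outsourcing is only viable with a sufficient interconnection.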
46. Conclusions
Grid technology is a viable technology for dealing with media production / distribution tasks
Inherent support for parallelism can seriously decrease the total processing time
Need for adaptation of media tasks
Grid overhead is no issue
Outsourcing task processing to remote resource providers
Viable when the interconnection is sufficient
Technical limitations (e.g. GPFS time-outs)
The MediaGrid simulator can provide accurate performance predictions
47. Questions ?
Feel free to e-mail: Bruno.Volckaert@intec.UGent.be