Implementing an intelligent storage policy
with media asset management
by Nicolas Hans, Product Strategy Director, Dalet (nhans@dalet.com)
A Dalet White Paper – June 2004
Abstract
Broadcasters have many options for storing their media assets and typically employ several types of
storage media throughout their facilities. Examples include, but are not limited to, hard disk drive
based systems, optical jukeboxes and tape libraries. Each of these storage solutions has its own cost,
accessibility, capacity, and portability characteristics. Therefore, managing them in a cost-effective
manner without negatively impacting productivity can be a challenge. Additionally, searching for
and retrieving media assets in a mixed media environment may be another significant issue. These
challenges can be addressed by implementing an intelligent storage policy through the use of a
Media Asset Management (MAM) system. By taking advantage of techniques such as rule-based
file migration or automatic format conversion, and by combining them with a unified search and
retrieval interface, a MAM system can guarantee ease-of-use for production teams while optimizing
storage infrastructure costs.
Introduction
Eliminating video tapes and moving to a file-based digital media production system potentially
provides broadcasters with improved synergy, shorter time-to-air, better productivity and lower
operational costs. The ability to share files across the network eliminates the need for tape
duplication and allows different departments to access the same recordings simultaneously. The
possibility of transferring video faster than real time improves the turnaround of material and
enables editors and journalists to meet tighter deadlines. The option to preview clips from any
production desktop drastically increases productivity and simplifies the re-use of production
archives.
Today, computer storage technologies are reaching price points that make it financially feasible to
abandon tapes in the production realm. An increasing number of Non-Linear Editing (NLE) systems
are networked. Video servers are becoming the norm for play-out operations; Electronic News
Gathering (ENG) teams are already experimenting with hard-disk-based recorders. As a result,
broadcasters store an increasing amount of video material on Hard Disk Drives (HDD) and on-line
tape libraries attached to computer servers. Yet manipulating video files across a broadcaster’s
digital infrastructure presents a number of issues that are not addressed by Hierarchical Storage
Management (HSM) systems and other storage virtualization architectures developed by players of
the Information Technology (IT) world.
In addition to the network-attached systems and storage area networks used in corporate
environments, broadcasters heavily rely on video servers. Although modern video servers are built
with standard IT components, they typically run real-time operating systems with built-in quality of
service (QOS) functionality to guarantee disk and network bandwidths. Transferring video files to
and from such ingest and play-out devices requires the use of proprietary protocols or Application
Programming Interfaces (API). As a result, integrating such devices with the rest of a broadcaster’s
IT infrastructure constitutes a challenge that can be solved by implementing an intelligent storage
management policy. Such a policy takes production workflow constraints into account to optimize
storage allocation and minimize associated costs. It leverages a Media Asset Management (MAM)
system to seamlessly handle files across a local or wide area network and automate the format
conversions that bandwidth constraints impose. Finally, it offers editors and program makers a
unified search and retrieval interface to achieve their day-to-day production tasks.
Analyze workflow needs to optimize storage infrastructure
IT storage systems are not all created equal. A wide range of solutions is available to broadcasters.
Storage capacity, bandwidth performance, redundancy and reliability define the technical
characteristics of a given storage system and determine its price. Broadcasters need to take
advantage of the variety of solutions available to minimize the cost of their infrastructure while
guaranteeing that the performance meets their operational requirements.
When considering the workflow of a broadcaster, distinct storage areas can be identified.
As illustrated by FIGURE 1, a multi-channel facility will typically require 100 hours' worth of video
to ensure continuous broadcast, 300 hours' worth of storage capacity for production and 5,000 hours
for production archives. Deep archives require very high storage capacities – typically beyond a
Petabyte (1000 Terabytes!) – to store tens of thousands of hours of material in broadcast quality.
FIGURE 1 – The storage requirements of a broadcast operation can be segmented (logarithmic scale):
Broadcast Buffer – 100 hours; Production Area – 300 hours; Production Archives – 5,000 hours;
Deep Archives – 100,000 hours.
Each of these different storage areas has distinct technical requirements. While a broadcast buffer
requires high availability and extreme reliability, deep archives need to be highly scalable to allow
for future growth. The storage system used for the production area must sustain high bandwidth
performance to support simultaneous access by multiple users; the one used for archives should
primarily offer large storage capacity with reasonable access times.
By distinguishing the technical requirements of each of these distinct areas, broadcasters can
optimize their storage costs and deploy various classes of storage systems (TABLE 1). These may
range from high-performance video servers, to Storage Area Networks (SAN) based on Fibre
Channel arrays, to Network Attached Storage (NAS) appliances built on Small Computer System
Interface (SCSI) drives, to jukeboxes filled with recordable digital versatile disks (DVD-R) or
robotic libraries that use Linear Tape-Open (LTO) cartridges.
Storage type                        Average seek time   Average bandwidth   Cost for 100 hours*   Cost for 10,000 hours*
HDD with Fibre Channel controller   4 ms                70 MBps             55,000 USD            4,000,000 USD
HDD with SCSI controller            6 to 8 ms           60 MBps             20,000 USD            1,400,000 USD
HDD with IDE controllers            10 ms               30 MBps             12,000 USD            700,000 USD
LTO tape library                    3 to 8 minutes      15 MBps             18,000 USD            225,000 USD

(*) On the basis of MPEG-2 4:2:2 i-frame encoding at 30 Mbps, i.e., 100 hours requires 1.5 TB.

TABLE 1 – Distinct storage types for distinct cost and performance levels (Q1/2004).
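The footnote's figures are easy to verify. The following Python sketch is not part of the original paper's tooling; the tier list simply restates TABLE 1. It converts hours of material at a given bit rate into raw capacity and derives an indicative cost per terabyte for each storage class:

```python
# A minimal sketch, not from the paper: check TABLE 1's footnote and
# derive an indicative cost per terabyte for each storage class.

def terabytes(hours, mbps=30.0):
    """Raw capacity for `hours` of material at `mbps` (decimal TB)."""
    return mbps * 3600 * hours / 8 / 1_000_000  # Mb -> MB -> TB

# Cost of 100 hours on each class, restated from TABLE 1 (Q1/2004).
COST_PER_100H = {
    "HDD / Fibre Channel": 55_000,
    "HDD / SCSI": 20_000,
    "HDD / IDE": 12_000,
    "LTO tape library": 18_000,
}

capacity = terabytes(100)  # ~1.35 TB for video alone; the footnote's
print(f"100 hours ~ {capacity:.2f} TB")  # 1.5 TB presumably adds audio and overhead
for tier, usd in COST_PER_100H.items():
    print(f"{tier:22s} ~{usd / capacity:,.0f} USD per TB")
```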
Merge heterogeneous storage environments
Successfully implementing such a local area data network does not boil down to interconnecting a
collection of different storage devices. A MAM system is required to ensure that file operations are
optimized and made as seamless as possible for production staff. Such intelligent storage
management is all the more challenging because different storage units often require the use of distinct
access protocols (FIGURE 2). These access protocols use IT standards such as File Transfer
Protocol (FTP), Network File System (NFS), Common Internet File System (CIFS) or NT File
System (NTFS). They also involve proprietary APIs or broadcast specific protocols such as Video
Disk Control Protocol (VDCP) initially developed by Louth or Network Device Control Protocol
(NDCP) introduced by Harris.
FIGURE 2 – Merging multiple storage units requires that different protocols be supported.
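One common way to tame this diversity is to wrap each storage unit behind a thin adapter exposing a single interface to the MAM, whatever protocol the unit speaks underneath. The sketch below is purely illustrative; the class and method names are invented for this discussion and do not describe an actual Dalet API:

```python
# Illustrative protocol-adapter layer over heterogeneous storage units.
# Class and method names are invented; real transfers would use FTP/CIFS
# client libraries or a video server's proprietary API.
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """The single interface the MAM sees for every storage unit."""

    @abstractmethod
    def store(self, source_path: str, asset_id: str) -> None: ...

    @abstractmethod
    def retrieve(self, asset_id: str, destination_path: str) -> None: ...

class FtpArchiveAdapter(StorageAdapter):
    def store(self, source_path, asset_id):
        print(f"FTP PUT {source_path} as {asset_id}")

    def retrieve(self, asset_id, destination_path):
        print(f"FTP GET {asset_id} to {destination_path}")

class VideoServerAdapter(StorageAdapter):
    """Play-out server reached via VDCP control plus a transfer API."""

    def store(self, source_path, asset_id):
        print(f"API import {source_path} as {asset_id}")

    def retrieve(self, asset_id, destination_path):
        print(f"API export {asset_id} to {destination_path}")

units = {"archive": FtpArchiveAdapter(), "playout": VideoServerAdapter()}
units["playout"].store("/media/pkg001.mxf", "PKG001")
```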
Protocols are not the only issue. Hierarchical storage management systems allow for the seamless
combination of HDD-based on-line storage with near-line tertiary media systems such as tape
libraries or optical disk jukeboxes. This type of storage system is well suited to both production
archives and deep archives because of its cost. However, handling broadcast material requires specific
features that are rarely provided by standard IT solutions.
Sports and news production units often need partial file retrieval. Consider the recording of a soccer
game from which a producer wants to extract a one-minute segment – the “killer goal” for example.
Suppose that the match was saved as two 45-minute files which are stored on a near-line tape
system. As illustrated by TABLE 2, retrieving the recording of the second period on-line will
typically take 13 to 14 minutes. Although this is a fraction of the time that would be required for
retrieving a tape from a traditional shelf-based archive, it can be dramatically improved. Using a
partial file retrieval system, these thirteen minutes and thirty seconds are cut down to five minutes.
Transfer speed of a tape drive
  Theoretical drive speed                          30 MBps
  Nominal drive speed                              15 MBps
Seek time of a tape library
  Time for moving tape to drive                    1 to 2 minutes
  Time for positioning head on tape                2 to 5 minutes
Time required for retrieving a 45-minute recording in MPEG-2 4:2:2 at 30 Mbps
  Transfer time (depends on drive speed)           6 to 11 minutes
  Total time for retrieving 45 minutes             9 to 18 minutes – average: 13’30”
Time required for retrieving a 1-minute recording in MPEG-2 4:2:2 at 30 Mbps
  Transfer time (depends on drive speed)           8 to 15 seconds
  Total time for retrieving 1 minute               3 to 7 minutes – average: 5’

TABLE 2 – Partial file retrieval is a worthwhile extension to HSM systems.
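The table's figures follow from a simple model: retrieval time is robot and head seek overhead plus streaming transfer time. The Python sketch below uses midpoint seek figures, so it lands near the table's averages rather than its extremes; it is an illustration, not a vendor formula:

```python
# A minimal sketch reproducing TABLE 2's tape retrieval times.
MOVE_TAPE_MIN = 1.5  # robot moves tape to drive: midpoint of 1-2 min
POSITION_MIN = 3.5   # head positioning on tape: midpoint of 2-5 min

def retrieval_minutes(clip_minutes, drive_mbps, video_mbps=30):
    """Seek overhead plus transfer time for a clip of given length."""
    clip_mb = video_mbps / 8 * 60 * clip_minutes  # clip size in MB
    return MOVE_TAPE_MIN + POSITION_MIN + clip_mb / drive_mbps / 60

for length in (45, 1):
    fast = retrieval_minutes(length, drive_mbps=30)  # theoretical speed
    slow = retrieval_minutes(length, drive_mbps=15)  # nominal speed
    print(f"{length:2d}-minute retrieval: {fast:.1f} to {slow:.1f} min")
# 45-minute retrieval: 10.6 to 16.2 min (table average: 13'30")
#  1-minute retrieval:  5.1 to  5.2 min (table average: 5')
```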
Simplifying the management of a distributed storage architecture is all the more necessary as an
increasing number of broadcasters run multi-site operations. They need to merge content
distribution networks into their standard production infrastructure. An example of such a distributed
model is the use of the Internet as a contribution network by ENG teams. Another is the deployment
of leased data lines to aggregate and consolidate, in a central-cast facility, the news packages and stories
produced by remote local offices. Such models require that video files be managed beyond the
Local Area Network (LAN), across Metropolitan (MAN) or even Wide Area Networks (WAN).
Move from video feeds to data files and streams
In a distributed environment, a MAM system needs to minimize or compensate for the delays
inherent in manipulating video files across a network. Despite the use of compression techniques,
broadcast quality video remains bandwidth hungry. Consider a recording in MPEG-2 4:2:2 i-frame
at 30 Mbps. Add 1.5 Mbps for a single stereo audio channel. Transfer the resulting 31.5 Mbps data
stream over an Ethernet network: bandwidth consumption nearly reaches 35 Mbps because of
Internet Protocol (IP) communication overheads. A Gigabit Ethernet network provides 600 Mbps of
useful bandwidth. As a result, transferring an MPEG-2 recording such as the one described above
will occur at a maximum speed of 17 times real time. In other words, copying or moving a one-
minute clip will take three and a half seconds and a one-hour package three and a half minutes!
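The arithmetic of the previous paragraph is made explicit in the short sketch below; the 10% figure for IP overhead is an assumption chosen to match the paragraph's "nearly 35 Mbps":

```python
# A minimal sketch of the transfer arithmetic above.
VIDEO_MBPS = 30.0         # MPEG-2 4:2:2 i-frame video
AUDIO_MBPS = 1.5          # one stereo audio channel
IP_OVERHEAD = 1.10        # assumed ~10% IP framing overhead
GIGE_USEFUL_MBPS = 600.0  # useful Gigabit Ethernet bandwidth

on_wire = (VIDEO_MBPS + AUDIO_MBPS) * IP_OVERHEAD  # ~34.7 Mbps
speedup = GIGE_USEFUL_MBPS / on_wire               # ~17x real time

print(f"on-wire rate: {on_wire:.1f} Mbps, speed-up: {speedup:.0f}x")
print(f"1-minute clip:  {60 / speedup:.1f} seconds")  # ~3.5 s
print(f"1-hour package: {60 / speedup:.1f} minutes")  # ~3.5 min
```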
Such figures are dramatically better than the real-time transfer speed achieved through a Serial
Digital Interface (SDI), or the four-times-real-time transfers that Serial Data Transport Interface (SDTI)
networks provide. Video data transfers nonetheless remain time consuming and as such may impact
the production workflow. So as to limit that impact, implementing an intelligent storage
management policy requires video to be managed not only as files but also as data streams. Features
such as edit-while-record, convert-while-record or broadcast-while-record need to be supported
across storage units. In addition, the implementation of a multi-resolution architecture whereby
broadcast quality video material is always available in a lower bit rate format for browsing and
editing is often required.
Although switched Ethernet and recent storage systems can support high-resolution browse and edit
operations over the network, many workflow scenarios call for low-resolution clones
(often referred to as ‘proxies’). Producers and editors need to browse material archived near-line in
the tape libraries or even off-line on shelved video tapes. Broadcasters that operate multi-site
networks need their editors to be able to browse material available in other stations or on remote video
servers. In such scenarios, the generation of low-resolution proxy files that correspond to broadcast
quality material is a necessity. This operation requires the choice of a suitable format as well as the
implementation of rule-based conversion mechanisms to ensure the proper synchronization of both
low and high resolution content.
The selection of a low-resolution format is primarily conditioned by the need to provide editing
functionality. Whereas browse-only operations do not require frame accuracy, editing and voice-
over recordings do. Although MPEG-1 was a format of choice until recently, MPEG-4 and
Windows Media provide better image quality at relatively lower bit rates. Beyond the choice of a
proper format, successfully implementing a proxy architecture primarily relies on the ability to
ensure the consistent synchronization of low and high resolution versions. Such a process can only
be ensured by properly tracking all media assets across every step of the workflow – from ingest to
broadcast – and by triggering conversions according to predefined rules (FIGURE 3).
FIGURE 3 – A multi-resolution architecture is driven by stringent rules.
Such rules may also be used to ensure format or resolution conversions that are not related to proxy
generation. For example, production material typically needs to be converted to a different type of
encoding for remote contribution or broadcast purposes. A package in DV will be compressed to
MPEG-2 long GOP so as to be transferred over a WAN or broadcast by a video server. As the
Material eXchange Format (MXF) standard comes of age, automated wrapper conversions will also
be required.
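Such predefined rules can be pictured as simple (trigger, condition, action) records that the MAM evaluates whenever an asset changes state. The sketch below is a minimal illustration under that assumption; the rule set, format names and handlers are invented for this paper:

```python
# A minimal sketch of rule-based conversions; rules and handlers are
# illustrative, not an actual product configuration.

def generate_proxy(asset):
    print(f"generate low-resolution MPEG-4 proxy for {asset['id']}")

def transcode_long_gop(asset):
    print(f"transcode {asset['id']} from DV to MPEG-2 long GOP")

RULES = [
    # (trigger event, condition on the asset, action to run)
    ("ingested", lambda a: True, generate_proxy),
    ("queued_for_wan", lambda a: a["format"] == "DV", transcode_long_gop),
    ("queued_for_playout", lambda a: a["format"] == "DV", transcode_long_gop),
]

def on_event(event, asset):
    """Fire every rule whose trigger and condition match this asset."""
    for trigger, condition, action in RULES:
        if trigger == event and condition(asset):
            action(asset)

on_event("ingested", {"id": "PKG001", "format": "DV"})
on_event("queued_for_wan", {"id": "PKG001", "format": "DV"})
```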
Provide a unified search and retrieval interface
The key to successfully implementing an intelligent storage policy in a broadcast environment is to
provide editors and program staff with a unified view of available content. The underlying
complexity of the network infrastructure as well as the various file and conversion operations must
be hidden from users. Such a unified user interface provides the search and retrieval features
required by production teams. It also needs to extend beyond pure media handling to provide flexible
media and metadata management and simplified user rights management. Such a media warehouse
can empower the whole production workflow.
A unified search and retrieval interface should not reflect the structure of the network nor refer to
files on specific storage units. It must offer users a layer of abstraction that provides them with a
relevant view of available assets, i.e., recordings, clips, Edit Decision Lists (EDL) and associated
metadata. As such, it requires that technical, descriptive and legal metadata be customizable. In
addition, a flexible category structure can be used to drive assets across the workflow. For example,
by dragging and dropping a specific clip from one category to another, a producer seamlessly triggers
the conversion and transfer of the corresponding file from one storage unit to another located on the
same LAN or WAN. Far from being a passive catalog, such a unified user interface provides a front-
office view of the various back-office processes required to merge heterogeneous storage
environments.
A media warehouse also needs to simplify user rights management. To enable collaboration,
different roles and associated resources must be defined. Whereas all users may have access to the
low-resolution version of available material, broadcast quality video should only be accessed by
authorized users. The same logic applies to metadata. Associated information regarding specific
assets is contextual. Whereas journalists focus on descriptive information (the “who, what, where,
when, why” mantra), editors are concerned with the technical characteristics of a recording and
archivists with copyright information. As such, the view that users have of an asset needs to depend
on their profile. In addition, the ability to allocate quotas and resources to users becomes all the more
necessary as tapes are replaced by files and video information gains in fluidity. Storage capacity
needs tend to inflate if no control process is implemented.
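Such profile-dependent views can be reduced to a per-role filter over an asset's metadata and essence versions, as in the following sketch; the role names, fields and paths are invented for illustration:

```python
# A minimal sketch of profile-dependent asset views; all names invented.
ASSET = {
    "title": "Evening news opener",
    "description": "who, what, where, when, why...",
    "codec": "MPEG-2 4:2:2 at 30 Mbps",
    "rights": "news use only, expires 2005-06-30",
    "proxy": "/proxy/pkg001.mp4",
    "hires": "/hires/pkg001.mxf",
}

# Metadata fields each profile sees, plus broadcast-quality clearance.
PROFILES = {
    "journalist": (["title", "description"], False),
    "editor": (["title", "codec"], True),
    "archivist": (["title", "rights"], False),
}

def view(asset, role):
    """Return only the metadata and essence this role is entitled to."""
    fields, hires_allowed = PROFILES[role]
    result = {name: asset[name] for name in fields}
    result["essence"] = asset["hires"] if hires_allowed else asset["proxy"]
    return result

print(view(ASSET, "journalist"))
print(view(ASSET, "editor"))
```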
FIGURE 4 – Workflow drives media migrations across the storage network.
To constitute the backbone of the production process, a media warehouse also needs to optimize the
relationship between assets and production staff. It must embed workflow engine features such as
task assignment, status hierarchy and corresponding notification processes. For example, assigning
the creation of a package notifies the corresponding producer. Changing the status of an EDL from
‘To be validated’ to ‘Approved’ triggers the rendering of the original material and the creation of a
new clip. Inserting the corresponding asset in a rundown moves the resulting clip from the
production area to the broadcast buffer (FIGURE 4).
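These examples amount to a small state machine in which status transitions carry side effects. A minimal sketch follows, using the status names from the text; the transition table itself is invented for illustration:

```python
# A minimal sketch of workflow-driven actions; the transition table is
# illustrative, using the status names from the example above.

def notify_producer(asset):
    print(f"notify producer: {asset} assigned")

def render_edl(asset):
    print(f"render {asset} and create a new clip")

def move_to_broadcast_buffer(asset):
    print(f"migrate {asset} from production area to broadcast buffer")

# (old status, new status) -> side effect to trigger
TRANSITIONS = {
    (None, "Assigned"): notify_producer,
    ("To be validated", "Approved"): render_edl,
    ("Approved", "In rundown"): move_to_broadcast_buffer,
}

def set_status(asset, old_status, new_status):
    """Apply a status change and trigger its side effect, if any."""
    action = TRANSITIONS.get((old_status, new_status))
    if action:
        action(asset)

set_status("EDL-042", "To be validated", "Approved")
set_status("EDL-042", "Approved", "In rundown")
```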
Conclusion
By analyzing their workflow, broadcasters can segment their storage requirements and gain
flexibility. By moving away from the proprietary central video server model, which was the rule in
the SDI world, they take advantage of the distinct classes of storage systems available today. This
minimizes the cost of their digital production and archive infrastructure. The use of a flexible media
asset management platform empowers them to implement an intelligent storage policy whereby
heterogeneous systems are merged into a unified storage network both locally and across multiple
sites. Such a network provides the infrastructure needed to manipulate the large files and bandwidth
intensive data streams that broadcast quality video requires. By providing a unified search and
retrieval interface to their production teams, they can hide the various media allocation, migration,
conversion and security rules that are required in a distributed broadcast environment. More
importantly, digital broadcasters build the framework they require to manage media and associated
metadata. As such, an intelligent storage policy seeks to reap the productivity gains made
possible by the elimination of tapes and provides broadcasters with the digital backbone they need
to move to an asset production model that covers the whole workflow – from ingest to broadcast,
from archive to distribution.
Acknowledgements
The author wishes to thank his colleagues Benjamin Desbois, Janice Dolan, Stéphane Guez and
Thomas Zugmeyer for their help and support as well as Michael Elhadad for his careful reading of
this paper and his many suggestions.
Document history
This paper was initially presented at Broadcast Asia 2004. Since then, it has been presented at the
ABE conference in Sydney.