Michelle Munson, CEO & Co-Founder of Aspera, is joined by Jay Migliaccio, Director of Cloud Technologies at Aspera, and Stephane Houet, Product Manager at EVS Broadcast Equipment, for the following session: MED305 - Achieving Consistently High Throughput for Very Large Data Transfers with Amazon S3; Media Production & Distribution Track, Wednesday, Nov 12, 3:30 PM - 4:15 PM, Level 4 - Delfino 4102
(MED305) Achieving Consistently High Throughput for Very Large Data Transfers... - Amazon Web Services
A difficult problem for users of Amazon S3 who deal in large-form data is how to consistently transfer ultralarge files and large sets of files at fast speeds over the WAN. Although a number of tools are available for network transfer with S3 that exploit its multipart APIs, most have practical limitations when transferring very large files or large sets of very small files to and from remote regions. Transfers can be slow, degrade unpredictably, and, for the largest sizes, fail altogether. Additional complications include resume, encryption at rest, encryption in transit, and efficient updates for synchronization.
Aspera has expertise and experience in tackling these problems and has created a suite of transport, synchronization, monitoring, and collaboration software that can transfer and store both ultralarge files (up to the 5 TB limit of an S3 object) and large numbers of very small files (millions of files under 100 KB) consistently fast, regardless of region.
In this session, technical leaders from Aspera explain how to achieve very large file WAN transfers and integrate them into mission-critical workflows across multiple industries. EVS, a media service provider to the 2014 FIFA World Cup Brazil, explains how they used Aspera solutions for the delivery of high-speed, live video transport, moving real-time video data from sports matches in Brazil to Europe for AWS-based transcoding, live streaming, and file delivery. Sponsored by Aspera.
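As background for the multipart APIs the abstract mentions, here is a minimal sketch of how an uploader might split a very large object into part ranges within S3's documented limits (at most 10,000 parts, 5 MiB minimum part size except the last part). This is purely illustrative and is not Aspera's transport, which replaces TCP-based transfer entirely.

```python
# Sketch: computing part boundaries for an S3 multipart upload.
MIN_PART = 5 * 1024 * 1024    # S3's documented 5 MiB minimum part size
MAX_PARTS = 10_000            # S3's documented part-count limit

def part_ranges(object_size, part_size=100 * 1024 * 1024):
    """Return (part_number, start, end) byte ranges for a multipart upload."""
    if part_size < MIN_PART:
        raise ValueError("part size below S3's 5 MiB minimum")
    # Grow the part size if needed so the part count stays under the limit.
    part_size = max(part_size, -(-object_size // MAX_PARTS))
    ranges = []
    start, part_number = 0, 1
    while start < object_size:
        end = min(start + part_size, object_size)
        ranges.append((part_number, start, end))
        start = end
        part_number += 1
    return ranges

# A 5 TB object with 1 GiB parts fits comfortably under the 10,000-part limit.
parts = part_ranges(5 * 10**12, part_size=1024**3)
```

Because each part is addressed independently, a failed transfer can resume by re-sending only the parts whose ranges were never acknowledged.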
Netflix Edge Engineering Open House Presentations - June 9, 2016 - Daniel Jacobson
Netflix's Edge Engineering team is responsible for handling all device traffic to support the user experience, including sign-up, discovery, and the triggering of the playback experience. Developing and maintaining this set of massive-scale services is no small task, and their success is the difference between millions of happy streamers and millions of missed opportunities.
This video captures the presentations delivered at the first ever Edge Engineering Open House at Netflix. This video covers the primary aspects of our charter, including the evolution of our API and Playback services as well as building a robust developer experience for the internal consumers of our APIs.
(GAM405) Create Streaming Game Experiences with Amazon AppStream | AWS re:Inv... - Amazon Web Services
What if you could deliver a console-quality gaming experience to mobile devices anywhere in the world? In this session, learn about Amazon AppStream and how it enables real-time app streaming as a service via a few SDK calls. Hear how CCP has designed a new initial experience for their massively multiplayer game, EVE Online, that streams their character creator from the cloud while the game downloads in the background, increasing conversions. We look at how Amazon Game Studios is developing hybrid games that run half on the tablet, half in the cloud, enabling console-quality graphics on mobile devices.
Netflix Keystone streaming data pipeline @scale in the cloud - dbtb-2016 - Monal Daxini
Keystone processes over 700 billion events per day (1 petabyte) with at-least-once processing semantics in the cloud. We will explore in detail how we leverage Kafka, Samza, Docker, and Linux at scale to implement a multi-tenant pipeline in the AWS cloud within a year. We will also share our plans on offering Stream Processing as a Service for all of Netflix.
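"At-least-once" semantics, which the abstract calls out, boils down to committing a consumer offset only after the event has been processed: a crash between the two steps replays the event rather than losing it. A toy in-memory sketch (a hypothetical log and consumer, not Keystone's actual Kafka/Samza code) illustrates the ordering:

```python
# Sketch: at-least-once = process first, commit the offset second.
def consume(log, start_offset, process, crash_before_commit_at=None):
    """Process events from start_offset; return the committed offset
    (where a restart would resume)."""
    committed = start_offset
    for offset in range(start_offset, len(log)):
        process(log[offset])                    # side effect happens first...
        if offset == crash_before_commit_at:
            return committed                    # simulated crash before the commit
        committed = offset + 1                  # ...then the offset is committed
    return committed

seen = []
log = ["e0", "e1", "e2", "e3"]
# First run crashes after processing e1 but before committing its offset.
resume_at = consume(log, 0, seen.append, crash_before_commit_at=1)
# Restart replays e1: a duplicate is possible, but no event is ever lost.
consume(log, resume_at, seen.append)
```

Downstream consumers must therefore tolerate duplicates (for example by keying on an event ID), which is the usual trade-off against the cost of exactly-once delivery.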
Serverless architectures are promising and will play an important role in the coming years but the ecosystem around serverless is still pretty young. We have been operating Lambda based applications for about a year and faced several challenges. In this presentation we share these challenges and propose some solutions to work around them.
NEW LAUNCH: IPv6 in the Cloud: Protocol and AWS Service Overview - Amazon Web Services
Recently, AWS announced support for Internet Protocol version 6 (IPv6) for several AWS services, providing significant capabilities for applications and systems that need IPv6. This session provides an overview of IPv6 and covers key aspects of AWS support for the protocol. We discuss Amazon S3 and S3 Transfer Acceleration, Amazon CloudFront and AWS WAF, Amazon Route 53, AWS IoT, Elastic Load Balancing, and the virtual private cloud (VPC) environment of Amazon EC2. The presentation assumes solid knowledge of IPv4 and those AWS services.
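For readers wanting a quick refresher before a protocol overview like this, Python's stdlib `ipaddress` module can illustrate the basics of IPv6 addressing; the /56-per-VPC and /64-per-subnet sizes below reflect AWS's Amazon-provided IPv6 CIDR blocks, but the snippet itself is not AWS-specific code.

```python
# Sketch: IPv6 addressing basics with the stdlib ipaddress module.
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::1")      # 2001:db8::/32 is the doc prefix
net = ipaddress.IPv6Network("2001:db8::/56")     # a VPC-sized allocation

# Addresses are 128 bits; the compressed "::" form expands to eight 16-bit groups.
expanded = addr.exploded    # '2001:0db8:0000:0000:0000:0000:0000:0001'

# A /56 leaves 8 bits of subnetting before the standard /64 subnet size,
# i.e. room for 256 subnets.
subnets = list(net.subnets(new_prefix=64))
```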
Netflix Keystone - How Netflix Handles Data Streams up to 11M Events/Sec - Peter Bakas
Talk on Netflix Keystone by Peter Bakas at SF Data Engineering Meetup on 2/23/2016.
Topics covered:
- Architectural design and principles for Keystone
- Technologies that Keystone is leveraging
- Best practices
http://www.meetup.com/SF-Data-Engineering/events/228293610/
Walmart & IBM Revisit the Linear Road Benchmark - Roger Rea, IBM - Redis Labs
The Linear Road benchmark was devised in 2004 to compare stream data management systems. Walmart selected Linear Road to compare the performance of streaming analytics offerings. IBM implemented the benchmark application using Redis to maintain state and IBM Streams to handle the incoming events and queries. Walmart had to completely revamp the data drivers and test verification to take advantage of the multicore, multithreaded servers available today. Tests were run on the Microsoft Azure cloud to ensure a fair comparison of vendors. Redis and IBM Streams handled nearly 1 billion events in a 3-hour test on a single 16-core Azure node, and 3.8 billion when scaled out to 4 nodes. Come learn about the application and the near-linear scalability of Redis and IBM Streams.
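A back-of-the-envelope check makes the quoted figures concrete: roughly 1 billion events in 3 hours on one node, 3.8 billion on four, which is where the "near linear scalability" claim comes from.

```python
# Sketch: turning the benchmark totals into per-second rates.
RUN_SECONDS = 3 * 3600                              # the 3-hour test window

single_node_rate = 1_000_000_000 / RUN_SECONDS      # ~92.6k events/sec on one node
four_node_rate = 3_800_000_000 / RUN_SECONDS        # ~352k events/sec on four nodes

# Perfect linear scaling would give 4x the single-node rate; 3.8x is 95%.
scaling_efficiency = four_node_rate / (4 * single_node_rate)
```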
Netflix viewing data architecture evolution - EBJUG Nov 2014 - Philip Fisher-Ogden
Netflix's architecture for viewing data has evolved as streaming usage has grown. Each generation was designed for the next order of magnitude, and was informed by learnings from the previous. From SQL to NoSQL, from data center to cloud, from proprietary to open source, look inside to learn how this system has evolved. (slides from a talk given at the East Bay Java Users Group MeetUp in Nov 2014)
In this 10-minute keynote for the AWS re:Invent 2016 recap in Taiwan, we cover three topics (or issues) we encountered. What we took away and learned from the event is quite interesting...
Engineering Leader opportunity @ Netflix - Playback Data Systems - Philip Fisher-Ogden
Across the globe, 75M Netflix members love watching 125M hours per day of TV shows and movies. They love the ease of starting on one device and resuming on another, and the Playback Data Systems team makes that happen. We’re looking for a senior engineering manager to lead this high-impact team at Netflix.
Attributions for images:
https://www.flickr.com/photos/theholyllama/5738164504/ and https://www.flickr.com/photos/brewbooks/7780990192/, no changes made, https://creativecommons.org/licenses/by-sa/2.0/
https://www.flickr.com/photos/crschmidt/2956721498/, no changes made, https://creativecommons.org/licenses/by/2.0/
Netflix Development Patterns for Scale, Performance & Availability (DMG206) - Amazon Web Services
This session explains how Netflix is using the capabilities of AWS to balance the rate of change against the risk of introducing a fault. Netflix uses a modular architecture with fault isolation and fallback logic for dependencies to maximize availability. This approach allows for rapid independent evolution of individual components to maximize the pace of innovation and A/B testing, and offers nearly unlimited scalability as the business grows. Learn how we balance managing change to (or subtraction from) the customer experience, while aggressively scraping barnacle features that add complexity for little value.
Keystone processes over 1 trillion events per day with at-least-once processing semantics in the cloud. We will explore in detail how we have modified and leveraged Kafka, Samza, Docker, and Linux at scale to implement a multi-tenant pipeline in the AWS cloud within a year.
We, as KKStream / KKTV / KKBOX, just kicked off the first sharing session inside our organization, introducing the event, the new services, and some of our insights and opinions. Fingers crossed for the deeper sessions to follow.
Multi-master, multi-region MySQL deployment in Amazon AWS - Continuent
MySQL data rules the cloud, but recent experience shows us that there's no substitute for maintaining copies of data, across availability zones and regions, when it comes to Amazon Web Services (AWS) data resilience.
In this webinar, we discuss the multi-master capabilities of Continuent Tungsten to help you build and manage systems that spread data across multiple sites. We cover important topics such as setting up large scale topologies, handling failures, and how to handle data privacy issues like removing personally identifiable information or handling privacy law restrictions on data movement. We will conclude with a live demonstration of a distributed MySQL solution with Continuent Tungsten clusters working across multiple AWS availability zones and regions.
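One classic building block for multi-master setups like those discussed above is interleaved key allocation: each site generates auto-increment keys from a distinct offset so concurrent writes never collide. The sketch below simulates MySQL's `auto_increment_increment` / `auto_increment_offset` pattern in plain Python for illustration; it is not Continuent Tungsten's replicator.

```python
# Sketch: collision-free primary keys across N master sites.
from itertools import islice

def key_stream(site_offset, n_sites):
    """Yield keys for one site: offset, offset+n, offset+2n, ...
    (the auto_increment_offset / auto_increment_increment pattern)."""
    key = site_offset
    while True:
        yield key
        key += n_sites

# Three sites, each with its own offset into the shared key space.
us_keys = list(islice(key_stream(1, 3), 5))   # site 1 of 3
eu_keys = list(islice(key_stream(2, 3), 5))   # site 2 of 3
ap_keys = list(islice(key_stream(3, 3), 5))   # site 3 of 3
```

Because the three arithmetic progressions are disjoint, rows inserted concurrently on different masters can later be merged without primary-key conflicts.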
In this talk we explore some of the tools we built at Hailo to monitor our microservices platform. By using a combination of instrumentation, in-depth service monitoring, request tracing, event correlation and automation frameworks we manage to present a holistic view of our infrastructure.
AWS 2013 LA Media Event: Scalable Media Processing - David Sayed
Introduction to media processing at scale in the cloud with visual effects and broadcast playout examples. Accompanying video: http://www.youtube.com/watch?v=HnGCtnmvogY&list=PL712EF2B0256960A3&index=2
Aspera - Bridging On-Premise and Cloud Deployments for Broadcast IT - François Quereuil
Today’s complex media workflows increasingly involve end-to-end, customized and automated operations, often with a combination of both cloud and on-premise storage. They require the integration of multiple geographically dispersed teams and processes, and the ability to publish to multiple screens and platforms. This presentation explores how infrastructure-agnostic technologies can enable these new workflows by allowing large digital content to be ingested, accessed and distributed at maximum speed and with full security - regardless of storage type or network distance.
This talk is about the relationships and problems between digital identities and the data silos where we are storing all of our digital lives. I will go into deeper detail about the Pandora Security model (a new model for dealing with such problems), the need for a truly decentralised and persistent web, and how to build a new digital tradition. (See for reference my latest blog posts "What Pirandello, Rifkin and Pandora have to do with digital identity" and "Are we building a new kind of tradition?" on http://blog.zigolab.it)
The ELK stack is becoming the reference log-analysis tool in the open source world, thanks to its ease of use and the analysis capabilities it provides. With a bit of work you can offer your company a solution that delivers valuable information, both aggregated and detailed. Your bosses will go crazy for the visibility you can provide, and you will quickly become a star of the company.
Presentation of the VidCheck quality-control software. Request a demo from VIDELIO Cap'Ciné, reseller of Telestream solutions: http://www.videlio-capcine.com/
The next-generation enterprise-class architecture - Massimo Brignoli - Data Driven Innovation
The birth of data lakes - Companies are now drowning in data, and the classic data warehouse struggles to churn through it, given its volume and variety. Many have started looking at architectures called data lakes, with Hadoop as the reference technology. But is this solution right for everything? Come learn how to operationalize data lakes to build modern data-management architectures.
Automation in Post-Production — Boris Polyak for NATEXPO 2016Boris Polyak
Slides for my talk about automation in post-production of entertainment television, the thought behind the steps, and examples from our Telestream Vantage experience.
Machine Learning Real Life Applications By Examples - Mario Cartia - Data Driven Innovation
The talk illustrates three real-world machine learning use cases from the major web platforms (Google, Facebook, Amazon, Twitter, PayPal), used to implement particular features. For each example, the algorithm used is explained, showing how to build the same functionality with Apache Spark MLlib and the Scala language.
Introduction to types of cloud storage, with an overview and comparison of the SoftLayer storage services. Topics covered include Block and File offerings ("Codename: Prime"), Consistent Performance, Mass Storage Servers (QuantaStor), Backup (EVault, R1Soft), Object Storage (OpenStack Swift), CDN, Data Transfer Service, and Aspera.
Polyglot Persistence and Big Data: between innovation and difficulty in real-world cases - Data Driven Innovation
Today the question is no longer yes or no to NoSQL systems. The problem lies in the ability to be "polyglot" in the use of data- and information-management technologies. Big data innovation strategies in companies cannot ignore polyglot persistence, but the difficulties are many, especially in complex, enterprise environments. And the need to innovate is not strong only in startups; quite the opposite...
The Top Skills That Can Get You Hired in 2017LinkedIn
We analyzed all the recruiting activity on LinkedIn this year and identified the Top Skills employers seek. Starting Oct 24, learn these skills and much more for free during the Week of Learning.
#AlwaysBeLearning https://learning.linkedin.com/week-of-learning
Agenda and objectives: 1) Show you how to spot an Aspera opportunity. 2) Outline the Aspera portfolio (sales overview, not technical). 3) Look at the Aspera opportunity from SharePoint. 4) Summary / Q&A / close. Interaction is welcomed throughout.
Shoot the Bird: Linear Broadcast Distribution on AWS by Usman Shakeel of Amaz... - ETCenter
Can traditional live linear content distribution models be effectively evolved from existing satellite communication networks to pure IP-based cloud-centric transit? In this session we will take a look at requirements that must be met to facilitate wide-scale distribution of content at low latency with high levels of availability, durability, reliability and throughput. We’ll look at best practices for high availability and resilience, take a deep dive into topics such as effective erasure correction and deterministic network topologies, factor in advantages around lower cost for compute and bandwidth when utilizing cloud-based infrastructure, and arrive at a reference architecture that can be used to drive B2B content distribution through the cloud at scale.
Deploy ultra low latency at a massive scale with sub-three-second end-to-end latency for audiences as big as you can assemble. Shorten the first and last mile with distribution of datacenters, POPs and nodes across the globe.
Leverage innovative technologies to dramatically reduce time-to-first-frame and provide consistent, low-latency user experience across devices and apps.
Provide intelligent load-balancing and scaling to immediately provide the streaming resources needed to deliver reliable, consistent, ultra low latency viewing experiences to audiences of any size, everywhere.
Enable unprecedented visibility, insight and control throughout the entire streaming workflow, from ingest to playback—allowing you to anticipate, tune and optimize your workflow.
Azure Event Hubs - Behind the Scenes With Kasun Indrasiri | Current 2022 - Hosted by Confluent
Azure Event Hubs is a hyperscale PaaS event stream broker with protocol support for HTTP, AMQP, and Apache Kafka that accepts and forwards several trillion (!) events per day and is available in all global Azure regions. This session is a look behind the curtain, where we dive deep into the architecture of Event Hubs and look at the Event Hubs cluster model, resource isolation, and storage strategies, and also review some performance figures.
In search of the perfect IoT Stack - Scalable IoT Architectures with MQTT - Dominik Obermaier
Web-scale Internet of Things applications have one thing in common: They produce and process massive amounts of data. But how to design the next-generation IoT backend that is able to meet the business requirements and doesn’t explode as soon as the traffic increases? This presentation will cover how to use MQTT to connect millions of devices with commodity servers and process huge amounts of data. Learn all the common design patterns and see the technologies that actually scale. Explore when to use Cassandra, Kafka, Spark, Docker, and other tools and when to stick with your good ol’ SQL database or Enterprise Message Queue.
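A key reason MQTT scales to millions of devices is its hierarchical topic space with wildcard subscriptions: the backend subscribes once with `+` (one level) or `#` (remaining levels) instead of once per device. The naming convention below is a hypothetical example for illustration, not taken from the talk.

```python
# Sketch: an MQTT topic scheme for a large device fleet.
def telemetry_topic(tenant, device_id, sensor):
    """One topic per tenant/device/sensor, e.g. 'acme/dev42/temperature'."""
    return f"{tenant}/{device_id}/{sensor}"

def backend_subscription(tenant):
    """'+' matches exactly one topic level: all sensors of all devices."""
    return f"{tenant}/+/+"

def matches(sub, topic):
    """Minimal matcher for MQTT's '+' (single level) and '#' (tail) wildcards."""
    sub_parts, topic_parts = sub.split("/"), topic.split("/")
    for i, s in enumerate(sub_parts):
        if s == "#":
            return True
        if i >= len(topic_parts) or (s != "+" and s != topic_parts[i]):
            return False
    return len(sub_parts) == len(topic_parts)

t = telemetry_topic("acme", "dev42", "temperature")
```

In a real deployment the broker performs this matching; scoping subscriptions per tenant as shown also keeps one tenant's consumers from ever seeing another tenant's traffic.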
AWS re:Invent 2016: Accelerating the Transition to Broadcast and OTT Infrastr... - Amazon Web Services
In this session, we show how to seamlessly transition VOD, live, and other advanced media workflows from on-premises deployments to the cloud. Cinépolis will provide an overview of their transcoding solution on AWS and how they have seamlessly expanded the solution increasing their customer reach. We'll show real world examples of the API calls used to configure and control all elements of the workflow including compression and origination. And how standard AWS services can be media-optimized with Elemental Technologies to form a robust live solution.
Lightweight and scalable IoT Architectures with MQTT - Dominik Obermaier
Ambitious Internet of Things applications have one thing in common: They produce massive amounts of data. But how to design the next-generation IoT backend that is able to meet the business requirements and doesn’t explode as soon as the traffic increases? This talk will cover how to use MQTT to connect millions of devices with commodity servers and process huge amounts of data. Learn all the common design patterns and see the technologies that actually scale. Explore when to use Cassandra, Kafka, Spark, Docker, and other tools and when to stick with your good ol’ SQL database or Enterprise Message Queue.
A terabit is not so scary / Vyacheslav Olkhovchenkov (Integros) - Ontico
HighLoad++ 2017
Delhi + Calcutta hall, November 8, 10:00
Abstract:
http://www.highload.ru/2017/abstracts/2933.html
We share our experience scaling a high-load video project to a capacity of 1 terabit, and how reality measures up against expectations.
Main topics:
- How to build a server architecture capable of consistently delivering up to 75 Gbit/s from a single node (hardware and software configuration, preferred load profile).
...
Maximize Application Performance and Bandwidth Efficiency with WAN Optimization - Cisco Enterprise Networks
Learn how a two-step strategy that reduces application bandwidth consumption and makes more efficient use of your remaining bandwidth can help you achieve seemingly conflicting business and IT goals.
Register to watch webcast: http://cs.co/9006CAY0.
Kafka for Real-Time Replication between Edge and Hybrid Cloud - Kai Wähner
Not all workloads allow cloud computing. Low latency, cybersecurity, and cost-efficiency require a suitable combination of edge computing and cloud integration.
This session explores architectures and design patterns for software and hardware considerations to deploy hybrid data streaming with Apache Kafka anywhere. A live demo shows data synchronization from the edge to the public cloud across continents with Kafka on Hivecell and Confluent Cloud.
Katpro Technology, an IT solutions company, announced it has been selected by Microsoft as a Windows Azure Circle Partner. The partnership will provide Katpro with the ability to serve customers' needs in the area of cloud, with training and support material provided by Microsoft.
Scale Your Load Balancer from 0 to 1 million TPS on Azure - Avi Networks
For years, enterprises have relied on appliance-based (hardware or virtual) load balancers. Unfortunately, these legacy ADCs are inflexible at scale, costly due to overprovisioning for peak traffic, and slow to respond to changes or security incidents.
These problems are amplified as applications migrate to the cloud. In contrast, the Avi Vantage Platform not only elastically scales up and down based on real-time traffic patterns, but also offers ludicrous scale at a fraction of the cost.
Watch this webinar to see how Avi can scale up and down quickly on the Microsoft Azure Cloud.
- Configure load balancing on Azure to scale up from 0 to 1 million transactions per second (TPS) and down in under 10 minutes
- Learn why hardware or virtual appliances are not an option for modern load balancing in public clouds
- Understand how Avi’s elastic scale dramatically lowers TCO and enhances security, including protection against DDoS attacks
Watch the full webinar: https://info.avinetworks.com/webinars-ludicrous-scale-on-azure
[AWS LA Media & Entertainment Event 2015]: Shoot the Bird: Linear Broadcast o... - Amazon Web Services
This session looks at the technical feasibility of using terrestrial delivery via the AWS cloud as an alternative to satellite transport. The key questions addressed include: can it be implemented without affecting the underlying media layers, and can it be architected for low cost at scale? The presentation includes sample architectures and design considerations for achieving content distribution within latency parameters comparable to satellite.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I asked myself, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure-operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I give an overview of infrastructure requirements and technologies that can benefit or limit your AI use cases in an enterprise environment, and an interactive demo shows which approaches I have already gotten working in practice.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery (CI/CD) process includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
Software teams must secure their delivery process to avoid vulnerabilities and security breaches, and this needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk presents strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
AWS re:Invent - Med305 Achieving consistently high throughput for very large data transfers with amazon s3 (aspera)
1. Michelle Munson - Co-Founder & CEO, Aspera
Jay Migliaccio - Director of Cloud Technologies, Aspera
Stéphane Houet – Product Manager, EVS Broadcast Equipment
2. PRESENTERS
Michelle Munson
Co-founder and CEO Aspera
michelle@asperasoft.com
Jay Migliaccio
Director Cloud Services, Aspera
jay@asperasoft.com
Stephane Houet
Product Manager, EVS
s.houet@evs.com
AGENDA
• Quick Intro to Aspera
• Technology Challenges
• Aspera Direct to Cloud Solution
• Demos
• FIFA Live Streaming Use Case
• Q & A
8. HTTP object storage: a master database maps each object ID to its data nodes; the data nodes store the file data replicas and metadata.
Key → Value
H(URL) → R1, R2, R3
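The master-table mapping above (a key H(URL) pointing at replica nodes R1, R2, R3) can be illustrated with a minimal sketch. The node names, replica count, and choice of SHA-256 below are illustrative assumptions, not the actual system design:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]  # hypothetical cluster
REPLICAS = 3

def h(url: str) -> str:
    """The master database key: a stable hash of the object's URL."""
    return hashlib.sha256(url.encode()).hexdigest()

def place_replicas(url: str, nodes=NODES, replicas=REPLICAS):
    """Derive `replicas` distinct data nodes from the URL hash, so the
    master table can record H(URL) -> [R1, R2, R3]."""
    start = int(h(url), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# One master-database row per object:
master_db = {h(u): place_replicas(u) for u in ["s3://bucket/a.mp4"]}
```

Deterministic placement means any front end can recompute the replica set from the URL alone, without consulting another service.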
15. TRANSFER DATA TO CLOUD OVER WAN: EFFECTIVE THROUGHPUT
Multi-part HTTP (15 parallel HTTP streams), under typical internet conditions (50–250 ms latency, 0.1–3% packet loss): <10 to 100 Mbps, depending on distance.
Aspera FASP transfer over WAN to cloud: up to 1 Gbps*, i.e. 10 TB transferred per 24 hours.
* Per EC2 Extra Large instance, independent of distance.
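A quick sanity check on the slide's arithmetic: a sustained 1 Gbps link does indeed move roughly 10 TB in a day.

```python
# Sanity-checking the claim above: sustained 1 Gbps over 24 hours.
bits_per_second = 1e9
seconds_per_day = 24 * 3600
terabytes_per_day = bits_per_second * seconds_per_day / 8 / 1e12
# ~10.8 TB/day, consistent with "10 TB transferred per 24 hours"
```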
16. EFFECTIVE THROUGHPUT BY LOCATION AND AVAILABLE BANDWIDTH: AWS ENHANCED UPLOADER VS. ASPERA FASP
Montreal to AWS East (100 Mbps shared internet connection): 30 minutes (7–10 Mbps) vs. 3.7 minutes (80 Mbps), a 9X speed-up.
Rackspace in Dallas to AWS East (600 Mbps shared internet connection): 7.5 minutes (38 Mbps) vs. 0.5 minutes (600 Mbps), a 15X speed-up.
Other pains: the "Enhanced Bucket Uploader" requires a Java applet, very large transfers time out, there is no good resume for interrupted transfers, and no downloads.
17. EFFECTIVE THROUGHPUT & TRANSFER TIME FOR 4.4 GB / 15,691 FILES (AVERAGE SIZE 300 KB), BY LOCATION AND AVAILABLE BANDWIDTH: AWS HTTP MULTIPART VS. ASPERA ASCP
New York to AWS East Coast (1 Gbps shared connection): 334 seconds (113 Mbps) vs. 107 seconds (353 Mbps), a 3.3X speed-up.
New York to AWS West Coast (1 Gbps shared connection): 1032 seconds (36 Mbps) vs. 110 seconds (353 Mbps), a 9.4X speed-up.
EFFECTIVE THROUGHPUT & TRANSFER TIME FOR 8.7 GB / 18,995 FILES (AVERAGE SIZE 9.6 MB), BY LOCATION AND AVAILABLE BANDWIDTH: AWS HTTP MULTIPART VS. ASPERA ASCP
New York to AWS East Coast (1 Gbps shared connection): 477 seconds (156 Mbps) vs. 178 seconds (420 Mbps), a 2.7X speed-up.
New York to AWS West Coast (1 Gbps shared connection): 967 seconds (77 Mbps) vs. 177 seconds (420 Mbps), a 5.4X speed-up.
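The throughput figures in these tables can be reproduced from the sizes and times if the slides' "GB" are read as binary gibibytes (2^30 bytes). A small helper, for illustration:

```python
def effective_mbps(gib: float, seconds: float) -> float:
    """Effective throughput in Mbps, reading the tables' "GB" as GiB (2**30 bytes)."""
    return gib * 2**30 * 8 / seconds / 1e6

# Spot-checking rows from the tables above:
effective_mbps(4.4, 334)   # ~113 Mbps (HTTP multipart, New York -> AWS East)
effective_mbps(4.4, 107)   # ~353 Mbps (ascp, New York -> AWS East)
effective_mbps(8.7, 967)   # ~77 Mbps  (HTTP multipart, New York -> AWS West)
```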
21. – Maximum-speed single-stream transfer
– Support for large files and directory sizes in a single transfer
– Network and disk congestion control automatically adapts transmission speed to avoid congestion and overdrive
– Automatic retry and checkpoint resume of any transfer from the point of interruption
– Built-in over-the-wire encryption and encryption-at-rest (AES-128)
– Support for authenticated Aspera docroots using private cloud credentials and platform-specific role-based access control, including Amazon IAM
– Seamless fallback to HTTP(S) in restricted network environments
– Concurrent transfer support scaling up to ~50 concurrent transfers per VM instance
22. New clients connect to the "available" pool; existing client transfers continue; utilization > high watermark → available pool.
Console
• Collect / aggregate transfer data
• Transfer activity / reporting (UI, API)
Shares
• User management
• Storage access control
KEY COMPONENTS
• Cluster Manager for auto-scale and scaled DB
• Console management UI + reporting API
• Enhanced client for Shares authorizations
• Unified access to files/directories (browser, GUI, command line, SDK)
Scaling Parameters
• Min/max number of t/s
• Utilization low/high watermark
• Min number of t/s in "available" pool
• Min number of idle t/s in "available" pool
Management and Reporting
Cluster Manager
• Monitor cluster nodes
• Determine eligibility for transfer scale up / down
• Create / remove DB with replicas
• Add / remove node
Scale DB Persistence Layer
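The watermark-driven scaling parameters above suggest decision logic along these lines. The thresholds and the `scale_decision` helper are illustrative assumptions, not the Cluster Manager's actual rules:

```python
def scale_decision(utilization, nodes, *, low_wm=0.3, high_wm=0.8,
                   min_nodes=2, max_nodes=10):
    """Watermark-based auto-scaling sketch: add a node when utilization
    crosses the high watermark, retire one when it falls below the low
    watermark, always staying within [min_nodes, max_nodes]."""
    if utilization > high_wm and nodes < max_nodes:
        return nodes + 1
    if utilization < low_wm and nodes > min_nodes:
        return nodes - 1
    return nodes
```

Keeping a minimum "available" pool (per the slide's parameters) would sit on top of this, reserving idle nodes so new clients never wait for a scale-up.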
26. • Near-live experiences have highly bursty processing and distribution requirements
• Transcoding alone is expected to generate 100s of varieties of bitrates and formats for a multitude of target devices
• Audiences peak at millions of concurrent streams and die off shortly post-event
• Near "zero delay" in the video experience is expected
• "Second screen" depends on near-instant access / instant replay, which requires reducing end-to-end delay
• Linear transcoding approaches simply cannot meet demand (and are too expensive for short-term use!)
• Parallel "cloud" architectures are essential
• Investing in on-premises bandwidth for distribution is also impractical: millions of streams equals terabits per second
27. FASP
Scale-out high-speed transfer by Aspera
Scale-out on-demand transcoding by Elemental
Multi-screen capture and distribution by EVS
28. EVS: Belgian company; +90% market share of sports OB trucks; 21 offices; +500 employees (+50% in R&D)
32. LIVE STREAMING: A REAL-TIME CONSTRAINT!
Live streaming: 6 feeds @ 10 Mbps = 60 Mbps, × 2 games at the same time, × 2 for safety = 240 Mbps.
VOD multicam near-live replays: up to 24 clips @ 10 Mbps = 240 Mbps, × 2 games at the same time = 480 Mbps.
WE NEED A SOLUTION! Maximum throughput (bps) = TCP window size (b) / latency (s): (65535 × 8) / 0.2 s = 2,621,400 bps = 2.62 Mbps.
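The single-stream TCP bound quoted above follows directly from the window/RTT formula:

```python
def tcp_max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Classic single-stream TCP ceiling: one window per round trip."""
    return window_bytes * 8 / rtt_seconds

# The slide's numbers: a default 64 KB window over a 200 ms WAN round trip
# caps a single TCP stream at ~2.62 Mbps, far below the 240-480 Mbps the
# event requires, which is why a non-TCP transport (FASP) is needed.
tcp_max_throughput_bps(65535, 0.2)
```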
34. 6 live streams: HLS streaming of 6 HD streams to tablets & mobiles per match
+20 replay cameras: on-demand replays of selected events from up to 20+ cameras on the field
+4000 VoD elements: exclusive on-demand multimedia edits
35. FASP
Scale-out high-speed transfer by Aspera
Scale-out on-demand transcoding by Elemental
Multi-screen capture and distribution by EVS
36. KEY METRICS: +27 TB OF VIDEO DATA
Totals over 62 games (average per game in parentheses):
• Transfer time: 13,857 hours (216)
• GB transferred: 27,237 (426)
• Number of transfers: 14,073 (220)
• Number of files transferred: 2,706,922 (42,296)
Nearly 14,000 hours of video transfer time, over a WAN with 200 ms of latency and 10% packet loss.
37. Live streams: 660,000 minutes of transcoded output, × 4.3 = 2.8 million minutes of delivered streams, × 321 = 15 million hours, watched by 35 million unique viewers.
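The funnel above checks out arithmetically, if the second multiplier is read as applying to stream minutes before converting to hours (an interpretation of the slide, not stated math):

```python
# Reproducing the delivery funnel from the slide above.
transcoded_minutes = 660_000
delivered_minutes = transcoded_minutes * 4.3      # ~2.8 million minutes
viewing_hours = delivered_minutes * 321 / 60      # ~15.2 million hours
```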