Learn how LinkedIn makes article recommendations for its users. Talk by Ajit Singh, LinkedIn. To hear about future conferences go to http://dataengconf.com
At Netflix, we've spent a lot of time thinking about how we can make our analytics group move quickly. Netflix's Data Engineering & Analytics organization embraces the company's culture of "Freedom & Responsibility".
How does a company with a $40 billion market cap and $6 billion in annual revenue keep their data teams moving with the agility of a tiny company?
How do hundreds of data engineers and scientists make the best decisions for their projects independently, without the analytics environment devolving into chaos?
We'll talk about how Netflix equips its business intelligence and data engineers with:
the freedom to leverage cloud-based data tools - Spark, Presto, Redshift, Tableau and others - in ways that solve our most difficult data problems
the freedom to find and introduce the right software for the job - even if it isn't used anywhere else in-house
the freedom to create and drop new tables in production without approval
the freedom to choose when a question is a one-off, and when a question is asked often enough to require a self-service tool
the freedom to retire analytics and data processes whose value doesn't justify their support costs
Speaker Bios
Monisha Kanoth is a Senior Data Architect at Netflix, and was one of the founding members of the current streaming Content Analytics team. She previously worked as a big data lead at Convertro (acquired by AOL) and as a data warehouse lead at MySpace.
Jason Flittner is a Senior Business Intelligence Engineer at Netflix, focusing on data transformation, analysis, and visualization as part of the Content Data Engineering & Analytics team. He previously led the EC2 Business Intelligence team at Amazon Web Services and was a business intelligence engineer with Cisco.
Chris Stephens is a Senior Data Engineer at Netflix. He previously served as the CTO at Deep 6 Analytics, a machine learning & content analytics company in Los Angeles, and on the data warehouse teams at the FOX Audience Network and Anheuser-Busch.
Consolidating MLOps at One of Europe’s Biggest Airports - Databricks
At Schiphol airport we run many mission-critical machine learning models in production, ranging from models that predict passenger flow to computer vision models that analyze what is happening around the aircraft. Especially now, in times of Covid, it is paramount for us to be able to iterate quickly on these models: implementing new features, retraining them to match the new dynamics, and above all monitoring them actively to see whether they still fit the current state of affairs.
To achieve this we rely on MLFlow, which we have also integrated with many of our other systems: we have written Airflow operators for MLFlow to ease the retraining of our models, we have integrated MLFlow deeply with our CI pipelines, and we have connected it to our model monitoring tooling.
In this talk we will take you through the way we rely on MLFlow and how that enables us to release new versions of a model, sometimes several per week, in a controlled fashion. With this set-up we achieve the same benefits and speed as a traditional software CI pipeline.
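The speakers' operators are not published, but a minimal sketch of what Airflow-to-MLFlow retraining glue could look like follows; the operator name, the train_fn hook, and the scikit-learn model flavor are assumptions for illustration, not Schiphol's actual code.

```python
# Hypothetical sketch: an Airflow operator that retrains a model and
# records the run in MLflow. Names and the train_fn contract are invented.
import mlflow
from airflow.models import BaseOperator


class RetrainModelOperator(BaseOperator):
    """Retrain a model inside an Airflow task and log it to MLflow."""

    def __init__(self, experiment_name, train_fn, **kwargs):
        super().__init__(**kwargs)
        self.experiment_name = experiment_name
        self.train_fn = train_fn  # callable returning (model, metrics dict)

    def execute(self, context):
        mlflow.set_experiment(self.experiment_name)
        with mlflow.start_run():
            model, metrics = self.train_fn()
            for name, value in metrics.items():
                mlflow.log_metric(name, value)
            # Assumes a scikit-learn model; other MLflow flavors work the same way.
            mlflow.sklearn.log_model(model, "model")
```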
In this webinar, Adam will explain the benefits and restrictions that are encountered when working with Big Data systems in a modern agile development approach. He will go on to present some of the approaches, both in automation and in their management of testing activities, that his team has successfully adopted in tackling the big data testing challenge.
Tell me more - http://testhuddle.com/resource/big-data-a-new-testing-challenge/
Taking Jupyter Notebooks and Apache Spark to the Next Level PixieDust with Da... - Databricks
PixieDust is a new open source library that helps data scientists and developers working in Jupyter Notebooks and Apache Spark be more efficient. PixieDust speeds up data manipulation and display with features like: auto-visualization of Spark DataFrames, real-time Spark job progress monitoring, automated local install of Python and Scala kernels running with Spark, and much more.
Come along and learn how you can use this tool in your own projects to visualize and explore data effortlessly with no coding. Oh, and if you prefer working with a Scala Notebook, this session is also for you, as PixieDust can also run on a Scala Kernel. Imagine being able to visualize your favorite Python chart engines from a Scala Notebook!
We’ll finish the session with a demo combining Twitter, Watson Tone Analyzer, Spark Streaming, and some fun real-time visualizations–all running within a Notebook.
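As a taste of that workflow, here is a minimal sketch assuming a Jupyter notebook with PixieDust installed and a Spark session available; the toy DataFrame is invented for illustration.

```python
# Run inside a Jupyter notebook cell; `display` is provided by PixieDust.
import pixiedust
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("drama", 120), ("comedy", 95), ("documentary", 60)],
    ["genre", "minutes"],
)
display(df)  # opens PixieDust's interactive table/chart UI for the DataFrame
```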
Video and slides synchronized, mp3 and slide download available at https://bit.ly/2OUz6dt.
Chris Riccomini talks about the current state-of-the-art in data pipelines and data warehousing, and shares some of the solutions to current problems dealing with data streaming and warehousing. Filmed at qconsf.com.
Chris Riccomini works as a Software Engineer at WePay.
Correlate Log Data with Business Metrics Like a Jedi - Trevor Parsons
The Logentries and Hosted Graphite integration allows you to connect two of your favorite Ops tools to easily extract important data from your log files, visualize them as metrics, and share them in Hosted Graphite dashboards.
• Integrate your systems to extract the metrics you need, from both your applications and log data (a minimal sketch of this idea appears after this list).
• Set-up log metric dashboards based on common use cases (e.g. error tracking, performance, app usage).
• Get off the "complexity elevator" of hosting your own in-house logging or graphite solutions.
• Delight your team and organization with valuable metrics and performance insights.
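As promised above, a minimal sketch of the log-to-metric idea, independent of the Logentries/Hosted Graphite products themselves: count error lines and push the count to a Graphite endpoint over its plaintext protocol. The host, log file, and metric path are hypothetical.

```python
# Toy sketch: derive a metric from log lines and ship it to Graphite's
# plaintext listener ("<path> <value> <timestamp>\n" over TCP, port 2003).
import re
import socket
import time

GRAPHITE_HOST = "graphite.example.com"  # hypothetical endpoint
GRAPHITE_PORT = 2003

ERROR_RE = re.compile(r"\bERROR\b")

def count_errors(log_lines):
    return sum(1 for line in log_lines if ERROR_RE.search(line))

def send_metric(path, value, timestamp=None):
    payload = f"{path} {value} {int(timestamp or time.time())}\n"
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT)) as sock:
        sock.sendall(payload.encode("ascii"))

with open("app.log") as f:          # hypothetical application log
    send_metric("app.errors.count", count_errors(f))
```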
Big Data Day LA 2016/ Big Data Track - Rapid Analytics @ Netflix LA (Updated ... - Data Con LA
This talk explores how Netflix equips its engineers with the freedom to find and introduce the right software for the job - even if it isn't used anywhere else in-house. Examples include how Netflix has enabled analysts to fluidly switch between MPP RDBMS and an auto-scaling Presto cluster, how Spark + NoSQL stores are used when deploying data sets to internal web apps, and how data scientists are enabled to work in the ML framework of their choosing and deploy models as a service.
Do you need Ops in your new startup? If not now, then when? And...what is Ops?
Learn how to scale ruby-based distributed software infrastructure in the cloud to serve 4,000 requests per second, handle 400 updates per second, and achieve 99.97% uptime – all while building the product at the speed of light.
Unimpressed? Now try doing all of the above without an Ops team, while growing your traffic 100x in 6 months and deploying 5-6 times a day!
It could be a dream, but luckily it's a reality that could be yours.
Gluent Extending Enterprise Applications with Hadoop - gluent.
This presentation shows how to transparently extend enterprise applications with the power of modern data platforms such as Hadoop. Application re-writing is not needed and there is no downtime when virtualizing data with Gluent.
The Future of ETL Isn't What It Used to Be - confluent
Speaker: Gwen Shapira, Principal Data Architect, Confluent
Join Gwen Shapira, Apache Kafka® committer and co-author of "Kafka: The Definitive Guide," as she presents core patterns of modern data engineering and explains how you can use microservices, event streams and a streaming platform like Apache Kafka to build scalable and reliable data pipelines designed to evolve over time.
This is part 1 of 3 in Streaming ETL - The New Data Integration series.
Watch the recording: https://videos.confluent.io/watch/q7roRtNZBnjiT9C3ii88fo?.
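As a hedged illustration of the consume-transform-produce pattern such streaming pipelines are built on, here is a minimal loop using the kafka-python client; the broker address, topic names, and order schema are hypothetical.

```python
# Minimal streaming-ETL sketch: read raw events, normalize them, republish.
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

consumer = KafkaConsumer(
    "raw-orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

for message in consumer:
    order = message.value
    if "amount" not in order:       # drop malformed records
        continue
    order["amount_cents"] = int(round(float(order["amount"]) * 100))
    producer.send("clean-orders", order)
```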
Kyle Kingsbury Talks about the Jepsen Test: What VoltDB Learned About Data Ac... - VoltDB
In this webinar, Kyle Kingsbury talks about the testing process, how Jepsen works, and what he found in VoltDB. Then hear from John Hugg, founding engineer at VoltDB, on why these tests are important, how VoltDB reacted to the findings, and the QA process moving forward.
Testistanbul 2016 - Keynote: "Performance Testing of Big Data" by Roland Leusden - Turkish Testing Board
Agile, Continuous Integration, DevOps, and Big Data are no longer buzzwords but part of the day-to-day process of everyone working in software development and delivery. To cope with applications that need to be deployed to production almost the moment they are created, software development has changed, impacting the way of working for everyone on the team. In this talk, Roland will discuss the challenges performance testers face with Big Data applications and how architecture, Agile, Continuous Integration and DevOps come together to create solutions.
SQL Analytics Powering Telemetry Analysis at Comcast - Databricks
Comcast is one of the leading providers of communications, entertainment, and cable products and services. At the heart of it is Comcast RDK, providing the backbone of telemetry to the industry. RDK (Reference Design Kit) is pre-bundled open-source firmware for a complete home platform covering video, broadband and IoT devices. The RDK team at Comcast analyzes petabytes of data, collected every 15 minutes from 70 million devices (video, broadband, and IoT) installed in customer homes. They run ETL and aggregation pipelines and publish analytical dashboards on a daily basis to reduce customer calls and to support firmware rollouts. The analysis is also used to calculate a WiFi happiness index, a critical KPI for Comcast customer experience.
In addition, the RDK team also does release tracking by analyzing RDK firmware quality. SQL Analytics allows customers to operate a lakehouse architecture that provides data warehousing performance at data lake economics, for up to 4x better price/performance for SQL workloads than traditional cloud data warehouses.
We present the results of the “Test and Learn” with SQL Analytics and the Delta engine that we ran in partnership with the Databricks team, along with a quick demo introducing the SQL-native interface, the challenges we faced with migration, the results of the execution, and our journey of productionizing this at scale.
Deduplication and Author-Disambiguation of Streaming Records via Supervised M... - Spark Summit
Here we present a general supervised framework for record deduplication and author-disambiguation via Spark. This work differentiates itself in three ways:
– The use of Databricks and AWS makes this a scalable implementation. Compute resources are considerably lower than with traditional legacy technology using big boxes 24/7. Scalability is crucial, as Elsevier’s Scopus data, the biggest scientific abstract repository, covers roughly 250 million authorships from 70 million abstracts spanning a few hundred years.
– We create a fingerprint for each piece of content with deep learning and/or word2vec algorithms to expedite pairwise similarity calculation. These encoders substantially reduce compute time while maintaining semantic similarity (unlike traditional TF-IDF or predefined taxonomies). We will briefly discuss how to optimize word2vec training with high parallelization. Moreover, we show how these encoders can be used to derive a standard representation for all our entities, such as documents, authors, users, and journals. This standard representation reduces the recommendation problem to a pairwise similarity search, and hence offers a basic recommender for cross-product applications where we may not have a dedicated recommender engine.
– Traditional author-disambiguation or record-deduplication algorithms are batch processes with little to no training data. However, we have roughly 25 million authorships that are manually curated or corrected upon user feedback. Since it is crucial to maintain historical profiles, we have developed a machine learning implementation that deals with data streams and processes them in mini-batches or one document at a time. We will discuss how to measure the accuracy of such a system, how to tune it, and how to turn raw pairwise similarities into final clusters.
Lessons learned from this talk can help any company that wants to integrate its data or deduplicate its user/customer/product databases.
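A toy sketch of the fingerprinting step described above (not Elsevier's pipeline): average word2vec vectors into a fixed-size record embedding and compare records by cosine similarity. The corpus, parameters, and records are invented.

```python
import numpy as np
from gensim.models import Word2Vec  # pip install gensim

corpus = [
    "deep learning for record deduplication".split(),
    "supervised author disambiguation with spark".split(),
    "word2vec encoders preserve semantic similarity".split(),
]
model = Word2Vec(corpus, vector_size=32, min_count=1, epochs=50)

def fingerprint(tokens):
    """Average the word vectors of known tokens into one record vector."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = fingerprint("record deduplication with deep learning".split())
b = fingerprint("supervised disambiguation author".split())
print(cosine(a, b))  # higher score = more likely the records match
```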
Stateful Stream Processing at In-Memory Speed - Jamie Grier
This presentation describes results from a real-world system where I used Apache Flink's stateful stream processing capabilities to eliminate the key-value store bottleneck and the burden of the Lambda Architecture while also improving accuracy and gaining huge improvements in hardware efficiency!
Using PySpark to Process Boat Loads of Data - Robert Dempsey
Learn how to use PySpark for processing massive amounts of data. Combined with the GitHub repo - https://github.com/rdempsey/pyspark-for-data-processing - this presentation will help you gain familiarity with processing data using Python and Spark.
If you're thinking about machine learning and not sure if it can help improve your business, but want to find out, set up a free 20-minute consultation with us: https://calendly.com/robertwdempsey/free-consultation
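In the spirit of the talk above, a minimal PySpark sketch: read a hypothetical CSV of events, aggregate per day, and write the result as Parquet.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("boatloads-of-data").getOrCreate()

events = spark.read.csv("events.csv", header=True, inferSchema=True)
daily = events.groupBy("event_date").agg(
    F.count("*").alias("events"),
    F.countDistinct("user_id").alias("unique_users"),
)
daily.write.mode("overwrite").parquet("daily_counts.parquet")
```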
Big Data generates value from the storage and processing of very large quantities of digital information that cannot be analysed with traditional computing techniques.
I work in a Data Innovation Lab with a horde of Data Scientists. Data Scientists gather data, clean data, apply Machine Learning algorithms and produce results, all of that with specialized tools (Dataiku, Scikit-Learn, R...). These processes run on a single machine, on data that is fixed in time, and they have no constraint on execution speed.
With my fellow Developers, our goal is to bring these processes to production. Our constraints are very different: we want the code to be versioned, to be tested, to be deployed automatically and to produce logs. We also need it to run in production on distributed architectures (Spark, Hadoop), with fixed versions of languages and frameworks (Scala...), and with data that changes every day.
In this talk, I will explain how we, Developers, work hand-in-hand with Data Scientists to shorten the path to running data workflows in production.
Agile experiments in Machine Learning with F# - J On The Beach
Just like traditional application development, machine learning involves writing code. One aspect where the two differ is the workflow. While software development follows a fairly linear process (design, develop, and deploy a feature), machine learning is a different beast. You work on a single feature, which is never 100% complete. You constantly run experiments and re-design your model in depth at a rapid pace. Traditional tests are entirely useless. Validating whether you are on the right track takes minutes, if not hours.
In this talk, we will take the example of a Machine Learning competition we recently participated in, the Kaggle Home Depot competition, to illustrate what "doing Machine Learning" looks like. We will explain the challenges we faced, and how we tackled them, setting up a harness to easily create and run experiments, while keeping our sanity. We will also draw comparisons with traditional software development, and highlight how some ideas translate from one context to the other, adapted to different constraints.
DOES SFO 2016 - Avan Mathur - Planning for Huge Scale - Gene Kim
Installing one CI server or configuring a deployment pipeline for a specific application might be easy enough. However, as enterprises look to scale their DevOps adoption and optimize their software delivery practices across the organization (to support additional teams, product lines, application releases, processes and infrastructure) -- software delivery pipeline(s) need to scale to support enterprise workloads.
For some enterprises, this means having a pipeline that can withstand the velocity and throughput of thousands of product releases, supporting tens of thousands of developers and distributed teams, hundreds of thousands of infrastructure nodes, multitudes of inter-dependent application components, or millions of builds and test-cases.
This scale poses unique challenges and implications for your pipeline design. This talk covers best practices for analyzing and (re)designing your software delivery pipeline – regardless of your chosen tool-set or technologies. Obtain tips and tools for ensuring your pipelines and DevOps infrastructure have the right architecture and feature-set to support your software production as it scales, while also ensuring manageability, governance, security, and compliance.
Learn best practices for how to:
1) Plan for scale: project the types of performance indicators/vectors you’d need to scale across.
2) Design your pipeline and its supporting infrastructure and operations (such as data retention, artifact retrieval, monitoring, etc.).
3) Design your pipeline workflows and processes to allow reusability and standardization across the organization, while also enabling flexibility to support the needs of specific teams/apps.
4) Design your pipeline in a way that enables fast rollout - easy onboarding of thousands of applications, across hundreds of teams.
5) Incorporate security access controls, approval gates and compliance checks as part of your pipeline, and make them standard across all releases.
6) Ensure your architecture supports HA, DR and business continuity.
What's The Role Of Machine Learning In Fast Data And Streaming Applications? - Lightbend
Designed for busy Architects and Managers, this Lightbend Webinar features Dr. Emre Velipasaoglu, Principal Data Scientist at Lightbend, who explains what ML is really all about, the ideal use cases for ML, and how getting it right can benefit your streaming and Fast Data application architectures.
Intro to Machine Learning with H2O and Python - Denver - Sri Ambati
Presentation at Comcast Denver 03.01.16
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
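For readers who want to try it, a minimal getting-started sketch with H2O's Python API; the training file and column names are hypothetical.

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # starts (or attaches to) a local H2O cluster
frame = h2o.import_file("train.csv")          # hypothetical dataset
frame["label"] = frame["label"].asfactor()    # treat target as categorical

model = H2OGradientBoostingEstimator(ntrees=50)
model.train(
    x=[c for c in frame.columns if c != "label"],
    y="label",
    training_frame=frame,
)
print(model.auc())  # training AUC for a binary target
```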
DataEngConf SF16 - Tales from the other side - What a hiring manager wish you... - Hakka Labs
Tips for succeeding in your data science job interview. Talk by Bridge Mellichamp, Stitch Labs. To hear about future conferences go to http://dataengconf.com
Bringing Sequential Analysis to A/B Testing with examples from his work at Optimizely.
These slides are from a talk given at the SF Data Engineering meetup. http://www.meetup.com/SF-Data-Engineering/events/231047195/
DataEngConf SF16 - Multi-temporal Data Structures - Hakka Labs
A mind-bending way of dealing with time syncing when aggregating data from many disparate sources. Talk by Jasmine Tsai and Alyssa Kwan, Clover Health. To hear about future conferences go to http://dataengconf.com
View the companion webinar at: http://embt.co/1L8V6dI
Some claim that, in the age of Big Data, data modeling is less important or even not needed. However, with the increased complexity of the data landscape, it is actually more important to incorporate data modeling in order to understand the nature of the data and how they are interrelated. In order to do this effectively, the way that we do data modeling needs to adapt to this complex environment.
One of the key data modeling issues is how to foster collaboration between new groups, such as data scientists, and traditional data management groups. There are often different paradigms, and yet it is critical to have a common understanding of data and semantics between different parts of an organization. In this presentation, Len Silverston will discuss:
+ How Big Data has changed our landscape and affected data modeling
+ How to conduct data modeling in a more ‘agile’ way for Big Data environments
+ How we can collaborate effectively within an organization, even with differing perspectives
About the Presenter:
Len Silverston is a best-selling author, consultant, and a fun and top rated speaker in the field of data modeling, data governance, as well as human behavior in the data management industry, where he has pioneered new approaches to effectively tackle enterprise data management. He has helped many organizations world-wide to integrate their data, systems and even their people. He is well known for his work on "Universal Data Models", which are described in The Data Model Resource Book series (Volumes 1, 2, and 3).
Speaker: Venkatesh Umaashankar
LinkedIn: https://www.linkedin.com/in/venkateshumaashankar/
What will be discussed?
What is Data Science?
Types of data scientists
What makes a Data Science Team? Who are its members?
Why does a DS team need Full Stack Developer?
Who should lead the DS Team
Building a Data Science team in a Startup Vs Enterprise
Case studies on:
Evolution Of Airbnb’s DS Team
How Facebook on-boards DS team and trains them
Apple’s Acqui-hiring Strategy to build DS team
Spotify -‘Center of Excellence’ Model
Who should attend?
Managers
Technical Leaders who want to get started with Data Science
Over 90% of today’s data has been generated in the last two years, and growth rates continue to climb. In this session, we’ll step through challenges and best practices with data capturing, how to derive meaningful insights to help predict the future, and common pitfalls in data analysis.
Come discover how integrated solutions involving Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon Machine Learning/Deep Learning result in effective data systems for data scientists and business users, alike.
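As one concrete flavor of such an integrated system, here is a hedged boto3 sketch that launches an Amazon Athena query over data catalogued from S3; the database name, table, and result bucket are hypothetical.

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},          # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)

# Athena runs asynchronously; poll the execution id for status and results.
status = athena.get_query_execution(QueryExecutionId=resp["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])
```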
Presumption of Abundance: Architecting the Future of SuccessInside Analysis
Hot Technologies with Dr. Claudia Imhoff, Dr. Robin Bloor and SAS
Live Webcast on Jan. 14, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=9431631f43a8c7561f2ba996750a4612
When resources are scarce, organizations focus heavily on keeping processes intact and costs down. The result is often a cycle of decisions that hinders development and ultimately leads to zero innovation. But these days, the market is teeming with game-changing solutions with more attractive price points, paving the way toward a new mindset and an era of abundance.
Register for this episode of Hot Technologies to learn from veteran Analysts Claudia Imhoff and Robin Bloor as they discuss how the proliferation of data and analytics is forcing the enterprise to rethink and redesign its architecture. They’ll be briefed by Gary Spakes of SAS, who will explain his company’s approach to Big Data analytics. He will show how disruptive technologies like Hadoop can give organizations the scalability and reliability they need, and at the same time boost data discovery, analytic innovation and time-to-value.
Visit InsideAnalysis.com for more information.
Jay Lyman (451 Research), Brent Beer (GitHub), and Steven Anderson (Sendachi) talk about these topics:
Cloud, DevOps, agile development capability, and container adoption are all important in both perception and reality.
Enterprise adoption of cloud computing, DevOps, agile development and containers is growing, including production use.
Modernizing applications to SaaS and migrating them to the cloud are just as important as net-new, so-called ‘cloud-native’ applications.
Advantages and benefits of these technologies and methodologies center on: flexibility and speed, cost reduction, improvements in resiliency and reliability and fitness for new/emerging applications.
Barriers center on: lack of internal skills, immaturity, lack of familiarity, satisfaction with current technology, cost and security.
Two years ago at Devoxx UK we talked about DevOps, what it was, why it was important and how to get started. Boy, was it scary. Now we’re wiser. More battle-scarred. The large scale of the challenge for application writers exploiting cloud and DevOps is clearer, but so is the path forward. Understanding the DevOps approach is important, but equally you must understand specific deployment technologies, security issues, operational reliability, and how to drive organisational transformation. Whether creating simple applications or sophisticated microservice architectures many of the challenges are the same. Join us to learn how you can apply this within your team and company.
Salesforce Architect Group, Frederick, United States July 2023 - Generative A... - NadinaLisbon1
Join our community-led event to dive into the world of Artificial Intelligence (AI)! Whether you are just starting your AI journey or already familiar with its concepts, one thing is certain: AI is reshaping the future of work. This enablement session is your chance to level up your skills and stay ahead in this rapidly evolving landscape.
As AI news continues to dominate headlines, it's natural to have questions and concerns about its impact on our lives. Will AI take over human jobs? Will it render us obsolete? Rest assured, the outlook is far brighter than you may think. Rather than replacing humans, AI is designed to enhance our capabilities and work alongside us. It won't be replacing marketers, service representatives, or salespeople—it will be empowering them to achieve even greater results. Companies across industries recognize this potential and are embracing AI to unlock new levels of performance.
During this enablement session, you'll have the opportunity to explore how AI advancements can positively influence your professional journey and daily life. We'll debunk common misconceptions, address fears, and showcase real-world examples of how successful AI implementation leads to workforce augmentation rather than replacement. Be prepared to gain valuable insights and practical knowledge that will help you navigate the AI landscape with confidence.
Top Business Intelligence Trends for 2016 by Panorama Software - Panorama Software
10 top BI trends for 2016 – by Panorama
It’s all about the insight
Visual perception rules
The learning suggestive system - AI gets real
The data product chain becomes democratized
Cloud (finally)
“Mobile”
Automated data integration
Internet of Things data accelerating into reality
Hadoop accelerators are the last chance for Hadoop
Fading of the centralized on-premise DWH
Big Data for Data Scientists - Info Session - WeCloudData
In this talk, WeCloudData introduces the Hadoop/Spark ecosystem and how businesses use big data tools and platforms. For more detail about WeCloudData's big data for data scientist course please visit: https://weclouddata.com/data-science/
Kelly O'Briant - DataOps in the Cloud: How To Supercharge Data Science with a... - Rehgan Avon
2018 Women in Analytics Conference
https://www.womeninanalytics.org/
Over the last year I’ve become obsessed with learning how to be a better "cloud computing evangelist to data scientists" - specifically to the R community. I’ve learned that this isn’t often an easy undertaking. Most people (data scientists or not) are skeptical of changing up the tools and workflows they’ve come to rely on when those systems seem to be working. Resistance to change increases even further with barriers to quick adoption, such as having to teach yourself a completely new technology or framework. I’d like to give a talk about how working in the cloud changes data science and how exploring these tools can lead to a world of new possibilities within the intersection of DevOps and Data Analytics.
Topics to discuss:
- Working through functionality/engineering challenges with R in a cloud environment
- Opportunities to customize and craft your ideal version of R/RStudio
- Making and embracing a decision on what is “real” about your analysis or daily work (Chapter 6 in R for Data Science)
- Running multiple R instances in the cloud (why would you want to do this?)
- Becoming an R/Data Science Collaboration wizard: Building APIs with Plumber in the Cloud
DataEngConf: Building the Next New York Times Recommendation Engine - Hakka Labs
By Alex Spangher (Data Engineer, New York Times Digital)
Machine Learning is a discipline characterized by systematic approaches and common threads across seemingly diverse problems. In this talk I'll cover several approaches taken during our work on the next New York Times Recommendation Engine, focusing on spatial reasoning, dimensionality reduction, and testing strategies. Topics covered will include implicit regression, Bayesian modeling and neural networks. The talk will focus on the commonalities between the different approaches taken.
DataEngConf: Measuring Impact with Data in a Distributed World at Conde Nast - Hakka Labs
By Ky Harlin (VP Growth & Data Science, Conde Nast)
The nature of digital content is more distributed than ever before, and measuring its impact presents different challenges than traditional web analytics. How does one manage measurement and analysis in this environment to create meaningful feedback loops for a media company? At its core, this is an engineering problem, but it requires close collaboration with data scientists, content creators, editors, and others in order to be effective. Through real examples from his work at Conde Nast, Ky will review how they have approached this problem and interesting findings from their early work.
DataEngConf: Feature Extraction: Modern Questions and Challenges at Google - Hakka Labs
By Dmitry Storcheus (Engineer, Google Research)
Feature extraction, as usually understood, seeks an optimal transformation from raw data into features that can be used as an input for a learning algorithm. In recent times this problem has been attacked using a growing number of diverse techniques that originated in separate research communities: from PCA and LDA to manifold and metric learning. The goal of this talk is to contrast and compare feature extraction techniques coming from different machine learning areas as well as discuss the modern challenges and open problems in feature extraction. Moreover, this talk will suggest novel solutions to some of the challenges discussed, particularly to coupled feature extraction.
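As a small, concrete instance of the classical end of that spectrum, here is a PCA sketch with scikit-learn on synthetic data that lies near a 2-D subspace of a 10-D space; the data are generated for illustration, not taken from the talk.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))          # true low-dimensional factors
mixing = rng.normal(size=(2, 10))           # lift them into 10 dimensions
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))  # add small noise

pca = PCA(n_components=2)
Z = pca.fit_transform(X)                    # extracted 2-D features
print(pca.explained_variance_ratio_)        # ~all variance in 2 components
```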
DataEngConf: Talkographics: Using What Viewers Say Online to Measure TV and B... - Hakka Labs
By Shawndra Hill (Sr Researcher, Microsoft Research NYC)
Viewers of TV shows are increasingly taking to online sites like Facebook and Twitter to comment about the shows they watch as well as to contribute content about their daily lives. We present a novel recommendation system (RS) based on the user-generated content (UGC) contributed by TV viewers via the social networking site Twitter. In our approach, a TV show is represented by all of the tweets of its viewers who follow the show on Twitter. These tweets, in aggregate, enable us to reliably calculate the affinity between TV shows and to describe how and why certain shows are similar in terms of their audiences in a privacy friendly way.
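A toy sketch of that core idea: represent each show by its viewers' aggregated text and measure show-to-show affinity with TF-IDF cosine similarity. The tweet snippets are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

show_docs = {
    "show_a": "zombies survival finale shocking twist",
    "show_b": "zombie apocalypse season finale twist",
    "show_c": "bake off cakes frosting showstopper",
}
names = list(show_docs)
tfidf = TfidfVectorizer().fit_transform(show_docs.values())
sim = cosine_similarity(tfidf)

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(names[i], names[j], round(float(sim[i, j]), 3))  # pairwise affinity
```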
DataEngConf: The Science of Virality at BuzzFeed - Hakka Labs
By Adam Kelleher (Sr Data Scientist, BuzzFeed)
BuzzFeed has developed the technology to attribute pageviews to a referring user. Using these data, we can construct diffusion graphs for our articles. These graphs introduce a whole collection of new performance metrics, and their complexity opens the door for a new assortment of complications to go with them. I'll mention some past work that has been done to create similar data from old pageview events. Then, I'll work through how we process these data into graph objects (avoiding some pitfalls), and mention some of the new ways of looking at web analytics implied by these objects. I'll talk about how we can take advantage of the structure of these objects to make certain algorithms more efficient. Finally, I'll cover some of the future applications we're particularly excited about!
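A minimal sketch of the graph-construction step (not BuzzFeed's internal tooling): build a diffusion graph from (referring user, receiving user) attribution events and read off simple cascade metrics. The events are invented.

```python
import networkx as nx

events = [            # (referring_user, receiving_user), hypothetical
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob", "dave"),
    ("carol", "erin"),
]

G = nx.DiGraph()
G.add_edges_from(events)

print("cascade depth:", nx.dag_longest_path_length(G))   # longest share chain
print("top spreader:", max(G.nodes, key=G.out_degree))    # most onward views
```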
DataEngConf: Uri Laserson (Data Scientist, Cloudera) Scaling up Genomics with... - Hakka Labs
New DNA sequencing technologies are revolutionizing the life sciences by generating extremely large data sets. Traditional tools for processing this data will have difficulty scaling to the coming deluge of genomics data. We discuss how the innovations of Hadoop and Spark are solving core problems that enable scientists to address questions that were previously out of reach.
DataEngConf: Parquet at Datadog: Fast, Efficient, Portable Storage for Big Data - Hakka Labs
By Doug Daniels (Director of Engineering, Datadog)
At Datadog, we collect hundreds of billions of metric data points per day from hosts, services, and customers all over the world. In addition to charting and monitoring this data in real time, we also run many large-scale offline jobs to apply algorithms and compute aggregations on the data. In the past months, we’ve migrated our largest data sets over to Apache Parquet, an efficient, portable columnar storage format.
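A minimal pyarrow sketch of the workflow Parquet enables: write a columnar file, then read back only the columns a job needs. Field names are illustrative, not Datadog's schema.

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "host":   ["web-1", "web-2", "web-1"],
    "metric": ["cpu", "cpu", "mem"],
    "value":  [0.71, 0.64, 0.33],
})
pq.write_table(table, "metrics.parquet", compression="snappy")

# Columnar layout lets readers skip whole columns cheaply.
subset = pq.read_table("metrics.parquet", columns=["host", "value"])
print(subset.to_pydict())
```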
DataEngConf: Apache Kafka at Rocana: a scalable, distributed log for machine ... - Hakka Labs
By Alan Gardner (Platform Engineer, Rocana)
Rocana Ops is designed to handle terabytes a day of application logs and system metrics from across multiple data centres. We use Apache Kafka as a durable, high-throughput message bus at the centre of our application architecture. This talk will discuss the design and features of Kafka, its operational characteristics, and why we chose it as the backbone of our data pipeline.
DataEngConf: Building Satori, a Hadoop tool for Data Extraction at LinkedIn - Hakka Labs
By Nikolai Avteniev (Sr Software Engineer, LinkedIn)
LinkedIn is the professional profile of record for our 370M+ members globally, but many people don't realize the full potential of their LinkedIn profile – especially on mobile. Adding blogs, photos and other rich content to your profile on a small screen device can get tedious. That's why LinkedIn created Satori, a Hadoop tool that crawls the web and extracts data to discover members' professional content online. Satori uses machine learning techniques and leverages other open source tools like Nutch and Gobblin in order to help match members with relevant content in order to maximize their professional profile. In this talk, Nikolai will share his experience in building the product and discuss the challenges and opportunities encountered along the way.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and its leaders. You will get a full view of the MEA region's automation landscape and the AI-powered automation capabilities of UiPath. Hosted with our local partner Marc Ellis, you will enjoy a half-day packed with industry insights and networking with automation peers.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35 Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 Discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
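A toy sketch of the seed-trimming idea (not the authors' implementation): repeatedly drop bytes whose removal leaves observed coverage unchanged, so later mutations focus on bytes that matter. The coverage function here is a hypothetical stand-in for running the instrumented target.

```python
def toy_coverage(data: bytes) -> frozenset:
    """Stand-in for instrumentation: pretend only ASCII letters drive behaviour."""
    return frozenset(b for b in data if 65 <= b <= 90 or 97 <= b <= 122)

def trim_seed(seed: bytes, coverage_fn) -> bytes:
    baseline = coverage_fn(seed)
    trimmed = bytearray(seed)
    i = 0
    while i < len(trimmed):
        candidate = trimmed[:i] + trimmed[i + 1:]
        if coverage_fn(bytes(candidate)) == baseline:
            trimmed = candidate   # byte is uninteresting for coverage: drop it
        else:
            i += 1                # byte matters: keep it and move on
    return bytes(trimmed)

# Under the toy metric, punctuation (and duplicate letters) gets stripped.
print(trim_seed(b"<<hello??world>>", toy_coverage))
```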
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Enhancing Performance with Globus and the Science DMZ - Globus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
24. Epsilon-Greedy Explore/Exploit
[Slide: four candidate headlines with their observed click-through rates]
#1 Target’s Red Ink Runs out in Canada. - 0.3241%
#2 Why we need “Economy Wide” Airline Seats. - 0.5923%
#3 How much work is too much work? - 0.4864%
#4 We can learn from barn raisers. - 0.0231%
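To make the slide concrete, here is a minimal epsilon-greedy sketch (an illustration, not the speaker's code): with probability epsilon show a random headline (explore), otherwise show the headline with the best click-through rate observed so far (exploit). The simulated CTRs are seeded from the numbers on the slide.

```python
import random

TRUE_CTR = {"#1": 0.003241, "#2": 0.005923, "#3": 0.004864, "#4": 0.000231}
headlines = list(TRUE_CTR)
shows = {h: 0 for h in headlines}
clicks = {h: 0 for h in headlines}

def ctr(h):
    return clicks[h] / shows[h] if shows[h] else 0.0

def choose(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(headlines)   # explore: try any headline
    return max(headlines, key=ctr)        # exploit: best observed CTR

for _ in range(100_000):
    h = choose()
    shows[h] += 1
    clicks[h] += random.random() < TRUE_CTR[h]   # simulated click

print(max(headlines, key=ctr))  # converges to "#2", the top headline
```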