Calista Redmond from IBM presented this deck at the Switzerland HPC Conference.
“The OpenPOWER Foundation was founded in 2013 as an open technical membership organization that will enable data centers to rethink their approach to technology. Today, nearly 200 member companies are enabled to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large or warehouse scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, or advanced hardware technology exploitation. OpenPOWER members are actively pursuing all of these innovations and more and welcome all parties to join in moving the state of the art of OpenPOWER systems design forward.”
Watch the video presentation: http://insidehpc.com/2016/03/openpower-foundation/
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How Apache Spark and Apache Hadoop are being used to keep banking regulators ...DataWorks Summit
The global financial crisis showed that traditional IT systems at banks were ill-equipped to monitor and manage a risk landscape that changed daily. The sheer amount of data that needed to be crunched meant that many banks were a day or more behind in calculating, understanding and reporting their risk positions. Post-crisis, a regulatory review led to new legislation, BCBS 239: Principles for effective risk data aggregation and risk reporting, which requires banks to meet far more stringent timeliness requirements in aggregating and reporting their quickly-changing risk positions, or risk fines running into millions of dollars. To meet these requirements, banks have been forced to rethink traditional IT architectures that cannot cope with the sheer volume of risk data, and are instead turning to Apache Hadoop and Apache Spark to build out the next generation of risk systems. In this talk you will discover how some of the leading banks in the world are leveraging Apache Hadoop and Apache Spark to meet the BCBS 239 regulation.
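At its core, the timeliness requirement comes down to aggregating exposures across many source systems fast enough to report intraday. In production this is a Spark job over data landed in Hadoop; the grouping logic itself can be sketched in plain Python (the record fields and figures below are made up for illustration):

```python
from collections import defaultdict

# Hypothetical trade records as they might arrive from several source systems.
trades = [
    {"desk": "rates", "risk_factor": "USD_IR", "exposure": 1_200_000.0},
    {"desk": "rates", "risk_factor": "USD_IR", "exposure": -300_000.0},
    {"desk": "fx",    "risk_factor": "EURUSD", "exposure": 450_000.0},
]

def aggregate_risk(records):
    """Sum exposures per (desk, risk factor) -- the group-by-sum at the
    heart of a BCBS 239-style risk aggregation job."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["desk"], r["risk_factor"])] += r["exposure"]
    return dict(totals)

positions = aggregate_risk(trades)
print(positions[("rates", "USD_IR")])  # 900000.0
```

Spark parallelizes exactly this shape of computation (a keyed reduction) across a cluster, which is what lets banks close the day(s)-long reporting gap.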
Speaker
Kunal Taneja
Learn about IBM's Hadoop offering called BigInsights. We will look at the new features in version 4 (including a discussion on the Open Data Platform), review a couple of customer examples, talk about the overall offering and differentiators, and then provide a brief demonstration on how to get started quickly by creating a new cloud instance, uploading data, and generating a visualization using the built-in spreadsheet tooling called BigSheets.
IBM InfoSphere BigInsights for Hadoop: 10 Reasons to Love ItIBM Analytics
Originally Published on Oct 15, 2014
IBM InfoSphere BigInsights is an industry-standard Hadoop offering that combines the best of open-source software with enterprise-grade features.
- #1 InfoSphere BigInsights is 100% standard, open-source Hadoop
- #2 Big SQL - Lightning fast, ANSI-compliant, native Hadoop formats
- #3 BigSheets - Spreadsheet-like data access for business users
- #4 Big Text - Simplify text analytics and natural language
- #5 Adaptive MapReduce - Fully compatible, four times faster
- #6 In-Hadoop Analytics - Deploy the analytics to the data
- #7 HDFS and POSIX - a more capable enterprise file system
- #8 Big R - Deep R Language integration in Hadoop
- #9 IBM Watson Explorer - Search, explore and visualize all your data
- #10 Accelerators - Get to market faster leveraging pre-written code
To learn more about IBM InfoSphere BigInsights, download the free InfoSphere BigInsights QuickStart Edition from http://ibm.com/hadoop.
Empowering you with Democratized Data Access, Data Science and Machine LearningDataWorks Summit
Data science, with its specialized tools and knowledge, has long been the forte of data scientists. However, it is not easy even for data scientists to get access to data that may live in different data stores across the organization. To unleash the power of data and gain valuable insights, machine learning needs to be made easily consumable by various stakeholders, and access to data made simpler. As an organization's data volumes continue to grow, delivering these insights in real time is a complex challenge to solve.
This session will provide an overview of an approach to building a scalable solution where machine and deep learning, and access to data, are made much more consumable and simpler by the fastest SQL-on-Hadoop engine on the planet, a rich data science toolset, and an infrastructure that can deliver the responsiveness needed for production environments.
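The "democratized access" idea is that stakeholders reach data through plain SQL rather than bespoke connectors. As a minimal sketch, an in-memory SQLite database stands in for a SQL-on-Hadoop engine here; the table, columns, and scoring rule are all invented for illustration:

```python
import sqlite3

# In-memory SQLite stands in for a SQL-on-Hadoop engine; table and column
# names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, balance REAL, churned INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, 120.0, 0), (2, 5.0, 1), (3, 80.0, 0)])

# A data scientist pulls features with plain SQL instead of a custom connector
# per data store...
rows = conn.execute("SELECT id, balance FROM customers WHERE balance < 100").fetchall()

# ...and feeds them into a (trivial, illustrative) scoring rule.
scores = {cid: 1.0 if bal < 50 else 0.5 for cid, bal in rows}
print(scores)  # {2: 1.0, 3: 0.5}
```

The design point is the interface, not the engine: any consumer that speaks SQL can participate, which is what makes the data "democratized".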
Speakers:
Pandit Prasad, Program Director, IBM
Ashutosh Mate, Global Senior Solutions Architect, IBM
Klaus Gottschalk from IBM presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Last year IBM, together with partners from the OpenPOWER Foundation, won two of the multi-year contracts of the US CORAL program. Within these contracts IBM is developing an accelerated HPC infrastructure and software development ecosystem that will be a major step towards exascale computing. We believe that the CORAL roadmap will enable a massive pull for transformation of HPC codes for accelerated systems. The talk will discuss the IBM HPC strategy, explain the OpenPOWER Foundation, and show the IBM OpenPOWER roadmap for CORAL and beyond."
Watch the video presentation: http://wp.me/p3RLHQ-f9x
Learn more: http://e.huawei.com/us/solutions/business-needs/data-center/high-performance-computing
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
MongoDB IoT City Tour STUTTGART: Hadoop and future data management. By, ClouderaMongoDB
Bernard Doering, Senior Sales Director DACH, Cloudera.
Hadoop and the Future of Data Management. As Hadoop takes the data management market by storm, organisations are evolving the role it plays in the modern data centre. Explore how this disruptive technology is quickly transforming an industry and how you can leverage it today, in combination with MongoDB, to drive meaningful change in your business.
In this slidedeck, Infochimps Director of Product, Tim Gasper, discusses how Infochimps tackles business problems for customers by deploying a comprehensive Big Data infrastructure in days; sometimes in just hours. Tim unlocks how Infochimps is now taking that same aggressive approach to deliver faster time to value by helping customers develop analytic applications with impeccable speed.
Data science holds tremendous potential for organizations to uncover new insights and drivers of revenue and profitability. Big Data has brought the promise of doing data science at scale to enterprises; however, this promise also comes with challenges for data scientists to continuously learn and collaborate. Data scientists have many tools at their disposal, such as notebooks like Jupyter and Apache Zeppelin, IDEs such as RStudio, languages like R, Python and Scala, and frameworks like Apache Spark. Given all the choices, how do you best collaborate to build your model and then work through the development lifecycle to deploy it from test into production?
In this session learn the attributes of a modern data science platform that empowers data scientists to build models using all the data in their data lake and foster continuous learning and collaboration. We will show a demo of DSX with HDP with the focus on integration, security and model deployment and management.
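The test-to-production step the session describes is essentially a model registry with staged promotion. The sketch below is a toy illustration of that lifecycle; the class, stage names, and model metadata are invented here, not DSX's actual API:

```python
# A minimal, hypothetical model registry illustrating staged deployment:
# models land in "test", then get promoted to "production" once validated.
class ModelRegistry:
    def __init__(self):
        self.stages = {"test": None, "production": None}

    def register(self, model, stage="test"):
        """New model versions always enter through the test stage."""
        self.stages[stage] = model

    def promote(self):
        """Promote whatever passed validation in test into production."""
        if self.stages["test"] is None:
            raise ValueError("nothing to promote")
        self.stages["production"] = self.stages["test"]

registry = ModelRegistry()
registry.register({"name": "churn_model", "version": 3})
registry.promote()
print(registry.stages["production"]["version"])  # 3
```

Real platforms add access control, audit trails, and rollback on top of this core promote-by-stage idea, which is what the "model deployment and management" demo covers.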
Speakers:
Sriram Srinivasan, Senior Technical Staff Member, Analytics Platform Architect, IBM
Vikram Murali, Program Director, Data Science and Machine Learning, IBM
Microsoft and Hortonworks Delivers the Modern Data Architecture for Big DataHortonworks
Joint webinar with Microsoft and Hortonworks on the power of combining the Hortonworks Data Platform with Microsoft’s ubiquitous Windows, Office, SQL Server, Parallel Data Warehouse, and Azure platform to build the Modern Data Architecture for Big Data.
Use Cases from Batch to Streaming, MapReduce to Spark, Mainframe to Cloud: To...Precisely
So you built your Hadoop cluster. How do you get data from hundreds of database tables, streaming Kafka sources, and data shared by 20-year-old COBOL programs, all in there and working together quickly, efficiently and securely? With many customers asking this same question, Hortonworks recently expanded its partnership with Syncsort to provide optimized ETL onboarding for Hadoop. During this talk, we'll discuss how a next-generation ETL tool, built on contributions to the open source community and natively integrated in Hadoop, can drive lasting value for your organization. 1) Seamlessly onboard data from all your enterprise sources – batch and streaming -- into Hadoop for fast and easy analytics. 2) Stay agile and simplify your environment with a "design once, deploy anywhere" approach that minimizes disruption and risk in the face of a rapidly evolving big data ecosystem. 3) Secure, govern and manage your data with full integration with Apache Ambari, Apache Ranger, and more. These benefits come to life with real customer case studies. Learn how a national insurance company and global hotel chain are using Hortonworks HDP and Syncsort DMX-h to get bigger insights from their enterprise data, securely, efficiently, and cost-effectively, without spending hundreds of man-hours.
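The "design once, deploy anywhere" claim boils down to writing the transform once against a generic record iterator, so that batch files and streaming messages plug into the same logic. A stdlib sketch of that pattern (the record shapes and the simulated stream format are invented for illustration):

```python
import csv
import io

# The transform is written once against an iterator of records, so batch
# (CSV) and streaming sources plug in interchangeably.
def transform(records):
    for rec in records:
        yield {"id": int(rec["id"]), "amount": round(float(rec["amount"]), 2)}

def batch_source(csv_text):
    """Batch path: e.g. a database table export landed as CSV."""
    return csv.DictReader(io.StringIO(csv_text))

def stream_source(messages):
    """Streaming path: e.g. Kafka messages, simulated as 'id:amount' strings."""
    for m in messages:
        key, val = m.split(":")
        yield {"id": key, "amount": val}

batch = list(transform(batch_source("id,amount\n1,10.50\n2,3.5\n")))
stream = list(transform(stream_source(["3:7.25"])))
print(batch[0]["amount"], stream[0]["id"])  # 10.5 3
```

Because only the source adapters differ, swapping a nightly batch for a Kafka feed does not disturb the transform itself, which is the agility the talk is selling.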
The Practice of Big Data - The Hadoop ecosystem explained with usage scenarioskcmallu
What's the origin of Big Data? What are the real life usage scenarios where Hadoop has been successfully adopted? How do you get started within your organizations?
Benefits of Transferring Real-Time Data to Hadoop at ScaleHortonworks
Today’s Big Data teams demand solutions designed for Big Data that are optimized, secure, and adaptable to changing workload requirements. Working together, Hortonworks, IBM, and Attunity have designed an integrated solution that transfers large volumes of data to a platform that can handle rapid ingest, processing and analysis of data of all types from all sources, at scale.
https://hortonworks.com/webinar/benefits-transferring-real-time-data-hadoop-scale-ibm-hortonworks-attunity/
Cortana Analytics Workshop: The "Big Data" of the Cortana Analytics Suite, Pa...MSAdvAnalytics
Lance Olson. Cortana Analytics is a fully managed big data and advanced analytics suite that helps you transform your data into intelligent action. Come to this two-part session to learn how you can do "big data" processing and storage in Cortana Analytics. In the first part, we will provide an overview of the processing and storage services. We will then talk about the patterns and use cases which make up most big data solutions. In the second part, we will go hands-on, showing you how to get started today with writing batch/interactive queries, real-time stream processing, or NoSQL transactions all over the same repository of data. Crunch petabytes of data by scaling out your computation power to any sized cluster. Store any amount of unstructured data in its native format with no limits to file or account size. All of this can be done with no hardware to acquire or maintain and minimal setup time, giving you the value of "big data" within minutes. Go to https://channel9.msdn.com/ to find the recording of this session.
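Running batch and streaming queries "over the same repository of data" means both paths must agree on the same answer: a batch pass over stored data and an incremental computation over arriving events. A toy sketch of that equivalence for a single aggregate (the event values are invented):

```python
# One aggregate computed two ways over the same data: a batch pass over the
# stored events, and the same total maintained incrementally as they arrive.
events = [4, 7, 1, 9]

# Batch: one pass over data at rest.
batch_total = sum(events)

# Streaming: incremental state updated per event, as a stream processor would.
class RunningSum:
    def __init__(self):
        self.total = 0

    def update(self, value):
        self.total += value
        return self.total

stream = RunningSum()
for e in events:
    stream.update(e)

print(batch_total, stream.total)  # 21 21
```

Keeping both paths pointed at one repository, rather than copying data per engine, is what lets the batch and streaming answers stay consistent.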
This is a run-through at a 200 level of the Microsoft Azure Big Data Analytics for the Cloud data platform based on the Cortana Intelligence Suite offerings.
Data Engineering: Elastic, Low-Cost Data Processing in the CloudCloudera, Inc.
3 Things to Learn About:
*On-premises versus the cloud: What’s the same and what’s different?
*Benefits of data processing in the cloud
*Best practices and architectural considerations
Accelerating Business Intelligence Solutions with Microsoft Azure passJason Strate
Business Intelligence (BI) solutions need to move at the speed of business. Unfortunately, roadblocks related to availability of resources and deployment often present an issue. What if you could accelerate the deployment of an entire BI infrastructure to just a couple hours and start loading data into it by the end of the day. In this session, we'll demonstrate how to leverage Microsoft tools and the Azure cloud environment to build out a BI solution and begin providing analytics to your team with tools such as Power BI. By end of the session, you'll gain an understanding of the capabilities of Azure and how you can start building an end to end BI proof-of-concept today.
Christian Kniep presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"With Docker v1.9 a new networking system was introduced, which allows multi-host networking to work out-of-the-box in any Docker environment. This talk provides an introduction to what Docker networking provides, followed by a demo that spins up a full SLURM cluster across multiple machines. The demo is based on QNIBTerminal, a Consul-backed set of Docker images to spin up a broad set of software stacks."
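The moving parts of such a demo reduce to a handful of docker invocations: create one overlay network, then attach a controller and N compute containers to it. A dry-run sketch that builds (but does not execute) those commands; the network name and image names are hypothetical stand-ins, not QNIBTerminal's actual images:

```python
import shlex

def overlay_cluster_commands(network="slurm_net", nodes=2):
    """Build the docker commands a multi-host SLURM demo roughly needs:
    one overlay network, one controller, and N compute containers.
    Image names (qnib/...) are illustrative placeholders."""
    cmds = [
        f"docker network create -d overlay {network}",
        f"docker run -d --net {network} --name slurmctld qnib/slurmctld",
    ]
    for i in range(1, nodes + 1):
        cmds.append(f"docker run -d --net {network} --name c{i} qnib/slurmd")
    return [shlex.split(c) for c in cmds]

for argv in overlay_cluster_commands():
    print(" ".join(argv))  # dry run: swap print for subprocess.run(argv) to execute
```

Because the overlay driver spans hosts, the same commands work whether the containers land on one machine or several, which is the out-of-the-box multi-host point of the talk.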
Watch the video presentation:
http://wp.me/p3RLHQ-f7G
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
We cover the IBM solution for HPC. In addition to the hardware and software stack, we show how a rational choice of compilation and runtime parameters helps to significantly improve the performance of technical computing applications.
Puppet and Nano Server provide an amazing mix when it comes to automated cloud deployments. This slide deck is from my session at PuppetCamp NYC and Boston.
Oracle Solaris Build and Run Applications Better on 11.3OTN Systems Hub
Build and Run Applications Better on Oracle Solaris 11.3
Tech Day, NYC
Liane Praza, Senior Principal Software Engineer
Ikroop Dhillon, Principal Product Manager
June, 2016
Bo Ewald from D-Wave Systems presented this deck at the HPC Advisory Council Switzerland Conference.
"This talk will provide an introduction to quantum computing and briefly review different approaches to implementing a quantum computer. D-Wave's approach to implementing a quantum annealing architecture and the software and programming environment will be discussed. Finally, some potential applications of quantum computing will also be addressed."
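Quantum annealing hardware minimizes a QUBO objective (a quadratic function over binary variables). Its classical analogue, simulated annealing, illustrates the kind of problem being solved; the tiny two-variable Q below is invented for illustration:

```python
import math
import random

# QUBO: minimize sum over (i, j) of Q[i, j] * x_i * x_j with binary x.
# This toy Q is lowest (-1) when exactly one of the two bits is set.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def anneal(steps=2000, seed=0):
    """Classical simulated annealing -- a software analogue of the
    annealing schedule the hardware performs."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1), rng.randint(0, 1)]
    cur = energy(x)
    best, best_e = x[:], cur
    temp = 2.0
    for _ in range(steps):
        i = rng.randrange(2)
        x[i] ^= 1                      # propose flipping one bit
        new = energy(x)
        if new <= cur or rng.random() < math.exp(-(new - cur) / temp):
            cur = new                  # accept (always downhill, sometimes uphill)
            if cur < best_e:
                best, best_e = x[:], cur
        else:
            x[i] ^= 1                  # reject: flip the bit back
        temp = max(temp * 0.999, 1e-3)  # cool the temperature
    return best, best_e

best, best_e = anneal()
print(best_e)  # -1.0
```

A quantum annealer explores this same energy landscape via quantum fluctuations instead of thermal ones; programming one largely consists of mapping your problem onto a Q matrix like this.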
Watch the video presentation: http://wp.me/p3RLHQ-f89
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Hyper-C is OpenStack on Windows Server 2016, based on Nano Server, Hyper-V, Storage Spaces Direct (S2D) and Open vSwitch for Windows. Bare metal deployment features Cloudbase Solutions Juju charms and MAAS.
Amazon Web Services and Red Hat have teamed together to offer Red Hat Enterprise Linux on Amazon EC2
A presentation by Johnathon Meichtry, Principal Solutions Architect, APAC Amazon Web Services at Red Hat Forum Bangalore
In this deck from the 2019 UK HPC Conference, Glyn Bowden from HPE presents: The Eco-System of AI and How to Use It.
"This presentation walks through HPE's current view on AI applications, where it is driving outcomes and innovation, and where the challenges lie. We look at the eco-system that sits around an AI project and look at ways this can impact the success of the endeavor."
Watch the video: https://wp.me/p3RLHQ-kVS
Learn more: https://www.hpe.com/us/en/solutions/artificial-intelligence.html
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
"Microsoft: How to get the most with Cloudera & Microsoft Azure" by Filipe Prezado - Enterprise Architect @Microsoft on the event Cloudera & Big Data Ecosystem
ML Model Deployment and Scoring on the Edge with Automatic ML & DFSri Ambati
Machine Learning Model Deployment and Scoring on the Edge with Automatic Machine Learning and Data Flow
YouTube Video URL: https://youtu.be/gB0bTH-L6DE
Deploying machine learning models to the edge presents significant ML/IoT challenges centered around the need for low-latency, accurate scoring in minimal-resource environments. H2O.ai's Driverless AI AutoML and Cloudera Data Flow work nicely together to solve this challenge. Driverless AI automates the building of accurate machine learning models, which are deployed as light-footprint, low-latency Java or C++ artifacts known as MOJOs (Model Object, Optimized). Cloudera Data Flow leverages Apache NiFi, an innovative data flow framework that can host MOJOs to make predictions on data moving at the edge.
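The hosting pattern is simple: load the model artifact once at flow startup, then score each record in-process with no network hop. The sketch below uses an invented stand-in model and feature names purely to show the shape of the loop; a real MOJO would be loaded through H2O's Java or C++ scoring runtime inside a NiFi processor:

```python
import time

class StandInModel:
    """Hypothetical stand-in for a compiled MOJO: small, dependency-free,
    loaded once and invoked per record."""
    def predict(self, features):
        # Illustrative thresholding on two made-up sensor readings.
        return 1.0 if features["temp"] > 90 or features["vibration"] > 0.8 else 0.0

model = StandInModel()  # loaded once at flow start, not per record

def score_stream(records):
    """Roughly what a flow processor does per record: score and annotate."""
    for rec in records:
        start = time.perf_counter()
        rec["score"] = model.predict(rec)
        rec["latency_ms"] = (time.perf_counter() - start) * 1000
        yield rec

out = list(score_stream([{"temp": 95, "vibration": 0.1},
                         {"temp": 70, "vibration": 0.2}]))
print([r["score"] for r in out])  # [1.0, 0.0]
```

Keeping the model resident in the flow process is what delivers the low latency: each prediction is a local function call rather than a request to a remote scoring service.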
Start Getting Your Feet Wet in Open Source Machine and Deep Learning Ian Gomez
At H2O.ai we see a world where all software will incorporate AI, and we’re focused on bringing AI to business through software. H2O.ai is the maker behind H2O, the leading open source machine and deep learning platform for smarter applications and data products. H2O operationalizes data science by developing and deploying algorithms and models for R, Python and the Sparkling Water API for Spark.
In this webinar, you will learn about the scalable H2O core platform and the distributed algorithms it supports. H2O integrates seamlessly with the R and Python environments. We will show you how to leverage the power of H2O algorithms in R, Python and the H2O Flow interface. Come with an open mind and some high-level knowledge of machine learning, and you will take away a stream of knowledge for your next ML/DL project.
Amy Wang is a math hacker at H2O, as well as the Sales Engineering Lead. She graduated from Hunter College in NYC with a Masters in Applied Mathematics and Statistics with a heavy concentration on numerical analysis and financial mathematics.
Her interest in applicable math eventually led her to big data and to finding the appropriate mediums for data analysis.
Desmond is a Senior Director of Marketing at H2O.ai. In his 15+ years of career in Enterprise Software, Desmond worked in Distributed Systems, Storage, Virtualization, MPP databases, Streaming Analytics Platform, and most recently Machine Learning. He obtained his Master’s degree in Computer Science from Stanford University and MBA degree from UC Berkeley, Haas School of Business.
If you're like most of the world, you're on an aggressive race to implement machine learning applications and on a path to get to deep learning. If you can give better service at a lower cost, you will be the winners in 2030. But infrastructure is a key challenge to getting there. What does the technology infrastructure look like over the next decade as you move from Petabytes to Exabytes? How are you budgeting for more colossal data growth over the next decade? How do your data scientists share data today and will it scale for 5-10 years? Do you have the appropriate security, governance, back-up and archiving processes in place? This session will address these issues and discuss strategies for customers as they ramp up their AI journey with a long term view.
Legacy ERP architectures offer an incredibly efficient means of operational resource management, but extracting business insights from them is a real challenge. ERP systems built up over the past 30 years, such as SAP, can be hard to interact with, especially at the source database level. The initial translation of business logic and hierarchies creates significant customizations, which then have to be merged into analytical applications. Overall, designing self-service reporting with business-level context can be quite cumbersome: a platform like SAP contains pre-packaged modules (MM, SD, PP, etc.), and integrating those systems into a series of pre-built analytics is non-trivial.
This session covers the orchestration and integration of a wide range of open source technologies, together with commercial CDC and reporting solutions, into a reference solution that mimics several real customer scenarios running on relational platforms today. Key considerations of extracting from the operational system of record will be addressed, especially the merging of multiple systems in different time zones, along with the integration concerns of a Hadoop analytics platform using Hive ACID and MERGE, and flattening techniques for dimensional models. Customers are often limited in the range of data their ERP can hold, with older data offloaded to secondary systems or cold archives. That limitation goes away, and the opportunities expand with real-time reporting across all of history and new use cases enabled by advanced machine learning methods.
Speakers
Jordan Martz, Director of Tech Solutions, Attunity
David Freriks, Technology Evangelist, Qlik
Building an IoT Massive Multiplayer Game in 60 Minutes - TechBash 2017Eran Stiller
The slide deck for my session at TechBash 2017 on "Building an IoT Massive Multiplayer Game in 60 Minutes".
Sample code can be found at https://github.com/estiller/iot-game
Let's talk about what Microsoft has to offer as a platform to help you build an Internet of Things solution. Mainly about Azure cloud but also Machine Learning, Cognitive Services, Windows, Hololens, Open Source
Azure Functions can be an excellent service for letting your business systems communicate with each other as simple task units, each of which performs a specific job.
Alfa Future People 2016 - How to "raise" a bad bot into a good assistantMicrosoft
Who are bots? Do they bring some kind of revolution, or are they an evolutionary development of technology?
Where does all the hype come from?
How smart bots are made
Where is the place for bots?
How to create your own bot?
Cloud Day 2.0. Machine Learning and Azure ML. Predicting the probability of survival ...Microsoft
https://events.techdays.ru/Cloud-Day/2016-04/
Machine learning is one of the most important areas of artificial intelligence. Microsoft offers an accessible tool, Azure ML, which lets you analyze data and make predictions without deep knowledge of the algorithms. We will learn to use the simplest machine learning algorithms to predict the probability that a person died on the Titanic.
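In the spirit of the "simplest algorithms" mentioned above, survival probability can be sketched as a plain group-frequency estimate. The toy records below are invented for illustration; the webinar itself uses Azure ML and the real Titanic dataset.

```python
# Toy (sex, survived) records standing in for the Titanic dataset.
records = [
    ("female", 1), ("female", 1), ("female", 0),
    ("male", 0), ("male", 0), ("male", 1), ("male", 0),
]

def survival_probability(sex: str) -> float:
    """Estimate P(survived | sex) as the group's survival frequency."""
    group = [survived for g, survived in records if g == sex]
    return sum(group) / len(group)

print(survival_probability("female"))  # 2/3 on this toy data
print(survival_probability("male"))    # 1/4 on this toy data
```

Even a frequency table like this is a legitimate baseline model; Azure ML's point is to make the step up to real algorithms just as approachable.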
Containers and the Microsoft development ecosystem (Windows Server Containers, Hype...Microsoft
There are more and more containers in the Microsoft ecosystem: in the cloud and Visual Studio there is, for example, Docker support, and in Windows Server there are Windows Server Containers. In this talk we will look at how it all evolved, where things stand at the moment, and how a developer can use it.
Cloud analytics for modern businessMicrosoft
This talk covers how machine learning can be used to solve modern business problems, with demonstrations of age detection, emotion recognition, predictive analytics, and equipment failure prediction.
ALM Summit 2016 - Cross-platform ALM with Microsoft toolsMicrosoft
Application lifecycle management (ALM) is a broad topic, and each company often builds its own process out of various tools. In this seminar we will show you how to use Microsoft's cloud tools for developing, testing, and building a lifecycle management process for cross-platform projects. The talk covers how Microsoft tools help with cross-platform development, using the example of an Android application with Application Insights and Mobile Engagement connected, deployed to GitHub, with automatic builds from GitHub in Visual Studio Team Services.
Visual Studio Connect() Russia Lifecycle management tools Micros...Microsoft
Microsoft lifecycle management tools for your project, in the cloud and on-premises.
Application Insights, Azure, Visual Studio Team Services, load testing, and processes.
Google Developer Group DevFest Minsk 2015 - A comprehensive look at the Internet ...Microsoft
Google Developer Group DevFest Minsk 2015 - A comprehensive look at the Internet of Things: Windows 10 IoT, Microsoft Azure, Android, AllJoyn and much more.
White Nights 2015 - The cloud for game dev - how games for millions of us...Microsoft
Modern game projects are hard to imagine without online services that provide data storage and processing, interaction between players, and other important functions. Building the infrastructure for such services requires significant investment, and one solution is to use existing cloud platforms. This talk covers the platform's main capabilities and development tools, with examples of successful use in large game projects and the real-world experience of the PinkApp Games team.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
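The link-prediction illustration mentioned above can be sketched with a toy translational-embedding score (in the style of TransE, where head + relation should land near tail). The 2-d embeddings below are hand-picked for illustration; real systems learn them from the knowledge graph.

```python
# Hand-picked toy embeddings; a real model learns these from triples.
entity = {"Amsterdam": (1.0, 0.0), "Netherlands": (1.0, 1.0),
          "Paris": (0.0, 0.0)}
relation = {"capital_of": (0.0, 1.0)}

def score(head: str, rel: str, tail: str) -> float:
    """Negative squared distance of (head + relation) from tail;
    higher means the triple is more plausible."""
    h, r, t = entity[head], relation[rel], entity[tail]
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t))

# The true triple should outscore the corrupted one.
print(score("Amsterdam", "capital_of", "Netherlands") >
      score("Paris", "capital_of", "Netherlands"))  # True
```

The point of the talk applies here: the embeddings alone carry no semantics; "predictable inference" is what lets us say the higher-scoring triple should follow from the graph.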
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
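The approve/reject branching in the second workflow can be sketched as a small routing function. The names `create_ticket` and `alert_via_slack` are hypothetical stand-ins for the Jira/Zendesk and Slack connector calls, not a real Integration Service API.

```python
def create_ticket(campaign: str) -> dict:
    """Stand-in for the Jira/Zendesk connector: open a design ticket."""
    return {"action": "ticket_created", "campaign": campaign}

def alert_via_slack(campaign: str) -> dict:
    """Stand-in for the Slack connector: notify colleagues of rejection."""
    return {"action": "slack_alert", "campaign": campaign}

def handle_decision(campaign: str, button: str) -> dict:
    """Route the human-in-the-loop button press to the follow-up step."""
    if button == "Approve":
        return create_ticket(campaign)
    if button == "Reject":
        return alert_via_slack(campaign)
    raise ValueError(f"Unknown button: {button}")

print(handle_decision("spring-launch", "Approve"))
```

The human decision is just another event in the workflow; everything downstream of the button press stays fully automated.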
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important on the front end. I have also often seen developers implement features on the front end just by following the standard rules of a framework, thinking that this is enough to launch the project successfully, and then the project fails. How do you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
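The JMeter-to-InfluxDB path described above ultimately comes down to writing samples in InfluxDB's line protocol (`measurement,tags fields timestamp`). The sketch below shows that encoding for one load-test sample; the measurement and tag names are illustrative, not the exact schema of JMeter's InfluxDB backend listener.

```python
def to_line_protocol(transaction: str, latency_ms: int, status: str,
                     ts_ns: int) -> str:
    """Build one InfluxDB line-protocol record for a load-test sample.
    Tags (transaction, status) are indexed; latency is a field; the
    timestamp is in nanoseconds, as InfluxDB expects by default."""
    return (f"jmeter,transaction={transaction},status={status} "
            f"latency={latency_ms}i {ts_ns}")

line = to_line_protocol("login", 137, "ok", 1700000000000000000)
print(line)
```

Grafana then only needs to query the `jmeter` measurement, grouping by the `transaction` tag, to render the per-request latency panels shown in the demo.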
3. Why cloud?
Games become services
The approach to delivering games is changing
As a result, support and maintenance are changing
Monetization is changing constantly and becoming “alive”: the business model of a project constantly adapts to the market situation
Studios and indie developers get access to new markets
6. Security & Management
(Slide diagram: the Azure service catalog) Web Apps, Mobile Apps, API Management, API Apps, Logic Apps, Notification Hubs, Content Delivery Network (CDN), Media Services, HDInsight, Machine Learning, Stream Analytics, Data Factory, Event Hubs, Mobile Engagement, Active Directory, Multi-Factor Authentication, Automation, Portal, Key Vault, BizTalk Services, Hybrid Connections, Service Bus, Storage Queues, Store / Marketplace, Hybrid Operations, Backup, StorSimple, Site Recovery, Import/Export, SQL Database, DocumentDB, Redis Cache, Search, Tables, SQL Data Warehouse, Azure AD Connect Health, AD Privileged Identity Management, Operational Insights, Cloud Services, Batch, RemoteApp, Service Fabric, Visual Studio, Application Insights, Azure SDK, Team Project, VM Image Gallery & VM Depot
8. Case 1: storage
Performance counter data can be collected every second
Mobile app: every user action = a new event
Multiplayer: the message rate can be >100,000 msgs/sec
We need big, scalable, fast and flexible storage
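At rates like >100,000 msgs/sec, a common trick for keeping storage scalable is hashing events across partitions so no single partition server becomes hot. The sketch below is a generic illustration; the partition count and key scheme are assumptions, not a specific Azure Table Storage design.

```python
import hashlib

NUM_PARTITIONS = 16  # assumed partition count for illustration

def partition_key(player_id: str) -> str:
    """Derive a stable partition from the player id so events from many
    players spread evenly across partitions."""
    digest = hashlib.sha256(player_id.encode()).hexdigest()
    return f"p{int(digest, 16) % NUM_PARTITIONS:02d}"

print(partition_key("player-42"))
```

Because the key is a pure hash of the player id, writes for the same player always land in the same partition (preserving per-player ordering) while the overall load stays balanced.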
17. Azure Mobile Services
Storm Bringer Studios: mobile games
(Slide diagram: Azure Mobile Services architecture)
Client SDKs: Windows Store, iOS, Android, Xamarin (Windows Phone 8, iOS, Android), HTML5/JS; Open Source SDK
Backend: source control; scripting (table, scheduled & custom API); REST API
Identity providers: Facebook, Twitter, Microsoft, Google, Active Directory
Data: SQL, Table Storage, Blob Storage, MongoDB
Push notifications: WNS & MPNS, APNS, GCM
19. Blobs: binary content
SQL Azure: SQL Server as a service
Virtual Machines: Windows/Linux virtual machines for heavy workloads
Web Apps: admin panel / users portal
20. Case 4: predictive analytics and big data
Halo 4: Hadoop (HDInsight) for getting insights from raw data
Insights = which modes are preferred by gamers; session length; number of kills, etc.
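The kinds of insights named above (preferred modes, session length, kill counts) reduce to simple aggregates over raw session events. The toy event schema below is invented for illustration; in the Halo 4 case these aggregations ran as Hadoop jobs on HDInsight over far larger data.

```python
from collections import Counter

sessions = [  # toy raw telemetry records
    {"mode": "ctf", "length_min": 12, "kills": 7},
    {"mode": "slayer", "length_min": 9, "kills": 11},
    {"mode": "ctf", "length_min": 15, "kills": 4},
]

# Which modes are preferred by gamers?
mode_popularity = Counter(s["mode"] for s in sessions)
# Average session length, total kills.
avg_session_length = sum(s["length_min"] for s in sessions) / len(sessions)
total_kills = sum(s["kills"] for s in sessions)

print(mode_popularity.most_common(1))
print(avg_session_length, total_kills)
```

The Hadoop version is the same logic expressed as map (emit mode/length/kills per event) and reduce (count and sum), which is why it scales to raw telemetry at game scale.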
21. Case 4: predictive analytics and big data
HDInsight: Hadoop clusters on demand
ML: machine learning as a service
Stream Analytics: near-real-time data streaming and/or processing
Data Factory: data transformation
Batch: HPC as a service
22. Case 5: DevOps
iPhone app does not work
Slow requests
DB is not responding
Latency
Campaign is not popular
27. Some experience
Cloud-based games are a good approach
But to do things right, sometimes you need to:
optimize the request lifecycle
optimize the message format
choose the right instruments
test proactively
keep the system loosely coupled
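"Optimize the message format" can be made concrete by comparing the same multiplayer update serialized as JSON text versus a fixed binary layout. The field layout below (player id and x/y as 32-bit values) is an assumption for illustration.

```python
import json
import struct

msg = {"player": 42, "x": 10.5, "y": -3.25}

# Text encoding: human-readable but verbose.
as_json = json.dumps(msg).encode()
# Binary encoding: unsigned 32-bit id plus two 32-bit floats = 12 bytes.
as_binary = struct.pack("<Iff", msg["player"], msg["x"], msg["y"])

print(len(as_json), len(as_binary))  # the binary form is several times smaller
```

At >100,000 msgs/sec, a few times fewer bytes per message directly cuts bandwidth and serialization cost, which is exactly the kind of win the slide is pointing at.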