In this slidecast, Rob Clyde from Adaptive Computing describes Big Workflow -- the convergence of Cloud, Big Data, and HPC in enterprise computing.
"The explosion of big data, coupled with the collisions of HPC and cloud, is driving the evolution of big data analytics," said Rob Clyde, CEO of Adaptive Computing. "A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately and cost effectively, but also provides a distinct competitive advantage. We are confident that Big Workflow will enable enterprises across all industries to leverage big data that inspires game-changing, data-driven decisions."
Learn more: http://bit.ly/bigworkflow
Watch the video presentation: http://wp.me/p3RLHQ-bp2
The Big Data Gusher: Big Data Analytics, the Internet of Things and the Oil B... (Platfora)
The proliferation of machine sensors, interaction, and transaction data is driving a significant transformation within the oil and gas industry. Some industry analysts estimate that correctly implementing big data analytics can provide a 4-8% improvement in operational efficiency for oil companies. Other research shows that nearly 90% of oil industry executives rate big data analytics as a top priority, while fewer than a third have implemented solutions.
Customer Spotlight: How WellCare Accelerated Big Data Delivery to Improve Ana... (VMware Tanzu)
WellCare uses Attunity Replicate to offload data quickly and easily from its SQL Server and Oracle systems into Pivotal Greenplum Database. With the help of Attunity and Pivotal, WellCare has successfully enabled real-time reporting and analytics to gain a competitive edge.
Additional information:
https://pivotal.io/big-data/webinar/healthcare-success-story-wellcare
Webinar recording:
https://youtu.be/ZYiwRUxlY2s
Data as the New Oil: Producing Value in the Oil and Gas Industry (VMware Tanzu)
Oil and gas exploration and production activities generate large amounts of data from sensors, logistics, business operations and more. Given the data volume, variety and velocity, gaining actionable and relevant insights from the data is challenging. Learn about these challenges and how to address them by leveraging big data technologies in this webinar.
During the webinar we will dive deep into approaches for predicting drilling equipment function and failure, a key step towards zero unplanned downtime. In the process of drilling wells, non-productive time due to drilling equipment failure can be expensive. We will highlight how the Pivotal Data Labs team uses big data technologies to build models for predicting drilling equipment function and failure. Models such as these can be used to build essential early warning systems to reduce costs and minimize unplanned downtime.
Panelist:
Rashmi Raghu, Senior Data Scientist, Pivotal
Hosted by:
Tim Matteson, Co-Founder -- Data Science Central
Video replay is available to watch here: http://youtu.be/dhT-tjHCr9E
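To make the modeling approach above concrete, here is a minimal sketch of a drilling-equipment failure classifier in Python. All feature names, units, and data are hypothetical assumptions; this is not the actual Pivotal Data Labs pipeline described in the webinar.

```python
# Illustrative sketch only: a minimal drilling-equipment failure classifier.
# All feature names, units, and data are hypothetical; this is not the
# Pivotal Data Labs pipeline described in the webinar.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical per-interval sensor aggregates for a drilling rig.
X = np.column_stack([
    rng.normal(120, 15, n),   # torque (assumed kNm)
    rng.normal(60, 8, n),     # rotary speed (assumed rpm)
    rng.normal(35, 5, n),     # mud pressure (assumed MPa)
    rng.normal(70, 10, n),    # vibration RMS (arbitrary units)
])
# Synthetic label: failures made to correlate with high torque and vibration.
risk = 0.02 * (X[:, 0] - 120) + 0.03 * (X[:, 3] - 70)
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

# An early-warning system would act on predicted failure probabilities.
print(classification_report(y_te, clf.predict(X_te)))
```

In practice the predicted probabilities, not the hard labels, would feed the early-warning system the webinar describes, with alert thresholds tuned to the cost of unplanned downtime.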
Booz Allen's Cloud cost model offers a total-value perspective on IT cost that evaluates the explicit and implicit value of a migration to cloud-based services.
Jump-Start the Enterprise Journey to the Cloud (LindaWatson19)
In the pre-1880 era, onsite power generation was the norm for factories. When central power stations were built, these factories outsourced their power generation. Cloud infrastructure presents a similar opportunity for organizations wishing to outsource their IT infrastructure.
Data technology experts from Pivotal give the latest perspective on how big data analytics and applications are transforming organizations across industries.
This event provides an opportunity to learn about new developments in the rapidly-changing world of big data and understand best practices in creating Internet of Things (IoT) applications.
Learn more about the Pivotal Big Data Roadshow: http://pivotal.io/big-data/data-roadshow
Data Science Case Studies: The Internet of Things: Implications for the Enter... (VMware Tanzu)
The Internet of Things: Implications for the Enterprise
The Internet of Things (IoT) is already a reality, but extracting value from it is still in its infancy. This session analyzes the implications of IoT for the enterprise, with examples from the work we have done.
Rashmi Raghu is a Principal Data Scientist at Pivotal with a focus on the Internet of Things and applications in the energy sector. Her work has spanned diverse industry problems, from uncovering patterns and anomalies in massive datasets to predictive maintenance. She holds a Ph.D. in Mechanical Engineering with a minor in Management Science & Engineering from Stanford University. Her doctoral work focused on the development of novel computational models of the cardiovascular system to aid disease research. Prior to that she obtained Master’s and Bachelor’s degrees in Engineering Science from the University of Auckland, New Zealand.
The Vortex of Change - Digital Transformation (Presented by Intel) (Cloudera, Inc.)
The vortex of change continues all around us – inside the company, with our customers and partners. A new norm is upon us. Business models are being turned upside down – the hunters are now the hunted; with global equalization, size is no longer a guarantee of success. The innovative survive and thrive…the nervous and slow go under...what does all this change mean for you? Find out how Intel’s strengths help our customers in this world of change.
GITEX Big Data Conference 2014 – SAP Presentation (Pedro Pereira)
Big, Fast and Predictive Data: How to Extract Real Business Value – in real time.
90% of the world’s data was created in the last two years. If you can harness it, it will revolutionize the way you do business. Big Data solutions can help extract real business value – in real time.
Hadoop is regarded as a key capability for implementing Big Data initiatives in the enterprise, but organizations have yet to realize its full business benefits. In this webinar, Pivotal and guest Forrester Research, Inc. identify the use cases driving Hadoop adoption and explore what is needed to transform initial investments into results.
Learn about:
Challenges Hadoop introduces, and how the right tools and platforms can help address them
Shifts in the industry with regards to SQL and NoSQL systems and their implications to Big Data analytics
Applying in-memory technologies for data management systems, data analytics, transactional processing and operational databases
Watch the on-demand webinar here:
http://www.pivotal.io/big-data/pivotal-forrester-operationalizing-data-analytics-webinar
Learn how to maximize business value from all of your data here: http://www.pivotal.io/big-data/pivotal-hd
Cloudera + Syncsort: Fuel Business Insights, Analytics, and Next Generation T... (Precisely)
Effective AI and ML projects require a perfect blend of scalable, clean data funneled from a variety of sources across the business. The only problem? Uncleaned data often lives in hard-to-access legacy systems, and it costs time and money to build the right foundation to deliver that data to answer ever-changing questions from business users. Together, Cloudera and Syncsort enable you to build a scalable foundation of data connections to reinvent the data lifecycle of all your projects in the most efficient way possible.
View this webinar on-demand to learn how innovative solutions from Cloudera and Syncsort enable AI and ML success. You will learn:
• Best practices for transforming complex data into clear, actionable insights for AI and ML projects
• How to visually assess the quality of the sources in your data lake and their completeness, consistency, and accuracy
• The value of an Enterprise Data Cloud and the newly unveiled Cloudera Data Platform
• How Syncsort Connect integrates natively with the Cloudera Data Platform
Shortening the Sales Cycle with a Modern Data Warehouse 1.30.19 (Cloudera, Inc.)
Join Cloudera as we outline how we use Cloudera technology to strengthen sales engagement, minimize marketing waste, and empower line of business leaders to drive successful outcomes.
Supercharging Smart Meter BIG DATA Analytics with Microsoft Azure Cloud - SRP ... (Mike Rossi)
Explosive growth of Smart Meter (SM) deployments has presented key infrastructure challenges across the utility industry. The huge volumes of smart meter data have led the industry to a tipping point that requires investments in modernizing existing data warehouses. Typical modernization efforts lead to huge capital expenditures for DW appliances and storage. Sizing this new infrastructure is tricky and can lead to underutilized or poorly performing hardware.
The Cloud is the catalyst to solving these Big Data challenges.
Utilizing a Cloud architecture delivers huge benefits by:
Maximizing use of existing architecture
Minimizing new capital expenditures (CapEx)
Lowering overall storage costs
Enabling scale on demand
The Open Compute Project is shining a new spotlight on hardware for carriers.
Rather than replace an entire server every few years, components can be more selectively refreshed.
Exploring the Wider World of Big Data - Vasalis Kapsalis (NetAppUK)
Every second of every day, electronic systems create ever-increasing quantities of data. Systems in markets such as finance, media, healthcare, government and scientific research feature strongly in the Big Data processing conversation, and extracting business value from Big Data is forecast to bring customer and competitive advantages. In this session, hear Vas Kapsalis, NetApp Big Data Business Development Manager, discuss his views and experience on the wider world of Big Data.
Manufacturing Data Center Fast Facts: Big Data, Storage, Security & Recovery (Insight)
Have you heard the phrase, “data is the lifeblood of business”? It’s especially true in the manufacturing space where data provides insight into everything from supply order placements to quality assurance. Explore how your business can benefit from modern data center solutions and best practices.
Learn more: http://ms.spr.ly/6007TZ2ZW
Across all industries, businesses are adapting and saving time with how they are using and managing data today.
Learn how your business can Integrate NetApp storage platforms with healthcare data solutions: http://www.netapp.com/us/solutions/industry/healthcare/
Digital Transformation: How to Run Best-in-Class IT Operations in a World of ... (Precisely)
IT leaders looking to move beyond reactive and ad hoc troubleshooting need to find the intersection of maintaining existing systems while still driving innovation - solving for the present while preparing for the future. Identifying ways to bring existing infrastructure and legacy systems into the modern world can create the business advantage you need.
View the conversation with Splunk’s Chief Technology Advocate Andi Mann and Syncsort’s Chief Product Officer David Hodgson, where we discuss the digital transformation taking place in IT and how machine learning and AI are helping IT leaders create a more business-centric view of their world, including:
• The importance of data sharing and collaboration between mainframe and distributed IT
• The value of integrating legacy data sources and existing infrastructure into the modern world
• Achieving an end-to-end view of IT operations and application performance with machine learning
Becoming Data-Driven Through Cultural Change (Cloudera, Inc.)
We've arrived at a crossroads. Big data is an initiative every business knows it should take on in order to evolve, but no one knows how to tackle the project.
This is the first in a series of webinars that describe how to break down the challenge into three major pieces: People, Process, and Technology. We'll discuss the industry trends around big data projects, the pitfalls with adopting a modern data strategy, and how to avoid them by building a culture of data-driven teams.
Optimizing Regulatory Compliance with Big Data (Cloudera, Inc.)
3 Things to Learn:
-There are many challenges in the way financial firms deal with regulatory compliance today
-Some of these challenges are related to data management and can be solved by big data technologies
-Cloudera and its partners Trifacta and Qlik are offering a solution that can accelerate the time to obtain compliance reports by using automated workflows and fast analytics that work on top of Cloudera’s Enterprise Data Hub.
Telcos are facing mounting pressure to dramatically increase speed to market and cut costs. But how?
What if you could go to market in half the time using prebuilt libraries of telco offerings—and leveraging the cloud to lower costs?
In this presentation, find out how Capgemini’s end-to-end solution for telcos uses a hybrid cloud and the Oracle Communications Rapid Offer Design and Order Delivery (Oracle Communications RODOD) stack to provide a competitive edge in today’s tough, dynamic environment.
Learn how to accelerate digital transformation, increase agility, and simplify business to better respond to customer expectations and address growth opportunities. See a concrete demonstration of the solution and its best-in-class CX capabilities, processes, and deployment and run services.
First presented at Oracle OpenWorld 2015.
Telco2 business models and opportunities briefing, May 2013 (Simon Torrance)
From the Telco 2.0 Initiative (www.telco2research.com): Next generation telco business models and strategic growth opportunities. Latest core slide set. Contact@telco2.net
Telco 2.0 Transformation Index - Methodology and Approach (Simon Torrance)
An outline of the business model framework and analytical methodology of the Telco 2.0 Transformation Index, giving comparative scores and in-depth analysis on leading telcos / Communication Service Providers' transformation to the 'Telco 2.0' business model. Initial companies covered include Telefonica, Vodafone, AT&T, Verizon, Etisalat, Ooredoo (formerly Qtel), Singtel, and Axiata. Areas covered: Markets and Position; Service Offerings, Value Network, Technology, and Finances.
To view a recording of this webinar, please use the URL below:
http://wso2.com/library/webinars/2015/03/apis-the-foundation-of-the-future-telco
This session will look at how the future telco can accelerate its digital strategies by building an effective developer API ecosystem, specifically discussing:
Participatory business models for telco
APIs as the currency for the telco’s digital economy
Reference architectures for a telco API ecosystem
API and Identity federation for telcos
My first Qtel Group keynote address at the Telecoms World Middle East 2012 conference in Dubai. It's been great fun to do this parting-with-Europe analysis and the outlook for the next 8 years. As I will not be totally retired by 2020, please remember that you will be able to hold me accountable for some of my predictions ;-) The good ones, the bad ones and, for sure, the ugly ones ... Enjoy and shoot back!
Telecom 2020: Preparing for a very different future (Rob Van Den Dam)
Mega market and technological trends are creating a very new world for consumers, businesses and markets as a whole. A potential role for communications service providers in this new world charts a path to growth
This Altimeter Group webinar explores the findings of our latest research report on digital transformation. Attendees will learn what digital transformation is, how companies are embracing change, the challenges and opportunities that emerge throughout the process, and how to refocus and reorganize teams to modernize, optimize, and integrate digital touchpoints.
Watch the webinar: https://www.slideshare.net/Altimeter/webinar-digital-transformation-with-brian-solis
Download the related report: altimetergroup.com/digitaltransformation/
Developing a Roadmap for Digital Transformation (John Sinke)
Digitally mature companies outperform their peers in innovation, agility and responsiveness to customers. “Digirati” also enjoy advantages in efficiency and effectiveness in product delivery, marketing, e-commerce, sales and customer service. More importantly, companies that achieve Digital Excellence are 26% more profitable (source: Capgemini Consulting and MIT Centre for Digital Business).
However, building a Roadmap for Digital Transformation requires not only successful collaboration between the CMO and the CIO, it also demands a strong customer-focused orientation and digital culture. During this presentation, John Sinke will share insights from leading marketers and his personal experience of turning Resorts World Sentosa into a “digital business”.
A lecture focusing on how to create a solid and differentiated digital roadmap.
• How to plan for changing consumer expectations
• How to approach digital strategy planning
• What a social media maturity model looks like and how to apply it to your own roadmap efforts.
Is your big data journey stalling? Take the Leap with Capgemini and Cloudera (Cloudera, Inc.)
Transitioning to a Big Data architecture is a big step, and the complexity of moving existing analytical services onto modern platforms like Cloudera can seem overwhelming.
Where the Warehouse Ends: A New Age of Information Access (Inside Analysis)
The Briefing Room with Barry Devlin and Composite Software
Live Webcast May 21, 2013
All good things must come to an end, and even though the data warehouse will remain a prominent force in the information age, the handwriting is all over the enterprise: the center of gravity is moving. Whether due to Big Data or real-time demands, Cloud computing or globalization, today's leading organizations have analytical needs that the warehouse simply cannot accommodate. That's why data virtualization continues to attract attention.
Register for this episode of The Briefing Room to hear veteran Analyst Barry Devlin explain why the traditional model for data warehousing is being outmoded by a range of more flexible methods for accessing and analyzing information assets. He'll be briefed by David Besemer of Composite Software who will discuss how his company's data virtualization platform can be used to provide access to all manner of information sources, including data warehouses, Big Data silos, as well as partner and public data sources on demand.
Visit: http://www.insideanalysis.com
Cloud ROI and Implementation - A TechBlocks Solutions Guide (TechBlocks)
Realize your business potential in the cloud. This guide includes:
- How to improve business productivity in the cloud
- How to measure cloud ROI
- A step-by-step process for building your roadmap for success
1. Putting together a qualified team
2. Creating a written business case and strategy
3. Picking the right deployment and service model
4. Creating governance rules
5. Overcoming compliance and security issues
6. Integrating, validating and managing the cloud
7. Moving forward into the cloud - long term management
Presented at The Open Group Sydney Conference, April 17, 2013: Enterprise Transformation.
Enterprises are now seriously considering cloud as a viable architectural style. However, without a disciplined process and techniques for evolution, governance, change management and measurement, it would be disastrous to adopt cloud in a random fashion. The TOGAF Architecture Development Method (ADM) can provide the approach for adopting cloud in an orderly way.
Key takeaways:
-- Importance of EA for successful implementation of Cloud Computing
-- Identifying the road map for adoption of cloud computing
Capgemini Leap Data Transformation Framework with Cloudera (Capgemini)
https://www.capgemini.com/insights-data/data/leap-data-transformation-framework
The complexity of moving existing analytical services onto modern platforms like Cloudera can seem overwhelming. Capgemini’s Leap Data Transformation Framework helps clients by industrializing the entire process of bringing existing BI assets and capabilities to next-generation big data management platforms.
During this webinar, you will learn:
• The key drivers for industrializing your transformation to big data at all stages of the lifecycle – estimation, design, implementation, and testing
• How one of our largest clients shortened the transition to a modern data architecture by over 30%
• How an end-to-end, fact-based transformation framework can deliver IT rationalization on top of big data architectures
Businesses are having to store more and more digital content and files every day
The total amount of information stored today by all the businesses around the world is 2.2 ZB (that's more than a billion terabytes)
Information is expected to grow 67% in 2013 for enterprises and 178% for SMBs
2020 Cloudera Data Impact Awards Finalists (Cloudera, Inc.)
Cloudera is proud to present the 2020 Data Impact Awards Finalists. This annual program recognizes organizations running the Cloudera platform for the applications they've built and the impact their data projects have on their organizations, their industries, and the world. Nominations were evaluated by a panel of independent thought-leaders and expert industry analysts, who then selected the finalists and winners. Winners exemplify the most cutting-edge data projects and represent innovation and leadership in their respective industries.
MongoDB World 2019: Data Digital Decoupling (MongoDB)
Why data decoupling? Learn how enterprises are decoupling big, monolithic legacy data platforms into smaller components, gaining the freedom to run anywhere and the multi-cloud agility their business needs.
Software Engineering in the Age of SaaS and Cloud Computing - SERA 2013 - MFF... (Jaroslav Gergic)
Abstract: Cloud computing is accelerating the migration from on-premise, behind-the-firewall software deployment to the Software as a Service (SaaS) subscription-based delivery model. While the accelerated innovation cycle and reduced time to value enabled by this paradigm shift are well recognized for their direct business impact, it is often overlooked how the new delivery model affects software engineering itself, in terms of tools and processes as well as the theoretical arsenal needed to develop cloud applications.
Updated GoodData platform statistics as of Q2 2013!
Activating Big Data: The Key To Success with Machine Learning Advanced Analyt... (Vasu S)
A Qubole whitepaper on how to make all of your data available to users for a multitude of use cases, ranging from analytics to machine learning and artificial intelligence.
https://www.qubole.com/resources/white-papers/activating-big-data-the-key-to-success-with-machine-learning-advanced-analytics
In this video from the IDC Breakfast Briefing at ISC'13, Steve Conway presents: IDC's Perspective on Big Data Outside of HPC.
Watch the presentation video: http://inside-bigdata.com/?p=3257
Check out more talks from the show at our ISC'13 Video Gallery: http://insidehpc.com/isc13-video-gallery/
Similar to Big Workflow Launch from Adaptive Computing
In this deck from the Stanford HPC Conference, Shahin Khan from OrionX describes major market shifts in IT.
"We will discuss the digital infrastructure of the future enterprise and the state of these trends."
"We work with clients on the impact of Digital Transformation (DX) on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, and when they demand action, and how to formulate and execute an effective plan. If that describes you, we can help."
Watch the video: https://wp.me/p3RLHQ-lPP
Learn more: http://orionx.net
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Preparing to program Aurora at Exascale - Early experiences and future direct... (inside-BigData.com)
In this deck from IWOCL / SYCLcon 2020, Hal Finkel from Argonne National Laboratory presents: Preparing to program Aurora at Exascale - Early experiences and future directions.
"Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. Aurora promises to take scientific computing to a whole new level, and scientists and engineers from many different fields will take advantage of Aurora’s unprecedented computational capabilities to push the boundaries of human knowledge. In addition, Aurora’s support for advanced machine-learning and big-data computations will enable scientific workflows incorporating these techniques along with traditional HPC algorithms. Programming the state-of-the-art hardware in Aurora will be accomplished using state-of-the-art programming models. Some of these models, such as OpenMP, are long-established in the HPC ecosystem. Other models, such as Intel’s oneAPI, based on SYCL, are relatively-new models constructed with the benefit of significant experience. Many applications will not use these models directly, but rather, will use C++ abstraction libraries such as Kokkos or RAJA. Python will also be a common entry point to high-performance capabilities. As we look toward the future, features in the C++ standard itself will become increasingly relevant for accessing the extreme parallelism of exascale platforms.
This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date. oneAPI/SYCL and OpenMP are both critical models in these efforts, and while the ecosystem for Aurora has yet to mature, we’ve already had a great deal of success. Importantly, we are not passive recipients of programming models developed by others. Our team works not only with vendor-provided compilers and tools, but also develops improved open-source LLVM-based technologies that feed both open-source and vendor-provided capabilities. In addition, we actively participate in the standardization of OpenMP, SYCL, and C++. To conclude, I’ll share our thoughts on how these models can best develop in the future to support exascale-class systems."
Watch the video: https://wp.me/p3RLHQ-lPT
Learn more: https://www.iwocl.org/iwocl-2020/conference-program/
and
https://www.anl.gov/topic/aurora
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Greg Wahl from Advantech presents: Transforming Private 5G Networks.
Advantech Networks & Communications Group is driving innovation in next-generation network solutions with their High Performance Servers. We provide business critical hardware to the world's leading telecom and networking equipment manufacturers with both standard and customized products. Our High Performance Servers are highly configurable platforms designed to balance the best in x86 server-class processing performance with maximum I/O and offload density. The systems are cost effective, highly available and optimized to meet next generation networking and media processing needs.
“Advantech’s Networks and Communication Group has been both an innovator and trusted enabling partner in the telecommunications and network security markets for over a decade, designing and manufacturing products for OEMs that accelerate their network platform evolution and time to market,” said Advantech Vice President of Networks & Communications Group Ween Niu. “In the new IP Infrastructure era, we will be expanding our expertise in Software Defined Networking (SDN) and Network Function Virtualization (NFV), two of the essential conduits to 5G infrastructure agility, making networks easier to install, secure, automate and manage in a cloud-based infrastructure.”
In addition to innovation in air interface technologies and architecture extensions, 5G will also need a new generation of network computing platforms to run the emerging software defined infrastructure, one that provides greater topology flexibility, essential to deliver on the promises of high availability, high coverage, low latency and high bandwidth connections. This will open up new parallel industry opportunities through dedicated 5G network slices reserved for specific industries dedicated to video traffic, augmented reality, IoT, connected cars etc. 5G unlocks many new doors and one of the keys to its enablement lies in the elasticity and flexibility of the underlying infrastructure.
Advantech’s corporate vision is to enable an intelligent planet. The company is a global leader in the fields of IoT intelligent systems and embedded platforms. To embrace the trends of IoT, big data, and artificial intelligence, Advantech promotes IoT hardware and software solutions with the Edge Intelligence WISE-PaaS core to assist business partners and clients in connecting their industrial chains. Advantech is also working with business partners to co-create business ecosystems that accelerate the goal of industrial intelligence.
Watch the video: https://wp.me/p3RLHQ-lPQ
* Company website: https://www.advantech.com/
* Solution page: https://www2.advantech.com/nc/newsletter/NCG/SKY/benefits.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Incorporation of Machine Learning into Scientific Simulations at Lawrence... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Katie Lewis from Lawrence Livermore National Laboratory presents: The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory.
"Scientific simulations have driven computing at Lawrence Livermore National Laboratory (LLNL) for decades. During that time, we have seen significant changes in hardware, tools, and algorithms. Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness."
Watch the video: https://youtu.be/NVwmvCWpZ6Y
Learn more: https://computing.llnl.gov/research-area/machine-learning
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod... (inside-BigData.com)
In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems?
"This talk will start with an overview of challenges being faced by the AI community to achieve high-performance, scalable and distributed DNN training on Modern HPC systems with both scale-up and scale-out strategies. After that, the talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of- core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented."
Watch the video: https://youtu.be/LeUNoKZVuwQ
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Ryan Quick from Providentia Worldwide describes how DNNs can be used to improve EDA simulation runs.
"Systems Intelligence relies on a variety of methods for providing insight into the core mechanisms for driving automated behavioral changes in self-healing command and control platforms. This talk reports on initial efforts with leveraging Semiconductor Electronic Design Automation (EDA) telemetry data from cross-domain sources including power, network, storage, nodes, and applications in neural networks as a driving method for insight into SI automation systems."
Watch the video: https://youtu.be/2WbR8tq-XbM
Learn more: http://www.providentiaworldwide.com/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nicole Xu from Stanford University describes how she transformed a common jellyfish into a bionic creature that is part animal and part machine.
"Animal locomotion and bioinspiration have the potential to expand the performance capabilities of robots, but current implementations are limited. Mechanical soft robots leverage engineered materials and are highly controllable, but these biomimetic robots consume more power than corresponding animal counterparts. Biological soft robots from a bottom-up approach offer advantages such as speed and controllability but are limited to survival in cell media. Instead, biohybrid robots that comprise live animals and self- contained microelectronic systems leverage the animals’ own metabolism to reduce power constraints and body as an natural scaffold with damage tolerance. We demonstrate that by integrating onboard microelectronics into live jellyfish, we can enhance propulsion up to threefold, using only 10 mW of external power input to the microelectronics and at only a twofold increase in cost of transport to the animal. This robotic system uses 10 to 1000 times less external power per mass than existing swimming robots in literature and can be used in future applications for ocean monitoring to track environmental changes."
Watch the video: https://youtu.be/HrmJFyvInj8
Learn more: https://sanfrancisco.cbslocal.com/2020/02/05/stanford-research-project-common-jellyfish-bionic-sea-creatures/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Peter Dueben from the European Centre for Medium-Range Weather Forecasts (ECMWF) presents: Machine Learning for Weather Forecasts.
"I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will than talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future."
Peter is contributing to the development and optimization of weather and climate models for modern supercomputers. He is focusing on a better understanding of model error and model uncertainty, on the use of reduced numerical precision that is optimised for a given level of model error, on global cloud-resolving simulations with ECMWF's forecast model, and the use of machine learning, and in particular deep learning, to improve the workflow and predictions. Peter has graduated in Physics and wrote his PhD thesis at the Max Planck Institute for Meteorology in Germany. He worked as a Postdoc with Tim Palmer at the University of Oxford and has taken up a position as University Research Fellow of the Royal Society at the European Centre for Medium-Range Weather Forecasts (ECMWF) in 2017.
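As a toy illustration of the emulation idea the talk describes (and not ECMWF's actual method), one can fit a small neural network to input/output pairs of an expensive model component and substitute the cheap emulator at forecast time. The "physics" function below is entirely hypothetical:

```python
# Rough illustration of emulating a model component: train a small MLP on
# input/output pairs of a stand-in "parametrization" function. The function
# and network are toy choices, not anything from ECMWF's forecast system.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20000, 3))  # e.g. scaled temperature, humidity, pressure

def physics(x):
    """Hypothetical expensive parametrization to be emulated."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2) + 0.5 * x[:, 2]

y = physics(X)
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
emulator.fit(X[:16000], y[:16000])

# At forecast time the cheap emulator would replace the expensive routine.
err = np.mean((emulator.predict(X[16000:]) - y[16000:]) ** 2)
print(f"held-out MSE: {err:.4f}")
```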
Watch the video: https://youtu.be/ks3fkRj8Iqc
Learn more: https://www.ecmwf.int/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Gilad Shainer from the HPC AI Advisory Council describes how this organization fosters innovation in the high performance computing community.
"The HPC-AI Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) and Artificial Intelligence (AI) use and its potential, bring the beneficial capabilities of HPC and AI to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC and AI systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC and AI system products."
Watch the video: https://wp.me/p3RLHQ-lNz
Learn more: http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Today RIKEN in Japan announced that the Fugaku supercomputer will be made available for research projects aimed to combat COVID-19.
"Fugaku is currently being installed and is scheduled to be available to the public in 2021. However, faced with the devastating disaster unfolding before our eyes, RIKEN and MEXT decided to make a portion of the computational resources of Fugaku available for COVID-19-related projects ahead of schedule while continuing the installation process.
Fugaku is being developed not only for progress in science, but also to help build the society dubbed “Society 5.0” by the Japanese government, where all people will live safe and comfortable lives. The current initiative to fight the novel coronavirus is driven by the philosophy behind the development of Fugaku."
Initial Projects
Exploring new drug candidates for COVID-19 by "Fugaku"
Yasushi Okuno, RIKEN / Kyoto University
Prediction of conformational dynamics of proteins on the surface of SARS-Cov-2 using Fugaku
Yuji Sugita, RIKEN
Simulation analysis of pandemic phenomena
Nobuyasu Ito, RIKEN
Fragment molecular orbital calculations for COVID-19 proteins
Yuji Mochizuki, Rikkyo University
In this deck from the Performance Optimisation and Productivity group, Lubomir Riha from IT4Innovations presents: Energy Efficient Computing using Dynamic Tuning.
"We now live in a world of power-constrained architectures and systems and power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centers and major vendors have developed new tools and methodologies to measure and optimize the energy consumption of large-scale high performance system installations. Due to the link between energy consumption, power consumption and execution time of an application executed by the final user, it is important for these tools and the methodology used to consider all these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high level objectives.
This webinar focused on tools designed to improve the energy-efficiency of HPC applications using a methodology of dynamic tuning of HPC applications, developed under the H2020 READEX project. The READEX methodology has been designed for exploiting the dynamic behaviour of software. At design time, different runtime situations (RTS) are detected and optimized system configurations are determined. RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically.
The MERIC tool, which implements the READEX methodology, is presented. It supports manual or binary instrumentation of the analysed applications to simplify the analysis. This instrumentation is used to identify and annotate the significant regions in the HPC application. Automatic binary instrumentation annotates regions with significant runtime. Manual instrumentation, which can be combined with automatic instrumentation, allows the code developer to annotate regions of particular interest."
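Conceptually, the tuning model is a lookup from an annotated region (scenario) to the system configuration applied when that region is entered at runtime. The Python sketch below illustrates only this concept; the region names, configuration values, and helper functions are invented for illustration and do not reflect MERIC's real API.

```python
# Conceptual sketch of a READEX-style tuning model: annotated regions map
# to system configurations chosen at design time and applied at runtime.
# All names and values here are illustrative; MERIC's actual API is not shown.
from contextlib import contextmanager

# Design-time result: a configuration per scenario (values are made up).
TUNING_MODEL = {
    "io_phase":      {"core_mhz": 1200, "uncore_mhz": 2400},
    "compute_phase": {"core_mhz": 2500, "uncore_mhz": 1800},
}

def apply_config(cfg):
    # A real runtime would switch hardware knobs (e.g. CPU frequency) here.
    print(f"switch to core={cfg['core_mhz']}MHz uncore={cfg['uncore_mhz']}MHz")

@contextmanager
def region(name):
    """Stand-in for manual instrumentation of a significant region."""
    apply_config(TUNING_MODEL[name])  # dynamic switch on region entry
    try:
        yield
    finally:
        print(f"leave region {name}")

with region("io_phase"):
    data = list(range(1_000_000))       # pretend to read input
with region("compute_phase"):
    total = sum(x * x for x in data)    # pretend to crunch numbers
print(total)
```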
Watch the video: https://wp.me/p3RLHQ-lJP
Learn more: https://pop-coe.eu/blog/14th-pop-webinar-energy-efficient-computing-using-dynamic-tuning
and
https://code.it4i.cz/vys0053/meric
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from GTC Digital, William Beaudin from DDN presents: HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD.
Enabling high performance computing through the use of GPUs requires an incredible amount of IO to sustain application performance. We'll cover architectures that enable extremely scalable applications through the use of NVIDIA’s SuperPOD and DDN’s A3I systems.
The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure. DDN A³I with the EXA5 parallel file system is a turnkey, AI data storage infrastructure for rapid deployment, featuring faster performance, effortless scale, and simplified operations through deeper integration. The combined solution delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world's most challenging AI problems.
Watch the video: https://wp.me/p3RLHQ-lIV
Learn more: https://www.ddn.com/download/nvidia-superpod-ddn-a3i-ai400-appliance-with-the-exa5-filesystem/
and
https://www.nvidia.com/en-us/gtc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Paul Isaacs from Linaro presents: State of ARM-based HPC. This talk provides an overview of applications and infrastructure services successfully ported to Aarch64 and benefiting from scale.
"With its debut on the TOP500, the 125,000-core Astra supercomputer at New Mexico’s Sandia Labs uses Cavium ThunderX2 chips to mark Arm’s entry into the petascale world. In Japan, the Fujitsu A64FX Arm-based CPU in the pending Fugaku supercomputer has been optimized to achieve high-level, real-world application performance, anticipating up to one hundred times the application execution performance of the K computer. K was the first computer to top 10 petaflops in 2011."
Watch the video: https://wp.me/p3RLHQ-lIT
Learn more: https://www.linaro.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Versal Premium ACAP for Network and Cloud Acceleration (inside-BigData.com)
Today Xilinx announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry’s highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.
Versal is the industry’s first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC’s 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
Learn more: https://insidehpc.com/2020/03/xilinx-announces-versal-premium-acap-for-network-and-cloud-acceleration/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently (inside-BigData.com)
In this video from the Rice Oil & Gas Conference, Chin Fang from Zettar presents: Moving Massive Amounts of Data across Any Distance Efficiently.
The objective of this talk is to present two ongoing projects aimed at improving and ensuring highly efficient bulk transfer or streaming of massive amounts of data over digital connections across any distance. It examines the current state of the art, a few very common misconceptions, the differences among the three major types of data movement solutions, a current initiative attempting to improve data movement efficiency from the ground up, and another multi-stage project that shows how to conduct long-distance, large-scale data movement at speed and scale internationally. Both projects have real-world motivations, e.g. the ambitious data transfer requirements of Linac Coherent Light Source II (LCLS-II) [1], a premier preparation project of the U.S. DOE Exascale Computing Initiative (ECI) [2]. Their immediate goals are described and explained, together with the solution used for each. Findings and early results are reported. Possible future works are outlined.
Watch the video: https://wp.me/p3RLHQ-lBX
Learn more: https://www.zettar.com/
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Rice Oil & Gas Conference, Bradley McCredie from AMD presents: Scaling TCO in a Post Moore's Law Era.
"While foundries bravely drive forward to overcome the technical and economic challenges posed by scaling to 5nm and beyond, Moore’s law alone can provide only a fraction of the performance / watt and performance / dollar gains needed to satisfy the demands of today’s high performance computing and artificial intelligence applications. To close the gap, multiple strategies are required. First, new levels of innovation and design efficiency will supplement technology gains to continue to deliver meaningful improvements in SoC performance. Second, heterogenous compute architectures will create x-factor increases of performance efficiency for the most critical applications. Finally, open software frameworks, APIs, and toolsets will enable broad ecosystems of application level innovation."
Watch the video:
Learn more: http://amd.com
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CUDA-Python and RAPIDS for blazing fast scientific computinginside-BigData.com
In this deck from the ECSS Symposium, Abe Stern from NVIDIA presents: CUDA-Python and RAPIDS for blazing fast scientific computing.
"We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started. Finally, we will briefly highlight several other relevant libraries for GPU programming."
Watch the video: https://wp.me/p3RLHQ-lvu
Learn more: https://developer.nvidia.com/rapids
and
https://www.xsede.org/for-users/ecss/ecss-symposium
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from FOSDEM 2020, Colin Sauze from Aberystwyth University describes the development of a RaspberryPi cluster for teaching an introduction to HPC.
"The motivation for this was to overcome four key problems faced by new HPC users:
* The limited availability of a real HPC system and the effect that running training courses can have on it; conversely, fluctuations in the spare resources available on the real system can cause problems for the training course.
* A fear of using a large and expensive HPC system for the first time and worries that doing something wrong might damage the system.
* That HPC systems are very abstract machines sitting in data centres that users never see, so it is difficult for them to understand exactly what it is they are using.
* That new users fail to understand resource limitations; because modern HPC systems have such vast resources, many mistakes can be made before running out of them. A more resource-constrained system makes these limits easier to understand.
The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi and the attempts to keep that environment as close to a "real" HPC system as possible. The issues in trying to automate the installation process will also be covered."
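For a sense of what students run on such a teaching cluster, a first exercise often looks like the mpi4py hello-world below. This is a hypothetical example of my own, not taken from the course material linked below.

# hello_mpi.py -- a typical first exercise on a teaching cluster (illustrative).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the communicator
size = comm.Get_size()   # total number of processes launched

print(f"Hello from rank {rank} of {size} on {MPI.Get_processor_name()}")

Launched with, e.g., mpirun -np 4 python hello_mpi.py, it makes the many-small-nodes structure of the Pi cluster directly visible to students.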
Learn more: https://github.com/colinsauze/pi_cluster
and
https://fosdem.org/2020/schedule/events/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from ATPESC 2019, Ken Raffenetti from Argonne presents an overview of HPC interconnects.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-luc
Learn more: https://extremecomputingtraining.anl.gov/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent approach to using PHP frameworks, moving developers towards more flexible and future-proof PHP development.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction
UI automation Sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
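To make "link prediction over knowledge graphs" concrete, here is a minimal sketch using a TransE-style scoring function. This is an illustrative stand-in under my own assumptions (random embeddings, toy entities), not necessarily the model used in the talk.

import numpy as np

# Toy embeddings: in a real system these are learned from the graph.
rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["alice", "bob", "acme"]}
relations = {name: rng.normal(size=dim) for name in ["works_at"]}

def score(head, relation, tail):
    # TransE: a triple (h, r, t) is plausible when h + r is close to t,
    # so we use the negative distance as the score (higher = more plausible).
    h, r, t = entities[head], relations[relation], entities[tail]
    return -np.linalg.norm(h + r - t)

# Rank candidate tails for the query (alice, works_at, ?).
candidates = sorted(entities, key=lambda t: score("alice", "works_at", t), reverse=True)
print(candidates)

In the talk's terms, the open question is when such predicted links count as actual semantics, i.e. when the inference they license is predictable rather than merely statistically plausible.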
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors as well as newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and offer a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud or on-premises strategy we may need to apply them to our own infrastructure from an enterprise perspective. I give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
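As one concrete anchor for the infrastructure view, the sketch below shows the basic mechanism most AI-on-Kubernetes deployment models rest on: a Pod that requests a GPU through extended resources. It uses the official Kubernetes Python client; the container image, entrypoint, and namespace are hypothetical, and it assumes a cluster with the NVIDIA device plugin installed and a local kubeconfig.

from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # illustrative image
                command=["python", "train.py"],            # hypothetical entrypoint
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedule onto a GPU node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

Everything else in the talk's question, such as scheduling, scaling, and cloud versus on-premises trade-offs, builds on this resource-request model.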
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Adaptive Computing has proven leadership in accelerating IT to speed and improve business, based on our 10+ years of delivering workload management and IT decision engine software. This leadership has been recognized in many ways:
* 50+ patents filed or approved
* 51% revenue growth in 2010
* Acceleration-centric investments in 2010 from leaders like Intel Capital, Epic Ventures, and Tudor Ventures, who recognize our HPC market strength and our cloud solution leadership and opportunity
* Global partnerships with organizations like HP, IBM, Cray, Microsoft, and many others, who utilize our innovative and leading decision engine to make their solutions more competitive and innovative for their customers
* And most importantly, recognition by customers who rely on us to accelerate and manage their IT environments, the most dynamic and scale-intensive on the planet, a few of which are called out here: heterogeneous, extreme-scale, multiple data centers with complex workloads, resources, and priorities, and decisions that we help accelerate in a self-optimizing environment.
In fact, our software is used on well over $2 billion of hardware.
Adaptive got its start managing workloads for HPC, so traditionally we have played a huge role in accelerating productivity, increasing uptime, enforcing SLAs, and extending these benefits across multiple cluster locations and clouds. Moab HPC Suite – Enterprise Edition provides enterprise-ready HPC workload management that brings together the key enterprise HPC use cases and capabilities, as well as implementation and 24x7 support services, into a single integrated product to speed the realization of benefits from an HPC system, including:
* Productivity acceleration to get more results done faster at a lower cost, with the scalability and intelligent management to maximize utilization and throughput, and even to lower power consumption when workload demand fluctuates. This includes overall system and resource productivity, including scarce resources like GPGPUs and Intel Xeon Phi coprocessors. It also includes user and admin productivity, with fast and simple job submission and management, and reduced complexity, time, and management of the cluster and its workload.
* Uptime automation to ensure workload completes successfully and reliably, avoiding failures and missed organizational opportunities and objectives.
* Auto-SLA enforcement to schedule and adjust workload to consistently meet service guarantees and business priorities, so the right workloads are completed at the optimal times while enforcing department usage budgets.
* Grid- and cloud-ready HPC management to extend the benefits of your traditional HPC environment, more efficiently manage workload, and better meet workload demand, with pay-for-use showback and chargeback and the ability to manage and share workload across multiple remote clusters to meet growing workload demand or surges (with purchase of the Moab Grid Suite option).
In 2010 we announced our cloud management suite, which gave data centers a full cloud lifecycle, saving time and money.
Speaker notes: According to Gartner, across a multitude of surveys, the need to harness the power of big data is growing in order to create business opportunity from the results derived from it. Business leaders feel the results will enhance business performance through better innovation that will deliver a competitive advantage for the business and ultimately ensure success.
#1 - In Gartner's 2013 big data study, information and business leaders most often associate the term "opportunity" with "big data." (Survey Analysis: Big Data Adoption in 2013 Shows Substance Behind the Hype, Lisa Kart, Nick Heudecker, Frank Buytendijk. September 13, 2013. Page 1, subhead.)
#2 - Enhanced information use, including big data, is key to enhancing business performance in many industries. (Top Industries Predicts 2014: The Pressure for Fundamental Transformation Continues to Accelerate, Kimberly Harris-Ferrante. October 4, 2013. Page 1, 3rd bullet under Key Findings.)
#3 - Results from the Gartner-Forbes 2012 Board of Directors Survey revealed that 50% of respondents see IT as a way to change the rules of competition in their industry. (Three Big Data Challenges for the CIO, Stephen Prentice. July 24, 2012; refreshed November 7, 2013. Page 2, 2nd paragraph under Analysis.)
The enterprise is advancing simulation and big data analysis to pursue game-changing discoveries for the business. These discoveries are not possible without big workflow optimization and scheduling software that enhances the complex, compute-intensive simulation process and analyzes big data faster, more accurately, and more cost-effectively. With converged environments, data analysts can speed the time to discovery. Eureka.
With this constellation, we are able to task, capture, and deliver imagery to our customers in as little as 90 minutes, delivering critical, life-saving information when seconds count.
Our archive holdings are over 2.3 billion sq km.
That’s over 15 copies of the earth, and we are now adding a new copy about every 30 days.
That’s 2 PB a year of new data being added.
For example, when Typhoon Haiyan devastated large portions of the Philippines, our 16 PB archive of previously collected imagery was combined with the latest acquisitions to build near-real-time updates for first responders. This image set shows a before-and-after sequence of the damage during this event. You can see the devastation to the temple and the destruction of hundreds of homes.
Quote 1 - Earl Joseph, VP of Tech. Computing at IDC
Quote 2 - Chirag Dekate, Ph.D., Research Manager, High-Performance Systems at IDC