This document proposes a collaboration between IBM and the university to establish an AI, HPC, and cloud center of excellence. It outlines IBM's leadership in AI and the need for academic-industry partnerships and cross-disciplinary research. The proposed setup would include an IBM AC922 server with CPUs, GPUs, and storage to provide computing resources. It describes various use cases for how students, faculty, and researchers could access the resources for classes and projects and add their own applications. It also outlines an initial two-year timeline and discusses potential software offerings, courses, and opportunities for skills development, publications, and intellectual property.
SyncHPC: A Multi-Cloud HPC Hosting Platform - Syncious
SyncHPC by Syncious is a multi-cloud HPC hosting platform. It lets users access any of their cloud HPC accounts and clusters to run large computing jobs and projects with high-end applications such as Ansys, Star-CCM+, LS-Dyna, WRF, and GROMACS.
Fusion simulations have traditionally required leadership-scale HPC resources in order to produce advances in physics. One such package is CGYRO, a premier tool for multi-scale plasma turbulence simulation. CGYRO is a typical HPC application that will not fit into a single node: it requires several terabytes of memory and O(100) TFLOPS of compute capability for cutting-edge simulations. CGYRO also requires high-throughput, low-latency networking due to its reliance on global FFT computations. While in the past such compute may have required hundreds or even thousands of nodes, recent advances in hardware capabilities allow just tens of nodes to deliver the necessary compute power. We explored the feasibility of running CGYRO on cloud resources provided by Microsoft on their Azure platform, using the InfiniBand-connected HPC resources in spot mode. We observed both that CPU-only resources were very efficient and that running in spot mode was feasible, with minimal side effects. The GPU-enabled resources were less cost-effective but allowed for higher scaling.
Four ways to digitally transform with HPC in the cloud - Tyrone Systems
As cloud computing rapidly becomes better, faster, and cheaper than on-premises computing, no workload will be left untouched, and companies will need to adopt it to remain competitive over the next decade and beyond. So what is the cloud transformation in HPC? Why are on-premises HPC systems no longer enough? Check out this slide deck to learn more.
Alibaba is one of the fastest-growing cloud platform providers and the world's third-largest public cloud provider, just after Amazon AWS and Microsoft Azure (Gartner, 2017).
This report is the result of an exercise in understanding the capabilities of Alibaba Cloud compared with other major cloud platforms, such as Amazon's AWS.
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective - Pooyan Jamshidi
Cloud migration requires an engineered, verifiable, measurable, transparent, and repeatable approach rather than an ad hoc approach based on trial and error.
We describe a comprehensive set of (multi-)cloud migration patterns from an architectural perspective. In this work, we focus on application components and their migration to multi-cloud environments. We define and characterize the patterns with concrete usage scenarios. We also describe the process for migration pattern selection, composition, and extension.
Real-world Cloud HPC at Scale, for Production Workloads (BDT212) | AWS re:Inv... - Amazon Web Services
"Running high-performance scientific and engineering applications is challenging no matter where you do it. Join IT executives from Hitachi Global Storage Technology, The Aerospace Corporation, Novartis, and Cycle Computing and learn how they have used the AWS cloud to deploy mission-critical HPC workloads.
Cycle Computing leads the session on how organizations of any scale can run HPC workloads on AWS. Hitachi Global Storage Technology discusses experiences using the cloud to create next-generation hard drives. The Aerospace Corporation provides perspectives on running MPI and other simulations, and offers insights into considerations such as security while running rocket science on the cloud. Novartis Institutes for Biomedical Research talks about a scientific computing environment for performance-benchmark workloads and large HPC clusters, including a 30,000-core environment for research in the fight against cancer using the Cancer Genome Atlas (TCGA)."
Serverless Computing: Driving Innovation and Business Value - Alibaba Cloud
See webinar recording of this presentation at https://resource.alibabacloud.com/webinar/live.htm?&webinarId=68
In 2018, serverless computing dominated the headlines, tech shows, and radio waves, but why? In this webinar, we explore the key concepts of serverless computing and introduce Function Compute, Alibaba Cloud's event-driven, fully managed compute service. Function Compute enables users to build and run applications with minimal effort on the underlying compute infrastructure. We also explore some of the basic patterns and leading practices for deploying serverless architectures, and show how to get started with serverless computing using Alibaba Cloud's Function Compute.
Learn more about Alibaba Cloud’s Function Compute:
https://www.alibabacloud.com/product/function-compute
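To make the event-driven programming model concrete, a Python function for a serverless service of this kind is typically just a handler that receives an event payload and a context object. The sketch below is written in the style Function Compute uses for its Python runtimes; the JSON event shape is an assumption for illustration, not part of the webinar.

```python
import json

# Hypothetical handler in the style of a serverless Python runtime:
# the platform invokes it with the raw event payload and a context object.
def handler(event, context):
    payload = json.loads(event)          # event triggers typically deliver bytes or str
    name = payload.get("name", "world")
    return json.dumps({"greeting": f"hello, {name}"})
```

The platform, not the developer, manages the servers, scaling, and invocation; the developer supplies only this function and its trigger configuration.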
Workload Transformation and Innovations in POWER Architecture - Ganesan Narayanasamy
The IT industry is going through two major transformations. The first is the adoption of AI and its tight integration into commercial applications and enterprise workflows. The second is the transformation of software architecture through concepts such as microservices and cloud-native architecture. These transformations, alongside the aggressive adoption of IoT, mobile, and 5G in our day-to-day activities, are making the world operate in a more real-time manner, which opens up a new challenge: improving hardware architecture to meet these requirements. Together, these two transformations push the boundaries of the entire systems stack, making designers rethink hardware. This talk presents a picture of how the industry-leading enterprise POWER architecture is transforming to meet the performance demands of these newer-generation workloads, with a primary focus on on-chip AI acceleration.
Dror Goldenberg from Mellanox presented this deck at the HPC Advisory Council Switzerland Conference.
“High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component level, system level, middleware and the application level. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal.”
Watch the video presentation: http://wp.me/p3RLHQ-f7s
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
Regarding Clouds, Mainframes, and Desktops … and Linux - Robert Sutor
In this talk, I'll focus on three areas of great opportunity as well as challenge for Linux: the accelerating market for cloud computing, Linux as a significant operating system for mainframes, and the hope for Linux on the desktop.
Edge 2016 Session 1886 Building your own docker container cloud on ibm power... - Yong Feng
Material from the IBM Edge 2016 session on a client use case of Spectrum Conductor for Containers:
https://www-01.ibm.com/events/global/edge/sessions/.
Please refer to http://ibm.biz/ConductorForContainers for more details about Spectrum Conductor for Containers.
Please refer to https://www.youtube.com/watch?v=7YMjP6EypqA and https://www.youtube.com/watch?v=d9oVPU3rwhE for the demo of Spectrum Conductor for Containers.
IBM Bayesian Optimization Accelerator (BOA) is a do-it-yourself toolkit to apply state-of-the-art Bayesian inferencing techniques and obtain optimal solutions for complex, real-world design simulations without requiring deep machine learning skills. This talk will describe IBM BOA, its differentiation and ease of use, and how researchers can take advantage of it for optimizing any arbitrary HPC simulation.
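To make the idea concrete, here is a toy stand-in (not IBM BOA itself) for black-box design optimization: sample candidate designs, evaluate the expensive simulation (replaced here by a cheap quadratic), and keep the best. BOA replaces the random sampling with Bayesian inference over a surrogate model of the objective, so far fewer simulation runs are needed.

```python
import random

# Toy random-search optimizer: a simplified stand-in for what BOA automates.
# `objective` plays the role of the expensive black-box simulation and
# `bounds` is a 1-D design range; both are illustrative assumptions.
def optimize(objective, bounds, n_trials=200, seed=0):
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(*bounds)   # a Bayesian optimizer would pick x via a surrogate model
        y = objective(x)           # one "simulation" run
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Example: minimize a quadratic "simulation" whose optimum is at x = 2.
best_x, best_y = optimize(lambda v: (v - 2.0) ** 2, (0.0, 4.0))
```

The key difference in practice: each call to `objective` may take hours of HPC time, which is why replacing blind sampling with Bayesian inference pays off.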
Everything is changing, from healthcare to the automotive market, not to mention financial markets and every type of engineering: products are no longer created by an individual or, at best, a team, but are developed and perfected using AI and hundreds of computers. And even AI is no longer something we can run on a single computer, no matter how powerful it is. What drives everything today is HPC, or High-Performance Computing, heavily linked to AI. In this session we will discuss AI, HPC, and the IBM Power architecture, and how they can help develop better healthcare, better automobiles, better financial services, and better everything that runs on them.
The purpose of the lab is to teach the latest skills required for job opportunities in many industries. It helps faculty develop their skills, publish papers at international conferences, and innovate solutions.
Object Automation, a technology company based in California, has been concentrating on the latest technologies and emerging tech partnerships. These include research and solution development, the development of onshore and offshore technology projects, the establishment of tech centers of excellence in AI, quantum, and chip design, technology workshops and boot camps for corporates, special labs for universities, and cutting-edge industry projects.
Accelerate Digital Transformation with IBM Cloud Private - Michael Elder
Latest version: https://www.slideshare.net/MichaelElder/accelerate-digital-transformation-with-ibm-cloud-private-81258443
Accelerate the journey to cloud-native, refactor existing mission-critical workloads, and catalyze enterprise digital transformations.
How do you ensure the success of your enterprise in highly competitive market landscapes? How will you deliver new cloud-native workloads, modernize existing estates, and drive integration between them?
The common perception of applying deep learning is that you take an open source or research model, train it on raw data, and deploy the result as a fully self-contained artefact. The reality is far more complex.
For the training phase, users face an array of challenges, including handling varied deep learning frameworks, hardware requirements, and configurations, not to mention code quality, consistency, and packaging. For the deployment phase, they face another set of challenges, ranging from custom requirements for data pre- and post-processing to inconsistencies across frameworks and a lack of standardization in serving APIs.
The goal of the IBM Developer Model Asset eXchange (MAX) is to remove these barriers to entry for developers to obtain, train, and deploy open source deep learning models for their business applications. In building the exchange, we encountered all these challenges and more.
For the training phase, we leverage the Fabric for Deep Learning (FfDL), an open source project providing framework-independent training of deep learning models on Kubernetes. For the deployment phase, MAX provides standardized container-based, fully self-contained model artifacts encompassing the end-to-end deep learning predictive pipeline.
Artificial intelligence, open source, and IBM Call for Code - Luciano Resende
In this talk we will cover some of the trends in artificial intelligence and the difficulties in using AI. We will also present some open source tools that can help simplify AI adoption, and we will give a brief introduction to Call for Code, an IBM initiative to build solutions for preventing and responding to natural disasters.
This presentation describes some of the open source AI projects we are working on at the Center for Open Source, Data and AI Technologies (CODAIT), including the Model Asset eXchange (MAX), Fabric for Deep Learning (FfDL), and Jupyter Enterprise Gateway.
Leverage Cloud Computing to Accelerate Development and Test - RightScale
RightScale Webinar: November 18, 2010 – Watch this webinar to learn more about how you can leverage cloud computing to simplify and accelerate your DB2 development and testing.
This DB2 Chat with the Lab is brought to you in collaboration between IBM and RightScale.
The Libre-SOC Project aims to create an entirely Libre-Licensed, transparently-developed fully auditable Hybrid 3D CPU-GPU-VPU, using the Supercomputer-class OpenPOWER ISA as the foundation.
Our first test ASIC is a 180nm "Fixed-Point" Power ISA v3.0B processor, 5.1mm x 5.9mm, as a proof-of-concept for the team, whose primary expertise is in Software Engineering. Software Engineering training brings a radically different approach to Hardware development: extensive unit tests, source code revision control, automated development tools are normal. Libre Project Management brings even more: bug trackers, mailing lists, auditable IRC logs and a wiki are standard fare for Libre Projects that are simply not normal Industry-Standard practice.
This talk therefore goes through the workflow, from the original HDL through to the GDS-II layout, showing how we were able to keep track of the development that led to the IMEC 180nm tape-out in July 2021. In particular, by following a parallel development process involving "Real" and "Symbolic" cell libraries developed by Chips4Makers, it will be shown how our developers did not need to sign a foundry NDA but were still able to work side-by-side with a university that did. With this parallel development process, the university upheld its NDA obligations, and Libre-SOC was simultaneously able to honour its transparency objectives.
Join us on Friday, July 16th, 2021 for our newest workshop with DoMS, IIT Roorkee: Concept to Solutions using the OpenPOWER Stack. It's time to discover advances in #DeepLearning tools and techniques from the world's leading innovators and public speakers across industries and research.
Register here:
https://lnkd.in/ggxMq2N
This presentation covers two use cases using OpenPOWER systems:
1. Diabetic retinopathy screening using AI on the NVIDIA Jetson Nano: the objective is to classify the diabetic level solely from a retina image in a remote area with minimal doctor involvement. The model uses the VGG16 network architecture and is trained from scratch on POWER9. The trained model was deployed on the Jetson Nano board.
2. Classifying COVID positivity using lung X-ray images: the idea is to build ML models to detect positive cases from X-ray images. The model was trained on POWER9, and the application was developed in Python.
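As a minimal sketch of the classification step in use cases like these (not the presenters' actual models), a binary image classifier can be reduced to logistic regression over flattened pixel intensities. Real models such as VGG16 learn the features instead of weighting raw pixels; the names and values below are illustrative assumptions.

```python
import math

# Illustrative only: score a flattened image against learned weights and
# squash the result through a sigmoid to get a probability of "positive".
def predict_positive(pixels, weights, bias):
    logit = sum(p * w for p, w in zip(pixels, weights)) + bias
    return 1.0 / (1.0 + math.exp(-logit))
```

A deep network replaces the single weighted sum with many stacked, learned transformations, but the final probability output works the same way.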
This presentation covers the partners and collaborators currently working with the OpenPOWER Foundation, use cases of OpenPOWER systems in multiple industries, OpenPOWER workgroups, and OpenCAPI features.
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
Macromolecular crystallography is an experimental technique for exploring the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While development of the technique has historically been limited by the performance of scientific instruments, computing performance has recently become a key limitation. In my presentation I will describe the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experience applying conventional hardware to the task and why this attempt failed. I will then present how the IC 922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science by users of the Swiss Light Source.
AI in healthcare and Automobile Industry using OpenPOWER/IBM POWER9 systems - Ganesan Narayanasamy
As the adoption of AI technologies increases and matures, the focus will shift from exploration to time to market, productivity, and integration with existing workflows. Governing enterprise data, scaling AI model development, and selecting a complete, collaborative hybrid platform and tools for rapid solution deployment are key focus areas for growing data science teams tasked with responding to business challenges. This talk covers the challenges and innovations of AI at scale for industries such as healthcare and automotive, the AI ladder and AI life cycle, and infrastructure architecture considerations.
This talk gives an introduction to healthcare use cases, the AI ladder and life cycle, and AI-at-scale themes. The iterative nature of the workflow and some of the important components to be aware of in developing AI healthcare solutions are discussed, along with the different types of algorithms and when machine learning might be more appropriate than deep learning, or the other way around. Example use cases are also shared as part of this presentation.
Healthcare has become one of the most important aspects of everyone's life. Its importance has surged due to the latest outbreaks, and the current pandemic has made it imperative to collaborate to improve everyone's healthcare as soon as possible.
IBM has reacted quickly, sharing not only its knowledge but also its artificial intelligence supercomputers all around the world.
Those supercomputers are helping to overcome this outbreak, and future ones as well.
They have completely different features compared to offerings from other players in the supercomputer market.
We will take a quick look at the differences between these AI-focused supercomputers and how they can help in the R&D of healthcare solutions for everyone, from those with access to a big IBM AI supercomputer to those with access to only a single small IBM AI-focused server.
Moving object recognition (MOR) is the localization and classification of moving objects in videos. Discriminating moving objects from static objects and background in videos is an essential task for many computer vision applications. MOR has widespread applications in intelligent visual surveillance, intrusion detection, anomaly detection and monitoring, industrial site monitoring, detection-based tracking, autonomous vehicles, etc. In this session, Murari presented a poster on deep learning algorithms that identify both the locations and the corresponding categories of moving objects with a convolutional network, and discussed the challenges in developing such algorithms.
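For contrast with the deep learning approaches discussed, the classical baseline for discriminating moving pixels from a static background is simple frame differencing. This sketch uses plain Python lists of grayscale intensities to stay dependency-free; a real pipeline would use NumPy or OpenCV arrays, and learned models replace the fixed threshold with learned features.

```python
# Classical frame-differencing baseline: a pixel is flagged as "moving"
# if its intensity changed by more than `thresh` between two consecutive
# frames. Frames are lists of rows of grayscale values (illustrative).
def moving_mask(prev_frame, curr_frame, thresh=25):
    return [
        [abs(c - p) > thresh for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]
```

This baseline localizes motion but cannot classify the moving object, which is exactly the gap the convolutional approaches in the poster address.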
Clarisse Hedglin from IBM presented this as part of a 3-day international summit. She shared the scenarios AI can solve for today using IBM AI infrastructure.
Dr Murari Mandal from NUS presented, as part of the 3-day OpenPOWER Industry Summit, on robustness in deep learning. He talked about AI breakthroughs, performance improvements in AI models, adversarial attacks, attacks on semantic segmentation, attacks on object detectors, defending against adversarial attacks, and many other areas.
IBM experts Girish and Chandu presented on the A2O core implementation on FPGA as part of a 2-day JNTU workshop. They covered the features of the A2O core and how it can be deployed on an FPGA.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
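To make "enriching plain text with XML markup" concrete, the following sketch produces the kind of structured output such a step aims for, done here deterministically with the Python standard library rather than with AI. The `<doc>`/`<para>` element names are illustrative assumptions, not a required schema.

```python
import xml.etree.ElementTree as ET

# Wrap each non-empty line of plain text in a <para> element under a
# <doc> root, yielding well-formed XML from unstructured input.
def wrap_paragraphs(text):
    doc = ET.Element("doc")
    for line in text.splitlines():
        if line.strip():
            ET.SubElement(doc, "para").text = line.strip()
    return ET.tostring(doc, encoding="unicode")
```

An AI-assisted workflow would choose richer, semantically appropriate markup for each fragment, but its output still has to satisfy exactly this kind of well-formedness that the standard tooling enforces.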
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and former Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
2. IBM is leading the way
IBM is teaming with universities, startups, ISVs, and industries to further develop the impact of artificial intelligence in solutions for real-world opportunities.
3. Background and Motivation
The IBM AI Lab will play a major role in the research, development, and commercial and industrial application of emerging AI technologies.
There is a strong need for research and development activity in these domains:
– Encouraging academic-industry partnerships
– Cross-disciplinary and collaborative research
– Making AI accessible to non-technical business students
– Enabling faculty-technologist interaction and learning
– Enabling startups, ISVs, and industries to use the platform to innovate in ways that improve the world's condition
4. Technologies and Partners
The AI Lab will include IBM and other corporate sponsors, coupled with open source technologies to accelerate results.
5. CoE Charter and Objectives
1. Conduct research on rapidly advancing AI technologies
2. Enable and facilitate industry-academia partnerships in research and development, and foster relationships through collaborative projects
3. Encourage cross-disciplinary research in applied computing, in critical scientific and industrial domains, via research proposal submissions to funding agencies
4. Provide a state-of-the-art R&D facility for students, faculty, and collaborators
5. Offer a comprehensive and meaningful computing environment for education by:
   a. complementing the theoretical coursework in CC with appropriate laboratory coursework for students, and
   b. encouraging team participation and cross-disciplinary problem solving
6. IBM’s AI Lab
– OpenPOWER system for data analytics with accelerators (GPUs)
– Collaborative technical projects
– Access to the IBM Academic Initiative toolkit
– Graduate, Ph.D., and post-doctoral research
– Webinars and technical workshops
– Projects related to smart cities and smart villages
7. Proposed AI cloud setup and specifications - Hardware
College Ethernet network
– 1 IBM AC922 system: 128 GB memory, 2 TB hard drive, 40 POWER9 processor cores, 4 NVLink-2 NVIDIA GPUs
– 2 Raptor POWER9-based Talos servers
– 1 x86 server with 2 K80 GPUs for CFD applications
– 1 EDR InfiniBand switch
– 16 TB storage subsystem
– InfiniBand ConnectX cards
– Edge compute devices
– 1 rack (into which the above fits)
8. The AC922 has two POWER9 sockets, each providing extreme levels of I/O and memory bandwidth. For example, in the proposed configuration, each socket communicates directly with two NVIDIA V100 GPUs using the 300 GB per second NVLink2 bus connections on each POWER9 socket. In addition, the sockets provide high memory bandwidth, PCIe Gen4 bandwidth, and a high-bandwidth SMP interconnect.
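As a rough, hedged illustration of why the NVLink2 bandwidth matters, the back-of-the-envelope calculation below compares the ideal transfer time for a 16 GB working set over the quoted 300 GB/s NVLink2 link versus a nominal ~32 GB/s PCIe Gen4 x16 link. The 16 GB working-set size and the PCIe figure are illustrative assumptions, not part of this proposal:

```python
# Back-of-the-envelope: host-to-GPU transfer time for a 16 GB working set.
# The NVLink2 figure comes from the slide above; the PCIe Gen4 x16 figure
# is a commonly quoted nominal rate, used here only for comparison.
NVLINK2_GBPS = 300.0        # per-socket NVLink2 bandwidth quoted in the slide
PCIE_GEN4_X16_GBPS = 32.0   # nominal PCIe Gen4 x16 rate (assumption)

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and latency."""
    return size_gb / bandwidth_gbps

size_gb = 16.0
t_nvlink = transfer_seconds(size_gb, NVLINK2_GBPS)
t_pcie = transfer_seconds(size_gb, PCIE_GEN4_X16_GBPS)
print(f"NVLink2: {t_nvlink:.3f} s, PCIe Gen4 x16: {t_pcie:.3f} s "
      f"({t_pcie / t_nvlink:.1f}x faster over NVLink2)")
```

Real transfers will be slower than these ideal figures, but the relative advantage of the wider link is what enables frequent host-GPU data exchange for large models.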
11. Use Case 1: Students (daily use) request compute resources for basic ML/DL exercises
Students:
– Log in to the web portal with the “Student” profile and browse the service catalog. Students log in from anywhere within the UM LAN; the cloud portal is accessed via a web browser.
– Select and request the desired image and usage period, e.g. MS Office with Windows for 2 hours.
– Log in and access the Docker container (remote desktop).
– Download completed work to a laptop and log off.
AI Cloud Portal: user / profile authentication; service request processing and approval.
AI Cloud Infrastructure: VM and storage created according to the request; OS and application image deployed into the Docker container (a Docker container with the PowerAI image); login info sent to the user via email.
Notes: Resources made available to students for daily use will be restricted; the restriction will be enforced through profile management on the cloud portal. Application and OS images have to be preconfigured by the cloud admin before use.
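The profile-based restriction described above can be sketched as a simple quota check. This is a minimal illustration, not the portal's actual mechanism; all names (`PROFILE_QUOTAS`, `request_resources`) and the quota values are hypothetical:

```python
# Minimal sketch of profile-managed resource restriction: a request is
# approved only if it fits within the requesting profile's quota.
# Profile names follow the use cases; the quota values are hypothetical.
PROFILE_QUOTAS = {
    # profile: (max_cores, max_ram_gb, max_hours)
    "Student": (2, 4, 2),
    "FY": (4, 4, 24),
    "Lecturer": (80, 160, 2),   # enough headroom for a 40-VM class booking
    "Researcher": (8, 16, 720),
}

def request_resources(profile: str, cores: int, ram_gb: int, hours: int) -> bool:
    """Approve the request only if it fits the profile's quota."""
    if profile not in PROFILE_QUOTAS:
        return False
    max_cores, max_ram, max_hours = PROFILE_QUOTAS[profile]
    return cores <= max_cores and ram_gb <= max_ram and hours <= max_hours

print(request_resources("Student", 2, 4, 2))   # fits the daily-use quota
print(request_resources("Student", 8, 16, 2))  # exceeds it, so denied
```

In the real portal this policy would live in the profile-management layer rather than application code, but the decision logic is the same.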
12. Use Case 2: Final-year students request compute resources for AI projects
FY Students:
– Log in to the web portal with the “FY” profile and browse the service catalog. Students log in from anywhere within the UM LAN; the cloud portal is accessed via a web browser.
– Select and request the desired image and usage period.
– Log in and access the VM (remote desktop). The VM's IP address is deployed within the same subnet; students access it from a laptop.
– Download completed work to a laptop and log off.
AI Cloud Portal: user / profile authentication; service request processing and approval.
Cloud Infrastructure: VM and storage created according to the request; OS deployed into the VM; application image deployed into the VM; login info sent to the user via email; VM deprovisioned back into the cloud.
VM and storage size: 2-4 cores, 4 GB RAM, 10 GB storage; RHEL; Jetson Nano.
Notes: Resources made available to final-year students will be restricted; the restriction will be enforced through profile management on the cloud portal. The VM for the FY student will be operational until the expiration date stated in the request.
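The expiration rule above is simple enough to sketch directly: a VM stays operational until the expiration date stated in the student's request, then is deprovisioned. The function name is hypothetical; the real portal would have its own lifecycle manager:

```python
# Sketch of the VM expiration rule: operational through the stated
# expiration date, deprovisioned afterwards. Name is illustrative only.
from datetime import date

def vm_state(today: date, expires: date) -> str:
    return "operational" if today <= expires else "deprovisioned"

print(vm_state(date(2024, 5, 1), date(2024, 6, 30)))  # before expiry
print(vm_state(date(2024, 7, 1), date(2024, 6, 30)))  # after expiry
```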
13. Use Case 3: A final-year student creates an application image and shares it with other FY students
FY Students:
– The student seeks approval from the cloud admin to create a new application image in the cloud infrastructure. To allow other FY students to use the new application image in their own projects, the originator of the application has to work with the cloud admin to package the app as an image offering in the cloud.
– Once the new image is displayed in the service catalog, other FY students proceed to request, access, and use the new application (as per Use Case 2).
AI Cloud Admin / Cloud Portal: the provisioning manager packages the app image with an OS; the image is registered with the service automation manager and the portal.
AI Cloud Infrastructure: user / profile authentication; service request processing; VM and storage created according to the request; OS deployed into the VM; application image deployed into the VM; login info sent to the user via email; VM deprovisioned back into the cloud.
Note: To ensure proper cloud operations, only the cloud administrator is allowed to manage image offerings in the cloud.
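The admin-only image-registration rule above can be sketched as a small guard on the catalog. The class and method names are hypothetical, not the portal's actual API:

```python
# Sketch of admin-gated image registration: only the cloud administrator
# may add image offerings to the service catalog. Names are illustrative.
class ServiceCatalog:
    def __init__(self):
        self.images = {}

    def register_image(self, actor_role: str, name: str, base_os: str) -> bool:
        """Add an image offering; reject any actor who is not the admin."""
        if actor_role != "cloud_admin":
            return False
        self.images[name] = {"os": base_os}
        return True

catalog = ServiceCatalog()
print(catalog.register_image("fy_student", "my-cv-app", "RHEL"))  # denied
print(catalog.register_image("cloud_admin", "my-cv-app", "RHEL")) # registered
```

Once registered, the image appears in the catalog and other students can request it through the normal Use Case 2 flow.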
14. Use Case 4: Lecturers pre-booking seats for an AI/ML/DL class or exam
Lecturers:
– Request VMs with the “Lecturer” profile.
– Select and request the desired image and a future usage period, e.g. 40 VMs of SPSS with Linux for 2 hours.
– During the 2-hour class, provide VM login information to the 40 students in the class / exam.
– Students access the VMs from laptops / PCs / workstations and download their work at the end of class.
AI Cloud Portal: user / profile authentication; service request processing.
AI Cloud Infrastructure: VMs and storage created according to the request; OS deployed into the VMs; application images deployed into the VMs; login info sent to the lecturer via email; VMs deprovisioned back into the cloud. VM IP addresses are deployed within the same subnet.
VM and storage size: 40 VMs, each 2 cores, 4 GB RAM, 5 GB storage; RHEL; PowerAI Vision; Watson Machine Learning Accelerator.
Notes: Resources made available to students for daily use will be restricted; the restriction will be enforced through profile management on the cloud portal. Application and OS images have to be preconfigured by the cloud admin before use.
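The class-booking flow above amounts to provisioning a batch of identical small VMs and handing the per-VM login details to the lecturer. The sketch below illustrates the shape of such a batch request; the function name and the returned record format are hypothetical:

```python
# Sketch of a class booking: provision N identical student VMs using the
# sizing from the slide (2 cores, 4 GB RAM, 5 GB storage, RHEL).
# Function name and record layout are illustrative, not the portal's API.
def provision_class(prefix: str, count: int, cores: int = 2,
                    ram_gb: int = 4, disk_gb: int = 5) -> list[dict]:
    vms = []
    for i in range(1, count + 1):
        vms.append({
            "name": f"{prefix}-{i:02d}",   # e.g. spss-class-01
            "cores": cores, "ram_gb": ram_gb, "disk_gb": disk_gb,
            "os": "RHEL",
        })
    return vms

class_vms = provision_class("spss-class", 40)
print(len(class_vms), class_vms[0]["name"], class_vms[-1]["name"])
```

At the end of the booked period the whole batch is deprovisioned back into the cloud, as the slide describes.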
15. Use Case 5: Researchers adding compute capacity with their own applications through the AI cloud
Researchers: proceed to request and access a VM and install their own application (as per Use Case 2).
AI Cloud Portal: user / profile authentication; service request processing.
AI Cloud Infrastructure: VM and storage created according to the request; OS deployed into the VM; application image deployed into the VM; login info sent to the user via email; VM deprovisioned back into the cloud.
VM and storage size: 8 cores, 16 GB RAM, 250 GB storage; RHEL.
16. 2-Year Developmental Timeline
a) IBM POWER Academic Initiative partnership
b) OpenPOWER system and accelerators for deep learning and machine learning
c) Technical project deployment
d) Review of progress in technical projects and lab coursework
e) Big data and AI curricula
17. IBM Software Offerings along with the Servers
Software Overview
IBM’s hardware offerings for HPC are enhanced when combined with enterprise-class software solutions. These include Red Hat Enterprise Linux (RHEL), IBM Watson Machine Learning, and IBM Spectrum Computing.
Red Hat
The proposed solution includes Red Hat Enterprise Linux 7 (RHEL) with 5-year support on all compute and storage nodes. RHEL and CentOS are highly compatible Linux operating systems; although support is available for both on the IBM Power Systems AC922 server, running RHEL on IBM Power provides clients with enterprise-grade Linux support.
Red Hat is a leading provider of open-source solutions, and IBM is one of the largest Linux contributors. RHEL 8 for Power exploits the latest IBM POWER and virtualization technologies to help maximize system resources and provide high qualities of service to end users. RHEL 7 enables the following functions on POWER:
– Simultaneous multithreading
– Static micro-threading
– Transactional memory
18. IBM Watson Machine Learning (formerly IBM PowerAI)
IBM Watson Machine Learning Community Edition (CE) is available at no charge.
IBM Watson Machine Learning makes deep learning and machine learning more accessible to your staff, and the benefits of AI more obtainable to the University. It combines popular open-source deep learning frameworks, efficient artificial intelligence development tools, and accelerated IBM Power Systems™ servers. With IBM Watson Machine Learning, the University can deploy a fully optimized and supported AI platform that delivers blazing performance, along with proven dependability and resilience.
21. AI Cloud (On Premise)
PowerAI makes deep learning, machine learning, and AI more accessible and more performant. By combining this software platform for deep learning with IBM Power Systems, enterprises and institutions can rapidly deploy a fully optimized and supported platform for machine learning frameworks and their dependencies, built for easy and rapid deployment. PowerAI runs on the IBM Power System AC922 high-performance computing server infrastructure.
22. Advantages for Your Faculty and Students
Talent and Skills (remote interns; skills and training):
Students and research scholars will begin working with advanced technologies, enabling them to work on many applications.
Publications and Mindshare (press releases, articles, and publications; conferences and events):
1. Conference paper on software-based application research/development in 6 months
Intellectual Capital (patents, open source; prototypes, demos; curriculum; student projects, theses):
1. Prototype building for many research problems using a software-centric approach (hardware-centric baseline implementation nearly complete)
2. Potential to file disclosures
Opportunities (seed revenue; leverage other funding; build ecosystems; build government/client relationships):
1. Once a software-centric solution is available with comparable performance using the latest technologies, your team can create prototypes that can be demonstrated to several colleges
23. Special Courses
– Big Data with Docker and Kubernetes
– Machine Learning with Python
– Data Science Course
– Exascale Computing Infrastructure
– Quantum Computing Workshop
– Faculty development programs
– More than 100 hours of technology workshops
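To give a flavor of the "Machine Learning with Python" course content, the snippet below implements a tiny nearest-centroid classifier using only the standard library. The dataset and all names are illustrative, not actual course material:

```python
# Tiny nearest-centroid classifier, standard library only: a point is
# assigned the label of the closest class mean. Illustrative of the kind
# of introductory exercise such a course might include.
from math import dist
from statistics import mean

def fit_centroids(samples):
    """samples: list of (features, label) -> {label: centroid tuple}."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(col) for col in zip(*pts))
            for label, pts in by_label.items()}

def predict(centroids, point):
    """Return the label whose centroid is nearest to the point."""
    return min(centroids, key=lambda label: dist(point, centroids[label]))

train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.0, 8.5), "large")]
centroids = fit_centroids(train)
print(predict(centroids, (1.1, 0.9)))  # small
print(predict(centroids, (8.4, 9.1)))  # large
```

In the actual course, the same idea would typically be taught with libraries such as scikit-learn rather than hand-rolled code.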
24. Processor Core Enablement and Partnership
1. Introduction of open-ended experiments on the A2I core in the FPGA lab curriculum
2. Allotment of mini projects to students on HDL/Verilog/the A2I core
3. Global remote mentoring for students with our mentors, who have the desired FPGA coding skills
4. FDP for faculty on porting and integration of modules for application design using the A2I core
5. Discussion on the creation of a data path for the development of a soft-core processor architecture
6. Joint research activities
7. Development of specific solutions for IBM as sponsored projects / consultancy
8. Sharing of learning materials for the A2I core and the relevant toolchain
25. Onstitute Platform and Wisconsin-Platteville Collaboration
By registering with Onstitute, students can get the following benefits:
– Learn a broad range of data science topics (e.g., big data analytics, cloud computing, machine learning, deep learning) and analytic software tools.
– Get access to cutting-edge hardware infrastructure (including supercomputing-level systems with multicore CPUs, multiple GPUs, etc.) while learning.
– Exposure to multiple job opportunities in data science and related fields.
– Involvement in real-world big data and AI projects together with academia and industry leaders.
– Opportunities to participate in world-class workshops/webinars and rewarding hackathons.
26. University of Oregon, E4S and TAU Collaborations
E4S, the Extreme-scale Scientific Software Stack [https://e4s.io], is a community effort to provide open source software packages for developing, deploying, and running scientific applications on high-performance computing (HPC) platforms. E4S provides from-source builds and containers of a broad collection of HPC software packages. E4S exists to accelerate the development, deployment, and use of HPC software, lowering the barriers for HPC users.
The TAU Performance System® [http://tau.uoregon.edu] is available on OpenPOWER:
– Profiling and tracing support with 3D profile browsers
– Support for IBM XL, GNU, and LLVM Clang compilers
– Support for PowerAI, Spectrum MPI, MVAPICH2-GDR, CUDA, and OpenACC
– Multi-platform support in TAU: IBM Power, Cray XC, ARM64, x86_64, NVIDIA (CUPTI) and AMD GPUs (ROCm)
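TAU provides far deeper HPC-scale profiling and tracing than anything in the Python standard library, but the general workflow it supports, running an instrumented program and then inspecting a per-function time profile, can be sketched with the stdlib `cProfile` module. The workload functions here are purely illustrative:

```python
# Stdlib sketch of the profile-then-inspect workflow that TAU applies to
# HPC codes at much larger scale. cProfile records per-function timings;
# pstats renders the report. The workload below is illustrative only.
import cProfile
import pstats
import io

def inner(n):
    return sum(i * i for i in range(n))

def workload():
    return [inner(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats()  # full per-function report, sorted by cumulative time
report = stream.getvalue()
print("inner" in report)  # the hot function appears in the report
```

With TAU, the equivalent step would be running the application under TAU's instrumentation and browsing the resulting profiles, including 3D views, with its analysis tools.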