I share some simple strategies for any developer to quickly get started with Machine Learning, Cloud Development, and more. I'll discuss Leonardo, Amazon APIs, Google ML, and of course SAP API Management too.
MLOps refers to applying DevOps practices and principles to machine learning. This allows for machine learning models and projects to be developed and deployed using automated pipelines for continuous integration and delivery. MLOps benefits include making machine learning work reproducible and auditable, enabling validation of models, and providing observability through monitoring of models after deployment. MLOps uses the same development practices as software engineering to ensure quality control for machine learning.
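The validation step described above is often implemented as an automated quality gate in the CI/CD pipeline. A minimal sketch of the idea in Python follows; the metric names and thresholds are illustrative assumptions, not taken from any specific MLOps platform:

```python
# Hypothetical CI quality gate: promotion to deployment is blocked
# unless every metric for the candidate model meets its threshold.
def passes_quality_gate(metrics: dict, thresholds: dict) -> bool:
    """Return True only if every required metric meets its minimum."""
    return all(
        metrics.get(name, float("-inf")) >= minimum
        for name, minimum in thresholds.items()
    )

candidate = {"accuracy": 0.91, "auc": 0.88}
gate = {"accuracy": 0.90, "auc": 0.85}
print(passes_quality_gate(candidate, gate))  # True: safe to promote
```

In a real pipeline this check would run after training, with the gate failing the build so an under-performing model never reaches deployment.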
Building a MLOps Platform Around MLflow to Enable Model Productionalization i... - Databricks
Getting machine learning models to production is notoriously difficult: it involves multiple teams (data scientists, data and machine learning engineers, operations, …) who often do not communicate well with each other; the model may be trained in one environment but productionalized in a completely different one; and it is not just about the code, but also about the data (features) and the model itself… At DataSentics, a machine learning and cloud engineering studio, we see this struggle firsthand, on our internal projects as well as on clients' projects.
The World Wide Web has recently moved from static HTML pages to more dynamic and intelligent web applications with desktop-like characteristics. AJAX (Asynchronous JavaScript and XML) has been the main driver of this change, allowing web application content to be updated asynchronously from the server without page reloads. AJAX uses a combination of technologies, including JavaScript, XML, and HTTP requests, to update partial web page content dynamically. This enables web applications to have more interactive interfaces and deliver a more desktop-like experience to users.
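The server-side half of this pattern can be sketched in a few lines of Python: an endpoint returns a small JSON fragment, which client-side JavaScript (via XMLHttpRequest or fetch) would splice into the page without a full reload. The `/api/news` path and payload below are illustrative assumptions:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Endpoint that serves a partial-content JSON fragment rather than a
# whole page; the browser's asynchronous request would target this.
class PartialContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/news":
            body = json.dumps({"headline": "Partial update delivered"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence request logging for the demo
        pass

# Stand in for the browser: fetch just the fragment, not a whole page.
server = HTTPServer(("127.0.0.1", 0), PartialContentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/news") as resp:
    fragment = json.loads(resp.read())
server.shutdown()
print(fragment["headline"])
```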
Machine learning applications are typically stitched together from hopes and dreams, shell scripts, cron jobs, home-grown schedulers, snippets of configuration clipped from multiple blog posts, thousands of hard-coded business rules, a.k.a. "our SQL corpus," and a few lines of training and testing code. Organizing all the moving parts into something maintainable and supportive of ongoing development is a challenge most teams have on their TODO list, roadmap, or tech debt pile. Getting ahead of the day-to-day demands and settling into a sane architecture often seems like an unattainable goal. The past several years have seen an explosion of tool-building in the data engineering and analytics area, including in Apache projects spanning the areas of search and information retrieval, job orchestration, file and stream formats, and machine learning libraries. In this talk we will cover our product and development teams' choices of architecture and tools, from data ingestion and storage, through transformations and processing, to presentation of results and publishing to web services, reports, and applications.
Scaling Machine Learning from zero to millions of users (May 2019) - Julien SIMON
The document provides guidance on scaling machine learning models from small to large scale. It discusses starting with high-level AI services that require no machine learning skills, then progressing to training and deploying custom models on infrastructure like EC2 instances, ECS/EKS Docker clusters, or using the fully managed Amazon SageMaker service. The key steps outlined are choosing the right level of service based on needs, training models locally then in distributed environments, deploying models behind APIs, and using automation and managed services to scale models from handling zero to millions of users.
Apache® Spark™ MLlib 2.x: How to Productionize your Machine Learning Models - Anyscale
Apache Spark has rapidly become a key tool for data scientists to explore, understand, and transform massive datasets and to build and train advanced machine learning models. The question then becomes: how do I deploy these models to a production environment? How do I embed what I have learned into customer-facing data applications?
In this webinar, we will:
- discuss best practices from Databricks on how our customers productionize machine learning models,
- do a deep dive with actual customer case studies, and
- show live tutorials of a few example architectures and code in Python, Scala, Java, and SQL.
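A common pattern behind productionizing MLlib models is exporting a fitted model's parameters so a lightweight service can score requests without a Spark cluster. This pure-Python sketch illustrates the idea; the coefficients and feature names are made up, and in practice they would be serialized at training time (e.g. from a fitted MLlib logistic regression):

```python
import json
import math

# Hypothetical exported coefficients; a real pipeline would write these
# out when the model is trained and load them in the serving layer.
EXPORTED_MODEL = json.dumps({
    "intercept": -1.2,
    "weights": {"clicks": 0.8, "dwell_time": 0.05},
})

def score(model_json: str, features: dict) -> float:
    """Re-implement the model's forward pass outside the training stack."""
    model = json.loads(model_json)
    z = model["intercept"] + sum(
        w * features.get(name, 0.0) for name, w in model["weights"].items()
    )
    return 1.0 / (1.0 + math.exp(-z))  # logistic sigmoid

print(round(score(EXPORTED_MODEL, {"clicks": 3.0, "dwell_time": 12.0}), 3))
```

The trade-off is that the scoring code must be kept in sync with the training-side feature engineering, which is exactly the code/data/model gap these talks discuss.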
This document discusses the challenges of machine learning development circa 2013 and outlines Dato's approach to addressing these challenges. In 2013, machine learning development was difficult, slow, and expensive. It required specialized knowledge and infrastructure. Dato aims to accelerate the creation of intelligent applications by making sophisticated machine learning as easy as "Hello world" through high-level toolkits, auto feature engineering, automated machine learning (AutoML), and scalable data structures. The document demonstrates how Dato's tools can build an intelligent application with just a few lines of code and handle large datasets by leveraging out-of-core computation.
1) Learn about Myplanet's Headless CMS solution using Gatsby Preview and Contentful’s UI Extensions (https://www.contentful.com/resources/serverless/)
2) their Serverless project with IBM - using Apache OpenWhisk (https://www.ibm.com/cloud/functions)
3) how Myplanet got involved with AWS DeepRacer - a fun way to get started with Reinforcement Learning (RL), and their racing experience at re:Invent DeepRacer League (https://reinvent.awsevents.com/learn/deepracer/)
4) their Machine Learning (ML) research related to finding DeepRacer’s ideal line (https://medium.com/myplanet-musings/the-best-path-a-deepracer-can-learn-2a468a3f6d64).
BONUS: Two TED Talks referenced in the intro
5) When ideas have sex | Matt Ridley | Jul 14, 2010 https://www.ted.com/talks/matt_ridley_when_ideas_have_sex
6) Why The Best Leaders Make Love The Top Priority | Matt Tenney | Dec 5, 2019 https://www.youtube.com/watch?v=qCVoohdyI6I
VIDEO: https://youtu.be/ZH1xxmBNx5k
From idea to production in a day – Leveraging Azure ML and Streamlit to build... - Florian Roscheck
How can you leverage Azure ML, automated machine learning, and Streamlit to build and test machine learning apps quickly? Find out about our favorite hackathon stack and walk away with some code to build and user-test your own machine learning ideas fast.
Experimentation, bringing machine learning ideas in front of users, is essential to innovation. Yet, in our corporate hackathons, our data science team has struggled many times with how to build and deploy user-facing machine learning ideas in just a single day.
Over the past 2+ years, we have developed a routine around using Azure Machine Learning, automated machine learning, and Streamlit to build and user-test machine learning ideas quickly. The aim of this talk is to pass on practical, technical knowledge to fellow data scientists about how to leverage this stack to achieve high build and user-test speeds.
During the talk, we will walk through the process of building a computer vision system for identifying trash in images via an app using the open-source TACO dataset (http://tacodataset.org/). Working through a Jupyter notebook, we will load the data into Azure Machine Learning and trigger an automated machine learning run on the data. In this context, we will quickly get to know the training and testing metrics available in Azure ML to evaluate the model. We will then download the machine learning model as a file packaged in the open-source ONNX format (https://onnx.ai/). Using the open-source Python web application framework Streamlit (https://github.com/streamlit/streamlit), we will build an application that embeds the machine learning model and lets users upload images to identify trash in them. Using a to-be-published infrastructure-as-code pipeline on Azure DevOps, we will deploy the application to the public internet on the Azure platform. From here, users can test it.
The stack and code presented in this talk will enable fellow data scientists to accelerate their data science development, leading to quicker experimentation and, therefore, to faster innovation of products with machine learning at their core.
- Overview of a use case - Sentiment analysis
- Introduction - Using Jupyter Notebook & AWS SageMaker
- Setup New Project
- Setup and Run the Build CI/CD Pipeline
- Setup the Release Pipeline
- Test Build and Release Pipelines
- Testing the deployed solution
- Examining deployed model performance
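For context on the sentiment-analysis use case in the outline above, here is a toy lexicon-based scorer. The word lists are illustrative stand-ins for the trained model that a SageMaker pipeline would actually build and deploy:

```python
# Toy lexicon-based sentiment scorer; real pipelines would replace this
# with a model trained and deployed through SageMaker.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def sentiment(text: str) -> str:
    """Classify text by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
```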
This document provides an introduction to Microsoft Flow and PowerApps. It discusses how Flow can be used to automate workflows and how PowerApps allows users to build custom apps, workflows and forms. The document highlights how PowerApps integrates with SharePoint and other data sources. It also outlines the product roadmaps and upcoming improvements, including kicking off flows directly from SharePoint items. The presenter's contact information is provided at the end.
The structure of a Machine Learning code base can have a large impact on effective collaboration and time to production.
In this talk I will present our solution developed for the FutureOps Matching Automation project and talk about lessons learned and best practices.
Tour de France Azure PaaS 6/7: Adding Intelligence - Alex Danvy
We will likely see a generational break between apps with artificial intelligence and those without. The latter, like character-mode applications when graphical interfaces arrived, will struggle to survive.
Azure offers three approaches for adding AI to an app, with a graduated level of difficulty, from tools requiring no particular skills to those dedicated to data scientists.
Any startup has to have a clear go-to-market strategy from the beginning. Similarly, any data science project has to have a go-to-production strategy from its first days, so that it can go beyond proof of concept. Machine learning and artificial intelligence in production result in hundreds of training pipelines and machine learning models that are continuously revised by teams of data scientists and seamlessly connected with web applications for tenants and users.
In this demo-based talk, we will walk through best practices for simplifying machine learning operations across the enterprise and providing a serverless abstraction for data scientists and data engineers, so they can train, deploy, and monitor machine learning models faster and with better quality.
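A minimal sketch of the kind of registry abstraction such a platform provides is shown below. The class and method names are hypothetical, not any specific product's API; the point is that data scientists register versioned models while the serving layer resolves the latest one by name:

```python
# Hypothetical in-memory model registry: data scientists register
# versioned model artifacts; the serving layer resolves the latest.
class ModelRegistry:
    def __init__(self):
        self._models = {}  # name -> list of artifacts, index = version - 1

    def register(self, name: str, artifact) -> int:
        """Store a new version of a model; returns its 1-based version."""
        versions = self._models.setdefault(name, [])
        versions.append(artifact)
        return len(versions)

    def latest(self, name: str):
        """Return the most recently registered artifact for a model."""
        return self._models[name][-1]

registry = ModelRegistry()
registry.register("churn", {"weights": [0.1, 0.2]})
v = registry.register("churn", {"weights": [0.15, 0.25]})
print(v, registry.latest("churn")["weights"])  # 2 [0.15, 0.25]
```

A production registry would of course persist artifacts and track stage transitions (staging, production), but the contract between teams is essentially this small.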
MLflow: Infrastructure for a Complete Machine Learning Life Cycle with Mani ... - Databricks
ML development brings many new complexities beyond the traditional software development lifecycle. Unlike in traditional software development, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom “ML platforms” that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company’s internal infrastructure. In this session, we introduce MLflow, a new open source project from Databricks that aims to design an open ML platform where organizations can use any ML library and development tool of their choice to reliably build and share ML applications. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size. In this deep-dive session, through a complete ML model life-cycle example, you will walk away with:
- MLflow concepts and abstractions for models, experiments, and projects
- How to get started with MLflow
- Understanding aspects of the MLflow APIs
- Using the tracking APIs during model training
- Using the MLflow UI to visually compare and contrast experimental runs with different tuning parameters and evaluate metrics
- How to package, save, and deploy an MLflow model
- How to serve it using the MLflow REST API
- What's next and how to contribute
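As a rough analogy for the tracking concepts listed above, here is a minimal run tracker in plain Python. MLflow's real API differs; this sketch only mirrors the core idea that each run records its parameters and metrics so runs can be compared later:

```python
# Minimal stand-in for experiment tracking: each run records the
# parameters it was launched with and the metrics it produced.
class Run:
    def __init__(self, run_id: int):
        self.run_id = run_id
        self.params: dict = {}
        self.metrics: dict = {}

class ExperimentTracker:
    def __init__(self):
        self.runs = []

    def start_run(self) -> Run:
        run = Run(run_id=len(self.runs))
        self.runs.append(run)
        return run

    def best_run(self, metric: str) -> Run:
        """Compare runs the way the MLflow UI does, by a chosen metric."""
        return max(self.runs, key=lambda r: r.metrics.get(metric, float("-inf")))

tracker = ExperimentTracker()
for lr in (0.01, 0.1):
    run = tracker.start_run()
    run.params["learning_rate"] = lr
    run.metrics["accuracy"] = 0.8 if lr == 0.01 else 0.9  # pretend training
print(tracker.best_run("accuracy").params["learning_rate"])  # 0.1
```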
C19013010: The Tutorial to Build Shared AI Services, Session 1 - Bill Liu
This document provides an agenda and overview for a tutorial on building shared AI services. The tutorial consists of two modules: the first module discusses a case study of AI as a service and challenges of traditional machine learning, and how deep learning can help address these challenges. The second module introduces Keras and options for running Keras on Spark, including a use case, code lab, and prerequisites for running the code lab in Docker containers.
The document summarizes an internship at a leading software company in Nepal. The internship's objectives were to gain real-world experience, learn advanced techniques, and develop applicable skills. Over a 2-month period, the intern was assigned to an Android team of 5 members. They received training in Android development and were assigned a project to build a news app that scraped and displayed data from websites using Jsoup. The intern tested the app on various Android devices and concluded that the internship built skills in areas like design, threads, fragments, and using external libraries.
This document provides an introduction and agenda for a presentation on Spark. It discusses how Spark is a fast engine for large-scale data processing and how it improves on MapReduce. Spark stores data in memory across clusters to allow for faster iterative computations versus writing to disk with MapReduce. The presentation will demonstrate Spark concepts through word count and log analysis examples and provide an overview of Spark's Resilient Distributed Datasets (RDDs) and directed acyclic graph (DAG) execution model.
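The word-count demo mentioned above is the canonical map-then-aggregate example; in plain Python the same shape looks like the sketch below (Spark's value is that it distributes these steps across a cluster and keeps intermediate data in memory):

```python
from collections import Counter
from itertools import chain

# Two "partitions" of input text; in Spark these would be an RDD.
lines = [
    "spark keeps data in memory",
    "mapreduce writes data to disk",
]

# "map" step: split every line into words;
# "reduce" step: aggregate the per-word counts.
words = chain.from_iterable(line.split() for line in lines)
counts = Counter(words)
print(counts["data"])  # 2
```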
Data Engineer's Lunch 90: Migrating SQL Data with Arcion - Anant Corporation
In Data Engineer's Lunch 90, Eric Ramseur teaches our audience how to use Arcion.
From best practices to real-world examples, this talk will provide you with the knowledge and insights you need to ensure a successful migration of your SQL data. So whether you're new to data migration or looking to improve your existing process, join us and discover how Arcion can help you achieve your goals.
The agenda includes presentations on using EC2 for business applications, a panel discussion on why the cloud benefits enterprises, and a networking reception. Steve Lucas from SAP will discuss how enterprises are adopting cloud computing and SAP's strategy involving on-premise, on-demand, and on-device deployment. There will be demonstrations of integrating SAP BusinessObjects BI with enterprise applications running on EC2, and how BI can accelerate applications. Kogent will demonstrate industry-specific analytics running on EC2 integrated with SAP BI.
Dato aims to accelerate the creation of intelligent applications by making sophisticated machine learning as easy as "Hello world." The company provides an integrated machine learning platform that handles data engineering, advanced ML techniques, and deployment of models as predictive services. This allows small teams to be highly productive in building intelligent applications like recommenders, fraud detection, and personalized medicine. Dato's platform provides out-of-core computation, tools for feature engineering, rich data type support, and scalable models to help customers in various industries rapidly iterate and deploy ML applications.
SumitK's mobile app dev using Drupal as base system - Sumit Kataria
This document discusses using Drupal as a backend system to manage data for mobile applications built with Titanium. It describes how Titanium can be used to build cross-platform native mobile apps using JavaScript, HTML and CSS. It also explains how the Drupal Services API can be leveraged to allow Titanium apps to securely access and manage content and data in Drupal through RESTful web services. Examples are provided of making calls from Titanium to Drupal services to retrieve content and users.
This document discusses tips and best practices for mobile development on the Windows Phone platform. It covers topics like choosing a development language and tools, dealing with asynchronous code, testing for connectivity and memory issues, handling exceptions properly, and ensuring apps work across light and dark themes. It emphasizes thorough testing for things like the back button behavior, tile updates, notifications, and capability usage before submission to the marketplace. The goal is to avoid common pitfalls and create high quality apps that provide a good user experience.
This document contains Anuj Soni's resume. It summarizes his contact information, objective, profile, skills, experience, and education. Anuj has over 4 years of experience as a software engineer working with technologies like Java, Spring, Hibernate, and SAP UI. He currently works for Xoriant Solutions developing applications for SAP Mobile Services using technologies like Spring, SAML, and OAuth. Previously he worked for Premier Biosoft developing applications like SimLipid and SimGlycan using Java Swing.
IW Services provides information discovery and knowledge exploitation services to help organizations improve performance. Their mission is to uncover largely hidden enterprise information and help exploit resulting knowledge. They assist with implementing SharePoint for application frameworks, communication, collaboration, data exchange, and unified work approaches. Successful implementations involve setting goals, mapping a roadmap, piloting, and repeatedly training users.
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
Similar to Tech Ed Barcelona - UX Transformation with Machine Learning and SAP Cloud Platform sdk for iOS
This document discusses tips and best practices for mobile development on the Windows Phone platform. It covers topics like choosing a development language and tools, dealing with asynchronous code, testing for connectivity and memory issues, handling exceptions properly, and ensuring apps work across light and dark themes. It emphasizes thorough testing for things like the back button behavior, tile updates, notifications, and capability usage before submission to the marketplace. The goal is to avoid common pitfalls and create high quality apps that provide a good user experience.
This document contains Anuj Soni's resume. It summarizes his contact information, objective, profile, skills, experience and education. Anuj has over 4 years of experience as a software engineer working with technologies like Java, Spring, Hibernate and SAP UI. He currently works for Xoriant Solutions developing applications for SAP Mobile Services using technologies like Spring, SAML and OAuth. Previously he worked for Premier Biosoft developing applications like SimLipid and SimGlycan using Java Swings.
IW Services provides information discovery and knowledge exploitation services to help organizations improve performance. Their mission is to uncover largely hidden enterprise information and help exploit resulting knowledge. They assist with implementing SharePoint for application frameworks, communication, collaboration, data exchange, and unified work approaches. Successful implementations involve setting goals, mapping a roadmap, piloting, and repeatedly training users.
Tech Ed Barcelona - UX Transformation with Machine Learning and SAP Cloud Platform SDK for iOS
6. Learning Points
• What is UX. Why Transform it?
• SCP SDK for iOS
• Leonardo - Not a Ninja Turtle
• AI - How we can all get started now. (No PhD required)
• Bringing it all together - an example or 2.
7. What is UX?
The relationship between a human and the technology she uses
10. Fiori has really changed things
• 8000+ Apps
• Design Standards
• Technology (that is growing)
The default UI for all of SAP applications, most importantly S/4 HANA
11. Still some flaws in Fiori:
• Performance
• True mobility
• Limited experts
12. From Fiori -> Fiori for iOS
What is it?
• A Wizard!
• A bunch of cool controls.
• Design Guidelines
• Training
* It is only for SCPms (I’ll show you why)
13. SAP Cloud Platform SDK for iOS
https://cloudplatform.sap.com/capabilities/mobile/ios-sdk.html
14. SCP SDK for iOS I - Let’s start!
1. Create an OData service on Gateway (HR Days off)
2. Setup SCP Trial Edition (or get your own of course)
3. Setup cloud connector
4. Turn on mobile services.
5. Create a new app with destination to this service
6. Use a Mac (duh).
7. Download the wizard.
8. Connect and create the app and deploy to your phone. (see?)
17. Download the SDK and install and run
You get the SDK here:
https://store.sap.com/sap/cp/ui/resources/store/html/SolutionDetails.html?pid=0000014485
Since you are probably developing for iOS 11 now, use Xcode 9.1 beta 2.
23. SAP Leonardo—What SAP Says I
“SAP Leonardo is a holistic digital innovation system that seamlessly integrates future-facing technologies and capabilities into the SAP Cloud Platform, using our Design Thinking Services. This powerful portfolio enables you to rapidly innovate, scale new models, and continually redefine your business.”
- SAP’s “What is SAP Leonardo?” slide deck
26. SAP Leonardo — GQ: Demystified
• A platform for innovation
• A set of technology capabilities
• Industry templates
• Love the direction -- but more to come :)
28. SAP Leonardo—Machine Learning
• Services:
–They’re there
–They’re alpha
• Product components:
–Resume matching in Fieldglass
–CoPilot in Fiori 2.0
–Service Ticketing in Hybris Cloud for Customer
29. What is Machine Learning?
Getting the computer to do something useful, without explicitly programming it to do it.
This can lead to incredible UX!
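The definition above can be made concrete with a few lines of code. The following sketch (data and names entirely made up for illustration) fits a straight line to example points instead of hand-coding the rule, which is the essence of "learning without explicitly programming it":

```python
# A minimal sketch of "learning without explicit programming":
# instead of hand-coding a rule, we estimate it from examples.
# Hypothetical data: hours of sunshine vs. ice creams sold.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

hours = [1, 2, 3, 4, 5]
sales = [12, 19, 33, 41, 52]        # roughly 10*x plus noise

a, b = fit_line(hours, sales)
print(round(a * 6 + b))             # the "learned" rule generalizes -> 62
```

Nothing in the code says "multiply by about ten"; that rule was recovered from the data, which is the same idea the cloud ML services below apply at much larger scale.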
30. AI is Everywhere
• iOS Photos - iOS 11 image recognition in the background
• Uber - Surge pricing
• Alexa / Siri / Google Home - NLP
• Facebook - Tagging friends
• Snapchat - Lenses
• Amazon - Related products
31. Machine Learning - For the Rest of Us
1. Use existing API’s (Like Leonardo)
2. Use a pre-trained model from the net
3. Train your own model from public data
4. Train your own model on your own data with wizards
5. (After that, it gets hard)
33. Easiest - use a pre-trained model:
Apple CoreML
Example: Inception v3
Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.
● View original model details
● Download Core ML Model (94.7 MB)
AWS Celebrity Spotter
https://console.aws.amazon.com/rekognition/home?region=us-east-1#
34. Use public data
• Use data, and create your OWN API.
• Check out all these data sets (392!): http://archive.ics.uci.edu/ml/index.php
• https://www.openml.org/search?type=data
• https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research
35. Train a model with wizards
• First we need a pile of data. Export your HR time off data (CSV):
36. Train a model with wizards
• Next upload that data to AWS S3, then try it out!
37. Train a model with wizards
• A new API is born!
Try switching around a few variables, works nicely.
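The wizard trains a model for you from the uploaded CSV. To show what happens under the hood, here is a local stand-in: a toy nearest-centroid classifier over a hypothetical HR export (the column names, values, and target are all invented for illustration, not the actual slide data):

```python
import csv
from io import StringIO

# Hypothetical export resembling the HR time-off CSV from slide 35.
RAW = """employee_age,tenure_years,days_off_last_year,will_exceed_quota
25,1,4,no
31,3,9,no
45,12,22,yes
39,8,18,yes
28,2,6,no
50,15,25,yes
"""

FEATURES = ("employee_age", "tenure_years", "days_off_last_year")

def train_centroids(rows):
    """Average the numeric features per label (a toy 'model')."""
    sums, counts = {}, {}
    for row in rows:
        label = row["will_exceed_quota"]
        feats = [float(row[k]) for k in FEATURES]
        s = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(model, feats):
    """Pick the label whose centroid is closest (squared distance)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(model, key=lambda lab: dist(model[lab]))

rows = list(csv.DictReader(StringIO(RAW)))
model = train_centroids(rows)
print(predict(model, [47, 11, 20]))   # close to the 'yes' group
```

The cloud wizard does something far more sophisticated, but the workflow is the same: tabular data in, trained model out, predictions served behind an API.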
39. TensorFlow example
• From Google Tutorials: https://www.tensorflow.org/tutorials/image_recognition
• ~/tensorflow-mnist-tutorial: python3 mnist_1.0_softmax.py
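The heart of mnist_1.0_softmax.py is a single layer, y = softmax(X·W + b), trained with gradient descent on cross-entropy. Below is a pure-Python sketch of that idea on a tiny toy problem (2 features, 2 classes; the data and shapes are illustrative only, not MNIST):

```python
import math

def softmax(zs):
    m = max(zs)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def forward(x, W, b):
    """Class probabilities for one sample: softmax(x.W + b)."""
    logits = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
              for j in range(len(b))]
    return softmax(logits)

# Toy data: class 0 clusters near (0,0), class 1 near (1,1).
data = [([0.0, 0.1], 0), ([0.1, 0.0], 0), ([0.9, 1.0], 1), ([1.0, 0.9], 1)]

W = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
lr = 1.0

# Gradient descent on cross-entropy: grad of logit j is (p_j - onehot_j).
for _ in range(100):
    for x, label in data:
        p = forward(x, W, b)
        for j in range(2):
            grad = p[j] - (1.0 if j == label else 0.0)
            b[j] -= lr * grad
            for i in range(2):
                W[i][j] -= lr * grad * x[i]

probs = forward([0.95, 0.95], W, b)
print(probs.index(max(probs)))        # a point near (1,1) lands in class 1
```

The real tutorial does exactly this with 784 pixel inputs and 10 digit classes, using TensorFlow ops instead of loops.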
40. Move TensorFlow Model to SCP
• Build TensorFlow model locally.
• Build web APIs around it.
• Get Cloud Foundry
• Push and test the API
• Wire through API Management or SCPms and connect to the iOS SDK
https://account.hanatrial.ondemand.com/cockpit#/home/overview
* https://github.com/cgrotz/tensorflow-cloudfoundy
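The "build web APIs around it" step can be sketched with nothing but the standard library. In this toy version the model is a stand-in function (in practice you would load your trained TensorFlow model), and the route and payload shape (/predict, {"features": [...]}) are assumptions for illustration:

```python
import json
from wsgiref.simple_server import make_server

def predict(features):
    """Placeholder model: classify by the sum of the features."""
    return {"label": "high" if sum(features) > 1.0 else "low"}

def app(environ, start_response):
    """Minimal WSGI app exposing the model at POST /predict."""
    if environ.get("PATH_INFO") != "/predict":
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    size = int(environ.get("CONTENT_LENGTH") or 0)
    payload = json.loads(environ["wsgi.input"].read(size) or b"{}")
    body = json.dumps(predict(payload.get("features", []))).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

if __name__ == "__main__":
    import os
    # Cloud Foundry injects the listening port via the PORT env var.
    port = int(os.environ.get("PORT", 8080))
    make_server("", port, app).serve_forever()
```

Push something like this with `cf push`, and the resulting URL is what you wire through API Management or SCPms for the iOS SDK to call.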
41. Cool - How about an example?
• Download calendar data.
• Upload to a ML wizard and create endpoints.
• Build an iOS app with SDK as starting point.
• Connect live to calendars and continue training.
Next: Connect to calendars and project data in SF + HCM.
44. Summary
• Transforming work through UX is the goal. UI is part of it.
• Fiori for iOS adds great new mobile capabilities from the cloud.
• Leonardo is an innovation platform of tools.
• ML is part of AI, and it’s easy to get started.
ML will have an incredible impact on UX transformation.