Build applications with generative AI on Google Cloud! We will see Gen App Builder in action and what it offers developers for building and deploying AI-driven applications. We will explore experiences powered by Model Garden, then learn more about integrating these generative AI APIs. Vertex AI includes a suite of models that work with code; together, these code models are referred to as the PaLM and Codey APIs. The Vertex AI Codey APIs include the code generation API, which supports generating code from a natural language description. We will show strategies for creating prompts that guide the model to generate code. By the end of the session, developers will understand how to innovate with generative AI and develop apps that follow current generative AI industry trends.
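As a taste of the Codey code-generation workflow described above, here is a minimal sketch. The `build_code_prompt` helper is our own illustration (not part of any SDK), and the gated call at the bottom uses the Vertex AI SDK's code-bison model, which only runs with a GCP project and credentials configured.

```python
import os

def build_code_prompt(task, language="Python", constraints=()):
    """Assemble a natural-language task description into a structured prompt."""
    lines = [f"Write a {language} function that {task}."]
    lines.extend(f"- {c}" for c in constraints)  # one bullet per constraint
    return "\n".join(lines)

prompt = build_code_prompt(
    "parses an ISO-8601 date string",
    constraints=("use only the standard library", "raise ValueError on bad input"),
)

# The model call needs a GCP project and credentials, so it is gated here:
if os.environ.get("RUN_VERTEX_DEMO"):
    # pip install google-cloud-aiplatform
    from vertexai.language_models import CodeGenerationModel
    model = CodeGenerationModel.from_pretrained("code-bison")
    print(model.predict(prefix=prompt).text)
```

A richer prompt (examples, error-handling requirements) generally yields more usable generated code; the structure above keeps those additions explicit.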
DevBCN Vertex AI - Pipelines for your MLOps workflows (Márton Kodok)
In recent years, one of the biggest trends in application development has been the rise of Machine Learning solutions, tools, and managed platforms. Vertex AI is a managed, unified ML platform for all your AI workloads. On the MLOps side, Vertex AI Pipelines lets you adopt experiment pipelining beyond the classic build, train, eval, and deploy cycle. It is engineered for data scientists and data engineers, and it is a tremendous help for teams who don't have DevOps or sysadmin engineers, as infrastructure management overhead has been almost completely eliminated. Based on practical examples we will demonstrate how Vertex AI Pipelines scores high in terms of developer experience, how it fits custom ML needs, and how to analyze results. It is a toolset for a fully-fledged machine learning workflow: a sequence of steps in the model development and deployment cycle, such as data preparation/validation, model training, hyperparameter tuning, model validation, and model deployment. Vertex AI comes with all the classic resources plus an ML metadata store, a fully managed feature store, and a fully managed pipelines runner. Vertex AI Pipelines is a managed serverless toolkit, which means you don't have to fiddle with infrastructure or back-end resources to run workflows.
Vertex AI: Pipelines for your MLOps workflows (Márton Kodok)
In recent years, one of the biggest trends in application development has been the rise of Machine Learning solutions, tools, and managed platforms. Vertex AI is a managed, unified ML platform for all your AI workloads. On the MLOps side, Vertex AI Pipelines lets you adopt experiment pipelining beyond the classic build, train, eval, and deploy cycle. It is engineered for data scientists and data engineers, and it is a tremendous help for teams who don't have DevOps or sysadmin engineers, as infrastructure management overhead has been almost completely eliminated.
Based on practical examples we will demonstrate how Vertex AI Pipelines scores high in terms of developer experience, how it fits custom ML needs, and how to analyze results. It is a toolset for a fully-fledged machine learning workflow: a sequence of steps in the model development and deployment cycle, such as data preparation/validation, model training, hyperparameter tuning, model validation, and model deployment. Vertex AI comes with all the standard resources plus an ML metadata store, a fully managed feature store, and a fully managed pipelines runner.
Vertex AI Pipelines is a managed serverless toolkit, which means you don't have to fiddle with infrastructure or back-end resources to run workflows.
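The pipeline stages listed above (data preparation/validation, training, evaluation, deployment) can be sketched as a plain sequence of step functions. This is only a conceptual stand-in: real Vertex AI Pipelines are authored with the Kubeflow Pipelines (KFP) SDK and run on managed infrastructure, and the toy "model" below is just the mean of the data.

```python
def validate_data(data):
    """Data validation step: reject non-numeric inputs early."""
    assert all(isinstance(x, (int, float)) for x in data), "non-numeric input"
    return data

def train(data):
    """Training step: a toy constant model, the mean of the training data."""
    return sum(data) / len(data)

def evaluate(model, data):
    """Evaluation step: mean absolute error of the constant-mean model."""
    return sum(abs(x - model) for x in data) / len(data)

def run_pipeline(data):
    """Run the steps in order, passing each output to the next step."""
    data = validate_data(data)
    model = train(data)
    metric = evaluate(model, data)
    return {"model": model, "mae": metric}

result = run_pipeline([1.0, 2.0, 3.0])
```

In the managed service each step becomes a container component, outputs are tracked in the ML metadata store, and the runner handles scheduling and retries.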
Vertex AI - Unified ML Platform for the entire AI workflow on Google Cloud (Márton Kodok)
Vertex AI is a managed ML platform for practitioners to accelerate experiments and deploy AI models.
Enhanced developer experience
- Build with the groundbreaking ML tools that power Google
- Approachable from the non-ML developer perspective (AutoML, managed models, training)
- Eases the life of a data scientist/ML engineer (feature store, managed datasets, endpoints, notebooks)
- Infrastructure management overhead has been almost completely eliminated
- Unified UI for the entire ML workflow
- End-to-end integration for data and AI, with pipelines that solve complex ML tasks
- Explainable AI and TensorBoard to visualize and track ML experiments
Unleashing the Power of Generative AI.pdf (TomHalpin9)
Deck for session entitled "Unleashing the Power of Generative AI: Python API Integration with ChatGPT, DALL-E, and D-ID Studio" presented at PyCon Ireland Conference on November 11th 2023
Unleashing the Power of Generative AI.pdf (eoinhalpin99)
Slide deck for session named "Unleashing the Power of Generative AI: Python API Integration with ChatGPT, DALL-E, and D-ID Studio" that was presented on Nov 11th at the PyCon Ireland 2023 Conference.
This presentation describes some of the Open Source AI projects we are working on at the Center for Open Source, Data and AI Technologies (CODAIT), including the Model Asset Exchange (MAX), Fabric for Deep Learning (FfDL), and Jupyter Enterprise Gateway.
Giovanni Galloro - Make your applications see, understand and talk with Googl... (Codemotion)
We will explore some of the AI capabilities on GCP that are making Machine Learning accessible to any developer: from recognizing what's inside an image to understanding what your users are saying, how they feel about it, and answering their questions.
GDG DevFest Romania - Architecting for the Google Cloud Platform (Márton Kodok)
Learn about FaaS and PaaS architectural patterns that make use of Cloud Functions, Pub/Sub, Dataflow, Kubernetes, and platforms that hide the management of servers from the user and have changed how we develop and deploy software.
We discuss the difference between the event-driven approach, where you trigger a function whenever something interesting happens within the cloud environment, and the simpler HTTP approach; per-invocation quotas and pricing; and the advantages and disadvantages of serverless systems.
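The contrast between the two trigger styles can be sketched with two handler signatures in the style of Google Cloud Functions. The request and event objects are modeled here as plain dicts for illustration; the real runtime passes a Flask request object and a Pub/Sub event envelope (whose `data` field is base64-encoded, as below).

```python
import base64

def http_handler(request):
    """HTTP trigger: invoked once per incoming request (request modeled as a dict)."""
    name = request.get("name", "world")
    return f"Hello, {name}!"

def pubsub_handler(event, context):
    """Event trigger: fires whenever a Pub/Sub message arrives on the topic."""
    payload = base64.b64decode(event["data"]).decode("utf-8")
    return f"processed: {payload}"
```

The HTTP handler bills per request you invite; the event handler bills per message the platform delivers, which is why quotas and pricing per invocation matter when choosing between them.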
Optimizing your SparkML pipelines using the latest features in Spark 2.3 (DataWorks Summit)
This talk will highlight the recent additions of vectorized UDFs and parallel cross-validation in Apache Spark 2.3. Vectorized UDFs not only enhance performance, but they also open up more possibilities by using Pandas for the input and output of the UDF. Parallel cross-validation speeds up tuning ML models by exploiting your Spark cluster resources to the max. Bryan will discuss the details of Apache Arrow in Spark, how it is relevant to the rest of the big data ecosystem, and what is in store for the future. We will share performance results from using parallelism in cross-validation and some ongoing work on optimizing ML pipelines.
This talk will also touch upon how we are leveraging this work to enhance the end-to-end enterprise AI lifecycle in open source for developers and data scientists. Finally, we will highlight some of the other relevant projects at the Center for Open Source Data and AI Technologies and how you can contribute to them.
Speakers
Vijay Bommireddipalli, Program Director: Center for Open Source Data and AI Technologies, IBM
Bryan Cutler, Software Engineer - Center for Open Source Data and AI Technologies, IBM
Bhadale group of companies projects portfolio - This is a list of publicly shareable projects from the past 10 years. Technologies used are AI/ML, Scala, Spark, Akka, Play, IoT, Hadoop, React, JavaScript, and several other related ones.
The PPT contains the following content:
1. What is Google Cloud Study Jam
2. What is Cloud Computing
3. Fundamentals of cloud computing
4. What is Generative AI
5. Fundamentals of Generative AI
6. Brief overview of Google Cloud Study Jam
7. Networking session
MobiCloud: Towards Cloud Mobile Hybrid Application Generation using Semantica... (Amit Sheth)
Ajith Ranabahu, Amit Sheth, Ashwin Manjunatha, and Krishnaprasad Thirunarayan, 'Towards Cloud Mobile Hybrid Application Generation using Semantically Enriched Domain Specific Languages', International Workshop on Mobile Computing and Clouds (MobiCloud 2010), Santa Clara, CA, October 28, 2010.
Paper: http://knoesis.org/library/resource.php?id=865
Project: http://knoesis.wright.edu/research/srl/projects/mobi-cloud/
Machine learning at scale by Amy Unruh from Google (Bill Liu)
Presented at AI NEXTCon Seattle 1/17-20, 2018
http://aisea18.xnextcon.com
Join our free online AI group with 50,000+ tech engineers to learn and practice AI technology, including the latest AI news, tech articles/blogs, tech talks, tutorial videos, and hands-on workshops/codelabs on machine learning, deep learning, data science, and more.
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec... (DianaGray10)
Continuous accuracy and efficiency of Large Language Models (LLMs) are key to successfully building out your next AI-infused automation, regardless of business use case.
For our next Connector Corner webinar, we’ll explore how using a seamless AI integration process provides access to industry leading models, curated activities, and embeddings that help achieve operational efficiency.
Join us on March 26 to learn about:
Accessing large language models, hosted by UiPath
Reducing complexities of prompt-engineering, by using curated sets of activities
Assuring accuracy and safety, by building an AI Trust Layer to moderate the output of AI models, and their generated results.
Discovering what’s new in embeddings connectivity
Cultivating your AI knowledgebase using Vector Databases
Expect to see these use cases in action:
Leveraging UiPath hosted LLMs and activities
Document comparison using our LLM framework
Please stay tuned for additional use cases
Speakers:
Charlie Greenberg, host
George Roth, Technology Evangelist
Scott Schoenberger, Senior Product Manager
Koji Takimoto, Director Product Support
Tech leaders guide to effective building of machine learning products (Gianmario Spacagna)
Part 2/2 (Tech Leaders)
Data and Machine Learning (ML) technologies are now widespread and adopted by literally all industries. Although recent advancements in the field have reached an unthinkable level of maturity, many organizations still struggle with turning these advances into tangible profits. Unfortunately, many ML projects get stuck in a proof-of-concept stage without ever reaching customers and generating revenue. In order to effectively adopt ML technologies, enterprises need to build the right business cases as well as to be ready to face the inevitable challenges. In this talk, we will share common pitfalls, lessons learned, and best practices, while building different enterprise products. In particular, we will focus on the generic use case of ML as the core technology enabling customer-facing products regardless of the specific industry or application.
You will:
Understand if ML is the right solution for your business and set the right expectations;
Deal with the additional uncertainty of ML projects with respect to traditional software;
Build a balanced ML team and cover the broad spectrum of skills;
Know how to apply the scientific workflow in an agile development framework;
Learn how to turn research into production systems including engineering practices and tools;
Be able to leverage modern cloud and serverless architecture for scalable, autonomous and cheaper deployments.
GDG Heraklion - Architecting for the Google Cloud Platform (Márton Kodok)
Learn about cloud components, architecture overviews to build an app using GCP components.
You will get hands-on information on how to build highly scalable and flexible applications optimized to run in GCP on the same infrastructure that powers Google. We will discuss cloud concepts and highlight various design patterns and best practices.
By the end of the session you will have hands-on experience building a basic cloud application: a simple web tier powered by a highly distributed database, with background tasks executed on a pub/sub system, plus guidance on how to go to the next level with advanced concepts like an analytics warehouse, recommendation engines, and ML.
In this workshop we covered: an introduction to Generative AI and Large Language Models (LLMs); an explanation of AWS Foundation Models and their role in providing pre-trained LLMs; the benefits of leveraging LLMs in enterprises; deploying LLMs on AWS infrastructure, including infrastructure requirements and the available AWS services and tools; and a demo showcasing Text-to-Image and Text Summarization using Foundation Models, as well as utilising Retrieval Augmented Generation and LangChain with AWS tools for enterprise use cases.
Connect with me for interesting sessions in the future: https://www.linkedin.com/in/jayyanar/
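The Retrieval Augmented Generation flow mentioned in the workshop can be sketched in a few lines: retrieve the most relevant snippet, then assemble it into a grounded prompt. Real systems use embeddings and a vector store rather than keyword overlap, and the documents and prompt wording below are purely illustrative.

```python
import re

def tokens(text):
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents):
    """Return the document with the largest keyword overlap with the query."""
    q = tokens(query)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_prompt(query, context):
    """Ground the model's answer in the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Cloud Run runs stateless containers.",
    "BigQuery is a serverless data warehouse.",
]
prompt = build_prompt("What is BigQuery?", retrieve("What is BigQuery?", docs))
```

The resulting `prompt` would then be sent to an LLM; swapping keyword overlap for embedding similarity is what frameworks like LangChain automate.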
Large Language Models, Data & APIs - Integrating Generative AI Power into you... (NETUserGroupBern)
.NET User Group Meetup with Christian Weyer about Large Language Models, Data & APIs - Integrating Generative AI Power into your solutions - with Python and .NET
Building Instruqt, a scalable learning platform (Instruqt)
On February 15th I gave a talk on how we built Instruqt. We use Kubernetes, Terraform and Google Cloud, and in my talk I explain the benefits of using these tools and services correctly.
MLflow: Infrastructure for a Complete Machine Learning Life Cycle with Mani ... (Databricks)
ML development brings many new complexities beyond the traditional software development lifecycle. Unlike in traditional software development, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom “ML platforms” that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company’s internal infrastructure. In this session, we introduce MLflow, a new open source project from Databricks that aims to design an open ML platform where organizations can use any ML library and development tool of their choice to reliably build and share ML applications. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size. In this deep-dive session, through a complete ML model life-cycle example, you will walk away with:
MLflow concepts and abstractions for models, experiments, and projects
How to get started with MLflow
Understand aspects of MLflow APIs
Using tracking APIs during model training
Using MLflow UI to visually compare and contrast experimental runs with different tuning parameters and evaluate metrics
Package, save, and deploy an MLflow model
Serve it using MLflow REST API
What’s next and how to contribute
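The tracking concepts in the MLflow session above can be illustrated with a toy in-memory run tracker. It mirrors the shape of MLflow's `log_param`/`log_metric` calls but is not the real library, which additionally persists runs, artifacts, and packaged models to a tracking server.

```python
class Run:
    """Minimal stand-in for an experiment run: records params and metric history."""

    def __init__(self, run_id):
        self.run_id = run_id
        self.params = {}   # one value per parameter name
        self.metrics = {}  # full history per metric name, like MLflow's step series

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        self.metrics.setdefault(key, []).append(value)

run = Run("demo-001")
run.log_param("learning_rate", 0.01)
for loss in [0.9, 0.5, 0.3]:
    run.log_metric("loss", loss)
```

Comparing runs in the MLflow UI amounts to comparing these param dicts and metric histories across many `Run` objects, which is why recording them during training is the core habit the tool encourages.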
Discover BigQuery ML, build your own CREATE MODEL statement (Márton Kodok)
With BigQuery ML, you can build machine learning models without leaving the database environment and train them on massive datasets. In this demo session we are going to demonstrate common marketing Machine Learning use cases: how to build, train, eval, and predict with your own scalable machine learning models using SQL in Google BigQuery, addressing the following use cases:
- Customer segmentation + product cross-sell recommendation
- Conversion/purchase prediction
- Inference with the other 20+ built-in models
The audience will get first-hand experience with writing CREATE MODEL SQL syntax to build machine learning models such as:
- Multiclass logistic regression for classification
- K-means clustering
- Matrix factorization
- ARIMA time series predictions
...and more. Models are trained and accessed in BigQuery using SQL, a language data analysts know. This enables business decision-making through predictive analytics across the organization without leaving the query editor. In the end, the audience will learn how everyday developers can build, train, and run their own machine-learning models straight from the database query editor by issuing CREATE MODEL statements.
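A CREATE MODEL statement like the ones described above can be composed as a string and submitted with the BigQuery client. The dataset, table, and model names below are placeholders of our own choosing; executing the statement requires the google-cloud-bigquery client and a GCP project, so that part is shown only as a comment.

```python
def create_model_sql(model_name, model_type, label_col, source_table):
    """Compose a BigQuery ML CREATE MODEL statement over a source table."""
    return (
        f"CREATE OR REPLACE MODEL `{model_name}`\n"
        f"OPTIONS(model_type='{model_type}', input_label_cols=['{label_col}'])\n"
        f"AS SELECT * FROM `{source_table}`"
    )

# Hypothetical purchase-prediction model (logistic regression on a visits table):
sql = create_model_sql(
    "mydataset.purchase_model", "logistic_reg", "purchased", "mydataset.visits"
)

# To execute (requires credentials and a project):
#   from google.cloud import bigquery
#   bigquery.Client().query(sql).result()
```

Swapping `model_type` to `kmeans`, `matrix_factorization`, or `arima_plus` yields the other model families the session covers, with their respective OPTIONS.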
Cloud Run - the rise of serverless and containerization (Márton Kodok)
Two of the biggest trends in application development in recent years have been the rise of serverless and containerization, and Cloud Run has become a de facto container runtime service, taking code to production in seconds. Based on practical examples we will demonstrate how Cloud Run scores high in terms of developer experience. It differs from a functions runtime in that you can bring your own container, your own code, a folder, or binaries, and it pairs great with the container ecosystem: Cloud Build, Cloud Code, Artifact Registry, and Docker. Each Cloud Run service gets an out-of-the-box stable HTTPS endpoint, with TLS termination handled for you. Map your services to your own domains and use them for websites, backend APIs, or workflows; invoke and connect services with the newest protocols: HTTP/2, WebSockets, or gRPC (unary and streaming). Cloud Run is serverless containers, which means you don't have to fiddle with infrastructure or back-end resources to run applications.
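The "bring your own container" contract above boils down to one requirement: serve HTTP on the port Cloud Run injects via the PORT environment variable. A minimal standard-library sketch (the server start is gated behind an env var so the sketch does not block when imported; a real service would typically use a web framework):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def resolve_port(env=None):
    """Cloud Run injects the listening port via PORT; default to 8080 locally."""
    env = os.environ if env is None else env
    return int(env.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Gate the blocking server loop so importing this sketch stays side-effect free:
if os.environ.get("RUN_SERVER"):
    HTTPServer(("0.0.0.0", resolve_port()), Handler).serve_forever()
```

Package this behind any base image with a Dockerfile, push to Artifact Registry, and Cloud Run handles the HTTPS endpoint and TLS termination described above.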
Similar to Gen Apps on Google Cloud PaLM2 and Codey APIs in Action
Giovanni Galloro - Make your applications see, understand and talk with Googl...Codemotion
We will explore some of the AI capabilities present on GCP that are making Machine Learning accessible to any developer: from recognizing what’s inside an image to understand what your users are saying and how they feel about it and answering their questions.
GDG DevFest Romania - Architecting for the Google Cloud PlatformMárton Kodok
Learn about FaaS, PaaS architectural patterns that make use of Cloud Functions, Pub/Sub, Dataflow, Kubernetes and platforms that hides the management of servers from the user and have changed how we develop and deploy future software.
We discuss the difference between an event-driven approach - this means that you can trigger a function whenever something interesting happens within the cloud environment - and the simpler HTTP approach. Quota and pricing of per invocation, and the advantages and disadvantages of the serverless systems.
Optimizing your SparkML pipelines using the latest features in Spark 2.3DataWorks Summit
This talk will highlight the recent additions of vectorized UDFs and parallel cross-validation in Apache Spark 2.3. Vectorized UDFs not only enhance performance, but it also opens up more possibilities by using Pandas for input and output of the UDF. Parallel cross validation speeds up tuning ML models by exploiting your Spark cluster resources to the max. Bryan will discuss the details of Apache Arrow in Spark, how it is relevant to the rest of the big data ecosystem, and what is in store for the future. We will share performance results from using parallelism in cross-validation and some on-going work with optimizing ML pipelines.
This talk will also touch upon how we are leveraging this work to enhance the end to end Enterprise AI lifecycle in Open Source for developers and data scientists. Finally, we will highlight some of the other relevant projects at the Center for Open Source Data and AI Technologies and how you can contribute to the same.
Speakers
Vijay Bommireddipalli, Program Director: Center for Open Source Data and AI Technologies, IBM
Bryan Cutler, Software Engineer - Center for Open Source Data and AI Technologies, IBM
Bhadale group of companies projects portfolio - This is a list of public shareable projects for the past 10 years.Technologies used are AI / ML, Scala , Spark, Akka, Play, IoT, Hadoop, React, Javascript and several other related ones
The PPT contains the following content:
1. What is Google Cloud Study Jam
2. What is Cloud Computing
3. Fundamentals of cloud computing
4. what is Generative AI
5. Fundamentals of Generative AI
6. Breif overview on Google Cloud Study Jam.
7. Networking Session.
MobiCloud: Towards Cloud Mobile Hybrid Application Generation using Semantica...Amit Sheth
Ajith Ranabahu, Amit Sheth, Ashwin Manjunatha, and Krishnaprasad Thirunarayan, 'Towards Cloud Mobile Hybrid Application Generation using Semantically Enriched Domain Specific Languages', International Workshop on Mobile Computing and Clouds (MobiCloud 2010), Santa Clara, CA,October 28, 2010.
Paper: http://knoesis.org/library/resource.php?id=865
Project: http://knoesis.wright.edu/research/srl/projects/mobi-cloud/
Machine learning at scale by Amy Unruh from GoogleBill Liu
Presented at AI NEXTCon Seattle 1/17-20, 2018
http://aisea18.xnextcon.com
join our free online AI group with 50,000+ tech engineers to learn and practice AI technology, including: latest AI news, tech articles/blogs, tech talks, tutorial videos, and hands-on workshop/codelabs, on machine learning, deep learning, data science, etc..
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...DianaGray10
Continuous accuracy and efficiency of Large Language Models (LLM) is key to successfully building out your next AI-infused automation, regardless of business use case.
For our next Connector Corner webinar, we’ll explore how using a seamless AI integration process provides access to industry leading models, curated activities, and embeddings that help achieve operational efficiency.
Join us on March 26 to learn about:
Accessing large language models, hosted by UiPath
Reducing complexities of prompt-engineering, by using curated sets of activities
Assuring accuracy and safety, by building an AI Trust Layer to moderate the output of AI models, and their generated results.
Discovering what’s new in embeddings connectivity
Cultivating your AI knowledgebase using Vector Databases
Expect to see these use cases in action:
Leveraging UiPath hosted LLMs and activities
Document comparison using our LLM framework
Please stay tuned for additional use cases
Speakers:
Charlie Greenberg, host
George Roth, Technology Evangelist
Scott Schoenberger, Senior Product Manager
Koji Takimoto, Director Product Support
Tech leaders guide to effective building of machine learning productsGianmario Spacagna
Part 2/2 (Tech Leaders)
Data and Machine Learning (ML) technologies are now widespread and adopted by literally all industries. Although recent advancements in the field have reached an unthinkable level of maturity, many organizations still struggle with turning these advances into tangible profits. Unfortunately, many ML projects get stuck in a proof-of-concept stage without ever reaching customers and generating revenue. In order to effectively adopt ML technologies, enterprises need to build the right business cases as well as to be ready to face the inevitable challenges. In this talk, we will share common pitfalls, lessons learned, and best practices, while building different enterprise products. In particular, we will focus on the generic use case of ML as the core technology enabling customer-facing products regardless of the specific industry or application.
You will:
Understand if ML is the right solution for your business and set the right expectations;
Deal with the additional uncertainty of ML projects with respect to traditional software;
Build a balanced ML team and cover the broad spectrum of skills;
Know how to apply the scientific workflow in an agile development framework;
Learn how to turn research into production systems including engineering practices and tools;
Be able to leverage modern cloud and serverless architecture for scalable, autonomous and cheaper deployments.
GDG Heraklion - Architecting for the Google Cloud PlatformMárton Kodok
Learn about cloud components, architecture overviews to build an app using GCP components.
You will get hands-on information on how to build highly scalable and flexible applications optimized to run in GCP on the same infrastructure that powers Google. We will discuss cloud concepts and highlights various design patterns and best practices.
By the end of the session you will have hands-on experience to build a basic cloud application, it could be a simple web tier, powered by highly distributed database, background tasks executed on a pub/subsystem, and you get information how to go next level with advanced concepts like analytics warehouse, recommendation engines, and ML.
In this workshop we covered an introduction to Generative AI and Large Language Models (LLMs), an explanation of AWS Foundation Models and their role in providing pre-trained LLMs, the benefits of leveraging LLMs in enterprises, deploying LLMs on AWS Infrastructure including infrastructure requirements and available AWS services and tools, and a demo showcasing Text-to-Image and Text Summarization using Foundation Models, as well as utilising Retrieval Augmented Generation and LangChain with AWS tools for Enterprise use cases.
Connect with me for interesting session in future
@https://www.linkedin.com/in/jayyanar/
Large Language Models, Data & APIs - Integrating Generative AI Power into you...NETUserGroupBern
.NET User Group Meetup with Christian Weyer about Large Language Models, Data & APIs - Integrating Generative AI Power into your solutions - with Python and .NET
Building Instruqt, a scalable learning platformInstruqt
On February 15th I gave a talk on how we built Instruqt. We use Kubernetes, Terraform and Google Cloud, and in my talk I explain the benefits of using these tools and services correctly.
MLflow: Infrastructure for a Complete Machine Learning Life Cycle with Mani ...Databricks
ML development brings many new complexities beyond the traditional software development lifecycle. Unlike in traditional software development, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom “ML platforms” that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company’s internal infrastructure. In this session, we introduce MLflow, a new open source project from Databricks that aims to design an open ML platform where organizations can use any ML library and development tool of their choice to reliably build and share ML applications. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size. In this deep-dive session, through a complete ML model life-cycle example, you will walk away with:
MLflow concepts and abstractions for models, experiments, and projects
How to get started with MLFlow
Understand aspects of MLflow APIs
Using tracking APIs during model training
Using MLflow UI to visually compare and contrast experimental runs with different tuning parameters and evaluate metrics
Package, save, and deploy an MLflow model
Serve it using MLflow REST API
What’s next and how to contribute
Discover BigQuery ML, build your own CREATE MODEL statementMárton Kodok
With BigQuery ML, you can build machine learning models without leaving the database environment and training it on massive datasets. In this demo session we are going to demonstrate common marketing Machine Learning use cases of how to build, train, eval, and predict, your own scalable machine learning models using SQL language in Google BigQuery and to address the following use cases: - Customer Segmentation + Product cross sale recommendation - Conversion/Purchase prediction - Inference with other in-built >20 models The audience will get first-hand experience with how to write CREATE MODEL sql syntax to build machine learning models such as: - Multiclass logistic regression for classification - K-means clustering - Matrix factorization - ARIMA time series predictions ... and more Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision-making through predictive analytics across the organization without leaving the query editor. In the end, the audience will learn how everyday developers can build/train/run their own machine-learning models straight from the database query editor, by issuing CREATE MODEL statements
Cloud Run - the rise of serverless and containerization - Márton Kodok
Two of the biggest trends in application development in recent years have been the rise of serverless and containerization, and Cloud Run has become a de facto container runtime service, getting you to production in seconds. Based on practical examples we will demonstrate how Cloud Run scores high in terms of developer experience. It differs from a functions runtime: you can bring your own container, your own code, a folder, or binaries, and it pairs great with the container ecosystem: Cloud Build, Cloud Code, Artifact Registry, and Docker. Each Cloud Run service gets an out-of-the-box stable HTTPS endpoint, with TLS termination handled for you. Map your services to your own domains and use them for websites, backend APIs, or workflows; invoke and connect services with the newest protocols: HTTP/2, WebSockets, or gRPC (unary and streaming). Cloud Run is serverless containers, which means you don't have to fiddle with infrastructure or back-end resources to run applications.
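The "bring your own container" contract mentioned above boils down to listening on the port Cloud Run passes in the `PORT` environment variable; a minimal stdlib-only sketch (the greeting text is arbitrary):

```python
# Minimal sketch of the Cloud Run container contract: an HTTP server
# that listens on the port from the PORT env var (injected by Cloud Run).
# TLS termination and the stable HTTPS endpoint are handled for you.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Cloud Run"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    # Cloud Run sets PORT; default to 8080 for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()

# The container entrypoint would call main(); it is not invoked here so
# the sketch stays importable.
```

Packaged with any base image, this is enough for `gcloud run deploy` to give the service a stable HTTPS endpoint.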
BigQuery best practices and recommendations to reduce costs with BI Engine, S... - Márton Kodok
Best practices and recommendations for tuning BI Engine for your existing BigQuery workloads for cheaper and faster queries. Learn how we at REEA are orchestrating BI Engine reservations on a 5TB dataset (considered small for BigQuery, but with big cost savings and accelerated queries). We see many presentations aimed at big enterprises; here we showcase how our queries perform better at lower cost. We will address the top considerations for when to turn on BI Engine, how to use cloud orchestration to make this an automatic process, and how, combined with BigQuery and Data Studio query complexity, this can save precious development time, lower bills, and speed up queries.
Cloud Workflows: What's new in serverless orchestration and automation - Márton Kodok
Understand how Cloud Workflows resolves challenges in connecting services, HTTP-based service orchestration, and automation. We are going to dive deep into how serverless HTTP service automation works to automate step engines. Based on practical examples we will demonstrate the newest features that let you automate the cloud and integrate with any Google Cloud product without worrying about authentication.
Serverless orchestration and automation with Cloud Workflows - Márton Kodok
Join this session to understand how Cloud Workflows resolves challenges in connecting services, HTTP-based service orchestration, and automation. We are going to dive deep into how serverless HTTP service automation works to automate step engines. Based on practical examples we will demonstrate the built-in decision and conditional executions, subworkflows, support for external built-in API calls, and integration with any Google Cloud product without worrying about authentication. We are going to cover marketing, retail, industrial, and developer possibilities, such as event-driven marketing workflow execution, inventory chain operations, generating automatic state machines, orchestrating DevOps workflows, and automating the cloud.
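A Cloud Workflows definition is a YAML (or, equivalently, JSON) document of named steps; a minimal hedged sketch showing an HTTP call step and a built-in decision (the inventory URL is a placeholder):

```python
# Cloud Workflows definitions are written in YAML or JSON. This JSON
# sketch shows an HTTP call step, a built-in decision (switch), and a
# final return. The inventory URL is a placeholder.
import json

workflow = json.loads("""
{
  "main": {
    "steps": [
      {"getStock": {
          "call": "http.get",
          "args": {"url": "https://example.com/inventory"},
          "result": "stock"}},
      {"decide": {
          "switch": [
            {"condition": "${stock.body.count > 0}", "next": "inStock"}
          ],
          "next": "outOfStock"}},
      {"inStock": {"return": "available"}},
      {"outOfStock": {"return": "sold out"}}
    ]
  }
}
""")

# Steps execute in order unless a switch/next redirects control flow.
step_names = [next(iter(step)) for step in workflow["main"]["steps"]]
```

Connector calls to Google Cloud products take the place of `http.get` and are authenticated for you by the service account running the workflow.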
BigdataConference Europe - BigQuery ML - Márton Kodok
One of the hottest topics in database land these days is BigQuery ML: a new way to use machine learning on tabular data, straight on your tables, without leaving the query editor.
With BigQuery ML, you can build machine learning models without leaving the database environment and train them on massive datasets.
In this demo session, we are going to demonstrate common marketing machine learning use cases: how to build, train, evaluate, and predict with your own scalable machine learning models using SQL.
The audience will get first-hand experience with the CREATE MODEL SQL syntax to build machine learning models such as:
– Multiclass logistic regression for classification
– K-means clustering
– Matrix factorization
– ARIMA time series predictions
– Import TensorFlow models for prediction in BigQuery
Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision making through predictive analytics across the organization without leaving the query editor.
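For the matrix factorization item in the list above, the SQL looks roughly like the following sketch (table and column names such as `shop.ratings` are invented placeholders):

```python
# Hedged sketch of matrix factorization for recommendations in
# BigQuery ML. `shop.ratings` and its columns are invented placeholders.

create_mf_sql = """
CREATE OR REPLACE MODEL `shop.product_recommender`
OPTIONS (model_type = 'matrix_factorization',
         user_col = 'customer_id',
         item_col = 'product_id',
         rating_col = 'rating') AS
SELECT customer_id, product_id, rating
FROM `shop.ratings`
"""

# Top product recommendations per customer, still plain SQL:
recommend_sql = "SELECT * FROM ML.RECOMMEND(MODEL `shop.product_recommender`)"
```

As with the other model types, both training and recommendation stay inside the query editor.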
DevFest Romania 2020 Keynote: Bringing the Cloud to You - Márton Kodok
Next OnAir '20 in review
1. Real-time AI solutions, like anomaly detection, pattern recognition, and predictive forecasting
2. Recommendations AI: a rich experience for personalized product recommendations
3. Media Translation API: real-time speech translation from streaming audio
4. Lending DocAI: a solution powered by Document AI for the mortgage industry
5. Contact Center AI: support over chat/voice calls by identifying intent and providing assistance
Confidential VMs are a breakthrough technology that allows customers to encrypt their most sensitive data in the cloud while it's being processed
Cloud Run:
- Minimum idle instances
- Allocate 4 vCPUs and 4GiB memory
- Requests up to 60 minutes
- Server-side HTTP + gRPC streaming
- VPC access support
- External Load Balancing
Serverless orchestration and automation with Cloud Workflows (beta)
- Steps defined in YAML
- Built-in decision and conditional exec
- Subworkflows
- Support for external API calls
- Custom predicate for retries
Predict, recommend and forecast with BigQuery ML
CREATE MODEL syntax in BigQuery to run Machine Learning tasks
Supported models:
- K-means clustering for data segmentation
- Recommend with Matrix Factorization
- Perform time-series forecast
- Import TensorFlow models
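The time-series forecast bullet above maps to an ARIMA model plus `ML.FORECAST`; a hedged sketch with hypothetical table and column names:

```python
# Hedged sketch of a BigQuery ML time-series forecast. Table/columns
# are invented; ARIMA_PLUS is the current model name (earlier releases
# used 'ARIMA').

create_arima_sql = """
CREATE OR REPLACE MODEL `shop.sales_forecast`
OPTIONS (model_type = 'ARIMA_PLUS',
         time_series_timestamp_col = 'day',
         time_series_data_col = 'revenue') AS
SELECT day, revenue
FROM `shop.daily_sales`
"""

# Forecast the next 30 points with ML.FORECAST:
forecast_sql = """
SELECT *
FROM ML.FORECAST(MODEL `shop.sales_forecast`,
                 STRUCT(30 AS horizon))
"""
```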
Single interface for multiple services with API Gateway
Find Your Topic and Skill Level
Qwiklabs + New Tutorials Center
BigQuery ML - Machine learning at scale using SQL - Márton Kodok
With BigQuery ML, you can build machine learning models without leaving the data warehouse environment and train them on massive datasets. We are going to demonstrate how to build, train, evaluate, and predict with your own scalable machine learning models using standard SQL in Google BigQuery.
We will see how we can use the CREATE MODEL SQL syntax to build different models such as:
-Linear regression
-Multiclass logistic regression for classification
-K-means clustering
-Import TensorFlow models for prediction in BigQuery
We will see how we can apply these models on tabular data in retail and marketing use cases.
Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision making through predictive analytics across the organization without leaving the query editor.
Applying BigQuery ML on e-commerce data analytics - Márton Kodok
With BigQuery ML, you can build machine learning models without leaving the database environment and train them on massive datasets. We are going to demonstrate common marketing machine learning use cases we work on at REEA.net: how to build, train, evaluate, and predict with your own scalable machine learning models using SQL in Google BigQuery, addressing the following use cases:
Customer Segmentation
Customer Lifetime Value (LTV) prediction
Conversion/Purchase prediction
The audience will get first-hand experience with the CREATE MODEL SQL syntax to build machine learning models such as:
Multiclass logistic regression for classification
K-means clustering
Import TensorFlow models for prediction in BigQuery
Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision-making through predictive analytics across the organization without leaving the query editor.
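The customer segmentation use case above typically uses k-means; a hedged sketch (`shop.customer_stats` and its columns are invented placeholders):

```python
# Hedged sketch of k-means customer segmentation in BigQuery ML.
# `shop.customer_stats` and its columns are invented placeholders.

create_kmeans_sql = """
CREATE OR REPLACE MODEL `shop.customer_segments`
OPTIONS (model_type = 'kmeans',
         num_clusters = 4) AS
SELECT order_count, total_spend, days_since_last_order
FROM `shop.customer_stats`
"""

# ML.PREDICT assigns each customer a centroid_id (its segment):
segment_sql = """
SELECT centroid_id, *
FROM ML.PREDICT(MODEL `shop.customer_segments`,
                TABLE `shop.customer_stats`)
"""
```

The resulting `centroid_id` can feed directly into marketing audiences or LTV analysis without exporting the data.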
Supercharge your data analytics with BigQuery - Márton Kodok
Powering interactive data analysis requires massive architecture and know-how to build a fast real-time computing system. BigQuery solves this problem by enabling super-fast, SQL-like queries against petabytes of data using the processing power of Google’s infrastructure. We will cover its core features: creating tables, columns, and views; working with partitions and clustering for cost optimization; streaming inserts; user-defined functions; and several use cases for the everyday developer: funnel analytics, behavioral analytics, and exploring unstructured data.
The other part will be about BigQuery ML, which enables users to create and execute machine learning models in BigQuery using standard SQL queries. BigQuery ML democratizes machine learning by enabling SQL practitioners to build models using existing SQL tools and skills. BigQuery ML increases development speed by eliminating the need to move data.
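The partitioning and clustering cost optimizations mentioned above come down to DDL options; a hedged sketch (the `analytics.events` table and its columns are invented):

```python
# Hedged sketch of BigQuery partitioning + clustering DDL. Queries are
# billed per bytes scanned, so pruning to one partition cuts cost
# directly; clustering further narrows the blocks read.

ddl = """
CREATE TABLE `analytics.events`
(event_time TIMESTAMP, user_id STRING, name STRING, amount NUMERIC)
PARTITION BY DATE(event_time)
CLUSTER BY user_id, name
"""

query = """
SELECT name, COUNT(*) AS hits
FROM `analytics.events`
WHERE DATE(event_time) = '2023-11-01'  -- prunes to a single partition
GROUP BY name
"""
```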
Vibe Koli 2019 - A journey from the university bench to Google Developer Expert - Márton Kodok
VIBE Koli 2019 - Vibe Garázs - Gokart.
Márton Kodok, after completing his studies at Sapientia, built a career in IT and is today a member of the Google Developer Experts (GDE) program, placing him among the country's outstanding professionals. At VIBE Koli he helps you find your own path, showing that all it takes is willpower to do something different, and more, than your peers.
Google Cloud Platform Solutions for DevOps Engineers - Márton Kodok
Learn the DevOps essentials about cloud components and FaaS/PaaS architectural patterns that make use of Cloud Functions, Pub/Sub, Dataflow, and Kubernetes, and how we develop and deploy cloud software. You will get hands-on information on how to build, run, and monitor highly scalable and flexible applications optimized to run on GCP. We will discuss cloud concepts and highlight various design patterns and best practices.
6. DISZ - Scalability of web applications on the Google Cloud Platform - Márton Kodok
The talk covers how to build a flexible, highly scalable service on the platforms of cloud providers. How can a service that at launch only has to serve a few dozen or a few hundred users elastically scale to serve thousands, or orders of magnitude more? Sit back and admire the autoscaling feature on Black Friday. We will talk about virtualization, platform-level virtualization, and super-lightweight application containers, with near-real-time "packing" of workloads. Many components of the Google Cloud Platform will be presented. Banks, insurers, webshops, and more all see the cloud as their breakout point.
CodeCamp Iasi - Creating serverless data analytics system on GCP using BigQuery - Márton Kodok
Teaser: providing developers with a new way of understanding advanced analytics and choosing the right cloud architecture
The new buzzword is #serverless, as there are many great services that help us abstract away the complexity associated with managing servers. In this session we will see how serverless helps on large data analytics backends.
We will see how to architect for the cloud and add to an existing project the components that take us to a #serverless architecture: ingesting our streaming data and running advanced analytics on petabytes of data using BigQuery on Google Cloud Platform - all this next to an existing stack, without being forced to re-engineer our app.
BigQuery enables super-fast SQL/JavaScript queries against petabytes of data using the processing power of Google’s infrastructure. We will cover its core features: the SQL 2011 standard, working with streaming inserts, user-defined functions written in JavaScript, referencing external JS libraries, and several use cases for the everyday backend developer: funnel analytics, email heatmaps, custom data processing, building dashboards, extracting data using JS functions, and emitting rows based on business logic.
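A JavaScript user-defined function as mentioned above is declared inline in the query; a hedged sketch (the `greet` function and `analytics.users` table are invented for illustration):

```python
# Hedged sketch of a JavaScript UDF in BigQuery. The `greet` function
# and `analytics.users` table are invented placeholders.

udf_sql = """
CREATE TEMP FUNCTION greet(name STRING)
RETURNS STRING
LANGUAGE js AS '''
  return "Hello, " + name;
''';

SELECT greet(full_name) AS greeting
FROM `analytics.users`
"""
```

The JS body runs inside BigQuery's sandbox, so per-row logic can live next to the SQL that calls it.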
Gen Apps on Google Cloud PaLM2 and Codey APIs in Action
1. MLCon Berlin, November 2023
Gen Apps on Google Cloud
PaLM2 and Codey APIs in Action
Márton Kodok
Software Architect at REEA.net
Berlin 2023
2. 1. What is Vertex AI?
2. Generative AI / LLM / Foundation Models
3. Exploring Model Garden
4. Code Demo
5. Conclusions
Agenda
Gen Apps on Google Cloud: PaLM2 and Codey APIs in Action @martonkodok
3. ● Google Developer Expert on Cloud technologies (2016→)
● Champion of Google Cloud Innovators program (2021→)
● Among the top 3 Romanians on Stack Overflow, 205k reputation
● Crafting Cloud Architecture+ML backends at REEA.net
Articles: martonkodok.medium.com
Twitter: @martonkodok
Slideshare: martonkodok
StackOverflow: pentium10
GitHub: pentium10
About me
8. @martonkodok
list the top 10 incidents from email for service name "trademark", group results in a table, order by date, hour descending, extract duration, and analyze whether there is a repeated pattern
10. @martonkodok
find all emails from this year from Hidroelectrica that contain "factura" and list all invoice values for this year, with dates, in a table, ordered descending
11. Google Cloud AI Portfolio (overview diagram)
● Gen AI App Builder, for business users
● Vertex AI, the end-to-end ML platform, for AI practitioners and developers: Generative AI Studio, Generative AI APIs, Model Garden, and foundation models covering text, chat, code, image, video, and audio/music
● AI solutions: Contact Center AI, Healthcare AI, Discovery AI, Document AI, Conversation AI, Vertex AI Search
● Duet AI for Google Workspace and Duet AI for Google Cloud
● All running on Google Cloud infrastructure (GPUs/TPUs)
13. “ Vertex AI is a managed ML platform for developers
@martonkodok
14. Vertex AI: Managed unified ML platform
Fine-tuning, 1-click deploy
15. “ You can deploy models on Vertex AI
and get an HTTPS endpoint to do
inference rapidly and reliably.
16. Vertex AI provides tools to build with Gen AI
● Model Garden: task-specific AutoML and APIs, open source models, foundation models
● Generative AI Studio: tuning, adaptive layers, prompt design
● ML Platform: Data Science Workbench (experiment, train, deploy), MLOps
18. Model Garden
● Task-specific AutoML and APIs
● Open source models
● Foundation models
19. Model Garden: Foundation Models
Multi-task · Large-scale · Minimal training
20. Language Foundation Models
● PaLM 2 for Text: custom language tasks
● PaLM 2 for Chat: multi-turn conversations with session context
● Codey for Code Generation: improve coding and debugging
● Chirp: turning audio containing speech into a formatted text representation
● Imagen: write text prompts to generate new images or generate new areas of an existing image
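The PaLM 2 text model above is reached through a simple predict API. As a minimal sketch, this helper shapes the JSON body for a REST predict call against a text-bison model; the field names follow the public API, but the default parameter values here are illustrative assumptions:

```python
# Sketch: shaping a request body for a Vertex AI PaLM 2 text model
# ("text-bison") predict call. Parameter defaults are illustrative.

def build_text_request(prompt: str,
                       temperature: float = 0.2,
                       max_output_tokens: int = 256) -> dict:
    """Return the JSON body for a predict call against a PaLM 2 text model."""
    return {
        "instances": [{"prompt": prompt}],
        "parameters": {
            "temperature": temperature,
            "maxOutputTokens": max_output_tokens,
            "topP": 0.95,
            "topK": 40,
        },
    }

body = build_text_request("Summarize: Vertex AI is a managed ML platform.")
```

The same shape works for the other language models; only the instance fields change (for example, Codey code-generation models take a code prefix rather than a plain prompt).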
21. Introduction: Generative AI Studio
Console tool · Test generative models · Rapid prototyping
22. Generative AI Studio
23. Custom language prompt samples
1. Summarization
2. Classification
3. Extraction
4. Writing
5. Ideation
cloud.google.com/vertex-ai/docs/generative-ai/learn/prompt-samples
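The five task families above map naturally onto prompt templates. A minimal sketch; the template wording is our own illustration, not an official sample from the linked page:

```python
# Sketch: one illustrative prompt template per custom-language task family.
TEMPLATES = {
    "summarization": "Summarize the following text in one sentence:\n{text}",
    "classification": "Classify the sentiment of this review as positive or negative:\n{text}",
    "extraction": "Extract all invoice values with their dates from:\n{text}",
    "writing": "Write a short product description for:\n{text}",
    "ideation": "List five ideas related to:\n{text}",
}

def render(task: str, text: str) -> str:
    """Fill a task template with the user's text."""
    return TEMPLATES[task].format(text=text)
```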
25. Model Garden: Open Source Models
Deploy · Large-scale · Fine-tunable
26. Open Source models (53+ models)
An open source model ecosystem lets companies find the right model for their particular use case:
● Falcon-instruct (PEFT): popular assistant-style tasks; Falcon-40B-instruct on Vertex AI
● Llama 2: Meta's Llama 2 models on Vertex AI
● Code Llama: designed for general code synthesis and understanding, with a Python-focused variant
● More…
28. Challenging to serve in practice
1. LLMs have enabled previously unseen task paradigms
2. However, LLMs are challenging to deploy in real-world apps due to their large size
3. A 175B-parameter LLM requires at least 350 GB of GPU memory, using specialized infrastructure
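The 350 GB figure follows directly from the parameter count. A quick back-of-the-envelope check, assuming 2 bytes per parameter (fp16/bf16) and counting weights only; activations, KV cache, and framework overhead come on top:

```python
# Back-of-the-envelope: GPU memory needed just to hold model weights.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Weights-only memory in GB for a model with num_params parameters."""
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(175e9))     # 175B params in fp16/bf16 -> 350.0 GB
print(weight_memory_gb(175e9, 4))  # same model in fp32 -> 700.0 GB
```

This is why a managed inference API (next slide) is attractive: the serving infrastructure is someone else's problem.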
29. “ Vertex AI enables model inference via API
30. Vertex AI: Managed unified ML platform
Fine-tuning, 1-click deploy
43. Fine-tune Codey models
1. Can use your own code
2. Improve model quality
3. Generate code in language variants that follow your standards
4. Generate code for custom libraries
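A tuning run needs a supervised dataset of example pairs. As a sketch, assuming the JSONL format with `input_text`/`output_text` fields documented for Vertex AI model tuning (verify against current docs before relying on it):

```python
# Sketch: turn (description, code) pairs from your own codebase into a
# JSONL tuning dataset. Field names assume the documented Vertex AI
# supervised-tuning format: one JSON object per line.
import json

def to_tuning_jsonl(pairs) -> str:
    """pairs: iterable of (natural-language description, expected code)."""
    lines = [json.dumps({"input_text": desc, "output_text": code})
             for desc, code in pairs]
    return "\n".join(lines)

dataset = to_tuning_jsonl([
    ("Write a function that reverses a string", "def rev(s): return s[::-1]"),
])
```

The resulting file would then be uploaded to Cloud Storage and referenced when launching the tuning job.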
47. What’s included in Vertex AI? (platform overview diagram)
● Workflow stages: data readiness, feature engineering, training/HP-tuning, model serving, model monitoring, understanding/tuning, edge, model management
● Components: Data Labeling, AutoML models, DL Environment (DL VM + DL Container), Feature Store, Training, Experiments, Prediction, Notebooks, Pipelines (orchestration), Explainable AI, Hybrid AI, Metadata, Vision/Video/NLP/Translate models, Datasets, custom models, containers, Python, Endpoints, BigQuery ML and BigQuery models, publisher pre-trained models
● Gen AI additions: Model Garden, GenAI Studio, foundation LLMs (PaLM 2 API)
48. Vertex AI: Managed unified ML platform
Fine-tuning, 1-click deploy
50. “ At the end of the day, the largest model
is actually not the right answer
51. Vertex AI: From prompt samples to Fine-tuning
cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models
goo.gle/gen-ai-github
cloud.google.com/vertex-ai/docs/generative-ai/learn/prompt-samples
52. Vertex AI: Enhanced ML developer experience
1. Build with the groundbreaking ML tools that power Google
2. Model Garden provides a curated collection of 100+ models
3. Approachable from the non-ML developer perspective (managed models, fine-tuning, training)
4. Accelerate ML with tooling for pre-trained, open source, and custom models
5. Deploy to applications with just one click
6. End-to-end integration for data and AI, with build pipelines that outperform and solve complex ML tasks
54. Thank you. Q&A.
Twitter: @martonkodok
Reea.net: integrated web solutions driven by creativity to deliver projects.
Follow for articles:
martonkodok.medium.com
Slides available on:
slideshare.net/martonkodok