Azure Functions are great for a wide range of scenarios, including working with data on a transactional or event-driven basis. In this session, we'll look at how you can interact with Azure SQL, Cosmos DB, Event Hubs, and more, taking a lightweight but code-first approach to building APIs, integrations, ETL, and maintenance routines.
Building workflow solutions with Microsoft Azure and Cloud | Integration Monday | BizTalk360
Most will agree that a business process can be a workflow. But what do people think of when running workflows in the cloud, and in particular on Azure or the wider Microsoft Cloud? Microsoft Azure offers several options to build them: no-code/low-code options such as Power Automate and Logic Apps, and a code option with Durable Functions. In this session, we'll explore each and focus on building workflows with them. Furthermore, we'll see the differences and how each could potentially complement the others.
Get to know the two stateful programming models of Azure serverless compute, workflows and actors: how these models can simplify development, and how they enable stateful, long-running application patterns within Azure's compute environments.
So, you have IoT Devices connected to IoT Hub sending telemetry data into the Microsoft Azure cloud. Now what? This session will take you through setting up real-time stream processing of IoT data. We’ll look at integrating services like Azure Stream Analytics, Azure Functions, and Cosmos DB to build a highly scalable stream processing backend for any IoT solution. You’ll leave this session better prepared to handle real-time IoT stream processing in Azure; plus you’ll do it with less code by utilizing serverless Azure Functions.
Serverless technologies and capabilities are here and are more accessible now than ever.
The power of near-infinite scale and system capabilities has never been more accessible. This also affects traditional front-end development, as serverless technologies allow you to construct backend support for any frontend with ease and simplicity.
In this talk, we will demonstrate how to build a fully functional GraphQL endpoint for front-end applications using the Apollo Server and Client libraries, utilizing different cloud providers. We will also demonstrate using the Serverless.com framework to set up the required infrastructure as code to simplify and support this setup.
The video of the presentation (Hebrew):
https://youtu.be/8ba4cpdtK-8
Develop in ludicrous mode with Azure serverless | Lalit Kale
Today, every one of us wants to get things done fast, and serverless is a fantastic platform for doing exactly that: with serverless, you don't waste time on anything that isn't delivering business value. In this talk we'll create a microservice using Azure Functions and get introduced to the bigger picture of serverless computing.
I presented this session in Global Azure Bootcamp 2019 in Dublin. #GlobalAzure #AzureFunctions #Serverless
Logisland is an open-source event-mining platform based on Kafka/Spark that handles huge amounts of event and temporal data to find patterns and detect correlations. Useful for log mining in security, fraud detection, IoT, and performance & system supervision.
Observability foundations in dynamically evolving architectures | Boyan Dimitrov
Holistic application health monitoring, request tracing across distributed systems, instrumentation, business process SLAs - all of them are integral parts of today’s technical stacks. Nevertheless, many teams decide to integrate observability last, which makes it an almost impossible challenge, especially if you have to deal with hundreds or thousands of services. Starting early is therefore essential, and in this talk we are going to see how we can solve those challenges early and explore the foundations of building and evolving complex microservices platforms with respect to observability.
We are going to share some of the best practices and quick wins that allow us to correlate different telemetry systems and gradually build up towards more sophisticated use-cases.
We are also going to look at some of the standard AWS services, such as X-Ray and CloudWatch, that help us get going "for free", and then discuss more complex tooling and integrations, building up towards a fully integrated ecosystem. As part of this talk we are also going to share some of the lessons we have learned at Sixt on this topic, and introduce some of the solutions that help us operate our microservices stack.
Stephane Lapointe, Frank Boucher & Alexandre Brisebois: Les micro-services et... | MSDEVMTL
16 April 2016
Azure Group
Topic: Microservices and Azure Service Fabric
Speakers: Alexandre Brisebois, Microsoft; Stéphane Lapointe, Orckestra; and Frank Boucher, Lixar IT
We offer a full day on microservices and Azure Service Fabric, the goal being to learn the theory through a series of presentations and then put it all into practice with a hands-on portion and labs.
To take part, you must bring your own laptop with Visual Studio 2015 Update 2 and Service Fabric SDK 2.0.135 installed.
CQRS and Event Sourcing are popular architectural patterns that allow you to build effective event-driven microservices.
The basic idea of these patterns is to record each event that changes the state of the domain model into an event store.
This approach allows you to reduce service latency at any data scale, and to restore the system without losing any data.
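As a toy sketch of the idea above (hypothetical names, not tied to any particular event-sourcing framework): every state change is appended to an event log, and current state is rebuilt by replaying that log.

```python
# Minimal event-sourcing sketch: state changes are recorded as
# immutable events; current state is derived by replaying the log.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str      # e.g. "Deposited" or "Withdrawn"
    amount: int

@dataclass
class Account:
    events: list = field(default_factory=list)  # the event store

    def deposit(self, amount):
        self.events.append(Event("Deposited", amount))

    def withdraw(self, amount):
        self.events.append(Event("Withdrawn", amount))

    @property
    def balance(self):
        # Replay the full event log to restore state at any time.
        total = 0
        for e in self.events:
            total += e.amount if e.kind == "Deposited" else -e.amount
        return total

acct = Account()
acct.deposit(100)
acct.withdraw(30)
print(acct.balance)  # 70
```

Because the log is never mutated, the same replay can restore the system after a failure without losing data.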
The Art of The Event Streaming Application: Streams, Stream Processors and Sc... | confluent
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS).
Building upon this, I explain how to build common business functionality by stepping through the patterns for scalable payment processing; running it on rails (instrumentation and monitoring); and control flow patterns. Finally, all of these concepts are combined in a solution architecture that can be used at an enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs, and methods for governance and self-service. You will leave this talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
Kafka Summit London 2019 - The art of the event-streaming app | Neil Avery
Between spending hours (or days!) making sure you can code and test locally and the difficulties of keeping remote environments up to date, sometimes we find ourselves falling back on "It works on my machine!". Getting rid of the difficulties in making new development environments and maintaining testing infrastructure is really key to banishing the dreaded phrase. In this session, we'll take you through some of the recent tools and techs that will not only make your life easier but will mean you never have to say "works on my machine" ever again.
Too often, we chuck talented technologists into a promotion to the role of team lead or manager without any training or support. This can lead to a shaky foundation in their management style, many new managers quitting to go back to tech, and people being mishandled as managers learn. In this session, I'll take you through some of the concepts, frameworks, and support systems I think help a company grow great leaders. This is a great session for existing leaders but useful for those who are "management-curious" and want to see what's involved.
Join Steph and Chris as they run through Microsoft's transformation from shipping boxed products to always-on online services. Developer Velocity is a critical part of that journey to ensure the teams can keep delivering value to their end-users at scale. In this session, you will learn about some of the tips & tricks that Microsoft used along the way.
https://www.youtube.com/watch?v=d_4i0lxKtr0
The Microsoft Well Architected Framework For Data Analytics | Stephanie Locke
With more than a decade of organizations running large data & analytics workloads in the cloud, Microsoft have extended their architecture framework to provide best practices and guidance for businesses. In this session, we’ll introduce the 'Well Architected Framework', go into detail about effective data architectures, and give you concrete next steps you can take whether you already have a cloud data architecture or are planning your first implementation.
Sustainable manufacturing with AI
Improve your processes:
Defect detection to reduce waste
Predictive maintenance to improve energy efficiency
Generative design to reduce materials used in products
Process optimisation to improve energy usage
Inventory optimisation to reduce materials held
Improve your IT:
Go paperless
Move to carbon neutral clouds
Adopt green software products
Optimise your compute usage
Improve with Nightingale HQ
We’re doing bespoke and pilot projects with manufacturers and adjacent industries. Make your business more sustainable.
bit.ly/nhqaichat
Effective data wrangling starts with data collection. Make sure to think about Front-end validation, limiting free text, GDPR, security, accessibility, inclusion, and designing to minimise the amount of data collected.
The next part is effective data storage. You can store data in relational or non-relational data stores. Relational data solutions in Azure include Azure SQL Database, Azure Database for MariaDB, and Azure Database for PostgreSQL. Non-relational data solutions include Azure Storage and Azure Cosmos DB.
You might wrangle data during a data movement process between systems. A batch process moves multiple records, typically on a schedule. A streaming process operates on each individual event/message as it happens. In ETL (Extract Transform Load) you run a process to retrieve data from a system, process it, and then insert it into a new data store. In ELT (Extract Load Transform) you run a process to retrieve data from a system, insert it into a new data store, and then process it.
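The ETL/ELT distinction can be sketched in a few lines of Python, using sqlite3 as a stand-in for the target data store (illustrative only; table and column names are invented, and a real pipeline would use services like Azure Data Factory):

```python
# ETL vs ELT sketch: sqlite3 stands in for the target data store.
import sqlite3

source_rows = [(" Ada ", 36), (" Grace ", 45)]  # pretend "extract" step

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE etl_people (name TEXT, age INT)")
con.execute("CREATE TABLE elt_staging (name TEXT, age INT)")

# ETL: transform in the pipeline, then load only the clean rows.
clean = [(name.strip(), age) for name, age in source_rows]
con.executemany("INSERT INTO etl_people VALUES (?, ?)", clean)

# ELT: load the raw rows first, then transform inside the store itself.
con.executemany("INSERT INTO elt_staging VALUES (?, ?)", source_rows)
con.execute("""
    CREATE TABLE elt_people AS
    SELECT TRIM(name) AS name, age FROM elt_staging
""")

print(con.execute("SELECT name FROM etl_people ORDER BY name").fetchall())
print(con.execute("SELECT name FROM elt_people ORDER BY name").fetchall())
```

Both routes end with the same clean data; the difference is only where the transform runs.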
The modern data warehouse stack (used to combine data from multiple sources) includes Azure Data Factory, Azure Databricks, Azure Synapse Analytics and Azure Analysis Services.
Power Query is a useful language for processing data and can be used in Excel, Power BI, and Azure Data Factory.
Power BI is a self-service data modelling and visualisation tool that can connect to data stored in lots of different places. It has a data modelling area where Power Query can be used to build a dataset. Reports allow you to make multi-page analyses of a dataset. A dashboard combines visuals from one or more reports to provide a broader and simpler view.
SQL is a language that works with many relational and non-relational systems. You can use it to interact with data, data structures, and the access model for the data objects.
Finally, you can write code to wrangle data, with Python, R, and .NET being common data wrangling languages.
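As a small taste of code-based wrangling, here is a standard-library Python sketch (the machine/temperature data is made up) that parses raw CSV text, drops rows with missing readings, and aggregates per machine:

```python
# Tiny wrangling example with the Python standard library only:
# parse raw CSV, validate rows, and aggregate the valid readings.
import csv
import io

raw = "machine,temp\nA,21.5\nB,\nA,23.5\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Drop rows with a missing reading, then average per machine.
totals = {}
for r in rows:
    if r["temp"]:
        totals.setdefault(r["machine"], []).append(float(r["temp"]))

averages = {m: sum(v) / len(v) for m, v in totals.items()}
print(averages)  # {'A': 22.5}
```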
Digitalisation from the back office to the factory floor | Stephanie Locke
AI is a huge set of tools for making computers behave intelligently - Andrew Ng
70% of implementations fail to meet their stated aims. Following the holistic triple transformation approach taken by ‘lighthouses’ seems like a sensible approach to take. (Industry 4.0: Reimagining manufacturing operations after COVID-19, McKinsey)
Get started with AI:
- Start small with pilot projects to gain momentum
- Strong business case
- Build a team around this
- Provide broad training
- Longer-term develop an AI strategy that aligns with your business objectives
The ethical implications of our work can be staggering, but how do we balance commercial needs, ethical requirements, and productivity? Taking a pragmatic approach, starting with simple checklists and evolving to automation and structured processes, I look at how we can make our work more robust from an ethical perspective.
Step 0: Get alignment
Step 1: Make others think before you start
Step 2: Work robustly
Step 3: Maintain vigilance
Developer Velocity Series in association with Quest
DevOps: A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers.
DataOps: DataOps is an automated, process-oriented methodology, used by analytic and data teams, to improve the quality and reduce the cycle time of data analytics.
MLOps: MLOps […] enables data science and IT teams to collaborate and increase the pace of model development and deployment via monitoring, validation, and governance of machine learning models.
DevSecOps: DevSecOps automatically bakes in security at every phase of the software development lifecycle, enabling development of secure software at the speed of Agile and DevOps.
ChatOps: ChatOps is a collaboration model that connects people, tools, process, and automation into a transparent workflow. This flow connects the work needed, the work happening, and the work done in a persistent location staffed by the people, bots, and related tools.
NoOps: NoOps is the idea that the software environment can be so completely automated that there’s no need for an operations team to manage it.
GitOps: GitOps is a way of implementing Continuous Deployment for cloud native applications. It focuses on a developer-centric experience when operating infrastructure, by using tools developers are already familiar with, including Git and Continuous Deployment tools.
Developer Velocity is the Grand Unified Theory
Developer velocity: The ability to drive transformative business performance through software development
Top DVI companies are stronger financially
- 5x compound annual growth rate
- 60% more shareholder returns
- 20% higher operating margins
> Companies in the top quartile of the Developer Velocity Index (DVI) outperform others in the market by four to five times. Top-quartile companies also have 60 percent higher total shareholder returns and 20 percent higher operating margins.
Critical areas of focus
- People
Product management
Product management function
Product telemetry
Culture
Psychological safety
Collaboration and knowledge sharing
Continuous improvement culture
Talent management
Incentives
Capability building
- Processes
Working practices
Compliance practices
Security practices
Organisational enablement
Autonomous scoped teams
Dependency management
Culture
Continuous improvement
Talent management
Recruiting
Team health management
- Tooling
Planning tools
Collaboration tools
Development tools
DevOps tools
Cloud
Video at: https://www.quest.com/event/steph-lockes-developer-velocity-series-8148798/
Reproducible machine learning
Steph Locke
Reproducible if Data=Same + Analysis=Same
Replicable if Data=Different + Analysis=Same
Robust if Data=Same + Analysis=Different
Generalisable if Data=Different + Analysis=Different
It’s reproducible if…
With the same ✨ environment
With the raw ✨ data
With the unmodified ✨ code
= Produces exactly the same results
Benefits for You/Team
- Fewer headaches around environments and data
- Less rework of code
- Clear standards
- Easier operationalisation
Stakeholders
- Stable results
- Auditable
- Maintainable / correctable
- Easier operationalisation
Recommendations
FAIR data
Findable
- Unique name / ID
- Documented
Accessible
- Common access methods
- Open protocol
- Metadata stored even after data may be removed
Interoperable
- Common standards
- Terms defined
Reusable
- License and use rights specified
- Provenance documented
What to use
Use
- Logging
- Version control
- Fixed seeds*
- Dependency tracking
Framework examples
- MLFlow
- {drake}
- Azure ML
- FairML
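The "fixed seeds" recommendation above can be shown with a minimal standard-library sketch (the function and parameter names are invented for illustration): seeding an isolated random generator makes a "random" sampling step produce identical results on every run.

```python
# Why fixed seeds matter for reproducibility: the same seed plus the
# same code yields an identical "random" sample on every run.
import random

def sample_split(n, seed):
    rng = random.Random(seed)  # isolated, seeded generator
    # Keep roughly half the record indices at random.
    return [i for i in range(n) if rng.random() < 0.5]

run1 = sample_split(10, seed=42)
run2 = sample_split(10, seed=42)
assert run1 == run2  # reproducible: bit-for-bit the same selection
```

Without the seed, each run would select a different subset, and the "same data, same analysis" test of reproducibility would fail.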
If you're thinking about using AI monitoring in the workplace to help ensure appropriate measures are taken against COVID-19, you need to think about some of the practical implications of using AI.
Supporting article: https://blog.nightingalehq.ai/ethics-considerations-for-ai-monitoring-in-the-post-covid-workplace
Working with relational data in Microsoft Azure | Stephanie Locke
This slide deck overviews the key technologies businesses considering moving to the cloud may benefit from, and how they can apply strong governance and cost controls to build a more secure data environment.
What should you think about when determining whether your AI strategy should be to Build, Buy, or codify It Depends?
The makings of a business decision
Value
Alignment
Intellectual property
Capabilities
Compliance
Ethics
Value
The _____ line
Return on Investment
Differentiation against the competition
Value over time
Alignment
Overall
Product
AI
Intellectual property
Core vs peripheral
IP stance
Investors
Capabilities
Existing
Hiring
Ongoing overhead
Peripheral skills
Outsourcing approach
Bus factor
Compliance
Oversight
GDPR
Broader laws
Risk-appetite
Regulators
Ethics
Historic data bias
General bias
Misuse considerations
Internal awareness
Public awareness
Framework & handling
AI in manufacturing - a technical perspective | Stephanie Locke
AI in Manufacturing – a Technical Perspective (#AIFightsBack series)
Presented by Steph Locke, CEO @ Nightingale HQ
T: @theStephLocke
Li: /stephanielocke
Covers:
- Overview of AI: AI performs “cognitive” tasks
- Key areas of AI
+ Machine learning & data science
+ Robots may be AI
- AI & ML: What techniques are we most likely to use?
+ Core AI tasks
+ Computer vision
+ Speech
+ Language
+ Core ML tasks
+ Classification
+ Common classification methods
+ Decision trees: Identify ways to split data to get cleanest outcome groups
+ Regression: Predicts the chances of something happening
+ Neural networks: Uses multiple iterations to predict the chances
+ Anomaly detection
+ Anomaly detection types
+ Point anomalies: Unusual inside the whole dataset
+ Contextual anomalies: Unusual compared to neighbouring values
+ Collective anomalies: Connected records that are unusual
+ Patterns
+ K-means clustering: Group records based on “distance”
+ Hierarchical clustering: A multi-level grouping of records
+ Associations: Identify co-occurrences and correlations
- Critical infrastructure: What do we need to have in place?
+ Data
+ Data lake
- Conclusion
+ What should I do next?
+ Key areas of AI
+ The process
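The "group records based on distance" idea behind k-means clustering in the outline above can be illustrated with a toy one-dimensional version in pure Python (the function and data are invented for illustration; real work would use a library such as scikit-learn):

```python
# Toy 1-D k-means: group readings by distance to the nearest centre,
# then move each centre to the mean of its group, and repeat.
def kmeans_1d(points, centres, iterations=10):
    for _ in range(iterations):
        # Assign each point to its nearest centre...
        clusters = {c: [] for c in centres}
        for p in points:
            nearest = min(centres, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # ...then move each centre to the mean of its cluster.
        centres = [sum(v) / len(v) if v else c for c, v in clusters.items()]
    return sorted(centres)

readings = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
print(kmeans_1d(readings, centres=[0.0, 6.0]))  # [2.0, 9.0]
```

The two final centres land in the middle of the two natural groups of readings, which is exactly the "cleanest outcome groups" intuition from the outline.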
Resources
- 7 Quick-win AI Projects paper https://cdn2.hubspot.net/hubfs/5410772/7 Quick Wins/7 Quick Win AI Projects-1.pdf
- AI in Manufacturing article https://blog.nightingalehq.ai/ai-in-manufacturing
- More AI in manufacturing webinars https://nightingalehq.eventbrite.com/
- McKinsey on the future of pharma QC https://www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/digitization-automation-and-online-testing-the-future-of-pharma-quality-control
- Otis ONE https://www.otis.com/en/hk/otis-signature-service/otis-one/
- ZEISS investment https://www.zeiss.com/corporate/int/innovation-and-technology/zeiss-ventures/investment-strategy/artificial-intelligence-and-image-data.html
- Amgen manufacturing deviation https://www.bioprocessonline.com/doc/how-amgen-uses-ai-tools-to-improve-manufacturing-deviation-investigations-0001
- Edera Safety with Autodesk https://www.autodesk.com/solutions/generative-design/manufacturing?wvideo=0fgcc0xfxw
- Speedy Hire inventory management https://peak.ai/hub/success-story/speedy/
AI for manufacturing has huge potential. As well as clear AI use cases like robotics and automation, the wealth of data being consolidated into industrial time series via historian appliances presents an opportunity for further AI applications. Using the data being consolidated, we can build early warning systems for critical issues, optimise maintenance programs, and improve processes. Attend to learn about the use cases and the key AI terms you need to start identifying how much AI could save your business.
Overview of AI
------
What is AI?
- AI is just whatever computational task is hard to achieve right now. If it’s become “off-the-shelf”, it isn’t AI. (A cynical view)
- AI performs “cognitive” tasks
AI use cases
Chatbots
- Provide a new interaction route for potential customers
- Integrate into an omnichannel experience
- Use to inform or support activity
- Letterbox Lab case study
Content repackaging
- Turn content from one format to another intelligently
- Blog -> Video
- Video / voice -> Transcription
Lumen5.com
- Transform blog posts into videos
Metadata enrichment
- Discover new tags, topics, key points in text to generate stronger metadata to support search inside the site and beyond.
Microsoft Text Analytics AI
- Using Microsoft Text Analytics we can easily grab information about our content and even recommended links to sources.
Social listening
- Identify the direct and indirect mentions and conversation opportunities for you to engage with
Using RPA tools
- There are many social listening paid tools out there but tools like Microsoft Power Automate and Zapier can be used to bootstrap your social listening process by integrating off-the-shelf AI APIs.
Hyper-personalization
- Use extracted insights from a number of sources to generate content and funnel processes targeted at the individual to improve conversion rates.
CrystalKnows.com
- Get personality insights based on social and email contents to tailor communication.
Implementing AI quickly
How do I use AI now?
- Address demand
- Start small
- Gather your data
Technologies showcased:
- Microsoft Power Automate
- Microsoft Cognitive Services
- Lumen5
- Crystal
- Chatfuel
AI in manufacturing (#AIFightsBack series)
Watch on YouTube: https://youtu.be/8CJb14vMXjw
Manufacturers have been hit particularly hard during these times. In this webinar, Steph will focus on a set of practical ways to help you cope. Learn how AI can save you time and money in manufacturing, from optimising processes to predicting maintenance and enforcing quality control.
AI is already having a significant impact on manufacturing and those who are getting it right will reap real benefits. It’s estimated that there is a 4-10% EBITDA increase from predictive maintenance AI solutions alone. AI is set to become a key differentiator in manufacturing processes, and you need to stay ahead of the competition. This webinar will give you robust practical insights and real use cases.
# Overview of AI
## What is AI?
> AI is just whatever computational task is hard to achieve right now. If it’s become “off-the-shelf”, it isn’t AI.
(A cynical view)
## AI performs “cognitive” tasks
- Reasoning: Learning and forming conclusions from imperfect data
- Understanding: Interpreting the meaning of data including text, voice, and images
- Interacting: Engaging with people in natural ways, such as speech
## ZEISS Investments
ZEISS calls out AI in healthcare and manufacturing, especially quality control, as key technologies they are looking to invest in as part of their corporate strategy.
# Key areas of AI
## Expert systems or data-driven?
Experts
- Understands domain
- Has already learnt rules or developed them
- Can provide rules to handle the future
Data
- Represents the domain
- Includes past processes and consequences
- Assumes future is like the past
## Machine learning & data science
- Artificial Intelligence – Cognitive functions
- Machine Learning – Learning from data
- Deep Learning – Adaptive learning from data
## Robots may be AI
- Does it do the same thing every time?
- Can it handle variation?
- Does it “see” and vary its actions based on inputs?
- How autonomous is it?
# AI use cases
## Use cases
- Quality control
- Generative design
- Procurement
+ Stock forecasting
+ Supply chain analytics
+ Demand prediction
- Production
+ Predictive Maintenance
+ Process control & optimisation
- HR
+ Recruiting automation
- Finance
+ Automated accounting
+ Asset allocation
+ Reporting and forecasting
- Multi-function
+ Robotic Process Automation
+ Accessible Meetings
## Quality control
- Use data to uncover signals that lead to poor output
- Monitor for signals and identify products for QC
- Use AI to perform some or all QC checks
2. What we’re here to talk about
• What are Azure Functions?
• How to position serverless for your workloads
• Using Azure Functions for data processing
6. No infrastructure management
Developers can just focus on their code, without needing to worry about provisioning and managing infrastructure
Instant, event-driven scalability
Application components react to events and triggers in near real-time with virtually unlimited scalability
Pay-per-use
Only pay for what you use: billing is typically calculated on the number of function calls, code execution time, and memory used*
*Supporting services, like storage and networking, may be charged separately.
7. Functions-as-a-Service programming model: use functions to achieve true serverless compute
Single responsibility
Functions are single-purposed, reusable pieces of code that process an input and return a result
Short-lived
Functions don’t stick around when finished executing, freeing up resources for further executions
Stateless
Functions don’t hold any persistent state and don’t rely on the state of any other processes
Event-driven and scalable
Functions respond to predefined events, and are instantly replicated as many times as needed
8. An event-based, serverless compute experience that accelerates app development
Integrated programming model
Use built-in triggers and bindings to define when a function is invoked and to what data it connects
End-to-end development experience
Take advantage of a complete, end-to-end development experience with Functions, from building and debugging locally on major platforms like Windows, macOS, and Linux to deploying and monitoring in the cloud
Hosting options flexibility
Choose the deployment model that best fits your business needs without compromising development experience
Fully managed and cost-effective
Automated and flexible scaling based on your workload volume, keeping the focus on adding value instead of managing infrastructure
9. Integrated programming model
Azure Functions features input/output bindings which provide a means of pulling data from or pushing data to other services. These bindings work for both Microsoft and third-party services without the need to hard-code integrations.
(Diagram: a trigger delivers a trigger object and input bindings deliver input objects to your code, which returns an output object through an output binding.)
10. The “Old” Way - Pseudocode
func Run()
{
var connectionString = CloudConfigurationManager.GetSetting("storage:connection")
var storageAccount = CloudStorageAccount.Parse(connectionString)
var eventHubClient = EventHubClient.CreateFromConnectionString(eventHubConnection)
func Poll()
{
var queueClient = storageAccount.CreateCloudQueueClient()
var queue = queueClient.GetQueueReference("myqueue-items")
queue.CreateIfNotExists()
var msg = queue.PeekMessage().AsString
var tableClient = storageAccount.CreateCloudTableClient()
var table = tableClient.GetTableReference("people")
table.CreateIfNotExists()
var customer = table.Execute(TableOperation.Retrieve<Customer>("Customer", msg))
// do something with customer - business logic goes here.
eventHubClient.Send(new EventData(...))
queue.DeleteMessage(msg);
Sleep(10 seconds)
Poll()
}
Poll()
}
11. Triggers - Pseudocode
func Run([QueueTrigger("myqueue-items")] string msg)
{
var connectionString = CloudConfigurationManager.GetSetting("storage:connection")
var storageAccount = CloudStorageAccount.Parse(connectionString)
var eventHubClient = EventHubClient.CreateFromConnectionString(eventHubConnection)
var tableClient = storageAccount.CreateCloudTableClient()
var table = tableClient.GetTableReference("people")
table.CreateIfNotExists()
var customer = table.Execute(TableOperation.Retrieve<Customer>("Customer", msg))
// do something with customer - business logic goes here.
eventHubClient.Send(new EventData(...))
}
12. Inputs - Pseudocode
func Run([QueueTrigger("myqueue-items")] string myQueueItem,
[Table("people", "my-partition", "{queueTrigger}")] Customer customer)
{
var eventHubClient = EventHubClient.CreateFromConnectionString(eventHubConnection)
// do something with customer - business logic goes here.
eventHubClient.Send(new EventData(...))
}
13. Outputs - Pseudocode
[return: EventHub("event-hub", Connection = "EventHubConnection")]
func EventData Run([QueueTrigger("myqueue-items")] string myQueueItem,
[Table("MyTable", "MyPartition", "{queueTrigger}")] Customer customer)
{
// do something with customer - business logic goes here.
return new EventData(...)
}
14. Streamlining connections and improving security
Use Managed Identities for downstream connections and for clients that invoke Functions
Leverage Azure Key Vault for services without support for MI
Managed Identities and Key Vault secrets allow others to manage access and minimise the risk of leaked credentials
Use Azure Functions v4 for MI support
15. Managed Identities + Infrastructure as Code FTW
Step 1: Assign the Managed Identity access to resources
Step 2: Add simplified values to app settings
Step 3: Use simple connection references in Functions
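As a sketch of steps 2 and 3: identity-based connections replace full connection strings with double-underscore-suffixed app settings that the bindings resolve through the Managed Identity. The namespace and account names below are hypothetical examples:

```json
{
  "EventHubConnection__fullyQualifiedNamespace": "contoso-events.servicebus.windows.net",
  "CorpDB__accountEndpoint": "https://contoso-cosmos.documents.azure.com:443/"
}
```

With settings like these in place, a binding can reference just the prefix (e.g. Connection = "EventHubConnection") and no secret is stored anywhere.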
16. DEMO: Creating an Azure Function project
IaC (Bicep), VS Code (Azure Functions and Bicep extensions), Azure Functions Core Tools, GitHub Copilot
18. Automation of scheduled tasks
SCENARIO: Financial services. A customer database is analyzed for duplicate entries every 15 minutes, to avoid multiple communications being sent out to the same customers.
EXAMPLE: A function cleans a database every 15 minutes, deduplicating entries based on business logic.
19. Handling data with a schedule
Timer.cs
[FunctionName("TimerTriggerCSharp")]
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger log)
{
// Business logic goes here…
}
Use NCRONTAB specifications for schedules
Schedules can be managed in appsettings to put all schedules in a single location
Use inputs and outputs to handle whatever you need to do
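The schedule string uses NCRONTAB, which adds a seconds field to standard cron: {second} {minute} {hour} {day} {month} {day-of-week}. A few illustrative expressions:

```
0 */5 * * * *    every 5 minutes (as in the slide above)
0 0 * * * *      at the top of every hour
0 30 9 * * 1-5   at 09:30 on weekdays
0 0 0 1 * *      at midnight on the first day of each month
```

To keep schedules in one place, the attribute can also reference an app setting, e.g. [TimerTrigger("%TimerSchedule%")], where TimerSchedule is a hypothetical setting name defined in configuration.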
20. Real-time stream processing
SCENARIO: ISV. Huge amounts of telemetry data are collected from a massive cloud app. That data is processed in near real-time and stored in a DB for use in an analytics dashboard.
EXAMPLE: App or device producing data → Event Hubs ingests telemetry data → A function processes the data and sends it to Cosmos DB → Data used for dashboard visualizations.
21. Handling events
Queue.cs
public static class QueueFunctions
{
[FunctionName("QueueTrigger")]
public static void QueueTrigger(
[QueueTrigger("items")] string myQueueItem,
ILogger log)
{
// Business logic goes here…
}
}
Use a trigger that listens to an event publisher
Process messages with a schema to improve quality
Choose to process single messages or micro-batches
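For the queue trigger, the single-message vs micro-batch choice is largely driven by host.json concurrency settings; a sketch, using what I recall as the documented default values:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 16,
      "newBatchThreshold": 8,
      "maxDequeueCount": 5
    }
  }
}
```

batchSize controls how many messages are fetched at once, newBatchThreshold controls when the next batch is fetched, and maxDequeueCount caps retries before a message is moved to the poison queue.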
22. Handling data on change
SCENARIO: Financial Services
EXAMPLE: Colleagues use mobile banking to reimburse each other for lunch: the person who paid for lunch requests payment through his mobile app, triggering a notification on his colleagues’ phones.
23. Handling data when there are changes
CosmosDB.cs
[FunctionName("CosmosTrigger")]
public static void Run([CosmosDBTrigger(
databaseName: "CorpDB",
containerName: "CorpDB",
Connection = "CorpDB",
LeaseContainerName = "leases")]IReadOnlyList<Person> documents,
ILogger log)
{
// Business logic goes here…
}
Use a trigger for a source that has CDC – primarily Cosmos DB and Azure SQL
Process messages with a schema to improve quality
Use inputs to get additional record sets to support processing
24. Handling data on request
SCENARIO: Professional Services
EXAMPLE: A SaaS solution provides extensibility through webhooks, which can be implemented through Functions, to automate certain workflows.
25. Handling data when requested
HTTP.cs
[FunctionName("HttpTriggerCSharp")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
HttpRequest req, ILogger log)
{
// Business logic goes here…
}
Use HTTP triggers to create APIs or webhook-driven activities
Use OpenAPI decorators to add documentation to your functions
Use Managed Identity on resources that connect, or do pass-through auth where possible
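For the in-process C# model, the "OpenAPI decorators" bullet can be sketched with attributes from the Microsoft.Azure.WebJobs.Extensions.OpenApi package. The operation and parameter names below are hypothetical, and the exact attribute signatures should be checked against the package version you use:

```csharp
// Sketch only: "getCustomer" and the "id" parameter are hypothetical examples
[FunctionName("GetCustomerHttp")]
[OpenApiOperation(operationId: "getCustomer", tags: new[] { "customers" })]
[OpenApiParameter(name: "id", In = ParameterLocation.Query, Required = true, Type = typeof(string))]
[OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "application/json", bodyType: typeof(string))]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
    ILogger log)
{
    // Return the requested id, or 400 if the documented required parameter is missing
    string id = req.Query["id"];
    return string.IsNullOrEmpty(id)
        ? (IActionResult)new BadRequestResult()
        : new OkObjectResult(id);
}
```

The attributes feed a generated swagger.json endpoint, so the documentation stays next to the code it describes.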
26. Workflows and orchestration with Durable Functions
Durable Functions is an extension of Azure Functions that lets you write stateful functions in a serverless compute environment
PATTERNS / USE CASES
• Manageable sequencing + error handling/compensation
• Fanning out and fanning in
• External events correlation
• Flexible automated long-running process monitoring (Start / Get status)
• HTTP-based async long-running APIs
• Human interaction
27. Handling data with multiple steps
Durable.cs
[FunctionName("Chaining")]
public static async Task<object> Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var x = await context.CallActivityAsync<object>("F1", null);
return await context.CallActivityAsync<object>("F2", x);
}
Orchestrate complex or stateful data flows using Durable Functions
Use for different cases like aggregating, fanning out, human in the loop, or chaining functions
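As one sketch of the fan-out case mentioned above, an orchestrator can start activities in parallel and aggregate their results when all have completed; "GetFileList" and "ProcessFile" are hypothetical activity names:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FanOutFanIn
{
    // Hypothetical orchestrator: fans out one activity per file, then fans in
    [FunctionName("FanOutFanIn")]
    public static async Task<long> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var files = await context.CallActivityAsync<string[]>("GetFileList", null);

        // Fan out: schedule all activities without awaiting them individually
        var tasks = files.Select(f => context.CallActivityAsync<long>("ProcessFile", f));

        // Fan in: resume when every activity has completed, then aggregate
        var results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}
```

The orchestrator only schedules and aggregates; the per-file work runs in ordinary activity functions, which is what lets the runtime scale them out.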
30. What we talked about
• What are Azure Functions?
• How to position serverless for your workloads
• Using Azure Functions for data processing
31. Try it yourself
Learn with our Cloud Skills Challenge: aka.ms/sqlbits-dwf
Check out our repo to see things in detail: aka.ms/sqlbits-dwf-demo
Give it a go with free Azure resources: Free Services