Supercharge your software development with Azure OpenAI Service! The Azure cloud platform provides access to cutting-edge AI models for diverse tasks: generating content, translating languages, and even writing code. Use data grounding and fine-tuning to adapt models to your specific needs. Discover how Azure OpenAI Service accelerates innovation and injects intelligence into your software creations.
2. Hany Saad
Technology professional with a rich background in software
engineering and development. Currently working as a
Custom Development (.NET) TSP at ITWorx.
Previously a lecturer and training manager at the
Information Technology Institute (ITI), Ministry of
Communications and Information Technology, where he
contributed significantly to ICT capacity building in Egypt.
Has held various other positions with different
organizations over more than 18 years.
https://www.linkedin.com/in/hanysaad
TSP (Technology Solution Professional) @ITWorx
3. Microsoft Azure
Microsoft Azure is a comprehensive cloud platform
offering services for computing, storage, and
networking, enabling users to build, deploy, and
manage applications through a global network of
data centers.
5. AZURE OPENAI SERVICE
Azure OpenAI Service is a cloud-based platform provided by
Microsoft that integrates OpenAI's advanced AI models,
including GPT (Generative Pre-trained Transformer), into
Azure's cloud services. This service allows developers and
businesses to incorporate powerful natural language
processing, generation, and understanding capabilities into
their applications, leveraging the scalability and reliability of
Azure's infrastructure.
6. AZURE OPENAI SERVICE - MODELS
Azure OpenAI Service models
Model | Description
GPT-4 | A set of models that includes GPT-4 and GPT-4 Turbo Preview.
GPT-3.5 | A set of models that improve on GPT-3 and can understand and generate natural language and code.
Embeddings | A set of models that can convert text into numerical vector form to facilitate text similarity.
DALL-E (Preview) | A series of models in preview that can generate original images from natural language.
Whisper (Preview) | A series of models in preview that can transcribe and translate speech to text.
Text to speech (Preview) | A series of models in preview that can synthesize text to speech.
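The Embeddings models above turn text into vectors so that "similar meaning" becomes "nearby vectors". The sketch below shows the similarity side of that idea with cosine similarity over toy, hand-written vectors; real embeddings would come from an Embeddings model deployment and have far more dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional vectors standing in for real embedding output
# (actual embedding vectors are much larger, e.g. ~1,500 dimensions).
doc_vec = [0.1, 0.3, 0.7, 0.2]
query_vec = [0.1, 0.25, 0.6, 0.3]
unrelated_vec = [0.9, -0.4, 0.0, 0.1]

# The semantically related pair should score higher than the unrelated one.
print(cosine_similarity(doc_vec, query_vec) > cosine_similarity(doc_vec, unrelated_vec))
```

In practice you would embed documents once, store the vectors, and rank them against an embedded query with exactly this kind of similarity score.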
7. AZURE OPENAI SERVICE – FINE TUNING MODELS
Fine-tuning:
A machine learning process in which a pre-trained model is further trained on a new, typically smaller
dataset to specialize or improve its performance on specific tasks. This process leverages the knowledge the
model gained during its initial training, making it more effective and efficient for particular applications than
training a model from scratch.
Customize a model with fine-tuning
Azure OpenAI Service lets you tailor its models to your own datasets by using a process known as fine-tuning.
The following models support fine-tuning:
• gpt-35-turbo-0613
• gpt-35-turbo-1106
• babbage-002
• davinci-002
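For the gpt-35-turbo models listed above, training data is supplied as a JSONL file in which each line holds one complete chat exchange. The sketch below prepares such a file; the example dialogues are invented, and the upload and job-creation API calls are only noted in comments because they need a live Azure OpenAI resource:

```python
import json
from pathlib import Path

# Each training example for a gpt-35-turbo fine-tune is one JSON object
# per line, holding a full chat exchange (system / user / assistant).
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support bot."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise support bot."},
        {"role": "user", "content": "Where can I view my invoices?"},
        {"role": "assistant", "content": "Go to Billing > Invoices in the portal."},
    ]},
]

path = Path("training_data.jsonl")
path.write_text("\n".join(json.dumps(e) for e in examples), encoding="utf-8")

# The file would then be uploaded and a fine-tuning job started through the
# Azure OpenAI API (e.g. the openai SDK's files.create and
# fine_tuning.jobs.create calls) -- omitted here since it needs a live resource.
```

A real training set needs many more examples than this, but each one follows the same per-line shape.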
8. COMPARING AZURE OPENAI AND OPENAI
Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, DALL-E,
Whisper, and text-to-speech models, with the security and enterprise promise of Azure.
Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the
other.
With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as
OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.
10. AZURE OPENAI SERVICE – KEY CONCEPTS
• Prompt: a text input given to an AI model to elicit a specific response or output, guiding the model's
generation process or decision-making.
• Completion: the output or response generated by an AI model from a given prompt, representing the
model's attempt to complete, answer, or extend the input it received.
• Tokens: Azure OpenAI processes text by breaking it down into tokens. Tokens can be whole words or
just chunks of characters. For example, the word "hamburger" is broken into the tokens "ham", "bur" and
"ger", while a short, common word like "pear" is a single token. Many tokens start with a whitespace, for
example " hello" and " bye".
• Image tokens: The token cost of an input image depends on two main factors: the size of the image and the
detail setting (low or high) used for each image.
• Resources: Azure OpenAI is a new product offering on Azure. You can get started with Azure OpenAI the
same way as any other Azure product where you create a resource, or instance of the service, in your Azure
Subscription.
• Deployments: Once you create an Azure OpenAI Resource, you must deploy a model before you can start
making API calls and generating text.
• Endpoint: a URL used to access OpenAI models on Azure, enabling applications to send requests and
receive AI-generated responses via API calls.
• RAG: RAG, or Retrieval-Augmented Generation, is a methodology in natural language processing that
combines the retrieval of relevant documents or data with a generative model to enhance the generation of text.
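The RAG concept above can be illustrated in a few lines: retrieve the most relevant documents, then ground the prompt in them before sending it to the model. This sketch uses a naive keyword-overlap score and an invented three-document corpus; a production system would use embeddings and a vector index for retrieval:

```python
# Hypothetical mini corpus standing in for an indexed document store.
documents = [
    "Azure OpenAI deployments are created per model inside a resource.",
    "Tokens are chunks of characters; prompts and completions both consume them.",
    "Fine-tuning adapts a pre-trained model to a smaller, task-specific dataset.",
]

def retrieve(query, docs, k=1):
    # Naive relevance score: how many query words appear in the document.
    # Real RAG retrieval would use embedding similarity instead.
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    # Ground the generation step in the retrieved context.
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

query = "What does fine-tuning do?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
```

The resulting prompt, context plus question, is what gets sent to the deployed model in place of the bare question.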
11. AZURE OPENAI SERVICE – GETTING STARTED
1. Prerequisites
1. An Azure subscription
2. Access to Azure OpenAI in the selected Azure subscription.
• Currently, you must submit an application to access Azure OpenAI Service. To apply for access,
complete this form
2. Create a resource
3. Configure network security
4. Deploy a model
1. Sign in to Azure OpenAI Studio
2. Choose the subscription and the Azure OpenAI
resource to work with
3. Select Create new deployment and
configure the needed fields
Resources:
Create and deploy an Azure OpenAI Service resource
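Steps 2 and 4 above can also be done from the Azure CLI rather than the portal. In this sketch the resource name, resource group, region, deployment name, and model version are all placeholders, and it assumes your subscription already has Azure OpenAI access approved:

```shell
# Step 2: create an Azure OpenAI resource (placeholder names and region).
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-rg \
  --kind OpenAI \
  --sku S0 \
  --location eastus

# Step 4: deploy a model into the resource (model name/version are examples).
az cognitiveservices account deployment create \
  --name my-openai-resource \
  --resource-group my-rg \
  --deployment-name my-gpt35-deployment \
  --model-name gpt-35-turbo \
  --model-version "0613" \
  --model-format OpenAI \
  --sku-capacity 1 \
  --sku-name Standard
```

The deployment name chosen here is what later API calls reference, not the underlying model name.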
12. AZURE OPENAI SERVICE – MAKING CALLS
You can start making calls to your deployed model via:
• From Language Studio
• Language Studio is a set of UI-based tools that lets you explore, build, and integrate features from Azure
AI Language into your applications.
• Getting started with Language Studio
• Using SDKs
• Getting started with the Azure AI SDK
• Azure OpenAI client library for .NET
• Using REST API
• Azure OpenAI Service REST API
13. AZURE OPENAI SERVICE – MAKING CALLS (REST API)
Create a completion
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
Parameter | Type | Required? | Default | Description
prompt | string or array | Optional | <|endoftext|> | The prompt or prompts to generate completions for, encoded as a string or an array of strings. <|endoftext|> is the document separator the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document.
max_tokens | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
temperature | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values mean the model takes more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. Altering this or top_p, but not both, is generally recommended.
role | string | Yes | N/A | Indicates who is giving the current message (used in the messages array of the chat completions endpoint). Can be system, user, assistant, tool, or function.
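The endpoint and parameters above can be exercised with nothing but the standard library. This sketch builds the request without sending it, since that needs a live deployment; the resource name, deployment name, API key, and API version below are placeholders you would substitute:

```python
import json
from urllib import request

# Placeholder values -- substitute your real resource, deployment, and key.
resource = "my-resource"
deployment = "my-gpt35-deployment"
api_version = "2024-02-01"   # assumed API version; check the current docs
api_key = "YOUR_API_KEY"

url = (f"https://{resource}.openai.azure.com/openai/deployments/"
       f"{deployment}/completions?api-version={api_version}")

# Body fields match the parameter table above.
body = {"prompt": "Say hello to Azure OpenAI.", "max_tokens": 16, "temperature": 0.7}

req = request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json", "api-key": api_key},
    method="POST",
)
# request.urlopen(req) would send the call; skipped here because it
# requires a live Azure OpenAI deployment and a valid key.
print(url)
```

Note that the URL path uses the deployment name you chose when deploying the model, not the model name itself.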
14. Microsoft Semantic Kernel
Semantic Kernel is an open-source SDK that lets you easily
build agents that can call your existing code. As a highly
extensible SDK, you can use Semantic Kernel with models
from OpenAI, Azure OpenAI, Hugging Face, and more! By
combining your existing C#, Python, and Java code with
these models, you can build agents that answer questions
and automate processes.
It integrates Large Language Models (LLMs) like OpenAI,
Azure OpenAI, and Hugging Face with conventional
programming languages like C#, Python, and Java.
Semantic Kernel achieves this by allowing you to define
plugins that can be chained together in just a few lines of
code.
What makes Semantic Kernel special, however, is its ability
to automatically orchestrate plugins with AI. With Semantic
Kernel planners, you can ask an LLM to generate a plan that
achieves a user's unique goal. Afterwards, Semantic Kernel
will execute the plan for the user.
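The plugin-and-planner idea can be illustrated in plain Python. This is NOT the actual Semantic Kernel API, whose class and method names differ across SDK versions; it only mimics the shape of the workflow, with ordinary functions standing in for plugins and a hard-coded list standing in for a planner-generated plan:

```python
# Plain-Python illustration of plugins chained by a plan -- not the real
# Semantic Kernel SDK, just the concept it automates.

def summarize(text: str) -> str:
    # Stand-in for an LLM-backed "semantic function" plugin:
    # keep only the first sentence.
    return text.split(".")[0] + "."

def shout(text: str) -> str:
    # Stand-in for a second plugin in the chain.
    return text.upper()

# A "plan" is an ordered list of plugins chosen to achieve the user's goal;
# Semantic Kernel's planners would ask an LLM to produce this ordering.
plan = [summarize, shout]

result = "Semantic Kernel chains plugins. It can also plan."
for step in plan:
    result = step(result)
print(result)  # SEMANTIC KERNEL CHAINS PLUGINS.
```

In Semantic Kernel itself, each plugin can wrap either native code or a prompt, which is what lets the planner freely mix conventional logic with model calls.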