Hello everyone, I hope you are all doing well. Here I'm sharing a short presentation on the JSON format for Alexa requests and responses. If you find it helpful, a thank-you is most welcome.
2. Methods
To get the JSON input & output data in Alexa,
we have to go through two basic methods:
1. Request
2. Response
3. Request Method
The request method is called whenever the user asks Alexa a question.
FORMAT OF REQUEST METHOD
1. Header
2. Body
4. Request Header
A set of HTTP headers sent every time the user makes a request to the Alexa device.
It is not visible in the JSON input request, because it is static code:
it is essentially the same in every request.
Request Header Syntax
POST / HTTP/1.1
Content-Type: application/json;charset=UTF-8
Host: your.application.endpoint
Content-Length:
Accept: application/json
Accept-Charset: utf-8
Signature:
SignatureCertChainUrl: https://s3.amazonaws.com/echo.api/echo-api-cert.pem
5. Request Body
The body contains all the details about the request.
It is visible in the Alexa simulator, because this information depends on the type of the user's
request. In other words, it is dynamic code.
Four Basic Parameters of Body
1. Version
2. Session
3. Context
4. Request
Each of these is described below.
6. Version: The version specifier of the request. As of now, Alexa uses version “1.0”.
Session: The session object provides additional context associated with the request.
Context: The context object provides your skill with information about the current state of
the Alexa service and device at the time the request is sent to your service.
Request: A request object that provides the details of the user's request.
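The four top-level body parameters can be sketched as a Python dict that mirrors the JSON input. This is an abridged illustration, not a real request: all IDs, the timestamp, and the endpoint values are placeholder assumptions.

```python
import json

# Abridged sketch of an Alexa request body; every ID here is a placeholder.
request_body = {
    "version": "1.0",
    "session": {
        "new": True,
        "sessionId": "amzn1.echo-api.session.example",
        "attributes": {},
        "application": {"applicationId": "amzn1.ask.skill.example"},
        "user": {"userId": "amzn1.ask.account.example"},
    },
    "context": {
        "System": {
            "application": {"applicationId": "amzn1.ask.skill.example"},
            "user": {"userId": "amzn1.ask.account.example"},
            "device": {"deviceId": "example-device", "supportedInterfaces": {}},
        }
    },
    "request": {
        "type": "LaunchRequest",
        "requestId": "amzn1.echo-api.request.example",
        "timestamp": "2019-01-01T00:00:00Z",
        "locale": "en-US",
    },
}

# Serialize to the JSON text as it would appear in the simulator's JSON input.
as_json = json.dumps(request_body, indent=2)
```

Each of the four parameters is unpacked on the following slides.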
7. Session Objects
1. new: A boolean value indicating whether this is a new session.
It is true when the session has just started, and false otherwise.
2. sessionId: A string that represents a unique identifier for a user's active
session.
3. attributes: A map of key-value pairs. The attributes map is empty for requests where a new
session has started with the property new set to true.
The key is a string that represents the name of the attribute. Type: string
The value is an object that represents the value of the attribute. Type: object
8. Session Objects
4. application: An object containing an application ID. This is used to verify that the request
was intended for your service:
applicationId: A string representing the application ID for your skill.
5. user: An object that describes the user making the request. A user is composed of:
userId: A string that represents a unique identifier for the user who made the request. The
length of this identifier can vary, but is never more than 255 characters. The userId is
automatically generated when a user enables the skill in the Alexa app.
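Putting the properties from slides 7 and 8 together, a session object can be sketched as follows. The IDs are placeholder assumptions, not values from a real session.

```python
# Sketch of a session object for the first request of a session; IDs are placeholders.
session = {
    "new": True,                  # first request of this session
    "sessionId": "amzn1.echo-api.session.example",
    "attributes": {},             # empty because new is True
    "application": {"applicationId": "amzn1.ask.skill.example"},
    "user": {"userId": "amzn1.ask.account.example"},
}

# The userId must never exceed 255 characters, per the constraint above.
user_id_ok = len(session["user"]["userId"]) <= 255
```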
9. Context Object
Definition : The context object provides your skill with information about the current state of
the Alexa service and device at the time the request is sent to your service.
The context object contains a System object: an object that provides information about the
current state of the Alexa service and the device interacting with your skill.
System object properties:
1. apiAccessToken
2. apiEndpoint
3. application
4. user
5. device
10. System objects
1. apiAccessToken: A string containing a token that can be used to access Alexa-specific
APIs. This token encapsulates:
Any permissions the user has consented to, such as permission to access the user's address
with the Device Location API.
Access to other Alexa-specific APIs, such as the Progressive Response API
2. apiEndpoint: A string that references the correct base URI to refer to by region, for use with
APIs such as the Device Location API and Progressive Response API.
3. application: Contains the unique application ID.
4. user: Contains the unique user ID.
5. device: An object providing information about the device used to send the request. The
device object contains both deviceId and supportedInterfaces properties:
The deviceId property uniquely identifies the device.
The supportedInterfaces property lists each interface that the device supports. For
example, if supportedInterfaces includes AudioPlayer {}, then you know that the device
supports streaming audio using the AudioPlayer interface.
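The AudioPlayer check described above can be sketched in Python against a context object. All token, endpoint, and ID values are placeholder assumptions.

```python
# Sketch of a context object; token, endpoint, and IDs are placeholder values.
context = {
    "System": {
        "apiAccessToken": "example-token",
        "apiEndpoint": "https://api.amazonalexa.com",
        "application": {"applicationId": "amzn1.ask.skill.example"},
        "user": {"userId": "amzn1.ask.account.example"},
        "device": {
            "deviceId": "amzn1.ask.device.example",
            "supportedInterfaces": {"AudioPlayer": {}},
        },
    }
}

# If AudioPlayer appears in supportedInterfaces, the device can stream audio.
supports_audio = "AudioPlayer" in context["System"]["device"]["supportedInterfaces"]
```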
11. Request Objects
type: The type of the request.
There are four basic request types:
1. LaunchRequest
2. CanFulfillIntentRequest
3. IntentRequest
4. SessionEndedRequest
requestId: A unique identifier for the request.
timestamp: The date and time when the request was sent, based on the device's region.
locale: The locale of the Alexa device, for example en-US, so the skill can reply in English.
intent: The intent matched to the user's request. Within that intent, the appropriate slots
capture further details of the request.
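An IntentRequest with a slot can be sketched as below. The intent name GetWeatherIntent and the City slot are hypothetical examples, as are the ID and timestamp.

```python
# Sketch of an IntentRequest; the intent name, slot, ID, and timestamp are hypothetical.
request = {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.example",
    "timestamp": "2019-01-01T00:00:00Z",
    "locale": "en-US",
    "intent": {
        "name": "GetWeatherIntent",
        "slots": {
            "City": {"name": "City", "value": "Seattle"},
        },
    },
}

# The slot value captures the detail of the user's request.
city = request["intent"]["slots"]["City"]["value"]
```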
12. Request Type Categories
1. LaunchRequest: Sent when the user invokes your skill without providing a specific intent.
2. IntentRequest: Sent when the user makes a request that corresponds to one of the intents
defined in your intent schema.
3. SessionEndedRequest: Sent when the current skill session ends for any reason other than
your code closing the session.
4. CanFulfillIntentRequest: Sent when the Alexa service is querying a skill to determine
whether the skill can understand and fulfill the intent request with detected slots, before
actually asking the skill to take action.
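A common pattern is to dispatch on the request's type field. The sketch below shows the idea; the handler return values are hypothetical placeholders for real handler logic.

```python
def handle_request(request):
    """Route an incoming request object by its type field (return values are placeholders)."""
    request_type = request["type"]
    if request_type == "LaunchRequest":
        return "welcome"            # user invoked the skill with no specific intent
    if request_type == "IntentRequest":
        return request["intent"]["name"]
    if request_type == "SessionEndedRequest":
        return "cleanup"            # session ended for a reason outside our code
    if request_type == "CanFulfillIntentRequest":
        return "can_fulfill_check"  # Alexa is probing whether we can handle the intent
    raise ValueError(f"unknown request type: {request_type}")
```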
14. Response Format
Response: the best-matched answer to the user's question.
In Alexa we use JSON syntax to deliver this answer; we call it the JSON output.
It also has two basic pre-designed parts:
1. JSON response header
2. JSON response body
HEADER: a static piece of code that describes the body that follows.
Example: HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Content-Length:
15. Body Parameters:
Version: The version number of the Alexa skill. It should match the version in the request.
sessionAttributes: A map of key-value pairs to persist in the session.
1. The key is a string that represents the name of the attribute. Type: string.
2. The value is an object that represents the value of the attribute. Type: object.
Session attributes are ignored if included in a response to an AudioPlayer or
PlaybackController request.
Response: A response object that defines what to render to the user and whether to end the
current session.
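The three body parameters can be sketched together as a minimal JSON output. The sessionAttributes content and the speech text are hypothetical examples.

```python
# Sketch of an Alexa response body; attribute and speech values are hypothetical.
response_body = {
    "version": "1.0",                                    # matches the request version
    "sessionAttributes": {"lastIntent": "GetWeatherIntent"},
    "response": {
        "outputSpeech": {"type": "PlainText", "text": "Hello!"},
        "shouldEndSession": False,                       # keep the session open
    },
}
```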
16. Response Object
1. outputSpeech: The object containing the speech to render to the user
2. Card: The object containing a card to render to the Amazon Alexa App.
3. Reprompt: The object containing the outputSpeech to use if a re-prompt is necessary.
This is used if your service keeps the session open after sending the response, but the
user does not respond with anything that maps to an intent defined in your voice interface
while the audio stream is open.
4. shouldEndSession: A boolean value that tells Alexa whether to end the session after
this response is rendered.
17. OutputSpeech Object
1. type: string containing the type of output speech to render. Valid types are:
"PlainText": Indicates that the output speech is defined as plain text.
"SSML": Indicates that the output speech is text marked up with SSML.
2. text: A string containing the speech to render to the user. Used when type is
"PlainText".
3. ssml: A string containing text marked up with SSML to render to the user. Used when
type is "SSML".
18. Card Object
1. type: A string describing the type of card to render.
Different types are:
"Simple": A card that contains a title and plain text content.
"Standard": A card that contains a title, text content, and an image to display.
"LinkAccount": A card that displays a link to an authorization URI that the user can use to
link their Alexa account with a user in another system.
"AskForPermissionsConsent": A card that asks the customer for consent to obtain specific
customer information, such as Alexa lists or address information.
19. Card Object
1. title: A string containing the title of the card.
2. content: A string containing the contents of a Simple card; it holds the actual text.
3. text: A string containing the text content for a Standard card.
4. image: An image object that specifies the URLs of the image to display on a Standard
card. Only applicable for Standard cards.
It provides URLs for use on different-sized screens:
1. smallImageUrl
2. largeImageUrl
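Combining slides 18 and 19, a Standard card can be sketched as below. The title, text, and image URLs are hypothetical examples.

```python
# Sketch of a Standard card; title, text, and URLs are hypothetical placeholders.
standard_card = {
    "type": "Standard",
    "title": "Today's Weather",
    "text": "Sunny with a high of 25 degrees.",
    "image": {
        "smallImageUrl": "https://example.com/weather-small.png",
        "largeImageUrl": "https://example.com/weather-large.png",
    },
}
```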