
Openstack based WebRTC PaaS - Kamailio World 2015


Frafos WebRTC deployment on OpenStack
First Stage
Speakers: Jose + Binan


  1. OpenStack based WebRTC PaaS by Frafos. May 2015, José Luis Millán, Binan Al Halabi
  2. Intro: a HOWTO for implementing a simple WebRTC service, using JsSIP and the OpenStack-based Frafos WebRTC PaaS.
     • Background: WebRTC, PaaS
     • JsSIP
     • OpenStack PaaS
  3. Frafos Architectural Assumptions: voice is not a target product anymore.
     • It is becoming an add-on for web apps.
     • It is easier to add a media channel to an app (such as a CRM) than vice versa. WebRTC does exactly that.
     • We integrate the business logic with media using the browser.
     • We therefore expect that infrastructural VoIP will be standardized into PaaS, so that application integrators won't have to be concerned with VoIP.
  4. Frafos PaaS Architecture: the WebRTC browser integrates business logic with the VoIP channel.
     • Business apps (online shops, customer service, education, entertainment) speak HTTP plus the WebRTC protocol suite.
     • A standardized VoIP cloud provides SIP and PSTN connectivity, NAT traversal, media processing, scaling, high availability, and security.
  5. Example App: Cloud Audio Conference
     • Faster to build than any enterprise IT could: the browser application for organizing web conferencing was implemented and tested within two weeks.
     • Consumer-grade, easy to use: three browser steps to be in a conference: Start, Invite, Talk.
  6. Using JsSIP (go.areteasea.com)
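     JsSIP is a JavaScript SIP library; as an illustration, a minimal sketch of a client that registers against the PaaS and places an audio call. The WebSocket URI, SIP account, and conference target are placeholders, not the demo's actual values:

         // connect to the platform over a SIP WebSocket (placeholder URI)
         var socket = new JsSIP.WebSocketInterface('wss://webrtc.example.com');

         var ua = new JsSIP.UA({
           sockets  : [ socket ],
           uri      : 'sip:alice@example.com',   // placeholder account
           password : 'secret'                   // placeholder credential
         });

         ua.start();  // connect and register

         // once registered, place an audio-only call
         ua.on('registered', function() {
           ua.call('sip:conference@example.com', {
             mediaConstraints: { audio: true, video: false }
           });
         });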
  7. OpenStack-based PaaS
  8. PaaS Objectives
     • Simple integration --> fewer errors
     • Cost: a minimum of extra traffic and processing needed for the integration
     • Scalability
     • Service monitoring
     • Minimum delay (caching when possible + shortest geographical path)
     • No single point of failure
     • Clean shutdown/termination
  9. Automated Scalability
     • Automation --> fewer errors
     • Better customer experience: high demand --> add servers
     • Lower costs: lower demand --> remove servers
  10. OpenStack Auto-Scaling. [Diagram: WebRTC cloud servers behind load balancers, grouped into an auto-scaling group.] The resources are scaled up and down based on the scaling policies.
  11. Heat Client
     • Using the heat client to create the stack/resources:
           # heat stack-create stack01 -f template.yaml ...
     • Alternatively, via the OpenStack control panel.
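     Since the server resource is mapped through an environment file (next slide), the create call would typically pass both files; a sketch, assuming the file names used in this deck:

         # create the stack from the template plus the environment mapping
         heat stack-create stack01 -f template.yaml -e environment.yaml

         # watch the stack and its resources come up
         heat stack-list
         heat resource-list stack01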
  12. Heat Template
     • The template is the static architectural design of your application (infrastructure resources in a text file, treated as code).
     • Syntax: HOT (YAML-based) or CloudFormation-compatible JSON.
     • The snippet below shows how to define a scaling group:

           group:
             type: OS::Heat::AutoScalingGroup
             properties:
               cooldown: 60
               desired_capacity: 2
               max_size: 5
               min_size: 1
               resource:
                 type: OS::Nova::Server::Frafos

     • Here the server is defined in a separate YAML file and mapped in the environment YAML file:

           resource_registry:
             "OS::Nova::Server::Frafos": "fserver.yaml"
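     For completeness, a minimal sketch of what the mapped fserver.yaml could look like; the image, flavor, and network names are assumptions, not Frafos' actual template:

         heat_template_version: 2013-05-23

         parameters:
           flavor:
             type: string
             default: m1.small            # assumed flavor

         resources:
           server:
             type: OS::Nova::Server
             properties:
               image: frafos-webrtc       # hypothetical image name
               flavor: { get_param: flavor }
               networks:
                 - network: private       # assumed network name

         outputs:
           server_ip:
             value: { get_attr: [server, first_address] }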
  13. Orchestration Service
     • Heat is the orchestration tool used to:
       • create the resources described in a template,
       • configure the resources (e.g. install packages),
       • auto-scale the resources.
     • Heat interacts with the other OpenStack services through their APIs (e.g. Heat creates the alarms using the Ceilometer API).
  14. Ceilometer + Heat
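     The interplay shown on this slide can be expressed in HOT: a Ceilometer alarm fires the webhook URL of a Heat scaling policy attached to the group from slide 12. A minimal sketch using the stock cpu_util meter (the meter and thresholds are illustrative; the deck's real policies are driven by Frafos' own metrics):

         scale_up_policy:
           type: OS::Heat::ScalingPolicy
           properties:
             adjustment_type: change_in_capacity
             auto_scaling_group_id: { get_resource: group }
             cooldown: 60
             scaling_adjustment: 1        # add one server

         cpu_alarm_high:
           type: OS::Ceilometer::Alarm
           properties:
             meter_name: cpu_util
             statistic: avg
             period: 60
             evaluation_periods: 1
             threshold: 80                # percent, illustrative
             comparison_operator: gt
             # when the alarm fires, Ceilometer POSTs to the policy webhook
             alarm_actions:
               - { get_attr: [scale_up_policy, alarm_url] }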
  15. But
     • Ceilometer was originally designed for metering, not for monitoring.
     • Ceilometer does not do application/service-level monitoring.
     • OpenStack monitoring as a service: e.g. Monasca, which integrates with OpenStack (requires a Monasca agent to be installed on the servers).
  16. Rackspace Monitoring Service
     • A monitoring agent must be installed on the machine to report metrics.
     • Users can create their own checks:
       • predefined metrics (CPU, memory, load, ...),
       • custom metrics (Frafos metrics: calls, registrations, and TCP connections).
     • The service is integrated with the auto-scaling service through generic webhooks created for the scaling policies.
  17. Rackspace Monitoring Service. An auto-scaling policy is triggered --> the alarm is sent to the Rackspace auto-scaling service.
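     Such a webhook is a capability URL: anything that POSTs to it executes the policy, which is also handy for testing. A sketch with a made-up URL (the real one is returned when the webhook is created for a policy):

         # hypothetical capability URL of a scale-up policy webhook
         curl -i -X POST \
           'https://ord.autoscale.api.rackspacecloud.com/v1.0/execute/1/abc123'
         # 202 Accepted --> the policy runs, honoring its cooldown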
  18. Frafos Custom Metrics
     • Registration, call, and TCP metrics.
     • The monitoring agent installed on the servers reports these metrics periodically (every 60 s).
     • A Push-Metrics bash script works as a plugin to the monitoring agent.
  19. Push-Metrics Frafos Plugin
     • It is a Bash script.
     • SNMP-based: it obtains the measured data using SNMP.
     • It echoes the metrics in the following format:
           metric <name> <type> <value> [<unit>]
       e.g. "metric calls int32 $calls [Count]"
     • The agent takes the echoed metrics in that format and sends them to the monitoring service using the Metrics API.
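     A minimal sketch of a plugin along these lines; the OID and community string are placeholders, not Frafos' actual values:

         #!/bin/bash
         # read the current call count over SNMP (placeholder OID)
         calls=$(snmpget -v2c -c public -Oqv localhost .1.3.6.1.4.1.99999.1.1)

         # emit in the format the monitoring agent forwards to the Metrics API
         echo "metric calls int32 ${calls} [Count]"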
  20. No Clean Shutdown
     • When the scale-down policy is triggered, Nova kills the servers as follows:
       • immediately and hard,
       • killing priority: pending servers first, then the oldest.
     • The problems are:
       • gateway case: existing calls are cut off,
       • conference case: conference rooms still being served (long sessions) + late-join requests.
     • A shutdown controlled by the server itself is needed.
  21. Solution: Heat Software Deployment
     • Available since the OpenStack Icehouse release (2014).
     • New resource type OS::Heat::SoftwareConfig, which defines the configuration.
     • New resource type OS::Heat::SoftwareDeployment, which binds the configuration to the lifecycle actions: CREATE, DELETE, ...
     • The deployment resource remains in progress until a signal comes from the server --> complete state.
     • Heat agents must be installed on the server to support software deployments: os-collect-config, os-refresh-config, heat-config, heat-config-hook, and heat-config-notify.
  22. (1) Server Configuration
     • The user_data_format property must be set to SOFTWARE_CONFIG in the server definition in the template.
     • The software_config_transport property specifies how the software-config metadata gets from Heat to the server.

           server:
             type: OS::Nova::Server
             properties:
               image: ...
               user_data_format: SOFTWARE_CONFIG
               software_config_transport: POLL_SERVER_HEAT
  23. (2) DELETE Action Configuration
     • OS::Heat::SoftwareConfig encapsulates the configuration that we want to apply on DELETE.
     • The "group" property is the type of the configuration to be applied.
     • The get_file function can take a remote or local address.
     • The script is packed into the stack-creation package by the heat command (Heat client).
     • The script is passed to the server to be executed on DELETE.

           delete_config:
             type: OS::Heat::SoftwareConfig
             properties:
               group: script
               inputs: ...
               config: { get_file: scripts/drain_sessions.sh }
  24. (3) DELETE Action Deployment. Binds the configuration to the DELETE action:
     • accept no new connections,
     • clean up (drain_sessions.sh),
     • kill,
     • send the HEAT_SIGNAL signal to Heat's API service using curl.

           frafos-server-delete:
             type: OS::Heat::SoftwareDeployment
             properties:
               actions: [ DELETE ]
               config: { get_resource: delete_config }
               input_values: {...}
               server: ...
               signal_transport: HEAT_SIGNAL
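     The deck does not show drain_sessions.sh itself; a minimal sketch of a drain script under these assumptions (placeholder SNMP OID, and blocking the signaling port as the "no new connections" mechanism):

         #!/bin/bash
         # stop accepting new sessions: reject new signaling traffic
         # (mechanism is an assumption; port 5060 is a placeholder)
         iptables -I INPUT -p tcp --dport 5060 -j REJECT

         # wait until the active-call count, read over SNMP as in the
         # Push-Metrics plugin (placeholder OID), drops to zero
         while [ "$(snmpget -v2c -c public -Oqv localhost .1.3.6.1.4.1.99999.1.1)" -gt 0 ]; do
             sleep 10
         done

         # on exit, the heat-config hook notifies Heat (HEAT_SIGNAL),
         # moving the deployment resource to the complete state
         exit 0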
  25. Clean Shutdown, Conference Case: Late-Join Requests. In the conference case people may join a conference late, so the terminating/dying server should not be completely isolated:
     • the server is removed from the LB,
     • the remaining servers redirect late-join requests to that server using its public IP,
     • the server drains its sessions,
     • then the server signals Heat with HEAT_SIGNAL to be killed.
  26. Thank You. Frafos GmbH, Ahoy office, Windscheidstraße 18, 10627 Berlin, Germany. http://www.frafos.com, info@frafos.com
  27. Visit Our Online Demo: go.areteasea.com
