PPT Version

  1. <ul><li>Proposal for New Work Item </li></ul><ul><li>SIP PERFORMANCE BENCHMARKING </li></ul><ul><li>OF NETWORKING DEVICES </li></ul><ul><li>draft-poretsky-sip-bench-term-02.txt </li></ul><ul><li>draft-poretsky-sip-bench-meth-00.txt </li></ul><ul><li>Co-authors are </li></ul><ul><li>Vijay Gurbani of Lucent Technologies </li></ul><ul><li>Carol Davids of IIT VoIP Lab </li></ul><ul><li>Scott Poretsky of Reef Point Systems </li></ul>67th IETF Meeting – San Diego
  2. Motivation <ul><li>Service Providers are now planning VoIP and multimedia network deployments using the IETF-developed Session Initiation Protocol (SIP). </li></ul><ul><li>The mix of SIP signaling and media functions has produced inconsistencies in vendor-reported performance metrics and has caused confusion in the operational community. (Terminology) </li></ul><ul><li>SIP allows a wide range of configuration and operational conditions that can influence performance benchmark measurements. (Methodology) </li></ul>
  3. More Motivation <ul><li>Service Providers can use the benchmarks to compare the performance of RFC 3261 devices. Server performance can be compared to that of other servers, SBCs, and servers paired with SIP-aware Firewall/NATs. </li></ul><ul><li>Vendors and others can use the benchmarks to ensure performance claims are based on common terminology and methodology. </li></ul><ul><li>Benchmark metrics can be applied to make device deployment decisions for IETF SIP and 3GPP IMS networks. </li></ul>
  4. Scope <ul><li>The Terminology draft defines performance benchmark metrics for black-box measurements of SIP networking devices. </li></ul><ul><li>The Methodology draft provides standardized use cases that describe how to collect those metrics. </li></ul>
  5. Devices vs. Systems Under Test <ul><li>DUT </li></ul><ul><ul><li>MUST be an RFC 3261-compliant device. </li></ul></ul><ul><ul><li>MAY include a SIP-aware Firewall/NAT and other functionality. </li></ul></ul><ul><ul><li>BMWG does not standardize compliance testing. </li></ul></ul><ul><li>SUT </li></ul><ul><ul><li>An RFC 3261-compliant device with a separate external SIP Firewall/NAT. </li></ul></ul><ul><ul><li>Anticipates the need to test the performance of a SIP-aware functional element that is not itself defined in RFC 3261. </li></ul></ul>
  6. Overview of Terminology Draft <ul><li>The terminology document distinguishes between INVITE transactions and non-INVITE transactions. </li></ul><ul><li>The document thus addresses the fact that the SIP-enabled network provides services and applications other than PSTN replacement services. </li></ul><ul><li>The equipment as well as the network needs to support the total load. </li></ul>
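The INVITE vs. non-INVITE distinction above can be made concrete with a small sketch. This helper is illustrative only (it is not from the drafts); the method names are those defined in RFC 3261, and real transaction matching also involves branch parameters and ACK handling, which are omitted here.

```python
# Illustrative sketch: classify a SIP request start-line by the
# transaction type it begins. INVITE starts an INVITE transaction;
# REGISTER, SUBSCRIBE, MESSAGE, etc. start non-INVITE transactions.
# (ACK matching and Via branch handling are deliberately omitted.)

def transaction_type(request_line: str) -> str:
    """Return 'INVITE' or 'non-INVITE' for a SIP request start-line."""
    method = request_line.split()[0].upper()
    return "INVITE" if method == "INVITE" else "non-INVITE"

print(transaction_type("INVITE sip:bob@biloxi.example.com SIP/2.0"))
print(transaction_type("REGISTER sip:registrar.example.com SIP/2.0"))
```

Separating the two matters for benchmarking because IM, Presence, and registration load consist entirely of non-INVITE transactions, which a device must process even when no media session is ever created.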
  7. Benchmarks Defined <ul><li>The following benchmarks have been defined: </li></ul><ul><ul><li>Registration Rate </li></ul></ul><ul><ul><li>Session Rate </li></ul></ul><ul><ul><li>Session Capacity </li></ul></ul><ul><ul><li>Associated media sessions - establishment rate </li></ul></ul><ul><ul><li>Associated media sessions - setup delay </li></ul></ul><ul><ul><li>Associated media sessions - disconnect delay </li></ul></ul><ul><ul><li>Standing associated media sessions </li></ul></ul><ul><ul><li>IM rate </li></ul></ul><ul><ul><li>Presence rate </li></ul></ul>
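The rate benchmarks in the list above (Registration Rate, Session Rate, IM rate, Presence rate) share a common shape: count successful completions over a measured interval. A minimal sketch, assuming the tester records completion timestamps in seconds; the function name and input format are hypothetical, not from the drafts.

```python
# Hedged sketch of a rate benchmark: successful completions per second
# over the span of a test run. Input: timestamps (seconds) at which each
# registration/session/IM was successfully completed.

def completion_rate(completed_times: list[float]) -> float:
    """Completions per second across the measured interval."""
    if len(completed_times) < 2:
        return 0.0
    duration = max(completed_times) - min(completed_times)
    # (n - 1) intervals between n completions
    return (len(completed_times) - 1) / duration if duration > 0 else 0.0

# 5 completions spaced 0.5 s apart -> 2.0 completions/second
rate = completion_rate([0.0, 0.5, 1.0, 1.5, 2.0])
```

Capacity-style benchmarks (Session Capacity, Standing associated media sessions) differ: they measure the maximum count a device can sustain rather than a throughput.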
  8. Complements SIPPING Work Item <ul><li>SIPPING: the Malas draft (-05) relates to end-to-end network metrics. </li></ul><ul><li>BMWG: Poretsky et al., terminology draft (-02) and methodology draft (-00), relate to network-device metrics. </li></ul><ul><ul><li>A network device performs work whether or not a registration succeeds, whether or not an attempted session is created, whether or not a disconnect is successful, and whether or not a media session is created at all. </li></ul></ul><ul><ul><li>A SIP-enabled network also carries signaling traffic whether or not a media session is successfully created. For example, IM, Presence, and more generally subscription services all require network resources as well as computing-device resources. </li></ul></ul><ul><ul><li>For this reason, we think that many of the BMWG metrics complement the Malas draft and can also inform that document. </li></ul></ul>
  9. Methodology <ul><li>Two forms of test topology: </li></ul><ul><ul><li>Basic SIP Performance Benchmarking Topology, with a single DUT and Tester. </li></ul></ul><ul><ul><li>Optional SUT Topology, with a Firewall/NAT between the DUT and Tester when media is present. </li></ul></ul><ul><li>Test Considerations </li></ul><ul><ul><ul><li>Selection of SIP Transport Protocol </li></ul></ul></ul><ul><ul><ul><li>Associated Media </li></ul></ul></ul><ul><ul><ul><li>Session Duration </li></ul></ul></ul><ul><ul><ul><li>Attempted Sessions per Second </li></ul></ul></ul><ul><li>Test cases still need to be completed. </li></ul><ul><ul><ul><li>Looking for more test cases. </li></ul></ul></ul>
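In the topologies above, the Tester both offers load and takes delay measurements. As one hedged example of what a delay measurement looks like in such a setup: the Tester records when each INVITE is sent and when the matching final 200 OK arrives, keyed by Call-ID. The function and input format here are assumptions for illustration, not part of the methodology draft.

```python
# Illustrative sketch: per-session setup delay from Tester-side timestamps.
# invite_sent / ok_received map a Call-ID to a timestamp in seconds;
# sessions with no final response are simply absent from the result.

def setup_delays(invite_sent: dict, ok_received: dict) -> dict:
    """Setup delay (seconds) for every session that completed."""
    return {cid: ok_received[cid] - invite_sent[cid]
            for cid in invite_sent if cid in ok_received}

delays = setup_delays({"call-1": 0.0, "call-2": 1.0},
                      {"call-1": 0.2})  # call-2 never completed
```

Keeping the measurement black-box at the Tester, as here, is what lets the same procedure apply to a bare DUT or to a SUT with a Firewall/NAT in the path.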
  10. Next Steps for Terminology and Methodology <ul><li>Complete methodology </li></ul><ul><li>Incorporate comments from mailing list </li></ul><ul><li>Propose that BMWG make this a work item </li></ul>
  11. BACKUP - Relevance to BMWG <ul><li>-----Original Message----- </li></ul><ul><li>From: Romascanu, Dan (Dan) [mailto:dromasca@avaya.com] </li></ul><ul><li>Sent: Sunday, June 25, 2006 6:00 AM </li></ul><ul><li>I believe that the scope of the 'SIP Performance Metrics' draft is within the scope of what bmwg has been doing for a while, quite successfully, some say. On a more 'philosophical plan', there is nothing that says that the IETF work must strictly deal with defining the bits in the Internet Protocols - see http://www.ietf.org/internet-drafts/draft-hoffman-taobis-08.txt . And in any case, measuring how a protocol or a device implementing a protocol behaves can be considered also 'DIRECTLY related to protocol development'. </li></ul>-----Original Message----- From: nahum@watson.ibm.com [mailto:nahum@watson.ibm.com] Sent: Friday, May 26, 2006 2:51 PM SPEC wants to develop and distribute common code for benchmarking, as is done with SPECweb and SPECjAppServer. That code can and should use the standardized performance definitions agreed to by SIPPING and/or BMWG.
  12. BACKUP - Industry Collaboration <ul><li>BMWG develops a standard to benchmark SIP networking device performance </li></ul><ul><li>SIPPING WG develops a standard to benchmark end-to-end SIP application performance </li></ul><ul><li>SPEC to develop industry-available test code for SIP benchmarking in accordance with IETF’s BMWG and SIPPING standards. </li></ul>-----Original Message----- From: Poretsky, Scott Sent: Thursday, June 22, 2006 8:00 PM To: 'Malas, Daryl'; acmorton@att.com; gonzalo.camarillo@ericsson.com; mary.barnes@nortel.com Cc: vkg@lucent.com; Poretsky, Scott Subject: RE: (BMWG/SippingWG) SIP performance metrics Yes Daryl. I absolutely agree. The item posted to BMWG focuses on single-DUT benchmarking of SIP performance. Your work in SIPPING is focused on end-to-end application benchmarking. It would be great (and I would even say a requirement) that the Terminologies for these two work items remain consistent with each other.
