LCE13: LNG Testing, benchmarking, etc

Resource: LCE13
Name: LNG Testing, benchmarking, etc
Date: Friday July 12, 2013
Speaker: Zi Shen Lim
Video: https://www.youtube.com/watch?v=8eY08UKQ1qk

Presentation Transcript

  • Linaro Connect, July 2013: LCE13 LNG Testing and Benchmarking
  • connect.linaro.org Agenda
    ● Discussion on how to enhance the LAVA framework for networking applications.
    ● Discussion on LNG testing strategy and goals.
    https://lce-13.zerista.com/event/member/79674
    http://pad.linaro.org/p/LCE13_LNG-Benchmarking
  • connect.linaro.org LNG Testing/Benchmarking
    ● Why: Make sure we don't have regressions in functionality and performance. Regressions could be introduced by upstream or by us.
    ● What: Baselines and regression testing.
    ● How: LAVA. For example: compare the LNG kernel with and without the RT patchset.
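
    A rough illustration of that "compare" step in Python: flag any benchmark that degrades between a baseline kernel run and an RT-patched run. The JSON result files, the higher-is-better scoring, and the 5% threshold are assumptions for illustration, not part of LAVA or the LNG test definitions.

    # Minimal sketch: flag performance regressions between two kernel runs.
    # File names, the result format and the 5% threshold are assumptions.
    import json

    THRESHOLD = 0.05  # tolerate up to 5% degradation before flagging

    def load_results(path):
        # Assumed format: {"benchmark_name": score, ...}, higher = better.
        with open(path) as f:
            return json.load(f)

    def find_regressions(baseline, candidate):
        regressions = {}
        for name, base in baseline.items():
            cand = candidate.get(name)
            if cand is None or base == 0:
                continue
            delta = (cand - base) / base
            if delta < -THRESHOLD:
                regressions[name] = delta
        return regressions

    if __name__ == "__main__":
        # e.g. LNG kernel without vs. with the RT patchset
        base = load_results("results-lng-kernel.json")
        rt = load_results("results-lng-kernel-rt.json")
        for name, delta in find_regressions(base, rt).items():
            print(f"REGRESSION {name}: {delta:+.1%}")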
  • connect.linaro.org LNG Focus Areas
    ● Big Endian
    ● Foundation
      ○ LNG Kernel (LSK + Hugepage + NOHZ_FULL + RT...)
    ● Data Plane
      ○ RTE + SoC APIs
    What are the Key Performance Indicators?
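
    Before benchmarking, a job could sanity-check that the DUT is actually running a kernel with those features enabled. A minimal Python sketch follows; the markers it checks (the hugepage sysfs directory, a nohz_full= boot parameter, the PREEMPT_RT_FULL config symbol or /sys/kernel/realtime) are assumptions about typical kernels of this era, not an official LNG check.

    # Minimal sketch: verify expected LNG kernel features before benchmarking.
    # The markers checked here are assumptions, not an official LNG definition.
    import gzip
    import os

    def kernel_config():
        # Running kernel config, if the kernel exposes /proc/config.gz.
        try:
            with gzip.open("/proc/config.gz", "rt") as f:
                return f.read()
        except OSError:
            return ""

    def check_lng_features():
        cfg = kernel_config()
        with open("/proc/cmdline") as f:
            cmdline = f.read()
        return {
            "hugepages": os.path.isdir("/sys/kernel/mm/hugepages"),
            "nohz_full": "CONFIG_NO_HZ_FULL=y" in cfg or "nohz_full=" in cmdline,
            "preempt_rt": "CONFIG_PREEMPT_RT_FULL=y" in cfg
                          or os.path.exists("/sys/kernel/realtime"),
        }

    if __name__ == "__main__":
        for feature, present in check_lng_features().items():
            print(f"{feature}: {'present' if present else 'missing'}")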
  • connect.linaro.org LNG Test Cases
    LNG Tests/Benchmarks Tracking Sheet
    ● Functional Testing
      ○ Additional patches carried in LNG kernel trees. Some have wider impact (e.g. RT), some may be backported features (e.g. hugepage, nohz_full).
      ○ Need coverage for the additional features, as well as the kernel in general post-integration.
    ● Performance Benchmarking
      ○ Some features improve certain performance characteristics (e.g. hugepage, nohz_full).
      ○ Need to characterize those and make sure they continue to work well across releases.
  • connect.linaro.org LNG Test Cases
    Example: network packet forwarding. Compare against x86.
    Measure:
    ● Throughput
    ● Number of dropped packets
    ● Latency
    ● CPU utilization
    ● Power
    Other Variables:
    ● Number of cores
    ● Core frequency
    ● Number of ports
    Test Variations:
    ● Accelerated using Dataplane APIs.
    ● Inside KVM guest.
      ○ Guest-network.
      ○ Guest-guest.
    ● Kernel with different patchsets (nohz_full, rt, etc.).
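
    One inexpensive way to capture the throughput and dropped-packet numbers on the DUT side is to sample the standard Linux per-interface counters under /sys/class/net, as in the sketch below. The interface name and sampling interval are placeholders; in practice the run would be driven and timed by the traffic generator.

    # Minimal sketch: sample per-interface packet and drop counters over an interval.
    # "eth0" and the 10 s interval are placeholders for illustration.
    import time

    def read_counter(iface, name):
        with open(f"/sys/class/net/{iface}/statistics/{name}") as f:
            return int(f.read())

    def sample(iface="eth0", interval=10.0):
        names = ("rx_packets", "tx_packets", "rx_dropped", "tx_dropped")
        before = {n: read_counter(iface, n) for n in names}
        time.sleep(interval)
        delta = {n: read_counter(iface, n) - before[n] for n in names}
        return {
            "rx_pps": delta["rx_packets"] / interval,
            "tx_pps": delta["tx_packets"] / interval,
            "dropped": delta["rx_dropped"] + delta["tx_dropped"],
        }

    if __name__ == "__main__":
        print(sample())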
  • connect.linaro.org LNG Test Bench (Network Traffic)
    Tests with network traffic, especially for data plane applications.
    DUT: member platforms with multi-10G and multi-1G ports.
    x86 machines are used as traffic generators; they could also serve as DUTs for comparative benchmarking.
  • connect.linaro.org LNG Test Bench (Network Traffic)
    Some ports are connected directly between the traffic generator and the DUT; others are connected through a switch. Some DUTs can even be repurposed as traffic generators.
  • connect.linaro.org LNG Test Bench (Network Traffic)
    ● Resource Management
      ○ Isolated data path between traffic generator and DUT.
      ○ Automated / programmatic way to access traffic generator, switch, etc.
      ○ Efficient resource scheduling.
    ● Test & Resource Coordination
      ○ Start app on DUT, start traffic, stop traffic, stop app on DUT (see the sketch after this slide).
    ● Test Report & Database
      ○ Performance trends, passing criteria, statistics.
      ○ Visualization, alerts.
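
    The coordination bullet above boils down to a fixed start/stop sequence. The Python sketch below shows one possible shape; DUT and TrafficGenerator are hypothetical wrappers (for example around SSH and a generator control API), and the host names, commands and traffic profile are placeholders rather than existing LAVA interfaces.

    # Minimal sketch of test & resource coordination:
    # start app on DUT, start traffic, stop traffic, stop app on DUT.
    import subprocess
    import time

    class DUT:
        def __init__(self, host):
            self.host = host
        def run(self, command):
            # Placeholder transport; a real bench might go through LAVA or a console.
            subprocess.run(["ssh", self.host, command], check=True)

    class TrafficGenerator:
        def __init__(self, host):
            self.host = host
        def start(self, profile):
            print(f"starting traffic profile {profile} on {self.host}")
        def stop(self):
            print(f"stopping traffic on {self.host}")

    def run_forwarding_test(dut, generator, duration=60):
        dut.run("start-forwarding-app &")      # hypothetical DUT-side command
        generator.start("64B-line-rate")       # hypothetical traffic profile
        time.sleep(duration)
        generator.stop()
        dut.run("pkill -f start-forwarding-app")

    if __name__ == "__main__":
        run_forwarding_test(DUT("dut-board"), TrafficGenerator("traffic-gen"))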
  • connect.linaro.org LNG Test Bench (w/o Network Traffic)
    Tests without network traffic, primarily for kernel regression testing.
    ● Resource Management
      ○ Automated / programmatic way to access traffic generator, switch, etc.
      ○ Efficient resource scheduling.
    ● Test Report & Database
      ○ Performance trends, passing criteria, statistics.
      ○ Visualization, alerts.
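
    For the passing criteria in the report database, one simple rule is to compare each new score against the trend of recent runs and raise an alert on a significant drop. The sketch below assumes a higher-is-better score and a 5% tolerance; both are illustrative choices, not agreed LNG criteria.

    # Minimal sketch of a passing criterion against a rolling baseline.
    # Higher-is-better scores and the 5% tolerance are assumptions.
    import statistics

    def passes(history, new_score, tolerance=0.05):
        # history: recent scores for one benchmark on one board.
        if not history:
            return True  # nothing to compare against yet
        baseline = statistics.mean(history)
        return new_score >= baseline * (1 - tolerance)

    # Example: recent throughput results (Mpps) for one test, then two new runs.
    previous = [9.8, 10.1, 9.9, 10.0]
    print(passes(previous, 9.2))    # False: >5% below trend, would raise an alert
    print(passes(previous, 10.05))  # True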
  • connect.linaro.org Questions
    ● What are the target applications? Example code?
    ● Recommendations on how we should categorize benchmarks?
      ○ By verticals, e.g. wireless, security?
      ○ L1-L7?
    ● Any additional hardware platforms to consider?
    ● With regard to comparative benchmarking, which platforms do we care about?
    ● How much do we care about industry benchmarks such as SPEC and EEMBC?
    ● How do we characterize the success of our data plane APIs?
    ● In the case of the data plane, there are different solutions: bare metal, Linux userspace RTE, etc. How do we showcase the LNG solution?
  • Linaro Connect, July 2013: Questions?
  • More about Linaro: http://www.linaro.org/about/
    More about Linaro engineering: http://www.linaro.org/engineering/
    How to join: http://www.linaro.org/about/how-to-join
    Linaro members: www.linaro.org/members