So here's the problem. Your boss comes to you and says: we are going to go live with the next version of Blackboard, and it had better be fast and scalable. He sat in a bunch of marketing presentations that all say this is by far the most scalable version of Blackboard ever! There is a lot of pressure from faculty, students and administrators, on both your boss and yourself, to make this the most stable and best-performing release ever. You have had a few bumpy years administering Blackboard in the past. With this upcoming release you plan to have more users for distance learning, heavy adoption of social tools, mobile integration and, of course, no solution would be complete without full online exams.

Funding is a little tight. Resources are sparse. Oh, and by the way: we are going to take on the new version in 9 to 12 weeks. You have to do this all on your own, or with the student intern. Good luck... you will need it!
Money: no matter how you look at this, some money is going to be spent. There are costs in gaining skills, paying for consulting, buying tools, etc.
Time: taking resources away from other projects to accomplish goals.
Capability: does the team have the right skills to pull this off? Is this a one-time effort or a new set of responsibilities? Are your objectives narrow and focused, or are you being asked to cast a big net? Do you have any experience whatsoever doing this?
For yourself: at first glance you might consider this a fairly easy task. You might even have done this before with another application, or with a previous release of Blackboard. Unless it's your day-to-day job, tackling this problem might be the greatest challenge you have ever faced. If you are going to do it, follow our best practices and be willing to admit where you need help. Most importantly, know what your goals are going to be, and know how to measure them.

For your team: I'm assuming you are leading a team of one or more additional contributors. Make sure your planning is thorough. Define your objectives first; set your schedule second. Dividing responsibilities is a great approach: split the work of infrastructure support from test definition support, then come back together at the testing phase.

For your boss: quantify the level of effort. Get him or her to sign off on the objectives. Agree and sign off on a test plan and schedule.
It's important to understand why you are asking, or being asked, to go through a performance/scalability testing exercise. There have to be transparent drivers for taking on such an expansive project. Testing doesn't necessarily provide precision: the outcome of a testing project should be learning experiences and planning, not a guarantee. I use testing in my lab to tell me what I can't do, not necessarily what I can do. That doesn't mean testing can't increase my confidence in what I can do.
Once you have a better idea of what you can't do, and a somewhat cloudy perspective on what you can do, you are at the point of plugging your gaps. That is the real learning experience: figuring out your gaps.
There's a lot of documentation out there describing the kinds of tests run (focus tests, soak tests, steady-state tests, etc.), but the key is defining the attributes of a test. Once the attributes are defined, the style of a test can be classified:
- Performance goals and objectives: measurable criteria
- Workload
- Arrival/departure rates
- Transaction distribution
- Length of test
- Conditions of a test: timeouts, abandonment
- Acceptance criteria
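The attributes above can be captured in a simple structure so every test you run is classified the same way. This is a minimal sketch; the class name, field names and the sample "overnight soak" values are all illustrative, not from any real tool.

```python
from dataclasses import dataclass, field

@dataclass
class TestDefinition:
    """Attributes that classify a performance test (focus, soak, steady-state, ...)."""
    name: str
    goals: list              # measurable criteria, e.g. "95th pct page time < 2s"
    workload: dict           # transaction mix: fraction of traffic per transaction
    arrival_rate: float      # users arriving per minute
    departure_rate: float    # users leaving per minute
    duration_minutes: int    # length of the test
    timeout_seconds: int = 60          # condition: when a request is abandoned
    acceptance: list = field(default_factory=list)  # acceptance criteria

# Illustrative soak test: 12 hours at a steady arrival/departure rate.
soak = TestDefinition(
    name="overnight soak",
    goals=["no error-rate growth over 12h"],
    workload={"login": 0.1, "course_view": 0.6, "assessment": 0.3},
    arrival_rate=30, departure_rate=30, duration_minutes=720,
)
```

Writing the definition down first forces the conversation about measurable criteria before anyone scripts anything.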
Selenium
Scripting Frameworks
Record and Playback Systems
No matter how much you plan, you will always underestimate the time, effort and mistakes that go into a benchmark. Yes, mistakes happen during benchmarks. It's Murphy's Law: if something wrong can happen, it will happen.

Defining objectives: your benchmark should be based 100% on the measurable performance objectives. The time this takes varies with how tightly the benchmark can be defined.
Analyzing behavior: another variable activity, driven by the objectives. The more coverage models you introduce, the more you will need to analyze.
Analyzing system data: this is where most of your time could be spent.

The test process itself: monitoring setup, infrastructure setup, sample testing, restore process, calibration (testing and tuning), scalability testing, analyzing results, presenting results.
Content Exchange, Snapshot, API, SQL loading (difficult), Scripting, Manipulation/Obfuscation of data
I'm very biased against the playback feature in record and playback. Why? For the most part, playback was designed by vendors and open source developers for static content. None of the tools I've encountered handles dynamic information such as SESSION/JSESSION tokens and timestamp/datestamp values. They also don't account for parameterized information that changes based on data conditions, such as COURSE_ID, navigation items and specific objects in the system.

Recording can help capture key information, but it's up to you to process that information programmatically so the code is reusable and maintainable. You want to be able to handle parameterized conditions in code. Tools such as Live HTTP Headers, Fiddler and Firebug are fantastic because they present the GET/POST request, passed parameters and headers (response/request), and show what is cached and what is not.
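The two chores described above, pulling a dynamic session token out of a recorded response and swapping hard-coded values for parameters, can be handled with a few lines of code. This is a minimal sketch: the function names and the `{COURSE_ID}` placeholder convention are my own, and the Blackboard-style URL is purely illustrative.

```python
import re

def extract_jsessionid(set_cookie_header):
    """Pull the dynamic session token out of a recorded Set-Cookie response header."""
    m = re.search(r"JSESSIONID=([^;]+)", set_cookie_header)
    return m.group(1) if m else None

def parameterize(recorded_url, **params):
    """Replace {PLACEHOLDER} tokens (edited into the recording by hand)
    with values supplied at playback time."""
    for name, value in params.items():
        recorded_url = recorded_url.replace("{%s}" % name, str(value))
    return recorded_url

# Illustrative recorded response header and URL template.
session = extract_jsessionid("JSESSIONID=ABC123DEF456; Path=/; HttpOnly")
url = parameterize(
    "/webapps/blackboard/content/listContent.jsp?course_id={COURSE_ID}",
    COURSE_ID="_42_1",
)
```

The point is simply that the recording is raw material: a thin layer like this is what makes the script survive a change of session, course or data set.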
Choosing between partial payloads and full payloads really depends on the goal of the test.

Use partial payloads for code simplicity and code management. A page may contain 50 GET requests. Out of those 50, only one is the key request: the initial servlet or JSP request. The subsequent GETs are things such as images, CSS and JS. Most of those are going to be cached after the first request, and, depending on the browser, the requests will be parallelized by some factor N. Most tools allow the programmer to control concurrent requests, but not elegantly. The advantage of partial payloads is being able to rapidly script server-side payloads while minimizing the management of bulky code down the road. Most of your scripts should be partial payloads; reserve full page loads for key requests that will be called frequently.

**Be mindful of GET requests that contain embedded POST requests. All of our module pages do this. If you don't include the POST request, you really aren't capturing the essence of the request.**

Full page loads are ideal for pages that are dominant/frequent requests with a fairly static structure. They can contain dynamic content, but it must be controlled. A module page such as the Course Home page or the My Institution page has a fair amount of customization, but is called frequently. You have the option in your test bed to seed the data to be uniform, based on frequent and important modules. Beware that full page loads add a big layer of code management responsibility: if a page request has 50 GET requests, you now have up to 50 elements to manage from release to release instead of one.
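Turning a full recording into a partial-payload script is mostly a filtering exercise: drop the static asset requests and keep the server-side ones, never dropping a POST. A minimal sketch, with made-up function names and illustrative recorded paths:

```python
# Asset suffixes that are cached after the first request and usually
# not worth scripting in a partial-payload test.
STATIC_SUFFIXES = (".css", ".js", ".png", ".gif", ".jpg", ".ico")

def key_requests(recorded):
    """From a proxy recording of a full page load, keep only the requests
    worth scripting: every POST (module pages fire these behind a GET),
    plus any request that is not a static asset."""
    keep = []
    for method, path in recorded:
        if method == "POST" or not path.lower().endswith(STATIC_SUFFIXES):
            keep.append((method, path))
    return keep

# Illustrative recording of one page load: 1 key JSP, 1 embedded POST, 3 assets.
recorded = [
    ("GET", "/webapps/portal/frameset.jsp"),
    ("POST", "/webapps/portal/execute/tabs/tabAction"),
    ("GET", "/images/spacer.gif"),
    ("GET", "/branding/themes/default/theme.css"),
    ("GET", "/javascript/page.js"),
]
keyed = key_requests(recorded)
```

Five recorded requests collapse to two scripted ones, which is exactly the code-management win of the partial-payload approach.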
Also note that full page requests are great for measuring first-request-to-last-byte transfer (the network round trip), but they do not cover browser rendering time. Here's a little nugget to consider: introduce an automated test using a tool such as Selenium, which drives a real browser and exercises the entire browser interaction process. You could have this Selenium script execute at a sampled interval over the course of a test.
Model the period of activity at several granularities: monthly, weekly, daily, hourly, every 5 minutes, every minute.

Load levels:
- Log analysis tools are great for showing HTTP activity.
- Database transactions per second is another great metric.
- You can also use UXM and ALM tools, but be aware that their data will purge over time.
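Getting load levels out of an access log comes down to bucketing request timestamps into intervals. A minimal sketch, assuming you have already parsed the log into epoch-second timestamps (the function name and sample values are illustrative):

```python
from collections import Counter

def requests_per_interval(epoch_seconds, interval=60):
    """Bucket access-log timestamps to show load per interval.

    Default interval is 60s (per-minute load); pass 3600 for hourly,
    300 for 5-minute buckets, and so on."""
    return Counter(ts // interval for ts in epoch_seconds)

# Illustrative timestamps: three hits in minute 0, one in minute 1.
levels = requests_per_interval([5, 20, 59, 61])
```

Run the same function over a week of logs at different intervals and you get the monthly/weekly/daily/hourly activity curves described above, which in turn feed the arrival rates in your test definition.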
So Your Boss Wants You to Performance Test Blackboard
So Your Boss Wants you to Performance Test the Blackboard Learn™ Platform<br />Steve Feldman<br />
Quick Bio<br />Blackboard since 2003<br />Performance Engineering from the start<br />Platform Architecture in 2005<br />Security Engineering in 2010<br />“Love my job…love my team. If you email me, I will respond.”<br />Stephen.feldman@Blackboard.com<br />http://goo.gl/Z4Rq5<br />
Expectation Setting<br />Why are you going through this exercise?<br />What do you expect to get out of it?<br />Who will be working this effort?<br />When will it be accomplished?<br />How much will it cost?<br />Where do we go next once we accomplish it?<br />
Expectation Setting<br />Drive Functional Objectives<br />Mine and Analyze Data<br />Access Right Tools<br />Develop Simulation Scripts<br />Benchmarking: Figuring Out Your Gaps<br />Capture Appropriate Metrics<br />Analyze and Respond to the Data<br />Escalate without Confidence<br />
Everything You Need to Know<br />Planning: Goals and Objectives<br />Goals should be measurable and traceable<br />The best goals align to the vision and direction of the business.<br />Performance requirements are preferred<br />Poor goals involve system utilization metrics, as they don't align to the business.<br />
Everything You Need to Know<br />Attributes of a good performance/scale goal<br />Response time percentile<br />Throughput metric: bytes, hits, pages/transactions served or processed<br />Community/Population<br />Definition of a business transaction<br />Workload/Data Condition<br />Database transaction<br />Failure Rate<br />HTTP Error Codes<br />
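A response time percentile, the first attribute in the list above, is easy to compute but often mis-stated, so here is a minimal sketch using the nearest-rank method (the function name and sample timings are illustrative):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample at or below which
    pct% of all samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Illustrative page times in milliseconds for one transaction.
times_ms = [120, 150, 180, 200, 210, 240, 300, 320, 400, 900]
p95 = percentile(times_ms, 95)   # the goal might read "p95 < 2000 ms"
p50 = percentile(times_ms, 50)
```

Note how the single 900 ms outlier dominates the 95th percentile while leaving the median untouched; that is exactly why percentile goals beat average-based goals for user-facing response time.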
Everything You Need to Know<br />Planning: Scheduling<br />Defining Objectives<br />Analyzing Behavior<br />Analyzing System Data<br />Functional Script Definition (Coverage Model)<br />Scripting<br />Data Set Construction (Test Bed Data Set)<br />Planning: Test Process<br />Monitoring Setup<br />Infrastructure Setup<br />Sample Testing<br />Restore Process<br />Calibration (Testing and Tuning)<br />Scalability Testing<br />Analyzing Results<br />Presenting Results<br />
Everything You Need to Do<br />Performance Scenario and Modeling<br />Conduct Functional Interviews<br />Functional Analysis (Review Use Cases)<br />Log Mining<br />Data Mining<br />User Experience and Expectations<br />Sequence, Order and Probability<br />Modeling Time<br />Time of day, year and universal behavior<br />
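Log mining feeds directly into the "sequence, order and probability" model above: counting request paths gives you the transaction distribution for the workload. A minimal sketch with an illustrative function name and made-up paths:

```python
from collections import Counter

def transaction_distribution(log_paths):
    """Turn mined access-log request paths into a probability mix
    suitable for weighting transactions in a workload model."""
    counts = Counter(log_paths)
    total = sum(counts.values())
    return {path: count / total for path, count in counts.items()}

# Illustrative mined paths: course views dominate, logins and exams trail.
mix = transaction_distribution(
    ["/course", "/course", "/course", "/login",
     "/exam", "/course", "/login", "/exam"]
)
```

The resulting mix (here 50% course views, 25% logins, 25% exams) is what you would plug into the workload field of your test definition instead of guessing at the blend.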
Everything You Need to Do<br />Multiple techniques for creating test bed and data conditions<br />Combination of Content Exchange and Snapshot<br />Use of B2 APIs<br />Direct SQL<br />Two pitfalls to avoid<br />Avoid creating synthetic data with load test scripts.<br />Avoid trying to use real customer data for actual test conditions.<br />
Everything You Need to Do<br />Two Recommended Synthetic Transactions: Production HTTP Drivers and True Browser Rendering.<br />HTTP Drivers: Interval-based simulators, usually from external sources.<br />Regulate Frequency, Define Functional Paths and Verify Non-Functional Requirements<br />Browser Rendering: Execute full browser behavior.<br />Show full E2E and not just First to Last Byte from a Server Perspective<br />
Everything You Need to Do<br />Record and Playback vs. HTTP Capture<br />R/P acts like a proxy capturing HTTP and allows playback like a video recorder.<br />HTTP Capture: Live HTTP Headers, Fiddler and Firebug<br />
Everything You Need to Do<br />Partial Payloads vs. Full Payloads<br />Use Partial for Code Simplicity and Management<br />Emphasis on Server Side Request<br />Accelerate scripting delivery time<br />Use Full for Total Round Trip Time<br />Dynamic, but controlled content for simplicity purpose<br />Doesn’t get browser time<br />Nugget: Introduce automated Selenium or WebPage Test script(s) on sampled intervals during life of test for browser and end-2-end time.<br />
Everything You Need to Do<br />Arrival Rates and Load Levels<br />
Everything You Need to Do<br />Analytics: Study Both as a Transparent Lens<br />
Everything You Need to Do<br />SLAs: Acceptance Criteria on top of Performance and Scalability Requirements.<br />
When You Can't Do It Yourself<br />Lead the objective planning phase: gather requirements and conduct functional interviews.<br />Establish a relationship with the PerfEng team at Blackboard<br />Produce a detailed test plan<br />Execute the benchmark lifecycle<br />Produce a summary report<br />Short-term recommendations for configuration<br />Feedback to Blackboard<br />Long-term capacity planning guidance<br />
Please provide feedback for this session by emailingDevConFeedback@blackboard.com. <br />The title of this session is:<br />So Your Boss Wants you to Performance Test the Blackboard Learn™ Platform Before you Go Live with the Next Release<br />