Please note: this demo document is provided only to demonstrate my experience in automated testing.
If you would like to watch the demo video on YouTube, please feel free to contact me.
2. Overview
Test Framework and Tools
Test Automation and Continuous Integration Solution
Functional Test Automation Framework
Server Side Performance Test Automation Structure
DB Architecture and Design Verification
Browser Side Performance Test Solution
4. Web-based Application Testing Tools
Tier                 | Functional Testing                   | Performance Testing
UI Tier              | Robot Framework, Se2, Image Library  | Dynatrace, Yslow, Firebug
Business Logic Tier  | Junit, Jmeter, SoapUI                | Jmeter, Java profiler, Jstat
DB Tier              | Robot Framework, DB Library          | Jmeter, AWR, Linux monitoring tools
All tiers: Jenkins, Emma code coverage
5. No problem, sir. I'll build the application and run the test suites at night; tomorrow morning you will receive an email with the functional testing report, the performance testing report, and the test coverage.
7. Test Suites/Test Cases/Keywords
[Architecture diagram: test suites, test cases, and keywords are written in Robot Framework's test data syntax; Robot Framework drives test libraries (BuiltIn, SSH, DB, extensions) through its test library API; the libraries reach the System Under Test through application and simulator interfaces, running against IE, Firefox, and Chrome, or against simulators.]
Functional Test Automation Solution
Tools: Jenkins, Selenium2, Robot Framework, EMMA code coverage
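The Jenkins job behind this solution ultimately shells out to Robot Framework. A minimal sketch of building that command line, assuming the suite and output directories shown here (the `BROWSER` variable name is an illustrative convention, not something mandated by the tools; `--outputdir` and `--variable` are standard Robot Framework CLI options):

```python
def build_robot_command(suite_dir, output_dir, browser="firefox"):
    # Assemble the shell command a Jenkins build step could run.
    # --outputdir: where report.html, log.html and output.xml are written.
    # --variable: passes the target browser into the test data.
    return [
        "robot",
        "--outputdir", output_dir,
        "--variable", "BROWSER:%s" % browser,
        suite_dir,
    ]

cmd = build_robot_command("tests/", "results/")
print(" ".join(cmd))
```

In a Jenkins freestyle job this would typically be the body of an "Execute shell" build step, with the EMMA plugin picking up coverage output separately.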
8. Extend Test Library for UI Layout Testing
Class name: Screenshot Comparison
Algorithm: histogram similarity
Sim(G, S) = (1/N) * Σ_i (1 − |g_i − s_i| / max(g_i, s_i)), where g_i and s_i are the i-th histogram bins of the two images and N is the number of bins
Source code: see below
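The per-bin similarity behind this formula can be sketched in plain Python without any imaging library; the two sample histograms below are illustrative only:

```python
def hist_similar(lh, rh):
    # Sim(G, S): average over bins of 1 - |g_i - s_i| / max(g_i, s_i).
    # Identical bins contribute 1.0; disjoint bins contribute 0.0.
    assert len(lh) == len(rh)
    return sum(1 - (0 if l == r else abs(l - r) / max(l, r))
               for l, r in zip(lh, rh)) / len(lh)

# First bin matches exactly (1.0); second bin is halved (1 - 2/4 = 0.5).
print(hist_similar([2, 4], [2, 2]))  # (1.0 + 0.5) / 2 = 0.75
```

Identical screenshots therefore score 1.0, and the score degrades smoothly as the pixel distributions drift apart.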
17. Browser Side Performance Test Automation Solution
Tools: Jenkins, Yslow, DynaTrace AJAX Edition
YSlow analyzes web pages and explains why they are slow, based on Yahoo!'s rules for high-performance web sites.
Dynatrace AJAX Edition analyzes the front-end performance of your web pages and identifies slow-running JavaScript handlers, custom JavaScript code, slow access to the DOM, and expensive calls.
Integrating Yslow and Dynatrace into Jenkins is planned as future work…
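One way that integration could work is to export YSlow results as JSON and gate the Jenkins build on the overall score. The sketch below assumes a field named "o" holds the 0-100 overall score (following YSlow's beacon format); treat the field names and the 80-point threshold as assumptions, not a documented contract:

```python
import json

def passes_yslow_gate(report_json, min_score=80):
    # "o" is assumed to carry YSlow's overall 0-100 score.
    report = json.loads(report_json)
    return report.get("o", 0) >= min_score

# Illustrative report fragment, not real YSlow output.
sample = '{"o": 85, "u": "http://example.com/"}'
print(passes_yslow_gate(sample))  # True
```

A Jenkins build step could call this and mark the build unstable when the gate fails.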
# -*- coding: utf-8 -*-
import os
from PIL import Image  # Pillow; the original used the legacy "import Image"

class Comp_img():
    image_width = 800
    image_height = 600
    part_width = 50
    part_height = 50
    part_count = (image_width // part_width) * (image_height // part_height) * 1.0

    def make_regular_image(self, img, size=(image_width, image_height)):
        # Normalize every screenshot to the same size and color mode.
        return img.resize(size).convert('RGB')

    def split_image(self, img, part_size=(part_width, part_height)):
        # Split the image into a grid of part_width x part_height tiles.
        w, h = img.size
        pw, ph = part_size
        assert w % pw == 0 and h % ph == 0
        return [img.crop((i, j, i + pw, j + ph)).copy()
                for i in range(0, w, pw)
                for j in range(0, h, ph)]

    def hist_similar(self, lh, rh):
        # Average per-bin similarity of two histograms (the Sim formula).
        assert len(lh) == len(rh)
        return sum(1 - (0 if l == r else float(abs(l - r)) / max(l, r))
                   for l, r in zip(lh, rh)) / len(lh)

    def calc_similar(self, li, ri):
        # Mean histogram similarity over all tiles of the two images.
        return sum(self.hist_similar(l.histogram(), r.histogram())
                   for l, r in zip(self.split_image(li), self.split_image(ri))) / self.part_count

    def calc_similar_by_path(self, lf, rf):
        li = self.make_regular_image(Image.open(lf))
        ri = self.make_regular_image(Image.open(rf))
        return self.calc_similar(li, ri)

    def comp_pic(self, ori_dir, comp_dir, diff_dir, similarity):
        """Compare images from two different folders; copy new images into
        the diff folder if they differ from the original images."""
        files = os.listdir(comp_dir)
        count = 0
        for i, name in enumerate(files):
            try:
                percent = self.calc_similar_by_path(
                    os.path.join(ori_dir, name),
                    os.path.join(comp_dir, name)) * 100
                print('Img_%d: %.6f%%' % (i, percent))
                if percent < similarity:
                    Image.open(os.path.join(comp_dir, name)).save(
                        os.path.join(diff_dir, name))
                    print('Move %s to %s' % (name, diff_dir))
                    count += 1
            except Exception as e:
                print('Img_%d: %s' % (i, e))
        return count

if __name__ == '__main__':
    c = Comp_img()
    ori_dir = 'C:\\Se2\\GAP.cn\\logs\\original'
    comp_dir = 'C:\\Se2\\GAP.cn\\logs\\1st'
    diff_dir = 'C:\\Se2\\GAP.cn\\logs\\diff'
    print(c.comp_pic(ori_dir, comp_dir, diff_dir, 100))
The Report details clearly viewable statistics, including pass/fail ratios and elapsed times, giving you a good overview of the test execution. The Log details statistics from each step of the test execution, keyword by keyword, letting you drill down into the specifics of a test in case of failure or otherwise.
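Those same pass/fail counts also live in Robot Framework's machine-readable output.xml, so a nightly job can extract them for the morning email without scraping the HTML report. The XML below is a hand-made sample that mimics the statistics block of a real output.xml:

```python
import xml.etree.ElementTree as ET

def total_stats(output_xml):
    # Read pass/fail counts from the <statistics><total><stat> element.
    root = ET.fromstring(output_xml)
    stat = root.find("./statistics/total/stat")
    return int(stat.get("pass")), int(stat.get("fail"))

# Hand-made sample mimicking Robot Framework's output.xml statistics block.
sample = """<robot>
  <statistics>
    <total><stat pass="12" fail="1">All Tests</stat></total>
  </statistics>
</robot>"""
print(total_stats(sample))  # (12, 1)
```

In a real job the string would come from reading the results/output.xml file that Robot Framework writes.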
iostat -xm 5 | awk '{print strftime("[%H:%M:%S]"),$0}' > iostat.log &
vmstat -n 5 | (while read -r line; do echo "[$(date +%T:%6N)] $line"; done) > vmstat.log &
jstat -gcutil 54434 1000

conn / as sysdba;
set echo off;
set veri off;
set feedback off;
set termout on;
set heading off;
variable rpt_options number;
define NO_OPTIONS = 0;
define ENABLE_ADDM = 8;
-- according to your needs, the value can be 'text' or 'html'
define report_type='html';
begin
  :rpt_options := &NO_OPTIONS;
end;
/
variable dbid number;
variable inst_num number;
variable bid number;
variable eid number;
begin
  select max(snap_id)-5 into :bid from dba_hist_snapshot;
  select max(snap_id) into :eid from dba_hist_snapshot;
  select dbid into :dbid from v$database;
  select instance_number into :inst_num from v$instance;
end;
/
column ext new_value ext noprint
column fn_name new_value fn_name noprint;
column lnsz new_value lnsz noprint;
--select 'txt' ext from dual where lower('&report_type') = 'text';
select 'html' ext from dual where lower('&report_type') = 'html';
--select 'awr_report_text' fn_name from dual where lower('&report_type') = 'text';
select 'awr_report_html' fn_name from dual where lower('&report_type') = 'html';
--select '80' lnsz from dual where lower('&report_type') = 'text';
select '1500' lnsz from dual where lower('&report_type') = 'html';
set linesize &lnsz;
-- print the AWR results into the report_name file using the spool command:
column report_name new_value report_name noprint;
select 'awr'||'.'||'&ext' report_name from dual;
set termout off;
spool &report_name;
select output from table(dbms_workload_repository.&fn_name(:dbid, :inst_num, :bid, :eid, :rpt_options));
spool off;
set termout on;
clear columns sql;
ttitle off;
btitle off;
repfooter off;
undefine report_name
undefine report_type
undefine fn_name
undefine lnsz
undefine NO_OPTIONS
exit
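The timestamped vmstat.log produced above can then be post-processed into a quick CPU summary for the report. The sketch below assumes default vmstat column order (… us sy id wa st), so the idle percentage is the third field from the right regardless of the timestamp prefix; the sample lines are fabricated:

```python
def avg_idle_cpu(lines):
    # "id" (idle %) is the third column from the right in default vmstat
    # output (us sy id wa st); indexing from the end tolerates the
    # timestamp prefix added by the logging loop.
    idles = []
    for line in lines:
        fields = line.split()
        if len(fields) > 5 and fields[-3].isdigit():
            idles.append(int(fields[-3]))
    return sum(idles) / len(idles) if idles else None

# Fabricated sample lines in the format written to vmstat.log.
sample = [
    "[10:00:05.000001] 1 0 0 81120 52324 1412312 0 0 3 6 120 240 10 5 83 2 0",
    "[10:00:10.000001] 2 0 0 81100 52324 1412320 0 0 0 2 130 260 12 6 79 3 0",
]
print(avg_idle_cpu(sample))  # 81.0
```

The same pattern works for iostat.log, only the interesting columns differ.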
The Report contains an overview of the test execution results: the TPS of each subsystem, CPU utilization, and the database AWR report. The report links to trend charts that show the performance trend across builds; normally, a nearly horizontal line is the expected result.
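Since a near-horizontal TPS line is the goal, the trend data can double as a simple regression gate. A minimal sketch, where the 10% tolerance is an arbitrary illustration rather than a recommended value:

```python
def tps_regressed(baseline_tps, current_tps, tolerance=0.10):
    # Flag the build when TPS drops more than `tolerance` below baseline.
    return current_tps < baseline_tps * (1 - tolerance)

print(tps_regressed(200.0, 195.0))  # False: within 10% of baseline
print(tps_regressed(200.0, 150.0))  # True: a 25% drop
```

A nightly job could compare each build's TPS against the previous build (or a rolling baseline) and mark the build unstable when this returns True.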