Architectures for Software Systems (17-655)


Software Architecture for DBAuditor2




April 22, 2008
Team Metis: Jinhee Cho, Seung Ick Jang, Taekgoo Kim, GwanPyo Do
Final Project
Document Information

  Title:     Software Architecture Document
  Based on:  Team AoB
  Saved in:  Physical storage place

Document Revision History

  No.  Rev.  Date     Author(s)                                            Comments
  1    0.1   3/2/08   Taekgoo Kim                                          Initial creation
  2    0.2   4/3/08   Gwanpyo Do, Seung Ick Jang                           Modify Introduction and Quality Attribute Scenarios
  3    0.3   4/9/08   Taekgoo Kim, Jinhee Cho                              Modify Introduction and Quality Attribute Scenarios; create C&C view and module view
  4    0.31  4/11/08  Taekgoo Kim                                          Modify Quality Attribute Scenarios
  5    0.36  4/12/08  Taekgoo Kim, Jinhee Cho                              Modify C&C view; modify module view description
  6    0.4   4/13/08  Jinhee Cho, Seung Ick Jang, Gwanpyo Do, Taekgoo Kim  Modify C&C view and description; modify module view and description
  7    0.41  4/18/08  Jinhee Cho, Seung Ick Jang, Gwanpyo Do, Taekgoo Kim  Quality Attribute Scenarios
  8    0.5   4/24/08  Gwanpyo Do, Taekgoo Kim                              Modify context diagram, architectural drivers, and architectural decisions
  9    0.51  4/27/08  Taekgoo Kim, Jinhee Cho                              Modify module view and C&C view; add allocation view
Table of Contents

1. Introduction
   1.1. DBAuditor2 System
   1.2. Business Context
2. Architectural Drivers
   2.1. Functional Requirements
        2.1.1. Project Management
        2.1.2. Project/Job Status
        2.1.3. Query Generation and Execution
        2.1.4. Test Report Generation
        2.1.5. Wizard
   2.2. Constraints
   2.3. Quality Attribute Requirements
        2.3.1. Performance
        2.3.2. Portability
        2.3.3. Usability
        2.3.4. Availability
        2.3.5. Modifiability: change of query sets
        2.3.6. Modifiability: change of specification
        2.3.7. Modifiability: addition of new benchmark service
3. Architectural Decisions
   3.1. Architectural Style
        3.1.1. Client-Server
4. Architectural Drivers
   4.1. Functional Requirements
        4.1.1. Project Management
        4.1.2. Project/Job Status
        4.1.3. Query Generation and Execution
        4.1.4. Test Report Generation
        4.1.5. Wizard
   4.2. Constraints
   4.3. Quality Attribute Requirements
        4.3.1. Performance
        4.3.2. Usability
        4.3.3. Availability
        4.3.4. Modifiability: change of query sets
        4.3.5. Modifiability: change of specification
        4.3.6. Modifiability: addition of new benchmark service
5. Architectural Decisions
   5.1. Architectural Style
        5.1.1. Client-Server
        5.1.2. Implicit invocation: publish-subscribe style [TBA]
   5.2. Tactics
        5.2.1. Performance: Resource Demand
        5.2.2. Modifiability: Localize modifications
        5.2.3. Modifiability: Defer binding time
        5.2.4. Usability: Separation of UI
        5.2.5. Usability: Support User Initiative
        5.2.6. Portability
   5.3. Other decisions
        5.3.1. Project profile repository
        5.3.2. Comparison [TBD - figure for comparison]
6. C&C Architectural View
   6.1. Overall C&C View
   6.2. Server
        6.2.1. TestExecuter
   6.3. Client
7. Module Architectural View
   7.1. Representative Module View of the Server
   7.2. Detailed Module View: Server
   7.3. Representative Module View of the Client
   7.4. Detailed Module View: Client
   7.5. Detailed Module View: EventBusPackages
8. Allocation Architectural View
   8.1. Deployment view
   8.2. Implementation view
9. Mapping between Architectural Views
10. Future Works
11. Appendix
    11.1. Acronyms
    11.2. Functional requirements
    11.3. Prioritized Quality Attribute Scenarios
    11.4. References
          [1] http://dogbert.mse.cs.cmu.edu/mse2008Korea/AoB/repository/documents/AoB_SRS_v1.0_20080302.doc
          [2] http://dogbert.mse.cs.cmu.edu/mse2008Korea/AoB/repository/misc/Communication_Experiment.doc
          [3] Clements, Bachmann, Bass, Garlan, Ivers, Little, Nord, Stafford, Documenting Software Architectures: Views and Beyond, Addison-Wesley, 2003
          [4] Heng Chen, Myung-Joo Ko, Neel Mullick, Paulo Merson, "SEI ArchE System, Architecture & Design documentation for the 'Architecture Expert & Design Assistant'", 2004
Table of Figures

  Figure 1   Overall context diagram
  Figure 2   Top-level runtime view of the legacy DBAuditor
  Figure 3   Top-level runtime view of DBAuditor2
  Figure 4   Overall context diagram of DBAuditor2
  Figure 5   Top-level runtime view of DBAuditor2
  Figure 6   Top-level runtime view of the legacy DBAuditor
  Figure 7   The location of the project profile
  Figure 8   C&C view of the system
  Figure 9   C&C view of the server
  Figure 10  Detail view of TestExecuter
  Figure 11  C&C view of the client
  Figure 12  Top-level module view of the Server
  Figure 13  Detailed module view of the Server
  Figure 14  Top-level module view of the Client
  Figure 15  Detailed module view of the Client
  Figure 16  ServerEventBusPackage
  Figure 17  ClientEventBusPackage
  Figure 18  Deployment view
  Figure 19  Package structure for implementation

Table of Tables

  Table 1   Java RMI vs. TCP/IP socket
  Table 2   Description of the overall C&C view
  Table 3   Description of the C&C view of the server
  Table 4   Description of TestExecuter
  Table 5   Description of the C&C view of the client
  Table 6   Description of the top-level module view of the server
  Table 7   Description of the detailed module view of the server
  Table 8   Description of the top-level module view of the client
  Table 9   Description of the detailed module view of the client
  Table 10  Mapping table between C&C view and module view
  Table 11  Project Management
  Table 12  Project Profile
  Table 13  Project Status
  Table 14  Query Generation and Execution
  Table 15  Test Report
  Table 16  Job Status
  Table 17  Wizard
  Table 18  Configuration
  Table 19  Connection Control
  Table 20  Environment
  Table 21  Quality Attribute Scenario 1
  Table 22  Quality Attribute Scenario 2
  Table 23  Quality Attribute Scenario 5
  Table 24  Quality Attribute Scenario 6
  Table 25  Quality Attribute Scenario 7
  Table 26  Quality Attribute Scenario 8
  Table 27  Quality Attribute Scenario 9
1. Introduction

1.1. DBAuditor2 System

The main goal of this project is to enhance a previously developed automated DBMS benchmark tool, DBAuditor. The tool to be developed, DBAuditor2, needs to provide benchmark project management that stores configurations and project histories, and testers can search the history using filters. The tool creates reports: testers can define their own report format and choose a specific graph type, among various available types, for the resource monitoring graph. Usability must be improved by offering batch jobs, benchmark wizards, predefined queries for each DBMS, and an easy method of entering date/time. The last objective is the provision of a detailed user guide, including basic usage of each DBMS. In addition to these functionalities, performance should be high enough to be competitive in the DBMS testing market.

1.2. Business Context

The DBAuditor2 project was created by the Telecommunications Technology Association (TTA). TTA is an IT standards organization whose services include one-stop services for the establishment of IT standards as well as testing and certification of IT products. A DBMS testing tool tests various kinds of DBMSs against several Transaction Processing Performance Council (TPC) standards. The TTA Software Quality Evaluation Center (SQEC) had been using TeamQuest, a commercial DBMS testing tool. However, that tool is too heavy for testing simple TPC standards because it provides too many features, some of which are useless for the organization. TTA wanted a lighter and simpler tool that would still have the core DBMS testing functionality. To meet those needs, DBAuditor was developed last year. Even though DBAuditor meets some of the needs, the tool is still not easy for testers in TTA to use. Thus, the DBAuditor2 project was created mainly to enhance usability. However, the organization also faced problems related to performance, in terms of CPU usage and latency, while using DBAuditor. Performance has therefore become one of the main goals of the DBAuditor2 project.
[Figure 1: Overall context diagram. The diagram shows the interactions among the client, the tester, the DBAuditor2 client, the DBAuditor2 server, and the DBMS under test: (1) request for benchmark, (2) test execution, (3, 6) server-client communication, (4) execute TPC-C/TPC-H benchmark test, (5) test result, (7) test report, (8) report benchmark result. Legend: target system boundary, DBMS under test, server/client application, document delivery, UI interaction, network communication, query execution/return.]

Figure 1 shows the context diagram of DBAuditor2. A client requests a DBMS performance benchmark test from TTA. When TTA accepts the request, a tester in TTA requests a performance test of the DBMS under test. The tester interacts with the client application of DBAuditor2. Once the tester finishes his request, the client application communicates with the DBAuditor2 server. The server executes queries to test the DBMS under test. When the server gets the results, it reports them to the tester. The tester refers to the report and produces the benchmark result document, which is delivered to the client.
2. Architectural Drivers

2.1. Functional Requirements

Here are the key functional requirements that DBAuditor2 shall support. For more detail, refer to the SRS document listed in section 11.4.

2.1.1. Project Management
- Create/delete/modify/search project information.
- Import/export of project profiles.
- A project profile includes base information, query information, and test information.

2.1.2. Project/Job Status
- DBAuditor2 shall display the list of test results when a project is opened.
- Display the current status of data generation and the current job.

2.1.3. Query Generation and Execution
- Execute batches of queries.
- Cancel during batch processing.
- Show progress during batch processing.

2.1.4. Test Report Generation
- Generate a report, display it on screen, and store it in a file.
- Display system usage information in various forms.
- Generate a canceled report when a test is canceled.

2.1.5. Wizard
- Provide a wizard for TPC-C/H tests.

2.2. Constraints
- DBAuditor2 is developed in the Java programming language; this is a constraint because DBAuditor2 is based on DBAuditor.
- Using a TCP/IP socket as the communication method between server and client is mandatory.
- DBAuditor2 shall run on the following operating systems: MS Windows, Linux 2.6.x, and HP-UX.
- DBAuditor2 shall support the following DBMSs: Oracle, MS SQL, DB2, and MySQL are mandatory. Optionally, Informix and Sybase might be supported.
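The import/export requirement in 2.1.1 can be illustrated with a small sketch. This is not the actual DBAuditor2 design: the class name, fields, and the use of plain Java serialization are all illustrative assumptions; the real profile also carries query and test information.

```java
import java.io.*;

// Hypothetical sketch of project-profile import/export (requirement 2.1.1).
// Field names are illustrative, not taken from the real DBAuditor2 profile.
public class ProjectProfile implements Serializable {
    private static final long serialVersionUID = 1L;

    final String projectName;
    final String targetDbms;

    public ProjectProfile(String projectName, String targetDbms) {
        this.projectName = projectName;
        this.targetDbms = targetDbms;
    }

    // Export: write the profile to a file using Java serialization.
    public void exportTo(File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(this);
        }
    }

    // Import: read a previously exported profile back from a file.
    public static ProjectProfile importFrom(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (ProjectProfile) in.readObject();
        }
    }
}
```

A profile exported on one machine can then be imported on another, which is what makes profiles shareable between testers.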
2.3. Quality Attribute Requirements

DBAuditor2 shall fulfill the following quality attributes.

2.3.1. Performance

The main purpose of DBAuditor2 is to benchmark the performance of various DBMSs. The benchmark results would be meaningless if the reported CPU and memory usage were heavily affected by the benchmark tool itself; the credibility of the benchmark result is very important. While executing a benchmark, DBAuditor2 shall minimize its influence on the system the benchmark tool runs on.

- Performance quality attribute scenario 1: A tester starts DBAuditor2. After boot-up, the tester connects to the database. Schema creation and data generation are already done. The tester now executes queries according to TPC-C. While the queries execute, CPU usage of the system DBAuditor2 runs on reaches 100%. The data transaction process should be separate from DBAuditor2. In this situation, the DBAuditor2 process must occupy less than 5% of CPU and memory usage. [Environment of this response measure will be added after discussion with the client.]

- Performance quality attribute scenario 2: A tester starts DBAuditor2. After boot-up, the tester connects to the database. Schema creation is already done. The tester now generates test data that will be inserted into the DBMS. The generation time is measured from the moment the tester clicks the 'Generate test data' button until generation is completed. This generation time shall differ by no more than 10% from the generation time of DBGEN. DBGEN is a data generation tool provided by TPC, and TPC strongly recommends using DBGEN when generating performance test data. [Issue]

2.3.2. Portability

DBAuditor2 shall be capable of testing the performance of various databases: DB2, Oracle, MySQL, and MS-SQL.

- Portability quality attribute scenario 1: TTA provides a database performance testing service for its clients. Because those clients run their databases on various operating systems, TTA provides the testing service on each client's operating system. Currently, most IT companies use Windows, Linux 2.6.x, or HP-UX. Thus, DBAuditor2 shall be installable on Windows, Linux 2.6.x, and HP-UX with no modification of the system.

- Portability quality attribute scenario 2: TTA's clients may request performance testing of various kinds of databases. Currently, most IT companies use one of the following DBMSs: DB2, Oracle, MySQL, or MS-SQL. Thus, once DBAuditor2 is successfully installed to test a particular DBMS, it shall be operable on DB2, Oracle, MySQL, and MS-SQL with no modification of the system.
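The 10% response measure in performance scenario 2 is simple enough to state as code. The sketch below is purely illustrative; the class and method names are hypothetical and not part of the DBAuditor2 design.

```java
// Illustrative check for performance scenario 2: the measured generation time
// must be within 10% of the DBGEN reference time. Names are hypothetical.
public class GenerationTimeCheck {
    // Returns true when |measured - reference| is at most 10% of the reference.
    public static boolean withinTenPercent(long measuredMillis, long referenceMillis) {
        return Math.abs(measuredMillis - referenceMillis) <= referenceMillis / 10.0;
    }
}
```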
  11. 11. Architectures for Software Systems(17-655) 2.3.3. Usability The DBAuditor2 shall provide an easy way of executing tests. In TTA, testers perform benchmark task, spending several hours or days for a test. Thus, the DBAuditor2 shall provide an easy way of installation and test execution, intuitive menu, good learnability and comfortableness. DBAuditor2 shall satisfy end users in terms of easy-to-use. 2.3.4. Availability The DBAuditor2 shall be up while test executions because correctness of benchmark result is very important. When system fails, the system shall provide detailed information: the time failure occurred, last actions performed before the failure, and possible solutions. If a fault is occurred, the DBAuditor2 shall automatically cancel the testing, notify the failure of testing and roll back that testing to “ready-to-go*” status, according to TPC specifications (need guide) * ready-to-go: in this status, all parameters for testing environment are set so that a tester can start the test by clicking start button. 2.3.5. Modifiability: change of query sets The DBAuditor2 conducts DBMS performance testing according to TPC-C/H specification for DB2, Oracle, MySQL, and MS-SQL. Those DBMSs can be newly released in the future. For the new release of DBMSs, the test execution queries may be changed. The tester who is skilled in DBMSs can modify the sets of testing queries with no modification of the system within 1 man-day. 2.3.6. Modifiability: change of specification The client provides TPC-C/H benchmark services. TPC-C and TPC-H specification might be updated in the future. A single developer who is skilled in TPC-C/H specification and understands the updates of the specification shall modify the system so that it can support updated specification within 7 man-days. 7 man-days include development, test, integration, and installation. 2.3.7. Modifiability: addition of new benchmark service The client provides TPC-C/H benchmark services. 
They might need to add other benchmark services such as TPC-E. A single developer who understands the specification of the newly added benchmark service shall be able to modify the system to support that service within 90 man-days. The 90 man-days include development, test, integration, and installation.
3. Architectural Decisions

3.1. Architectural Style

3.1.1. Client-Server
The legacy system adopts the client-server style. The system introduces an RMI connection as a communication channel and uses one additional communication channel for the monitoring process. The major reason for this separation of RMI communication was to remove the effect of system usage monitoring on the benchmark test, so as to satisfy the accuracy quality attribute. That is, if one RMI communication were shared by the performance test and system usage monitoring, the results of the benchmark test could be affected by the monitoring. Conversely, the system usage monitoring could be affected by the performance test, even though testers want to monitor system usage in real time.

Figure 2 Top level runtime view of the legacy DBAuditor

For these reasons, the single RMI communication is divided into two RMI communications, and both the accuracy of the performance test and real-time monitoring of system usage can be satisfied with the two separate channels.
The architecture of the DBAuditor2 system basically adopts the client-server style. An end user runs the client application on a personal computer, and the server application runs on separate hardware, so the client-server architectural style is inevitably adopted.

Figure 3 Top level runtime view of DBAuditor2

From Figure 3 we can identify the two most significant changes: one is the communication between the client and the server, and the other is the number of processes in the client. We introduce TCP/IP socket communication in order to improve performance in terms of latency. The legacy system uses RMI communication to provide simple and transparent communication between modules on the client and the server, but this has drawbacks in communication latency. RMI uses TCP/IP socket communication at a low level; it is therefore an abstraction over TCP/IP that generates additional overhead. Thus, we decided to use TCP/IP socket communication instead of RMI communication.
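The duplex command channel described above can be sketched with plain Java sockets. This is a minimal illustration only, with assumed names (CommandChannelDemo, the "ACK" reply format); DBAuditor2's real packet protocol is not specified here.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of a duplex TCP/IP socket channel: the client sends a
// command string, the server replies with an acknowledgement.
public class CommandChannelDemo {

    public static String roundTrip(String command) throws IOException, InterruptedException {
        try (ServerSocket server = new ServerSocket(0)) { // port 0: pick any free port
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("ACK " + in.readLine()); // acknowledge the command
                } catch (IOException ignored) {
                }
            });
            serverThread.start();

            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println(command);         // client sends a command
                String reply = in.readLine(); // and waits for the server's reply
                serverThread.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("EXECUTE_TPCC"));
    }
}
```

Because the channel is a raw byte stream, the application must define its own message framing (here, newline-terminated lines), which is part of the overhead RMI would otherwise hide.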
We conducted an experiment to quantify the difference between RMI communication and TCP/IP socket communication.

Table 1 Java RMI vs. TCP/IP socket

  Data length   Socket       RMI          Ratio
  10            7.34 sec     10.90 sec    1.48
  100           2.56 sec     10.86 sec    4.25
  1000          2.56 sec     11.10 sec    4.33
  10000         9.85 sec     23.02 sec    2.34
  100000        85.15 sec    144.59 sec   1.70

In this experiment, we implemented a server and a client; the server simply sends a string of predefined length to the client. Socket communication consistently shows shorter completion times. For more information, such as the experiment environment, refer to the Database Transaction Experiment document [11.4].

We considered two candidates regarding the number of client processes: we could create either a new process or a new thread to accept system monitoring data. Its responsibility is to receive system monitoring data from the server and send the information to the user interface. Communication between processes costs more than communication between threads, because threads can share memory more easily than processes. For these reasons, we merged the two processes into a single process.

In Figure 3, two TCP/IP socket communication channels are introduced. One channel is used to send and receive information between the client and the server, and the other is used to transfer system monitoring information. The first channel is a duplex channel: the server can send information to the client, and the client can send commands to the server. The other channel is a simplex channel: the system monitoring process in the server always sends, and the monitor process in the client always receives. By this separation, modifiability is improved because we have separate connectors: one for the protocol, and the other for streaming.

The DBAuditor2 project was created by the Telecommunication Technology Association (TTA).
TTA is an IT standards organization whose services include one-stop services for the establishment of IT standards as well as testing and certification of IT products. A DBMS testing tool tests various kinds of DBMS based on several Transaction Processing Performance Council (TPC) standards. The TTA Software Quality Evaluation Center (SQEC) had been using TeamQuest, a commercial DBMS testing tool. However, that tool is too heavy for testing simple TPC standards because it provides too many features, some of which are useless for the organization. TTA wanted a lighter and simpler tool that still provides the core DBMS testing functionality. To meet those needs, DBAuditor was developed last year.
Even though DBAuditor meets some of the needs, the tool is still not easy for testers in TTA to use. Thus, the DBAuditor2 project was created mainly to enhance usability. However, the organization also faced problems with performance, in terms of CPU usage and latency, while using DBAuditor, so performance has become one of the main goals of the DBAuditor2 project. DBAuditor2 is based on the previous project's results.

Figure 4 shows the context diagram of the DBAuditor2. A tester requests a performance test for a target DB. The DBAuditor2 system queries the target database and gets the test result. While executing a benchmark test, the system should obtain system usage monitoring data; the DBAuditor2 system sends requests to the OS kernel to get that data. When the DBAuditor2 reports the results to the tester, it sends the test results from the target DB as well as the system monitoring data.

Figure 4 Overall context diagram of DBAuditor2 (flow: 1. request for benchmark; 2. test execution; 3. execute TPC-C/TPC-H benchmark test against the DBMS under test, or TPC-APP benchmark test against the WAS under test; 4. test result; 5. test report; 6. report benchmark result)
4. Architectural Drivers

4.1. Functional Requirements
Here are the key functional requirements that the DBAuditor2 shall support. For more detail, refer to the SRS documents.

4.1.1. Project management
- Create/delete/modify/search project information.
- Import/export the project profile.
- The project profile includes base information, query information, and test information.

4.1.2. Project/Job Status
- DBAuditor2 shall display the list of test results when a project is opened.
- Display the current status of data generation and of the current job.

4.1.3. Query Generation and Execution
- Execute a batch of queries.
- Cancel during batch processing.
- Show progress during batch processing.

4.1.4. Test Report Generation
- Generate a report, display it on screen, and store it in a file.
- Display system usage information in various ways.
- Generate a "canceled" report when a test is canceled.

4.1.5. Wizard
- Provide a wizard for the TPC-C/H tests.

4.2. Constraints
- DBAuditor is developed in the Java programming language; this is a constraint because DBAuditor2 is based on DBAuditor.
- Using TCP/IP sockets as the communication method between server and client is mandatory.
- DBAuditor2 shall run on the following operating systems: MS Windows, Linux 2.6.x, and HP-UX.
- DBAuditor2 shall support the following DBMSs: Oracle, MS SQL, DB2, and MySQL are mandatory. Optionally, Informix and Sybase might be supported.
4.3. Quality Attribute Requirements
The DBAuditor2 shall fulfill the following quality attributes.

4.3.1. Performance
The main purpose of the DBAuditor2 is to benchmark the performance of various DBMSs. The benchmark results would be meaningless if the reported CPU and memory usage were heavily affected by the benchmark tool itself; the credibility of the benchmark result is very important. While executing a benchmark, the DBAuditor2 shall minimize its influence on the system the benchmark tool runs on.

- Performance quality attribute scenario 1: A tester starts the DBAuditor2. After boot-up, the tester connects to the database; schema creation and data generation are already done. The tester now executes queries according to TPC-C. While queries are executing, the CPU usage of the system the DBAuditor2 runs on reaches 100%. The data transaction process should be separate from the DBAuditor2. In this situation, the DBAuditor2 process must occupy less than 5% of CPU and memory usage. [The environment of this response measure will be added after discussion with the client.]
- Performance quality attribute scenario 2: A tester starts the DBAuditor2. After boot-up, the tester connects to the database; schema creation is already done. The tester now generates the test data that will be inserted into the DBMS. The generation time is measured from the moment the tester clicks the 'Generate test data' button until generation is completed. This generation time shall be within 10% of the generation time of DBGEN, which is provided by TPC. [Issue]

4.3.2. Usability
The DBAuditor2 shall provide an easy way of executing tests. In TTA, testers perform benchmark tasks, spending several hours or days on a test. Thus, the DBAuditor2 shall provide easy installation and test execution, an intuitive menu, good learnability, and comfortable operation. DBAuditor2 shall satisfy end users in terms of ease of use.
[Issue: how to measure the usability? With a survey such as a Likert scale? Or ]

4.3.3. Availability
The DBAuditor2 shall not shut down during tests because the correctness of the benchmark result is very important. If a fault occurs, the DBAuditor2 shall automatically cancel the test, notify the tester of the failure, and roll the test back to "ready-to-go*" status, according to TPC specifications (need guide).

* ready-to-go: in this status, all parameters for the testing environment are set so that a tester can start the test by clicking the start button.
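The cancel-and-roll-back behaviour above can be sketched as a tiny state machine. The TestSession class, its states, and its methods are illustrative assumptions; the document does not specify DBAuditor2's actual status handling at this level.

```java
// Hedged sketch of the "ready-to-go" availability scenario: on a fault, the
// test is canceled, the failure is recorded for notification, and the session
// rolls back to READY_TO_GO so the tester can restart immediately.
public class TestSession {

    public enum Status { READY_TO_GO, RUNNING, CANCELED }

    private Status status = Status.READY_TO_GO;
    private String lastFailure = null;

    public void start() {
        if (status != Status.READY_TO_GO)
            throw new IllegalStateException("a test can only start from READY_TO_GO");
        status = Status.RUNNING;
    }

    // Fault handling: cancel, record the failure, then roll back.
    public void onFault(String reason) {
        status = Status.CANCELED;
        lastFailure = reason;
        rollback();
    }

    private void rollback() {
        // All testing-environment parameters are assumed to be preserved,
        // so rolling back simply restores the start-ready state.
        status = Status.READY_TO_GO;
    }

    public Status status() { return status; }
    public String lastFailure() { return lastFailure; }
}
```

A real implementation would also have to undo partial database changes per the TPC specifications; that step is elided here.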
4.3.4. Modifiability: change of query sets
The DBAuditor conducts DBMS performance testing according to the TPC-C/H specifications for various DBMSs. However, the specifications can be updated, and DBMSs such as Oracle and MySQL can be newly released in the future. For updates of the TPC-C/H specifications and new DBMS releases, the queries used for performance testing may change. A tester who is skilled in various kinds of DBMS shall be able to modify the sets of testing queries, without affecting or modifying the DBAuditor itself, within 1 man-day.

4.3.5. Modifiability: change of specification
With the DBAuditor2, the client provides two major benchmark services for a DBMS. Though they currently perform TPC-C/H tests with the DBAuditor2 according to the TPC specifications, they might need to modify the TPC-C/H test process when the TPC-C/H specifications are updated. This modification shall be achievable by a single developer who is skilled in the TPC-C/H specifications and understands the updates, in 7 man-days.

4.3.6. Modifiability: addition of new benchmark service
With the DBAuditor2, the client provides two major benchmark services for a DBMS. Though they currently perform TPC-C/H tests with the DBAuditor2 according to the TPC specifications, they might need to add other kinds of benchmark services. These additions shall be achievable by a single developer who understands the specification of the newly added benchmark service, in 90 man-days.
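The query-set modifiability scenario (4.3.4) implies that test queries live outside the code, keyed per DBMS, so a tester can edit them without touching the system. A minimal sketch, assuming a properties-file format and the hypothetical names QuerySetLoader and "standard."/"oracle." key prefixes:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

// Sketch of an externalized query set: queries are looked up per DBMS,
// falling back to a standard-SQL entry when no DBMS-specific override exists.
public class QuerySetLoader {

    private final Properties queries = new Properties();

    public QuerySetLoader(Reader source) throws IOException {
        queries.load(source); // e.g. a per-project .properties file on disk
    }

    public String query(String dbms, String name) {
        return queries.getProperty(dbms + "." + name,
                queries.getProperty("standard." + name));
    }

    public static void main(String[] args) throws IOException {
        String file =
              "standard.new_order=INSERT INTO orders VALUES (?, ?)\n"
            + "oracle.new_order=INSERT /*+ APPEND */ INTO orders VALUES (?, ?)\n";
        QuerySetLoader loader = new QuerySetLoader(new StringReader(file));
        System.out.println(loader.query("oracle", "new_order")); // Oracle override
        System.out.println(loader.query("mysql", "new_order"));  // falls back to standard
    }
}
```

Editing such a file is exactly the kind of change a skilled tester could make within the 1 man-day budget, with no recompilation of the system.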
5. Architectural Decisions

5.1. Architectural Style

5.1.1. Client-Server
The architecture of the DBAuditor2 system basically adopts the client-server style. An end user runs the client application on his or her own personal computer, and the server application runs on a separate server, so the client-server architectural style is inevitably adopted. In this architecture, the client and the server use one simple TCP/IP socket communication channel. The legacy DBAuditor system uses RMI communication to provide simple and transparent communication between modules on the client and the server, but this has drawbacks in communication latency. RMI uses TCP/IP socket communication at a low level; it is therefore an abstraction over TCP/IP that generates additional overhead. Thus, we decided to use TCP/IP socket communication for DBAuditor2 instead of RMI communication.
Figure 5 Top level runtime view of DBAuditor2

In addition, the legacy DBAuditor uses another communication channel for the monitoring process. The major reason for this separation of RMI communication was to remove the effect of system usage monitoring on the benchmark test, so as to satisfy the accuracy quality attribute. That is, if one RMI communication were shared by the performance test and system usage monitoring, the results of the performance test could be affected by the monitoring. Conversely, the system usage monitoring could be affected by the performance test, even though testers want to monitor system usage in real time.
Figure 6 Top level runtime view of the legacy DBAuditor

For these reasons, the single RMI communication is divided into two RMI communications, and both the accuracy of the performance test and real-time monitoring of system usage can be satisfied with the two separate channels. Moreover, by adopting TCP/IP sockets instead of RMI, accuracy may be improved further, because the monitoring process then has little effect on the server's computing time.

5.1.2. Implicit invocation: publish-subscribe style [TBA]
In both the client and the server, we adopt the implicit invocation architectural style, specifically publish-subscribe. There are several reasons for this choice. The most important factor is that there are complex interactions among modules in the legacy system, and more complex interactions will be added to implement the newly added requirements. The publish-subscribe style lets us add, modify, or remove a module more easily. Besides, there are some gaps between the documentation and the code. Given that we need to refactor, complex interaction among modules would be problematic in the following implementation phase. Thus, the publish-subscribe architectural style is applicable to this situation.

5.2. Tactics

5.2.1. Performance: Resource Demand
Our system will be implemented in Java because the target system has to run on various operating systems. Even though Java supports various runtime environments, Java usually inhibits the
performance of the target system because Java incurs computational overhead. We adopted the resource demand tactic to reduce this overhead.

- TCP/IP socket communication: The legacy system uses Java RMI for communication between the server and the client. Java RMI is a very convenient and simple communication method, but it consumes more resources than TCP/IP socket communication. We performed an experiment to determine which method is suitable for the target system, and according to its results we will use TCP/IP socket communication.
- Java Native Interface: A program compiled to Java bytecode must run on a Java virtual machine, so Java-based programs usually consume more resources than programs written in languages such as C or C++. To address this problem, we decided to use the Java Native Interface (JNI) in hotspot code. By using JNI, we expect the target system to be faster than the legacy system.

5.2.2. Modifiability: Localize modifications
The legacy system uses explicit call-returns, which makes it hard to modify: if we modify a module, it affects other related modules in most cases. To avoid this problem, we will use an event bus in the implementation.

- Publish-subscribe style: The publish-subscribe style promotes the modifiability of the target system because each module does not depend heavily on other modules. By maintaining semantic coherence among modules, we can achieve the required modifiability.

5.2.3. Modifiability: Defer binding time
The main purpose of the target system is to benchmark various database management systems and operating systems. Thus, the target system shall support detailed configuration to promote the modifiability of the system.

- System configuration: As mentioned above, the environment of the target system varies. We adopted the defer-binding-time tactic to address this.
The target system will be executed according to the system configuration; the binding is done at runtime rather than at compile time.
- User-defined SQL statements: Even though most DBMSs support the standard SQL specification, some DBMSs have problems with standard SQL statements. Therefore, we separate the SQL statements from the source code; users can define their own SQL statements according to their circumstances.

5.2.4. Usability: Separation of UI
In many cases, users want to change user interfaces frequently. If the user interface is tightly coupled with the logic of the system, we cannot change the user interface easily. Therefore, we
adopted the separation-of-UI tactic. By adopting this tactic, we expect the target system to have better usability.

- Status Manager: To separate the UI from the logic, we added a status manager which manages the status of the system. Because the UI interacts only with this manager, if we keep semantic coherence among the modules, we can change the UI without affecting the other logic modules.

5.2.5. Usability: Support User Initiative
As mentioned above, the target system runs in various environments and has to support various DBMSs. To accomplish this goal, we adopted the support-user-initiative tactic.

- System configuration & user-defined queries: The target system promotes usability by supporting user-defined configuration. We expect that users can configure the system easily and benchmark whatever they want.

5.2.6. Portability
Portability is one of the most important quality attributes because the target system runs on various operating systems. To promote the portability of the system, we adopted Java as the development language, because Java supports various OSs and, through JDBC, various DBMSs.

5.3. Other decisions

5.3.1. Project profile repository
One of the requirements of the system is managing project profile data. The project profile includes general information about a testing project and the result of each benchmark test. Because the general project information is to be managed by the client and the test results are to be managed by the server, there is an issue of which side should mainly manage the project profile data.
Figure 7 The location of the project profile

Most of the control in the testing process is performed on the client side. To start a benchmark test project, a tester needs to initiate the test on the client side. In addition, to preserve the test result while a test is being performed, the server needs to save the data temporarily, and the result of each test should be transferred from the server to the client after the test finishes. Thus, to reduce communication traffic, it is appropriate for the client to take charge of managing the project profile data. Because the customer wants to share the project profile among testers, the latest version of the project profile also needs to be stored on the server, which raises a synchronization issue; synchronization might be left to the end user when necessary. This issue should be covered in the follow-up design, through negotiation with the customer.

5.3.2. Comparison [TBD: figure for comparison]
The target system is based on the legacy system. It is important to figure out the strengths and weaknesses of the two systems, the target system and the legacy system, with respect to the critical issues mentioned above. By analyzing those advantages and disadvantages, we can then confirm the architecture of the target system.
6. C & C Architectural View
The Component & Connector (C&C) view-type decomposes the system into components that have some runtime presence, such as processes, objects, and storage, and connectors that represent pathways of communication, such as information flows and access to shared storage. These components and connectors are the elements represented in the views [11.4]. This view-type was selected because it helps the following roles [11.4]:

- The software architect and project manager can argue and reason about architectural properties and quality attribute requirements that the system must adhere to.
- The software architect, programmer, and tester can infer the progression of data through the system and how the structure of the system changes as it executes.
- External stakeholders such as customers and project evaluators can understand the system's principal executing components (including the major shared data sources) and their interactions, which serves as a means for verification and validation of system properties.
- Maintainers of the project can get an overview of the system as a starting point for future extensions and/or modifications.

The view-type is represented using a combination of the call-return and publish-subscribe styles, because separating these two predominant styles into different views would have reduced the understandability of the system as a whole. The interactions of the different components in this view-type warrant the combination of the styles, and they individually adhere to the rules of the aforementioned styles.

6.1. Overall C&C View
The architecture involves four processes: one on the client side and three on the server side. The client side and the server side communicate via sockets. Between the client and the server, there are two socket communication channels.
One is used to execute tests on the TargetDB, and the other is used to report system resource usage.
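The simplex monitoring channel can be sketched as follows: one side only writes resource-usage samples, and the other only reads them. The class name MonitorStreamDemo and the "cpu=<percent>" sample format are assumptions for illustration; the real SystemMonitor protocol is not specified in this document.

```java
import java.io.*;
import java.net.*;
import java.util.ArrayList;
import java.util.List;

// Sketch of the simplex channel: a monitor thread streams samples one way
// over a socket, and the client collects them until the stream closes.
public class MonitorStreamDemo {

    public static List<String> receiveSamples(String[] samples) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread monitor = new Thread(() -> {   // stands in for SystemMonitor
                try (Socket s = server.accept();
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    for (String sample : samples)
                        out.println(sample);      // one-way: write only
                } catch (IOException ignored) {
                }
            });
            monitor.start();

            List<String> received = new ArrayList<>();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) // one-way: read only
                    received.add(line);
            }
            monitor.join();
            return received;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(receiveSamples(new String[] {"cpu=3", "cpu=4"}));
    }
}
```

Keeping this stream separate from the command channel is what lets the protocol connector and the streaming connector evolve independently, as the modifiability argument above requires.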
Figure 8 C&C View of the system

- SystemConfig: The SystemConfig storage contains system configuration information, consisting of the server address, default DB information, and the DB connection driver.
- CProjectProfile: The CProjectProfile storage on the client side is a file that contains project profiles. A project profile consists of the project name, project open date, target database name, and test profile.
- DataRule: The DataRule storage contains a set of rules for generating the data used to test the TargetDB.
- CQuerySet: The CQuerySet storage contains a set of queries on the client side. In this storage there are multiple query sets for various kinds of database: a standard SQL query set, an Oracle query set, a MySQL query set, and user-defined query set(s). Since the client side's query sets are the primary query sets between the client and server sides, users should be able to manage
these query sets directly and are responsible for synchronizing them with the server side's query sets (SQuerySet).
- CLogData: The CLogData storage stores log data of the client side.
- SLogData: The SLogData storage stores log data of the server side.
- SQuerySet: The SQuerySet storage contains a set of queries on the server side. In this storage there are multiple query sets for various kinds of database: a standard SQL query set, an Oracle query set, a MySQL query set, and user-defined query set(s).
- SProjectProfile: The SProjectProfile storage on the server side is a file that contains project profiles. A project profile consists of the project name, project open date, database name, test profile, and [TBA].
- GeneratedData: The GeneratedData storage contains data that is generated by DBGen and used for test execution.
- Server: The Server is responsible for generating test data, executing tests, and monitoring CPU usage.
- Client: The Client is responsible for user interaction, managing the project profile, managing query sets, and configuring the system.
- TargetDB: The TargetDB is the target database to be tested. TestExecuter and MassiveDataUploader send queries through a JDBC caller so that the TargetDB can be used regardless of the kind of database.
- ResourceUsage: The ResourceUsage storage contains resource usage information.
- DBGen: DBGen is an external process developed by the TPC organization. DBGen generates TPC-H data and stores it in the GeneratedData storage.
- SystemMonitor: SystemMonitor is a process that monitors resource usage on the server side. To reduce the effect on the Server's performance and resource usage, this process runs independently of the Server.

Table 2 Description of Overall C&C view
6.2. Server
The server side consists of three processes: the server application process, the resource monitoring process, and the DBGen process. The server application process is the main application that performs the actual testing. To do this, the server application generates massive test data through the DataGenerator for the TPC-C test, and delegates generation of TPC-H test data to the DBGen process through JNI. This is because DBGen is known to be the most optimized generator of TPC-H test data, and TPC recommends employing it to improve generation performance. The resource monitoring process monitors resource usage of the server during test execution. The ResourceMonitor component may request system resource usage information from the OS when the OS provides that information; otherwise, it should contain a module that looks up the resource usage and generates the related information.
Figure 9 C&C view of the server

- SLogRecorder: SLogRecorder is responsible for recording logs of the EventBus on the server side.
- SCommunicationHandler: SCommunicationHandler handles socket communication with the client side. It receives/sends packets from/to the client side, transforms messages into corresponding events with parameterized data, and announces them on the EventBus.
- MassiveDataUploader: MassiveDataUploader extracts generated test data from GeneratedData and uploads it to the TargetDB according to the TPC specification.
- DataGenerator: DataGenerator invokes DBGen to generate data for TPC-H, and generates data for TPC-C itself.
- SStatusManager: SStatusManager generates the server side's status and reports it to the client side.
- TestExecuter: TestExecuter sends a set of SQL statements to the TargetDB and announces the results of each execution.
- SProjectManager: SProjectManager is the project manager on the server side; there is another project manager on the client side. SProjectManager synchronizes the ProjectProfile and QuerySet storages with the client side's ProjectProfile and QuerySet storages.
- JDBCCaller: JDBCCaller is a thread that connects the TestExecuter and the TargetDB via JDBC. In the case of a TPC-C test, according to the TPC specification, multiple VirtualTerminals shall run and connect to the TargetDB through their respective JDBCCallers.

Table 3 Description of C&C view of the server
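The multiple-VirtualTerminal requirement above can be sketched with a thread pool. This is illustrative only: the actual JDBC call is replaced by a counter stub, and the names (VirtualTerminalDemo, runTerminals) are assumptions; in DBAuditor2 each terminal would go through its own JDBCCaller to the TargetDB.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of running multiple VirtualTerminal threads concurrently, as the
// TPC-C specification requires. Each terminal "executes" a fixed number of
// transactions; the real JDBC work is stubbed with an atomic counter.
public class VirtualTerminalDemo {

    public static int runTerminals(int terminals, int transactionsEach) throws InterruptedException {
        AtomicInteger executed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(terminals);
        for (int t = 0; t < terminals; t++) {
            pool.submit(() -> {
                for (int i = 0; i < transactionsEach; i++) {
                    // Stub for "send one TPC-C transaction via JDBCCaller".
                    executed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return executed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTerminals(4, 25)); // 4 terminals x 25 transactions
    }
}
```

A fixed pool sized to the terminal count mirrors the one-JDBCCaller-per-VirtualTerminal structure described in Table 3.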
6.2.1. TestExecuter
The TestExecuter thread executes queries against the target database via JDBC to create tables, to perform the TPC-C test with multiple virtual terminals, and to perform the TPC-H test. Since the common vehicle for passing messages and requesting services is the event bus, the JobDistributor handles event subscribing and publishing and classifies events for the SchemaBuilder, TPCCExecuter, and TPCHExecuter. While any one of these functions is working, the other two should not work; thus, the JobDistributor invokes each executer component via a call-return connection. When each executer component executes its function

Figure 10 Detail View of TestExecuter

- JobDistributor: JobDistributor is a thread that distributes jobs to the SchemaBuilder, the TPCCExecuter, and the TPCHExecuter. JobDistributor subscribes to all events that request query execution, and publishes the results of schema building, TPC-C testing, and TPC-H testing.
- VirtualTerminal: VirtualTerminal is a thread that acts as a virtual terminal for the TargetDB. According to the TPC-C specification, a VirtualTerminal mimics real database users' actions, and multiple VirtualTerminals are spawned to emulate a real database environment.
- SchemaBuilder: SchemaBuilder is responsible for creating tables in the target database through JDBC.
- TPCCExecuter: TPCCExecuter is responsible for executing the TPC-C test against the target database through JDBC. According to the TPC-C specification, the TPC-C test shall be done by multiple virtual terminals so that the TPCCExecuter emulates real terminal users' behavior.
TPCHExecuter TPCHExecuter is responsible for executing TPC-H test against the target database Preliminary Report 30/53 Metis
  31. 31. Architectures for Software Systems(17-655) through JDBC Table 4 Description of TestExecuter Preliminary Report 31/53 Metis
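The JobDistributor's classify-and-dispatch behavior, with its mutual-exclusion constraint (only one of schema building, TPC-C, or TPC-H may run at a time), can be sketched as below. This is a simplified illustration under assumed names; the executer bodies stand in for the real JDBC-backed components.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of JobDistributor's call-return dispatch (illustrative, not the
// actual implementation): events are classified and routed to exactly one
// executer function, and a busy flag enforces that only one runs at a time.
public class JobDistributorSketch {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    public String dispatch(String event) {
        // Reject the job if another executer function is already running.
        if (!busy.compareAndSet(false, true)) return "rejected: executer busy";
        try {
            switch (event) {
                case "SchemaBuilding": return buildSchema();
                case "TPCCTest":       return runTpcC();
                case "TPCHTest":       return runTpcH();
                default:               return "unknown event: " + event;
            }
        } finally {
            busy.set(false); // release for the next job
        }
    }

    // Stand-ins for the SchemaBuilder, TPCCExecuter, and TPCHExecuter components.
    private String buildSchema() { return "schema built"; }
    private String runTpcC()     { return "TPC-C executed"; }
    private String runTpcH()     { return "TPC-H executed"; }
}
```

Using call-return here (rather than the event bus) makes the mutual exclusion trivial to enforce: the distributor blocks until the invoked executer returns.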
6.3. Client

On the client side there is a single process, the Client process. It contains the UserInterface, CProjectManager, SystemConfigManager, TestCommander, DataRuleManager, QuerySetManager, CStatusManager, CCommunicationHandler, CStreamCommHandler, and CResourceMonitor. The main connector among these components is an event bus; all messages and service requests travel over the event bus except for the resource-monitoring function.

[Figure 11: C&C view of the client]

UserInterface: UserInterface handles user interactions and conveys user requests to the EventBus.

SystemConfigureManager: SystemConfigureManager stores, modifies, and loads system configuration information in SystemConfig storage.

DataRuleManager: DataRuleManager creates, reads, updates, and deletes (CRUD) data rules in DataRule storage. DataRuleManager has three ports: an announce port, a receive port, and a use port. Through the receive port it receives CRUD request events from the EventBus; through the announce port it announces CRUD completion events to the EventBus; and through the use port it stores and reads data rules in DataRule storage as the result of CRUD tasks.

TestCommander: TestCommander interprets the user's commands into packet units to request test execution. TestCommander is associated with three UI tasks: SchemaBuilding, QueryExecution, and DataGeneration. For example, in SchemaBuilding, once the user requests schema building, the user interface announces a SchemaBuilding event on the EventBus; TestCommander receives the event and requests the corresponding query set from QuerySetManager via the EventBus; QuerySetManager places the queries on the EventBus to deliver them to TestCommander; TestCommander then transforms the queries into packets and announces a 'send schema building queries' event on the EventBus.

QuerySetManager: QuerySetManager creates, reads, updates, and deletes queries and query sets in the client side's QuerySet storage, and exports/imports queries and query sets to/from the server side's QuerySet storage.

CLogRecorder: CLogRecorder records the client side's logs: all user interactions and all events.

CCommunicationHandler: CCommunicationHandler sends packets to the server and receives packets from the server. It transforms a packet received from the server into a corresponding event and data, and announces it with parameterized data.

CProjectManager: CProjectManager creates, reads, updates, and deletes project profiles in the client side's ProjectProfile storage. It also exports/imports queries and query sets to/from the server side's QuerySet storage.

CStatusManager: CStatusManager gets the status of DBAuditor2 from the server side's SStatusManager through socket communication: the status of test execution, the status of schema building, and the status of data generation.

ProjectProfile: ProjectProfile is an object that contains project profile information on the client side.

QuerySet: QuerySet is an object that contains query set information on the client side.

ResourceMonitor: ResourceMonitor is an agent that looks up system usage.

Table 5: Description of the C&C view of the client
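TestCommander's job of transforming a query set into packet units can be sketched as a simple framing function. The packet layout below (event name, query count, then length-prefixed queries) is an assumption made for illustration only; the report does not specify the real packet format.

```java
import java.util.List;

// Sketch of TestCommander-style packet framing. The wire format here is
// hypothetical: "<event>|<count>|<len>:<query>|<len>:<query>...". A receiver
// such as CCommunicationHandler could split it unambiguously because each
// query is prefixed with its length.
public class PacketSketch {
    public static String toPacket(String event, List<String> queries) {
        StringBuilder sb = new StringBuilder(event).append('|').append(queries.size());
        for (String q : queries) {
            sb.append('|').append(q.length()).append(':').append(q);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. the SchemaBuilding flow: queries fetched from QuerySetManager
        // are framed before being handed to the communication handler.
        System.out.println(toPacket("SchemaBuilding",
                List.of("CREATE TABLE t(a int)")));
    }
}
```

Length-prefixing is one common way to keep a stream parser simple; the actual system may use a different encoding entirely.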
7. Module Architectural View

The module view-type partitions the system into a unique, non-overlapping set of hierarchically decomposable implementation units (modules). The goal is to show how the modules are decomposed, as well as the dependencies between modules [11.4]. This view-type was selected because it helps the following roles [11.4]:

- The architect, who must define work assignments so as to minimize dependencies between modules and assign priorities (or sequences) to modules to control the remaining dependencies;
- The project manager, who must form teams and formulate project plans and schedules, knowing the individual priorities (or sequences) of these modules;
- Testers, who use the modules as their unit of work to create test cases and perform the tests;
- The configuration manager, who is in charge of maintaining current and past versions of the units in consistent, functional, packageable assemblies and of being able to produce a running version of the system;
- Developers, who are required to implement the modules;
- Maintainers, who are tasked with modifying the software modules.

The view-type is represented using the decomposition style, which shows the decomposition of the code into systems and subsystems as a top-down view of the system. This helps the development team understand their roles in code development and can be used as the basis of work assignments and completion measures [11.4].

The module view of the DBAuditor2 system consists of two top-level modules: the client and the server. The main elements identified in the top-level view are described below.
7.1. Representative Module View of the Server

[Figure 12: Top-level module view of the server]

ServerMain: This is the main class of the server, which invokes all control modules in the server and initializes them to be ready.

ServerControlPackage: This package contains all logical control modules on the server. It depends on the EventBusPackage because each control module extends classes in the EventBusPackage to implement event-bus communication. One control module, TestExecutor, initiates the ServerSystemMonitor module.

EventBusPackage: This package contains the Java classes that implement the event bus.

UsageMonitorPackage: This package includes the modules that monitor the server's resources.

Table 6: Description of the top-level module view of the server
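The "extend" dependency between the control package and the event-bus package can be illustrated with a small sketch: a control module inherits the receive machinery from an event-bus base class and only overrides its reaction. The class names below are illustrative stand-ins, not the actual DBAuditor2 code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical base class from the EventBusPackage: records delivery and
// delegates the reaction to the subclass.
abstract class EventBusParticipant {
    private final List<String> received = new ArrayList<>();

    // Called by the bus when an event is delivered.
    public final void receive(String event) {
        received.add(event);
        handle(event);
    }

    protected abstract void handle(String event);

    public List<String> receivedEvents() {
        return received;
    }
}

// Hypothetical control module from the ServerControlPackage: it extends the
// event-bus base class and reacts only to the events it cares about.
public class TestExecutorModule extends EventBusParticipant {
    public String lastAction = "";

    @Override
    protected void handle(String event) {
        if (event.equals("TPCHTest")) {
            lastAction = "running TPC-H";
        }
    }
}
```

Inheriting the bus plumbing keeps each control module focused on its own behavior, which matches the report's rationale for making the control package depend on the event-bus package.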
7.2. Detailed Module View: Server

The detailed module view of the server side is shown below. As we can see, TestExecutor, ProjectProfileSynchronizer, ServerStatusManager, DataLoader, ServerCommunicationManager, ServerLogManager, and DataGenerator are dependent on the ServerEventBusPackage; they extend classes in the ServerEventBusPackage to implement event-bus-based communication.

[Figure 13: Detailed module view of the server]

TestExecutor: This module is responsible for executing the benchmark process on the server; the process includes building the schema, generating data, and recording the execution results of the target DBMS being benchmarked.

ProjectProfileSynchronizer: The main purpose of ProjectProfileSynchronizer is to synchronize the server-side project profile data with the client-side project profile data; the project profile data will be maintained mainly on the client and will also be stored on the server for backup and sharing.

ServerStatusManager: This module maintains the status of the server. When the status changes, the module notifies the client.

DataLoader: It loads the massive data prepared for benchmarking a DBMS into the DBMS so that the system is ready to start the benchmark.

ServerCommunicationManager: This module is responsible for all communications between the server-side modules and the client side's.

ServerLogManager: This module logs the events generated by other modules on the server; the log data will be stored in an external file on the server.

DataGenerator: This module generates the data used to benchmark a DBMS; the data will be stored in an external file. It also invokes DBGen to generate the TPC-H benchmark test data.
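The TPC-C requirement that TestExecutor drive the target DBMS through multiple VirtualTerminal threads can be sketched as below. The real system would route each terminal's statements through its own JDBCCaller; here the JDBC work is stubbed out as a counter so the sketch stays self-contained and runnable without a database.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of spawning multiple VirtualTerminal threads (illustrative; the real
// terminals issue TPC-C transactions over per-terminal JDBC connections).
public class VirtualTerminalSketch {
    public static int runTerminals(int count) {
        AtomicInteger transactions = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            new Thread(() -> {
                // Stand-in for one terminal's JDBC transaction against the TargetDB.
                transactions.incrementAndGet();
                done.countDown();
            }).start();
        }
        try {
            done.await(); // wait for every terminal to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return transactions.get();
    }
}
```

One thread per terminal mirrors the document's design, where each VirtualTerminal owns its own JDBCCaller so the terminals' transactions are independent, as TPC-C requires.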
