This presentation summarizes the Team Augmented Reality's EOSP project for their client NextGen:PGH. It includes sections on project context, requirements management, development process using Scrum methodology, risk management, project planning and tracking, system architecture, quality management, configuration management, and training. The team's project is to develop an Android augmented reality application to showcase cultural sites around the CMU campus. They have adapted their process to incorporate elements such as risk management, quality assurance, and architecture reviews into their Scrum sprints.
6. Client Background
Actively organizes night markets and cultural festivals in Pittsburgh.
Wishes to leverage AR technology to spread culture and build community.
9. Project Goals
Short-term goals:
› Deploy Android app on Play Store (end of July)
› Must augment 2D images on at least 3 places of interest on the CMU campus
New goal: showcase augmented reality capabilities through a proof of concept:
● Render AR videos
● Interactable AR content
● Geo-location based AR content
10. Requirements Management
● Create user stories at the start of every sprint
● Get user stories prioritized and approved by the client after the sprint planning meeting
● Client sign-off on prototypes is needed before development can begin
11. Requirements Management
● Product backlog used for prioritization of user stories
○ AR tasks: high priority
○ Other mobile tasks: medium priority
○ Web portal tasks: low priority
● Initial release document guides us in managing requirements
○ Final aim: achieve campus tours
○ Work on components/features leading to the campus tour
13. Process: Project Scope Changes
Goals: showcase AR features; deploy to Play Store

# Action → Outcome
1. Changed process from OUP to Scrum → Regular client demos/feedback; higher flexibility to re-prioritize tasks. Used the old release plan to create the product backlog.
2. "Could Haves" priority increased → Render video, interact with AR
3. "Should Haves" priority decreased → Social media integration, web preview mode, mobile register/login
15. Process: Scrum+
• Tailored: incorporated risk management, quality assurance, and architecture reviews into the Scrum process
• Evolving: in every retrospective meeting we discussed what changes were required
• Example: in early sprints we realized that some members were not logging work in JIRA on time, which would have impacted project tracking, so we decided to log work before each standup meeting and made this a routine part of our process.
16. Planning Process
Entry:
The previous release is finished.
Tasks:
1. Discuss with the product owner and add new user stories to the product backlog, if needed
2. Pull user stories from the product backlog and add them to the sprint backlog in JIRA
3. Break the stories into tasks and do code design for the functionality (class diagram)
4. Review whether the detailed code design is consistent with our high-level architecture
5. Estimate the time for each task (Planning Poker)
. . . . . .
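Step 5's Planning Poker estimates have to be reconciled into a single number per task. As a toy illustration only (the team's actual rule, e.g. discussing outliers and re-voting to consensus, may well differ), a median of the played cards could be taken:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy illustration of settling a Planning Poker round on a single estimate.
// Not the team's documented procedure; purely a sketch.
public class PlanningPoker {

    /** Median of the played cards, in hours. */
    public static double median(List<Double> votes) {
        List<Double> sorted = new ArrayList<>(votes);
        Collections.sort(sorted);
        int n = sorted.size();
        return n % 2 == 1
                ? sorted.get(n / 2)
                : (sorted.get(n / 2 - 1) + sorted.get(n / 2)) / 2.0;
    }
}
```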
17. Planning Process (continued)
Verification:
1. All tasks with estimates are added to JIRA by the Scrum master
2. The Scrum master checks whether the planned time estimate exceeds the available development time; if so, lower-priority tasks are put back into the backlog and the product owner is informed of the change
Exit:
1. Tasks added to the sprint backlog
2. Sprint backlog tasks assigned to individuals
3. Related quality assurance tasks assigned to individuals
18. Development Process: Organization
3 dev teams:
• Web Server Team: responsible for managing REST API services and the database (MySQL, S3 bucket)
• Android App Team (client): responsible for the AR application
• Web Portal Team (client): responsible for artist and moderator functionalities
20. Development Process
• Process:
• At the start of the sprint, the Web Server team designs APIs and exposes them via Swagger
• The mobile and web portal teams edit/design a prototype in Proto.io and get client sign-off
• All 3 teams commence coding once the design/prototype is confirmed
22. Development Process
• Scrum master checklist:
• Assign quality assurance tasks (static analysis, inspection, unit testing, etc.)
• Analyze task dependencies to determine the order of certain tasks
• Coordinate workload between team members
23. Reflections
Switching to Scrum gave us flexibility with requirements. The client was very happy to see his continuous feedback taken into account.
25. Risk Management Process in Scrum
● In every daily standup meeting, the team communicates and discusses risks
● In every weekly client meeting and team meeting, identify risks based on the decisions made
● In every sprint planning meeting, discuss risk-mitigation work and assign the related tasks
● In every sprint retrospective meeting, discuss risk status, complete the detailed risk document, and discuss the status of the assigned tasks
● Maintain a risk tracking document to monitor the status of the risks; if any risk deadline is a week away, notify the team
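The "notify the team when a risk deadline is a week away" rule can be sketched as below; the Risk class and its fields are hypothetical, not the team's actual risk-document format:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the weekly risk-deadline check described above.
public class RiskTracker {

    public static class Risk {
        final int id;
        final LocalDate deadline;
        final boolean mitigated;

        public Risk(int id, LocalDate deadline, boolean mitigated) {
            this.id = id;
            this.deadline = deadline;
            this.mitigated = mitigated;
        }
    }

    /** Returns the ids of open risks whose deadline is 7 days away or less. */
    public static List<Integer> dueForNotification(List<Risk> risks, LocalDate today) {
        List<Integer> due = new ArrayList<>();
        for (Risk r : risks) {
            long daysLeft = ChronoUnit.DAYS.between(today, r.deadline);
            if (!r.mitigated && daysLeft >= 0 && daysLeft <= 7) {
                due.add(r.id);
            }
        }
        return due;
    }
}
```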
27. Risk Example
ID: 8
Condition: Target tracking is to be carried out mostly using markers.
Date Identified: 28-Mar-2016
Impact: 1
Probability: 2
Consequences:
1. Some locations may not allow placing markers directly on them.
2. The marker sizes required for tracking from a satisfactory distance may be impractical.
Mitigation:
1. Experiment to identify the relation between marker size and tracking distance
2. Communicate with university authorities to discuss the possibility of placing markers around campus
3. Explore geolocation as a strategy for augmentation at POIs
Deadline: 5-May-2016, revised to 25-July-2016
Status: Mitigated
Updates:
6-Apr-2016: Experimentation on marker sizes concluded
13-Apr-2016: Client meets with ex-President of CMU to introduce the app
5-July-2016: Explored geolocation-based augmentation and demonstrated it satisfactorily to the client; we still need to ensure that all final POIs on the CMU campus can be tracked using an appropriate marker
12-July-2016: Limited scope to four POIs on the CMU campus and ensured they can be tracked
29. Tracking
• Strategic plan: tracked using the milestone plan
• Tactical plan: tracked using the burndown chart, velocity chart, and time-distribution chart
• JIRA used by the Scrum master to assign tasks and manage the product and sprint backlogs
30. Tracking
• Every task's progress is logged in JIRA to create the burndown chart
• If we can't finish all tasks in one sprint, we push tasks into the next sprint based on priority
• We use a time-distribution chart to check whether we are spending too much time in meetings or too little time on quality assurance
34. Initial Time Distribution Plan
5h = meetings [2h iteration planning + 1h team + 1h client + 1h mentor]
7h = common coding time
1h = documentation
5h = individual role + QA
6h = individual coding & unit testing
Overall: 55% work with team, 45% work individually
36. Reflections
We initially plotted the burndown chart only upon 100% task completion:
• This gave the team a wrong impression about effort and status
• Instead, we started plotting on the basis of progress, which improved our tracking
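The difference this reflection describes can be made concrete. A sketch (the task fields are hypothetical, not the team's JIRA schema) of remaining work computed two ways, counting only 100%-complete tasks versus crediting partial progress:

```java
import java.util.List;

// Illustrative sketch of the two burndown policies discussed above.
// Not the team's actual JIRA tooling.
public class Burndown {

    public static class Task {
        final double estimateHours;  // planned effort
        final double progress;       // 0.0 .. 1.0, fraction done

        public Task(double estimateHours, double progress) {
            this.estimateHours = estimateHours;
            this.progress = progress;
        }
    }

    /** Remaining effort when only 100%-complete tasks are burned down. */
    public static double remainingByCompletion(List<Task> tasks) {
        double remaining = 0;
        for (Task t : tasks) {
            if (t.progress < 1.0) remaining += t.estimateHours;
        }
        return remaining;
    }

    /** Remaining effort when partial progress is credited. */
    public static double remainingByProgress(List<Task> tasks) {
        double remaining = 0;
        for (Task t : tasks) {
            remaining += t.estimateHours * (1.0 - t.progress);
        }
        return remaining;
    }
}
```

With two 8-hour tasks each half done, the completion-only chart still shows 16 hours remaining while the progress-based chart shows 8, which is the "wrong impression about effort" the slide mentions.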
38. Quality Attributes - Extensibility
Scenarios & Measures
• Ability to add new AR content, new POIs, new tours
• No disruption/downtime of any existing service and no defects introduced into the system
Design Decisions
• 3-tier architecture
• API-driven design
Tradeoffs
• Increased system complexity
• Increased latency (to fetch metadata)
Outcome
• Changes reflected in the system in real time (mobile app, web portal)
• Reusability promoted (APIs reused by clients)
• Allows us to scale in future (at the DB level) and use any DB storage solution and strategy
• App re-deployment not required
39. Quality Attributes - Modifiability
Scenarios & Measures
• AR component should be modifiable: ability to add/edit AR screens and functionality dynamically
• Should not require re-deployment
Design Decisions
• Use the Wikitude JavaScript SDK instead of the native SDK (Java)
• AR files stored in the data tier (S3 bucket)
Tradeoffs
• Increased code complexity (managing Android Java code as well as HTML/CSS/JS code)
• Difficult to test using automation tools
Outcome
• The JS SDK lets us use HTML, JS, and CSS for AR functionality instead of Java
• No need to re-compile/build or deploy code
• Changes reflected in the system in real time
40. Dynamic View - Web
[Diagram: stateless Vert.x processes promote scalability; a process monitor promotes availability]
41. Quality Attribute Verification
Aim: verify that both the software solution and the hardware can meet the QA requirements; evaluate a t2.small instance (the candidate machine for PROD).
Scenario 1: The time from initiating an action to receiving the response must not exceed 5 seconds with 100 concurrent users accessing the system. (Average load)
Scenario 2: Despite 100 concurrent requests, the time to download the largest graphics file (HD-quality image) must be kept under 5 seconds. (Peak load)
42. Quality Attribute Verification
Aim: verify that both the software solution and the hardware can meet the QA requirements; evaluate a t2.small instance (the candidate machine for PROD).
Scenario 3: 100 users simultaneously reading/uploading data to the server; response time should be less than 5 seconds. (Mixed load)
Scenario 4: 100 users simultaneously uploading 10 MB content; the system should not crash and should handle these requests. (Stress test)
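How a scenario verdict might be computed from recorded response times; this is an illustrative sketch, not the team's actual load-testing tooling:

```java
import java.util.Collections;
import java.util.List;

// Sketch of evaluating a QA scenario's 5-second latency requirement
// against a list of sampled response times.
public class LatencyCheck {

    /** Average latency in seconds over all samples. */
    public static double average(List<Double> latenciesSec) {
        double sum = 0;
        for (double l : latenciesSec) sum += l;
        return sum / latenciesSec.size();
    }

    /** Worst-case latency over all samples. */
    public static double max(List<Double> latenciesSec) {
        return Collections.max(latenciesSec);
    }

    /** Scenario passes if every sampled response stays under the threshold. */
    public static boolean passes(List<Double> latenciesSec, double thresholdSec) {
        return max(latenciesSec) < thresholdSec;
    }
}
```

Applied to the Scenario 3 table below (1.7 s, 3.2 s, 3.7 s), the 5-second requirement passes; applied to the Scenario 4 stress numbers it does not, which is expected since a stress test is about surviving, not meeting, the latency target.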
43. QA Scenario 3 - Mixed Load
Scenario 3: Mixed load - 100 users simultaneously reading/uploading data to the server; response time should be less than 5 seconds.
Action: POST /pois/{poiId}/ar/{arId}/uploadContent/{contentType}; GET requests for POI, AR, and Campus; 1 MB upload size.

#  Users  RPS  Latency (sec)
1. 100    55   1.7
2. 200    59   3.2
3. 300    78   3.7

Results: CPU utilization 9%, memory utilization 17% (at 300 users)
45. QA Scenario 4 - Stress Test
Scenario 4: Stress test - 100 users simultaneously uploading 10 MB content; the system should not crash and should handle these requests.
Action: 10 MB data uploaded continuously for 5 minutes
Results: CPU utilization 44%, memory utilization 60%. Helped us find a memory-related bug in the code (Vert.x memory settings differ from the JVM -Xmx setting).

#  Users  RPS  Latency (sec)  Comments
1. 25     1.1  15.8           30% CPU, 30% memory
2. 50     1.2  32             40% CPU, 32% memory
3. 75     1    52             30% CPU, 56% memory
4. 100    1    36             44% CPU, 60% memory
48. Static Analysis
• Integrated the CheckStyle plugin into our IDEs (IntelliJ for both Android and Java)
• Modified the default ruleset to exclude some rules, e.g. the 80-character line limit
• Used Stan4J to generate LCOM4 and cyclomatic complexity metrics
• In the end, used SonarQube to verify our metrics; mainly concerned with the maintainability index
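The ruleset change described above might look like the following in a checkstyle.xml; module placement varies across CheckStyle versions, so treat this as a sketch rather than the team's actual configuration:

```xml
<!-- Illustrative checkstyle.xml fragment: relax the default 80-character
     line limit instead of failing on long lines. Removing the LineLength
     module entirely would disable the check altogether. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="LineLength">
      <property name="max" value="120"/>
    </module>
  </module>
</module>
```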
50. Integration Testing - Android
• JUnit unit tests run only on the JVM, not on the Android VM, so it is difficult to test the Android activity lifecycle
• The close coupling of AR, UI, and location-aware functionality made unit testing difficult on Android; the Java and JS code are also heavily interdependent
• Used instrumentation testing, which runs on an Android phone on the Android VM and provides hooks to control the activity lifecycle
• Used Robotium on Android to write automated scripts that test the application after each new module
51. Integration Testing - Web Services
• For the web services, we extended JUnit to call the actual web services during integration testing (as opposed to mocking web requests in unit testing)

Planned coverage: line 100%, branch 80%
Actual coverage: line 83%, branch 35%
Test cases: 61
52. System Testing
• We used all-pairs combinatorial testing to test our mobile application in subsequent phases
• Used the classification tree approach to generate test cases
53. System Testing
Characteristics considered while designing the classification tree:
• Geofence
• Wi-Fi/network conditions
• Lighting conditions
• AR content type
• Phone orientation
• User's walking speed
...
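A toy greedy all-pairs generator over characteristics like these; real pairwise and classification-tree tools are more sophisticated, and the factor values used here are made up for illustration:

```java
import java.util.*;

// Greedy all-pairs (pairwise) test-case selection: pick cartesian-product
// rows until every pair of factor values is covered at least once.
public class Pairwise {

    /** Each test case is one value index per factor. */
    public static List<int[]> generate(String[][] factors) {
        List<int[]> all = cartesian(factors);
        Set<String> uncovered = allPairs(factors);
        List<int[]> suite = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            int[] best = null;
            int bestCount = -1;
            for (int[] candidate : all) {           // pick the row covering
                int c = 0;                          // the most uncovered pairs
                for (String p : pairsOf(candidate)) if (uncovered.contains(p)) c++;
                if (c > bestCount) { bestCount = c; best = candidate; }
            }
            suite.add(best);
            uncovered.removeAll(pairsOf(best));
        }
        return suite;
    }

    /** Full cartesian product of factor value indices. */
    static List<int[]> cartesian(String[][] factors) {
        List<int[]> rows = new ArrayList<>();
        rows.add(new int[factors.length]);
        for (int f = 0; f < factors.length; f++) {
            List<int[]> next = new ArrayList<>();
            for (int[] row : rows) {
                for (int v = 0; v < factors[f].length; v++) {
                    int[] copy = row.clone();
                    copy[f] = v;
                    next.add(copy);
                }
            }
            rows = next;
        }
        return rows;
    }

    /** All value pairs that must be covered, keyed "factor:value,factor:value". */
    public static Set<String> allPairs(String[][] factors) {
        Set<String> pairs = new HashSet<>();
        for (int a = 0; a < factors.length; a++)
            for (int b = a + 1; b < factors.length; b++)
                for (int i = 0; i < factors[a].length; i++)
                    for (int j = 0; j < factors[b].length; j++)
                        pairs.add(a + ":" + i + "," + b + ":" + j);
        return pairs;
    }

    /** The pairs covered by one test case. */
    public static Set<String> pairsOf(int[] row) {
        Set<String> pairs = new HashSet<>();
        for (int a = 0; a < row.length; a++)
            for (int b = a + 1; b < row.length; b++)
                pairs.add(a + ":" + row[a] + "," + b + ":" + row[b]);
        return pairs;
    }
}
```

For three two-valued factors (e.g. geofence in/out, Wi-Fi/cellular, image/video content) the full product is 8 cases, while pairwise coverage needs far fewer, which is the point of the technique.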
54. Reflections
• As UI and functionality become more intertwined, it gets harder to test separate units. In such cases, traditional methods and metrics become hard to follow; we had a difficult time differentiating between unit tests and integration tests and had to define our own terminology.
• We initially wanted to automate all our tests, but later realized that for location and AR use cases, manual testing may be a more thorough alternative.
56. Bitbucket repository: code version control (2 repos: Android code; web server & web portal code)
Jenkins continuous integration: builds changes & deploys (archives successful builds, auto-polls every minute)
AWS dev environment: the process is restarted for the latest changes to take effect
Release branch: created for configuration management & deployments

Flow: Commit → Auto-Pull → Build → Deploy → Process Restarted
59. Training
• Trainings for Android, Wikitude SDK (Android), web development (Bootstrap, jQuery), Vert.x web server & Bitbucket
• ETVX process followed:
• Entry: environment setup/software installation on local machines
• Task: training tasks mapped to learning outcomes
• Validation: demonstration of training tasks
• Exit: check in training source code
60. Training
• Training designed to include common functionality used in the project (derived from the list of requirements to be implemented)
• Different sets of trainings for developers with beginner and intermediate experience
• Different trainings for different developers (based on their assigned system)
61. Reflection
• A structured process helped everyone reach a common understanding & establish terminology
• Used common working hours for training, which helped beginners learn from more experienced developers and saved us time on configuration and training-task issues
• Local system setup let us start development immediately after training
• Individual custom trainings are still needed for personal learning
64. Top 3 Risks
Risk 3: A number of successful AR SDKs, such as Metaio and ARPA, have been discontinued with no prior warning to developers, so there is some concern that the same may happen to the SDK picked by the team.
Risk 18: Marker tracking is dependent on lighting conditions as well as distance.
Risk 16: Communication is sometimes delayed when an issue is encountered while working on a task.
65. Risk Example - 2
ID: 17
Condition: Client will be unavailable from the last week of July until the end of the project
Date Identified: 28-Jun-2016
Impact: 1
Probability: 4
Consequence: Since the client is non-technical and will be out of town, it will be difficult to run the handover process and receive final sign-off.
Mitigation:
1. Initiate the handover process early
2. Start user acceptance testing early
3. Devise a handover plan, hand over while the client is in town, and hand over the easier parts through phone/email communication
Deadline: 20-July-2016
Status: Mitigated
66. Backup - Planning Process (continued)
6. Make user stories for this release
7. Make a prototype in Proto.io for these stories
8. Have the client approve the prototype
9. If not approved, change the prototype accordingly and get it approved
10. Break the stories into tasks and do code design for the functions (class diagram)
11. Review whether the detailed code design is consistent with our high-level architecture
12. Estimate the time for each task
13. Put the tasks into the product backlog using JIRA
67. API Dependency Issue - Backup
How to coordinate effectively between the REST API team and the client teams (mobile and web)?
• Communicate and build a common understanding of the data model
• Common understanding of the JSON data returned
• Common understanding of error handling
• Common understanding of the HTTP methods available
• Initially ran into delays due to these dependencies
68. API Dependency
Solution: use Swagger
• Helps design, test, and document APIs
• Web UI to explore APIs, read their documentation, and run them
• Documentation can also be used by future teams to understand the existing APIs
http://placmakarapi.cf:8081/
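A Swagger 2.0-style fragment for one of the documented endpoints (the path and its "size" query parameter appear in the inspection notes elsewhere in this deck; everything else below is illustrative, not the team's actual contract):

```yaml
# Illustrative Swagger 2.0 fragment, not the actual API definition.
swagger: "2.0"
info:
  title: Placemakar API
  version: "1.0"
basePath: /
paths:
  /pois/{poiId}/ar:
    get:
      summary: List AR content for a point of interest
      parameters:
        - name: poiId
          in: path
          required: true
          type: string
        - name: size
          in: query
          required: false
          type: integer
          description: Maximum number of AR items to return
      responses:
        200:
          description: JSON array of AR content metadata
```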
70. Process Backup - Scrum+ Example: Stand-up Meeting
15-minute daily stand-up
Entry:
All 4 team members are attending the meeting.
Tasks:
Discuss daily progress, including:
1. What each team member did yesterday and what they are working on today
2. Problems and difficulties
3. Concerns (risks)
4. Log work in JIRA
71. Process Backup - Scrum+ Example: Stand-up Meeting
Verification:
1. All the steps above are followed
2. JIRA is updated
Exit:
1. Finish the meeting
2. Scrum master updates the burndown chart
72. Process Backup - Retrospective Process
Entry:
The last sprint and the sprint review (sprint demo) meeting are finished.
Tasks:
A. Discuss whether the decisions from the last retrospective have been met
B. Discuss what the team should start doing, stop doing, and continue doing
C. Vote on the matters
Extra tasks:
• Calculate velocity
• Tracking: collect individual feedback on quality assurance tasks as metrics
• Collect feedback on the above discussion as metrics
• Update JIRA
Verification:
All the steps above are followed.
Exit:
The list of decisions is documented.
83. Architecture Backup - QA Scenario 2 - Peak Load
Scenario 2: Despite 100 concurrent requests, the time to download the largest graphics file (HD-quality image) must be kept under 5 seconds.
Action: download a 10 MB payload from S3

#  Users  RPS   Latency (sec)
1. 100    25    3.7
2. 200    20    7.8
3. 300    19.3  10.7
84. Scenario 2: Despite 100 concurrent requests, the time to download the largest graphics file (HD-quality image) must be kept under 5 seconds.
Action: 3 MB payload

#  Users  RPS  Latency (sec)
1. 100    70   1.2
2. 200    82   2.3
3. 300    80   3.5
85. Final Infrastructure Cost
Total cost per month = $28.04
Finalized the t2.small instance for deployment, based on QA verification.

1. AWS EC2 instance (t2.small, qty 1): $19.04 - for hosting the web server, application servers, and database server (and the build server for continuous integration, if required)
2. Elastic Block Storage (SSD storage, qty 1): $0.50 - for permanent data storage (software installations, source code, database data)
3. S3 bucket (qty 1): $1.50 - for storage of multimedia content
4. CloudWatch/Auto Scaling (qty 1): $6.50 - for real-time monitoring, alerting, availability
5. Route 53 (qty 1): $0.50 - for DNS mapping
93. Review/Inspect
• Only for critical modules
• An inspection checklist provides inspectors with guidelines
• An inspection document keeps track of defects encountered during inspection
94. Inspection Sample

1. Module: Web Server/app_server, File: org.ngpgh.ar.webserver.model.PoiModel, Type: Variable name, Line: 4
   Defect: variable name poidID is incorrect based on the API contract. Solution: rename to poiID.
2. Module: Web Server/app_server, File: org.ngpgh.ar.webserver.controller.MainRouter, Type: API implementation, Line: 260
   Defect: GET /pois/{poiID}/ar (function getArListByPoiID) does not implement the query parameter "size".
3. Module: Web Server/app_server, File: org.ngpgh.ar.webserver.dao.ArDao, Type: Coding style, Line: 82
   Defect: an ArrayList is created and then converted to an array, which is returned; extra code is written for this operation. Solution: return the ArrayList instead and remove the array conversion code.
4. Module: Web Server/app_server, File: org.ngpgh.ar.webserver.controller.MainRouter, Type: Refactor, Line: 292
   Defect: the GET /pois/:poiId/ar/retrieveByField implementation duplicates code from GET /pois/:poiId/ar/. Solution: remove the duplication to improve readability.
5. Module: Web Server/app_server, File: org.ngpgh.ar.webserver.model.UserModel, Type: API implementation, Line: 6
   Defect: variable userID is missing from the POJO, per the API contract. Solution: add field userID to the POJO and to the corresponding DAO.
6. Module: Web Server/web_portal, File: login.html, Type: Missing field
   Defect: the zip code field is missing from the registration page, as requested by the client. Solution: add this field to the HTML page as well as the database.
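Inspection item 3 (the ArrayList-to-array conversion in ArDao) and its fix could look roughly like this; class and method names are hypothetical, not the actual ArDao code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of inspection defect #3 and the recommended fix.
public class ArDaoSketch {

    // Before: builds a list, then copies it into an array just to return it.
    public static String[] getArIdsBefore(List<String> rows) {
        ArrayList<String> ids = new ArrayList<>(rows);
        String[] result = new String[ids.size()];
        return ids.toArray(result);   // extra conversion flagged in inspection
    }

    // After: return the list directly and drop the conversion code.
    public static List<String> getArIdsAfter(List<String> rows) {
        return new ArrayList<>(rows);
    }
}
```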
95. User Acceptance Testing
• Carried out in every demo meeting with the client
• We ask the client to use the feature implemented for each user story
• His feedback is recorded, and any recommended changes are added as enhancement tasks for the next sprint
96. User Story with Acceptance Criteria Example

User Story 11: As a CMU admin, I want to render interactable AR content so that students can gain more information about campus locations/events.
UAT-11a: The user must be able to render at least one button over the CMU map at Forbes and Morewood. The user must be able to click this button, which invokes a side drawer displaying information on the specific location corresponding to the button clicked.

User Story 12: As a mobile app user, I want to see all the nearby POIs in a map view so that I can easily visualize them.
UAT-12a: The user must be able to see a Google Map with exactly 5 markers corresponding to the 5 POIs. On clicking a marker on the map, an info window must be displayed with the title and description of the POI.
UAT-12b: The user must be able to click the info window to be directed to the page displaying the details of the selected POI.
97. Training Sample

Name: Android Training 1 - Time estimate: 8 hours
Entry:
1. Download Android Studio (https://developer.android.com/studio/index.html)
2. Read the "DevPlanEstimationNextSem2.0" document to better understand the various systems and modules to be developed
Tasks:
1. (For beginners) Create a hello-world Android app (2 hours). Outcome: learn usage of basic UI components, activity management, intent calls. https://developer.android.com/training/basics/firstapp/index.html
   OR (for intermediate users) Interactive Story app (3 hours). Outcome: an advanced app demonstrating the MVC pattern, more control widgets, and activity management. https://teamtreehouse.com/library/build-an-interactive-story-app
2. (Advanced) Weather app (4 hours). Outcome: learn integration with a 3rd-party API and JSON parsing. https://teamtreehouse.com/library/build-a-weather-app
3. GPS location updates (1 hour): https://developer.android.com/training/location/receive-location-updates.html
4. Storage (2 hours): https://developer.android.com/training/basics/data-storage/databases.html
5. Inter-app interaction (1 hour): https://developer.android.com/training/basics/intents/sending.html
Validation:
1. The first 2 tasks are mandatory for all developers. Validation includes demonstrating the Android app deployed on the developer's smartphone/emulator.
2. Depending on which developer is responsible for which module, the rest of the developers will need to finish their respective trainings and demonstrate them to the rest of the team to show their understanding of the concepts.
Exit:
1. All developers finish their respective trainings.
2. At the end of training, developers push their respective code to the Git repository.
98. Training Backup - Design Example
Android training objective: a prioritized list of items, from the most to the least important concepts we are interested in learning:
1. UI components and event handling (list view, map view, tabs, control widgets [buttons, drop-downs, spinners, etc.] derived from prototypes)
2. Activity management: activities, services, intent calls
3. Integration with external APIs
4. Sensor interaction (GPS)
5. Data persistence (local storage and MySQL storage)
6. Interaction with other apps (invoking other mobile apps)
We still used our old release plan to make sure we stuck to the main agenda (campus tours), but allowed re-prioritization of tasks based on user feedback.
Scalability was also covered, as we don't want any disruption when a lot of new content is added.
Mostly concerned with the maintainability index; made sure Javadoc comments are present and hard-coded magic numbers are avoided.
Managed 3 different source codebases: Android and 2 web components.
In Java, did manual testing instead of automated testing.
We defined a unit test as testing one feature/characteristic, and an integration test as testing multiple together.
The QA manager periodically generated reports using the Stan4J tool.
User acceptance testing: we demonstrated our product at the end of each sprint and went through the acceptance criteria of the user stories.
Released code in a separate branch.
SonarQube results: rating A, technical debt ratio 4%, 5k LOC, 1200 comment lines.
After researching, we found that JUnit only tests Java/JVM code, not the app's location-aware and AR components.
The Jenkins archive helped us in one of our demos: a developer didn't follow the QA process and pushed changes for the demo, and when the new deployment broke, we could quickly roll back to the previous version and proceed with the demo.
Add technical risk: AR tracker detection from a particular distance/angle.
The handover plan included user manuals (Android app, web portal, AWS portal), a technical repo (containing architecture docs and source code), and web API documentation.