2. Who am I?
dalvarez@kabel.es
http://www.linkedin.com/pub/david-alvarez-palomo/4/462/906
3. Who are we?
Practice areas:
• Cloud (Windows Azure)
• Collaboration and Content Management
• ALM
• Application Integration (EAI)
• Business Intelligence
• Mobility
• Messaging and Communications
• Security
• Optimization of IT Infrastructure
13. In Production, nobody hears you scream!
(Diagram: Working software → Operate → Monitor → Value realized)
• Misunderstood requirements
• Churn in requirements/priorities
• Quality is an afterthought
• No traceability in production
• Loss of focus
• Unmet user expectations
16. Operations Integration (Monitor)
• IntelliTrace in Production
• SCOM 2012 – System Center Operations Manager
17. IntelliTrace in Production
With production errors, root causes are difficult to identify, debug, and resolve.
IntelliTrace in Production is easy to run and collects critical IntelliTrace logs with minimal impact on server performance. Developers, who are already familiar with using this data in test environments, now have the data to speed root-cause analysis of production bugs and rapidly identify the needed code fix.
Actionable diagnostics: IntelliTrace in Production speeds up debugging and shortens code-fix times.
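As a concrete illustration, the standalone IntelliTrace collector is driven from PowerShell. A minimal sketch, assuming the collector has been unpacked to C:\IntelliTrace and the application runs in an IIS application pool named DefaultAppPool (the paths and pool name are placeholders for illustration):

    # Load the IntelliTrace cmdlets shipped with the standalone collector
    Import-Module "C:\IntelliTrace\Microsoft.VisualStudio.IntelliTrace.PowerShell.dll"

    # Start collecting for the app pool, using the default ASP.NET
    # collection plan (events only, designed for low overhead)
    Start-IntelliTraceCollection "DefaultAppPool" `
        "C:\IntelliTrace\collection_plan.ASP.NET.default.xml" `
        "C:\IntelliTraceLogs"

    # ...reproduce the problem in production...

    # Stop collecting; the .iTrace file written to C:\IntelliTraceLogs
    # can be opened in Visual Studio to step through the recorded events
    Stop-IntelliTraceCollection "DefaultAppPool"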
20. Benefits
Solutions:
• One tool for monitoring apps and infrastructure
• No need to add code
• Collect performance information
• Collect high-value information for Dev and Ops
21. Features
• Monitoring: server side and client side
• No code changes required
• Low impact on performance
• Monitoring in real time (see the sketch after this list)
• Collect data from all layers; get down to the root cause
• KPIs
• Reports
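Because collection runs without code changes, a live trace can be snapshotted while the application keeps serving traffic. A minimal sketch, assuming a collection is already running for the DefaultAppPool pool from the earlier example (cmdlet names are those of the VS 2012 standalone collector; verify them against your collector version):

    # Check what is currently being collected
    Get-IntelliTraceCollectionStatus

    # Save a copy of the active .iTrace log without stopping collection,
    # so the running application is not interrupted
    Checkpoint-IntelliTraceCollection "DefaultAppPool"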
22. What information is collected?
Events:
• Application errors
• Performance
• Operations info
Application errors:
• Application failures
• Connectivity
• Security
• System failures
23. What can be monitored?
• ASP.NET Web applications
• ASP.NET Web Services
• ASP.NET MVC
• WCF Services
• Windows Services
• SharePoint
• IIS 8
• Non-Microsoft environments (Unix and Linux)
360° monitoring
25. The Completion Phase
The aim is to learn from the experience gained during this test and to preserve testware for reuse in a future test. The point is that when changes and the associated maintenance tests come along, the testware only requires adjustment, so it is not necessary to design a completely new test. Throughout the test process, efforts are made to keep the test cases consistent with the test basis and the developed system. If necessary, the selected test cases should be updated.
http://www.tmap.net/en/tmap/4-essentials/structured-test-process/acceptance-and-system/completion-phase
Editor's Notes
We mentioned earlier that requirements don't always reflect the customer's intent, and this can result in delivered code that fails to meet user expectations. This is the "Hmm. That's EXACTLY what we asked for, but not at all what we wanted" situation. In addition, users may have been very clear in their desires, but they failed to think of other, possibly non-functional, requirements that impact their experience with the delivered solution.
In many teams, testers have a great deal of domain knowledge. In fact, it's common for manual testers to be former users of earlier versions of the product! With exploratory testing, manual testers can conduct ad-hoc exploratory tests to discover usability, consistency, and other problems. They can then create actionable bugs, including IntelliTrace files, for developers to identify the needed fixes. They can even generate manual and automated test cases from ad-hoc exploratory tests, ensuring that later regressions aren't a problem.
We've already talked about feedback, but it's important to revisit it, since the feedback manager can be used to get the early feedback from users that's so critical for a successful delivery.