2. Oracle BI specialist at Morrisons plc
Big IT development programme at its early stages, implementing OBIEE, OBIA and ORDM, all on Oracle 11g & HP-UX
3. A Performance Tuning Methodology
OBIEE techie stuff
Learn from my mistakes!
4. Response times
Report
ETL batch
OLTP transaction
System impact
Resource usage
Scalability
5. Check that your system performs
Are the users going to be happy?
Baseline
How fast is fast?
▪ How slow is slow?
Validate system design
Do it right, first time
Capacity planning
6. It’s never too late
“You’ll never catch all your problems in pre-production
testing. That’s why you need a reliable
and efficient method for solving the problems that
leak through your pre-production testing
processes.”
7. Because it makes you better at your job
“At the very least, your performance test plan will
make you a more competent diagnostician (and
clearer thinker) when it comes time to fix the
performance problems that will inevitably occur
during production operation.”
8. Quantifying response times
System impact
User expectations
Problem diagnosis
Design validation
9. Define
Measure
Analyse
Review
Implement
Timebox!
Evaluate design / config options
Do it right
Don’t “fudge it”
Do more testing
Redefine test
Do more testing
10. Define – what are you going to test
• Aim of the test
• Scope
• Assumptions
• Specifics
• Data, environment, etc
Build – how are you going to test it
OBIEE specific
• E.g.:
• Check that the system performs
• Baseline performance
• Prove system capacity
• Validate system design
12. [Diagram: the request flow through the stack. A report/dashboard request goes to Presentation Services, which sends Logical SQL to the BI Server; the BI Server issues Physical SQL statement(s) to the database, combines the returned data set(s) into a single data set, and Presentation Services renders the report. Excludes the app/web server and Presentation Services plug-in.]
13. [Diagram: where to drive the stack under test. A user with a stopwatch, or a load testing tool (e.g. LoadRunner, OATS), exercises Presentation Services; nqcmd submits Logical SQL directly to the BI Server; a SQL client runs Physical SQL directly against the database.]
14. [Diagram: a test script drives nqcmd, which submits Logical SQL to the BI Server and receives the data back; timings are captured via Usage Tracking or NQQuery.log.]
15. Master test script
[Diagram: a master test script launches several test script/nqcmd pairs in parallel, each submitting Logical SQL to the BI Server.]
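The master test script pattern on this slide can be sketched in a few lines: fan several test commands out in parallel and time each one. This is a minimal sketch, not the presenter's actual harness; the nqcmd invocation shown in the comment is illustrative (data source name and credentials are assumptions), and a harmless stand-in command is used so the harness itself can be run anywhere.

```python
# Minimal sketch of a master test script: run several test commands in
# parallel and record each one's response time and exit status.
import subprocess
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(cmd):
    """Run one test command, returning (command, elapsed seconds, return code)."""
    start = time.perf_counter()
    rc = subprocess.run(cmd, capture_output=True).returncode
    return cmd, time.perf_counter() - start, rc

def run_batch(commands, workers=4):
    """Fan the test commands out in parallel, as the master test script does
    when it launches several test-script/nqcmd pairs at once."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_test, commands))

# A real command would look something like (details are illustrative):
#   ["nqcmd", "-d", "AnalyticsWeb", "-u", "user", "-p", "pass", "-s", "report1.lsql"]
# Stand-in commands so the harness can be demonstrated without a BI Server:
demo = [[sys.executable, "-c", "import time; time.sleep(0.1)"] for _ in range(3)]

results = run_batch(demo)
for cmd, secs, rc in results:
    print(f"rc={rc} elapsed={secs:.3f}s")
```

In a real run, each worker's client-side elapsed time should be reconciled against what Usage Tracking or NQQuery.log reports on the server side.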
16. Simulates user interaction – HTTP traffic
Powerful, but can be difficult to set up
Ajax complicates things
Do you really need to use it?
Tools
Fiddler2
FireBug
Reference:
My Oracle Support – Doc ID 496417.1
http://rnm1978.wordpress.com/category/loadrunner
17. Be very clear what the aim of your test is
You probably need to define multiple tests
Different points on the OBIEE stack to interface with
Pick the most appropriate one
Write everything down!
21. Lots of different ways to measure
Build measurement into your test plan
Automate where possible
▪ Easier
▪ Less error
35. You won’t get your testing right first time
There’s no shame in that
Don’t cook the books
▪ Better to redefine your test than invalidate its results
Stick to the methodology
Don’t move the goalposts
Very tempting to pick off the “low-hanging fruit”
▪ If you do, make sure you don’t get indigestion…
Timebox
Test your implementation!
36. Define
Measure
Analyse
Review
Implement
Evaluate design / config options
Do it right
Don’t “fudge it”
Do more testing
Redefine test
Do more testing
Editor's Notes
It’s not too late to put in place a solid performance test methodology around an existing system.
If you set up your performance tests now you will have a set of baselines and a full picture of how your system behaves normally
Then when it breaks or someone complains you’re already set to deal with it
If you don’t, then when you do have problems you have to start from scratch, finding your way through the process.
Which is better – at your leisure, or with the proverbial gun of unhappy users held to your head?
Performance testing isn’t optional. It’s mandatory. It’s just up to you when you do it.
Even if you’re running Exadata or similar and are so confident that you don’t have performance problems and never will, how are you going to capacity plan? Do you know how your system currently behaves in terms of CPU, IO, etc.? How many more users can you run?
Which is easier, simulate and calculate up front, or wait until it starts creaking?
Performance Testing requires an extremely thorough understanding of the system.
If you do it properly there is no doubt you will come out of it better equipped to support and develop the system further
Any questions so far?
This is the methodology we’ve developed
It’s a high level view – subsequent slides will give detail
Any questions?
Once you’ve defined your test you need to execute it – and measure the results
Here are the ways you should consider measuring the different parts of the stack
Once you’ve decided how much of the stack you’re going to test, you need to set about designing the test and how you’re going to capture your metrics
Performance tests are all about collecting metrics that allow you to make statistically valid and quantifiable conclusions about your system.
The primary metric of interest is time. What’s the end-to-end response time, from request to answer, and where’s the time in between spent?
If a user complains that a report takes five minutes to run but the DBA says they don’t see the query hit the database for the first two, and then it executes in 30 seconds, what’s happened to the other two and a half minutes?
Other metrics of interest are the environmental statistics like CPU, memory, and IO, and diagnostic statistics such as the execution plan on the database and lower-level information like buffer gets etc.
So – from the top down:
[click]
Web server, eg. Apache log – first log of the user request coming in
App server, eg. OAS
Presentation Services plug-in, Analytics (this is where you see the error logs when you get a 500 Internal Server Error from analytics)
[click]
sawserver.log - by default this doesn’t record that much, but by changing the logconfig.xml file you can enable extremely detailed logging.
This is useful for diagnosing lots of problems, but also if you’re looking to do an accurate profile of where the time in an Answers request is spent. You can see when it receives the user request, when it sends on the logical SQL to the BI Server, and when it receives the data back
See http://rnm1978.wordpress.com/category/log/ for details
[click]
BI Server – spoilt for choice here. For a production environment I strongly recommend enabling Usage Tracking. For performance work you should also be using NQQuery.log where the variable levels of logging show you logical and physical SQL, BI Server execution plans, response times for each database query run, etc.
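At the appropriate query log level the BI Server writes per-query summary lines to NQQuery.log that can be scraped for response times. A rough sketch, assuming the “Logical Query Summary Stats” line format; the log fragment is illustrative, not a verbatim capture, so verify against your own version’s log before relying on it:

```python
# Sketch of pulling per-query response times out of NQQuery.log.
# The line format matched here is an assumption; check your OBIEE version.
import re

SUMMARY = re.compile(
    r"Logical Query Summary Stats: Elapsed time ([\d.]+), "
    r"Response time ([\d.]+), Compilation time ([\d.]+)"
)

def response_times(log_text):
    """Return the Response time figure (seconds) for each query in the log."""
    return [float(m.group(2)) for m in SUMMARY.finditer(log_text)]

# Illustrative log fragment:
sample = """
-------------------- Logical Query Summary Stats: Elapsed time 12, Response time 11, Compilation time 1 (seconds)
-------------------- Logical Query Summary Stats: Elapsed time 3, Response time 2, Compilation time 0 (seconds)
"""
print(response_times(sample))
```

For production work Usage Tracking is the better source, since it lands the same figures in a queryable table rather than a flat file.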
[click]
As well as these two features there is the Systems Management functionality, which exposes some very detailed counters through Windows PerfMon or the BI Management Pack for OEM. You can also use JMX to access the data through clients like JConsole or jManage
[click]
For the database all the standard monitoring practices apply, depending on what your database is. For Oracle you should be using OEM, ASH, SQL Monitor, etc.
[click]
And finally, for getting a complete picture of the stack’s performance -- Speak to your users! Maybe not as empirically valid as the other components, but just as important.
Useful when a problem is suspected on the database; the only place that individual physical SQL query response times are kept
Database query times & row counts
EM is good
For pure testing – need to capture data :
SQL Tuning Sets
Good for capturing behaviour of a set of SQL over time – longest running, most IO, etc
Less good for focussing on individual queries because stats are aggregated
SQL Monitor – export from EM (next slide)
+++++++++++++++++++++++++++++++++++
SQL Server
DMVs, SQL Profiler, PerfMon counters, etc
Other RDBMS – pass
Got to mention this – from EM you can export a standalone HTML file that renders like this
brilliant
Lots of different ways to measure
decide what metrics are relevant to your testing
Load testing – system metrics very important
Perf testing an individual report – maybe just response time
Plan your measurements as part of the test
Trigger collection scripts automagically
Include manual collection in test instructions
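Triggering collection scripts automatically can be as simple as a small script the test harness starts alongside the run. A sketch, assuming a Unix host; it samples the load average, but substitute whatever host metrics matter to your test:

```python
# Sketch of an automated metrics-collection script: sample the host load
# average at a fixed interval for the duration of a test and write
# timestamped CSV rows. Unix-only (os.getloadavg); the short duration here
# is just for demonstration.
import csv
import io
import os
import time

def collect_load(duration_s, interval_s, out):
    """Write (timestamp, 1-minute load average) rows to `out` until duration_s elapses."""
    writer = csv.writer(out)
    writer.writerow(["timestamp", "load_1m"])
    samples = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        writer.writerow([f"{time.time():.0f}", f"{os.getloadavg()[0]:.2f}"])
        samples += 1
        time.sleep(interval_s)
    return samples

buf = io.StringIO()  # in a real test this would be a file named after the test label
n = collect_load(duration_s=0.3, interval_s=0.1, out=buf)
print(f"{n} samples collected")
```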
Analysis step is:
Collate data
Store in a sensible way
- Raw data
- Label your tests (better to use a non-meaningful label)
Analyse it
- Visualisation
- Analysis will depend on aim of test
- e.g. load testing – identify bottlenecks
[click] raw data,
[click] compared to a previous baseline – illustrate variance
[click] host metrics - IO graph
[click] response time, over time
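The comparison against a previous baseline can be quantified rather than eyeballed. A sketch with made-up figures, using non-meaningful test labels as suggested above:

```python
# Sketch of the analyse step: summarise labelled raw response times for a
# current run and a previous baseline, and report the variance between them.
# All figures are invented for illustration.
from statistics import mean, quantiles

def summarise(times):
    """Mean and 90th percentile of a set of response times (seconds)."""
    return {"mean": mean(times), "p90": quantiles(times, n=10)[-1]}

baseline = {"TEST_A": [2.1, 2.3, 2.0, 2.4, 2.2], "TEST_B": [5.0, 5.5, 4.8, 5.2, 5.1]}
current  = {"TEST_A": [2.2, 2.5, 2.1, 2.6, 2.3], "TEST_B": [7.9, 8.4, 7.7, 8.1, 8.0]}

deltas = {}
for label in baseline:
    b, c = summarise(baseline[label]), summarise(current[label])
    deltas[label] = (c["mean"] - b["mean"]) / b["mean"] * 100
    print(f"{label}: mean {b['mean']:.2f}s -> {c['mean']:.2f}s ({deltas[label]:+.0f}%)")
```

Plotting the raw and baseline distributions side by side (the visualisation step) usually tells you more than the summary figures alone.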
Testing – you understand more about the system as you go, so you’ll probably want to redefine the test – that’s part of the process!
Performance Testing is an iterative process. I can’t stress this enough.
You will not get it right the first time you do it
Whatever you do, you’ll probably miss something or invalidate your tests.
Remember that an iterative approach is entirely valid, don’t feel you “got it wrong” and have to fudge the results to cover your mistake. Better to abandon a test and learn from the mistake than produce a “perfect” test that’s complete rubbish.
Stick to method
A benefit is that it also enforces justification for changes, avoiding “we’ve always done it that way”
don’t move the goalposts.
You might find some horrible queries
as you dig into them you notice some obvious “quick wins”
If you rush the fix in without completing your first round of testing, you risk invalidating it
BE METHODICAL!!!!
Timebox the execute/measure/analyse iterations -don’t get lost in diminishing returns
It’s a good idea to timebox your work, and have regular review points
Test your implementation!
A parallel config change wasn’t tested properly after being implemented in the test environment, and nearly got to prod without anyone realising
Don’t get so bogged down in the detail that you miss the wood for the trees
You can end up focussing on perfecting one element of the system at the expense of all the others.
This presentation has shown you how to run big workloads against your OBIEE system
But, resist the temptation to dash off and see what happens when you run a thousand users against your system at once.
It’ll be fun, but ultimately a waste of time.
You have to define what you’re going to do.
You need to define what the ultimate aim is.
Are you proving a system performs to specific user requirements? In that case your test definition is almost written for you; you just have to fill in the gaps
If you’re building a performance test for best-practice and all the good reasons I spoke about before then you need to think carefully about what you’ll test.
What’s a representative sample of the system’s workload?
For example: -
Analyse existing usage, pick the most frequently run reports
SPEAK TO YOUR USERS! Which reports do they care about?
Be wary of only analysing the reports that users complain about though – you want to be collecting lots and lots of good metrics. What happens when you fix the slow reports? The old “fast” reports will now appear slow in comparison, so you want to have some baselines for them too
I can’t stress this strongly enough.
Cary Millsap writes excellently on the whole subject of performance. I can’t recommend highly enough his paper “Thinking Clearly About Performance”, as well as many of the articles on his blog.
http://carymillsap.blogspot.com/2010/02/thinking-clearly-about-performance.html
There are books and books written on how you should approach performance testing and tuning, people like Mr Millsap have built their whole careers around it. It’s way outside the scope of this, but I believe it’s essential to understand the approach to follow, otherwise all your testing can be in vain.
It’s not the same as dashing off an OBIEE report that you can bin and recreate next week. Imagine designing your DW schema without good modelling knowledge – or think of ones that you’ve worked with where the person who created it didn’t understand what they were doing. The wasted time and misleading results can be potentially disastrous if you don’t get it right up front.
Take my word for it – time invested up front reading and understanding will repay itself ten-fold.
Preaching over.