I’m here to show you what I’ve learned about best practices. As fair warning, this is geared toward a BI report writer who’s been using the system for at least six months and wants to take the next step in their reporting, but isn’t sure how to get there. My name is …, I am the… I handle most of the analysis pertaining to the design and pricing of our benefit plans, and I create most of the workforce metrics that come out of HR for Safelite, like turnover, recruiting, and performance management.
What do I mean by Next Level Report? To me, it’s using Report Studio as it wants to be used. Query Studio is useful for data dumps. Analysis Studio is for creating quick crosstabs and ‘pretty’ charts. Event Studio is for creating automatic notifications – or in my case, sending out 10k incorrect notifications…then doing the same an hour later. Report Studio is where you can create real reporting: boardroom-ready analysis.
When you finish your online BI training, you feel pretty confident you can make a report just like this. You can drag and drop fields, group, sort…if you’re feeling giddy, you might even add a “count” at the bottom.
Then you see something like this. If you came to Connections last year, you would have seen Alex Keever show this off. This was definitely not in the training, but it inspired me. Cognos BI can do almost anything you want it to – but you have to be able to organize your data to do it.
So I’m going to try to help you get there – and by “there”, I mean a place where you have valuable data: data that you want to present and that your business partners need. Let’s start with mapping it out.
Plan it like a project: Treat every report like a project. You can’t just say “we’re going to launch Open Enrollment .Net and go live in 2 weeks” without planning…and if you have ever tried to do this, you understand all too well why not. Take the time to sit down, figure out what you’re trying to do, and how you’re going to get there – step by step. Draw a picture – not just because that’s often how I get requests (a couple of drawn charts with some words scribbled on them), but because drawing a picture helps. No matter what I say about not focusing on the end result, that is the part that matters most, so going into it you should have an idea of how it’s going to look in the end. Identify ALL your fields and write them down. Any field you’ll need – whether it’s used for the final report, part of a calculation, or a filter – write it down. This will help you figure out exactly the steps you need to take, and the more you do it like this, the more it will help. You’ll also need to decide how many different sets of calculations you’ll need for the product. For turnover, you need to run different calculations on the same data set – job history: one to calculate the base headcount (a point-in-time headcount); one to count the number of hires in the period; one to count the number of terminations in the period. That’s three different calculations on the same data – and that means three different queries.
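To make the turnover example concrete, here is a minimal sketch of those three calculations run against the same job-history data – written in Python rather than Cognos, with all field names, dates, and rows invented for illustration:

```python
# Hypothetical sketch: three turnover calculations over one job-history set.
# All employee IDs, dates, and actions below are made up.
from datetime import date

job_history = [
    # (eeid, effective_date, action) where action is "HIRE" or "TERM"
    ("E1", date(2011, 3, 1), "HIRE"),
    ("E2", date(2011, 6, 15), "HIRE"),
    ("E2", date(2011, 11, 20), "TERM"),
    ("E3", date(2010, 1, 4), "HIRE"),
]

period_start, period_end = date(2011, 1, 1), date(2011, 12, 31)

def active_on(as_of):
    """Replay job history up to a date to get a point-in-time headcount."""
    active = set()
    for eeid, eff, action in sorted(job_history, key=lambda r: r[1]):
        if eff > as_of:
            break
        if action == "HIRE":
            active.add(eeid)
        else:
            active.discard(eeid)
    return active

# Calculation 1: base headcount, a point-in-time count at the period start.
base_headcount = len(active_on(period_start))

# Calculation 2: count of hires in the period.
hires = sum(1 for _, eff, a in job_history
            if a == "HIRE" and period_start <= eff <= period_end)

# Calculation 3: count of terminations in the period.
terms = sum(1 for _, eff, a in job_history
            if a == "TERM" and period_start <= eff <= period_end)
```

Three different answers from one data set – which is exactly why the report needs three queries.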
After making your map, you need to pick your package.
Most reports are built with one of the specific packages, because they sound like what we’re trying to build – My Employees, Payroll History, Benefits. We’re also taught, except for a few of us, to avoid things with names like “administrator”; it’s scary to have that much power. This is one of the few times where you shouldn’t avoid “Administrator”. Getting to know and love your administrator package will be better in the long run. Yes, it’s bigger – if you want more data, you get more data. There are a lot of folders to go through, especially when most of the time you only want the basic employee information. But the more you use it, the more comfortable you’ll be with it. We’ve all had to deal with missing-field regret; it happens to everyone. With a smaller package like “My Employees”, it’s easy to get all the way through a report before discovering a field you need – like Supervisor – isn’t there. With the Administrator package, the chance of that is pretty low. Sometimes you never know the direction a report will be taken. You start off with a basic performance rating report, and after one meeting with a couple of executives, you might end up with this. This same report has grown so massive it’s been blamed a few times by Ultimate support for bringing down the BI server…although it’s never been proven.
Use Query Explorer. This is the biggest but most basic requirement for writing better reports. To some of you it won’t be a new idea, but Query Explorer is the foundation of BI, not ‘Page View’.
Page View is for the final product; it’s not where you should start. It’s like frosting a bowl of raw cake batter. Starting on Page View limits your view to the end result, not the information you’re trying to present. You’ll also find it’s quicker to work this way: instead of clicking the filter button and having to scroll and find the fields one by one in Page View, you’ll be able to view all your data and filters right in their own sections, and the insertable objects container is always there, in the same spot you left it. Query Explorer lets you focus on the data. By looking at just the available fields, and how they’re being pulled into the query, you’ll have a better grasp of your next steps. You’ll also be able to see if some of the default behaviors of the package might cause you issues (like the automatic calculations set up on some of the fields). Job History is notorious for causing hours of frustration when it automatically averages the hourly rate for whatever the report is grouped on. And – you can’t create multiple queries if you’re stuck in Page View. Think of Report Studio as having a hundred Query Studios in one. With turnover, you need to be able to count the headcount, hires, and terms within a certain period. The best way to do this is to create multiple queries – one for each of those calculations – and join them together to do the final calculations. If you’re just dragging and dropping fields into Page View, you’re never going to get there.
At the bottom left of the Query Explorer panel is a little-used section for “Properties”. These adjust how your fields are going to behave – and often they will not behave well. As I mentioned just a minute ago, sometimes the default package settings might not work for what you’re trying to do. Please don’t blame the BI guys – they are very busy trying to bring the system back up after I’ve run my performance mgmt report. You can see how most of the settings will work for most uses…especially for those that only use Page View. But they don’t work for everything. So what’s the first thing I do after dropping all my fields into the query? Select all the fields and change both of the aggregates to None. Right in the middle of the properties, there are two fields: “Aggregate Function” and “Rollup Aggregate Function”. These are responsible for most field aberrations (like counting the EmpNo, totaling the Annual Salary, or averaging the Hourly Rt). Most of them are set to perform a function like Total automatically, based on the setup of the report – which is affected by the grouping, sorting, and field placement as you’re building. You don’t need those; you can make your own. One of the most important words in BI is “FOR”. FOR allows you to create calculations based on any field in the query you want: Count Employee Number for OrgLvl1, average Hourly Rt for JobCode. Using FOR in your expressions translates almost identically to what the package automatically does for aggregates – but you have control over how it works.
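For those who like to see the mechanics, a FOR aggregate behaves like a group-by count attached back to every row of the group. A rough Python sketch (not Cognos, all data invented) of what Count([Employee Number] FOR [OrgLvl1]) produces:

```python
# Hypothetical sketch of Count([EmpNo] FOR [OrgLvl1]) as a group-by count.
# Rows and org names below are made up.
from collections import defaultdict

rows = [
    {"EmpNo": "100", "OrgLvl1": "East"},
    {"EmpNo": "101", "OrgLvl1": "East"},
    {"EmpNo": "102", "OrgLvl1": "West"},
]

# First pass: count employees within each OrgLvl1 group.
count_for_org = defaultdict(int)
for r in rows:
    count_for_org[r["OrgLvl1"]] += 1

# Second pass: every row carries the count for its own group,
# just as the FOR aggregate does inside a Report Studio query.
for r in rows:
    r["CountForOrg"] = count_for_org[r["OrgLvl1"]]
```

The point is that you control the grouping field explicitly, instead of letting the report layout decide the rollup for you.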
You can do a lot within one query, but you don’t have to. Let’s say you’re trying to do a report of the date people have taken a training course – but you want to show everyone, whether they’ve taken the class or not. You can do this in one query, but it’s easier not to. A better way is to segment your data. Build one query for all your active employees and anything else you need about them. Then build a second query with just the course you’re looking for from the Training folder. Finally, join them together into a third query with an outer join, with the employee data as the primary query. The outer join (1=0) ensures that you’ll get all the employees, plus the training dates for that class if they’ve taken it, without dropping all the people who haven’t. So back to turnover: the first piece of building turnover is the effective-dated employee list…which is built from job history. While you could create this list all in one query, don’t. If you try that, you’ll never be able to figure out what weird legacy data is hosing your report. Make it easy to find errors. Data errors that impact turnover are usually from conversion, but they also come from manual changes, or from people putting the same transaction through multiple times. It’s really hard to figure out that you need to exclude all reason codes of “CONV” if you try to do it in one query. But breaking it apart into simple pieces gives you better control of the data as you’re putting it together.
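The outer-join behavior is easy to picture outside the tool. A minimal Python sketch (invented names and dates) of keeping every employee while attaching a training date only where one exists:

```python
# Hypothetical sketch of the outer join: employees are the primary side,
# training dates attach only if present. All data below is made up.
employees = [("E1", "Smith"), ("E2", "Jones"), ("E3", "Lee")]
training = {"E2": "2012-01-15"}  # only E2 has taken the course

# dict.get returns None for employees with no completion row,
# so nobody is dropped from the report.
report = [(eeid, name, training.get(eeid)) for eeid, name in employees]
```

An inner join here would have shrunk the report to just E2; the outer join keeps all three people and leaves the date blank for the other two.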
Here are the steps to your basic effective-date headcount, although overly simplified. Your first query is a list of the data you might want from job history (EEID, CoID, JobCode, EmpType, EmpStatus, OrgLvl1, and most important, EffDt and DateTimeCreated). All you are trying to do in this one is filter the data for rows before the prompted “date”, and then pick the row or rows where the effective date is the closest to that prompted “date”. The three big calculations are: MaxDate, a calculated field to show the latest date row for each person, or EEID; DateFilter, a filter to limit the data to just the rows <= the date you prompt for (these filter before the MaxDate is calculated); and MaxDateFilter, which is done “after aggregation” in the filter properties and limits the data to the rows where the MaxDate is equal to the Effective Date from job history. This query is your chance to get rid of bad data. After you build the first query, test it out: look for stuff that just doesn’t make sense, and verify the data against the system. The second query uses “Root” as its source and is used for filtering the latest “time”. Many of the people from the first query will have multiple rows, because transactions are often dated the same day, or inserted later to fix a problem. Basically, you bring in all the data and create two calculations: MaxCDateTime, a calculated field to show the latest datetime row for each person; and MaxTimeFilter, which is done “after aggregation” in the filter properties and limits the data to the rows where the MaxCDateTime is equal to the DateTimeCreated from job history. The whole purpose of this query is to create one good row per person – their actual job row at that time. Finally, Query1: use DropDup as the source and pull everything in. Save all your organization and status filters for here, the first being Status <> ’T’. The reason you save all the other org filters for here is to make sure your data is all organized first.
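The three queries boil down to a two-stage “latest row” filter followed by the status filter. A rough Python sketch of the same logic (field names and rows invented, not Ultimate’s actual schema):

```python
# Hypothetical sketch of the effective-date headcount pipeline.
# Stage 1 ("Root"): latest effective date per EEID, on/before the prompt.
# Stage 2 ("DropDup"): break same-day ties on latest DateTimeCreated.
# Stage 3 ("Query1"): apply the status filter last.
from datetime import date, datetime

prompt_date = date(2012, 2, 1)
job_history = [
    # (eeid, eff_date, created, status) -- all rows made up
    ("E1", date(2011, 5, 1), datetime(2011, 5, 1, 9, 0), "A"),
    ("E1", date(2012, 1, 15), datetime(2012, 1, 15, 8, 0), "A"),   # bad entry
    ("E1", date(2012, 1, 15), datetime(2012, 1, 16, 10, 0), "A"),  # correction
    ("E2", date(2011, 8, 1), datetime(2011, 8, 1, 9, 0), "T"),
]

# Stage 1: DateFilter, then MaxDate "after aggregation".
eligible = [r for r in job_history if r[1] <= prompt_date]
max_eff = {}
for r in eligible:
    if r[0] not in max_eff or r[1] > max_eff[r[0]]:
        max_eff[r[0]] = r[1]
latest = [r for r in eligible if r[1] == max_eff[r[0]]]

# Stage 2: MaxCDateTime keeps only the most recently created same-day row.
max_created = {}
for r in latest:
    if r[0] not in max_created or r[2] > max_created[r[0]]:
        max_created[r[0]] = r[2]
one_per_person = [r for r in latest if r[2] == max_created[r[0]]]

# Stage 3: status filter last, so ties were resolved on complete data.
headcount = [r for r in one_per_person if r[3] != "T"]
```

Note that E1’s corrected row wins the tie on DateTimeCreated, and E2 only drops out at the final status filter – exactly the ordering the speaker warns about.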
If you filter for a certain employee type or org level, or even status, in one of the earlier queries, it can result in bad data. If someone has multiple rows in the prior query but the correct one is actually a term row, filtering for non-terms there – unless done just right – can give you bad results.
SQL is your friend – and there are three great reasons to use it in your reporting.
First – you can use it to bring data into the report that’s not available in the package. And don’t make it hard, or try to do too much in SQL if you don’t need to. Create the SQL segment, choose your data source (usually CompanyDB), and type in “select * from empcomp” (select ALL from the EMPCOMP table, or whatever table you need). After it’s in the query, you can do any other filters you might need. However, when you use SQL, you bypass security. There’s a way to join it back into security, but I’ve found the easiest way is to just join it with a regular table from the package in your report. For this, you’re only using SQL to get a field you need, not to write the whole report. Second – you can use the fact that it bypasses security to your advantage: compare what people can run under their security to the whole company. Use that SQL query from EmpComp and create a current headcount by job family. Then create a similar headcount from the package and join them together. If you do it right, users will be able to see how they compare to the company. With a little practice you can use this for almost anything: turnover, performance management, training metrics. Third – my favorite, the holy grail…joining databases together. It’s possible to connect Recruiting or the old Performance Mgmt data to the core system data. Have you ever wanted to use Recruiting and the core system at the same time? I mentioned before the data sources for SQL queries; one is Recruiting and one is Performance Mgmt. You can either write the data extract in SQL, or just use a report you’ve already written – you can copy the SQL out of any query you have. So you can pull the SQL from a query written in Recruiting, then create a new report using the Administrator package, build a SQL query where UltiPro Recruiting is the data source, and paste that generated SQL into the SQL panel.
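The company-comparison idea is just two headcounts joined on job family. A minimal Python sketch with made-up numbers (the unsecured SQL pull on one side, the security-filtered package query on the other):

```python
# Hypothetical sketch: join the whole-company headcount (from SQL, which
# bypasses security) to the user's secured headcount by job family.
# All job families and counts are invented.
company = {"Tech": 500, "Sales": 300}   # SQL query: entire company
my_team = {"Tech": 25, "Sales": 10}     # package query: what this user can see

# (my count, company count, my share of the company as a percent)
comparison = {
    fam: (my_team.get(fam, 0), total,
          round(100 * my_team.get(fam, 0) / total, 1))
    for fam, total in company.items()
}
```

The user never sees other people’s detail rows – only how their slice compares to the company totals.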
This allows you to write a report from the core database, tie it to the data from Recruiting, and even choose to have it inherit the security from the core system. The hard part is figuring out how to use it, but I bet more than a few of you are brainstorming how this can be used in your company. I’ve used it to create internal-applicant automated notices to the current manager with Recruiting, and true performance metrics out of the old Performance Mgmt system that include the org structures and fields not available in the PM system…but available in Core.
I wanted to put together a quick list of common expressions, and some or most might not be new. I’m not going to go through the more basic ones like Count, Maximum, or Concatenate, but as you start building more complex reports, these will become your bread and butter. Days +/- is all the same expression, _add_days: if you’re adding days you use a positive number, and subtracting days is a negative. For example, if you want to run a workflow report that pulls transactions within the past week, it would be “workflow date” >= _add_days(current_date,-7). MakeDate, or converting dates, is a common need, if for no other reason than to make date and datetime fields all match – especially because dates are notoriously difficult to work with. Use cast(FIELD,date); I showed this one a little earlier in the headcount report, and it can also be used for other field conversions like time, decimal, etc. Trim, while fairly innocuous, is often the only thing that will fix a report. Use Trim to remove blank spaces before or after the real characters in a field, like the spaces after the EmpNo. If you’ve written multiple-query reports and joined them on non-standard fields like JobCode, you’ve probably seen the report drop some or all of the records – because it’s matching a field with spaces at the end against one without. Add Trim to any field you use for joining and it will solve that. Null is pretty standard – most people have used this with dates. But if you forget null when filtering on a non-required field, like Union Code, and only filter with <>, you’ll drop anyone without a value. Case When is nothing more than a fancy IF, but it lets you be more organized if your expression is more complex.
You can do almost infinite When/Thens in a row without extending, and it’s easy to review for troubleshooting. FOR – I mentioned it earlier, but it can be used with almost any aggregation: Max, Min, Count, Running-Count, etc., including ones that need a “distinct”. Substring – most of us spent a little effort designing job codes or other fields with repeatable logic, so often the first two, middle two, or last two characters of a job code have a standard meaning. You can use Substring to filter for those; it takes the field, then a start position and a length to pull. In this example, it starts at the 3rd character and pulls two characters. Last is kind of a hidden one, but very handy: #sq($FIELD)# pulls the name and username of the person running the report. The two fields are account.defaultName and account.personalInfo.userName. It’s not only useful as an identifier – I’ve used it to automatically filter out the person running the report, so they don’t see themselves when running a report in a public folder.
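The Trim issue from the list above is easy to demonstrate. A tiny Python sketch (invented job codes) of a join key padded with trailing spaces:

```python
# Hypothetical sketch: a padded join key silently matches nothing
# until both sides are trimmed. Values below are made up.
jobs = {"1234  ": "Technician"}   # JobCode stored with trailing spaces
row_code = "1234"                 # JobCode from the other query, no padding

# The naive join finds no match, so the record would be dropped.
naive = jobs.get(row_code)        # None

# Trimming both sides of the join (like Trim() on both fields) fixes it.
fixed = {code.strip(): title for code, title in jobs.items()}.get(row_code.strip())
```

This is exactly why records vanish when two queries are joined on a non-standard field: the keys look identical on screen but differ in invisible padding.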
Best Practices to Leverage UltiPro BI Today
Best Practices to Leverage UltiPro BI Today
Chris Chamberlain – Manager, Benefits & HR Analysis, Safelite
Safelite
• Safelite Repair, Safelite Replace
• Columbus, OH
• 10k employees in nearly 700 locations, + 850 “mobile”
• Went live with UltiPro on 10/1/2009
• SaaS: HR/Payroll/Benefits, Recruiting, Onboarding, Performance Management, Life Events/Open Enrollment
• In 2011 we had 73,803 employee/manager transactions processed in .Net (only 23,454 administrator transactions)
• UltiPro elevated our reporting to true Business Intelligence
How Do I Get “There”?
• Map it out
• Use the Administrator package
• Use Query Explorer
• Take control of your data
• Make it simple
• Don’t be afraid of SQL
Draw Your Map
• Plan it like a project
• Draw a picture
• Write down ALL the fields
• How many different calculations

Query2
Field          | From    | Show? | Calculation Expression | Filter  | Group | Sort | Join
EmpNo          | EmpPers | X     |                        |         |       | X    |
Last,FirstName | EmpPers | X     |                        |         |       | X    |
Status         | EmpComp |       |                        | <>T     |       |      |
Employee Type  | EmpComp | X     |                        | REG=1,2 |       |      |
JobCode+Title  | EmpComp | X     | Concat Cd || Title     |         | X     |      |
Count          | Query2  | X     | Count in Job           |         |       |      |
CoID           | EmpComp |       |                        |         |       |      | X
EEID           | EmpComp |       |                        |         |       |      | X
Administrator Package
• Don’t be afraid of “admin”
• More data = More data
• Never have missing-field regret
• Moving target

Annual Review Analysis – as of: Feb 20, 2012  Year: 2012
[Chart: Team Summary rating distribution – Distribution Ratings Percent vs. Targeted Percent by Score, 1–5]

Overall Performance Rating       | Team Summary | Target | Count
1 - Greatly Exceeds Expectations | 2%           | 3%     | 2
2 - Exceeds Expectations         | 33%          | 16%    | 36
3 - Meets Expectations           | 59%          | 70%    | 65
4 - Does Not Meet Expectations   | 4%           | 10%    | 4
5 - Needs Immediate Improvement  | 0%           | 1%     | 0
Not Rated                        | 3%           | 0%     | 3
Summary                          |              |        | 110
Query Explorer?
• Page View is for the final product
• Lets you focus on the data
• Multiple queries
Take Control of Your Data
• Check your properties
• Clear the slate
• Create your own aggregations (FOR)
  Count ( [Employee Number] FOR [OrgLvl1] )
  Average ( [Hourly Rt] FOR [JobCode] )
Make it Simple
• Don’t overcomplicate things
• Segment your data
• Make it easy for error checking
Effective-Date Headcount
• Query1
  – Maxdate: maximum([Business Layer].[Employee Job History].[Effective Date] for [Business Layer].[Employee Job History].[EjhEEID])
  – [Business Layer].[Employee Job History].[Effective Date] <= ?EffDate?
  – cast([Effective Date],date) = cast([MaxDate],date)  (After Aggregation)
• Query2
  – MaxCDateTime: maximum([Query1].[EjhDateTimeCreated] for [Query1].[EjhEEID])
  – cast([Query1].[EjhDateTimeCreated],time) = cast([MaxCDateTime],time)  (After Aggregation)
• Query3
  – [Employee Status Code] <> ’T’
Don’t be Afraid of SQL
• Use SQL to pull the data that’s not in the package
  – select * from empcomp
• Company comparisons
• Joining databases
Expressions for the Next Level
• Days +/-: _add_days | _add_days(current_date,-7)
• MakeDate: Cast([FIELD],date) | Cast([Eff Date], date) = Cast([DateTimeCreated], date)
• Trim: Trim([FIELD]) | Trim([EmpNo]) = Trim([Query1].[Empno])
• Null: [Field] is missing | [Local Union Code] is missing OR [Local Union Code] <> ’101’
• Case When: Case when “x”=“y” then “z” else “q” end | Case When [JobCode]=‘1234’ then ‘100’ When [JobCode] starts with ‘4’ then ‘200’ Else ‘300’ End
• FOR: ‘agg’([FIELD] for X) | Count (distinct [EmpNo] for [Job])
• Substring: substring([FIELD],’start’,’length’) | Substring([JobCode],3,2)=’11’
• Last: #sq($account.defaultName)# OR #sq($account.personalInfo.userName)#
Best Resources for Report Studio
• BI Exchange
• LinkedIn – UltiPro Business Intelligence Group
• COGNOISe.com
• CogKnowHow.com
• CognosForums.com
• TechontheNet.com
• DashboardInsight.com (for inspiration)
• Entity Relationship Diagrams (ERD) – Doc Website, under UltiPro Reference
• Database Layout file: http://documentation.ultimate (or search Doc Website for foldercontents.xls)
BI Exchange – My Examples
• Benefit Election Comparison
• EEO (Dated Headcount, Hires, Job Change, Promotions, Terminations)
• Effective-Date Employee List
• Employees between dates w/variable filters
• GL Rules
• Job changes by date range
• New Supervisors
• Total Compensation Statement w/ Burst
• Turnover – Basic w/Org Level Grouping Prompt

Chris Chamberlain
Christopher.email@example.com