This document provides SQL queries for retrieving lists of values (LOVs) commonly used in business intelligence reports. It includes queries to retrieve legislative data groups, secured persons lists, payroll names, country codes, balance categories, payroll flow names, legal employers, payroll statutory units, divisions, business units, legal reporting units, tax reporting units, departments, person names, work schedules, collective labor agreements, supervisor names, benefit life events, and benefit plans. The document states that it will be updated with additional queries to serve as a single reference point for all LOV queries.
Most Frequently Used SQLs for List of Values (LOVs) in BI Reports
Below is the list of SQLs that are most frequently used to get the list of values for business objects.
Legislative Data Group:
SELECT name FROM per_legislative_data_groups_vl
Secured Persons List:
SELECT DISTINCT PERSON_NUMBER FROM PER_PERSON_SECURED_LIST_V
Secured Departments List:
SELECT DISTINCT SUBSTR(DEPT.NAME, 1, INSTR(DEPT.NAME, '-', 1) - 1) DEPCODE
FROM PER_DEPARTMENT_SECURED_LIST_V SEC_DEPT,
     PER_DEPARTMENTS DEPT
WHERE DEPT.ORGANIZATION_ID = SEC_DEPT.ORGANIZATION_ID
Payroll Name:
SELECT DISTINCT payroll_name
FROM pay_all_payrolls_f
WHERE TRUNC(SYSDATE) BETWEEN effective_start_date AND effective_end_date
Country Codes:
SELECT country_code, GEOGRAPHY_ELEMENT1
FROM HZ_GEOGRAPHIES
WHERE GEOGRAPHY_TYPE = 'COUNTRY'
Balance Category:
SELECT DISTINCT USER_CATEGORY_NAME FROM PAY_BALANCE_CATEGORIES_VL
Pay Action Status Lookups:
SELECT meaning FROM hcm_lookups WHERE lookup_type = 'PAY_ACTION_STATUS'
Payroll Flow Name:
SELECT pfi.instance_name FROM pay_flow_instances pfi
ORDER BY pfi.creation_date DESC
Element Classification:
SELECT DISTINCT c.classification_name Classification
FROM pay_ele_classifications_vl c, per_legislative_data_groups_vl l
WHERE c.legislation_code = l.legislation_code
AND l.name IN (:P_LDG)
ORDER BY c.classification_name
Pay Periods:
SELECT ptp.period_name
FROM pay_time_periods ptp, pay_all_payrolls_f pp
WHERE pp.payroll_id = ptp.payroll_id AND pp.payroll_name = :OOS_PAYROLL
AND SYSDATE BETWEEN pp.effective_start_date AND pp.effective_end_date
AND ptp.period_category = 'E'
AND ( SUBSTR(ptp.period_name, INSTR(ptp.period_name, ' ', 1) + 1, 4) = TO_CHAR(SYSDATE, 'YYYY') OR
      SUBSTR(ptp.period_name, INSTR(ptp.period_name, ' ', 1) + 1, 4) = TO_CHAR(SYSDATE, 'YYYY') - 1 OR
      SUBSTR(ptp.period_name, INSTR(ptp.period_name, ' ', 1) + 1, 4) = TO_CHAR(SYSDATE, 'YYYY') + 1 )
ORDER BY ptp.time_period_id
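The SUBSTR/INSTR expression in the Pay Periods query pulls the four characters that follow the first space in the period name and compares them to the previous, current, and next calendar years. A small Python sketch of that logic, assuming period names shaped like "Monthly 2024 January" (a made-up example format; your period naming may differ):

```python
from datetime import date

def period_year(period_name: str) -> str:
    # Mirrors SUBSTR(name, INSTR(name, ' ', 1) + 1, 4):
    # the four characters after the first space.
    first_space = period_name.index(" ")
    return period_name[first_space + 1 : first_space + 5]

def keep_period(period_name: str, today: date) -> bool:
    # Keep periods whose embedded year is last year, this year, or next year.
    return period_year(period_name) in {str(today.year + d) for d in (-1, 0, 1)}

print(keep_period("Monthly 2024 January", date(2024, 6, 1)))  # True
print(keep_period("Monthly 2021 January", date(2024, 6, 1)))  # False
```

If your period names put the year elsewhere, adjust the INSTR delimiter and SUBSTR offsets in the SQL accordingly.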
Legal Employers:
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'HCM_LEMP'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Payroll Statutory Units (PSUs):
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'HCM_PSU'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Divisions:
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'HCM_DIVISION'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Business Units:
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'FUN_BUSINESS_UNIT'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Legal Reporting Units (LRUs):
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'HCM_LRU'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Tax Reporting Units (TRUs):
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'HCM_TRU'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Enterprise:
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'ENTERPRISE'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
Departments:
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = 'DEPARTMENT'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date
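The Legal Employer, PSU, Division, Business Unit, LRU, TRU, and Department queries above are all the same three-table join; only the CLASSIFICATION_CODE changes. A small Python helper (hypothetical, for illustration only) that generates any of those variants from one template:

```python
# Map each LOV to the classification code used in the queries above.
ORG_CLASSIFICATIONS = {
    "legal_employer": "HCM_LEMP",
    "payroll_statutory_unit": "HCM_PSU",
    "division": "HCM_DIVISION",
    "business_unit": "FUN_BUSINESS_UNIT",
    "legal_reporting_unit": "HCM_LRU",
    "tax_reporting_unit": "HCM_TRU",
    "department": "DEPARTMENT",
}

ORG_LOV_TEMPLATE = """\
SELECT hauft.NAME
FROM HR_ORG_UNIT_CLASSIFICATIONS_F houcf,
     HR_ALL_ORGANIZATION_UNITS_F haouf,
     HR_ORGANIZATION_UNITS_F_TL hauft
WHERE haouf.ORGANIZATION_ID = houcf.ORGANIZATION_ID
AND haouf.ORGANIZATION_ID = hauft.ORGANIZATION_ID
AND haouf.EFFECTIVE_START_DATE BETWEEN houcf.EFFECTIVE_START_DATE AND houcf.EFFECTIVE_END_DATE
AND hauft.LANGUAGE = 'US'
AND hauft.EFFECTIVE_START_DATE = haouf.EFFECTIVE_START_DATE
AND hauft.EFFECTIVE_END_DATE = haouf.EFFECTIVE_END_DATE
AND houcf.CLASSIFICATION_CODE = '{code}'
AND SYSDATE BETWEEN hauft.effective_start_date AND hauft.effective_end_date"""

def org_lov_sql(org_type: str) -> str:
    """Return the org-unit LOV query for the given organization type."""
    return ORG_LOV_TEMPLATE.format(code=ORG_CLASSIFICATIONS[org_type])

print(org_lov_sql("division"))
```

Keeping the template in one place makes it easy to add further classification codes without copying the whole join again.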
Person Names:
SELECT full_name FROM per_person_names_f
WHERE name_type = 'GLOBAL'
AND TRUNC(SYSDATE) BETWEEN effective_start_date AND effective_end_date
Work Schedules:
SELECT schedule_id work_schedule_id, schedule_name work_schedule_name
FROM zmm_sr_schedules_vl
WHERE TRUNC(SYSDATE) BETWEEN TRUNC(effective_from_date) AND TRUNC(effective_to_date)
Collective Labor Agreements:
SELECT DISTINCT CA.COLLECTIVE_AGREEMENT_NAME, CA.COLLECTIVE_AGREEMENT_ID
FROM PER_COL_AGREEMENTS_TL CA
Supervisor Names:
SELECT DISTINCT full_name
FROM PER_PERSON_NAMES_F ppnf,
     PER_ASSIGNMENT_SUPERVISORS_F pasf
WHERE pasf.manager_id = ppnf.person_id
AND ppnf.name_type = 'GLOBAL'
AND pasf.manager_type = 'LINE_MANAGER'
ORDER BY full_name
Benefit Life Events:
SELECT DISTINCT Name FROM BEN_LER_F
WHERE TRUNC(SYSDATE) BETWEEN EFFECTIVE_START_DATE AND EFFECTIVE_END_DATE
Benefit Plans:
SELECT DISTINCT pl.name FROM BEN_PL_F pl
WHERE (pl.name NOT LIKE 'Waive%' AND pl.name NOT LIKE 'Volun%')
AND TRUNC(SYSDATE) BETWEEN pl.EFFECTIVE_START_DATE AND pl.EFFECTIVE_END_DATE
That's all for now... I will keep updating this post with other queries, as I want to maintain a single point of reference for all Lists of Values.
Stay tuned for more updates.