Ab Initio Training: Ab Initio Architecture

Ab Initio architecture, Co>Operating System, GDE, graphs, pipelining, parallelism, Unix scripting integration, sandbox environment.

Presentation Transcript

  • Ab Initio Overview: users build their graphs in the GDE; a graph, when deployed, generates a .ksh script. The Co>Operating System runs the graphs. The EME stores all variables in a repository, is also used for version control, and collects all metadata about graphs developed in the GDE. The DTM is used to schedule graphs developed in the GDE and can also maintain dependencies between graphs.
  • Continue to next slide. More details: blog: http://sandyclassic.wordpress.com; linkedin: https://www.linkedin.com/in/sandepsharma; slideshare: http://www.slideshare.net/SandeepSharma65; facebook: https://facebook.com/sandeepclassic; google+: http://google.com/+SandeepSharmaa; Twitter: https://twitter.com/sandeeclassic; http://thedatawarehouseclassics.wordpress.com; http://businessintelligencetechnologytrend.wordpress.com
  • About Ab Initio: Ab Initio is a general-purpose data processing platform for enterprise-class, mission-critical applications such as data warehousing, clickstream processing, data movement, data transformation and analytics. It supports integration of arbitrary data sources and programs, and provides complete metadata management across the enterprise. It is a proven best-of-breed ETL solution. Applications of Ab Initio: ETL for data warehouses, data marts and operational data sources; parallel data cleansing and validation; parallel data transformation and filtering; high-performance analytics; real-time, parallel data capture.
  • Ab Initio Architecture (layer diagram): the Ab Initio Co>Operating System runs on top of the native operating system (UNIX or Windows NT). Above it sit the component library, user-defined components and third-party components; application development environments (graphical, C++, shell) and applications are built on these, alongside the Ab Initio metadata repository.
  • Co>Operating System: The Co>Operating System is core software that unites a network of computing resources (CPUs, storage disks, programs, datasets) into a production-quality data processing system with scalable performance and mainframe reliability. The Co>Operating System is layered on top of the native operating systems of a collection of computers. It provides a distributed model for process execution, file management, process monitoring, check-pointing, and debugging.
  • Co>Operating System: The Graphical Development Environment (GDE) provides a graphical user interface into the services of the Co>Operating System. Unlimited scalability: data parallelism results in speedups proportional to the hardware resources provided; double the number of CPUs and execution time is halved. Flexibility: provides a powerful and efficient data transformation engine and an open component model for extending and customizing Ab Initio's functionality. Portability: runs heterogeneously across a huge variety of operating system and hardware platforms.
  • Graphical Development Environment (GDE): The GDE lets you create applications by dragging and dropping components onto a canvas, configuring them with familiar, intuitive point-and-click operations, and connecting them into executable flowcharts. These diagrams are architectural documents that developers and managers alike can understand and use, but they are not mere pictures: the Co>Operating System executes these flowcharts directly. This means there is a seamless and solid connection between the abstract picture of the application and the concrete reality of its execution.
  • Ab Initio S/W Versions & File Extensions: Software versions: Co>Operating System version; GDE version. File extensions: .mp = stored Ab Initio graph or graph component; .mpc = program or custom component; .mdc = dataset or custom dataset component; .dml = Data Manipulation Language file or record type definition; .xfr = transform function file; .dat = data file (either serial file or multifile).
  • Connecting to the Co>Op Server from the GDE
  • Host Profile Setting: 1. Choose Settings from the Run menu. 2. Check the Use Host Profile Setting checkbox. 3. Click the Edit button to open the Host Profile dialog. 4. If running Ab Initio on your local NT system, check the Local Execution (NT) checkbox and go to step 6. 5. If running Ab Initio on a remote UNIX system, fill in the path to the Host, the Host Login and the Password. 6. Type the full path of the Host directory. 7. Select the Shell Type from the pull-down menu. 8. Test the login and, if necessary, make changes.
  • Host Profile (screenshot): select the Shell Type; enter Host, Login, Password and Host directory.
  • Ab Initio Components: Ab Initio provides a library of components; the Datasets, Partition, Transform, Sort and Database components are frequently used.
  • Creating Graph (screenshot): type the Label and specify the input .dat file.
  • Create Graph - DML: Propagate from Neighbors: copy record formats from the connected flow. Same As: copy record formats from a specific component's port. Path: store record formats in a local file, a host file, or in the Ab Initio repository. Embedded: type the record format directly in a string. (Screenshot: specify the .dml file.)
  • Creating Graph - DML: DML is Ab Initio's Data Manipulation Language. DML describes data in terms of: record formats, which list the fields and format of input, output, and intermediate records; expressions, which define simple computations, for example selection; transform functions, which control reformatting, aggregation, and other data transformations; and keys, which specify grouping, ordering, and partitioning relationships between records. (Screenshot: editing a .dml file through the Record Format Editor, Grid View.)
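For illustration only (not part of the original slides), here is a minimal sketch of what a delimited record format in a .dml file might look like. The field names and delimiters are hypothetical, and the exact syntax should be checked against the DML documentation for your Co>Operating System version.

    record
      string(",")  cust_id;      // comma-delimited customer id (hypothetical field)
      string(",")  first_name;
      string(",")  last_name;
      decimal(",") amount;       // purchase amount
      string("\n") region;       // last field, terminated by newline
    end;

A component pointed at this .dml file would read or write records laid out as five delimited fields per line.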
  • Creating Graph - Transform: A transform function is either a DML file or a DML string that describes how you manipulate your data. Ab Initio transform functions mainly consist of a series of assignment statements; each statement is called a business rule. When Ab Initio evaluates a transform function, it performs the following tasks: initializes local variables, evaluates statements, and evaluates rules. Transform function files have the .xfr extension. (Screenshot: specify the .xfr file.)
  • Creating Graph - xfr: Transform function: a set of rules that compute output values from input values. Business rule: the part of a transform function that describes how you manipulate one field of your output data. Variable: an optional part of a transform function that provides storage for temporary values. Statement: an optional part of a transform function that assigns values to variables in a specific order.
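As a hedged sketch (field names invented, syntax from memory of DML rather than from these slides), a small .xfr transform function might look roughly like this, with one local variable and one business rule per output field:

    out :: reformat(in) =
    begin
      let real(8) tax_rate = 0.08;                        // local variable
      out.cust_id   :: in.cust_id;                        // one business rule per output field
      out.full_name :: string_concat(in.first_name, " ", in.last_name);
      out.net_amt   :: in.amount * (1 + tax_rate);
    end;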
  • Sample Components: Sort, Dedup, Join, Replicate, Rollup, Filter by Expression, Merge, Lookup, Reformat, etc.
  • Creating Graph - Sort Component: The Sort component reorders data. It has two parameters: key and max-core. Key: the key parameter describes the collation order. Max-core: the max-core parameter controls how often the Sort component dumps data from memory to disk. (Screenshot: specify the key for the Sort.)
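For example (a hypothetical key specifier with invented field names; the "descending" modifier is assumed and worth verifying in the DML documentation), the Sort key parameter might be written as:

    {cust_id; transaction_date descending}

which sorts by cust_id ascending and, within each cust_id, by transaction_date in descending order.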
  • Creating Graph - Dedup Component: The Dedup component removes duplicate records. The dedup criterion is one of unique-only, First, or Last. (Screenshot: select the dedup criteria.)
  • Creating Graph - Replicate Component: Replicate combines the data records from its inputs into one flow and writes a copy of that flow to each of its output ports. Use Replicate to support component parallelism.
  • Creating Graph - Join Component (screenshot): specify the key for the join and the type of join.
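As a rough illustration (field names hypothetical, not taken from the slides), the Join key names the field(s) shared by both inputs, and the join transform builds each output record from the matched records arriving on ports in0 and in1:

    // key: {cust_id}
    out :: join(in0, in1) =
    begin
      out.cust_id   :: in0.cust_id;
      out.full_name :: string_concat(in0.first_name, " ", in0.last_name);
      out.city      :: in1.city;    // field taken from the second input
    end;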
  • Database Configuration (.dbc): A file with a .dbc extension provides the GDE with the information it needs to connect to a database. A configuration file contains the following information: the name and version number of the database to which you want to connect; the name of the computer on which the database instance or server runs, or on which the database remote-access software is installed; and the name of the database instance, server, or provider to which you want to connect. You generate a configuration file by using the Properties dialog box for one of the Database components.
  • Creating Parallel Applications: Types of parallel processing: Component-level parallelism: an application with multiple components running simultaneously on separate data uses component parallelism. Pipeline parallelism: an application with multiple components running simultaneously on the same data uses pipeline parallelism. Data parallelism: an application that divides its data into segments and operates on the segments simultaneously uses data parallelism.
  • Partition Components: Partition by Expression: dividing data according to a DML expression. Partition by Key: grouping data by a key. Partition with Load Balance: dynamic load balancing. Partition by Percentage: distributing data so the output is proportional to fractions of 100. Partition by Range: dividing data evenly among nodes, based on a key and a set of partitioning ranges. Partition by Round-robin: distributing data evenly, in blocksize chunks, across the output partitions.
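As a hedged example of Partition by Expression (field name invented; this assumes the expression result is taken modulo the number of output flows, which should be verified against the component documentation), a numeric DML expression over the record's fields such as:

    region_code % 4

would send records with the same remainder to the same one of four output partitions.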
  • Departition Components: Concatenate: produces a single output flow that contains all the records from the first input partition, then all the records from the second input partition, and so on. Gather: collects inputs from multiple partitions in an arbitrary manner and produces a single output flow; it does not maintain sort order. Interleave: collects records from many sources in round-robin fashion. Merge: collects inputs from multiple sorted partitions and maintains the sort order.
  • Multifile Systems: A multifile system is a specially created set of directories, possibly on different machines, which have an identical substructure. Each directory is a partition of the multifile system. When a multifile is placed in a multifile system, its partitions are files within each of the partitions of the multifile system. A multifile system gives better performance than a flat file system because it can divide your data among multiple disks or CPUs. Typically (an SMP machine being the exception) a multifile system is created with the control partition on one node and the data partitions on other nodes, to distribute the work and improve performance. To do this, use full Internet URLs that specify file and directory names and locations on remote machines.
  • Multifile (diagram)
  • Sandbox: A sandbox is a collection of graphs and related files that are stored in a single directory tree and treated as a group for purposes of version control, navigation, and migration. A sandbox can be a file-system copy of a datastore project. In a graph, instead of specifying the entire path for a file location, we specify only a sandbox parameter variable, for example $AI_IN_DATA/customer_info.dat, where $AI_IN_DATA contains the entire path defined relative to the sandbox's $AI_HOME variable. The actual in_data directory in the sandbox is $AI_HOME/in_data.
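As a worked illustration (directory names hypothetical), the sandbox parameters might resolve like this:

    AI_HOME    = /u/dev/my_sandbox
    AI_IN_DATA = $AI_HOME/in_data

    a graph that reads $AI_IN_DATA/customer_info.dat therefore opens
    /u/dev/my_sandbox/in_data/customer_info.dat

Pointing $AI_HOME at a different directory (for example, the production sandbox) redirects every such reference without editing the graph.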
  • Sandbox: The sandbox provides an excellent mechanism for maintaining uniqueness while moving from the development to the production environment by means of switch parameters. We can define parameters in the sandbox that can be used across all the graphs belonging to that sandbox. The topmost variable, $PROJECT_DIR, contains the path of the home directory.
  • Sandbox (screenshot)
  • Deploying: Every graph, after validation and testing, has to be deployed as a .ksh file into the run directory on UNIX. This .ksh file is an executable file which is the backbone of the entire automation/wrapper process. The wrapper automation consists of the .run and .env files, the dependency list, the job list, etc. For a detailed description of the wrapper and the different directories and files, please refer to the documentation on the wrapper / UNIX presentation.
  • References: Ab Initio Tutorial; Ab Initio Online Help; Website (abinitio.com); http://sandyclassic.wordpress.com; Data Warehouse: http://datawarehouseview.wordpress.com/; Data Science View: http://thedatascience.wordpress.com/; Business Intelligence Trends: http://businessintelligencetechnologytrend.wordpress.com/; Business Architect: http://businessarchitectview.wordpress.com/; Enterprise Architect: http://enterprisearchitectview.wordpress.com/; Project Manager: http://projectmanagerview.wordpress.com; Data Architect: https://dataarchitectview.wordpress.com/