This white paper explains how the Cognos BI Server running in the Linux environment can be configured and used with a Greenplum database. Included in this paper are detailed instructions for configuration and connectivity verification.
Cognos 10 upgrade, migrate, fix pack, by Bhawani N Prasad
This document provides guidance on migrating, upgrading, and installing fix packs for Cognos BI and Planning 10.1. It includes a detailed schedule and steps for migrating production and test systems. Key activities include installing server components, configuring gateways, exporting and importing content, testing, setting up security, and leveraging new features. Guidelines are provided for evaluating current systems, creating a migration plan, preparing test environments, installation order, and report development best practices.
The document summarizes various physical architecture patterns for web applications, including single server, separate database, replicated web servers, separate scripting engines, application servers, and J2EE architectures. It also discusses dimensions to consider in architecture design like performance, scalability, and constraints. Additional topics covered include web caching techniques and an overview of cloud computing characteristics and service models.
This document discusses the process of developing a user experience (UX) model for a web application from requirements. It explains that requirements engineering and analysis produce a requirements model and analysis model, which then inform the design of the interaction model or UX model. The UX model defines elements like user interface metaphors, naming conventions, and page layout specifications to guide the development team.
This document is a project report submitted by Souham Biswas for the degree of Bachelor of Technology in Computer Science & Engineering. The report details the development of the TOS Solution Manager website for Adani Logistics Ltd to manage issue tracking. It provides an overview of the existing and proposed systems, feasibility study, system requirements, design including screen layouts, code, testing, and future scope. The report includes 10 sections and a bibliography.
1) Many groups presented file replication systems they have developed and are using in production, including JLAB, SRB, Globus, GDMP, MAGDA, SAM, STAR, and BaBar.
2) The systems utilize various components like replica catalogs, file transfer services, storage interfaces, and scheduling/management layers to provide robust file replication capabilities.
3) Key topics of discussion included interfaces and standards for replication services, error handling, reliability, performance, and experience from different experiments. Groups expressed interest in further collaboration in these areas.
[DSBW Spring 2009] Unit 07: WebApp Design Patterns & Frameworks (2/3), by Carles Farré
This document summarizes various design patterns and frameworks related to web presentation layers and business layers. For web presentation layers, it discusses the Context Object pattern for encapsulating state, the Synchronizer Token pattern for controlling request flow, and different approaches to session state management. It also reviews integration patterns for connecting web presentation and business layers, including the Service Locator and Business Delegate patterns. Finally, it examines common architectural patterns for the business layer such as Transaction Script, Domain Model, and Table Module.
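Of the patterns listed above, the Synchronizer Token pattern is easy to illustrate concretely. The sketch below is a minimal, framework-free Python model of it (the session is just a dict, and the function names are made up for the example): a one-time token is stored in the session and embedded in a form, and a POST is accepted only if it carries the still-valid token, which blocks double-submits and replays.

```python
import secrets

def issue_token(session: dict) -> str:
    """Store a one-time token in the session; the page embeds it in the form."""
    token = secrets.token_hex(16)
    session["sync_token"] = token
    return token

def handle_post(session: dict, submitted_token: str) -> str:
    """Accept the request only if the token matches, then invalidate it,
    so a duplicate (double-click or resubmitted) request is rejected."""
    expected = session.pop("sync_token", None)
    if expected is not None and secrets.compare_digest(expected, submitted_token):
        return "processed"
    return "duplicate or invalid request"

session = {}
token = issue_token(session)
print(handle_post(session, token))  # first submit: token matches and is consumed
print(handle_post(session, token))  # replayed submit: token already gone, rejected
```

In a real web framework the token would live in the server-side session store and be rendered into a hidden form field, but the accept-once logic is the same.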
WSO2 Enterprise Integrator v6.1.1 was explored through a pilot study and proof of concepts. Key capabilities tested included:
- Mediating between external clients and endpoints like Axis2, WSO2 Broker, and Spring Boot using ESB proxy services.
- Connecting to external systems like ActiveMQ and an LDAP server.
- Deploying BPEL and BPMN processes on the Business Process Services runtime.
- Implementing a two-node vertical cluster with load balancing.
- Applying security policies to authenticated ESB services using an LDAP backend.
The study validated core integration and mediation capabilities of the product.
Build Applications on the Microsoft Platform Using Eclipse, Java, Ruby and PHP! by goodfriday
Come hear how Microsoft has delivered multiple technologies that focus on interoperability with non-Microsoft and Open Source technologies. Learn how to use the Eclipse tools today to build Silverlight applications that run on PCs and Macs, how to develop using combinations of Java, Ruby and PHP in addition to the standard Microsoft languages, and how Microsoft's commitment to openness with the Azure Services Platform and the use of claims-based identity supports heterogeneous identity systems.
The document discusses J2EE (Java 2 Enterprise Edition) interview questions and answers. It covers topics such as what J2EE is, J2EE modules, components, containers, deployment descriptors, transaction management, and differences between technologies like EJBs and JavaBeans. The document provides detailed explanations of core J2EE concepts.
The document discusses the evolution of J2EE architecture from single-tier to multi-tier architectures. It describes the key components and services in J2EE like EJBs, servlets, JSPs, JNDI, JTA, etc. It also discusses how J2EE applications are deployed on application servers with different containers managing different components.
The paper focuses on the architecture of JBoss Application Server and how it helps automate the development, deployment, and operation of business-critical and mission-critical applications. The paper also describes the dynamic applications implemented by JBoss.
BlazeDS is an open source remoting and messaging technology from Adobe that allows Flex and AIR applications to easily connect to existing server-side logic. It provides high performance data transfer for responsive applications and full publish/subscribe messaging capabilities. BlazeDS standardizes the programming model for remoting and messaging across platforms and simplifies backend integration.
The document introduces JDBC and its key concepts. It discusses the JDBC architecture with two layers - the application layer and driver layer. It describes the four types of JDBC drivers and how they work. The document outlines the classes and interfaces that make up the JDBC API and the basic steps to create a JDBC application, including loading a driver, connecting to a database, executing statements, and handling exceptions. It provides examples of using JDBC to perform common database operations like querying, inserting, updating, and deleting data.
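The load-driver / connect / execute / handle-exceptions sequence described above can be sketched in Python using the standard-library sqlite3 module, whose DB-API interface mirrors the JDBC steps; the in-memory database and the table and column names here are illustrative assumptions, not taken from the original document.

```python
import sqlite3  # importing the driver module; in JDBC this is loading the Driver class

def run_demo():
    conn = sqlite3.connect(":memory:")  # JDBC: DriverManager.getConnection(url)
    try:
        cur = conn.cursor()             # JDBC: Connection.createStatement()
        cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        cur.executemany("INSERT INTO users (name) VALUES (?)",
                        [("alice",), ("bob",)])  # parameterized, like PreparedStatement
        conn.commit()
        cur.execute("SELECT name FROM users ORDER BY id")
        return [row[0] for row in cur.fetchall()]  # JDBC: walk the ResultSet
    except sqlite3.Error:               # JDBC: catch SQLException
        conn.rollback()
        raise
    finally:
        conn.close()                    # always release the connection

print(run_demo())
```

The same shape — get a connection, create a statement, execute, read results, handle errors, close — is what the four JDBC driver types ultimately expose, however they bridge to the database underneath.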
This document provides a tutorial on packaging and deploying J2EE projects using Rational Application Developer V6. It discusses creating J2EE projects, importing and exporting modules, and packaging applications to take advantage of WebSphere Application Server features. The tutorial also includes optional sections on setting up a sample database using Cloudscape and running a simple address book application to demonstrate packaging and deployment.
From 0 to 1000 Apps: The First Year of Cloud Foundry at the Home Depot, by VMware Tanzu
From 0 to 1000 Apps documents The Home Depot's first year of experience with Pivotal Cloud Foundry from 2015-2016. Key points include:
- PCF was initially installed on-premises in June 2015 and usage gradually increased over the year. By mid-2016 there were over 3000 apps, 4000 instances, and 1300 unique users.
- Lessons learned centered around removing barriers to entry, establishing support models, avoiding capacity issues, and focusing on enabling developers rather than just operating the platform.
- An "aha moment" realization was that the team does not just operate infrastructure but instead enables developers, and should view developers as their customers.
Slides for my talk at Cloud Foundry Summit Europe 2016.
Nearly 1.2 million people die in road crashes each year (WHO, 2015), and millions more are injured or disabled. One major part of this problem is poor road traffic conditions, and unless action is taken, road traffic injuries are predicted to become the fifth leading cause of death by 2030. Moreover, although road traffic injuries have been a major cause of mortality for many years, most traffic accidents are both predictable and preventable. In this talk, we want to demonstrate a scalable IoT platform that uses weather data and data from other cars to warn drivers of dangerous conditions. We will show how CF can help to save human lives and the architecture behind this. Additionally, we will also explain the data science that is involved.
Delivering Apache Hadoop for the Modern Data Architecture, by Hortonworks
Join Hortonworks and Cisco as we discuss trends and drivers for a modern data architecture. Our experts will walk you through some key design considerations when deploying a Hadoop cluster in production. We'll also share practical best practices around Cisco-based big data architectures and Hortonworks Data Platform to get you started on building your modern data architecture.
White Paper: xDesign Online Editor & API Performance Benchmark Summary, by EMC
This white paper explains the performance of the xDesign Online Editor and its web services APIs, part of the EMC Document Sciences xPression suite. It provides performance data for editing a document, publishing a document, returning it to the calling application or browser, and displaying it in the user’s queue.
The document describes a new Russian tank called the "Sibirsk 1000". It has big shells, enough cannons, and can travel at 90 mph due to its aerodynamic form. The tank is described as excellent for combat. The document concludes by telling the reader to get back to work and not spend free time with war toys or nude images.
This trailer summary analyzes scenes from the Mission Impossible III trailer through shots and editing:
1) It establishes the main character, Ethan Hunt, through shots that introduce him mysteriously on a rooftop in dark clothing, fitting the spy/action genre.
2) Tension is built through a countdown and scenes of the damsel in distress and villain before cutting to black, leaving the audience waiting for the promised action.
3) Color, music, and text are used to convey the danger and excitement of the spy/action film and leave the audience anticipating the summer release date.
Russia was ruled by the Czars prior to 1917. Czar Nicholas grew increasingly unpopular as he limited civil liberties and failed to address economic issues like worker unrest. Meanwhile, Rasputin gained influence over Nicholas and his family but was disliked by many Russians. In 1917, widespread revolts and unrest led Nicholas to abdicate, and a provisional government took over before the Bolsheviks seized power in the Russian Revolution, establishing the Soviet Union and communist rule.
Based on a map of languages in Europe, students should identify three countries where Spanish-like languages are spoken, such as Italy, Portugal, France, and three countries where German-like languages are spoken, such as Germany, Austria, Switzerland. The document instructs students to compare this map to a modern map of Europe to identify 14 countries that were part of the Roman Empire and explain why Spanish and Italian would be mutually intelligible, as well as write a paragraph about how modern US society has borrowed from Roman culture.
The document describes VMware vFabric GemFire, a distributed in-memory data platform. Key points:
- It manages data in pooled cluster memory rather than on disk for improved performance. Data can be fully replicated or partitioned across nodes.
- It supports reliable publish-subscribe of data changes and "continuous querying" to provide low-latency updates and event-driven capabilities.
- Application functions can be executed in parallel across nodes, allowing "data-aware" and distributed behavior. This provides better scalability than centralizing logic in a single database node.
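As a rough illustration of the partitioned-data and "data-aware" function-execution ideas above, the sketch below hash-partitions keys across a fixed set of nodes and runs a function over each node's local slice in parallel. The node count, the data, and all function names are invented for the example; real GemFire does this through its own region and function-service APIs.

```python
from concurrent.futures import ThreadPoolExecutor
from zlib import crc32

NUM_NODES = 4  # assumed cluster size for the example

def partition_for(key: str) -> int:
    """Route a key to a node via a stable hash (simplified partitioned region)."""
    return crc32(key.encode()) % NUM_NODES

def execute_on_partitions(data: dict, fn):
    """Run fn over each node's local slice in parallel ("data-aware" execution)."""
    slices = [{} for _ in range(NUM_NODES)]
    for k, v in data.items():
        slices[partition_for(k)][k] = v
    with ThreadPoolExecutor(max_workers=NUM_NODES) as pool:
        return list(pool.map(fn, slices))

prices = {"sku-%d" % i: i * 1.5 for i in range(100)}
# Sum each partition locally, then combine -- the logic runs where the data lives,
# instead of funneling every row through one central database node.
partials = execute_on_partitions(prices, lambda s: sum(s.values()))
print(sum(partials))
```

The scalability argument in the summary is visible even in this toy: each node-local sum touches only its own slice, so adding nodes shrinks the per-node work rather than growing a central bottleneck.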
This document summarizes a presentation about scale-out converged solutions for analytics. The presentation covers the history of analytic infrastructure, why scale-out converged solutions are beneficial, an analytic workflow enabled by EMC Isilon storage and Hadoop, test results showing performance benefits, customer use cases, and next steps. It includes an agenda, diagrams demonstrating analytic workflows, performance comparisons, and descriptions of enterprise features provided by using EMC Isilon with Hadoop.
This document provides examples of mathematical equations and calculations. It includes expressions with variables, multiplication, division, exponents, and equals signs. The document works through several arithmetic steps to solve for unknown values.
http://globalvision.com.vn/vn/wincor-nixdorf-b-36.htm
Wincor Nixdorf
Wincor Nixdorf is a German company and a world-leading provider of POS equipment and IT solutions and services for the retail and banking industries. Wincor's hardware and software solutions are highly effective and reasonably priced, helping to speed up and streamline banking applications and retail systems. Wincor Nixdorf is now present in more than 100 countries, with representative offices in 41 of them, and has more than 9,000 employees worldwide.
1. The document discusses how the concept of "keeping up with the Joneses" and arms races contributed to World War 1. It instructs students to take notes on how the industrial revolution enabled arms races between European powers and how this escalating military spending ultimately led to World War 1.
2. Students are asked to graph military spending data, define an arms race, draw connections to keeping up with neighbors, and predict which countries would lose in a European war.
3. The document provides guidance for an assignment analyzing the links between arms races, nationalism, and the outbreak of World War 1.
The document provides an overview of Streamlined Task Orientated Management of Projects (STOMP), a high-level project methodology designed to provide visibility of project processes. It discusses common project phases including planning, design, build, test, and delivery. STOMP sits above detailed methodologies to allow all resources to understand the process. Key aspects covered include the project plan, change control process, and tracking progress against the schedule. The goal is to enable teams to communicate project status easily to management.
Taming Latency: Case Studies in MapReduce Data Analytics, by EMC
This session discusses how to achieve low latency in MapReduce data analysis, with various industrial and academic case studies. These illustrate improvements on MapReduce for squeezing latency out of the whole data processing stack, covering batch-mode MapReduce systems as well as stream processing systems. This session also introduces our BoltMR project efforts on this topic and discloses some interesting benchmark results.
After this session you will be able to:
Objective 1: Understand why low latency matters for many MapReduce-based big data analytics scenarios.
Objective 2: Learn the root causes of MapReduce latency, the obstacles to lowering it, and the various (im)mature solutions.
Objective 3: Determine the degree of MapReduce low latency needed for your own applications and which optimization techniques are potentially applicable.
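To make the latency discussion concrete, here is a toy single-process sketch of the batch MapReduce pipeline the session examines. Each phase is a synchronous barrier, so end-to-end latency is the sum of the map, shuffle, and reduce times — exactly the stack that the optimizations above try to squeeze. The timing keys and the word-count workload are invented for illustration.

```python
import time
from collections import defaultdict

def mapreduce_wordcount(docs):
    """Toy batch MapReduce: each phase completes before the next starts,
    so total latency = map + shuffle + reduce."""
    timings = {}

    t0 = time.perf_counter()
    mapped = [(w, 1) for doc in docs for w in doc.split()]   # map phase
    timings["map"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    groups = defaultdict(list)                               # shuffle phase
    for word, count in mapped:
        groups[word].append(count)
    timings["shuffle"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    counts = {w: sum(c) for w, c in groups.items()}          # reduce phase
    timings["reduce"] = time.perf_counter() - t0

    return counts, timings

counts, timings = mapreduce_wordcount(["a b a", "b c"])
print(counts)
```

Stream processors attack this structure by overlapping the phases (emitting partial results as records arrive) instead of waiting for each barrier, which is why they can cut latency that batch MapReduce cannot.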
Hyper-V Dynamic Memory allows virtual machines to dynamically adjust their memory usage. It uses techniques like ballooning and external page sharing to optimize memory allocation. The goal is to improve consolidation ratios with minimal performance impact. Dynamic Memory treats memory as a dynamically schedulable resource like CPU. It adds memory to VMs when needed and reclaims unused memory periodically to improve overall utilization.
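The grow-on-demand and periodic-reclaim behavior described above can be modeled with a small simulation. This is a deliberately simplified toy (the class names, the 20% buffer, and the megabyte figures are all assumptions for the example, not Hyper-V's actual policy): memory is hot-added to a VM when its demand rises, and ballooned back to the host when demand falls.

```python
class VM:
    """Toy model of a dynamic-memory guest: assigned memory tracks demand."""
    def __init__(self, name, startup_mb, maximum_mb):
        self.name = name
        self.assigned_mb = startup_mb
        self.maximum_mb = maximum_mb
        self.demand_mb = startup_mb

    def set_demand(self, mb):
        self.demand_mb = mb

class Host:
    def __init__(self, total_mb):
        self.free_mb = total_mb
        self.vms = []

    def add_vm(self, vm):
        self.free_mb -= vm.assigned_mb
        self.vms.append(vm)

    def balance(self, buffer_pct=20):
        """Hot-add memory where demand exceeds assignment; balloon out memory
        above a small buffer (mirrors the periodic reclaim pass)."""
        for vm in self.vms:
            target = min(vm.maximum_mb, vm.demand_mb * (100 + buffer_pct) // 100)
            delta = target - vm.assigned_mb
            if delta > 0:
                grant = min(delta, self.free_mb)   # grow, capped by free memory
                vm.assigned_mb += grant
                self.free_mb -= grant
            elif delta < 0:
                vm.assigned_mb = target            # balloon reclaims the surplus
                self.free_mb -= delta              # delta < 0, so free grows

host = Host(total_mb=8192)
vm = VM("web01", startup_mb=1024, maximum_mb=4096)
host.add_vm(vm)
vm.set_demand(2000)
host.balance()
print(vm.assigned_mb, host.free_mb)
```

Treating memory as a schedulable resource, as the summary puts it, is exactly this loop run continuously across all VMs: surplus reclaimed from idle guests becomes headroom for busy ones, which is what improves consolidation ratios.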
MongoDB Developer's Notebook, March 2016 -- MongoDB Connector for Business In..., by Daniel M. Farrell
This document provides instructions for configuring MongoDB, the MongoDB Connector for BI, Eclipse, and Toad to allow running SQL queries against MongoDB from within Eclipse. It describes downloading and installing a Postgres JDBC driver, MongoDB, and the MongoDB Connector for BI. It also covers creating a sample MongoDB database and collection with documents, and configuring Eclipse and Toad to connect to MongoDB via the Connector using the JDBC driver. This will allow running SQL queries from within Eclipse to interact with MongoDB data.
A New Paradigm In Linux Debug From Viosoft, by guestc28df4
1) The Arriba Debugger provides a holistic approach to debugging embedded Linux through its VMON module, which has minimal performance impact and provides full visibility of the Linux target.
2) It addresses traditional limitations by enabling debugging of loadable modules, multiple processes, and production kernels without halting the target.
3) The Arriba Debugger and Linux Event Analyzer integrate with Eclipse and provide a comprehensive Linux development environment.
A New Paradigm In Linux Debug From Viosoft Corporation, by art_lee
1) The Arriba Debugger provides a holistic approach to debugging embedded Linux through its VMON module, which has minimal performance impact and provides full visibility of the Linux target.
2) It addresses traditional limitations by enabling debugging of loadable modules, multiple processes, and production kernels without altering target performance.
3) The Arriba Debugger integrates with the Eclipse IDE and includes the Linux Event Analyzer tool for profiling Linux events with minimal overhead.
MongoDB World 2018: Bumps and Breezes: Our Journey from RDBMS to MongoDB, by MongoDB
The document summarizes the journey of migrating from an RDBMS to MongoDB. It describes the pre-MongoDB RDBMS environment, reasons for choosing MongoDB, and the evolution of the MongoDB environment over time. The evolution involved some bumps in configuring databases and applications, but also many breezes like improved performance, flexibility and scalability. Benchmarking showed MongoDB could handle more concurrent users. Future plans include using MongoDB 4.0 features and further optimizing sharding performance.
The document discusses the Base/1 Foundation Application (BFC) which allows building secure database applications with C# and ASP.NET using a distributed architecture. It supports major databases like Microsoft SQL Server, Oracle, and MySQL. Key features include a data dictionary, integration with Visual Studio, a consistent API for database access, and security features. The architecture uses distributed batch processing services and grid computing to break large jobs into smaller pieces that run across available computers. Advantages include lower costs, faster deployment, and the ability to build large-scale applications in a secure and efficient manner.
Installing IBM Cognos 10: Tips and Tricks from the TrenchesSenturus
Learn about Cognos 10 BI Server core components, common installation issues, Cognos 10 search index requirements post-install and how to navigate the maze of 32 vs. 64 bit. View the video recording and download this deck: http://www.senturus.com/resource-video/installing-cognos-10-2-1-tips-tricks-trenches/?rId=2567
Topics include:
- Cognos 10.2.1 BI Server core components
- Common installation issues
- Tips for a successful configuration (including Dynamic Query Mode support for the RAVE visualization engine)
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
Installing Cognos 10.2.1: Tips and Tricks from the TrenchesSenturus
Tips to avoid common issues for a successful Cognos BI 10.2.1 install. View the webinar video recording and download this deck: http://www.senturus.com/resources/installing-cognos-10-2-1-tips-tricks-trenches/.
Benefit from our experience installing Cognos hundreds of times across all versions of Cognos 8 and Cognos 10 to learn about Dynamic Query mode and support for the RAVE visualization engine.
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
The document outlines the installation steps and notes for IBM Cognos Analytics (Cognos 11). It describes the three installation types - Ready to Run, Expand, and Custom. Ready to Run provides a full pre-configured version for quick setup while Custom allows flexibility to choose components. It also notes post-installation configuration tips like changing the JDBC driver location and data file path.
This document discusses options for deploying COBOL applications using managed code environments like .NET and JVM. It covers the benefits of managed code, such as reuse of existing frameworks, improved application integrity with features like exception handling, and end-to-end debugging across languages. The document also addresses considerations for moving COBOL code to managed code, including database access, file handling, and support for technologies like Java application servers. Resources for learning more about modernizing COBOL applications are provided.
This document discusses Java Database Connectivity (JDBC) which provides a standard interface for connecting Java applications to various databases. It describes the JDBC API and architecture, including the four types of JDBC drivers. The key points are:
1) JDBC provides a standard way for Java programs to access any SQL database. It uses JDBC drivers implemented by database vendors to translate JDBC calls into database-specific protocols.
2) The JDBC API has two layers - an application layer used by developers, and a driver layer implemented by vendors. There are four main interfaces (Driver, Connection, Statement, ResultSet) and the DriverManager class.
3) There are
This document discusses Java Database Connectivity (JDBC) which provides a standard interface for connecting Java applications to various databases. It describes the JDBC API and architecture, including the four types of JDBC drivers. The key points are:
1) JDBC provides a standard way for Java programs to access any SQL database. It uses JDBC drivers implemented by database vendors to translate JDBC calls into database-specific protocols.
2) The JDBC API has two layers - an application layer used by developers, and a driver layer implemented by vendors. There are four main interfaces (Driver, Connection, Statement, ResultSet) and the DriverManager class.
3) There are
Ibm db2 10.5 for linux, unix, and windows developing ado.net and ole db app...bupbechanhgmail
This document provides information about developing ADO.NET and OLE DB applications using IBM DB2 10.5. It discusses deploying .NET applications on Windows, supported development software, DB2 integration in Visual Studio, the IBM Data Server Provider for .NET, and the testconn command. It also covers the OLE DB .NET Data Provider, ODBC .NET Data Provider, IBM OLE DB Provider, and IBM Data Server Provider for .NET namespaces.
This document provides details about an Electricity Bill Management System project, including:
- The project aims to partially computerize processes at an Electricity Board like generating bills and maintaining customer records.
- Visual Basic 6.0 is used as the front-end and MS Access 2000 as the back-end database.
- The objectives are to efficiently store and retrieve customer, billing, and employee information to improve record keeping.
- Hardware requirements include a PC and printer, and the software environment uses VB6, Access, and Windows.
- VB6 is used for its visual interface design capabilities and event-driven programming. Access is used as a relational database.
Session 3962: Docking DevOps was originally presented at IBM InterConnect 2015 Feb. 22 - 26, 2016.
The presentation explores the values of Docker and containers and provides insight into areas that IBM has embraced the use of Docker within it's cloud strategy.
This talk, a case study in application deployment models, was given at IBM InterConnect 2017 in Las Vegas, NV on March 21, 2017 by Lin Sun & Phil Estes of IBM Cloud.
In this talk, Lin & Phil provided a background of IBM Bluemix compute offerings across Cloud Foundry, Containers + Kubernetes, and FaaS/serverless via OpenWhisk and then used a demo application to describe the tradeoffs between using the various deployment models and technology. The application is open source and available at https://github.com/estesp/flightassist
Everything you need to know about creating, managing and debugging Java applications on IBM Bluemix. This presentation covers the features the IBM WebSphere Application Server Liberty Buildpack provides to make Java development on the cloud easier. It also covers the Eclipse tooling support including remote debugging, incremental update, etc.
Mumbai Academics is Mumbai’s first dedicated Professional Training Center for Training with Spoke and hub model with Multiple verticles . The strong foundation of Mumbai Academics is laid by highly skilled and trained Professionals, carrying mission to provide industry level input to the freshers and highly skilled and trained Software Professionals/other professional to IT companies.
This document provides an overview of IBM Cognos BI, including its architecture and components. It discusses the 3-tier platform architecture with web, application, and data tiers separated by firewalls. The key components include Cognos Connection, Workspace, Query Studio, Analysis Studio, Report Studio, and Administration. IBM Cognos BI is a web-based integrated suite that transforms data into business intelligence for reporting, analysis, scorecarding, and event monitoring to make smart decisions.
Similar to Working with the Cognos BI Server Using the Greenplum Database -- Interoperability and Connectivity Configuration for Linux Users (20)
INDUSTRY-LEADING TECHNOLOGY FOR LONG TERM RETENTION OF BACKUPS IN THE CLOUDEMC
CloudBoost is a cloud-enabling solution from EMC
Facilitates secure, automatic, efficient data transfer to private and public clouds for Long-Term Retention (LTR) of backups. Seamlessly extends existing data protection solutions to elastic, resilient, scale-out cloud storage
Transforming Desktop Virtualization with Citrix XenDesktop and EMC XtremIOEMC
With EMC XtremIO all-flash array, improve
1) your competitive agility with real-time analytics & development
2) your infrastructure agility with elastic provisioning for performance & capacity
3) your TCO with 50% lower capex and opex and double the storage lifecycle.
• Citrix & EMC XtremIO: Better Together
• XtremIO Design Fundamentals for VDI
• Citrix XenDesktop & XtremIO
-- Image Management & Storage
-- Demonstrations
-- XtremIO XenDesktop Integration
EMC XtremIO and Citrix XenDesktop provide an optimized virtual desktop infrastructure solution. XtremIO's all-flash storage delivers high performance, scalability, and predictable low latency required for large VDI deployments. Its agile copy services and data reduction features help reduce storage costs. Joint demonstrations showed XtremIO supporting thousands of desktops with sub-millisecond response times during boot storms and login storms. A unique plug-in streamlines the automated deployment and management of large XenDesktop environments using XtremIO's advanced capabilities.
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES EMC
Explore findings from the EMC Forum IT Study and learn how cloud computing, social, mobile, and big data megatrends are shaping IT as a business driver globally.
Reference architecture with MIRANTIS OPENSTACK PLATFORM.The changes that are going on in IT with disruptions from technology, business and culture and so IT to solve the issues has to change from moving from traditional models to broker provider model.
The document discusses identity and access management challenges for retailers. It outlines security concerns retailers face, including the need to protect customer data and payment card information from cyber criminals. It then describes specific identity challenges retailers deal with related to compliance, access governance, and managing identity lifecycles. The document proposes using RSA Identity Management and Governance solutions to help retailers with access reviews, governing access through policies, and keeping compliant with regulations. Use cases are provided showing how IMG can help with challenges like point of sale monitoring, unowned accounts, seasonal workers, and operational issues.
Container-based technology has experienced a recent revival and is becoming adopted at an explosive rate. For those that are new to the conversation, containers offer a way to virtualize an operating system. This virtualization isolates processes, providing limited visibility and resource utilization to each, such that the processes appear to be running on separate machines. In short, allowing more applications to run on a single machine. Here is a brief timeline of key moments in container history.
This white paper provides an overview of EMC's data protection solutions for the data lake - an active repository to manage varied and complex Big Data workloads
This infographic highlights key stats and messages from the analyst report from J.Gold Associates that addresses the growing economic impact of mobile cybercrime and fraud.
Virtualization does not have to be expensive, cause downtime, or require specialized skills. In fact, virtualization can reduce hardware and energy costs by up to 50% and 80% respectively, accelerate provisioning time from weeks to hours, and improve average uptime and business response times. With proper training and resources, virtualization can be easier to manage than physical environments and save over $3,000 per year for each virtualized server workload through server consolidation.
An Intelligence Driven GRC model provides organizations with comprehensive visibility and context across their digital assets, processes, and relationships. It enables prioritization of risks based on their potential business impact and streamlines remediation. By collecting and analyzing data in real time, an Intelligence Driven GRC strategy reveals insights into critical risks and compliance issues and facilitates coordinated responses across security, risk management, and compliance functions.
The Trust Paradox: Access Management and Trust in an Insecure AgeEMC
This white paper discusses the results of a CIO UK survey on a“Trust Paradox,” defined as employees and business partners being both the weakest link in an organization’s security as well as trusted agents in achieving the company’s goals.
Emory's 2015 Technology Day conference brought together faculty, staff and students to discuss innovative uses of technology in teaching and research. Attendees learned about new tools and platforms through hands-on workshops and presentations by Emory experts. The conference highlighted how technology is enhancing collaboration and creativity across Emory's campus.
Data Science and Big Data Analytics Book from EMC Education ServicesEMC
This document provides information about data science and big data analytics. It discusses discovering, analyzing, visualizing and presenting data as key activities for data scientists. It also provides a website for further information on a book covering the tools and methods used by data scientists.
Using EMC VNX storage with VMware vSphereTechBookEMC
This document provides an overview of using EMC VNX storage with VMware vSphere. It covers topics such as VNX technology and management tools, installing vSphere on VNX, configuring storage access, provisioning storage, cloning virtual machines, backup and recovery options, data replication solutions, data migration, and monitoring. Configuration steps and best practices are also discussed.
2014 Cybercrime Roundup: The Year of the POS BreachEMC
This RSA fraud report summarizes cybercrime in 2014 and includes the number of phishing attacks globally, top hosting countries for phishing attacks, the financial impact of global fraud losses, and a monthly highlight.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Essentials of Automations: Exploring Attributes & Automation Parameters
White Paper

Working with the Cognos BI Server Using the Greenplum Database

Interoperability and Connectivity Configuration for Linux Users

Abstract

This white paper explains how the Cognos BI Server running in the Linux environment can be configured and used with a Greenplum database. Included in this paper are detailed instructions for configuration and connectivity verification.

March 2012
Executive summary
The correct functionality of the Greenplum database with the Cognos BI Server is
dependent on the configuration of an ODBC (Open Database Connectivity) driver
using the Greenplum Connectivity Pack. This white paper walks the reader through
the process of driver selection and installation, the configuration and validation of an
ODBC connection to Greenplum, the creation of a Cognos data connection, and
validation of that connection using Cognos Framework Manager and Query Studio.
This white paper is based on examples from Cognos 10.1.1 (release pack 1), Red Hat Linux 5.5, Greenplum 4.1.1, and the Greenplum Connectivity Pack greenplum-connectivity-4.1.1.0-build-4-RHEL5-x86_64.
Audience
This white paper is intended for customers, as well as EMC field and support personnel, who will be using the Cognos BI Server in the Linux environment with the Greenplum database. This white paper does not replace the Cognos documentation set supplied by IBM or the Greenplum documentation set supplied by EMC. It is expected that the reader has basic knowledge of the Cognos BI Server, ODBC driver configuration in the Linux environment, and the Greenplum database.
Organization of this paper
This paper covers the following topics:

- Overview of the Cognos BI system and components
- How the Cognos BI Server integrates with relational database management systems
- The installation, configuration, and verification of an ODBC connection using the Greenplum Connectivity Pack
- Validation of the connectivity between the Cognos BI Server and the Greenplum database using the Cognos components Framework Manager and Query Studio
Overview of the Cognos BI system and components
The IBM Cognos BI server is implemented in a multi-tier architecture. For descriptive purposes, this architecture can be thought of as three tiers. Note that Cognos 10 is a 32-bit application; therefore, 32-bit ODBC drivers must be used.
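Because Cognos 10 is a 32-bit application, it can be worth confirming the word size of a candidate ODBC driver library before configuring it. The following is a minimal sketch using the standard file utility; the driver path shown is a hypothetical example, so substitute the actual location of your installed driver:

```shell
# Report whether a shared library is a 32-bit ELF binary.
# Exit status 0 means 32-bit; non-zero means 64-bit, missing, or not ELF.
is_32bit() {
    file -b "$1" | grep -q 'ELF 32-bit'
}

# Hypothetical driver path -- point this at your installed ODBC driver.
driver_so="/usr/local/greenplum-connectivity/lib/psqlodbcw.so"
if is_32bit "$driver_so"; then
    echo "OK: $driver_so is a 32-bit library"
else
    echo "Check: $driver_so is missing or not a 32-bit library"
fi
```

A 64-bit driver will load-fail silently or produce hard-to-diagnose errors inside the 32-bit Cognos processes, so this quick check can save troubleshooting time.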
Tier 3 is the query database or data source. In this white paper, the query database is Greenplum, implemented either on a computing appliance (the Greenplum Data Computing Appliance) or in software-only mode.
Working with the Cognos BI Server in Linux with the Greenplum Database 3
Tier 2 contains the Web server where the IBM Cognos BI gateway, dispatcher, and content manager are hosted. The content store is a relational database that contains data that IBM Cognos needs to operate, such as report specifications, published models, and the packages that contain them.
Tier 1 contains user interfaces, including the Framework Manager modeling tool that drives query generation for IBM Cognos and the Cognos Connection user portal, which includes administrative tools and reporting tools such as Query Studio.
[Figure: Three-tier architecture. Tier 1: Framework Manager and Cognos Connection. Tier 2: Web server hosting the IBM Cognos BI Gateway and the content store. Tier 3: the Greenplum database, on a Greenplum DCA or a software-only installation, accessed via ODBC.]
How the Cognos BI Server integrates with relational database management systems
Because Cognos supports many databases and those databases offer various levels
of functionality, the Cognos BI Server must take into account which database it is
sending SQL commands to in order to get optimal use out of that database. Cognos
supports variable levels of SQL functionality by shipping individualized initialization
files for each supported query database. The initialization file renders the generic
Cognos SQL into the dialect of a particular supported database. The Cognos BI server
resolves which database initialization file to load by interrogating the
Working with the Cognos BI Server in Linux with the Greenplum Database 4
5. SQL_DBMS_NAME variable returned from the SQLGetInfo call to the ODBC driver. All
of this occurs automatically for the Cognos user.
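The resolution step above amounts to a lookup keyed on the DBMS name the driver reports. The sketch below illustrates the idea only; the initialization file names are placeholders, not the actual Cognos artifacts.

```python
# Illustrative sketch of dialect resolution: the SQL_DBMS_NAME string the
# ODBC driver returns from SQLGetInfo selects a per-database initialization
# file. The file names below are placeholders, not real Cognos file names.

DIALECT_FILES = {
    "Greenplum": "greenplum_dialect.ini",    # placeholder name
    "PostgreSQL": "postgres_dialect.ini",    # placeholder name
}

def resolve_init_file(sql_dbms_name: str,
                      default: str = "generic_dialect.ini") -> str:
    """Map a reported DBMS name to its initialization file, falling back
    to a generic dialect when the database is not individually supported."""
    return DIALECT_FILES.get(sql_dbms_name, default)
```

In the real server this resolution happens automatically once the connection is established; no user action is required.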
The installation, configuration, and verification of an ODBC connection
using the Greenplum Connectivity Pack
This section walks the reader through the steps required to select, install, configure
and verify an ODBC connection to Greenplum.
Install Required and Recommended Software
The correct Greenplum Connectivity Pack for a particular combination of Greenplum
and Cognos releases can be determined by referring to the IBM support site.
Searching for the string “cognos 10.1.1 supported environments” in a Web search
engine should direct the reader to the IBM support site. In the ODBC section of the
Cognos release software environments page, locate the Greenplum database to be
installed. A Greenplum Connectivity Pack version will be indicated for each supported
environment. For example, for Cognos 10.1.1, Greenplum 4.1.1 is supported via
Connectivity Pack 4.1.1 for Linux (x86).
Greenplum Connectivity Packs can be downloaded from EMC’s PowerLink web site or
from the Greenplum Community site. Please refer to the GPConnectUnix PDF in the
installation pack for detailed installation instructions. Briefly, the installation of the
GP connectivity tools consists of these steps:
1. Download the appropriate greenplum-connectivity-4.1.1.0-build-4-RHEL5-x86_64.bin
installer package for RedHat Linux 64-bit.
2. Unzip the installer:
unzip greenplum-connectivity-4.1.1.0-build-4-RHEL5-x86_64.bin.zip
3. Run the installer:
/bin/bash greenplum-connectivity-4.1.1.0-build-4-RHEL5-x86_64.bin. Accept the
license agreement and supply an absolute path for the tool installation.
4. As a convenience, a greenplum_connectivity_path.sh file is provided in the connectivity
tools installation directory following installation to set the environment variables
GPHOME_CLIENTS, PATH, and LD_LIBRARY_PATH.
The examples in this white paper use unixodbc-2.2.12 as the ODBC driver manager and
psqlodbc-08.04.0200 as the ODBC driver.
The Cognos BI server requires a 32-bit ODBC driver. The word size of the downloaded
ODBC driver can be confirmed using the Linux “file” command. For example,
-bash-3.2$ file <4.1.1-gp-conn-install-dir>/drivers/odbc/psqlodbc-
08.04.0200/unixodbc-2.2.12/psqlodbcw.so
psqlodbcw.so: ELF 32-bit LSB shared object, Intel 80386, version 1
(SYSV), not stripped
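The same word-size check can be performed programmatically by reading the ELF header directly: byte 4 (the EI_CLASS field) is 1 for a 32-bit object and 2 for a 64-bit one. This is a minimal sketch, exercised here against a synthetic header rather than a real driver file.

```python
import os
import tempfile

def elf_class(path: str) -> str:
    """Return '32-bit' or '64-bit' by inspecting a file's ELF header.

    Byte 4 of an ELF file (the EI_CLASS field) is 1 for ELFCLASS32
    and 2 for ELFCLASS64.
    """
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        raise ValueError(f"{path} is not an ELF file")
    return {1: "32-bit", 2: "64-bit"}[header[4]]

# Self-check against a synthetic 32-bit ELF header (not a real .so):
tmp = tempfile.NamedTemporaryFile(suffix=".so", delete=False)
tmp.write(b"\x7fELF\x01" + b"\x00" * 11)
tmp.close()
result = elf_class(tmp.name)
os.unlink(tmp.name)
```

For the Cognos BI server, the psqlodbcw.so driver must report 32-bit.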
In order to verify that all the shared objects required by the ODBC driver are properly identified
in the LD_LIBRARY_PATH, it is recommended that the user run the “ldd” command on the driver
shared object. The ldd command prints the shared libraries required by each program or
shared library specified on the command line. For example,
ldd <4.1.1-gp-conn-install-dir>/drivers/odbc/psqlodbc-08.04.0200/unixodbc-
2.2.12/psqlodbcw.so
linux-gate.so.1 => (0xffffe000)
libssl.so.0.9.8 => /my-gpconn-dir/lib/libssl.so.0.9.8 (0xf7f22000)
libpq.so.5 => /my-gpconn-dir/lib/libpq.so.5 (0xf7eeb000)
libpthread.so.0 => /lib/libpthread.so.0 (0xf7ec0000)
libodbcinst.so.1 => /my-gpconn-dir/drivers/odbc/psqlodbc-08.04.0200/unixodbc-
2.2.12/libodbcinst.so.1 (0xf7eab000)
libodbc.so.1 => /my-gpconn-dir/drivers/odbc/psqlodbc-08.04.0200/unixodbc-
2.2.12/libodbc.so.1 (0xf7e29000)
libc.so.6 => /lib/libc.so.6 (0xf7ce3000)
libcrypto.so.0.9.8 => /my-gpconn-dir/lib/libcrypto.so.0.9.8 (0xf7b9c000)
libdl.so.2 => /lib/libdl.so.2 (0xf7b98000)
libkrb5.so.3 => /my-gpconn-dir/lib/libkrb5.so.3 (0xf7b17000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0xf7ae5000)
libldap_r-2.3.so.0 => /my-gpconn-dir/lib/libldap_r-2.3.so.0 (0xf7a9f000)
/lib/ld-linux.so.2 (0x00a0e000)
libk5crypto.so.3 => /my-gpconn-dir/lib/libk5crypto.so.3 (0xf7a7b000)
libcom_err.so.3 => /my-gpconn-dir/lib/libcom_err.so.3 (0xf7a75000)
libkrb5support.so.0 => /my-gpconn-dir//lib/libkrb5support.so.0 (0xf7a6d000)
libresolv.so.2 => /lib/libresolv.so.2 (0xf7a5a000)
liblber-2.3.so.0 => /my-gpconn-dir/lib/liblber-2.3.so.0 (0xf7a4c000)
Each of the shared objects should be found. If any are not, the LD_LIBRARY_PATH
environment variable should be adjusted.
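When the ldd listing is long, scanning it by eye is error-prone. A small sketch like the following can flag unresolved entries (ldd prints “not found” for libraries it cannot locate); the sample output below is illustrative.

```python
def missing_libs(ldd_output: str) -> list:
    """Return the names of libraries that ldd reported as unresolved."""
    missing = []
    for line in ldd_output.splitlines():
        if "not found" in line:
            missing.append(line.split("=>")[0].strip())
    return missing

# Illustrative ldd output with one unresolved dependency:
sample = """\
libpq.so.5 => /my-gpconn-dir/lib/libpq.so.5 (0xf7eeb000)
libkrb5.so.3 => not found
libc.so.6 => /lib/libc.so.6 (0xf7ce3000)
"""
unresolved = missing_libs(sample)
```

Any name returned indicates a directory that is missing from LD_LIBRARY_PATH.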
It is recommended that the GP Client Tools from EMC be downloaded and installed.
The client tools can be downloaded from EMC’s PowerLink web site or the Greenplum
Community site. These tools will be used to verify connectivity between the Linux
machine where the Cognos BI Server will run and the target Greenplum database.
Please refer to the GPClientToolsUnix PDF in the Greenplum Database Client Tools for
Unix installation pack for detailed installation instructions. Briefly, the installation of
the GP client tools consists of these steps:
1. Download the appropriate greenplum-clients-4.1.1.0-build-4-RHEL5-x86_64.bin installer
package for RedHat Linux.
2. Unzip the installer:
unzip greenplum-clients-4.1.1.0-build-4-RHEL5-x86_64.bin.zip
3. Run the installer:
/bin/bash greenplum-clients-4.1.1.0-build-4-RHEL5-x86_64.bin. Accept the
license agreement and supply an absolute path for the tool installation.
4. As a convenience, a greenplum_clients_path.sh file is provided in the client tools installation
directory following installation to set the environment variables GPHOME_CLIENTS, PATH, and
LD_LIBRARY_PATH.
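The effect of sourcing the path script can be approximated as below; the bin/ and lib/ subdirectory names are assumptions made for illustration. In practice, source the provided greenplum_clients_path.sh rather than reproducing it.

```python
import os

def gp_client_env(install_dir: str, base_env=None) -> dict:
    """Sketch of the variables greenplum_clients_path.sh sets.

    The subdirectory layout (bin/, lib/) is assumed for illustration;
    the real script is authoritative.
    """
    base_env = dict(base_env or {})
    return {
        "GPHOME_CLIENTS": install_dir,
        "PATH": os.path.join(install_dir, "bin")
                + os.pathsep + base_env.get("PATH", ""),
        "LD_LIBRARY_PATH": os.path.join(install_dir, "lib")
                + os.pathsep + base_env.get("LD_LIBRARY_PATH", ""),
    }

env = gp_client_env("/opt/greenplum-clients", {"PATH": "/usr/bin"})
```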
Verify Connectivity between the Cognos Linux Machine and Greenplum
Before beginning the configuration of an ODBC connection for Cognos, it is
recommended that the connectivity between the Linux machine where the BI Server
will run and Greenplum be verified. If issues such as firewall restrictions exist, they
will be exposed by this verification step. The psql command line tool included in the
Greenplum Client Tools installation will be used to verify connectivity. At a Unix
command prompt, simply invoke the psql command line tool supplying the
Greenplum master database host name, port number, user name, and password. The
psql command connection options are:
Connection options:
-h, --host=HOSTNAME database server host or socket directory (default:
"local socket")
-p, --port=PORT database server port (default: "5432")
-U, --username=USERNAME database user name
-d, --dbname=DBNAME database name
A successful connection is followed by a prompt from psql that includes the database
name. For example, to access the cognos_samples database, the following command
would be issued supplying the correct host name, username, and password.
-bash-3.2$ psql -h HOSTNAME -U USER -d cognos_samples -p 5432
psql (8.2.15)
Type "help" for help.
cognos_samples=#
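The same check can be scripted, for example from a deployment pipeline. The sketch below only builds the psql invocation; the host name is a placeholder, and actually running the command requires the client tools on PATH and a reachable Greenplum master.

```python
import subprocess  # used only in the commented-out invocation below

def psql_argv(host: str, port: int, user: str, dbname: str) -> list:
    """Build the psql command line used for the connectivity check."""
    return ["psql", "-h", host, "-p", str(port),
            "-U", user, "-d", dbname, "-c", "SELECT version();"]

# Placeholder host name; substitute the real Greenplum master host.
argv = psql_argv("mdw.example.com", 5432, "gpadmin", "cognos_samples")
# subprocess.run(argv, check=True, timeout=30)  # run on a configured host
```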
Some users may be inclined to confirm the connectivity between the Cognos BI host
and the Greenplum database using the Linux-installed isql program. Caution should
be exercised during this test because there is the possibility of a word-size mismatch
between a 64-bit isql program and the 32-bit Greenplum Connectivity pack ODBC
driver, resulting in a false-negative result for this connectivity test. It is
recommended that the “-v” option be supplied when invoking isql in order to expose
“wrong ELF class” errors.
Configure an ODBC DSN for Greenplum
The data source name (DSN) that is used to connect to Greenplum is specified in the odbc.ini
file. This file may be placed wherever it is convenient. The environment variable ODBCINI will
be used to inform Cognos where to find it. To get to the Greenplum database, the following are
needed:
The database name
The host name or IP address of the GPDB master server
The port number used by the GPDB, default 5432.
The username to log in to the master server
The password of the login user
There are two sections of interest in the odbc.ini file: the ODBC Data Sources section,
followed by one section for each DSN defined. These are described by the in-line
comments below.
# ODBC Data Sources lists the DSNs to be defined
[ODBC Data Sources]
Greenplum=PostgreSQL driver for Greenplum
# DSN for Greenplum points to cognos_samples database
[Greenplum]
Description = PostgreSQL driver for Greenplum
# the absolute location for ODBC driver to be used
Driver = /my-gpconn-install-dir/drivers/odbc/psqlodbc-
08.04.0200/unixodbc-2.2.12/psqlodbcw.so
# tracing may be useful during testing but should be
# turned off after moving to production
Trace = 0
TraceFile = /tmp/odbctraces_dbtm
Debug = 0
DebugFile = /tmp/odbcdebug
# the name of the target database
Database = cognos_samples
# the host name or IP address, user and password of the target data
server
Servername = xx.x.xx.xxx
UserName = user-name
Password = password
# default port number for Greenplum
Port = 5432
ReadOnly = No
RowVersioning = No
# recommended size
MaxLongVarcharSize = 2048
DisallowPremature = No
# provides some efficiency in query reuse
UseServerSidePrepare = Yes
ShowSystemTables = Yes
ShowOidColumn = No
FakeOidIndex = No
# allows for cursor fetch of result sets, which avoids
# out-of-memory errors in the Cognos BI server
useDeclareFetch = 1
Fetch = 4096
UpdatableCursors = Yes
# required version
Protocol = 7.4
# recommended sizes
CacheSize = 75000
MaxVarcharSize = 1024
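Before handing the DSN to Cognos, the odbc.ini entry can be sanity-checked programmatically. This is a minimal sketch using Python's configparser (odbc.ini follows plain INI syntax); the set of required keys and the sample values are assumptions based on the entries above.

```python
import configparser

# Assumed minimum set of keys a Greenplum DSN needs; extend as required.
REQUIRED_KEYS = {"Driver", "Database", "Servername", "Port"}

def missing_dsn_keys(odbc_ini_text: str, dsn: str) -> set:
    """Return the required keys absent from the given DSN section."""
    cfg = configparser.ConfigParser()
    cfg.read_string(odbc_ini_text)
    present = {k.lower() for k in cfg[dsn]}
    return {k for k in REQUIRED_KEYS if k.lower() not in present}

# Abbreviated illustrative odbc.ini with placeholder path and address:
sample = """\
[ODBC Data Sources]
Greenplum = PostgreSQL driver for Greenplum

[Greenplum]
Driver = /my-gpconn-install-dir/drivers/odbc/psqlodbcw.so
Database = cognos_samples
Servername = 10.0.0.1
Port = 5432
"""
missing = missing_dsn_keys(sample, "Greenplum")
```

An empty result means the DSN defines every key in the checked set.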
Validation of the connectivity between the Cognos BI Server and the
Greenplum database
In order to validate the connectivity end-to-end between Cognos and Greenplum, a
Cognos data source connection will be created and used in a small package created
in Framework Manager and exercised in Query Studio from Cognos Connection.
Create a Cognos Data Connection
From the Cognos Connection portal, launch IBM Cognos Administration. Select
Configuration > Data Source Connections > New Data Source. Enter a Data Source
Name and Description and select the Next button.
For a Type, pick ODBC from the pull-down and select the Next button. For the ODBC
data source, enter the ODBC DSN created above, in this example “Greenplum”.
Supply the User ID and Password in the Signons section.
At the bottom of the page, select Test the Connection. Verify connectivity to Greenplum
through Cognos and ODBC by selecting the Test button.
The next page should show the connection status as Succeeded. Complete the
Cognos data connection by selecting Close twice, followed by Finish.
Create Project in Framework Manager
In Windows, start Framework Manager (Start -> Programs -> IBM Cognos -> IBM Cognos
Framework Manager). From the Welcome page, click Create a new project. In the New
Project page, specify a name (for example, Greenplum) and a location for the project,
and click OK. In the Select Language page, click the design language for
the project.
Once the Metadata Wizard appears, select the Cognos Data Source created above, in
this case Greenplum, and then select the Next button. The scope of Greenplum
objects to be imported by the Wizard can be controlled in the Select Objects screen.
Assuming the IBM Cognos Samples database has been loaded into Greenplum, select
the branch table in Great Outdoors Sales (gosales) schema. Select the Next button to
continue.
It is important to note that although Greenplum does not enforce referential integrity,
users should include foreign key constraints during data migration since they are the
source of information for the Metadata Wizard to build relationships between tables
in Query Subjects. Select Import followed by Finish to complete the metadata import
process.
Create a Package
In order to make the Query Subject just created available for reporting in the Cognos
Connection, a Package must be created and published. In Framework Manager select
Create under Packages.
Give the Package a name, in this case Greenplum, and select the Next button. Select
the Next button and include the Greenplum function set in the Create Package screen.
Select the Finish button and specify the IBM Cognos 10 Content Store as publishing
location in the Publish Wizard – Select Location Type. Select defaults for security and
publish. Exit the Wizard by selecting the Finish button.
Create a Report in Query Studio
The final step in the end-to-end validation is to create a report in Query Studio. From
the Cognos Connection portal, launch Query Studio. In the Insert Data menu select
branch_code, address1, address2, and city from the branch table. The appearance of
data demonstrates a successful end-to-end validation of the Cognos to Greenplum
connectivity.
Conclusion
As stated at the outset of this white paper, the correct functionality of the Greenplum
database with the Cognos BI Server is dependent on the configuration of an ODBC
driver using the Greenplum Connectivity Pack. This white paper walked the reader
through the process of driver selection, installation, the configuration and validation
of an ODBC connection to Greenplum, the creation of a Cognos data connection to a
Greenplum database, and validation of that connection using Cognos Framework
Manager and Query Studio.