Presented at SHARE San Antonio 2016
Files are old technology, right? Now that everything is online and in 'the cloud', do people really still use them and base their business on them?
It seems they do. Not just one or two people: many of our customers use files to transmit data, and not just for batch processing, but for online and dynamic processing too.
There are many aspects to file processing: transfer, size, data format, interaction with other enterprise systems, datasets, FTP, SFTP, and so on.
Come along to this session to hear about the file transfer and processing capabilities of MQ Managed File Transfer and IBM Integration Bus.
We'll touch on how MQ Managed File Transfer (MFT) reliably delivers your files around your enterprise.
Then we'll look at how IBM Integration Bus (IIB) can be used to apply its integration and data transformation capabilities to files and datasets, covering both locally and remotely accessed files, processing large files in record format, and integration with the enterprise file transfer systems MQ MFT and IBM Sterling Connect:Direct.
Nobody Uses Files Any More Do They? New Technologies for Old Technology, File Processing in MQ MFT and IIB
1. Nobody Uses Files Any More Do They?
New Technologies for Old Technology,
File Processing in MQ MFT and IIB
Tom Leend, IBM UK (tom.leend@uk.ibm.com)
Rob Convery, IBM UK (convery@uk.ibm.com)
Session 18855
Wednesday 2nd March 2016
2. How do most organizations move files today?
Most organizations rely on a mix of homegrown code, several legacy products and different technologies
… and even people!
• FTP
– Typically File Transfer Protocol (FTP) is combined with writing and maintaining homegrown code to address its limitations
• Why is FTP use so widespread?
– FTP is widely available – Lowest common denominator
– Promises a quick fix – repent at leisure
– Simple concepts – low technical skills needed to get started
– FTP products seem “free”, simple, intuitive and ubiquitous
• Legacy File Transfer products
– A combination of products often used to provide silo solutions
– Often based on proprietary versions of FTP protocol
– Can’t transport other forms of data besides files
– Usually well integrated with B2B but rarely able to work with the rest of the IT infrastructure – especially with SOA
• People
– From IT Staff to Business staff and even Security Personnel
– Using a combination of email, fax, phone, mail, memory keys…
3. Shortcomings of Basic FTP
• Limited Reliability
– Unreliable delivery – lacking checkpoint restart – files can be lost
– Transfers can terminate without notification or any record – corrupt or partial files can be accidentally used
– File data can be unusable after transfer – lack of character set conversion
• Limited Security
– Often usernames and passwords are sent with the file – as plain text!
– Privacy, authentication and encryption often not available
– Non-repudiation often lacking
• Limited visibility and traceability
– Transfers cannot be monitored and managed centrally or remotely
– Logging capabilities may be limited and may only record transfers between directly connected systems
– Cannot track the entire journey of files – not just from one machine to the next but from the start of its journey to its final destination
• Limited Flexibility
– Changes to file transfers often require updates to many FTP scripts that are typically scattered across machines and require platform-specific skills to alter
– All resources usually have to be available concurrently
– Often only one FTP transfer can run at a time
– Typically transfers cannot be prioritized
4. What is MQ Managed File Transfer?
5
Auditable Full logging and auditing of file transfers + archive audit data to a database
Reliable Checkpoint restart. Exploits solid reliability of IBM MQ
Secure Protects file data in transit using SSL. Provides end-to-end encryption using AMS
Automated Providing scheduling and file watching capabilities for event-driven transfers
Centralized Provides centralized monitoring and deployment of file transfer activities
Any file size Efficiently handles anything from bytes to terabytes
Integrated Integrates with IIB, WSRR, ITCAMs for Apps, DataPower + Connect:Direct
Cost Effective Reuses investment in IBM MQ. Wide range of support (inc. z/OS and IBM i)
MQ Managed File Transfer
5. A consolidated transport for both files and messages
• Traditional approaches to file transfer result in parallel
infrastructures
– One for files – typically built on FTP
– One for application messaging – based on IBM MQ, or
similar
• High degree of duplication in creating and maintaining
the two infrastructures
• Managed File Transfer reuses the MQ network for
managed file transfer and yields:
– Operational savings and simplification
– Reduced administration effort
– Reduced skills requirements and maintenance
6
(Diagram: separate file transfer and application messaging infrastructures consolidated into a single transport for messages and files)
6. Components of a typical MQ MFT network
• Agents
– The endpoints for managed file transfer
operations
• Commands
– Send instructions to agents
• Log database or file
– A historical record of file transfers
• Coordination queue manager
– Gathers together file transfer events
7
(Diagram: applications exchanging file data through agents over MQ, with commands, a “coordination” queue manager, and a log database or file)
7. Architecture – MQ Queue Manager Roles
• Three key roles played by MQ queue managers:
– Agent queue manager role (one to many per topology)
• Hosts the queues required by the agent to process file transfers
– Command queue manager role (one to many per topology)
• Used to communicate with agents and get information from the coordination queue
manager
– Coordination queue manager role (one qmgr per topology)
• Central collection point for MFT activity information (transfer logs etc)
• Can use the same queue manager for all roles or have separate ones
– Latter option is more common
8
11. Agents
• Act as the end points for file transfers
• Long running MQ applications that transfer files by
splitting them into MQ messages
– Efficient transfer protocol avoids excessive use of
MQ log space or messages building up on queues
• Multi-threaded file transfers
– Can both send and receive multiple files at the
same time
• Generate a log of file transfer activities which is
sent to the “coordination queue manager”
– This can be used for audit purposes
• Associated with one particular queue manager
(any supported version)
– Agent state on queues
12
12. Notes on: Agents
MFT agent processes define the end-points for file transfer. That is to say that if you want to
move files off a machine, or onto a machine – that machine would typically need to be
running an agent process
Agent processes are long running MQ applications that oversee the process of moving file
data in a managed way. Each agent monitors a ‘command’ queue waiting for messages
which instruct it to carry out work, for example file transfers
13
NOTES
13. Notes on: Agents
The MFT agent process needs connectivity to an MQ queue manager to do useful work. It
can connect either directly to a queue manager running on the same system, or as an MQ
client using an embedded version of the MQ client library (which is kept completely separate
from any other MQ client libraries that may or may not already have been installed onto the
system)*
– Each agent requires its own set of MQ queues – which means that an agent is tied to
the queue manager where these queues are defined
– However – one queue manager can support multiple agents
* Note: availability of direct (bindings) connectivity or MQ client based connectivity is dependent on the version of MQ MFT in
use
• MQ Managed File Transfer on z/OS does not support the MQ client style of connectivity
• Managed File Transfer on distributed platforms has a ‘server’ and ‘client’ offering. The agent component of the ‘client’ offering is restricted to only
supporting MQ client style connectivity. The agent component of the ‘server’ offering may be used with either connectivity option
14
NOTES
14. Commands
• Send instructions to agents and display
information about agent configuration
– Via MQ messages
• Many implementations of commands:
– MQ Explorer plug-in
– Command line programs
– Open scripting language
– JCL
– Documented interface to program to
15
15. Notes on: Commands
“Commands” is the name we have given to anything which instructs the MFT agent. As
described on the previous slide, there are a wide range of command implementations
including graphical and non-graphical command-line based commands
Commands instruct the MFT agent by sending it messages. The messages themselves use
a documented format which can easily be incorporated into your own applications
The commands that are supplied with MFT can connect either as an MQ client (again based
on embedded client libraries) or directly to a queue manager located on the same system
16
NOTES
16. Log Database & File
• Keeps a historical account of transfers
that have taken place
– Who, where, when… etc.
• Implemented by the ‘logger’ component
which connects to the coordination
queue manager
– Stand alone application
• Can log to database or file
– Or JEE application
• Can log to database only
17
17. Notes on: Log Database
Managed File Transfer can record a historical account of file transfers to a database using
the ‘database logger’ component.
– This component is available to run as a stand alone process or as a JEE application
The information used to populate the log database is generated, as MQ messages, by the
MFT agents participating in file transfers. This is routed to a collection point in the MQ
network, referred to as the ‘coordination queue manager’ (see next slides). The database
logger component subscribes to the messages produced by agents and reliably enters them
into a database.
18
NOTES
18. Coordination Queue Manager
• Gathers together information about
events in the file transfer network
• Not a single point of failure
– Can be made highly available
– Messages stored + forwarded
• MQ v7 publish / subscribe
– Allows multiple log databases,
command installs
– Documented interface
19
19. Notes on: Coordination queue manager
The coordination queue manager is used as the gathering point for all the information about
file transfers taking place between a collection of MFT agents
The queue manager uses publish/subscribe (so it must be MQ version 7+) to distribute this
information to “interested parties” which typically include:
– The IBM MQ Explorer plug-in, which provides a graphical overview of MFT activity
– The database logger component, which archives the information to a database
– Some of the command line utilities which are part of the MFT product
The format used to publish information is documented and can be used to develop 3rd party
applications which process this data
Although there is only a single ‘coordination’ queue manager for a given collection of agents
it does not represent a point of failure:
– MQ stores and forwards messages to the coordination queue manager when it is available – so if the
coordination queue manager is temporarily unavailable no log data is lost
– The ‘coordination’ queue manager can be made highly available using standard HA techniques such
as MQ multi-instance queue managers or via an HA product such as PowerHA
20
NOTES
20. Example usage of monitoring & program execution
21
(Diagram: an existing application, a source MQ MFT agent, a destination MQ MFT agent, and a second existing application)
1. Application writes file to file system
2. Agent monitors file system, spots arrival of file and, based on rules, transfers the file
3. MFT transports file to destination
4. At destination MFT writes file to file system
5. MFT can also start another application to process the file
21. Notes on: Monitoring & Program Execution
Resource monitors work in two stages:
1. Poll a resource (in this case the file system – but as we’ll see later the ‘resource’ can also be an MQ
queue) and identify that a condition has been met (perhaps the appearance of a file matching a
particular pattern)
2. Perform an action (which can include starting a managed file transfer, or running a script), optionally
propagating information about the resource (for example the name of the file triggered on) into the
action
As shown on the previous slide, resource monitors are typically used to provide integration
with an existing system without needing to make changes to the system
Another function of MFT, used for integration with existing systems is the ability to execute
programs or scripts both on the source or destination systems for a file transfer. This can
be used to:
– Start a program, on the source system, which generates the file data to be transferred prior to
performing the managed file transfer
– Start, or notify, a program on the destination system when the file data has been transferred –
allowing it to process the data without having to poll
22
NOTES
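The two-stage poll/act pattern described above can be sketched in Python. This is an illustrative stand-in for an MFT resource monitor, not the product code; the directory, pattern and action are placeholders:

```python
import fnmatch
import os
import time

def monitor_directory(directory, pattern, action, polls=3, interval=1.0):
    """Stage 1: poll a resource (here, a directory) for files matching a pattern.
    Stage 2: perform an action for each new match (e.g. start a managed transfer),
    propagating the triggering file name into the action."""
    seen = set()
    for _ in range(polls):
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if fnmatch.fnmatch(name, pattern) and path not in seen:
                seen.add(path)
                action(path)  # e.g. submit a transfer request, or run a script
        time.sleep(interval)
```

A real monitor would also apply trigger conditions such as file size stability before acting; that detail is omitted here.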
22. Protocol bridge agents
• Support for transferring files located on FTP, FTPS and SFTP servers
– The source or destination for a transfer can be an FTP, FTPS or an SFTP server
• Enables incremental modernization of FTP-based home-grown solutions
– Provides auditability of transfers across FTP/FTPS/SFTP to central audit log
• Ensures reliability of transfers across FTP/FTPS/SFTP with checkpoint restart
– Fully integrated into graphical, command line and XML scripting interfaces
– Just looks like another MFT agent…
23
(Diagram: a protocol bridge agent sits in the MQ network alongside ordinary agents, acting as an FTP/SFTP client to external FTP/SFTP servers; audit information flows back into the MQ network)
Files exchanged between MFT and FTP/FTPS/SFTP
23. Using MQ MFT and the MQ Appliance
The MQ Appliance can be the queue manager, providing both regular MQ queue manager capabilities and coordination queue manager capabilities
– No other MQ deployment needed
No files are stored on the appliance
– No MQ MFT agent is needed on the appliance to support this
Highly available and robust
Secure, with MQ AMS entitlement built in
– Content encrypted based on policies
(Diagram: applications exchanging file data via agents and commands, with the “coordination” queue manager running on the MQ Appliance)
24. IBM Integration Bus Nodes
FTEInput node
– Build flows that accept file transfers from the MQ MFT
network
FTEOutput node
– Build flows that are designed to send a file across an MQ
MFT network
25
(Diagram: FTEInput and FTEOutput nodes in a message flow, inside an execution group in IIB, exchanging files over MQ with MQ MFT agents)
25. Creating and running an agent in Integration Bus
• Install
– FTE code is installed as part of Integration Bus install. No need for separate install
• Deployment
– Agents run in the integration server JVMs. One agent per integration server
– Coordinating queue manager defined as an integration server property
– Agent name is derived from integration node and integration server name:
• <integration node name>.<integration server name>
– FTE Agents are created automatically by integration node including required queues
• Starting/stopping
– Agent started when first FTE flow node using it is deployed
– Agent stopped when last FTE flow node using it is un-deployed or stopped
26. The core FTE product is installed as part of the normal IIB install. No additional components need to be installed
apart from the MQExplorer add on which can be used to monitor transfers. It is not required for the transfers to
work but is useful for monitoring what is happening with transfers.
Each integration server can be configured to run an FTE agent. This is done by setting the coordination queue
manager property either using MBExplorer or the mqsichangeproperties command.
The agent name is derived by concatenating the integration node and integration server name together. If the
name is too long to be a valid agent name then it is truncated. If it contains non-valid characters (any characters
not supported in MQ queue names) an error is written to the local system log.
All the configuration for the agent is created when the coordination queue manager is set and also deleted when
it is unset.
As well as creating the required config files, all the required queues are also created.
The actual agent is started when the first FTE flow node using it is started and stopped when the last FTE flow
node using it is stopped.
If a node is deployed without setting the coordination queue manager then an agent will be created at
deployment time using the integration node queue manager as the coordination queue manager. This is a
temporary agent which is deleted when the last FTE flow node using it has been stopped. It is recommended
that the integration server property is set even when the integration node queue manager is the coordination
queue manager.
Notes on: Creating and running an agent in IIB
NOTES
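The agent-naming rule described above can be sketched as follows. This is illustrative only: the 28-character cap and the exact character validation are assumptions for the example, not the documented limits.

```python
import re

# Characters accepted in MQ object names (letters, digits, period, slash, underscore, percent)
MQ_NAME = re.compile(r'^[A-Za-z0-9._/%]+$')

def derive_agent_name(node_name, server_name, max_len=28):
    """Concatenate <integration node>.<integration server>, truncating if too long.
    Invalid characters are rejected (IIB writes an error to the system log instead).
    The 28-character cap here is an assumption for illustration."""
    name = f"{node_name}.{server_name}"
    if not MQ_NAME.match(name):
        raise ValueError(f"agent name contains characters not valid in MQ queue names: {name}")
    return name[:max_len]
```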
27. The administration of the agent is very simple with almost all tasks being done automatically by
the integration server.
All the config files created are identical to those used in a standalone agent.
Logging by the agent is written to the standard integration node user and service trace but the
log file in the config directory is still written to as well.
A location under the workpath is used as the default transfer directory and also to stage files
before they are transferred. To separate files being transferred by different nodes a directory
structure is created which includes the flow name and node name.
As with the file nodes, an mqsitransmit directory is used to build files up before transferring them.
The names of files in these directories are mangled to stop clashes of files with the same name
which are going to be transferred to different agents or different directories. It is recommended
not to delete files from the transmit directory unless the message flow using them has been
stopped and all files are deleted.
The FTE MQExplorer plugin is very useful to monitor the progress of any transfers which have
been done.
Notes on: Administering an agent in IIB
NOTES
28. Accessing agent from a message flow
• Messages received and sent from agent using the following flow nodes:
– FTEInput node: receives any file transferred to integration server agent
– FTEOutput node: constructs a file and sends a request to the integration server agent
to transfer a file
• FTE nodes based on the existing file nodes allowing:
– Record based processing
– Stream parsing for large file support
– File filtering
– Can be used with all other flow nodes
29. FTE Input Node
• Consistent with file input node but makes full use of the power of MQ MFT
• Timely
– The FTE node is notified by the FTE agent when an inbound transfer is complete
– The node processes the files in the transfer immediately.
– Each file is processed independently
– Can leave file unchanged after processing and just delete notification message
• Metadata
– Metadata associated with the transfer is sent with the notification
– Includes user defined metadata
• Filter
– The node by default receives all transfers
– Can specify which files to receive using a filter
30. The FTEInput node has all the core function of the standard File input node but has been
enhanced to make use of the powerful function provided by MQ MFT.
It does not require any polling mechanism to scan directories because it is triggered directly by the
embedded agent when a file has arrived and the transfer is complete.
It processes each file in a transfer separately and can process each file in parallel.
As well as receiving the data from the transfer it also receives all the metadata associated with
the transfer. This includes lots of information from MQ MFT but can also include user defined
data.
Each FTE Input node can specify a filter of which files it wants to process. By default it
processes all files. If two nodes both have a filter which matches a file then only one will be
given it to process. By default, if no filter expression is given, the node will accept any
transferred file.
The file name filter accepts wild cards but the directory can either be blank (accept any files) or a
string that must match exactly. Relative paths are allowed which are taken relative to the default
transfer directory.
Notes on: FTEInput Node
NOTES
32. FTE Output Node
• Consistent with file output node but makes full use of the power of MQ MFT
• Transfer details
– The destination agent, directory etc are defined on the node
– All details can be overridden using Local environment
– Support also for wild card file names
• Staging
– Uses a local staging directory to build up a file record by record for transfer
– Once a file is finished a request is sent to the FTE Agent to transfer the file
• Metadata
– User data – if provided in the Local Environment - is sent with the transfer
– Other metadata is generated by the FTE Agent or by the integration node.
– Ant scripts – give details of ant scripts to run on remote agent
33. The FTEOutput node makes use of the core function provided by File nodes.
The main properties on the node are details of where to transfer the files to. These can all be
overridden using the local environment.
The main difference with an FTEOutput node is that the destination the file is to be written to is
not local to the integration node file system but instead is a file system on a remote agent. The
node first writes it to a staging directory on the local file system and then sends a request to the
embedded agent to transfer it.
Files are built up in the mqsitransmit directory just like with a file node before being moved to the
final file name once the file is complete and the transfer request is sent to the FTE agent. The
name in the staging area is the same as the name the file will be transferred to, unless a file with
that name already exists on the local file system. If it exists then a number is appended to the
file name. The file name it is transferred to on the remote system is not affected by this and will
not have the number appended.
Notes on: FTEOutput Node
NOTES
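The numbering rule for staging-area name clashes might look like this sketch (illustrative, not the agent's actual algorithm):

```python
import os

def staging_file_name(staging_dir, name):
    """Pick a name in the local staging area: append a number only when the name
    already exists locally; the remote transfer name is unaffected."""
    candidate = os.path.join(staging_dir, name)
    n = 0
    while os.path.exists(candidate):
        n += 1
        candidate = os.path.join(staging_dir, f"{name}.{n}")
    return candidate
```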
35. File integration in IBM Integration Bus
(Diagram: file integration options in Integration Bus)
• File processing nodes
• FTP processing nodes
• sFTP processing nodes
• WTX
• MQ MFT FTE nodes
• IBM Sterling Connect:Direct nodes
* Support pac
37. The FileInput nodes reads data from a file and triggers the start
of processing in the flow. Three main areas will be covered in
detail:
The mechanism used to detect a file is ready to be processed
The splitting of the file up into records
The archiving or deleting of the file once processing has been finished
Notes on: FileInput Node
NOTES
38. FileInput node – Operation
• Scans a pre-configured directory (relative or absolute)
for files that match a given specification
• Locked files are ignored until they become unlocked
(Diagram: directory /home/hursley/messages containing F1.txt, F2.xml and F3.txt)
39. FileInput node – Record Detection
• Handling options (on the Records and Elements tab):
– Whole file
– Fixed Length *
– Delimited *
– Parsed Record Sequence *
• Note * - results in separate records - message flow is invoked multiple times
• Only requires one record to be in memory at any one time
– Allows very large files (Gigabyte) to be streamed efficiently
– Streaming possible with DFDL, MRM (CWF and TDS) and XMLNSC parsers only
• If connected, ‘End Of Data’ terminal is triggered at end of file
– Empty BLOB message and a LocalEnvironment.File structure
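The fixed-length and delimited record-detection options, which split a file while keeping only one record in memory at a time, can be approximated in Python with generators. This is a sketch of the streaming idea, not the IIB implementation:

```python
def fixed_length_records(path, length):
    """Stream fixed-length records from a file, one record in memory at a time."""
    with open(path, 'rb') as f:
        while True:
            rec = f.read(length)
            if not rec:
                return
            yield rec

def delimited_records(path, delimiter=b'\n', chunk=4096):
    """Stream delimiter-separated records without loading the whole file."""
    buf = b''
    with open(path, 'rb') as f:
        while True:
            data = f.read(chunk)
            if not data:
                if buf:
                    yield buf  # final record with no trailing delimiter
                return
            buf += data
            while delimiter in buf:
                rec, buf = buf.split(delimiter, 1)
                yield rec
```

Because each generator yields one record at a time, very large files can be processed with bounded memory, which is the point of the streaming options above.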
41. Record Detection Examples 2
• With input file:
<cities><city name="Boston" rating="10"/></cities>
<cities><city name="San Francisco" rating="9"/></cities>
<cities><city name="Seattle" rating="8"/></cities>
• Parsed record sequence
– When the parser is set to XMLNSC, this propagates three XML messages:
<cities><city name="Boston" rating="10"/></cities>
<cities><city name="San Francisco" rating="9"/></cities>
<cities><city name="Seattle" rating="8"/></cities>
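One way to picture what a parser-driven record split does is a streaming XML parse that yields one record at a time. This Python sketch uses the standard-library `iterparse`; it approximates the idea, it is not the XMLNSC parser:

```python
import io
import xml.etree.ElementTree as ET

def parsed_record_sequence(stream, record_tag):
    """Let the parser find record boundaries, yielding one record at a time
    so only a single record is held in memory."""
    for _, elem in ET.iterparse(stream, events=('end',)):
        if elem.tag == record_tag:
            yield ET.tostring(elem, encoding='unicode')
            elem.clear()  # release the record so memory stays bounded

doc = '<cities><city name="Boston" rating="10"/><city name="Seattle" rating="8"/></cities>'
records = list(parsed_record_sequence(io.StringIO(doc), 'city'))
```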
42. FileInput node – Archiving
• Upon successful processing, file is either deleted or moved to an mqsiarchive
subdirectory
• Dealing with files with duplicate names:
– Option to include timestamp in archived filename
– Option to replace any existing file
(Diagram: /home/hursley/messages containing F1.txt, F2.xml and F3.txt, with an mqsiarchive subdirectory)
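The archive options can be illustrated with a small helper. This is a sketch only; the directory layout mirrors the slide, but the timestamp format and error handling are assumptions for the example:

```python
import os
import shutil
import time

def archive_file(path, replace=False, timestamp=False):
    """Move a processed file into an mqsiarchive subdirectory; optionally add a
    timestamp to the name (to avoid duplicate-name clashes) or replace an
    existing archived copy."""
    directory, name = os.path.split(path)
    archive_dir = os.path.join(directory, 'mqsiarchive')
    os.makedirs(archive_dir, exist_ok=True)
    if timestamp:
        base, ext = os.path.splitext(name)
        name = f"{base}.{int(time.time())}{ext}"
    dest = os.path.join(archive_dir, name)
    if os.path.exists(dest) and not replace:
        raise FileExistsError(dest)
    shutil.move(path, dest)
    return dest
```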
43. FileInput node – using FTP and SFTP
• When active, FTP settings cause the node to periodically transfer files on a
remote server to the local directory for input.
• Security Identity ‘UserMapping’ set using runtime command:
– mqsisetdbparms BROKER -n ftp::UserMapping -u USER -p PASS
45. The file output node is used to create and write to a file anywhere in
the middle of a flow:
How it decides which file to create and where
How a file is created using a series of records appended
What happens if the file it attempts to create already exists
Notes on: FileOutput Node
NOTES
46. • In the simplest scenario, the received
message body is written to the pre-configured
file:
• When writing to the output file, the wildcard (if
present) is replaced with the value of
LocalEnvironment.Wildcard.WildcardMatch
– Allows you to preserve elements of a
filename during processing
FileOutput – Writing files
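The wildcard substitution described above can be sketched in one line of Python (illustrative; the real node takes the value from LocalEnvironment.Wildcard.WildcardMatch):

```python
def output_file_name(pattern, wildcard_match):
    """Substitute the wildcard in the configured output file pattern with the
    wildcard match captured when the input file name was matched."""
    return pattern.replace('*', wildcard_match, 1)
```

For example, if the input file F1.txt matched the pattern F*.txt, the wildcard match is "1", and an output pattern out*.csv yields out1.csv, preserving that element of the filename.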
47. Appending Records
• “Records and Elements” tab defines how multiple writes to the same file are handled
• Record definition options:
– Record is Whole File – close file automatically after first write
– Record is Unmodified Data – the message bit-stream appended to file
– Record is Fixed Length Data – specify length in bytes and padding character
– Record is Delimited Data – specify delimiter and infix/postfix option
• Unless “Record is Whole File” is selected, the file will be closed when the “Finish File” terminal is triggered
– The Finish File message is propagated on to the FileOutput’s “End Of Data” terminal
• The mqsitransit subdirectory holds all files that have not yet been closed
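The fixed-length and delimited record options can be illustrated with two small helpers. These sketch the padding and infix/postfix behaviour; they are not the node's actual code:

```python
def append_fixed(f, record, length, pad=b' '):
    """'Record is Fixed Length Data': pad (or truncate) each record to the set length."""
    f.write(record[:length].ljust(length, pad))

def append_delimited(f, records, delimiter=b'\n', postfix=True):
    """'Record is Delimited Data': postfix puts the delimiter after every record,
    infix only between records."""
    for i, rec in enumerate(records):
        if i and not postfix:
            f.write(delimiter)
        f.write(rec)
        if postfix:
            f.write(delimiter)
```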
48. Options if the file already exists
• Output file action (basic tab)
• If the file already exists
– Replace it
– Append to it
– Go down failure terminal
– Move to mqsiarchive subdirectory and replace
– Add timestamp, move to mqsiarchive subdirectory and replace
49. FileOutput - FTP Support
• If enabled, whenever a complete file is closed an FTP transfer of the file is attempted to the supplied FTP
server
• File is optionally deleted from the local file system when the transfer completes
• Transfer is synchronous
– Use additional instances if throughput rate is an issue
• If remote file exists, choose either replace or append
51. FileRead node
• File Read Node
– Reads data in middle of flow like an MQGet or a TCPIPReceive node
– Reads either whole file contents or one record from the file
– Allows user to override the file to be read and the offset within the file to start reading
from
• Has a No match terminal to which the message is sent if it cannot find a file or a record
in the file
52. • FileRead node behaves like a MQGet or TCPIPReceive node in the sense that it
reads in data within a flow without first sending data out. For example: MQGet
reads a message from a queue, TCPIPReceive reads data from a TCPIP input
stream and the Fileread node reads data from a file.
• Can either read the whole contents of a file or a single record from the file and
then parses and constructs a message to propagate down the flow.
• The node is very configurable both at design time and during runtime where most
properties can be overridden based on the message content.
• The basic properties are similar to a File input node where the details of the file to
process are given.
Notes on: FileRead Node
NOTES
53. Defining which record to read and propagate
• Where the record starts
– Defaults to the beginning of the file
– Give the offset into the file to start record from based on contents of the message
• Where the record ends
– Define the record detection mechanism:
• Whole file
• Fixed size
• Delimited
• Parser
• Which record to propagate
– Define an expression to specify which record to propagate
– Node iterates through all records from the start offset until one matches
– Only propagates the first match
54. Defining which record to read and propagate
• Three key pieces of information need to be defined to create and propagate a record from the file.
• Where the record starts. By default the file read node always starts the reading of a record from
the beginning of the file. It does not remember or store the result of the last record read, so the
next time through the node will start from the same place, unless the user specifies in the local
environment where to start from. It is possible to configure the node to use the end record offset
from a previous fileread node as the start of the current record.
• Where the record ends. The normal record detection mechanisms are used to find the end of the
record. Fixed size just reads that many bytes, delimited scans for a delimiter and parser uses an
integration node parser to determine the end (like XMLNSC if the record is an XML document).
When using fixed size it is possible to override the length being used using the message content or
local environment.
• Which record to propagate. After finding the record the Record selection expression is evaluated. If
it is true then the record is propagated otherwise the next record is found using the end of the last
record as the start of the new one. The process is repeated until either the expression is true or the
end of file is found. Only the first matching record is propagated. If no record matches then the file
gets sent to the no match terminal.
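The scan-until-match behaviour, shown here for fixed-size records only, might be sketched as follows (illustrative; the selector callable stands in for the node's record selection expression):

```python
def first_matching_record(path, start_offset, length, selector):
    """Scan fixed-length records from a start offset; return the first record for
    which the selection expression is true, or None (the 'No match' case)."""
    with open(path, 'rb') as f:
        f.seek(start_offset)
        while True:
            rec = f.read(length)
            if len(rec) < length:
                return None  # end of file reached without a match
            if selector(rec):
                return rec
```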
55. Constructing the message using the record read location
• Which part of the record read to propagate
– Result data location
• Where to put the record in the outgoing message
– Output data location
56. Constructing the message using the record read location
• The Result panel has properties which specify how the outgoing
message is constructed based on the contents of the file and the
incoming message.
• By default, the whole incoming message is replaced with the contents of
the record retrieved from the file.
• The Result data location is used to extract a piece of information from
the file to insert into the outgoing message.
• The Output data location is used to find the location to write the extracted
information to.
NOTES
57. Changing the file disposition after the read
• By default file is left unchanged after read
• Disposition change is always done when the FileRead node executes Finish File:
– The end of the file is reached
– A message is sent to the finish file terminal
• The following actions are available: leave the file unchanged, delete it, or archive it
• Archived files are moved to the mqsiarchive directory
• Can override the archive directory and archive name using local environment or
data in the message
58. Changing the file disposition after the read
• By default, after any read, the file read node will leave the file
unchanged and will close any connections to it.
• It is possible to configure the node to modify the file disposition
when the node's finish file action is triggered. Finish file is defined
as either when the file read node reads to the end of the file (in whole
file mode, or when the last record is read) or when a message
arrives at the finish file terminal.
• It is possible to delete or archive the file. Archiving moves the file to
the mqsiarchive directory but it is possible to override this based on
the contents of the message to move the file to any new directory
and any new file name.
NOTES
60. IBM Sterling Connect:Direct (no IIB)
IBM Sterling Connect:Direct is a managed file transfer solution
(Diagram: on Machine A (Windows) and Machine B (UNIX), CD clients connect to CD servers which move files between the machines; transfers can be configured, tracked and audited)
61. IBM Sterling Connect:Direct (no IIB)
(Diagram: a larger network of CD clients and CD servers spread across Machines A, B, C and D)
62. IBM Sterling Connect:Direct (with IIB)
(Diagram: Integration Bus instances joining the CD network alongside the CD clients and servers on Machines A, B, C and D)
63. • Install Integration Bus
• Create and start an integration node on the machine
• Set up a security identity to be used to connect to the CD server:
– mqsisetdbparms BROKER -n cd::default -u convery -p ********
• Create and deploy flows which interact with CD via built in CD nodes
• That is all that is needed if the broker is on the same machine
Adding IBM Integration Bus to a CD network
64. Adding IBM Integration Bus to a CD network
(Diagram: a message flow and configurable service on the IBM Integration Bus machine, connected to a CD server on another machine through a shared file system)
IBM Integration Bus does not have to be on the same machine as the CD server.
They must have access to a shared file system where files are transferred to.
A configurable service can be created with details of CD Server:
– Hostname
– API port to connect to
– Security identity
– Filepath of the shared file systems
65. Accessing a CD server from a message flow
• Messages sent and received from the CD server using Integration Bus nodes:
– CDInput node: receives any file transferred to the CD server
– CDOutput node: constructs a file and sends a request to the CD server to transfer the file to a
remote system.
• CD nodes based on the current file nodes allowing:
– Record based processing
– Stream parsing for large file support
– Can be used with all other message flow nodes
• CD server is not stopped, started or administered by Integration Bus. Both products are
decoupled:
– Integration Bus outages have no effect on CD server
– CD server outages only affect Integration Bus when a transfer is requested by a message flow
66. CD Input Node
Consistent with file input node but makes full use of CD function
Timely
– The CD node monitors the CD server's statistics for transfers that have completed
– The node processes the files in the transfer immediately.
– Each copied file is processed independently
– Can leave the file unchanged after processing and just delete notification message
Metadata
– Metadata associated with the transfer is sent with the notification
– Includes user defined application data field
Filter
– The node by default receives all transfers
– Can specify which files to receive using a filter
68. CD Output Node
Consistent with file output node but makes full use of CD function
Transfer details
– The destination CD server, directory etc are defined on the node
– Support also for wild card file names
Staging
– Uses a local staging directory to build up a file record by record for transfer
– Once a file is finished a request is sent to the CD Server to transfer the file
Metadata
– User data if provided in the Local Environment is sent with the transfer
– Other metadata is generated by CD server or by Broker
70. File types supported
• Flat files on windows, UNIX and z/OS
– Treated and processed the same as in the normal file nodes
• z/OS Sequential datasets
– File name is the dataset name: JREEVE.TEST1.TEST2
– Wildcard values allowed anywhere: JREEVE.*.*
– Staged to HFS and then treated the same as in the normal file nodes
• z/OS Partitioned datasets
– For the input node, the file name is a pattern such as:
• JREEVE.TEST1.TEST3(MEM1)
• JREEVE.TEST1.TEST3(MEM*)
• JREEVE.TEST1.TEST3(*)
• JREEVE.*.TEST3(*)
– Each member is staged to HFS and then treated the same as in the normal file nodes
– For the output node, the file name must be a single member:
• JREEVE.TEST1.TEST3(MEM1)
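To illustrate how wildcard patterns like those above select dataset names, here is a small Python sketch. It assumes the simple semantics implied by the examples (`*` matches any run of characters, everything else is literal); it is not the product's matching code:

```python
import re

def dataset_pattern_to_regex(pattern):
    """Turn a dataset pattern like JREEVE.*.TEST3(MEM*) into a regex:
    '*' becomes '.*', all other characters are escaped as literals."""
    return re.compile("".join(".*" if c == "*" else re.escape(c)
                              for c in pattern) + r"\Z")

def matches(pattern, dataset_name):
    """True if the dataset name is selected by the wildcard pattern."""
    return bool(dataset_pattern_to_regex(pattern).match(dataset_name))
```

Under these assumptions, `matches("JREEVE.TEST1.TEST3(MEM*)", "JREEVE.TEST1.TEST3(MEM1)")` is true, while a name in a different dataset, such as `JREEVE.TEST1.TEST4(MEM1)`, is not selected.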
71. Summary
• MQ MFT
• MQ MFT (FTE) nodes in IIB
• File nodes in IIB
• IBM Sterling Connect:Direct nodes in IIB
72. This was Session 18855. The rest of the week…
Sessions ran Monday to Friday; by time slot:
8:30
– Nobody Uses Files Any More Do They? New Technologies for Old Technology, File Processing in MQ MFT and IIB (Room 225D)
– Common Problems and Problem Determination for MQ z/OS
– MQ and CICS - Integration Options and Costs (Room 302B)
10:00
– Introduction to MQ - Can MQ Really Make My Life Easier?
– Introduction to the New MQ Appliance
– DevOps: Using z/OSMF to Provision MQ for z/OS
– MQ for z/OS: The Insider Story
– MQ for z/OS, Using and Abusing New Hardware and the New v8 Features
11:15
– Introduction to IBM Integration Bus on z/OS
– MQ Security Bootcamp: Understanding SSL/TLS Principles - Taking You from Beginner to Expert, Part 2 of 3
– MQ Labs (Room 303A) OR Giving It the Beans: Using IBM MQ as the Messaging Provider for JEE Applications in IBM WebSphere Application Server
13:45
– What's New in the Messaging Family - MQ v8 and More [z/OS & Distributed]
– Thoughts on MQ Architecture & Design
– DevOps: IIB Administration for Continuous Delivery and DevOps (Room 304B)
15:15
– What's New in IBM Integration Bus and IIB on Cloud
– IBM Integration Bus MQ flexibility [z/OS & Distributed]
– MQ Security Bootcamp: Security Features Deep Dive - Securing your Enterprise, Part 3 of 3
– IBM MQ: Are z/OS & Distributed Platforms Like Oil & Water? OR DevOps: Empowering the Delivery of Data Centre Operations Through Increased Automation and Cloud [Distributed] (Room 304B)
16:30
– MQ Security Bootcamp: Securing MQ from End to End, Part 1 of 3
– Digging into the MQ SMF Data
– Programming with PCF Messages
– Monitoring and Auditing MQ