1
Jump start with the enhanced Informix
Thursday, July 13, 2017
12:30 PM EST
Pradeep Muthalpuredathe
Technology Director and Head of Engineering
HCL
Shawn Moe
Software Architect
HCL
2
Safe Harbor Statement
Copyright © IBM Corporation 2017. All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corporation
THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILE EFFORTS WERE MADE
TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED “AS IS”
WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. IN ADDITION, THIS INFORMATION IS BASED ON CURRENT THINKING REGARDING
TRENDS AND DIRECTIONS, WHICH ARE SUBJECT TO CHANGE BY IBM WITHOUT NOTICE. FUNCTION DESCRIBED HEREIN MAY NEVER BE
DELIVERED BY IBM. IBM SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO,
THIS PRESENTATION OR ANY OTHER DOCUMENTATION. NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, NOR SHALL HAVE
THE EFFECT OF, CREATING ANY WARRANTIES OR REPRESENTATIONS FROM IBM (OR ITS SUPPLIERS OR LICENSORS), OR ALTERING THE
TERMS AND CONDITIONS OF ANY AGREEMENT OR LICENSE GOVERNING THE USE OF IBM PRODUCTS AND/OR SOFTWARE.
IBM, the IBM logo, ibm.com and Informix are trademarks or registered trademarks of International Business Machines Corporation in the United
States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a
trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was
published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on
the Web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml
Informix Into the Future
Pradeep Muthalpuredathe (mpradeep@hcl.com)
Director – WW Informix Engineering (OneTeam)
@mspradeep66
#Informix
#Reinventing
#Unleashed
IBM Analytics HCL Products & Platforms
4
Data is the new currency and at the core of every business
but…
1. Only 15% of organizations fully leverage data and analytics
→ Unlock the potential of all your data, in all data types. Combine it with public or 3rd-party data sets
2. Many users don't have direct or timely access to information
→ Short-cut / avoid dependencies and democratize access, with integrated governance enabling self-service
3. 90% of the world's data cannot be googled
→ Leverage data where it resides and bring analytic capabilities & cloud benefits to your data
4. The cloud journey is a marathon, not a sprint
→ Hybrid cloud solutions offer faster, incremental value at lower risk
5
Announcing a powerful partnership for Informix
6
Investing in Informix
Partnership: 15-year strategic partnership to jointly develop and market the IBM Informix family of products, effective April 1, 2017
Enhancement: The best of our shared knowledge and teaming experience will enhance current Informix products
Next Generation: HCL will build the next generation of Informix products based on market needs and client priorities
IBM and HCL IP Partnership – Key Highlights
1. IBM and HCL have entered into a 15+ year IP partnership. HCL has set up a new division called HCL Products and Platforms.
2. IBM will continue to sell IBM Informix products through IBM channels and continues to own Level 1 support. HCL is responsible for development, all other support, and customer advocacy. HCL will also bring additional sales and marketing to Informix.
3. All of the IBM development and support engineers have joined HCL. Customers will continue to have access to the expertise of the labs.
4. HCL has a Client Advocacy team building on IBM's Lab Advocate program. This is a 'hypercare' approach to customer relationships.
5. HCL will be accelerating the product roadmaps and delivering new features and functionality, benchmarks, and new cloud offerings.
6. HCL will be refreshing Lab Services offerings to provide customers higher ROI and faster time to value.
8
About HCL
111,000+ people · 31 countries · 7.1 billion USD revenue
[Charts: revenue split by geography (Americas, Europe, rest of world); by service line (application services, infrastructure services, engineering and research services, business services); and by vertical (manufacturing, financial services, life sciences & healthcare, public services, telecom/media/publishing & entertainment, retail and CPG, others).]
9
HCL Products & Platforms Division
• A services-company mindset to the customer relationship
• Accelerated roadmaps, bringing new features to our customers
• Real innovation that solves customer problems
• Insights beyond customer input, by using our own products
Bringing speed, insights, and innovations (big and small) to create value for our customers in DevOps, automation, and application-modernization software
10
HCL Client Advocacy
A customer-centric approach is the foundational element of the HCL Products business philosophy. We strive to deliver a high-touch, highly interactive approach to customer relationships, and to provide the greatest value and service to customers through strong connections to our product experts.
• A more cohesive and collaborative approach to the client relationship
• Continuous support for the client's product usage and business needs
• More frequent touchpoints with product roadmaps
• Proactive communication on product news and updates
• A deeper understanding of the client's business and challenges
11
Informix Investment Priorities
• Delight the Client: enhance value & experience
• Cloud Integration: enable the hybrid cloud journey
• IoT: extend use cases and simplify adoption
12
Informix Roadmap
(Themes: Delight the Customer, Expand on Cloud, Optimize for IoT; horizons: short term, medium term, long term, under review.)
• Backup to Cloud: SoftLayer
• Smart Triggers: server & JDBC
• DSX Analytics for Informix
• Self-service provisioning for Cloud (PAYGO)
• Bluemix ICIAE certification
• Backup to Cloud: AWS, Google, …
• Smart Triggers: additional APIs
• Simplified licensing
• Simplified upgrade and deployment
• Benchmarks
• Hosted services: various cloud platforms
• High-availability offering
• Sensors in motion
• Elastic scaling
• HTAP
• Recompress data dictionary
• Edge-2-Cloud solution stack
• IWA for Cloud
• DBaaS offering
• TimeSeries compression on strings
• Blockchain integration
• High-frequency ingest of data (sub-second)
Cross-focus items:
• Simplify solution development, focus on ISVs: for IoT, Cloud, and on-prem
• SQL enhancements
• Ease of use and administration
• Developer ecosystem and community engagement
13
Informix on Social Media
http://www.informixcommunity.com/
14
Informix: You have the right tool for the job – all in one toolbox!
• Outstanding Performance and Uptime
• Application Development via modern APIs
• Hybrid storage and hybrid applications with data
consistency
– The only database that can be utilized and provisioned on heterogeneous, commodity
hardware, different O/S, and different database versions
• Modern interface providing JSON / BSON native support
– Rapid delivery of applications
– Access Relational, TimeSeries, Spatial, Graph data from SQL and/or NoSQL
application
• Super scale out
– Multiple nodes, multiple versions, multiple copies, data sharding
– Best-of-breed HA and workload management solutions
• At the Edge, On-Prem and Cloud
Informix New Features Overview
Shawn Moe
Informix Engineering Lab
smoe@hcl.com
July 2017
16
Agenda
• What’s new in 12.10.xC9?
– Released in July 2017
– Backup to Cloud Object Store
– Smart Triggers/Push Data
– Informix on Cloud
– Tracking Moving Objects
• What’s new in 12.10.xC8?
– Released in December 2016
– Highlights
– Encryption at Rest
– Regular Expressions
– JDBC 4.0
• Version independent enhancements
17
Backup to Cloud Object Storage
User Story –
As the CIO of Acme Manufacturing, I need to be able to store
our database backups in a secure, offsite location. In the
event of a disaster, we must be able to quickly recover our
systems using these backups, possibly from another location.
18
Object Storage
• Objects are stored as an opaque stream of bytes
• Objects are not files, nor are they disk blocks. How and where they are physically stored is a black box to the user
• Objects can have properties, tags, or characteristics attached to them, which allows multi-dimensional organization, search, and retrieval
19
Object Storage Characteristics
• Implemented as a black box, although some implementations are open, like OpenStack Swift, which is used by SoftLayer
• Can be distributed
• Can be redundant
• Could provide versioning
• Could provide modification capabilities
• Is always managed through an API, so there is usually not a way to see “files”
or “directories”
• Objects have names, and "/" can be used as part of a name. This makes names look like paths, but they are not!
20
Getting started in SoftLayer - Create an Object Storage “Bucket”
21
Cloud Backups using On-Bar
• It is possible to use STDIO devices to take direct backups from On-Bar to a
Cloud Provider
• The feature is implemented in the Primary Storage Manager (PSM)
• It is NOT possible to use this if you use a third party storage manager
22
STDIO devices
• A new type of device called an "STDIO device" was implemented in PSM
• This device type sends the backup stream data to, or gets the restore stream data from, a separate process (e.g., sftp)
• Communication to/from this process occurs over the standard input/standard output of that process, much like a pipe
[Diagram: On-Bar and PSM inside IDS use the Archive API to drive a curl, sftp, or aws-cli process, which moves the stream to or from object storage.]
23
Create STDIO device
onpsm -D -add /home/shawn/mycurl -t STDIO
--stdio_warg "BACKUP @obj_name1@.@obj_id@.@obj_part@"
--stdio_rarg "RESTORE @obj_name1@.@obj_id@.@obj_part@"
--stdio_darg "DELETE @obj_name1@.@obj_id@.@obj_part@"
--max_part_size <size in KB>
• Notice the type "STDIO"
• Notice the device path is a path to an executable, usually a shell script, that will take/retrieve the data
• We have to provide the arguments used to invoke the program for backup (send data), restore (get data), and delete (erase data)
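To make the mechanics concrete, here is a minimal sketch of what such an executable could look like, written as a shell function so it can be exercised inline. The function name, staging directory, and environment variable are illustrative assumptions, not product artifacts; a real handler would pipe the stream to curl, sftp, or aws-cli instead of a local directory.

```shell
# psm_stdio: hypothetical STDIO handler logic. In practice this body would
# live in an executable script that PSM invokes as:
#   script BACKUP|RESTORE|DELETE <object-name>
# Here a local directory stands in for the cloud object store.
STORE="${PSM_OBJECT_STORE:-/tmp/psm_store}"
mkdir -p "$STORE"

psm_stdio() {
  op="$1"; obj="$2"
  case "$op" in
    BACKUP)  cat > "$STORE/$obj" ;;   # backup stream from stdin -> object
    RESTORE) cat "$STORE/$obj" ;;     # object -> restore stream on stdout
    DELETE)  rm -f "$STORE/$obj" ;;
    *) echo "usage: BACKUP|RESTORE|DELETE object" >&2; return 1 ;;
  esac
}

# Round-trip demo: store a stream, read it back, delete it.
printf 'archive-bytes' | psm_stdio BACKUP level0.1.1
psm_stdio RESTORE level0.1.1    # writes archive-bytes back to stdout
psm_stdio DELETE level0.1.1
```

In a real deployment, the BACKUP branch would pipe stdin to something like an aws-cli or curl upload, and RESTORE would stream the object back to stdout for PSM to consume.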
24
Upcoming Work…
• Distribute scripts to connect to the most popular Cloud providers
• Define a way to do this configuration automatically
• Provide the same capability to ontape
• Add the capability to send the data offsite in addition to keeping a local copy
(ifxbkpcloud.jar will be replaced with new functionality)
25
Smart Triggers & Push Data Notifications
User Story –
As the CFO of Acme Manufacturing, we are bound by various
regulations that require us to record various information about
large purchases of certain products. We need an easy way to
monitor these transactions as they update our enterprise
database.
26
Smart Trigger Value Proposition
• Selectively trigger events based on changes in server data
• Real time ‘push’ notifications help clients avoid polling the server
• Small data flow allows simple small clients to work with many triggered events
at once
27
Smart Triggers in JDBC
• Smart Triggers are registered events on the server that you
subscribe to from your JDBC client
– Triggers are based on a SQL statement query that matches changes made to
a table
– SELECT id FROM CUSTOMER WHERE cardBalance > 20000;
• One client can listen to many events from many tables, allowing a
wide range of monitoring opportunities
– Monitor account balances
– Take action on suspicious behaviors
28
What does a Smart Trigger Look Like?
• It’s designed to be a simple set of classes/interfaces in Java
• Designed for both simple standalone monitor applications as well as integration
into multi-threaded environments
• Leverages the Push Notification feature in the server to do the heavy lifting
• Receives JSON documents when a trigger occurs
• Adding Smart Triggers to the JDBC driver allows other languages to have this
support
– Groovy, JavaScript (NodeJS), Python, Scala and more
29
Use case: Banking
• Bank accounts
– I want to be alerted when an account balance drops below zero dollars
– I don’t want to write SPL or install stored procedures
– I want to be notified in my client application
– I don’t want to poll the database for this information or re-query each time a
balance changes from the client
30
Smart Trigger Bank Code
// Assumes the Informix JDBC driver on the classpath; the Smart Trigger
// classes live in the com.informix.jdbc package
import java.sql.SQLException;
import com.informix.jdbc.IfmxSmartTriggerCallback;
import com.informix.jdbc.IfxSmartTrigger;

public class BankMonitor implements IfmxSmartTriggerCallback {
public static void main(String[] args) throws SQLException {
IfxSmartTrigger trigger = new IfxSmartTrigger(args[0]);
trigger.timeout(5).label("bank_alert");
trigger.addTrigger("account", "informix", "bank",
"SELECT * FROM account WHERE balance < 0", new BankMonitor());
trigger.watch(); //blocking call
}
@Override
public void notify(String json) {
System.out.println("Bank Account Ping!");
if(json.contains("ifx_isTimeout")) {
System.out.println("-- No balance issues");
}
else {
System.out.println("-- Bank Account Alert detected!");
System.out.println(" " + json);
}
}
}
31
Server Architecture Diagram
[Diagram: OLTP clients write to the database; changes flow from the logical log through Snoopy and the Grouper, which stream event data to push-data clients. A push-data sample app does: sesid = task("pushdata open"); task("pushdata register", {json}); then loops: bytes = ifx_lo_read(sesid, buf, size, err); execute action.]
32
API Calls via sysadmin Tasks
• TASK('pushdata open');
– Register the client session as a push-data session
– Returns a session id; you need this id to read event data
• TASK('pushdata register', {event and session attributes});
– Register event conditions and session-specific attributes
• Smart blob read API (ifx_lo_read() or equivalent call) to read event data
– Pseudo smart-blob interface to read event data
– Returns JSON document(s)
– Can be configured as a blocking or non-blocking call
• TASK('pushdata deregister', {event condition details});
– De-register event conditions
33
Example event data documents
• Sample output for an insert operation:
{"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573177224,"commit_time":1488243530,"op_num":1,"rowdata":{"uid":22,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T10:35:10.000Z"}}}
• Sample output for an update operation (includes the before-image):
{"operation":"update","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573308360,"commit_time":1488243832,"op_num":1,"rowdata":{"uid":21,"cardid":"7777-7777-7777-7777","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"25-Jan-2017 16:15"}},"before_rowdata":{"uid":21,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T10:35:10.000Z"}}}
• Sample output for a delete operation:
{"operation":"delete","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573287760,"commit_time":1488243797,"op_num":1,"rowdata":{"uid":22,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T13:35:06.000Z"}}}
• Sample multi-row document when the maxrecs input attribute is set to greater than 1:
[
{"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573309999,"commit_time":1487781325,"op_num":1,"rowdata":{"uid":"7","cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T15:10:10.000Z"}}},
{"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573177224,"commit_time":1488243530,"op_num":1,"rowdata":{"uid":22,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T16:20:10.000Z"}}}
]
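Since the event payload is plain JSON, even a minimal client can route on it with standard tools. A small sketch (the sample document is abbreviated from the output above; the sed extraction is illustrative and assumes each field appears exactly once in the document):

```shell
# Extract the "operation" and "table" fields from a push-data event document
# using only POSIX shell and sed -- enough for simple routing in a monitor
# script before handing the full document to a JSON parser.
event='{"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573177224,"commit_time":1488243530,"op_num":1}'

op=$(printf '%s' "$event" | sed -n 's/.*"operation":"\([^"]*\)".*/\1/p')
tbl=$(printf '%s' "$event" | sed -n 's/.*"table":"\([^"]*\)".*/\1/p')

echo "$op on $tbl"   # -> insert on creditcardtxns
```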
34
• Print all sessions: onstat -g pd
• Print all event conditions: onstat -g pd event
• Print information about a specific session: onstat -g pd 70
• Print event conditions for a specific session: onstat -g pd 39 event
onstat Commands
35
Comparing Smart Trigger and Regular I/U/D Triggers
Smart Trigger | Regular Trigger (I/U/D)
Post-commit | Pre-commit
Register the trigger on a specific dataset/event | Trigger gets fired for all changes
Asynchronous, with linear scalability | Synchronous
Data is in JSON format | SQL format
Trigger logic gets executed in the client | Trigger logic gets executed in the server
Natural fit for an event-driven programming model | -
No schema changes required to define a new smart trigger | Requires schema changes and an exclusive lock on the table to modify the trigger definition
36
Informix on Cloud
User Story –
As a quality assurance manager at Acme Manufacturing, we
need additional Informix instances, often at short notice, for
various periods of time, to test our new functionality in several
configurations used by our customers. We can’t justify
purchasing additional hardware and software licenses for
machines which will not be used every day.
37
• IBM Informix on Cloud is available on IBM Bluemix
• Cloud hosted service includes Informix license and cloud “hardware”
• T-shirt sizing: S, M, L, XL instances match Informix license and hardware
capacities to provide optimal value at each size
• Informix instance hosted in IBM SoftLayer data centers with world wide
deployment options
• IBM provisions, configures, and tests the instance and then passes the
credentials on to the customer
• Full Informix functionality to support all kinds of workloads:
• OLTP
• Hybrid NoSQL, SQL, TimeSeries and Spatial
• IoT
• IWA
• Rapid application development with support for SQL, MongoDB, REST or
MQTT themed applications
• It’s Informix!
Rationale: deliver high-quality cloud service with low cost of operations
38
Informix on Cloud – Bluemix “Pay & Go” Subscription
• Bluemix Pay & Go functionality just released in early July 2017
• Uses credit card information tied to your account
– If you don’t want to do this, you can still go through IBM sales
• All existing “Order with IBM Sales assistance” offerings remain available
• Complete process from selecting a virtual server image to having a running, provisioned &
configured Informix server is now about 20 minutes!
39
Tracking Moving Objects
User Story –
As a software developer at Acme Manufacturing, I need a
mechanism to help me track locations of our delivery vehicles
(trucks and drones) as they make deliveries during the day.
Our application needs to be able to visualize the location of
each of our vehicles in relation to each other and to various
pickup and drop-off locations. In some situations, we need to
be able to record locations at sub-second intervals.
40
Track moving objects
• You can track a moving object, such as a vehicle, by capturing location information for the
object at regular time intervals. You can use the new spatiotemporal search extension to index
the data and then query on either time or on location to determine the relationship of one to
the other. You can query when an object was at a specified location, or where an object was
at a specified time. You can also find the trajectory of a moving object over a range of time.
• The spatiotemporal search extension depends on the TimeSeries and spatial extensions. You
store the spatiotemporal data in a TimeSeries data type with columns for longitude and
latitude. You index and query the spatiotemporal data with the new spatiotemporal search
functions. You can also query spatiotemporal data with time series and spatial routines.
• A greater frequency for tracking moving or stationary objects is available. Time can now be
entered as a string or a number.
41
Spatiotemporal indexing parameters
• The spatiotemporal index parameters are defined as a BSON document. The
parameters define how spatiotemporal data is configured and the storage
spaces and extent sizes for subtrack tables and their indexes
• New for 12.10.xC9 is the averageStationaryGPSRate parameter
– Specifies how often a position reading is generated for a stationary object
– The seconds value is a floating-point number, with a range greater than or equal to .001 and
less than or equal to 1800, that represents the number of seconds between readings.
Default is 60.
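For illustration only, such a parameter document might look like the sketch below. The parameter names come from this deck (averageStationaryGPSRate above; averageMovingGPSRate and minStationaryInterval from the time-units slide that follows); the specific values, and the idea of combining exactly these fields, are assumptions rather than documented defaults.

```json
{
  "averageStationaryGPSRate": "0.5s",
  "averageMovingGPSRate": 10,
  "minStationaryInterval": "5m"
}
```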
42
Time units
• Five parameters, averageMovingGPSRate, averageStationaryGPSRate,
minStationaryInterval, minNoDataInterval, and
maxGPSTimeIntervalPerTrajectory, can now be specified either as a number
or a string.
– For example: “averageMovingGPSRate”:10 can also be specified as a string
(“averageMovingGPSRate”:”10”).
– Additional functionality can be added by specifying an optional unit of time measure
(represented as a string) after the number - s for second, m for minute, h for hour and d for
day.
• For averageMovingGPSRate and averageStationaryGPSRate, the result is
used as a real number
• For minStationaryInterval, minNoDataInterval, and
maxGPSTimeIntervalPerTrajectory the result is cast to an integer and used
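The unit suffixes can be pictured with a small conversion sketch. This is a hypothetical shell helper, not server code, and it uses integer arithmetic only (the server also accepts floating-point values such as .001):

```shell
# to_seconds: convert a rate/interval value ("10", "10s", "2m", "1h", "1d")
# into seconds, mimicking the s/m/h/d unit suffixes described for the
# spatiotemporal parameters. Pure numbers pass through unchanged.
to_seconds() {
  v="$1"
  case "$v" in
    *s) echo $(( ${v%s} * 1 )) ;;       # seconds
    *m) echo $(( ${v%m} * 60 )) ;;      # minutes
    *h) echo $(( ${v%h} * 3600 )) ;;    # hours
    *d) echo $(( ${v%d} * 86400 )) ;;   # days
    *)  echo "$v" ;;                    # already a number of seconds
  esac
}

to_seconds 2m    # -> 120
to_seconds 1h    # -> 3600
to_seconds 10    # -> 10
```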
43
12.10.xC8 – New Feature Summary
• Encryption at Rest (EAR)
• Regular Expression (REGEX) support via SQL & JSON Listener
• JDBC 4.0 support
• Informix on Cloud
• Synchronous sharded inserts
• Mongo 3.2 API support
• TimeSeries Analytics functions
• Embed Informix and deploy as a non-root user
• Rename system generated indexes
• Real result set cursoring support in the listener
• Dump ER commands from syscdr database
• SQL interface to obtain temporary space usage
• IWA – improve support for non-correlated sub-queries and derived
tables
Released in December
2016
44
Encryption at Rest
User Story –
As the CIO for Acme Manufacturing, I am responsible for the
integrity of our customer data, and am required to ensure that
our database data on disk is encrypted.
45
What is Encryption at Rest?
• Encryption of your data at rest (on disk) vs. encryption of your data in flight
• All, some, or none of your selected dbspaces are encrypted. You control
which dbspaces are encrypted
• Existing dbspaces can be encrypted
• Encryption process occurs in low level I/O routines as data is being
read/written to disk
• Informix feature added in 12.10.xC8 and available in all editions at no extra
cost. No extra or 3rd party software required.
46
Quick Start
1. Set DISK_ENCRYPTION in $ONCONFIG file
DISK_ENCRYPTION keystore=jc_keystore
2. oninit -ivy
(Snippet of verbose output on next slide)
47
...
...
Initializing Dictionary Cache and SPL Routine Cache...succeeded
Initializing encryption-at-rest if necessary...succeeded
Initializing encryption-at-rest structures (part 1)...succeeded
Bringing up ADM VP...succeeded
Creating VP classes...succeeded
Forking main_loop thread...succeeded
Initializing DR structures...succeeded
Forking 1 'ipcshm' listener threads...succeeded
Starting tracing...succeeded
Initializing 1 flushers...succeeded
Clearing encrypted root chunk 1 before initialization...
25% done.
50% done.
75% done.
100% done.
Initializing encryption-at-rest structures (part 2)...succeeded
Initializing log/checkpoint information...succeeded
...
...
48
Results of Quick Start
• A new instance with one chunk, encrypted using the default cypher (aes128)
– We currently support aes128, aes192, and aes256
• Keystore and “stash file” created in $INFORMIXDIR/etc
– $INFORMIXDIR/etc/jc_keystore.p12
– $INFORMIXDIR/etc/jc_keystore.sth
49
How can I tell whether EAR is enabled?
• oncheck
oncheck -pr | head -15
oncheck -pr | grep rest
• Select from sysmaster:sysshmhdr
select value from sysshmhdr where name = "sh_disk_encryption";
• Look for "Encryption-at-rest is enabled using cipher" in the message log
• onstat -g dmp
onstat -g dmp <rhead addr> rhead_t | grep sh_disk_encryption
50
How can I tell whether a space is encrypted?
• Select from sysdbspaces
select name from sysdbspaces where is_encrypted = 1;
• onstat -d
(Example on next slide)
51
IBM Informix Dynamic Server Version 12.10.F -- On-Line -- Up 00:03:16
-- 38324 Kbytes
Dbspaces
address number flags fchunk nchunks pgsize flags owner name
4484f028 1 0x1 1 1 2048 N BA informix rootdbs
4484fdd0 2 0x10000001 2 1 2048 N BAE informix jcdbs
2 active, 2047 maximum
Chunks
address chunk/dbs offset size free bpages flags pathname
4484f268 1 1 0 100000 35118 PO-B-- /work3/JC/rootchunk
44958450 2 2 0 5000 3209 PO-B-- /work3/JC/chunk2
2 active, 32766 maximum
NOTE: The values in the "size" and "free" columns for DBspace chunks are
displayed in terms of "pgsize" of the DBspace to which they belong.
Expanded chunk capacity mode: always
Look for the 'E' (encrypted) flag
52
Creating new spaces
• With EAR enabled, new spaces will be encrypted by default
• To override that default:
onspaces -c -d unencrypted_space -p /work3/JC/chunk3 -o 0 -s 1000 -u
execute function task("create unencrypted dbspace…
execute function task("create unencrypted blobspace…
etc…
53
Disabling chunk clearing
• By default a chunk is cleared (filled with blank pages) before any page
contained in it is encrypted. From the message log:
15:22:48 Clearing encrypted chunk 6 before initialization...
• Chunk clearing can be disabled by setting the undocumented
CLEAR_CHK_B4_ENCRYPT configuration parameter to 0
54
First 12 reserved pages are not encrypted
[Diagram: in the ROOT chunk, pages 0-11 are marked NOT ENCRYPTED; the remainder of the chunk holds encrypted, unreadable data.]
When the server boots, it has to be able to read something from disk that
indicates whether EAR is enabled. Obviously that info can't be encrypted.
55
What’s in memory?
• Pages in the buffer pool are not encrypted
• Decryption happens during the read from disk, at a low level in the
I/O code. Encryption happens at the same level during a write
• onstat -g dmp will display decrypted data
• Shared memory dump files will contain decrypted data, but not
encryption keys
56
What’s in the key store file?
• The Key Store file ($INFORMIXDIR/etc/<keystore name>.p12)
contains a single encryption key, which is used only for ROOTDBS (dbspace 1)
• The Key Store file is encrypted
• To decrypt the Key Store file, the server needs the Master Key
57
Where is the master key?
• The Master Key is stored in a stash file
($INFORMIXDIR/etc/<keystore name>.sth)
• The stash file is encrypted. The server knows how to read it only because
GSKit knows how to read it
• Best practice is to store encrypted chunks on a separate disk from
$INFORMIXDIR
• Users are expected to back up $INFORMIXDIR with some regularity
• Support for a networked key store is planned
58
Encryption keys and spaces
• Each space in an instance uses a different encryption key
• Keys 2-2047 are derived from Key 1 at run-time and never stored anywhere on
disk
59
No encryption dependencies across nodes*
• Encryption in a secondary is entirely independent of encryption in a primary
• A primary may be encrypted while a secondary is not, and vice-versa
• A different set of spaces may be encrypted in a primary vs. a secondary
*SDS is the exception
60
Archives are not encrypted (by default)
• Pages are decrypted before they are sent to either ontape or onbar
• No key store or stash file is needed to restore any archive
• Admins should continue to use BACKUP_FILTER or another preferred method
to encrypt archives
– We are looking at providing a BACKUP_FILTER script to encrypt archives by
default
61
Any archive can be used to encrypt spaces
• First, enable encryption by setting DISK_ENCRYPTION $ONCONFIG
parameter
– Requires a shutdown or bounce of a running instance
• Perform either a cold or warm restore
ontape -r -encrypt
onbar -r -encrypt
62
Regular Expressions
User Story –
As an application developer at Acme Manufacturing, I have
developed many complex queries on our data that use regular
expressions. We use these types of queries from Unix shell
scripts and need to be able to use the same query syntax
when querying our Informix database.
63
Regular Expressions Overview
• Regular expressions combine literal characters and meta-characters to define
the search and replace criteria. You run the functions from the Informix Regex
extension to find matches to strings, replace strings, and split strings into
substrings.
• The Informix Regex extension supports extended regular expressions, based
on the POSIX 1003.2 standard, and basic regular expressions
• You can specify case-sensitive or case-insensitive searching.
• You can search single-byte character sets or UTF-8 character sets.
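Because the extension is based on POSIX 1003.2, an extended regular expression can often be shared verbatim between a Unix shell script and an Informix query, which is exactly the user story above. A small sketch using grep -E with invented sample data (the pattern is the one used in the regex_match() example later in this deck):

```shell
# The same POSIX ERE works in `grep -E` and in the Informix regex functions.
# Sample data is invented for illustration.
printf 'Regex module\nDataBlade docs\nplain text\n' > /tmp/regex_sample.txt

# Matches the first two lines, just as regex_match() would return 't' for them:
grep -E '[Mm]odule|DataBlade' /tmp/regex_sample.txt

# Case-insensitive count (-i here; a copts flag on the server side):
grep -icE 'datablade' /tmp/regex_sample.txt   # -> 1
```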
64
Regular Expressions (REGEX)
• Built into the server
• ifxregex.1.00
– Different name & different UDR names to distinguish from the old
DeveloperWorks version
• Autoregistered
– Registered on first use (preferred)
• Can also be registered with SQL Registration
– Execute function sysbldprepare('ifxregex.*', 'create')
• Same database restrictions as other datablades
65
Datatypes
• To use regex pattern matching, you must provide the text data as a CHAR,
LVARCHAR, NCHAR, NVARCHAR, VARCHAR, or CLOB data type
– If you want to replace text in a CLOB value with the regex_replace() function,
you must have a default sbspace
• The regex functions do not inherently use any database indexes
66
REGEX Meta Characters
Metacharacter: Action
^ : Beginning of string
$ : End of string
| : Or
[abc] : Match any character enclosed in [ ]
[^abc] : Match any character not enclosed in [ ]
[a-c] : Match the range of characters
[:cclass:] : Match the characters in the character class (as in ctype.h: alpha, alnum, lower; ASCII-centric)
[=cname=] : Match the character by its name, e.g., quotation-mark, asterisk
. : Match any character
( ) : Group the regular expression within the parentheses
67
REGEX Meta Characters
Metacharacter: Action
? : Match zero or one of the preceding expression. Not applicable to basic regular expressions.
* : Match zero, one, or many of the preceding expression
+ : Match one or many of the preceding expression. Not applicable to basic regular expressions.
\ : Use the literal meaning of the metacharacter. For basic regular expressions, treat the next character as a metacharacter.
68
REGEX Replacement Meta Characters
Metacharacter: Action
& : Reference the entire matched text for string substitution. For example, execute function regex_replace('abcdefg', '[af]', '.&.') replaces 'a' with '.a.' and 'f' with '.f.' to return '.a.bcde.f.g'.
\n : Reference the subgroup n within the matched text, where n is an integer 0-9. \0 and & have identical actions; \1 - \9 substitute the corresponding subgroup.
\ : Use the literal meaning of the metacharacter; for example, \& escapes the ampersand and \\ escapes the backslash. For basic regular expressions, treat the next character as a metacharacter.
69
regex_match() function
• regex_match(
str lvarchar|clob,
re lvarchar,
copts integer DEFAULT 1)
returns boolean
• Example
execute function regex_match(
'Regex module',
'[Mm]odule|DataBlade');
(expression) t
70
regex_replace() function
• regex_replace(
str lvarchar|clob,
re lvarchar,
rep lvarchar,
limit integer DEFAULT 0,
copts integer DEFAULT 1)
returns lvarchar|clob
• Example
execute function regex_replace (
'Regular expressions combine literal Characters and
metacharacters.',
'( |^)[A-Za-z]*[Cc]haracter[a-z]*[ .,$]',
'<b>&</b>');
(expression) Regular expressions combine literal<b>
Characters </b>and<b> metacharacters.</b>
– Regular expressions combine literal characters and metacharacters.
71
regex_extract() function
• regex_extract(
str lvarchar|CLOB,
re lvarchar,
limit integer DEFAULT 0,
copts integer DEFAULT 1)
returns lvarchar
• Iterator
• Example
execute function regex_extract(
'How much wood could a woodchuck chuck if a woodchuck could chuck wood? A
woodchuck could chuck as much wood as a woodchuck would chuck if a woodchuck
could chuck wood.',
'wo[ou]l?d[a-z]*[- .?!:;]',
2 );
(expression) wood
(expression) woodchuck
2 row(s) retrieved.
– With no limit, 10 rows are returned, matching wood, woodchuck, and would
72
regex_split() function
• regex_split(
str lvarchar|CLOB,
re lvarchar,
limit copts integer DEFAULT 0,
copts integer DEFAULT 1)
returns lvarchar
• The regex_split function returns the pieces of the string that the
regex_extract function discards, and vice versa: the two functions perform
complementary actions.
73
regex_extract() & regex_split() functions example
execute function regex_extract(
’Jack be nimble, Jack be quick, Jack jump over the candlestick.’, ’( |^)[A-Za-z]*ick’ );
(expression) quick
(expression) candlestick
2 row(s) retrieved.
execute function regex_split(
’Jack be nimble, Jack be quick, Jack jump over the candlestick.’, ’( |^)[A-Za-z]*ick’);
(expression) Jack be nimble, Jack be
(expression) , Jack jump over the
(expression) .
3 row(s) retrieved.
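The split side of the example maps directly onto Java's String.split, which likewise returns the text between matches (an illustrative analog only, not the Informix implementation):

```java
public class RegexSplitDemo {
    // regex_split returns the pieces between the matches that
    // regex_extract would return; String.split behaves the same way
    static String[] pieces(String s, String re) {
        return s.split(re);
    }

    public static void main(String[] args) {
        String s = "Jack be nimble, Jack be quick, Jack jump over the candlestick.";
        for (String p : pieces(s, "( |^)[A-Za-z]*ick")) {
            System.out.println("(expression) " + p);
        }
    }
}
```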
74
Regular Expressions are also supported in MongoDB clients & REST
• db.mycollection.find( { "description": /https?:\/\// } )
• db.mycollection.find( { "description": /https?:\/\//i } )
– Adding case insensitive flag (i)
• db.mycollection.find( { "description": {"$regex": "https?://" } } )
– This is the exact same query as the first one, just explicitly using the $regex operator
• db.mycollection.find( { "description": {"$regex": "https?://" , "$options" : "i"} } )
– Adding case insensitive flag (i) to $regex syntax
• GET /db/mycollection?query={ "description": /https?:\/\// }
75
JDBC 4.0 Compliance
User Story –
As an application developer at Acme Manufacturing, my Java
applications have to support several different DBMS. I need
my application to be as database agnostic as possible, and so
all the JDBC drivers that we use must be JDBC 4.0 compliant
so that we do not have to maintain special logic for any
particular DBMS.
76
JDBC 4.10.JC8 Features
• JDBC 4.0 Compliance
– Almost a hundred new APIs implemented or enhanced to provide 4.0
compliance
– Compliance doesn’t necessarily mean all JDBC 4.0 methods are
supported
77
4.10.JC8 – ResultSet enhancements
• isClosed() and getHoldability() methods
• update* methods now work with long values
• Before, this was the only option
– resultSet.updateBlob("blobcolumn", inputStream, int length);
• Now you can also use
– resultSet.updateBlob("blobcolumn", inputStream, long length);
• Before 4.10.JC8 you could not always send an input stream without
specifying its length; now you can
– resultSet.updateAsciiStream("charcolumn", new
FileInputStream("/myfile.txt"));
• This was done for all ResultSet update APIs
78
JDBC 4.0 Compliance
• Connection.java
– Gets proper createBlob() & createClob() APIs
• Statement objects get a minor update
boolean isClosed() throws SQLException;
void setPoolable(boolean poolable) throws SQLException;
boolean isPoolable() throws SQLException;
• Blob API gets filled out a bit
free();
getBinaryStream(long pos, long length);
• Clob API gets filled out a bit
free();
getCharacterStream(long pos, long length);
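These are the standard JDBC 4.0 methods, so any compliant Blob behaves the same way. The sketch below uses the JDK's built-in SerialBlob purely to illustrate the interface (it is not Informix-specific; note that pos is 1-based):

```java
import java.io.InputStream;
import javax.sql.rowset.serial.SerialBlob;

public class BlobApiDemo {
    // Read 'length' bytes starting at 1-based position 'pos'
    // using the JDBC 4.0 getBinaryStream(pos, length) method
    static String slice(java.sql.Blob blob, long pos, long length) throws Exception {
        try (InputStream in = blob.getBinaryStream(pos, length)) {
            return new String(in.readAllBytes(), "UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        SerialBlob blob = new SerialBlob("hello informix".getBytes("UTF-8"));
        System.out.println(slice(blob, 7, 8)); // informix
        blob.free(); // JDBC 4.0: release the blob's resources
    }
}
```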
79
PreparedStatement enhancements
• Added the ability to use the long data type in set* APIs
– Previously you could set at most a 2 GB object because the length
parameter was an int; now you can set blob/clob data up to Informix's
maximum size
• Fixed a few cases where the length of a Reader or InputStream passed in
could be determined incorrectly; data from these streams is now read
correctly
• Implemented more set* APIs around clobs and character streams
• CallableStatement gets the same treatment
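The motivation for the long overloads is simple arithmetic; this standalone sketch (illustrative only, not driver code) shows the ceiling an int length parameter imposes:

```java
public class LengthOverloadDemo {
    // With an int length parameter, the largest describable object is
    // Integer.MAX_VALUE bytes (~2 GB); the long overloads lift that ceiling.
    static String overloadFor(long length) {
        return length <= Integer.MAX_VALUE ? "int overload is enough"
                                           : "long overload required";
    }

    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE); // 2147483647
        System.out.println(overloadFor(3L * 1024 * 1024 * 1024)); // long overload required
    }
}
```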
80
IfxSmartBlob enhancements
• A prerequisite for a number of JDBC 4.0 compliance implementations is the
ability to write streamed data to a smart large object
• Added 6 new method calls to IfxSmartBlob.java
– These are helper methods for your existing Blob/Clob APIs
– They allow streaming any length of data from a Java stream object into a
blob/clob (up to what Informix supports, or the size of a long, which is huge)
public long write(int lofd, InputStream is) throws SQLException
public long write(int lofd, InputStream is, long length) throws SQLException
public long writeWithConversion(int lofd, InputStream is) throws SQLException
public long writeWithConversion(int lofd, InputStream is, long length) throws SQLException
public long writeWithConversion(int lofd, Reader r) throws SQLException
public long writeWithConversion(int lofd, Reader r, long length) throws SQLException
81
IfxSmartBlob enhancements
• Created a default 32 KB buffer (matching the internal 32 KB buffer size
used for sending chunks of data over the network)
– Adjustable with setWriteStreamBufferSize(int)
• Any codeset conversion that used to be done via a write to a temp file is
now done purely in memory
– Much faster, and avoids creating files on disk to do this work
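The buffering can be pictured as a plain chunked copy loop; the sketch below is a standalone illustration of the idea (the real driver writes each chunk to the smart large object, not to an in-memory sink):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopyDemo {
    // Default chunk size; adjustable in the driver via setWriteStreamBufferSize(int)
    static final int WRITE_BUFFER_SIZE = 32 * 1024;

    // Drain a stream of unknown length in 32 KB chunks, as the driver
    // does when no length is supplied to an update*/set* call
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[WRITE_BUFFER_SIZE];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(new byte[100_000]), sink);
        System.out.println(copied); // 100000
    }
}
```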
82
4.10.JC8 Features
• JDBC Packaging
– Combined ifxjdbc.jar and ifxjdbcx.jar
– Removed the old ifxjdbcx.jar: it was small, carried extra overhead to build
and test, and its features complemented what was already in ifxjdbc.jar
– Removed SQLJ from the JDBC installer
– SQLJ is no longer maintained; you can still get it from older drivers or IBM JCC
– Simplifies and streamlines what we produce and what you see
– We no longer generate Javadoc for the BSON APIs
– Javadocs and source for the BSON functions are already available online
– http://api.mongodb.com/java/2.2
83
JDBC and Maven
• Starting with 4.10.JC8W1, Informix JDBC drivers are published to Maven Central!
• Maven artifacts prefer semantic versioning
– For JDBC we use 3 or 4 digits
– Latest JDBC driver is 4.10.8.1
• This allows you to easily and programmatically grab the driver
• Use your own Gradle, Maven, or sbt build to pull in the latest version of the driver
• You can even use curl or wget to pull down the file directly from the web
• You can stage drivers in your own internal Maven repository
• The link below takes you to Maven's search page with details about the driver and version
http://mvnrepository.com/artifact/com.ibm.informix/jdbc/4.10.8.1
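With those coordinates, pulling the driver into a Maven build is a one-stanza pom.xml addition (coordinates as published on Maven Central; the version shown is the 4.10.8.1 release mentioned above):

```xml
<dependency>
  <groupId>com.ibm.informix</groupId>
  <artifactId>jdbc</artifactId>
  <version>4.10.8.1</version>
</dependency>
```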
84
Informix JDBC on Maven
85
Other Version Independent Enhancements
86
Informix Innovator-C Edition on Docker Hub
87
Informix Developer Edition on Docker Hub
88
New Connectivity to Informix
• New Informix Python driver
– https://github.com/ifxdb/PythonIfxDB
• Informix Node.js driver
– https://github.com/ifxdb/node-ifx_db
– https://www.npmjs.com/package/ifx_db
• Drivers developed and supported by the Informix lab
• Both currently require full CSDK installation – but this is changing!
89
Informix Hybrid Support - All Clients can access all Data Models
• NoSQL ↔ SQL
Translation
• Wire Listeners for
MongoDB, REST &
MQTT protocols
• SQLI, DRDA Protocol
Support
• Relational, Collection,
Time Series, & Spatial
Data Support
[Diagram: mobile, desktop, and web applications use REST, MongoDB, SQLI, DRDA, and MQTT clients; REST, MongoDB, and MQTT requests pass through the Informix wire listener to the Informix DBMS, which stores spatial, time series, JSON collection, and relational data]
90
• IBM Smart Gateway kit - https://ibm.biz/BdXr2W
• Code samples - https://ibm.biz/BdX4QV
• Github - https://github.com/IBM-IoT/
• Free Informix Developer Edition - https://ibm.biz/BdXp2g
• Innovator-C edition on Docker Hub -
https://registry.hub.docker.com/u/ibmcom/informix-innovator-c/
• Developer edition on Docker Hub -
https://registry.hub.docker.com/u/ibmcom/informix-developer-database/
• Informix Developer Edition for Raspberry Pi (32bit) -
https://registry.hub.docker.com/r/ibmcom/informix-rpi/
• Client and connectivity examples -
https://github.com/ibm-informix/informix-client-examples
Developers - Get Started!
91
Some Useful Information
• Bloor White Paper – IBM Informix and the Internet of Things - http://ibm.co/2bITDyU
• IBM Informix - http://www-01.ibm.com/software/data/informix/
• IBM Informix Support - http://www-947.ibm.com/support/entry/portal/overview/software/information_management/informix_servers
• IBM developerWorks pages for Informix - http://www.ibm.com/developerworks/data/products/informix/
• Informix International User Group (IIUG) - http://www.iiug.org/index.php
• Informix Community - http://www.informixcommunity.com/
• Planet IDS - http://planetids.com/
• IBM Informix on LinkedIn - http://www.linkedin.com/groups?home=&gid=4029470&trk=anet_ug_hm
• IBM Informix on Facebook - https://www.facebook.com/IBM.Informix
• IBM Informix on Twitter - https://twitter.com/WW_Informix
• Informix YouTube Channel - https://www.youtube.com/channel/UCsdfm-BDILWYPM04F7jdKhw
• IBM Informix Blogs (a few of them):
• https://www.ibm.com/developerworks/community/blogs/smoe/?lang=en
• https://www.ibm.com/developerworks/community/blogs/idsteam/?lang=en
• https://www.ibm.com/developerworks/community/blogs/fredho66/?lang=en_us
• https://www.ibm.com/developerworks/community/blogs/idsdoc/?lang=en_us
92
Shawn Moe – smoe@hcl.com
93
Informix 12.10: Simply Powerful
28

More Related Content

What's hot

Webinar slides: An Introduction to Performance Monitoring for PostgreSQL
Webinar slides: An Introduction to Performance Monitoring for PostgreSQLWebinar slides: An Introduction to Performance Monitoring for PostgreSQL
Webinar slides: An Introduction to Performance Monitoring for PostgreSQL
Severalnines
 

What's hot (20)

[261] 실시간 추천엔진 머신한대에 구겨넣기
[261] 실시간 추천엔진 머신한대에 구겨넣기[261] 실시간 추천엔진 머신한대에 구겨넣기
[261] 실시간 추천엔진 머신한대에 구겨넣기
 
How the Postgres Query Optimizer Works
How the Postgres Query Optimizer WorksHow the Postgres Query Optimizer Works
How the Postgres Query Optimizer Works
 
Graylog is the New Black
Graylog is the New BlackGraylog is the New Black
Graylog is the New Black
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guide
 
Getting the Scylla Shard-Aware Drivers Faster
Getting the Scylla Shard-Aware Drivers FasterGetting the Scylla Shard-Aware Drivers Faster
Getting the Scylla Shard-Aware Drivers Faster
 
High Availability PostgreSQL with Zalando Patroni
High Availability PostgreSQL with Zalando PatroniHigh Availability PostgreSQL with Zalando Patroni
High Availability PostgreSQL with Zalando Patroni
 
Altinity Quickstart for ClickHouse
Altinity Quickstart for ClickHouseAltinity Quickstart for ClickHouse
Altinity Quickstart for ClickHouse
 
Using ClickHouse for Experimentation
Using ClickHouse for ExperimentationUsing ClickHouse for Experimentation
Using ClickHouse for Experimentation
 
[215] Druid로 쉽고 빠르게 데이터 분석하기
[215] Druid로 쉽고 빠르게 데이터 분석하기[215] Druid로 쉽고 빠르게 데이터 분석하기
[215] Druid로 쉽고 빠르게 데이터 분석하기
 
Battle of the Stream Processing Titans – Flink versus RisingWave
Battle of the Stream Processing Titans – Flink versus RisingWaveBattle of the Stream Processing Titans – Flink versus RisingWave
Battle of the Stream Processing Titans – Flink versus RisingWave
 
Real-time analytics with Druid at Appsflyer
Real-time analytics with Druid at AppsflyerReal-time analytics with Druid at Appsflyer
Real-time analytics with Druid at Appsflyer
 
Apache Cassandra - Einführung
Apache Cassandra - EinführungApache Cassandra - Einführung
Apache Cassandra - Einführung
 
MongoDB WiredTiger Internals
MongoDB WiredTiger InternalsMongoDB WiredTiger Internals
MongoDB WiredTiger Internals
 
Webinar slides: An Introduction to Performance Monitoring for PostgreSQL
Webinar slides: An Introduction to Performance Monitoring for PostgreSQLWebinar slides: An Introduction to Performance Monitoring for PostgreSQL
Webinar slides: An Introduction to Performance Monitoring for PostgreSQL
 
Webinar: MongoDB Schema Design and Performance Implications
Webinar: MongoDB Schema Design and Performance ImplicationsWebinar: MongoDB Schema Design and Performance Implications
Webinar: MongoDB Schema Design and Performance Implications
 
Working with JSON Data in PostgreSQL vs. MongoDB
Working with JSON Data in PostgreSQL vs. MongoDBWorking with JSON Data in PostgreSQL vs. MongoDB
Working with JSON Data in PostgreSQL vs. MongoDB
 
[Pgday.Seoul 2017] 3. PostgreSQL WAL Buffers, Clog Buffers Deep Dive - 이근오
[Pgday.Seoul 2017] 3. PostgreSQL WAL Buffers, Clog Buffers Deep Dive - 이근오[Pgday.Seoul 2017] 3. PostgreSQL WAL Buffers, Clog Buffers Deep Dive - 이근오
[Pgday.Seoul 2017] 3. PostgreSQL WAL Buffers, Clog Buffers Deep Dive - 이근오
 
Elasticsearch for Data Analytics
Elasticsearch for Data AnalyticsElasticsearch for Data Analytics
Elasticsearch for Data Analytics
 
MongoDB performance
MongoDB performanceMongoDB performance
MongoDB performance
 
Designing Structured Streaming Pipelines—How to Architect Things Right
Designing Structured Streaming Pipelines—How to Architect Things RightDesigning Structured Streaming Pipelines—How to Architect Things Right
Designing Structured Streaming Pipelines—How to Architect Things Right
 

Similar to Informix into the future13 july2017

Data-Centric Infrastructure for Agile Development
Data-Centric Infrastructure for Agile DevelopmentData-Centric Infrastructure for Agile Development
Data-Centric Infrastructure for Agile Development
DATAVERSITY
 
Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...
Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...
Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...
IBM France Lab
 

Similar to Informix into the future13 july2017 (20)

Integrating Structure and Analytics with Unstructured Data
Integrating Structure and Analytics with Unstructured DataIntegrating Structure and Analytics with Unstructured Data
Integrating Structure and Analytics with Unstructured Data
 
BLU Acceleration on the Cloud – 101
BLU Acceleration on the Cloud – 101BLU Acceleration on the Cloud – 101
BLU Acceleration on the Cloud – 101
 
Db2 tools
Db2 toolsDb2 tools
Db2 tools
 
Impala Unlocks Interactive BI on Hadoop
Impala Unlocks Interactive BI on HadoopImpala Unlocks Interactive BI on Hadoop
Impala Unlocks Interactive BI on Hadoop
 
What is A Cloud Stack in 2017
What is A Cloud Stack in 2017What is A Cloud Stack in 2017
What is A Cloud Stack in 2017
 
Five Best Practices for Improving the Cloud Experience
Five Best Practices for Improving the Cloud ExperienceFive Best Practices for Improving the Cloud Experience
Five Best Practices for Improving the Cloud Experience
 
What is cloud computing
What is cloud computingWhat is cloud computing
What is cloud computing
 
Are your Cloud Services Secure and Compliant today?
Are your Cloud Services Secure and Compliant today?Are your Cloud Services Secure and Compliant today?
Are your Cloud Services Secure and Compliant today?
 
Bluemix digital innovation_platform
Bluemix digital innovation_platformBluemix digital innovation_platform
Bluemix digital innovation_platform
 
Enabling a hardware accelerated deep learning data science experience for Apa...
Enabling a hardware accelerated deep learning data science experience for Apa...Enabling a hardware accelerated deep learning data science experience for Apa...
Enabling a hardware accelerated deep learning data science experience for Apa...
 
Cloud Innovation Tour - Design Track
Cloud Innovation Tour - Design TrackCloud Innovation Tour - Design Track
Cloud Innovation Tour - Design Track
 
IMS10 unleash the capabilities of new technologies
IMS10   unleash the capabilities of new technologiesIMS10   unleash the capabilities of new technologies
IMS10 unleash the capabilities of new technologies
 
Big Data: InterConnect 2016 Session on Getting Started with Big Data Analytics
Big Data:  InterConnect 2016 Session on Getting Started with Big Data AnalyticsBig Data:  InterConnect 2016 Session on Getting Started with Big Data Analytics
Big Data: InterConnect 2016 Session on Getting Started with Big Data Analytics
 
Cloud 101: The Basics of Cloud Computing
Cloud 101: The Basics of Cloud ComputingCloud 101: The Basics of Cloud Computing
Cloud 101: The Basics of Cloud Computing
 
Stl meetup cloudera platform - january 2020
Stl meetup   cloudera platform  - january 2020Stl meetup   cloudera platform  - january 2020
Stl meetup cloudera platform - january 2020
 
IBM Informix on cloud webcast August 2017
IBM Informix on cloud webcast August 2017IBM Informix on cloud webcast August 2017
IBM Informix on cloud webcast August 2017
 
Achieving Separation of Compute and Storage in a Cloud World
Achieving Separation of Compute and Storage in a Cloud WorldAchieving Separation of Compute and Storage in a Cloud World
Achieving Separation of Compute and Storage in a Cloud World
 
Data-Centric Infrastructure for Agile Development
Data-Centric Infrastructure for Agile DevelopmentData-Centric Infrastructure for Agile Development
Data-Centric Infrastructure for Agile Development
 
Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...
Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...
Bluemix Paris Meetup - Session #8 - 20th may 2015 - Passer au cloud hybride a...
 
Enabling a hardware accelerated deep learning data science experience for Apa...
Enabling a hardware accelerated deep learning data science experience for Apa...Enabling a hardware accelerated deep learning data science experience for Apa...
Enabling a hardware accelerated deep learning data science experience for Apa...
 

Recently uploaded

一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
q6pzkpark
 
sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444
saurabvyas476
 
Simplify hybrid data integration at an enterprise scale. Integrate all your d...
Simplify hybrid data integration at an enterprise scale. Integrate all your d...Simplify hybrid data integration at an enterprise scale. Integrate all your d...
Simplify hybrid data integration at an enterprise scale. Integrate all your d...
varanasisatyanvesh
 
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get CytotecAbortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Riyadh +966572737505 get cytotec
 
Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...
Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...
Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...
mikehavy0
 
如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证
如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证
如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证
zifhagzkk
 
Abortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get CytotecAbortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Riyadh +966572737505 get cytotec
 
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Klinik kandungan
 
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
wsppdmt
 
如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证
如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证
如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证
acoha1
 

Recently uploaded (20)

一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
 
sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444
 
Credit Card Fraud Detection: Safeguarding Transactions in the Digital Age
Credit Card Fraud Detection: Safeguarding Transactions in the Digital AgeCredit Card Fraud Detection: Safeguarding Transactions in the Digital Age
Credit Card Fraud Detection: Safeguarding Transactions in the Digital Age
 
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATIONCapstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
 
Pentesting_AI and security challenges of AI
Pentesting_AI and security challenges of AIPentesting_AI and security challenges of AI
Pentesting_AI and security challenges of AI
 
Simplify hybrid data integration at an enterprise scale. Integrate all your d...
Simplify hybrid data integration at an enterprise scale. Integrate all your d...Simplify hybrid data integration at an enterprise scale. Integrate all your d...
Simplify hybrid data integration at an enterprise scale. Integrate all your d...
 
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get CytotecAbortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
 
Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...
Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...
Abortion Clinic in Kempton Park +27791653574 WhatsApp Abortion Clinic Service...
 
DS Lecture-1 about discrete structure .ppt
DS Lecture-1 about discrete structure .pptDS Lecture-1 about discrete structure .ppt
DS Lecture-1 about discrete structure .ppt
 
Introduction to Statistics Presentation.pptx
Introduction to Statistics Presentation.pptxIntroduction to Statistics Presentation.pptx
Introduction to Statistics Presentation.pptx
 
如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证
如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证
如何办理(Dalhousie毕业证书)达尔豪斯大学毕业证成绩单留信学历认证
 
Abortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get CytotecAbortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get Cytotec
 
SCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarj
SCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarjSCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarj
SCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarj
 
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
 
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
 
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptxRESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
 
如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证
如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证
如何办理(UPenn毕业证书)宾夕法尼亚大学毕业证成绩单本科硕士学位证留信学历认证
 
DAA Assignment Solution.pdf is the best1
DAA Assignment Solution.pdf is the best1DAA Assignment Solution.pdf is the best1
DAA Assignment Solution.pdf is the best1
 
Harnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxHarnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptx
 
SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...
SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...
SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...
 

Informix into the future13 july2017

  • 1. 1 Jump start with the enhanced Informix Thursday, July 13, 2017 12:30 PM EST Pradeep Muthalpuredathe Technology Director and Head of Engineering HCL Shawn Moe Software Architect HCL
  • 2. 2 Safe Harbor Statement 2 Copyright © IBM Corporation 2017. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corporation THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILE EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. IN ADDITION, THIS INFORMATION IS BASED ON CURRENT THINKING REGARDING TRENDS AND DIRECTIONS, WHICH ARE SUBJECT TO CHANGE BY IBM WITHOUT NOTICE. FUNCTION DESCRIBED HEREIN MY NEVER BE DELIVERED BY I BM. IBM SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION. NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, NOR SHALL HAVE THE EFFECT OF, CREATING ANY WARRANTIES OR REPRESENTATIONS FROM IBM (OR ITS SUPPLIERS OR LICENSORS), OR ALTERING THE TERMS AND CONDITIONS OF ANY AGREEMENT OR LICENSE GOVERNING THE USE OF IBM PRODUCTS AND/OR SOFTWARE. IBM, the IBM logo, ibm.com and Informix are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml
  • 3. Informix Into the Future Pradeep Muthalpuredathe (mpradeep@hcl.com) Director – WW Informix Engineering (OneTeam) @mspradeep66 #Informix #Reinventing #Unleashed IBM Analytics HCL Products & Platforms
  • 4. 4 Data is the new currency and at the core of every business but… 1. Only 15% of organizations fully leverage data and analytics à Unlock the potential of all your data available in all data types. Combine it with public or 3rd party data sets 2. Many users don’t have direct or timely access to information à Short-cut / avoid dependencies and democratize access with integrated governance enabling Self-Service 3. 90% of the worlds data cannot be googled à Leverage data where it resides and bring analytic capabilities & cloud benefits to your data 4. The cloud journey is a marathon, not a sprint à Hybrid cloud solutions offer faster, incremental value at lower risk
  • 5. 5 + Announcing a powerful partnership for Informix
  • 6. 6 Investing in Informix 15-year strategic partnership to jointly develop and market the IBM Informix family of products, effective April 1, 2017Partnership Enhancement Next Generation The best of our shared knowledge and teaming experience will enhance current Informix products HCL will build the next generation of Informix products based on market needs and client priorities
  • 7. IBM and HCL IP Partnership – Key Highlights IBM and HCL have entered into a 15+ year IP partnership. HCL has set up a new division called HCL Products and Platforms. IBM will continue to sell IBM Informix products through IBM channels and continues to own Level 1 support. HCL is responsible for dev, all other support and customer advocacy. HCL will also bring additional Sales and Marketing to Informix. All of the IBM development and support engineers have joined HCL. Customers will continue to have access to the expertise of the labs. HCL has a Client Advocacy team building on IBM’s Lab Advocate program. This is a ‘hypercare’ approach to customer relationships. HCL will be accelerating the product roadmaps, and delivering new features and functionality, benchmarks and new Cloud Offerings HCL will be refreshing Lab Services offerings to provide customers higher ROI and faster time to value. 01 02 03 04 05 06
  • 9. 9 HCL Products & Platforms Division A services company mindset to customer relationship Accelerate roadmaps and bringing new features to our customers Real innovation that solves customer problems Provide insights beyond customer input by using our own products Bringing speed, insights and innovations (big and small) to create value for our customers in DevOps, Automation and Application Modernization software
  • 10. 10 HCL Client Advocacy A customer-centric approach is the foundational element of the HCL Products business philosophy. We strive to deliver a high-touch, highly interactive approach to customer relationships, and to provide the greatest value and service to customers through strong connections to our product experts. More cohesive and collaborative approach to the client relationship Incessant support for client's product usage and business needs More frequent touchpoints with product roadmaps Proactive communication on product news and updates Deeper understanding of the client's business and challenges
  • 11. 11 Informix Investment Priorities Delight the Client Cloud Integration IoT Enhance value & experience Enable hybrid cloud journey Extend use cases and simplify adoption
  • 12. 12 Short Term Medium Term Long Term Under Review Delight the Customer Expand on Cloud Optimize for IoT Backup to Cloud - Softlayer Smart Triggers – Server & JDBC DSX Analytics for Informix Self Service provisioning for Cloud (PAYGO) BlueMix ICIAE Certification Informix Roadmap Backup to Cloud – AWS, Google, … Smart triggers- additional APIs Simplified Licensing Simplified Upgrade and Deployment Benchmarks Hosted Services – Various Cloud Platforms High Availability Offering Sensors in motion Elastic Scaling Simplify Solution Development, focus on ISVs – for IoT, Cloud and OnPrem SQL Enhancements Ease of Use and Administration Developer Ecosystem and Community Engagement Cross Focus Items HTAP Recompress Data Dictionary Edge-2-Cloud Solution stack IWA for Cloud DBaaS Offering TimeSeries compression on strings Blockchain Integration High frequency ingest of data (sub second)
  • 13. 13 Informix on Social Media http://www.informixcommunity.com/
  • 14. 14 Informix: You have the right tool for the job – all in one toolbox! • Outstanding Performance and Uptime • Application Development via modern APIs • Hybrid storage and hybrid applications with data consistency – The only database that can be utilized and provisioned on heterogeneous, commodity hardware, different O/S, and different database versions • Modern interface providing JSON / BSON native support – Rapid delivery of applications – Access Relational, TimeSeries, Spatial, Graph data from SQL and/or NoSQL application • Super scale out – Multiple nodes, multiple versions, multiple copies, data sharding – Best of the Breed HA and Workload management solutions • At the Edge, On-Prem and Cloud
  • 15. Informix New Features Overview Shawn Moe Informix Engineering Lab smoe@hcl.com July 2017
  • 16. 16 Agenda • What’s new in 12.10.xC9? – Released in July 2017 – Backup to Cloud Object Store – Smart Triggers/Push Data – Informix on Cloud – Tracking Moving Objects • What’s new in 12.10.xC8? – Released in December 2016 – Highlights – Encryption at Rest – Regular Expressions – JDBC 4.0 • Version independent enhancements
  • 17. 17 Backup to Cloud Object Storage User Story – As the CIO of Acme Manufacturing, I need to be able to store our database backups in a secure, offsite location. In the event of a disaster, we must be able to quickly recover our systems using these backups, possibly from another location.
  • 18. 18 Object Storage • Objects are stored as an unknown stream of bytes • Objects are not files, nor are they are not disk blocks. The physical way they are stored and where they are stored is a black box for the user • Objects will have properties, tags or characteristics that can be attached to it, so it allows for multi-dimensional organization, search and retrieval
  • 19. 19 Object Storage Characteristics • Implemented as a black box, although some implementations are open, like Swift, which is used by SoftLayer • Can be distributed • Can be redundant • Could provide versioning • Could provide modification capabilities • Is always managed through an API, so there is usually not a way to see “files” or “directories” • Objects have names, and you can use “/” as part of a name. This makes the names look like paths, but they are not!
  • 20. 20 Getting started in SoftLayer - Create an Object Storage “Bucket”
  • 21. 21 Cloud Backups using On-Bar • It is possible to use STDIO devices to take direct backups from On-Bar to a Cloud Provider • The feature is implemented in the Primary Storage Manager (PSM) • It is NOT possible to use this if you use a third party storage manager
  • 22. 22 STDIO devices • A new type of device called an “STDIO device” was implemented in PSM • This new device type sends the backup stream data to, or gets the restore stream data from, a separate process (e.g. sftp) • Communication to/from this process occurs over the standard input/standard output of that process, very similar to using a pipe (Diagram: On-Bar and PSM use the Archive API in IDS to drive curl, sftp, or aws-cli, which talks to the object storage)
  • 23. 23 Create STDIO device onpsm -D -add /home/shawn/mycurl -t STDIO --stdio_warg "BACKUP @obj_name1@.@obj_id@.@obj_part@" --stdio_rarg "RESTORE @obj_name1@.@obj_id@.@obj_part@" --stdio_darg "DELETE @obj_name1@.@obj_id@.@obj_part@" --max_part_size <size in KB> • Notice the type “STDIO” • Notice the device path is a path to an executable, usually a shell script that will take/retrieve the data • We have to provide the arguments used to invoke the program for backup (send data), restore (get data) and delete (erase data)
  • 24. 24 Upcoming Work… • Distribute scripts to connect to the most popular Cloud providers • Define a way to do this configuration automatically • Provide the same capability to ontape • Add the capability to send the data offsite in addition to keeping a local copy (ifxbkpcloud.jar will be replaced with new functionality)
  • 25. 25 Smart Triggers & Push Data Notifications User Story – As the CFO of Acme Manufacturing, we are bound by various regulations that require us to record various information about large purchases of certain products. We need an easy way to monitor these transactions as they update our enterprise database.
  • 26. 26 Smart Trigger Value Proposition • Selectively trigger events based on changes in server data • Real time ‘push’ notifications help clients avoid polling the server • Small data flow allows simple small clients to work with many triggered events at once
  • 27. 27 Smart Triggers in JDBC • Smart Triggers are registered events on the server that you subscribe to from your JDBC client – Triggers are based on a SQL statement query that matches changes made to a table – SELECT id FROM CUSTOMER WHERE cardBalance > 20000; • One client can listen to many events from many tables, allowing a wide range of monitoring opportunities – Monitor account balances – Take action on suspicious behaviors
  • 28. 28 What does a Smart Trigger Look Like? • It’s designed to be a simple set of classes/interfaces in Java • Designed for both simple standalone monitor applications as well as integration into multi-threaded environments • Leverages the Push Notification feature in the server to do the heavy lifting • Receives JSON documents when a trigger occurs • Adding Smart Triggers to the JDBC driver allows other languages to have this support – Groovy, JavaScript (NodeJS), Python, Scala and more
  • 29. 29 Use case: Banking • Bank accounts – I want to be alerted when an account balance drops below zero dollars – I don’t want to write SPL or install stored procedures – I want to be notified in my client application – I don’t want to poll the database for this information or re-query each time a balance changes from the client
  • 30. 30 Smart Trigger Bank Code
public class BankMonitor implements IfmxSmartTriggerCallback {
    public static void main(String[] args) throws SQLException {
        IfxSmartTrigger trigger = new IfxSmartTrigger(args[0]);
        trigger.timeout(5).label("bank_alert");
        trigger.addTrigger("account", "informix", "bank",
                "SELECT * FROM account WHERE balance < 0", new BankMonitor());
        trigger.watch(); // blocking call
    }

    @Override
    public void notify(String json) {
        System.out.println("Bank Account Ping!");
        if (json.contains("ifx_isTimeout")) {
            System.out.println("-- No balance issues");
        } else {
            System.out.println("-- Bank Account Alert detected!");
            System.out.println("   " + json);
        }
    }
}
  • 31. 31 Server Architecture Diagram: OLTP clients write changes that land in the logical log; the grouper (“Snoopy”) assembles committed events and hands event data to registered push-data clients. A sample push-data client does: sesid = task("pushdata open"); task("pushdata register", {json}); then loops: bytes = ifx_lo_read(sesid, buf, size, err); execute action.
  • 32. 32 API Calls via sysadmin Tasks • TASK('pushdata open'); – Register client session as a push data session – Returns session id; you need this id to read event data • TASK('pushdata register', {event and session attributes}); – Register event conditions and session specific attributes • Smart blob read API (ifx_lo_read() or equivalent call) to read event data – Pseudo smart blob interface to read event data – Returns JSON document(s) – Can be configured as a blocking or non-blocking call • TASK('pushdata deregister', {event condition details}); – De-register event conditions
  • 33. 33 Example event data documents
• Sample output for an insert operation:
{"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573177224,"commit_time":1488243530,"op_num":1,"rowdata":{"uid":22,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T10:35:10.000Z"}}}
• Sample output for an update operation (includes before_rowdata):
{"operation":"update","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573308360,"commit_time":1488243832,"op_num":1,"rowdata":{"uid":21,"cardid":"7777-7777-7777-7777","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"25-Jan-2017 16:15"}},"before_rowdata":{"uid":21,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T10:35:10.000Z"}}}
• Sample output for a delete operation:
{"operation":"delete","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573287760,"commit_time":1488243797,"op_num":1,"rowdata":{"uid":22,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T13:35:06.000Z"}}}
• Sample output for a multi-row document when the maxrecs input attribute is set greater than 1:
{[ {"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573309999,"commit_time":1487781325,"op_num":1,"rowdata":{"uid":"7","cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T15:10:10.000Z"}}}, {"operation":"insert","table":"creditcardtxns","owner":"informix","database":"creditdb","label":"card txn alert","txnid":2250573177224,"commit_time":1488243530,"op_num":1,"rowdata":{"uid":22,"cardid":"6666-6666-6666-6666","carddata":{"Merchant":"Sams Club","Amount":200,"Date":"2017-05-01T16:20:10.000Z"}}} ]}
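The event documents above arrive at the client as plain JSON strings. As a minimal sketch of pulling a top-level field out of such a document in Java (a naive regex-based extractor for illustration only; a real client should use a proper JSON parser, since this ignores nesting and escape sequences — the `EventDocDemo` class and `field` helper are ours, not part of any Informix API):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EventDocDemo {
    // Extract a top-level string field ("name":"value") from an event document.
    // Illustration only: does not handle nested objects, numbers, or escapes.
    static String field(String json, String name) {
        Matcher m = Pattern
                .compile("\"" + Pattern.quote(name) + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String doc = "{\"operation\":\"insert\",\"table\":\"creditcardtxns\","
                   + "\"owner\":\"informix\",\"database\":\"creditdb\","
                   + "\"label\":\"card txn alert\"}";
        System.out.println(field(doc, "operation")); // insert
        System.out.println(field(doc, "table"));     // creditcardtxns
    }
}
```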
  • 34. 34 onstat Commands • Print all sessions: onstat -g pd • Print all event conditions: onstat -g pd event • Print information about a specific session: onstat -g pd 70 • Print event conditions for a specific session: onstat -g pd 39 event
  • 35. 35 Comparing Smart Triggers and Regular I/U/D Triggers
Smart Trigger | Regular Trigger (I/U/D)
Post commit | Pre commit
Register trigger on a specific dataset/event | Trigger gets fired for all changes
Asynchronous with linear scalability | Synchronous
Data is in JSON format | SQL format
Trigger logic executes in the client | Trigger logic executes in the server
Natural fit for the event-driven programming model; no schema changes required to define a new smart trigger | Requires schema changes and an exclusive lock on the table to modify the trigger definition
  • 36. 36 Informix on Cloud User Story – As a quality assurance manager at Acme Manufacturing, we need additional Informix instances, often at short notice, for various periods of time, to test our new functionality in several configurations used by our customers. We can’t justify purchasing additional hardware and software licenses for machines which will not be used every day.
  • 37. 37 IBM Informix on Cloud is available on IBM Bluemix • Cloud hosted service includes the Informix license and cloud “hardware” • T-shirt sizing: S, M, L, XL instances match Informix license and hardware capacities to provide optimal value at each size • Informix instance hosted in IBM SoftLayer data centers with worldwide deployment options • IBM provisions, configures, and tests the instance and then passes the credentials on to the customer • Full Informix functionality to support all kinds of workloads: • OLTP • Hybrid NoSQL, SQL, TimeSeries and Spatial • IoT • IWA • Rapid application development with support for SQL, MongoDB, REST or MQTT themed applications • It’s Informix! Rationale: deliver high-quality cloud service with low cost of operations
  • 38. 38 Informix on Cloud – Bluemix “Pay & Go” Subscription • Bluemix Pay & Go functionality was just released in early July 2017 • Uses credit card information tied to your account – If you don’t want to do this, you can still go through IBM sales • All existing “Order with IBM Sales assistance” offerings remain available • The complete process from selecting a virtual server image to having a running, provisioned & configured Informix server is now about 20 minutes!
  • 39. 39 Tracking Moving Objects User Story – As a software developer at Acme Manufacturing, I need a mechanism to help me track locations of our delivery vehicles (trucks and drones) as they make deliveries during the day. Our application needs to be able to visualize the location of each of our vehicles in relation to each other and to various pickup and drop-off locations. In some situations, we need to be able to record locations at sub-second intervals.
  • 40. 40 Track moving objects • You can track a moving object, such as a vehicle, by capturing location information for the object at regular time intervals. You can use the new spatiotemporal search extension to index the data and then query on either time or location to determine the relationship of one to the other. You can query when an object was at a specified location, or where an object was at a specified time. You can also find the trajectory of a moving object over a range of time. • The spatiotemporal search extension depends on the TimeSeries and spatial extensions. You store the spatiotemporal data in a TimeSeries data type with columns for longitude and latitude. You index and query the spatiotemporal data with the new spatiotemporal search functions. You can also query spatiotemporal data with time series and spatial routines. • A greater frequency for tracking moving or stationary objects is available, and time can now be entered as a string or a number.
  • 41. 41 Spatiotemporal indexing parameters • The spatiotemporal index parameters are defined as a BSON document. The parameters define how spatiotemporal data is configured and the storage spaces and extent sizes for subtrack tables and their indexes • New for 12.10.xC9 is the averageStationaryGPSRate parameter – Specifies how often a position reading is generated for a stationary object – The seconds value is a floating-point number, with a range greater than or equal to .001 and less than or equal to 1800, that represents the number of seconds between readings. Default is 60.
  • 42. 42 Time units • Five parameters, averageMovingGPSRate, averageStationaryGPSRate, minStationaryInterval, minNoDataInterval, and maxGPSTimeIntervalPerTrajectory, can now be specified either as a number or a string. – For example: "averageMovingGPSRate":10 can also be specified as a string ("averageMovingGPSRate":"10"). – Additional functionality can be added by specifying an optional unit of time measure (represented as a string) after the number: s for second, m for minute, h for hour and d for day. • For averageMovingGPSRate and averageStationaryGPSRate, the result is used as a real number • For minStationaryInterval, minNoDataInterval, and maxGPSTimeIntervalPerTrajectory, the result is cast to an integer and used
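Putting the unit rules above together, an illustrative parameter document might look like the following. This is a sketch only: the parameter names are the five listed above, the values with unit suffixes follow the s/m/h/d convention described, but the specific numbers are invented for illustration and any real index definition will carry additional storage parameters.

```json
{
  "averageMovingGPSRate": "10s",
  "averageStationaryGPSRate": "1m",
  "minStationaryInterval": "5m",
  "minNoDataInterval": "1h",
  "maxGPSTimeIntervalPerTrajectory": "1d"
}
```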
  • 43. 43 12.10xC8 – New Feature Summary • Encryption at Rest (EAR) • Regular Expression (REGEX) support via SQL & JSON Listener • JDBC 4.0 support • Informix on Cloud • Synchronous sharded inserts • Mongo 3.2 API support • TimeSeries Analytics functions • Embed Informix and deploy as a non-root user • Rename system generated indexes • Real result set cursoring support in the listener • Dump ER commands from syscdr database • SQL interface to obtain temporary space usage • IWA – improve support for non-correlated sub-queries and derived tables Released in December 2016
  • 44. 44 Encryption at Rest User Story – As the CIO for Acme Manufacturing, I am responsible for the integrity of our customer data, and am required to ensure that our database data on disk is encrypted.
  • 45. 45 What is Encryption at Rest? • Encryption of your data at rest (on disk) vs. encryption of your data in flight • All, some, or none of your selected dbspaces are encrypted. You control which dbspaces are encrypted • Existing dbspaces can be encrypted • Encryption process occurs in low level I/O routines as data is being read/written to disk • Informix feature added in 12.10.xC8 and available in all editions at no extra cost. No extra or 3rd party software required.
  • 46. 46 Quick Start 1. Set DISK_ENCRYPTION in $ONCONFIG file DISK_ENCRYPTION keystore=jc_keystore 2. oninit -ivy (Snippet of verbose output on next slide)
  • 47. 47 ... ... Initializing Dictionary Cache and SPL Routine Cache...succeeded Initializing encryption-at-rest if necessary...succeeded Initializing encryption-at-rest structures (part 1)...succeeded Bringing up ADM VP...succeeded Creating VP classes...succeeded Forking main_loop thread...succeeded Initializing DR structures...succeeded Forking 1 'ipcshm' listener threads...succeeded Starting tracing...succeeded Initializing 1 flushers...succeeded Clearing encrypted root chunk 1 before initialization... 25% done. 50% done. 75% done. 100% done. Initializing encryption-at-rest structures (part 2)...succeeded Initializing log/checkpoint information...succeeded ... ... Quick Start
  • 48. 48 Results of Quick Start • A new instance with one chunk, encrypted using the default cypher (aes128) – We currently support aes128, aes192, and aes256 • Keystore and “stash file” created in $INFORMIXDIR/etc – $INFORMIXDIR/etc/jc_keystore.p12 – $INFORMIXDIR/etc/jc_keystore.sth
  • 49. 49 How can I tell whether EAR is enabled? • oncheck: oncheck -pr | head -15 or oncheck -pr | grep rest • Select from sysmaster:sysshmhdr: select value from sysshmhdr where name = "sh_disk_encryption"; • Look for "Encryption-at-rest is enabled using cipher" in the message log • onstat -g dmp: onstat -g dmp <rhead addr> rhead_t | grep sh_disk_encryption
  • 50. 50 How can I tell whether a space is encrypted? • Select from sysdbspaces select name from sysdbspaces where is_encrypted = 1; • onstat -d (Example on next slide)
  • 51. 51 onstat -d output – look for the ‘E’ (encrypted) flag:
IBM Informix Dynamic Server Version 12.10.F -- On-Line -- Up 00:03:16 -- 38324 Kbytes
Dbspaces
address  number flags      fchunk nchunks pgsize flags owner    name
4484f028 1      0x1        1      1       2048   N BA  informix rootdbs
4484fdd0 2      0x10000001 2      1       2048   N BAE informix jcdbs
2 active, 2047 maximum
Chunks
address  chunk/dbs offset size   free  bpages flags  pathname
4484f268 1 1       0      100000 35118        PO-B-- /work3/JC/rootchunk
44958450 2 2       0      5000   3209         PO-B-- /work3/JC/chunk2
2 active, 32766 maximum
NOTE: The values in the "size" and "free" columns for DBspace chunks are displayed in terms of "pgsize" of the DBspace to which they belong.
Expanded chunk capacity mode: always
  • 52. 52 Creating new spaces • With EAR enabled, new spaces will be encrypted by default • To override that default: onspaces -c -d unencrypted_space -p /work3/JC/chunk3 -o 0 -s 1000 -u execute function task("create unencrypted dbspace… execute function task("create unencrypted blobspace… etc…
  • 53. 53 Disabling chunk clearing • By default a chunk is cleared (filled with blank pages) before any page contained in it is encrypted. From the message log: 15:22:48 Clearing encrypted chunk 6 before initialization... • Chunk clearing can be disabled by setting the undocumented CLEAR_CHK_B4_ENCRYPT configuration parameter to 0
  • 54. 54 First 12 reserved pages are not encrypted (Diagram: ROOT chunk with pages 0-11 marked NOT ENCRYPTED, followed by encrypted data) When the server boots, it has to be able to read something from disk that indicates whether EAR is enabled. Obviously that info can't be encrypted.
  • 55. 55 What’s in memory? • Pages in the buffer pool are not encrypted • Decryption happens during the read from disk, at a low level in the I/O code. Encryption happens at the same level during a write • onstat -g dmp will display decrypted data • Shared memory dump files will contain decrypted data, but not encryption keys
  • 56. 56 What’s in the key store file? • The Key Store file ($INFORMIXDIR/etc/<keystore name>.p12) contains a single encryption key, which is used only for ROOTDBS (dbspace 1) • The Key Store file is encrypted • To decrypt the Key Store file, the server needs the Master Key
  • 57. 57 Where is the master key? • The Master Key is stored in a stash file ($INFORMIXDIR/etc/<keystore name>.sth) • The stash file is encrypted. The server knows how to read it only because GSKit knows how to read it • Best practice is to store encrypted chunks on a separate disk from $INFORMIXDIR • Users are expected to back up $INFORMIXDIR with some regularity • Support for a networked key store is planned
  • 58. 58 Encryption keys and spaces • Each space in an instance uses a different encryption key • Keys 2-2047 are derived from Key 1 at run-time and never stored anywhere on disk
  • 59. 59 No encryption dependencies across nodes* • Encryption in a secondary is entirely independent of encryption in a primary • A primary may be encrypted while a secondary is not, and vice-versa • A different set of spaces may be encrypted in a primary vs. a secondary *SDS is the exception
  • 60. 60 Archives are not encrypted (by default) • Pages are decrypted before they are sent to either ontape or onbar • No key store or stash file is needed to restore any archive • Admins should continue to use BACKUP_FILTER or another preferred method to encrypt archives – We are looking at providing a BACKUP_FILTER script to encrypt archives by default
  • 61. 61 Any archive can be used to encrypt spaces • First, enable encryption by setting DISK_ENCRYPTION $ONCONFIG parameter – Requires a shutdown or bounce of a running instance • Perform either a cold or warm restore ontape -r -encrypt onbar -r -encrypt
  • 62. 62 Regular Expressions User Story – As an application developer at Acme Manufacturing, I have developed many complex queries on our data that use regular expressions. We use these types of queries from Unix shell scripts and need to be able to use the same query syntax when querying our Informix database.
  • 63. 63 Regular Expressions Overview • Regular expressions combine literal characters and meta-characters to define the search and replace criteria. You run the functions from the Informix Regex extension to find matches to strings, replace strings, and split strings into substrings. • The Informix Regex extension supports extended regular expressions, based on the POSIX 1003.2 standard, and basic regular expressions • You can specify case-sensitive or case-insensitive searching. • You can search single-byte character sets or UTF-8 character sets.
  • 64. 64 Regular Expressions (REGEX) • Built into the server • ifxregex.1.00 – Different name & different UDR names to distinguish it from the old DeveloperWorks version • Autoregistered – Registered on first use (preferred) • Can also be registered with SQL registration – execute function sysbldprepare('ifxregex.*', 'create') • Same database restrictions as other datablades
  • 65. 65 Datatypes • To use regex pattern matching, you must provide the text data as a CHAR, LVARCHAR, NCHAR, NVARCHAR, VARCHAR, or CLOB data type – If you want to replace text in a CLOB value with the regex_replace() function, you must have a default sbspace • Do not inherently use any database indexes
  • 66. 66 REGEX Meta Characters
Metacharacter  Action
^              Beginning of string
$              End of string
|              Or
[abc]          Match any character enclosed in [ ]
[^abc]         Match any character not enclosed in [ ]
[a-c]          Match the range of characters
[:cclass:]     Match the characters in the character class (as in ctype.h): alpha, alnum, lower (ASCII centric)
[=cname=]      Match the character by its name, e.g. quotation-mark, asterisk
.              Match any character
( )            Group the regular expression within the parentheses
  • 67. 67 REGEX Meta Characters
Metacharacter  Action
?              Match zero or one of the preceding expression. Not applicable to basic regular expressions.
*              Match zero, one, or many of the preceding expression
+              Match one or many of the preceding expression. Not applicable to basic regular expressions.
\              Use the literal meaning of the metacharacter. For basic regular expressions, treat the next character as a metacharacter.
  • 68. 68 REGEX Replacement Meta Characters
Metacharacter  Action
&              Reference the entire matched text for string substitution. For example, the statement execute function regex_replace('abcdefg', '[af]', '.&.') replaces 'a' with '.a.' and 'f' with '.f.' to return: '.a.bcde.f.g'.
\n             Reference the subgroup n within the matched text, where n is an integer 0-9. \0 and & have identical actions. \1 - \9 substitute the corresponding subgroup.
\              Use the literal meaning of the metacharacter; for example, \& escapes the ampersand symbol and \\ escapes the backslash. For basic regular expressions, treat the next character as a metacharacter.
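The '&' substitution above plays the same role as Java's $0 group reference, so the '.a.bcde.f.g' result can be sanity-checked outside the server with a one-line Java analog (java.util.regex is not POSIX ERE, but the behavior coincides for a simple character class like this; the `RegexReplaceDemo` class name is ours):

```java
public class RegexReplaceDemo {
    public static void main(String[] args) {
        // Informix: regex_replace('abcdefg', '[af]', '.&.')
        // Java analog: $0 plays the role of & (the whole matched text)
        String out = "abcdefg".replaceAll("[af]", ".$0.");
        System.out.println(out); // .a.bcde.f.g
    }
}
```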
  • 69. 69 regex_match() function • regex_match( str lvarchar|clob, re lvarchar, copts integer DEFAULT 1) returns boolean • Example execute function regex_match ( 'Regex module', '[Mm]odule|DataBlade'); (expression) t
  • 70. 70 regex_replace() function • regex_replace( str lvarchar|clob, re lvarchar, rep lvarchar, limit integer DEFAULT 0, copts integer DEFAULT 1) returns lvarchar|clob • Example execute function regex_replace ( 'Regular expressions combine literal Characters and metacharacters.', '( |^)[A-Za-z]*[Cc]haracter[a-z]*[ .,$]', '<b>&</b>'); (expression) Regular expressions combine literal<b> Characters </b>and<b> metacharacters.</b> – Regular expressions combine literal characters and metacharacters.
  • 71. 71 regex_extract() function • regex_extract( str lvarchar|clob, re lvarchar, limit integer DEFAULT 0, copts integer DEFAULT 1) returns lvarchar • Iterator • Example execute function regex_extract( 'How much wood could a woodchuck chuck if a woodchuck could chuck wood? A woodchuck could chuck as much wood as a woodchuck would chuck if a woodchuck could chuck wood.', 'wo[ou]l?d[a-z]*[- .?!:;]', 2 ); (expression) wood (expression) woodchuck 2 row(s) retrieved. – With no limit, 10 rows are retrieved, containing wood, woodchuck, and would
  • 72. 72 regex_split() function • regex_split( str lvarchar|clob, re lvarchar, limit integer DEFAULT 0, copts integer DEFAULT 1) returns lvarchar • The regex_split function and the regex_extract function perform the complete opposite actions of each other.
  • 73. 73 regex_extract() & regex_split() functions example execute function regex_extract( ’Jack be nimble, Jack be quick, Jack jump over the candlestick.’, ’( |^)[A-Za-z]*ick’ ); (expression) quick (expression) candlestick 2 row(s) retrieved. execute function regex_split( ’Jack be nimble, Jack be quick, Jack jump over the candlestick.’, ’( |^)[A-Za-z]*ick’); (expression) Jack be nimble, Jack be (expression) , Jack jump over the (expression) . 3 row(s) retrieved.
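For quick experimentation outside the server, the regex_extract() behavior on the nursery-rhyme example above can be mimicked with a find() loop in Java. This is a sketch, not the Informix implementation; java.util.regex syntax differs slightly from POSIX ERE, and the `RegexExtractDemo` class with its `extract` helper (which trims the leading-space capture for readability) is ours:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexExtractDemo {
    // Collect up to 'limit' matches of re in s (limit 0 = no limit), trimmed,
    // mirroring regex_extract()'s iterator-style output
    static List<String> extract(String s, String re, int limit) {
        List<String> out = new ArrayList<>();
        Matcher m = Pattern.compile(re).matcher(s);
        while (m.find() && (limit == 0 || out.size() < limit)) {
            out.add(m.group().trim());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(extract(
            "Jack be nimble, Jack be quick, Jack jump over the candlestick.",
            "( |^)[A-Za-z]*ick", 0)); // [quick, candlestick]
    }
}
```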
  • 74. 74 Regular Expressions are also supported in MongoDB clients & REST • db.mycollection.find( { "description": /https?:\/\// } ) • db.mycollection.find( { "description": /https?:\/\//i } ) – Adding case insensitive flag (i) • db.mycollection.find( { "description": {"$regex": "https?://" } } ) – This is the exact same query as the first one, just explicitly using the $regex operator • db.mycollection.find( { "description": {"$regex": "https?://", "$options" : "i"} } ) – Adding case insensitive flag (i) to $regex syntax • GET /db/mycollection?query={ "description": /https?:\/\// }
  • 75. 75 JDBC 4.0 Compliance User Story – As an application developer at Acme Manufacturing, my Java applications have to support several different DBMS. I need my application to be as database agnostic as possible, and so all the JDBC drivers that we use must be JDBC 4.0 compliant so that we do not have to maintain special logic for any particular DBMS.
  • 76. 76 JDBC 4.10.JC8 Features • JDBC 4.0 Compliance – Almost a hundred new APIs implemented or enhanced to provide 4.0 compliance – Compliance doesn’t necessarily mean all JDBC 4.0 methods are supported
  • 77. 77 4.10.JC8 – ResultSet enhancements • isClosed() and getHoldability() methods • update* methods now work with long values • Before, this was what you had – resultSet.updateBlob("blobcolumn", inputStream, int length); • Now you can also use – resultSet.updateBlob("blobcolumn", inputStream, long length); • Before 4.10.JC8 you couldn’t always send an input stream that didn’t have a length specified; now you can – resultSet.updateAsciiStream("charcolumn", new FileInputStream("/myfile.txt")); • This was done for all ResultSet update APIs
  • 78. 78 JDBC 4.0 Compliance • Connection.java – Gets proper createBlob() & createClob() API’s • Statement objects get a minor update boolean isClosed() throws SQLException; void setPoolable(boolean poolable) throws SQLException; boolean isPoolable() throws SQLException; • Blob API gets filled out a bit free(); getBinaryStream(long pos, long length); • Clob API gets filled out a bit free(); getCharacterStream(long pos, long length);
  • 79. 79 PreparedStatement enhancements • Added the ability to use the long data type in set* APIs – Before, you could only set up to a 2 GB object due to the use of int; now you can set blob/clob data up to Informix’s maximum limit • Fixed a few areas where a Reader or InputStream is passed in, as it was possible to incorrectly determine how long it was. Data from these streams is now correctly pulled in. • Implemented more set* APIs around clobs and character streams • CallableStatement gets the same treatment
  • 80. 80 IfxSmartBlob enhancements • A prerequisite for a number of JDBC 4.0 compliance implementations is the ability to write streamed data to a smart large object • Added 6 new method calls to IfxSmartBlob.java – These are helper methods for your existing Blob/Clob APIs – They allow streaming of any length of data from a stream object in Java into a blob/clob (up to what Informix supports, or the size of a long, which is huge)
public long write(int lofd, InputStream is) throws SQLException
public long write(int lofd, InputStream is, long length) throws SQLException
public long writeWithConversion(int lofd, InputStream is) throws SQLException
public long writeWithConversion(int lofd, InputStream is, long length) throws SQLException
public long writeWithConversion(int lofd, Reader r) throws SQLException
public long writeWithConversion(int lofd, Reader r, long length) throws SQLException
  • 81. 81 IfxSmartBlob enhancements • Created a default 32K buffer (matching the internal 32K buffer size we use for sending chunks of data through the network) – Adjustable with setWriteStreamBufferSize(int) • Any codeset conversion that used to be done via a write to a temp file is now done purely in memory – Much faster, and creating files on disk to do this work is avoided
  • 82. 82 4.10.JC8 Features • JDBC Packaging – Combined ifxjdbc.jar and ifxjdbcx.jar – Removed the old ifxjdbcx.jar, as it was small, had extra overhead to build and test, and its features complemented what was in ifxjdbc.jar already – Removed SQLJ from the JDBC installer – Not maintained; you can still get it from older drivers or IBM JCC – Simplifies and streamlines what we produce and what you see – We are no longer generating Javadoc for the BSON APIs – Javadocs and source for BSON functions are available online already – http://api.mongodb.com/java/2.2
  • 83. 83 JDBC and Maven • Starting with 4.10.JC8W1, Informix JDBC drivers are published to Maven Central! • Maven artifacts prefer semantic versioning – For JDBC we use 3 or 4 digits – The latest JDBC driver is 4.10.8.1 • This allows you to easily and programmatically grab the driver • Use your own Gradle, Maven, or sbt build to pull in the latest versions of the driver • You can even use curl or wget to pull down the file directly from the web • You can stage drivers into your own internal Maven repository • This link takes you to Maven’s search page with details about the driver and version: http://mvnrepository.com/artifact/com.ibm.informix/jdbc/4.10.8.1
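With the driver on Maven Central under the coordinates in the mvnrepository link, pulling it into a Maven build is a single dependency entry (group/artifact/version taken from that link; adjust the version as new drivers ship):

```xml
<dependency>
    <groupId>com.ibm.informix</groupId>
    <artifactId>jdbc</artifactId>
    <version>4.10.8.1</version>
</dependency>
```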
  • 88. 88 New Connectivity to Informix • New Informix Python driver – https://github.com/ifxdb/PythonIfxDB • Informix Node.js driver – https://github.com/ifxdb/node-ifx_db – https://www.npmjs.com/package/ifx_db • Drivers developed and supported by Informix lab • Both currently require full CSDK installation – but this is changing!
  • 89. 89 Informix Hybrid Support – All Clients can access all Data Models • NoSQL ↔ SQL Translation • Wire Listeners for MongoDB, REST & MQTT protocols • SQLI, DRDA Protocol Support • Relational, Collection, Time Series, & Spatial Data Support (Diagram: mobile, desktop and web applications connect as REST, MongoDB, SQLI, DRDA and MQTT clients, through the Informix wire listener, to the Informix DBMS, which stores spatial, time series, JSON collection and relational table data)
  • 90. 90 • IBM Smart Gateway kit - https://ibm.biz/BdXr2W • Code samples - https://ibm.biz/BdX4QV • Github - https://github.com/IBM-IoT/ • Free Informix Developer Edition - https://ibm.biz/BdXp2g • Innovator-C edition on Docker Hub https://registry.hub.docker.com/u/ibmcom/informix-innovator-c/ • Developer edition on Docker Hub https://registry.hub.docker.com/u/ibmcom/informix- developer-database/ • Informix Developer Edition for Raspberry Pi (32bit) https://registry.hub.docker.com/r/ibmcom/informix-rpi/ • Client and connectivity examples https://github.com/ibm-informix/informix-client- examples Developers - Get Started! Docker Hub
  • 91. 91 Some Useful Information • Bloor White Paper – IBM Informix and the Internet of Things - http://ibm.co/2bITDyU • IBM Informix - http://www-01.ibm.com/software/data/informix/ • IBM Informix Support - http://www-947.ibm.com/support/entry/portal/overview/software/information_management/informix_servers • IBM developerWorks pages for Informix - http://www.ibm.com/developerworks/data/products/informix/ • Informix International User Group (IIUG) - http://www.iiug.org/index.php • Informix Community - http://www.informixcommunity.com/ • Planet IDS - http://planetids.com/ • IBM Informix on LinkedIn - http://www.linkedin.com/groups?home=&gid=4029470&trk=anet_ug_hm • IBM Informix on Facebook - https://www.facebook.com/IBM.Informix • IBM Informix on Twitter - https://twitter.com/WW_Informix • Informix YouTube Channel - https://www.youtube.com/channel/UCsdfm-BDILWYPM04F7jdKhw • IBM Informix Blogs (a few of them): • https://www.ibm.com/developerworks/community/blogs/smoe/?lang=en • https://www.ibm.com/developerworks/community/blogs/idsteam/?lang=en • https://www.ibm.com/developerworks/community/blogs/fredho66/?lang=en_us • https://www.ibm.com/developerworks/community/blogs/idsdoc/?lang=en_us
  • 92. 92 Shawn Moe – smoe@hcl.com