DB2 Security and PCI
A BEST PRACTICES GUIDE
Ulf T. Mattsson,
Chief Technology Officer,
THE PAYMENT CARD INDUSTRY (PCI) DATA SECURITY
PCI is a set of collaborative security requirements for the protection of credit card
transactions and cardholder data across all brands. This paper will review DB2 solutions that
comply with the requirements for data-at-rest encryption in the PCI Data Security
Standard and are based on a design that also provides separation of duties, audit, and
central key management. The PCI standard incorporates sound and necessary security
practices, such as encryption; continuous data access monitoring and control; assessments;
auditing; and implementation of comprehensive key management processes and procedures
for keys used for encryption of cardholder data. PCI compliance is mandatory for any
business that stores, processes, or transmits cardholder data. PCI requires that an index also
be encrypted if the credit card number is in the index, so make sure that the information in
the index is not sensitive. PCI also suggests that organizations ‘Install application-layer
firewall in front of web-facing applications to detect and prevent attacks’.
MAINFRAMES - THE FOUNDATION OF THE IT INFRASTRUCTURE
Legacy mainframe applications form the foundation of the IT infrastructure at many
companies. Some sources indicate that about 70 percent of the world's data resides on
mainframes and 85 percent of all business transactions are processed on these machines.
Many organizations have front-ended and networked their machines to create a "new"
mainframe that's fully IP-networked and supported by Web-enabled data stores and
services. However, security that protects networks and servers from external threats in
many cases overlooks the need to protect databases from potential threats from inside the
firewall. With the increased demand for data privacy and security, the need for data
encryption has moved to the forefront of technology concerns.
THE DATABASE IS THE LAST LINE OF DEFENSE
This paper describes a relatively overlooked situation, where you need to encrypt your
database. Databases are far too critical to an organization to be left unsecured, or
incorrectly secured. The database is indeed the last line of defense in an organization. This
paper will review best practices to ensure that the last line of defense is not easily breached
by external or internal attacks.
THREATS TO DB2 DATA
NEW SECURITY BOUNDARIES
For many years external security threats received more attention than internal ones, but the
focus has changed. Worms, viruses and the external hacker were once perceived as the
biggest threats to computer systems. What is often overlooked is the potential for a trusted
individual with special privileges or access to steal or modify data. While viruses and
worms are serious, attacks perpetrated by people with trusted insider status—employees,
ex-employees, contractors and business partners—pose a far greater threat to organizations
in terms of potential cost per occurrence and total potential cost than attacks mounted from
outside. Well documented breaches have heightened the public’s – and regulatory
agencies’ - concerns about how well companies are securing consumer-specific
information captured at the point-of-acquisition. Extended partnerships mean that more
and more tasks will be performed outside the physical boundaries of company facilities,
which adds another level of due diligence that we must take into account.
NEW AND INNOVATIVE INTRUSION ATTEMPTS
There's no guarantee that any one approach will handle every new and innovative intrusion
attempt. One approach is to build a protective layer of encryption around individual data
items or objects to protect sensitive data wherever it's stored or processed. Since we know
that systems are built and used in layers, the security also needs to be implemented in each
layer. Whenever possible, authorization and ownership should be assigned to a group or a
role, but the individual who requests information must still be identified to provide
accountability. Firewalls can prevent some problems, but do not address many others. Web
Application Firewalls can prevent additional problems, including SQL injection.
INSIDER ATTACKS HURT DISPROPORTIONATELY
The reason why insider attacks hurt disproportionately is that insiders can and will take
advantage of trust and physical access. In general, users and computers accessing
resources on the local area network of the company are deemed trusted. Practically, we do
not firmly restrict their activities because an attempt to control these trusted users too
closely will impede the free flow of business. And, obviously, once an attacker has
physical control of an asset, that asset can no longer be protected from the attacker. While
databases are often protected by perimeter security measures and built-in RDBMS
(Relational Database Management System) security functionality, they remain exposed to
legitimate internal users to some degree. Due to the fragmented distribution of database
environments, real time patch management, granular auditing, vulnerability assessment,
and intrusion detection become hard to achieve. With the growing percentage of internal
intrusion incidents in the industry and tougher regulatory and compliance requirements,
companies are facing tough challenges to both protect their sensitive data against internal
threats and meet regulatory and compliance requirements.
DBA MAY HAVE COMPLETE ACCESS TO THE DATA
For years, databases have been able to keep unauthorized persons from being able to see
the data. This is generally covered by privileges and authorities within the database
manager. In today's environments, there is an increasing need for privacy of stored data.
This means that even though a DBA may have complete access to the data in a table, there
is information that the owner of the data would not want anyone else to see. This has
surfaced in particular with web-based applications where the user has entered data (such as
credit card numbers) that is to be kept for subsequent uses of the application by the same
user. People want assurance that nobody else can access this data.
PCI DATA SECURITY STANDARD
The PCI Security Standards Council (https://www.pcisecuritystandards.org) is an open
global forum for the ongoing development, enhancement, storage, dissemination and
implementation of security standards for account data protection. The PCI Security
Standards Council’s mission is to enhance payment account data security by fostering
broad adoption of the PCI Security Standards. The organization was founded by American
Express, Discover Financial Services, JCB, MasterCard Worldwide, and Visa.
ENCRYPTION OF CREDIT CARD INDEX FIELDS
The main disadvantage of some Table/Row level Encryption Tools compared to column
level encryption solutions is that indexes are not encrypted. It is essential that the index
also be encrypted if the credit card number is in the index. According to ‘PCI DSS v1.1’
section #3.4, ‘The MINIMUM account information that must be rendered unreadable is
the PAN (credit card number)’. We will discuss some column level encryption solutions
that are based on DB2 FIELDPROC or DB2 UDF. They also encrypt the index and are
compliant with PCI DSS 1.1.
COMPREHENSIVE KEY MANAGEMENT
PCI Security Standards require implementation of comprehensive key management
processes and procedures for keys used for encryption of cardholder data, including:
• Generation of strong keys
• Secure key distribution
• Secure key storage
• Periodic changing of keys, preferably automatically and at least annually
• Destruction of old keys
• Split knowledge and establishment of dual control of keys
• Prevention of unauthorized substitution of keys
• Replacement of known or suspected compromised keys
• Revocation of old or invalid keys
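The key lifecycle requirements above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class name, rotation policy and structure are assumptions, not any vendor's API); it shows strong key generation, periodic rotation and destruction/revocation of compromised keys using only standard library calls:

```python
import secrets
import time

class KeyManager:
    """Toy sketch of the PCI key-management lifecycle: strong key
    generation, periodic rotation, and destruction/revocation of old
    keys. Illustrative only, not a production key manager."""

    def __init__(self, rotation_period_seconds):
        self.rotation_period = rotation_period_seconds
        self.keys = {}          # version -> key material
        self.revoked = set()    # versions that must no longer be used
        self.current_version = 0
        self.created_at = 0.0
        self.rotate()           # generate the first key

    def rotate(self):
        """Generate a strong 256-bit key and make it the current version."""
        self.current_version += 1
        self.keys[self.current_version] = secrets.token_bytes(32)
        self.created_at = time.time()

    def current_key(self):
        """Return the active key, rotating first if it has expired."""
        if time.time() - self.created_at > self.rotation_period:
            self.rotate()
        return self.current_version, self.keys[self.current_version]

    def revoke(self, version):
        """Destroy and revoke a known or suspected compromised key."""
        self.revoked.add(version)
        self.keys.pop(version, None)   # destruction of old key material

km = KeyManager(rotation_period_seconds=365 * 24 * 3600)  # at least annual
v1, k1 = km.current_key()
km.rotate()                            # periodic key change
v2, k2 = km.current_key()
km.revoke(v1)                          # old key destroyed and revoked
```

Real deployments would add split knowledge and dual control (no single person holds a whole key), which is deliberately omitted here for brevity.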
APPLICATION-LAYER FIREWALL IN FRONT OF WEB-FACING APPLICATIONS
‘PCI DSS v1.1’ section #6.6 suggests that organizations ‘Install application-layer firewall
in front of web-facing applications to detect and prevent attacks’. This method is considered a best
practice until June 30, 2008, after which it becomes a requirement.
MORE PLACES WHICH NEED TO BE ADDRESSED
More potential places which need to be addressed include application code, web servers,
database servers, directory and authentication devices, firewalls, network and enclave
configuration and operating system platforms. It’s important to understand the other
security techniques and the controls to be sure there are no gaps in the fences. In general,
we find more business losses from errors and omissions than from any other category. This
area is a gateway to bigger problems, and one that can have a very positive return on
investment.
AN ENTERPRISE LEVEL SECURITY MONITOR
Another way of distinguishing the level of security is the enterprise-level approach to
access control. The tightest security would use a single enterprise-level security monitor,
so that access to many resources can be controlled consistently, whether the resource is an
application, a file or a database. Using enterprise-level system controls and subsystems
will allow tighter security than having application programs individually provide the
security.
APPLICATION LEVEL PROTECTION
Applications do not have some of the protection mechanisms or the level of assurance
provided by system security, so use the stronger system techniques whenever possible.
Static SQL prevents a number of problems, including SQL injection, while improving
performance. Static SQL authorization techniques can be used to avoid granting wide
access to tables. If dynamic SQL is used, then use of parameter markers and host variables
for input can also avoid SQL injection; checking the input must still be performed. Using
CONNECT with a password results in a shared userid that makes management more
difficult, so use system identification and authentication instead. Passwords embedded in
programs need to be changed more often. There are several vendors that can provide
comprehensive application security functions without requiring the addition of proprietary
appliances.
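The parameter-marker point above can be demonstrated concretely. This sketch uses SQLite as a stand-in for DB2 (the table and values are invented for the illustration); the mechanism is the same: a `?` marker binds attacker-supplied input as data, so it cannot rewrite the statement:

```python
import sqlite3

# Illustrative sketch (SQLite standing in for DB2): parameter markers
# keep attacker-supplied input as data, mirroring the way static SQL
# and host variables prevent SQL injection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (userid TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 200)")

malicious = "alice' OR '1'='1"

# Unsafe: string concatenation lets the input alter the WHERE clause,
# returning every row in the table.
unsafe = conn.execute(
    "SELECT balance FROM accounts WHERE userid = '" + malicious + "'"
).fetchall()

# Safe: the ? parameter marker treats the whole string as one value,
# which matches no userid.
safe = conn.execute(
    "SELECT balance FROM accounts WHERE userid = ?", (malicious,)
).fetchall()
```

The unsafe query returns both accounts; the parameterized query returns nothing, because `alice' OR '1'='1` is compared as a literal userid.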
TRUSTED SECURITY CONTEXT FOR CONNECTIONS
The option to set a system parameter indicates to DB2 that all connections are to be
trusted. It is unlikely that all connection types, such as DRDA, RRS, TSO, and batch, from
all sources will fit into this category. It is likely that only a subset of connection requests
for any type and source may be trusted or that you want to restrict trusted connections to a
specific server. More granular flexibility will allow for the definition of trusted connection
objects. Once defined, connections from specific users via defined attachments and source
servers will allow trusted connections to DB2. The users defined in this context can also be
defined to obtain a database role.
A LAYERED APPROACH TO SECURITY
LEAST PRIVILEGE PRINCIPLE - MORE GRANULAR CONTROLS
“Least Privilege – This principle requires that each subject in a system be granted the most
restrictive set of privileges (or lowest clearance) needed for the performance of authorized
tasks. The application of this principle limits the damage that can result from accident,
error, or unauthorized use.” One of the primary concepts in security is giving the
individuals only the privileges needed to do the job. That often means using more granular
controls. Another key principle is ease of safe use. We want individuals to have all of the
privileges they need to do the full job, but no more. If there is less complexity in the
security controls, that means less cost and generally results in better compliance.
COMPLETE ACCOUNTABILITY AND SEPARATION OF DUTIES
From an administration point of view, a DBA is playing an important and positive role.
However, when security and privacy become a big issue, we cannot simply trust particular
individuals to have total control over other people’s secrecy. This is not just a question of
trust; it is a matter of principle. Technically, if we allow a DBA to control security without any
restriction, the whole system becomes vulnerable because if the DBA is compromised, the
security of the whole system is compromised, which would be a disaster. On the other
hand, if we have a mechanism in which each user could have control over his/her own
secrecy, the security of the system is maintained even if some individuals do not manage
their security properly.
SEPARATED SECURITY DIRECTORY FOR ENCRYPTION CONTROL
Access control is the major security mechanism deployed in all RDBMSs. It is based upon
the concept of privilege. A subject (i.e., a user, an application, etc.) can access a database
object if the subject has been assigned the corresponding privilege. Access control is the
basis for many security features. Special views and stored procedures can be created to
limit users’ access to table contents. However, a DBA has all the system privileges.
Because of her/his ultimate power, a DBA can manage the whole system and make it work
in the most efficient way. At the same time, she/he also has the capability to do the most
damage to the system. With a separated security directory, the security administrator is
responsible for setting the user permissions instead.
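The separation of duties described above can be sketched as follows. Everything here is hypothetical (class names, the `security_admin` role, the grant triples are all invented for illustration); the point is only that the grant store is writable by the security officer and not by the DBA:

```python
# Hypothetical sketch of a separated security directory: the security
# administrator grants permissions in a directory the DBA cannot modify,
# so DBA privileges alone are not enough to read protected columns.
class SecurityDirectory:
    def __init__(self):
        self._grants = set()            # (user, table, column) triples

    def grant(self, admin, user, table, column):
        if admin != "security_admin":   # only the security officer may grant
            raise PermissionError("separation of duties violated")
        self._grants.add((user, table, column))

    def may_decrypt(self, user, table, column):
        return (user, table, column) in self._grants

directory = SecurityDirectory()
directory.grant("security_admin", "app_user", "CUSTOMER", "CARD_NO")

allowed = directory.may_decrypt("app_user", "CUSTOMER", "CARD_NO")
dba_allowed = directory.may_decrypt("dba", "CUSTOMER", "CARD_NO")
```

Even though the DBA retains all system privileges inside the database, decryption rights live outside the DBA's control, so a compromised DBA account does not compromise the protected columns.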
ADD A LAYER OF PROTECTION
Guaranteeing that unauthorized users can't access data ensures privacy. Encryption is the
primary solution for data privacy. In fact, it's a necessity in all situations in which
customers can perform (or authorized users are provided access to) transactions involving
confidential information within the target system's database. Any security program must
ensure that secure automated encryption management - including secure encryption key
protection, aging, and replacement - is implemented across all platforms hosting critical
information. Encryption adds an essential level of protection from intruders who break
through firewalls or operating system security features. It also deters malfeasance from
insiders.
ADD ENCRYPTION FOR THE BEST DEFENSE
In many industries encryption is a growing requirement to satisfy the need for data
security. In some industries, regulations such as PCI, US state laws and the
Health Insurance Portability and Accountability Act of 1996 might actually impose a
security requirement that is best met by encrypting data. Financial information, health care
data and defense have very different statements of security objectives, but some of the
same principles apply.
NATIVE DATABASES PROVIDE ONLY A FEW PROTECTION MECHANISMS
Native Database servers provide only a few protection mechanisms. The lower levels can
provide stronger enterprise level defenses via third party solutions. If application code uses
the system's security mechanisms, then the need for assurance in the application is much
less. If the application implements its own security, then much stronger assurance is
required. If the application does not
pass through the security information, then the ability to use database and operating system
security can be compromised.
CRYPTOGRAPHY SHOULD NOT TAKE PLACE INSIDE THE DATABASE
Some native database encryption solutions are based on cryptography taking place inside
the database. That practice is effectively equivalent to giving all of the
keys to the DBAs and/or system administrators, as they control database engine
deployment. Instead, cryptographic activity should take place outside the database; secure
applications require a particularly secured portion of the application infrastructure.
CREATING AN EFFECTIVE CRYPTOGRAPHIC DATABASE
It is critical to implement a cryptographic architecture that is flexible and modular so that it
is easily adaptable to various situations. Cryptographic architectures and systems can be
difficult to manage if they become overly complex, and the challenge is to find the right
balance between security and complexity on one side, and usability on the other. Creating
an effective cryptographic database infrastructure is not an elementary task given the
different requirements of security and functionality.
SECURITY SOLUTIONS FOR DB2
This review will discuss various practices for DB2 security and encryption. Choices and
guidelines will be our primary points, discussing how to provide improved security for
your situation. The solutions that we review in this article let you secure sensitive and
private data at the DB2 table level and column level. We will review the issues with the
different encryption approaches.
A COMMON MISCONCEPTION ABOUT DB2
A common misconception about DB2® and security concerns the value of encryption
versus the traditional methods that DB2 uses to keep data secure from unauthorized usage.
A combination of encryption and the more traditional methods of DB2 and RACF is the
best defense. DB2 and RACF work together to ensure that only authorized users can
access DB2 data, but those security measures are ineffective against a person who can
circumvent the operating system.
RACF CONTROL COMES WITH POLICY AND PEOPLE
Using RACF for access control has significant policy and people implications. If you want
the database administrators to manage security, then integration with DB2 is very
important. If you want security administrators to manage security, then integration with the
security server is more important. As you make this change, note that roles will change and
authorities will change. This is not a compatible change. You must plan to use RACF
facilities more, like groups and patterns. The implementation team needs both DB2 and
RACF knowledge for implementation.
DB2 SECURITY CONTEXT
DB2 uses the security context when possible, so batch jobs, TSO users, IMS and CICS
transactions have security that uses a consistent identification and authentication. This is
true for stored procedures from these environments as well. The large number of options,
exits, environments and asynchronous or parallel work provide challenges for security.
Some key applications manage security at the application level.
Session Variables provide another way to provide information to applications. Some
variables will be set by DB2. Others can be set in the connection and signon exits. A new
built-in function, GETVARIABLE, is added to retrieve the value of a session variable. This
function can be used in views, triggers, stored procedures
and constraints to help enforce a security policy. If your primary security need is more
general, flexible controls, this information complements other security mechanisms. For
example, you can have a view which provides only the data that is at the user's current
security level.
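The session-variable-in-a-view pattern can be emulated outside DB2. This sketch uses SQLite as a stand-in (the table, view and `getvariable` function are invented for the illustration; DB2's real GETVARIABLE reads variables set by the connection and signon exits): a registered function exposes the session's userid, and a view filters rows against it so each user sees only their own data:

```python
import sqlite3

# Sketch emulating a GETVARIABLE-style session variable in a view,
# with SQLite standing in for DB2.
session = {"userid": "alice"}          # would be set at connect/signon time

conn = sqlite3.connect(":memory:")
conn.create_function("getvariable", 1,
                     lambda name: session.get(name.lower()))
conn.execute("CREATE TABLE orders (userid TEXT, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", "book"), ("bob", "lamp")])
# The view enforces the policy: rows are visible only to their owner.
conn.execute("""CREATE VIEW my_orders AS
                SELECT item FROM orders
                WHERE userid = getvariable('USERID')""")

alice_rows = conn.execute("SELECT item FROM my_orders").fetchall()
session["userid"] = "bob"              # simulate a different session
bob_rows = conn.execute("SELECT item FROM my_orders").fetchall()
```

Because the filter lives in the view rather than in each application query, the policy is enforced centrally, which is the appeal of the DB2 mechanism.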
MULTILEVEL SECURITY OR MANDATORY ACCESS CONTROL
z/OS 1.5 and RACF or Security Server 1.5 (improved in 1.6) add another type of security,
called multilevel security, labeled security or mandatory access control (MAC). The only
option in the past with a high degree of separation has been physical separation. In the
database world that might mean another machine or LPAR or perhaps another subsystem,
another database or another table. With multilevel security, we still have a high degree of
security even with data in the same table. Access control is consistent across many types of
resources using RACF, so that multilevel controls apply for data sets, for communications,
for print and for database access – both objects and now with row level granularity. The
DB2 controls are for both SQL access and for utility access. For more on multilevel
security, see Planning for Multilevel Security and Common Criteria (GA22-7509).
ENCRYPTION ISSUES WITH DB2
Different solutions are utilizing fundamentally different methods to encrypt DB2 data on
z/OS. An EDITPROC acts on a complete row of a table, as received from the database.
The index is stored in the clear (in other words, unencrypted). A FIELDPROC transforms
a single short-string column and allows the index to be stored as encrypted text. A User
Defined Function (UDF) enables column-level encryption with views and triggers. DB2 9
provides a higher level of application transparency for this method than earlier DB2
versions. The native column-level encryption on DB2 V8 requires SQL users to supply the
encryption key and is not application transparent.
CHALLENGES ASSOCIATED WITH DB2 ENCRYPTION
The technical challenges associated with encryption include application changes,
performance overhead, and the difficulties of managing the encryption keys. DB2 and the
zSeries® platform provide the basis for meeting these challenges with the help of
hardware, software and third party products.
ENCRYPTION DOES MEAN SOME TRADEOFFS
Encryption does mean some tradeoffs in function, usability and performance. Either the
indexes are left unencrypted, or the encrypted data will not give correct results for
comparisons other than equals and not-equals: greater-than, less-than and range predicates
are not usable on ciphertext. FIELDPROC provides the additional index protection and PCI compliance if credit
card numbers are indexed. Both FIELDPROC and EDITPROC can utilize the
cryptography hardware on IBM platforms. UDF implementations provide support for
additional and longer data types and additional search operations. DB2 V.9 enables a fully
application transparent use of UDF implementations.
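The equality-versus-range tradeoff above can be seen directly. This sketch uses a deterministic HMAC-SHA256 token as a stand-in for an encrypted index value (an assumption for the illustration; FIELDPROC implementations use real encryption, and the fixed demo key is obviously not a practice to copy): equal plaintexts map to equal tokens, but nothing about plaintext ordering survives:

```python
import hmac
import hashlib

# Why encrypted index values support only equals / not-equals:
# a deterministic transform maps equal plaintexts to equal tokens,
# but does not preserve the ordering needed for <, > or BETWEEN.
key = b"\x01" * 32                     # fixed demo key, illustration only

def index_token(pan: str) -> bytes:
    return hmac.new(key, pan.encode(), hashlib.sha256).digest()

a = index_token("4111111111111111")
b = index_token("4111111111111111")    # same PAN -> same token: equality works
c = index_token("4111111111111112")    # adjacent PAN -> unrelated token
```

An index built over such tokens can answer `WHERE pan = ?` without decrypting anything, but a range predicate over the tokens would compare values in an order unrelated to the plaintext order, which is why range and inequality searches force decryption or fail.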
EDITPROC PROTECTS ALL DATA EXCEPT WHAT IS IN INDEX
The main disadvantage of DB2 encryption tools that use EDITPROC, compared to
encryption solutions that use FIELDPROC or DB2 native column level encryption, is that
indices are not encrypted.
CREDIT CARD INDEX IN CLEAR IS NOT COMPLIANT TO PCI 1.1
If it is essential that indexes be encrypted, you should probably choose another tool or
security method. Make sure that the information in the index is not sensitive. A credit card
number should not be exposed in the index if you need to be compliant with PCI DSS 1.1.
FIELDPROC ALSO PROTECTS DATA IN INDICES
If multiple data elements in a table need protection and none are in an index, then
EDITPROC is less resource intensive than multiple FIELDPROCs. If only one or a few
columns need protection, then only those are encrypted. FIELDPROC can only be specified
on short string columns, and cannot be specified on ROWID or LOB columns, but
columns with those data types are allowed in the table. FIELDPROC can be added by
ALTER TABLE, so there is no need for an unload, drop table, create table, reload sequence.
USER-DEFINED FUNCTIONS PROVIDE MORE FLEXIBILITY AT A COST
Encryption can also be implemented in DB2 User-defined functions (UDF). A UDF can be
called by TRIGGERs and VIEWs. The enhanced support for these functions in DB2 v.9
provides a higher level of application transparency than earlier DB2 versions. The lack of
application transparency in earlier DB2 versions can be an issue when implementing UDF
based encryption. UDF based encryption can be attractive if only one or few columns need
protection and if additional data type support is needed. UDF based encryption can also
support a wider range of search operations on encrypted columns. Additional overhead, in
some cases significant, can be expected if the search is forcing table scans and decryption
of a large number of rows during the search operation.
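The UDF-plus-view pattern described above can be emulated outside DB2. In this sketch SQLite stands in for DB2, a pair of registered functions stands in for encrypt/decrypt UDFs, and the XOR "cipher" is a deliberate placeholder for a real algorithm (all names here are invented for the illustration): the table stores only ciphertext, and a view calls the decrypt UDF so applications keep reading a plaintext column name:

```python
import sqlite3

# Sketch of UDF-based column encryption, emulated in SQLite. The XOR
# transform is a placeholder standing in for real encryption.
KEY = 0x5A

def enc(text):
    return bytes(b ^ KEY for b in text.encode())

def dec(blob):
    return bytes(b ^ KEY for b in blob).decode()

conn = sqlite3.connect(":memory:")
conn.create_function("enc_udf", 1, enc)   # stands in for an encrypt UDF
conn.create_function("dec_udf", 1, dec)   # stands in for a decrypt UDF
conn.execute("CREATE TABLE customer (name TEXT, card_no_enc BLOB)")
conn.execute("INSERT INTO customer VALUES (?, enc_udf(?))",
             ("alice", "4111111111111111"))
# Applications query the view and see a plaintext column name.
conn.execute("""CREATE VIEW customer_v AS
                SELECT name, dec_udf(card_no_enc) AS card_no
                FROM customer""")

stored = conn.execute("SELECT card_no_enc FROM customer").fetchone()[0]
visible = conn.execute("SELECT card_no FROM customer_v").fetchone()[0]
```

Note that any predicate on `card_no` in the view forces the decrypt UDF to run on every candidate row, which is exactly the table-scan-plus-decryption overhead the paragraph above warns about.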
SOME SOLUTIONS TO ENCRYPT DB2 DATA ON Z/OS
Consider some available packaged solutions for encrypting DB2 data on z/OS:
• Third-party tools for DB2 Databases with Enterprise Key Management (column-
level encryption, row-level encryption, supporting hardware and software,
supporting fieldproc, editproc and UDF). These tools also support other
database brands and provide separation of duties, auditing and optional
encryption of index values.
• A separately purchased tool called the IBM Encryption Tool for IMS and DB2
Databases (row-level encryption, supporting hardware, based on editproc).
• A native column level encryption method in Version 8 of DB2 for z/OS (column-
level encryption), with no Key Management.
This is a short summary of the support of some popular functions in the different types of
encryption tools for DB2 on z/OS:

                                         3rd Party    IBM     v.8
Functionality                            Solutions    Tool    Native
Support for IBM crypto hardware          Y            Y       Y
Table/row level encryption               Y            Y
Column level encryption                  Y                    Y
Support range search                     Y            Y
Support for long data types              Y            Y
Support for index encryption             Y                    Y
Enterprise key management                Y
Separation of duties - policy            Y
Key protection in memory                 Y            Y
Local key management                     Y            Y
Support for DB2 editproc                 Y            Y
Support for DB2 fieldproc                Y
Support for DB2 user-defined functions   Y
IBM DATA ENCRYPTION TOOL IS BASED ON EDITPROC
IBM Data Encryption for IMS and DB2 Databases provides you with a tool for both IMS
and DB2 in a single product. The tool enables you to leverage the power of Storage Area
Networks (SANs) safely while complying with privacy and security regulations in place or
being enacted worldwide. During encryption, IMS or DB2 application data is converted to
database data that is unintelligible except to the person authorized by your security
administrator. Sensitive data is protected at the row level for DB2 and the segment level
for IMS. Encryption and decryption can be customized at these levels on the respective
databases. The tool is implemented using standard DB2 and IMS exits. Data Encryption
for IMS and DB2 Databases runs as an exit. The exit code invokes the zSeries® and
S/390® Crypto Hardware to encrypt data for storage and decrypt data for application use,
thereby protecting sensitive data residing on various storage media.
EDIT PROCEDURE CAN BE COSTLY
Encryption/decryption of the entire row is always performed, regardless of whether any
sensitive columns are referenced in the query. This can be costly for large rows and for
queries that only search on non-sensitive columns, since decryption happens for every row
accessed during a SELECT even if the data fails the WHERE clause predicates.
NEWER ALTERNATIVE SOLUTIONS FOR DB2 DATABASES
Without these packaged solutions, you need to write and maintain your own encryption
software or use plug-and-play solutions from third party vendors that already deliver
solutions that merge the functional advantages for DB2 Databases column level encryption
and the IBM Encryption Tool for DB2 Databases. All of these solutions exploit zSeries
and S/390 Crypto Hardware features, resulting in low-overhead encryption/decryption, and
they also enable fast software encryption options.
NATIVE ENCRYPTION ON DB2 V. 8 REQUIRES EXTENSIVE APPLICATION CHANGES
DB2 for z/OS V8 provides many enhancements, but the data encryption supported by DB2
Version 8 requires extensive application changes, and the encryption is done at the column
level. This method also requires that the SQL users supply the encryption key. The chief
advantage of DB2’s column-level encryption over table level encryption tools is that the
index is encrypted with the columns; however, DB2 does not support native encryption of
numeric columns, and the DB2 Load Utility does not support this native encryption.
ADDED CAPABILITY WITH DB2 V. 9
The instead of trigger is an SQL technique that allows a trigger to be used in place of a
view, consistent with DB2 for LUW. Improved audit selectivity is needed for being able to
see that security is functioning. Secure Socket Layer or SSL implementation provides
encryption of data on the wire. Some additional techniques for data encryption will help
protect data at rest and in backups. Transparent UDF based encryption is enabled in DB2 9.
SOLUTIONS FROM THIRD PARTY VENDORS
Challenges remain for IBM to merge the functional advantages of IMS and DB2 column
level encryption and the IBM Encryption Tool for IMS and DB2 Databases. Another
problem is common to both the IBM Encryption Tool and DB2 column level encryption:
encrypted data cannot be effectively compressed. Mature solutions from third party
vendors already merge the functional advantages of DB2 column level encryption and the
IBM Encryption Tool for DB2 Databases.
UTILIZING CRYPTOGRAPHIC HARDWARE
PCI version 1.1 does not require the use of cryptographic hardware but the IBM native
cryptographic hardware on the newer mainframe models is fast and local to each processor
that handles the database. This will provide a short path length and also CP offloading of
cryptographic operations. Data Encryption for IMS and DB2 Databases requires OS/390
or z/OS Integrated Cryptographic Service Facility (ICSF), which only runs on processors
that support the IBM® Cryptographic Coprocessor Feature (CCF) or the CP Assist for
Cryptographic Functions (CPACF). These processors include most modern mainframes,
for example, G3, G4, G5, G6, Multiprise® 2000, Multiprise 3000, or eServer™ zSeries®,
but do not include such machines as the Flex/ES emulator machines. The hardware CCF or
CPACF modules must be enabled with configuration data by the local IBM engineer; the
hardware is a separately orderable feature and requires a processor power-on-reset (POR)
to complete the loading of the data into the crypto modules. Data Encryption for IMS and
DB2 Databases is part of a larger task that must be performed to implement data
encryption/decryption. DB2 exit routines can make significant changes in identification,
authentication, access control and auditing.
IBM NATIVE CRYPTOGRAPHIC HARDWARE IS FAST
The performance characteristics of the Encryption solution for DB2 Databases that are
using field proc or editproc are similar to those of DB2 table space compression. In both
cases, the performance impact is much less for online transaction processing (OLTP) than
for queries and utilities. The magnitude of CP cost is similar, and the row size can affect
the cost. The degree of processing overhead you experience depends on how your
applications access data. The z9 processors are faster than the z990/z890 processors, and
the IBM cryptographic hardware available on models z900, z800 and earlier 9672
systems is slower. The software encryption should be functionally identical to the
hardware encryption, and can be decrypted by the hardware if the site upgrades the
hardware later. IBM mainframe hardware models z890, z990, z9EC and z9BC have
feature code 3863 which adds cryptographic hardware to the system. Unlike earlier IBM
mainframes which had a more limited cryptographic hardware, this hardware is available
for each CPU within the box, and the feature code needs to be applied to each CPU by the
local IBM engineer in order to turn on the hardware. The default installation for a mature
solution for z/OS should optionally utilize this hardware if available.
SOFTWARE ENCRYPTION IS SLOWER THAN IBM NATIVE
If the IBM native encryption hardware is not available, a mature Data Encryption Solution
should enable software encryption and decryption as an alternative. This gives a much
shorter path length to the encryption service but it is significantly slower than the use of
the cryptographic hardware if used on longer rows and columns.
NETWORK ATTACHED CRYPTOGRAPHIC HARDWARE IS SLOW
Network attached cryptographic hardware gives a significantly longer path length to the
encryption service and has significantly lower throughput than the use of the IBM native
cryptographic hardware when used on typical rows and column sizes.
KEY MANAGEMENT ISSUES
KEY PROTECTION ISSUES IN DB2 NATIVE TOOL
DB2 V8 column-level encryption leaves the encryption key visible in a dump of the DB2 master task. In a mature solution, the DB2 subtask should NOT hold the key, so the key is not exposed in a dump of the DB2 master task. The subtask should hold only the key label, which is not enough to encrypt or decrypt the data and is meaningless on its own, since it is only the name of the key.
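The key-label indirection described above can be sketched as follows. This is a minimal illustration, not any product's API: the class and label names are hypothetical, and a toy XOR routine stands in for real encryption so the sketch stays self-contained. The point is the separation of concerns: the database-facing component holds only an opaque label, while key material lives exclusively in a separate key store (playing the role ICSF and the CKDS play on z/OS).

```python
# Sketch of key-label indirection: the DB2-facing subtask holds only an
# opaque key label; real key material lives only in a separate key store.
# All names are hypothetical; XOR stands in for real encryption.

class KeyStore:
    """Holds real key material; only this component can resolve a label."""
    def __init__(self):
        self._keys = {}  # label -> key bytes

    def register(self, label: str, key: bytes) -> None:
        self._keys[label] = key

    def encrypt(self, label: str, plaintext: bytes) -> bytes:
        key = self._keys[label]  # the key never leaves this class
        # toy XOR "cipher" purely to keep the sketch self-contained
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

class Db2Subtask:
    """Database-side component: stores the label, never the key."""
    def __init__(self, store: KeyStore, label: str):
        self.store = store
        self.key_label = label  # a dump of this object reveals only a name

    def encrypt_column(self, value: bytes) -> bytes:
        return self.store.encrypt(self.key_label, value)

store = KeyStore()
store.register("CARD.KEY.2024", b"sixteen byte key")
subtask = Db2Subtask(store, "CARD.KEY.2024")
ciphertext = subtask.encrypt_column(b"4111111111111111")
```

Because the subtask object carries only the string "CARD.KEY.2024", a memory dump of it yields nothing an attacker can use to decrypt data.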
DBM1 ADDRESS SPACE SHOULD NOT CONTAIN ENCRYPTION KEYS
Encryption itself is not a protection against somebody who illicitly gains access to a
password because DB2 will happily decrypt data on behalf of an authorized user. Thus,
encryption and userid/password controls are complementary aspects of security, helping to
protect against different types of security exposures. When selecting a tool, check what resides in the DBM1 address space: it should never contain encryption key material in working storage, and keys should never be exposed in a DBM1 dump.
SECURITY OF CKDS CAN BECOME A POINT OF VULNERABILITY
The IBM tool uses DB2 Editproc, and a key label is stored in the Editproc. The assignment
of an Editproc to a table determines which key label to use for the table. ICSF itself
determines the key and associates the master key with a key label as well as keeps track of
these associations in its own CKDS data set, so the security of the RACF-protected CKDS becomes a critical point for encryption security.
DEFINE SEPARATE USERIDS FOR THE STARTED TASKS
Any program running on z/OS can access the DB2 data sets, unless they are protected.
Define userids for the started tasks and prevent almost every other id from accessing the
DB2 data sets. There are some exceptions for administrators who must manage the logs or
work with the DSN1* utilities. Having separate ids for each subsystem is the standard practice.
POINT SOLUTIONS OR ENTERPRISE SOLUTIONS
The IBM Encryption Tool for IMS and DB2 Databases (IET) uses row level encryption.
This tool works on any version of DB2, and all DB2 utilities work with the encryption
tool. The tool relies on the Integrated Cryptographic Service Facility (ICSF) to provide a
point solution for key management on z/OS. An ICSF administrator manages the ICSF
environment where the keys are built, stored and maintained. As a result, when you use
this tool, DB2 applications do not need any awareness of keys.
ENTERPRISE KEY MANAGEMENT
Enterprise-level encryption solutions centrally manage the encryption keys across all the different platforms and then use proprietary logic to invoke CPACF. Point solutions for the mainframe, such as the IBM Encryption Tool for IMS and DB2 Databases, use the PCIXCC or Crypto Express2 coprocessors to secure the key with the master key, then use proprietary logic to invoke CPACF.
MANAGE THE ENCRYPTION KEYS CENTRALLY AT AN ENTERPRISE LEVEL
Another problem is common to both the IBM Encryption Tool and DB2 column-level encryption: neither lets you effectively manage the encryption keys centrally at an enterprise level. Challenges remain for DB2 to merge the functional advantages of DB2 column-level encryption and the IBM Encryption Tool for IMS and DB2 Databases, to allow the encryption keys to be managed centrally at an enterprise level, and to provide separation of duties for PCI-level policy management.
BEST PRACTICES FOR DATABASE ENCRYPTION
SINGLE DES OR TRIPLE DES
You have a choice between Single DES and Triple DES, but you must consider the trade-off between security and performance. Some solutions also support the faster AES algorithm. For large strings, the cost per byte of Triple DES encryption is significantly higher than that of Single DES, because Triple DES encrypts the data three times. A hacker needs far longer to break Triple DES than Single DES, measured in years or decades, but keep in mind that the hardware performance cost of Triple DES is triple that of Single DES. Security considerations should take into account the shelf life of the data: if the data is obsolete by the time a hacker can break Single DES, the value of using Triple DES or AES may lie mainly in compliance with PCI DSS 1.1.
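The "encrypts three times" point above comes from Triple DES's structure: it is the DES primitive applied three times in encrypt-decrypt-encrypt (EDE) order with up to three keys, which is why its cost is roughly triple that of Single DES. The sketch below shows only the EDE composition; a self-inverse XOR routine stands in for the real DES block function, so this illustrates the structure and is not usable cryptography.

```python
# EDE (encrypt-decrypt-encrypt) structure of Triple DES, illustrated with a
# toy XOR routine standing in for the DES block function. XOR is its own
# inverse, so the same routine serves as both encrypt and decrypt here.

def toy_block(key: bytes, block: bytes) -> bytes:
    """Stand-in for the DES primitive; NOT real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))

def tdes_ede_encrypt(k1: bytes, k2: bytes, k3: bytes, block: bytes) -> bytes:
    # Three passes over the data: this is why Triple DES costs roughly
    # three times as much as Single DES per byte.
    return toy_block(k3, toy_block(k2, toy_block(k1, block)))

def tdes_ede_decrypt(k1: bytes, k2: bytes, k3: bytes, block: bytes) -> bytes:
    # Inverse passes applied in reverse key order.
    return toy_block(k1, toy_block(k2, toy_block(k3, block)))

k1, k2, k3 = b"key-one!", b"key-two!", b"key-3333"
ct = tdes_ede_encrypt(k1, k2, k3, b"CARDDATA")
assert tdes_ede_decrypt(k1, k2, k3, ct) == b"CARDDATA"
# With k1 == k2 the first two passes cancel, mirroring how real Triple DES
# with K1 == K2 degenerates to Single DES.
assert tdes_ede_encrypt(k1, k1, k3, b"CARDDATA") == toy_block(k3, b"CARDDATA")
```

The degenerate case in the last line is the historical reason for the EDE order: it lets Triple DES hardware interoperate with Single DES by repeating a key.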
Separation of Duties
An effective security policy should protect sensitive data from all ‘reasonable’ threats.
Administrators who have access to all data are a reasonable threat, no matter how much we
trust them - trust is not a policy. Implementing a separation of duties of security definition
and database operations provides a checks-and-balances approach that mitigates this threat.
ENTERPRISE CLASS SOLUTIONS
More advanced database encryption solutions provide centralized management of
encryption parameters, and support all major databases and operating systems, including
mainframe platforms. Advanced database encryption solutions also automate encryption,
audit, and separation of duties for access to sensitive information in databases. It is also
imperative that your encryption solution provide cryptographically enforced authorization, secure and automated key management, secure audit and reporting facilities, enforced separation of duties, interoperability with other security technologies, and operational transparency to applications.
CENTRALIZATION OF DATA SECURITY MANAGEMENT
Best practices begin with the centralization of Data Security Management, enabling a
consistent, enforceable security policy across the organization. From a centralized console,
the Security Administrator defines, disseminates, enforces, reports and audits security
policy, realizing gains in operational efficiencies while reducing management costs.
Mature solutions offer software-based deployment options, which enables easy roll-out to
remote locations from a central source. This is critical for organizations with a large
number of databases to protect, especially if they are geographically dispersed. Not having
to physically install and maintain hardware across the organization mitigates risk, reduces
strain on resources, and saves time and money.
COMPREHENSIVE PROTECTION OF DATA
Experts agree the best protection of sensitive information is encryption. Database level
encryption provides the most comprehensive protection: Protection against storage-media
thefts, storage level attacks, database layer attacks and attacks from ‘super-user’ access.
Best practices dictate that a solution delivers:
Selective Column-Level Protection
Protect only the sensitive data your organization needs to protect (credit card numbers, Social Security numbers, salaries, etc.), and protect each column with its own key to gain an extra layer of protection in case of a security breach.
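One common way to give each column its own key without managing dozens of independent secrets is to derive per-column keys from a single master key. The sketch below uses HMAC-SHA256 as an HKDF-like derivation; the table and column labels are illustrative only, and a production system would keep the master key in an HSM or key store rather than in application memory.

```python
# Sketch of per-column key separation: each protected column gets its own
# key, derived here from a master key with HMAC-SHA256 (an HKDF-like
# construction). Labels and key names are illustrative assumptions.
import hashlib
import hmac

def column_key(master_key: bytes, table: str, column: str) -> bytes:
    # The derivation label binds the key to one specific column.
    label = f"{table}.{column}".encode()
    return hmac.new(master_key, label, hashlib.sha256).digest()

master = b"enterprise master key material"
cc_key = column_key(master, "PAYMENTS", "CARD_NUMBER")
ssn_key = column_key(master, "EMPLOYEES", "SSN")

# Distinct columns get distinct keys, so compromising one column's key
# does not expose every protected column.
assert cc_key != ssn_key
```

Derivation is deterministic, so a column's key can be recomputed on demand instead of being stored alongside the data.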
Strong Key Management
A secure system is only as good as the protection and management of its keys. A mature solution offers configurable control over where keys are stored and cached and over who has access to them, and ensures the keys themselves are encrypted and protected.
Protecting Policy Changes
Changes to security policy are critical events that need to be protected. It is a best practice
to require more than one person to approve such changes.
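The two-person rule for policy changes can be sketched as follows. This is a hypothetical illustration of the control, not any product's interface: a change object collects approvals and refuses to take effect until two distinct administrators have signed off.

```python
# Sketch of two-person approval for security policy changes: a change is
# applied only after two *distinct* administrators approve it.
# Class and id names are illustrative assumptions.

class PolicyChange:
    def __init__(self, description: str):
        self.description = description
        self.approvers = set()   # a set, so duplicate approvals don't count
        self.applied = False

    def approve(self, admin_id: str) -> None:
        self.approvers.add(admin_id)

    def apply(self) -> bool:
        # Require at least two distinct approvers before the change takes effect.
        if len(self.approvers) >= 2:
            self.applied = True
        return self.applied

change = PolicyChange("widen access to CARD_NUMBER")
change.approve("sec_admin_1")
assert not change.apply()        # one approver is not enough
change.approve("sec_admin_1")    # a duplicate approval does not count twice
assert not change.apply()
change.approve("sec_admin_2")
assert change.apply()            # two distinct approvers: change takes effect
```

Using a set of approver ids is the essential detail: it prevents one administrator from satisfying the rule by approving twice.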
REPORTING AND MONITORING
Reporting and monitoring your security policy and generating protected audit logs
are fundamental best practices, and required by regulations. A mature solution should offer
comprehensive and efficient reporting, including:
Evidence-Quality Auditing
Some regulations require evidence-quality auditing that tracks not only all authorized activity but also unauthorized attempts, any changes to security policies, and even the activities of the database administrator (DBA), and that provides a complete audit report of all these activities.
Separation of Roles
Regulations stipulate that a data security system must provide “reasonable protection from
threats.” Having the ability to log and review the activities of both the Security
Administrator and the Database Administrator provides a checks-and-balances approach
that protects from all reasonable threats.
Selective Auditing Capabilities
A mature reporting system is highly selective, allowing your security administrators to examine the information most critical to their jobs.
Protected Audit Logs
Simply put, the audit logs themselves must be secure. Encrypting all audit logs also protects the confidentiality of the data they contain, and prevents an administrator from tampering with the logs to cover his or her tracks.
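A standard way to make a log tamper-evident is to chain each entry to its predecessor with a MAC, so altering or deleting any record invalidates everything after it. The sketch below uses HMAC-SHA256 from the Python standard library; the key handling is simplified for illustration (in practice the MAC key must itself be kept away from the administrators being audited).

```python
# Sketch of a tamper-evident audit log: each entry carries an HMAC over the
# entry text and the previous entry's tag, so altering or removing a record
# breaks the chain. Key handling is deliberately simplified here.
import hashlib
import hmac

def append_entry(log: list, key: bytes, message: str) -> None:
    prev_tag = log[-1][1] if log else b"\x00" * 32
    tag = hmac.new(key, prev_tag + message.encode(), hashlib.sha256).digest()
    log.append((message, tag))

def verify_log(log: list, key: bytes) -> bool:
    prev_tag = b"\x00" * 32
    for message, tag in log:
        expected = hmac.new(key, prev_tag + message.encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        prev_tag = tag
    return True

key = b"audit log mac key"
log = []
append_entry(log, key, "DBA selected CARD_NUMBER from PAYMENTS")
append_entry(log, key, "policy change approved by sec_admin_2")
assert verify_log(log, key)
log[0] = ("DBA did nothing unusual", log[0][1])  # tamper with a record...
assert not verify_log(log, key)                  # ...and verification fails
```

Because each tag covers the previous tag, an attacker who rewrites one record would have to recompute every subsequent tag, which requires the MAC key.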
CONTROLLING ACCESS TO SENSITIVE DATA
It is estimated that 70-80% of all security breaches come from within the firewall.
Controlling access to the data is a critical element to any security policy. Defining a
security policy that allows centralized definition of data access, down to the data field
level, on an individual-by-individual basis across the entire enterprise is best practice.
Setting the Boundaries
Putting limits on authorized usage of data is necessary to avoid breaches from the inside. Much as an ATM limits the amount of money a person can withdraw from their own account, it is important to be able to set limits on authorized use as part of the data security policy. If the use of sensitive data should be limited to 9-5, Monday-Friday, then any attempt to access that data outside those boundaries should be denied.
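The 9-5, Monday-Friday boundary above reduces to a small policy predicate that an enforcement point evaluates before releasing data. The shape below is an illustrative sketch, not a specific product's configuration syntax.

```python
# Sketch of a time-boundary check for a data-access policy: access to a
# sensitive column is allowed only 9:00-17:00, Monday-Friday.
# The function and parameter names are illustrative assumptions.
from datetime import datetime

def access_allowed(when: datetime, start_hour: int = 9,
                   end_hour: int = 17) -> bool:
    is_weekday = when.weekday() < 5          # Monday=0 ... Friday=4
    in_window = start_hour <= when.hour < end_hour
    return is_weekday and in_window

assert access_allowed(datetime(2024, 6, 5, 10, 30))       # Wednesday 10:30
assert not access_allowed(datetime(2024, 6, 5, 22, 0))    # Wednesday 22:00
assert not access_allowed(datetime(2024, 6, 8, 10, 30))   # Saturday
```

In a real deployment the denied attempt would also be written to the protected audit log, so out-of-hours access attempts leave evidence even when they fail.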
Stronger database security policies and procedures must be in place to accommodate the regulatory compliance environment. There are no guarantees that any one approach will be able to deal with new and innovative intrusions in increasingly complex technical and business environments. A protective layer of encryption around specific sensitive data items or objects, instead of walls around servers or hard drives, prevents outside attacks as well as infiltration from within the server itself. It also allows the security administrator to define which data are sensitive and thereby focus protection on the sensitive data, which in turn minimizes the delays or burdens on the system that bulk encryption methods may impose. Field-level data encryption is clearly the most versatile solution capable of protecting against both external and internal threats. Also remember that PCI requires that an index be encrypted as well if the credit card number appears in the index.
Centralized database security management must be considered to reduce cost. "Point" or manual solutions are hard to manage as the environment continues to grow and become more complex. A centralized data security management environment must be considered as a solution to increase efficiency, reduce implementation complexity, and in turn reduce cost. PCI DSS 1.1 also suggests that you 'Install Application-layer firewall in front of web-facing applications to detect and prevent attacks'. This method can prevent SQL injection and other database attacks; it is considered a PCI best practice until June 30, 2008, after which it becomes a requirement.
By implementing the solutions documented above, we should be in a better position to face growing database security challenges, to proactively meet regulatory and compliance requirements, and to better control our sensitive data.