
Best Practices for running the Oracle Database on EC2 webinar

10,600 views

Best practices for running Oracle Database on EC2, including storage, security, networking, compute, deployment, management, and monitoring.

Published in: Technology, News & Politics

  • https://oracleus.activeevents.com/2013/connect/sessionDetail.ww?SESSION_ID=4728
    Amazon Web Services (AWS) EC2 offers you the ability to run your Oracle Database instances in a hosted, infrastructure-as-a-service (IaaS) environment. Running Oracle Database on AWS EC2 is very similar to running it in your own data center; to a DBA or a developer, there are few differences. However, there are several AWS platform considerations around security, storage, compute configuration, management, and monitoring that influence success. After this session, you'll have the foundational knowledge to architect, install, configure, secure, manage, and support your Oracle Database instances on AWS EC2. Whether you are a systems administrator, developer, DBA, or architect, you will benefit from learning the technical particulars of running Oracle Database on AWS.
  • This timeline highlights the collaboration between Oracle and AWS, along with the major milestones and joint deliverables. In 2007, a year after AWS was founded, Oracle became the first enterprise software vendor to collaborate with AWS. The first AWS service Oracle supported was Amazon EC2, in 2008. Soon after, also in 2008, Oracle released the Oracle Secure Backup Cloud (OSB) module on Amazon EC2. The ability to back up the Oracle Database in the cloud is a key part of Oracle's cloud offering, and OSB also allows backing up Oracle databases to Amazon's Simple Storage Service (S3). Compared to traditional tape-based offsite storage, cloud backups are more accessible, faster to restore under most circumstances, and more reliable. AWS started supporting Oracle VM (OVM) in 2010; OVM is the only hypervisor that AWS supports other than AWS's own Xen. In May 2011, Oracle on the Amazon Relational Database Service (RDS) was introduced, after MySQL (introduced in October 2009) and before Microsoft SQL Server. In 2012, AWS released the first set of Oracle on AWS test drives to support the increased demand from enterprise customers. These test drives were created by AWS Oracle System Integrator partners, and by the end of 2012 there were 23 Oracle test drives. The test drives were an opportunity for customers to try Oracle products on AWS: try before you buy! 2013 is the year of repeatable solutions, including new Amazon Machine Images (AMIs), reference configurations, CloudFormation scripts, white papers, and additional test drive labs, including labs for WebLogic.
  • Amazon Web Services provides highly scalable computing infrastructure that enables organizations around the world to requisition compute power, storage, and other on-demand services in the cloud. These services are available on demand, so a customer doesn't need to think about controlling or maintaining them, or even where they are located. Our approach has always been to be a customer-focused company: we constantly develop services in line with the needs of our customers to make sure they get the flexibility and usability they need to be successful.
  • Traditional and AWS environments: All Oracle software licenses are fully portable to Amazon Elastic Compute Cloud (EC2), including Enterprise License Agreements (ELA), Unlimited License Agreements (ULA), Oracle Partner Network (OPN), Business Process Outsourcing (BPO), and Oracle Technology Network (OTN) licenses. Just as with on-premises installations, OTN licenses carry a 30-day trial period. Similarly, OPN licenses can be used by partners to develop and test applications, but keep in mind that regular licenses must be purchased by the customer to run the application in production. The conversion factor for socket- and processor-based licenses is 0.25 per virtual core for Standard Edition licenses and 0.5 per virtual core for Enterprise Edition licenses. Standard Edition uses a 1/4 multiplier, which means you only need 1 Oracle license for every 4 virtual cores on Amazon EC2; running on m1.small, m1.medium, or m1.large all costs the same from an Oracle perspective. When licensing Oracle programs with Standard Edition One or Standard Edition in the product name, pricing is based on the size of the Amazon EC2 instance: instances with 4 or fewer virtual cores are counted as 1 socket, which is considered equivalent to a processor license, and for instances with more than 4 virtual cores, every 4 virtual cores (rounded up to the closest multiple of 4) equate to a licensing requirement of 1 socket. For Enterprise Edition licenses, a 0.5 multiplier applies, which means you need 1 Oracle license for every 2 virtual cores on Amazon EC2. This means that running on m1.small or m1.medium costs the same from an Oracle perspective, while running on m1.large would cost double, since it has 4 virtual cores.
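    The core-counting rules above can be sketched as a small helper. This is only an illustration of the rules as stated in these notes (the function name and "SE"/"EE" codes are my own, and the virtual-core counts per instance type follow the text above); always confirm against Oracle's current cloud licensing policy.

```python
import math

def oracle_ec2_licenses(virtual_cores, edition):
    """Licenses needed on EC2 per the counting rules above.
    'SE' = Standard Edition / Standard Edition One: 4 virtual cores
    count as 1 socket (equivalent to 1 license).
    'EE' = Enterprise Edition: 1 license per 2 virtual cores."""
    cores_per_license = {"SE": 4, "EE": 2}[edition]
    return max(1, math.ceil(virtual_cores / cores_per_license))

# m1.large (4 virtual cores per the notes): 1 SE license, 2 EE licenses
print(oracle_ec2_licenses(4, "SE"))  # 1
print(oracle_ec2_licenses(4, "EE"))  # 2
```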
  • IDS: An intrusion detection system (IDS) is a device or software application that monitors network or system activities for malicious activity or policy violations and produces reports to a management station. Some systems may attempt to stop an intrusion attempt, but this is neither required nor expected of a monitoring system.
    IPS: Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of an IPS are to identify malicious activity, log information about it, attempt to block or stop it, and report it. Intrusion prevention systems are considered extensions of intrusion detection systems because both monitor network traffic and/or system activities for malicious activity.
    A host-based intrusion detection system (HIDS) monitors and analyzes the internals of a computing system and, in some cases, the network packets on its network interfaces (like a NIDS). A host-based IDS monitors all or parts of the dynamic behavior and state of a computer system. HIDS was first designed for the mainframe and uses sensors (agents) located on each host, typically installed on machines deemed susceptible to attack. The term "host" refers to an individual computer or virtual host, so a separate sensor is needed for every machine. Sensors collect data about events taking place on the monitored system; this data is recorded by the operating system in audit trails, which makes HIDS very log-intensive.
    Network-based intrusion detection systems (NIDS) take a different approach: they collect information from the network itself rather than from each separate host. They operate essentially on a "wiretapping" concept (network taps), collecting information from the network traffic stream as data travels on the network. The IDS checks for attacks or irregular behavior by inspecting the contents and header information of all packets moving across the network. Network sensors come equipped with "attack signatures," rules defining what constitutes an attack, and most network-based systems allow advanced users to define their own signatures. This method is also known as packet sniffing, and it allows the sensor to identify hostile traffic.
    On the HIPS/HIDS question, the typical FUD is around the additional resources used by the HIPS agent: the claim that you need to run more instances (and pay more) because the agent consumes resources. In fact, the HIPS solution we recommend, Trend Micro Deep Security, is lightweight because it only loads the signatures required for that instance based on the software and OS that is running; it also has the advantage of being able to stop attacks and of reducing false positives, since the signature set is automatically tuned for that particular instance. This is a big benefit, because typical NIDS generate so much noise that no one ever looks at the output, resulting in a lower security posture in many cases. For customers who really want NIDS, the Alert Logic Threat Manager product is also fairly lightweight; it does impact network performance, but since few instances are ever 100% network-bound, the additional bandwidth has a negligible impact.
    Cisco ASA and SonicWall offer dedicated devices for AWS VPC. When you configure a VPN on the AWS side, it generates an ACL; if the tunnel requests 0.0.0.0/0 on both devices, all traffic on that device will go to AWS. BGP is available, so this is not an issue in general; it is only an issue when using an ASA (which requires specific routes).
  • For AWS, this means securing the underlying infrastructure:
    - Data centers: nondescript facilities, 24x7 security guards, two-factor authentication, access logging and review, video surveillance, disk degaussing and destruction
    - Hardware infrastructure: servers, storage devices, and other appliances on which all of our services rely
    - Software infrastructure: host operating systems, service applications, and virtualization software
    - Network infrastructure: routers, switches, load balancers, firewalls, cabling, etc., including continuous network monitoring at external boundaries, secure access points, and redundant infrastructure
  • Physical security: Amazon has many years of experience in designing, constructing, and operating large-scale datacenters. This experience has been applied to the AWS platform and infrastructure. AWS datacenters are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access datacenter floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff. AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if he or she continues to be an employee of Amazon or Amazon Web Services.
    All physical access to datacenters by AWS employees is logged and audited routinely.
    Network security:
    - Distributed Denial of Service (DDoS): standard mitigation techniques in effect
    - Man in the Middle (MITM): all API endpoints protected by SSL
    - IP spoofing: prohibited at the host OS level
    - Unauthorized port scanning: a violation of the TOS; detected, stopped, and blocked
    - Packet sniffing: promiscuous mode is ineffective; protection at the hypervisor level
    Storage device decommissioning uses techniques from DoD 5220.22-M ("National Industrial Security Program Operating Manual") and NIST 800-88 ("Guidelines for Media Sanitization"); ultimately, all devices are degaussed and physically destroyed.
    Virtual memory and local disk: proprietary disk management prevents one instance from reading the disk contents of another; disk is wiped upon creation; disks can be encrypted by the customer.
    AWS third-party attestations, reports, and certifications for the AWS environment:
    - Service Organization Controls (SOC) reports: SOC 1 Type II (SSAE 16/ISAE 3402, formerly SAS 70), SOC 2 Type II, SOC 3
    - Payment Card Industry Data Security Standard (PCI DSS) Level 1 certification
    - ISO 27001 certification
    - FedRAMP(SM)
    - DIACAP and FISMA
    - ITAR
    - FIPS 140-2
    Additional information is available at https://aws.amazon.com/compliance/. Customers have deployed various compliant applications: Sarbanes-Oxley (SOX), HIPAA (healthcare), FedRAMP(SM) (US public sector), FISMA (US public sector), ITAR (US public sector), DIACAP MAC III Sensitive IATO.
  • The firewall can be configured in groups, permitting different classes of instances to have different rules. Consider, for example, a traditional three-tiered web application. The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet. The group for the application servers would have port 8000 (application specific) accessible only to the web server group. The group for the database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer's corporate network. Highly secure applications can be deployed using this expressive mechanism. Here is an example of the commands needed to establish a multi-tier security architecture (customers could, of course, use the AWS Management Console to do the same):

    # Permit HTTP(S) access to Web layer from the entire Internet
    ec2auth Web -p 80,443 -s 0.0.0.0/0
    # Permit SSH access to App layer from corp network
    ec2auth App -p 22 -s 1.2.3.4/32
    # Permit SSH access to DB layer from vendor network
    ec2auth DB -p 22 -s 5.6.7.8/32
    # Permit application and DB layer access to appropriate internal layers
    ec2auth App -p $APP_PORT -o Web
    ec2auth DB -p $DB_PORT -o App
    # Permit bastion host access for Web and DB layers from App layer
    ec2auth Web -p 22 -o App
    ec2auth DB -p 22 -o App
  • On slide 14, under encryption, we can split encryption at rest into:
    o Oracle Transparent Data Encryption at the database level, storing keys in CloudHSM
    o OS-level encryption using tools like TrueCrypt, or third-party encryption tools like SafeNet
  • We have achieved up to 40,000 IOPS by striping together 22 Amazon EBS volumes of 2,000 PIOPS each. Theoretically, with 4,000 PIOPS volumes you could achieve an even higher total when many Amazon EBS volumes are striped together. The effect of cumulative IOPS diminishes after about 20-22 volumes, so we don't recommend going beyond 20-22 TB overall size for the database.
    Provisioned IOPS EBS volumes are designed to deliver predictable and consistent high performance for I/O-intensive workloads such as databases. With Provisioned IOPS volumes, you specify an IOPS (I/O operations per second) rate when creating a volume, and Amazon EBS provisions that rate for the lifetime of the volume. Some important characteristics of Provisioned IOPS volumes:
    - Amazon EBS currently supports up to 4,000 IOPS per Provisioned IOPS volume. By striping across 10 volumes, you could consistently provide your database with up to 40,000 IOPS.
    - The provisioned IOPS figure applies to I/O operations of 16KB or less. Beyond 16KB, the number of IOPS decreases proportionally with the size of the I/O. For example, if you provision a 4,000 IOPS volume and your average I/O size is 32KB, you should expect 2,000 IOPS; if your I/O size is 64KB, expect 1,000 IOPS.
    - There is a maximum ratio of 10 between the volume size (in GB) and the provisioned IOPS. For example, if you provision a 50GB volume, the maximum provisioned IOPS you could request would be 500.
    - While providing more consistent performance, Provisioned IOPS can be more cost-effective than standard EBS volumes if your database consistently generates a high I/O workload: with Provisioned IOPS you pay for the number of provisioned IOPS, whereas with standard EBS volumes you pay for actual usage. This has the added benefit of making your I/O cost more predictable.
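    The I/O-size rule and the 10:1 size-to-IOPS ratio described above can be sketched as two small helpers (illustrative only; the function names are my own):

```python
def effective_iops(provisioned_iops, io_size_kb):
    """Effective IOPS of a Provisioned IOPS EBS volume, per the rule
    above: the full provisioned rate applies to I/Os of 16KB or less,
    decreasing proportionally for larger I/Os."""
    if io_size_kb <= 16:
        return provisioned_iops
    return provisioned_iops * 16 // io_size_kb

def max_provisioned_iops(volume_size_gb):
    """Maximum PIOPS you can request, given the 10:1 ratio above."""
    return volume_size_gb * 10

print(effective_iops(4000, 32))    # 2000
print(effective_iops(4000, 64))    # 1000
print(max_provisioned_iops(50))    # 500
```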
  • If your performance is disk I/O limited, changes to the configuration of your disk resources may be in order. Amazon EBS volumes, the persistent block storage available to EC2 instances, are connected via the network, so an increase in network usage can have a significant impact on "disk" performance; be sure to choose the appropriate instance size. To scale up random I/O performance, you can increase the number of EBS volumes as a ratio of EBS storage, such as using 6 x 100GB EBS volumes instead of 1 x 600GB EBS volume. EBS volumes can be aggregated using techniques like Linux software RAID, Logical Volume Manager (LVM), or Oracle Automatic Storage Management (ASM). Aggregating multiple EBS volumes increases the total IOPS of the logical volume; however, remember that striping generally reduces the operational durability of the logical volume by a degree inversely proportional to the number of EBS volumes in the stripe set. A single EBS volume can provide approximately 100 IOPS, and single instances with arrays of 10+ attached EBS disks can often reach 1,000 IOPS sustained. Data, log, and temporary files benefit from being stored on independent EBS volumes or volume aggregates because they present different I/O patterns. To take advantage of additional attached EBS disks, evaluate the network load to ensure that your instance size provides the network bandwidth required. For sequential disk access, ephemeral disks are somewhat higher performance and don't impact your network connectivity; some customers have found it useful to store temporary files on ephemeral disks to conserve network bandwidth and EBS I/O for log and data operations. This is a good place to elaborate on PIOPS, EBS-optimized instances, and CC and High I/O instances.
    Oracle ASM on Amazon EBS: Oracle Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager. Compared to other file systems, ASM presents several advantages specifically for Oracle databases:
    - Chunks of data are distributed pseudo-randomly across all available logical disks in a disk group, removing potential performance hot spots.
    - ASM does not perform any I/O itself and does no read-ahead (as file systems do) that pushes data into cache that is never used by the database.
    - No intensive tuning, such as setting fragment sizes and file system journals, is required.
    - No journal is required for consistency; this function is already covered by Oracle redo logs.
    - Adding or removing storage is very easy. After adding storage, ASM automatically rebalances the volumes so they are all utilized equally. This increases performance and is particularly useful in an environment like AWS, where you can provision new EBS volumes on demand.
    Oracle ASM disk groups provide three types of redundancy: normal, high, and external. With normal and high redundancy, files are replicated within the disk group; with external redundancy, ASM does not provide any redundancy for the disk group. When setting up ASM for a group of volumes, we recommend external redundancy, since Amazon EBS volumes are already redundant within an Availability Zone. Oracle ASM best practices, like having different disk groups for data and log files and for work and recovery areas, also apply on Amazon EBS.
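    The striping tradeoff noted above (aggregate IOPS grows with the stripe width, while a RAID-0 stripe survives only if every member volume survives) can be made concrete with a quick sketch; the per-volume failure probability here is a made-up illustrative number, not an AWS figure:

```python
def stripe_tradeoff(n_volumes, iops_per_volume, p_volume_loss):
    """Aggregate IOPS vs. durability of an n-way RAID-0 stripe of EBS
    volumes. The stripe is lost if ANY member volume is lost, so the
    loss probability rises as volumes are added."""
    total_iops = n_volumes * iops_per_volume
    p_stripe_loss = 1 - (1 - p_volume_loss) ** n_volumes
    return total_iops, p_stripe_loss

# 6 x 100GB volumes at ~100 IOPS each vs. a single 600GB volume
print(stripe_tradeoff(6, 100, 0.001))
print(stripe_tradeoff(1, 100, 0.001))
```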
  • Because this architecture is targeted at a medium-sized enterprise-class database, we recommend using fewer than 10 total volumes. Maintain a number of pending I/O requests to get the most out of your Provisioned IOPS volumes: to provide a benefit, a Provisioned IOPS volume must maintain an average queue length (rounded up to the nearest whole number) of 1 for every 200 provisioned IOPS. If you set the queue length lower, your volume will not consistently deliver the IOPS you've provisioned; setting it too far above the recommended value won't affect the IOPS the volume delivers, but per-request latencies will increase. For a 500 Provisioned IOPS volume, the average queue length must be 3; if it is less than 3, you aren't consistently sending enough I/O requests. Likewise, maintain a queue depth of 10 for a 2,000 Provisioned IOPS volume.
    Example: a 2,000 Provisioned IOPS volume can handle 2,000 16KB reads/writes per second, or 1,000 32KB reads/writes per second, or 500 64KB reads/writes per second. You get a consistent 32 MB/sec throughput (with 16KB or larger I/Os): if an index creation sends 32KB I/Os, IOPS drops to 1,000 but you still get 32 MB/sec throughput, and on a best-effort basis you may get up to 40 MB/sec.
    Instance store: zero network overhead (a local, direct-attached resource) and no network variability, but not optimized for random I/O; it is generally better for sequential I/O. The root volume and data volumes are lost on physical disk failure or on stopping or terminating the instance. It is ideal for storing temporary data like buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
    I/O benchmarking tools:
    - fio (Linux, Windows): for benchmarking I/O performance (note that fio has a dependency on libaio-devel)
    - Oracle ORION (Linux, Windows): for calibrating the I/O performance of storage systems to be used with Oracle databases
    - SQLIO (Windows): for calibrating the I/O performance of storage systems to be used with Microsoft SQL Server
    File systems: we like ext3/4, but we love XFS: high performance, consistent, robust, with lots of options for tweaking and adjusting as needed. Our favorite mount options (your mileage may vary): inode64, noatime, nodiratime, attr2, nobarrier, logbufs=8, logbsize=256k, osyncisdsync, nobootwait, noauto. These yield great performance, reduce unnecessary writes, and are stable. We like ZFS a lot too, but we want to see more runtime on Linux first; FreeBSD/ZFS would be a fine choice. However: test your workload! File systems behave differently under different workloads.
    An EC2 instance comes with a certain amount of "local" storage, which is ephemeral. Any data placed on those devices will not be available after the instance is terminated by the customer, or if the underlying hardware fails, which would cause the instance to restart on a different server. This characteristic makes instance storage ill-suited for persistent database storage.
    AWS offers a storage service called Amazon EBS (Elastic Block Store), which provides persistent block-level storage volumes. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. They are designed to be highly available and reliable: Amazon EBS volume data is replicated across multiple servers in an Availability Zone (datacenter) to prevent the loss of data from the failure of any single component. For all these reasons, we recommend using EBS for data files, log files, and the flash recovery area. Using ephemeral storage intelligently can boost performance; it can be used for many kinds of temporary files and for regularly backed-up static files.
    For high I/O workloads, an alternative to Provisioned IOPS EBS volumes is to use High I/O instances, which contain SSD drives as internal storage and address the most demanding database workloads. The High I/O Quadruple Extra Large instance can provide up to 120,000 random read IOPS and 85,000 random write IOPS. The High Memory Cluster Eight Extra Large instance offers 244 GB of memory in addition to 240 GB of local SSD storage. Note, however, that this SSD storage is internal to the instance and will be lost if the instance is stopped or the underlying hardware fails. When using this type of storage for databases, make sure you have a solid strategy to avoid data loss, for example by frequently backing up your data to Amazon S3. In addition to storage performance, High I/O and High Memory Cluster instances also have very high I/O performance via 10 Gigabit Ethernet, which allows for increased EBS performance.
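    The queue-length rule for Provisioned IOPS volumes discussed above (an average queue length of 1, rounded up, for every 200 provisioned IOPS) reduces to a one-liner; the function name is my own:

```python
import math

def required_queue_depth(provisioned_iops):
    """Average queue length needed to consistently sustain a
    Provisioned IOPS EBS volume, per the 1-per-200-IOPS rule above."""
    return max(1, math.ceil(provisioned_iops / 200))

print(required_queue_depth(500))   # 3
print(required_queue_depth(2000))  # 10
```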
  • - General purpose/Standard: m3.2xlarge, 64-bit, 8 vCPUs, 26 ECU, 30 GB RAM, EBS storage only, EBS-optimized available, high network performance (mid-size relational database)
    - Memory optimized (high memory and storage): cr1.8xlarge, 64-bit, 32 vCPUs, 88 ECU, 244 GB RAM, 2 x 120 GB SSD, 10 Gigabit networking (SAP HANA)
    - Compute optimized: c1.xlarge, 64-bit, 8 vCPUs, 20 ECU, 7 GB RAM, 4 x 420 GB storage, EBS-optimized available, high network performance (web or application server)
    - Memory optimized: m2.4xlarge, 64-bit, 8 vCPUs, 26 ECU, 68.4 GB RAM, 2 x 840 GB storage, EBS-optimized available, high network performance (larger relational database)
    - Storage optimized: hs1.8xlarge, 64-bit, 16 vCPUs, 35 ECU, 117 GB RAM, 24 x 2,048 GB storage, 10 Gigabit networking (data warehouse / Redshift)
    AWS EC2 provides the flexibility to choose from a number of different instance types to meet your computing needs. Each instance provides a predictable amount of dedicated compute capacity and is charged per instance-hour consumed. First-generation (M1) Standard instances provide customers with a balanced set of resources and a low-cost platform well suited to a wide variety of applications. Second-generation (M3) Standard instances provide a balanced set of resources and a higher level of processing performance compared to first-generation Standard instances; instances in this family are ideal for applications that require higher absolute CPU and memory performance, such as encoding, high-traffic content management systems, and memcached. High-Memory instances offer large memory sizes for high-throughput applications, including database and memory-caching applications. High-CPU instances have proportionally more CPU resources than memory (RAM) and are well suited to compute-intensive applications. There are also various high-storage and cluster compute instance types available. EC2 instances come in 19 different types, from the Micro instance all the way up to the Cluster Compute and High I/O instances. We have also grouped the instance types into traditional configurations, whether High Memory instances for databases or High CPU for workloads with high computational needs. The standard instance types are configured to be the workhorses of your application, like a web-tier frontend.
  • Compute optimized cc2.8xlarge 64-bit 32 88 ECU 60.5 Memory 4 x 840 - 10 Gigabit4 Memory optimized m2.4xlarge 64-bit 8 26 ECU 68.4 2 x 840 Memort Yes HighStorage optimized hi1.4xlarge 64-bit 16 35 60.5 2 x 1,024SSD2 - 10 Gigabit4 Very small instance types are not suitable for Oracle DB as Oracle Database is resource intensive when it comes to CPU usage. Instances with larger memory foot print would improve database performance by providing better caching and bigger SGA.Thus it is a good idea to choose instances that has a good balance of memory and CPU. It wouldn’t help to use EC2 instance with higher CPU and Memory than permitted by the DB license type you have.Oracle Database heavily uses disk storage for read and write operations so it is highly recommended to use only EBS Optimized EC2 Instances.Amazon EC2 instances are grouped into eight families: Standard (first and second generation), Micro, High-Memory, High-CPU, Cluster Compute, Cluster GPU, High I/O, High Storage, and High Memory Cluster. For complete, up-to-date information about Amazon EC2 instance types, see http://aws.amazon.com/ec2/instance-types/. When running high-performance databases, the High-Memory Instances can be a good option because they allow you to maximize the amount of memory available to the SGA (System Global Area) of the database. The Cluster Compute Instances combine very high CPU capability and high-memory. You should also consider the High I/O Instances because they feature local SSD drives and will offer the most I/O of any instance type. The High Memory Cluster Instances feature a high amount of memory with local SSD storage and can be good choice for the largest database instances. For more information about the impact of the instance size on I/O performance, see the “Disk I/O Management” sections.In AWS it is very easy and quick to scale vertically (change the instance type and size), if you find out that you undersized or oversized your instance. 
The method for changing the size of the instance depends on the type of AMI you selected:
- EBS-backed instances are instances whose root device is stored in Amazon Elastic Block Store (Amazon EBS). In this case, you can simply stop the instance, change the instance type (through the AWS Management Console, the CLI, or an API), and restart the instance.
- Instance store-backed instances are instances whose root device is stored on the instance's internal storage. In this case, you would save any changes to the root device (for example, by rebundling an AMI), terminate the instance, and start a new one.
In both scenarios, the instance size change can be accomplished within a few minutes.
Note: To determine whether your instance is EBS-backed or instance store-backed, you can look at the Amazon EC2 instances dashboard in the AWS Management Console, or parse the output of the CLI command ec2-describe-images -v <AMI ID> and look for the value of the rootDeviceType field.
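For an EBS-backed instance, the stop, modify, start sequence described above can be sketched with the unified AWS CLI. The instance ID and target type below are placeholders, and the commands assume working AWS credentials:

```shell
# Confirm the root device type first (ebs vs. instance-store).
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].RootDeviceType' --output text

# EBS-backed: stop the instance, change the type, start it again.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type '{"Value": "m2.4xlarge"}'
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

For an instance store-backed instance the same CLI can rebundle and register a new AMI instead, but the stop/modify/start shortcut above does not apply.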
  • AMIs: You need an AMI (Amazon Machine Image) to start an EC2 instance, and there are many options. We recommend using the AMIs published by Oracle, available at http://aws.amazon.com/amis/Oracle. There are AMIs containing Oracle Enterprise Linux and Oracle Database 11g Release 2 in the following versions: Standard Edition One, Standard Edition, and Enterprise Edition. You get the benefit of a fully pre-installed Oracle database. Alternatively, customers can start an EC2 instance running the operating system of their choice and install Oracle manually, just as they would on an internal server at their company. As the Oracle-supplied AMIs have not kept up with demand, and Oracle has not been providing AMIs for the latest releases, it is a good idea to give users both options.
Sizing: The amount of CPU and memory, as well as the network bandwidth available to the database, depends on the type of instance on which it is deployed. If migrating an existing database from on premises to EC2, you can pick the closest instance type as a starting point and then monitor performance to determine whether it is a good match or whether you need a bigger or smaller instance type. When running constant-on, high-performance databases, it is best to choose the high-memory instance class, as this allows you to maximize the amount of memory available to the SGA of the database. Larger instance types may also have the added benefit of providing higher throughput to the attached EBS volumes. The newer Cluster Compute and High I/O instances offer further advantages here.
Instance type: Increasing the performance of a database requires an understanding of which of the server's resources is the performance constraint. If database performance is limited by CPU or memory, users can scale up the memory, compute, and network resources by choosing a larger instance type.
The three architectures we have discussed cover most Oracle database use cases on the AWS platform. In the rare case that your OLTP application needs very high IOPS, in the range of 100,000 to 200,000, this architecture uses the local SSD-based volumes available in the Amazon EC2 instance itself. Because these are ephemeral disks, there is the potential to lose the entire database if the instance fails. To prevent data loss and ensure reliability, this architecture employs a second instance in the same Availability Zone and uses Oracle Data Guard to replicate data to it from the primary instance. We may also want to introduce the Oracle Smart Flash Cache feature to extend database performance on high-memory instance types with SSD disks. In short, on Oracle 11g we can use Smart Flash Cache to extend the database buffer cache with 240 GB of SSD on top of the 244 GB of RAM. This is useful for high-memory database requirements and also in-memory database requirements.
For simple bootstrapping, user-data text or scripts may be adequate; keep in mind that user data is limited to 16 KB. s3cmd is often used to fetch bootstrap scripts from S3. More on s3cmd can be found here:
http://s3tools.org/s3cmd
https://github.com/s3tools/s3cmd
A very good document on using user data, CloudFormation, Chef, Puppet, and other tools to bootstrap EC2 instances can be found here:
https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf
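Since user data is capped at 16 KB, it is worth checking a bootstrap script's size before baking it into a launch. A minimal sketch; the script body and S3 path are illustrative assumptions:

```shell
# Write a tiny bootstrap script; a real one might s3cmd-get Oracle
# response files and kick off a silent install (the bucket and paths
# here are hypothetical).
cat > bootstrap.sh <<'EOF'
#!/bin/bash
s3cmd get s3://my-bootstrap-bucket/oracle/db.rsp /tmp/db.rsp
EOF

# User data must stay at or under 16 KB (16,384 bytes).
size=$(wc -c < bootstrap.sh)
if [ "$size" -le 16384 ]; then
    echo "ok: ${size} bytes, fits in user data"
else
    echo "too large: ${size} bytes"
fi
```

Anything larger than the limit is better fetched from S3 by a stub script, which is exactly the s3cmd pattern mentioned above.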
  • In addition to the availability features offered by AWS, customers choosing to deploy Oracle on EC2 benefit from the high-availability capabilities that Oracle offers, such as online reorganization, transportable tablespaces, RMAN, Oracle Secure Backup, Streams, and GoldenGate. Oracle Data Guard can be used to set up one or more standby databases as the foundation of a highly available environment. It maintains the standby databases as transaction-consistent copies of the primary database, and these instances can be placed in several Availability Zones. Then, if the production database becomes unavailable because of a planned or unplanned outage of its instance or of the full Availability Zone, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. It has three protection modes, allowing customers to maximize protection, availability, or performance. The Active Data Guard option, featured in the diagram below, enables read-only access to the standby databases, thereby allowing customers to run read queries and reports on the standby instances and to perform backups from a standby instance.
  • Use Route 53 to manage the Oracle database endpoints as seen by applications; this makes it easier to maintain HA in an environment where the Oracle instances themselves may be transient.
Vertical scaling: For many customers, increasing the performance of a single DB instance is the easiest way to increase the performance of their application overall. In the Amazon EC2 or Amazon RDS environments, you can simply stop an instance, increase the instance size, and restart the instance. This is particularly true if you have a set maintenance window and can tolerate system downtime. This technique is often referred to as scaling up.
Advanced setups can benefit from the elastic nature of Amazon Web Services. By monitoring the usage of the primary database with Amazon CloudWatch, you can receive notifications indicating that a heavy load threshold has been met or exceeded. In this situation, you can create new standby databases on demand to lower the load on the primary. Once this heavy usage period is finished, standby instances and the resources they consume can be disposed of. Data Guard can be used only with Enterprise Edition; third-party solutions such as SharePlex and Dbvisit provide similar functionality for Standard Edition and Standard Edition One and are worth mentioning as well.
Active-active replication: Commercially available active-active database replication technologies can also be used to boost the overall throughput of an application. This can be especially useful if there is a way to divide the workload between multiple DB instances such that, even when they share the same schema, the updates they make are mostly exclusive to each other; for instance, handling customer orders based on the location of the customer, with all US-based orders going into one database and non-US orders going to a second database.
However, the application would need to handle conflict resolution; for instance, if a running total of the number of orders is maintained, it needs to be updated outside of these replicated databases. Oracle GoldenGate and Oracle Streams can also be used to build such multi-master setups, and given the emphasis Oracle places on GoldenGate in its roadmap, it is worth covering as well.
AWS-specific tactics for implementing HA best practices:
1. Fail over gracefully using Elastic IPs: an Elastic IP is a static IP that is dynamically re-mappable. You can quickly remap and fail over to another set of servers so that your traffic is routed to the new servers. It works well when you want to upgrade from old to new versions or in case of hardware failures.
2. Utilize multiple Availability Zones: Availability Zones are conceptually like logical data centers. By deploying your architecture to multiple Availability Zones, you can ensure high availability. Utilize Amazon RDS Multi-AZ [21] deployment functionality to automatically replicate database updates across multiple Availability Zones.
3. Maintain an Amazon Machine Image so that you can restore and clone environments easily in a different Availability Zone; maintain multiple database standbys across Availability Zones and set up hot replication.
4. Utilize Amazon CloudWatch (or various real-time open-source monitoring tools) to get more visibility and take appropriate action in case of hardware failure or performance degradation. Set up an Auto Scaling group to maintain a fixed fleet size so that it replaces unhealthy Amazon EC2 instances with new ones.
5. Utilize Amazon EBS and set up cron jobs so that incremental snapshots are automatically uploaded to Amazon S3 and data is persisted independent of your instances.
6. Utilize Amazon RDS and set the retention period for backups, so that it can perform automated backups.
This implementation sets up Data Guard for Fast-Start Failover, so that failover to the standby instance can be achieved quickly. In this architecture the primary instance uses an Elastic Network Interface (ENI), which can be leveraged for an even faster failover by swapping the ENI from the primary instance to the standby instance, because both instances are in the same Availability Zone. This requires a third observer instance to monitor the primary instance and swap the ENI in case of a failure.
Oracle Active Data Guard is an Oracle Database add-on that allows you to set up standby databases that can be open for read-only requests while continuing to apply transactions from the primary database. The standby databases can be used as read replicas of your primary database, and the replication between the primary and the standby databases can be configured to be synchronous. This allows you to scale your database layer horizontally by adding read replicas and to offload read-only queries from the primary database. This setup is often valuable because most applications generate more reads against the database than writes. Also, read-heavy clients like business intelligence applications can be executed against a standby instance with no impact on the primary production database.
You can use Active Data Guard to build an elastic database infrastructure. By monitoring the usage of the primary database with Amazon CloudWatch, you can receive notifications indicating that a heavy load threshold has been met or exceeded. In this situation, you can create new standby databases on demand to lower the load on the primary.
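The ENI swap described above is scriptable. A sketch with the AWS CLI follows; the ENI ID, instance IDs, and device index are placeholders, and in practice the observer instance would run logic like this when it detects a primary failure:

```shell
# Look up the ENI's current attachment on the failed primary, detach
# it, then attach it to the standby at the same device index so the
# database IP moves with it.
ATTACHMENT_ID=$(aws ec2 describe-network-interfaces \
    --network-interface-ids eni-0123456789abcdef0 \
    --query 'NetworkInterfaces[0].Attachment.AttachmentId' --output text)
aws ec2 detach-network-interface --attachment-id "$ATTACHMENT_ID" --force

aws ec2 attach-network-interface \
    --network-interface-id eni-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 --device-index 1
```

Because clients connect to the ENI's private IP, no application-side reconfiguration is needed after the swap.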
Once this heavy usage period is over, standby instances and the resources they consume can be disposed of.
Note: Oracle Active Data Guard is only available for Oracle Database Enterprise Edition, not for Standard Edition or Standard Edition One.
It is also possible to use active-active replication to boost performance. In this scenario, you create one or more database replicas that can be both written to and read from, in effect implementing a distributed database in which all replicas are synchronized. These technologies are covered in the "High Availability" section.
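The "heavy load threshold" notifications mentioned above can come from a CloudWatch alarm. A sketch with the AWS CLI; the alarm name, instance ID, threshold, and SNS topic ARN are illustrative assumptions:

```shell
# Alarm when the primary's average CPU stays above 80% for two
# 5-minute periods; the SNS topic could drive on-demand standby
# provisioning (and a matching low-CPU alarm could drive teardown).
aws cloudwatch put-metric-alarm \
    --alarm-name oracle-primary-high-cpu \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 80 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:scale-out-standby
```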
  • Monitoring
Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources, starting with Amazon EC2. It gives customers visibility into resource utilization, operational performance, and overall demand patterns, including metrics such as CPU utilization, disk reads and writes, and network traffic. To use Amazon CloudWatch, simply select the Amazon EC2 instances that you would like to monitor; within minutes, Amazon CloudWatch begins aggregating and storing monitoring data that can be accessed using the AWS Management Console, the web service APIs, or the command line tools. CloudWatch provides detailed CPU, disk, and network utilization metrics for each enabled EC2 instance and EBS volume, allowing detailed reporting and management, and enabling infrastructure automation and orchestration based on these availability and load metrics.
Amazon Elastic Block Store (Amazon EBS): Amazon EBS volumes are durable, high-performance, network-attached block device resources. These "virtual disks" can be attached to your servers and persist when servers are stopped or terminated, thus providing durable storage for databases. EBS volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% and 0.5%.
Users can also deploy Oracle Enterprise Manager Grid Control to monitor and manage their database environment on EC2. Using the Diagnostics and Tuning packs, you can identify the root causes of various issues with the database (hanging sessions, long-running queries, etc.)
as well as get recommendations on optimizing and tuning database performance.
Finally, users can take any open-source or third-party monitoring tool with built-in Oracle monitoring capabilities, such as Nagios, Cacti, or Zabbix, and run it on Amazon EC2 to monitor their whole AWS environment, including their Oracle databases.
  • The third step is to configure the on-premises application environment to retrieve data from Amazon Glacier as needed. Amazon Web Services provides SDKs for developing applications against Amazon Glacier; it is supported by the AWS SDKs for Java, .NET, PHP, and Python. This means you could write extensions to your Oracle Fusion Applications to automatically archive data to Amazon Glacier. The AWS Identity and Access Management (IAM) service can be used to secure access to Amazon Glacier; IAM allows access to be granted to both individual users and groups of users. Amazon Simple Notification Service (SNS) can be used to notify the application when a retrieval job completes.
  • AWS provides a host of options for backing up Oracle database environments and preparing for disaster. This slide covers database backup with the Oracle Secure Backup (OSB) Cloud Module, which backs up via RMAN directly to Amazon S3. Setting up policy-based archival from S3 to Glacier for information lifecycle management (ILM) will be discussed, as will a more cost-effective alternative to OSB that utilizes EBS snapshots, including the details of snapshot management and restoration, the use of snapshots with RAID, and snapshot copy to other regions. For backup and recovery of the Oracle Database software itself, AMI creation and AMI copy to other AWS regions for DR will be covered, along with planning, preparing, and executing DR plans and DR architectural design patterns on AWS.
Oracle on Amazon EC2
When you deploy Oracle databases on Amazon EC2 instances, you are responsible for database backups. Options include the Oracle Secure Backup Cloud Module, which allows you to back up your database via Oracle RMAN (Recovery Manager) directly to Amazon S3, and snapshots of your underlying EBS volumes.
Oracle Secure Backup Cloud Module
The Oracle Secure Backup Cloud Module is an MML (Media Management Library) for RMAN. It provides the flexibility to back up a database to Amazon S3. It is important to note that OSB Cloud Module backups work with tools like Oracle Enterprise Manager and customized RMAN scripts.
The benefits are twofold. First, OSB Cloud Module backups stored on Amazon S3 are always accessible. Unlike tapes that need to be shipped to and from other locations and then loaded, data stored on Amazon S3 is instantly available through any of its various interfaces (command line, API, web console, etc.). The ability to easily access backups substantially reduces database restoration times.
Second, using Amazon S3 means having a geo-redundant, highly durable storage service for database backups at a very affordable price.
EBS snapshots
In addition to RMAN backups, customers can also put their tablespaces in hot backup mode and take snapshots of the underlying Amazon EBS (Elastic Block Store) volumes, using the EBS snapshot feature through the AWS Management Console, command line interface, or API. For long-term archival, those backups can in turn be moved from S3 to Amazon Glacier via lifecycle policies.
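The hot-backup-plus-snapshot flow just described can be sketched as follows. The volume ID is a placeholder, this assumes the data files live on that single EBS volume, and it must run on the database host as a user that can connect as SYSDBA:

```shell
# Put the database in hot backup mode so the snapshot is recoverable.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE BEGIN BACKUP;
EXIT
EOF

# Snapshot the EBS volume holding the data files.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "Oracle hot backup snapshot"

# Take the database out of backup mode and force a log switch so the
# redo needed for recovery is archived.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
EXIT
EOF
```

If the data files are striped across multiple volumes (RAID or ASM), every volume in the stripe set must be snapshotted inside the same backup-mode window.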
  • Installing and configuring the database using CloudFormation and OpsWorks. This slide covers how to make the installation and configuration of the Oracle Database on EC2 a repeatable, template-based experience. CloudFormation can describe the EC2, EBS, security, and networking infrastructure in a single template, while OpsWorks makes upgrading and managing the Oracle Database on EC2 easy.
  • On the other end of the spectrum from the minimal PeopleSoft configuration is a highly available and scalable Oracle E-Business Suite implementation. These implementations can be complex and expensive; there are typically dense peak periods, and wild swings in traffic patterns result in low utilization rates of expensive hardware. Users' web requests are served by Amazon Route 53, a highly available Domain Name System (DNS) service. Network traffic is routed to infrastructure running in Amazon Web Services. The HTTP requests are first handled by Elastic Load Balancing, which automatically distributes incoming application traffic across multiple Amazon EC2 instances across AZs. Amazon Spot Instances or Auto Scaling can be used to support batch processing. Web and application servers are deployed in an Auto Scaling group, and Auto Scaling automatically adjusts your capacity according to conditions you define. Oracle database backups and the batch flat files for integration with the corporate data center are stored on Amazon S3. The storage volumes for the application servers are standard Amazon EBS volumes, while the Oracle database storage volumes are Amazon EBS PIOPS volumes, which provide up to 4,000 IOPS per volume and are striped using Oracle ASM. Spot Instances can be used to handle large batch loads.
  • Production deployment with HA. This architecture has many of the AWS and Oracle services, products, and features we have seen in the other use cases: region, AZs, VPC, customer gateway, internet gateway, VPN gateway, Amazon S3, OSB, and Oracle Data Guard. An AWS service we have not seen before is Elastic Load Balancing, because we have not previously shown a highly available and scalable multi-AZ architecture. Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance, scalability, and availability. In this case, we are distributed across multiple AZs. The rest of the architecture should be familiar to you, as a full production environment will have disaster recovery built in using Data Guard and Oracle Secure Backup.
  • This hybrid architecture applies to all the use cases except the Amazon Glacier use case. The Oracle Database could run in an AWS Direct Connect facility. Direct Connect facilities are essentially colocation sites that allow low-latency, high-bandwidth connections directly into the AWS data centers. These facilities are located in close proximity to the AWS data centers and offer 1 Gbps to 10 Gbps links to them. In this Oracle configuration, the web and application servers running Oracle WebLogic (or any other application server, such as Tomcat, IBM WebSphere, or Microsoft IIS) run in the AWS cloud. The architecture can include all the AWS services used in the previous use cases, such as Route 53, Elastic Load Balancing, Auto Scaling, EBS, and others. The Oracle Real Application Clusters (RAC) database runs in the Direct Connect facility, connected over a 1 Gbps or 10 Gbps dedicated link to the AWS data center. Oracle RAC is not supported inside AWS, so this is an excellent use case for a hybrid architecture. Datapipe is one Direct Connect partner that offers RAC-as-a-service with usage-based pricing for Oracle RAC. In addition to hosting Oracle RAC, other AWS partners, like NetApp, offer hardware and software solutions in a Direct Connect facility.
  • The module objectives: By the end of this training you will be able to do the following: identify the Oracle and AWS alliance timeline; describe how to identify opportunities that can be solved by AWS products and services, and what other customers have done before; verify some common best practices using Oracle and AWS products and services; and describe the support and licensing policies and other online resources.
  • Transcript

    • 1. Best Practices for Running Oracle Database Instances on Amazon Web Services EC2 [CON4728] ©Amazon.com, Inc. and its affiliates. All rights reserved. Tom Laszewski Strategic Solution Architect
    • 2. Setting the stage Goal : Build a secure, reliable, scalable, cost effective, elastic and performant Oracle database architecture on Amazon EC2 Session Assumptions • You are an enterprise architect, solution architect, DBA, system administrator, SQL developer or have a related role • Experience with AWS and Cloud computing • Strong Oracle Database skills • Working knowledge of compute, storage, security and networking • Experience deploying Oracle Databases on EC2 and/or RDS
    • 3. What you will learn AWS and Oracle Partnership Amazon Security and Networking Oracle Database Best Practices Amazon Storage Oracle Database Best Practices Amazon Compute Installation and Configuration Oracle Database Best Practices Amazon Scaling and HA Oracle Database Best Practices Management and Monitoring Best Practices Development Best Practices Complete Architecture Amazon RDS Next Steps
    • 4. Building out the architecture: End to end architecture / Final Blueprint; Development; Deployment; Management & Monitoring; Management & Administration; Scaling & HA; Bootstrapping; Instances / Installation and configuration; Storage; Security; Networking (Foundational Services)
    • 5. Oracle and AWS Cloud
    • 6. Oracle and AWS Partnership Oracle : First major software vendor to support AWS September 2008 : Oracle on EC2 Oracle technology stack and Oracle Applications http://aws.typepad.com/aws/2008/09/hello-oracle.html September 2008 : Oracle Secure Backup Cloud Module September 2010 : Oracle VM virtualization support May 2011 : Oracle on RDS Before SQL Server and after MySQL October 2012 : Oracle Test Drives Current : Updated AMIs, reference configurations, joint white papers, joint test drives + more
    • 7. All Oracle Software licenses are fully portable to Amazon Elastic Compute Cloud (EC2) • Enterprise License Agreement (ELA) • Unlimited License Agreement (ULA) • Oracle Partner Network (OPN) • Business Process Outsourcing (BPO) • Oracle Technology Network (OTN) Oracle on AWS • Processor & Socket Licensing: • 0.25 core multiplier for standard licenses (sockets) • 0.5 core multiplier for enterprise licenses (processor) • Oracle Cloud Licensing Policy Oracle License Portability on AWS Oracle AWS cloud licensing document: oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
    • 8. Security and Networking
    • 9. Networking : Building Blocks • Virtual Private Cloud • Subnets • Route Tables, Security Groups, NACLs • Virtual Private Gateway • AWS Direct Connect • Internet Gateway • Elastic IPs and Load Balancers
    • 10. Networking : Oracle DB Architecture WebLogic Availability Zone Elastic IP Internet Gateway Private VPC Subnet Public VPC Subnet NAT Instance Customer Gateway VPN Gateway Internet Corporate Datacenter ELB Route 53 Direct Connect Database
    • 11. Networking : Best Practices VPC • Use it…VPC by default for new accounts • Database in private subnet VPN • Redundant connections • Consider two Customer Gateways • Dynamic routing (BGP) over static (ASA) NAT • Set up multi-AZ NAT IDS/IPS • Trend Micro, AlertLogic, Snort • Host based • Conduct penetration test : prior approval from AWS Dedicated, secure connection • Direct Connect - 1 Gbps or 10 Gbps Fail over • ELB : Multi-AZ • Route 53 : Geo/region
    • 12. AWS Global Infrastructure Hardware AWS Data Centers AWS Account Amazon Provided Customer Configurable IAM User AWS Services Used by Customer Software Network IAM User • 24x7 guards • Limited access • Two-factor auth. • Disk destruction • Intrusion detection • Security reviews • Network monitoring • Secure API endpts Security : Building Blocks • Network access • Audit logging • Asset inventory • Guest OS patching • Anti-malware • IDS/IPS • Backups
    • 13. Security : Building Blocks Physical Security of Data Centers • Controlled, need-based access • Separation of Duties • 24 x 7 security guards Network Security • Distributed Denial of Service (DDoS) • Man in the Middle (MITM) • IP Spoofing : EC2 instances cannot spoof • Unauthorized Port Scanning • Packet Sniffing Storage Device Decommissioning • Uses techniques from: DoD 5220.22-M, NIST 800-88 • Ultimately, all devices are: degaussed, physically destroyed Virtual Memory and Local Disk • Proprietary disk management prevents one instance from reading disk contents of another AWS Third-Party Attestations, Reports, and Certifications • SOC 3, ISO 27001, PCI DSS, FIPS 140-2, FedRAMP, ITAR, SOX, HIPAA VPC, IAM, Security Groups
    • 14. Security : Oracle DB Architecture. Define the security groups per tier (all other Internet ports blocked by default):
Web tier: port 80 from 0.0.0.0/0 (HTTP); port 443 from 0.0.0.0/0; port 22 from Bastion (SSH)
App tier: port 8000 from Web; port 22 from Bastion
DB tier: port 1521 from App; port 1521 from 207.171.191.92/32 (DB sync); port 22 from Bastion
Bastion: port 22 from 207.171.191.60/32
    • 15. Security : Best Practices Use Multiple Layers of Defense • Security Groups (EC2, VPC, RDS, ElastiCache) • IPTables • Bastion Host • Host-based Firewalls* • IDS* Protect privacy and enforce your policies with data encryption • Encrypt data in transit (SSL/TLS) and TDE • Encrypt data at rest – TDE with keys in AWS CloudHSM – OS level : TrueCrypt, SafeNet, CipherCloud (EBS+RDS), 3rd party Identity and Access Management • Create Users and Groups within a master account Operating system security • EC2 Key Pairs • No external SSH to Oracle DB VPC • Database in private subnet • Database access only from application server or bastion host AWS Account Management • Multiple accounts may be created to isolate resources. Accounts may be isolated by: Environment (e.g., dev, test, prod), Major System, Line of business / function, Customer, Risk level
    • 16. Storage
    • 17. Storage : Building Blocks Attach to running instance and expose as a block device Snapshots stored durably in Amazon S3 Block storage volumes for use with Amazon EC2 instances • POSIX, file system • 1 TB • Attached to one EC2 instance, lives in one AZ • You select : files system, RAID, encryption • Standard IOPS (~100) and PIOPS (4,000) • RAID, LVM • Pay for what you Provision $0.10 per GB/month • Natively mirrored/replicated
    • 18. EBS PIOPS EBS : Oracle DB Architecture Oracle ASM
    • 19. EBS : Best Practices EBS • PIOPS (applies to I/O with a block size of 16KB) • Stripe using RAID 0, 10, LVM, or ASM • RAID 10 (can decrease performance) • Snapshot often : Single volume DB • 20 TB DB size (max) : Depends upon IOPS and instance type (1 Gbps or 10 Gbps) Tuning • Maintain an average queue length of 1 for every 200 provisioned IOPS in a minute • Pre-warm volumes : $ dd if=/dev/md0 of=/dev/null • fio, Oracle ORION • Oracle Advanced Compression File system • ext3/4, XFS (less mature) • Try different block sizes : start with 64K Striping • Stripe multiple volumes for more IOPS (e.g., (20) x 2,000 IOPS volumes in RAID 0 for 40,000 IOPS) • ASM with external redundancy • More difficult to snapshot : Use OSB Storage • Use instance storage for temporary storage or temporary database files
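The striping and queue-depth arithmetic on this slide is simple enough to sanity-check in a shell; the numbers below are the slide's own examples:

```shell
# Volumes needed to reach a target IOPS from fixed-size PIOPS volumes:
# e.g., 40,000 IOPS from 2,000-IOPS volumes striped in RAID 0
# (ceiling division, since you cannot provision a partial volume).
target_iops=40000
per_volume_iops=2000
volumes=$(( (target_iops + per_volume_iops - 1) / per_volume_iops ))
echo "stripe ${volumes} volumes in RAID 0"

# Queue-depth guideline: an average queue length of 1 per 200
# provisioned IOPS, e.g., for a single 4,000-PIOPS volume.
piops=4000
echo "target average queue length: $(( piops / 200 ))"
```

That is, the slide's 40,000-IOPS example needs 20 striped volumes, and a 4,000-PIOPS volume wants roughly 20 outstanding I/Os to reach its provisioned rate.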
    • 20. Installation and Configuration AMI
    • 21. Installation and Configuration : Building Blocks CloudFormation • Infrastructure as code, suitable for change management in version control • Define an entire application stack (in a JSON template file) • Define runtime parameters for a template (e.g., EC2 Instance Size, EC2 Key Pair, etc) AMI • Building blocks of EC2 Instances • Is a template of a computer’s root volume, create ‘gold images’ of EC2 • Public or private EC2 • Resizable compute capacity • Complete control of your compute resources • Reduces the time required to obtain and boot new server instances to minutes
    • 22. [Chart: instance types plotted by memory (GB, 1 to 256) against EC2 Compute Units (1 to 128); 1 ECU = 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. Plotted: CR1.8xlarge (88 ECU, 244 GB), M3.2xlarge (26 ECU, 30 GB), C1.xlarge (20 ECU, 7 GB), M2.4xlarge (26 ECU, 68 GB), HS1.8xlarge (35 ECU, 117 GB)] Instance Types
    • 23. EC2 Instance Families Standard High-CPU High-Memory Micro Cluster Compute Cluster GPU High I/O High Storage High Memory Cluster Most Apps, Low-cost, App Server / Web Server Databases, Databases Databases… Compute + Network Throughput Scale-out Compute, Batch Processing For Starters, Low throughput, Websites Parallel Processing OLAP, Hadoop, File Systems, Data Warehouses NoSQL, Best for Random IOPS In-memory Apps and DBs. Best $/RAM
    • 24. Installation and Configuration : EC2 sizing guide
Standard: database size 10 GB - 1 TB; I/O performance moderate (500-1,000 IOPS); recommended instance types m1.large, m1.xlarge, m3.xlarge, m3.2xlarge, m2.2xlarge, m2.4xlarge
Enterprise Class: database size 500 GB - 5 TB; I/O performance moderately high (2,500-10,000 IOPS); recommended instance types m1.xlarge, m3.xlarge, m3.2xlarge, m2.2xlarge, m2.4xlarge
Large Enterprise Class: database size 2 TB - 20 TB; I/O performance high (8,000-20,000 IOPS); recommended instance types m2.4xlarge, hi1.4xlarge, cc2.8xlarge, cr1.8xlarge
High Performance: database size 5 GB - 2 TB; I/O performance very high (up to 200,000 IOPS); recommended instance type hi1.4xlarge
    • 25. Installation and Configuration : Best Practices AMIs • Use Oracle provided • Build your own using Oracle Enterprise Linux Bootstrapping • User data/scripts • CloudFormation • Consider Chef, Puppet, OpsWorks EC2 • EBS optimized • SSD backed for high performance IO : hi1.4xlarge has 2 TB of SSD attached storage • SSD backed, high memory instance for cached database using Oracle Smart Flash Cache: cr1.8xlarge has 240 GB of SSD plus 244 GB of memory and 88 ECUs • Turn off (stop) when not using EBS • Install Oracle software binaries on a separate EBS volume https://s3.amazonaws.com/cloudformation- examples/BoostrappingApplicationsWithAWSCloudFormation.pdf
    • 26. Scaling and HA Availability Zone A Availability Zone B Availability Zone C US West (OR)
    • 27. Scaling and HA : Building Blocks US Regions Global Regions Availability Zone A Availability Zone B Availability Zone C EU (Ireland) Availability Zone A Availability Zone B South America (Sao Paulo) Availability Zone A Availability Zone B Asia Pacific (Sydney) Availability Zone A Availability Zone B GovCloud (OR) Availability Zone A Availability Zone B Availability Zone C Availability Zone D US East (VA) Availability Zone A Availability Zone B US West (CA) Availability Zone A Availability Zone B Asia Pacific (Singapore) Availability Zone A Availability Zone B Availability Zone C Asia Pacific (Tokyo) Availability Zone A Availability Zone B Availability Zone C US West (OR) Customer Decides Where Applications and Data Reside Note: Conceptual drawing only. The number of Availability Zones may vary.
    • 28. Scaling and HA : Best Practices
      Scaling
      • Vertical scaling with EC2: stop the instance and change the instance type
      • Horizontal scaling with read replicas and multi-AZ: configure using Oracle Active Data Guard, Oracle GoldenGate, or third-party technology
      • Amazon CloudWatch
      • Route 53: latency-based routing sends traffic to the region closest to the user; requires replicated, sharded, or geo-dispersed databases
      HA
      • Elastic IPs and Elastic Network Interfaces (ENIs)
      • Active-passive multi-AZ using Oracle Data Guard
      • Active-active multi-AZ using Oracle GoldenGate
      • Route 53: now supports health checks for multi-region HA
      • ELB: web and application server multi-AZ HA; health checks (HTML file) to see whether the Oracle DB is up and running; associate the ENI / Elastic IP with the new Oracle DB
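The vertical-scaling bullet (stop the instance, change the instance type, start it again) can be sketched as an AWS CLI sequence. This uses modern `aws ec2` syntax rather than the EC2 API tools of the webinar era; the instance ID and target type are placeholders, and the function only prints the plan, since executing it for real requires AWS credentials and an EBS-backed instance.

```shell
#!/bin/bash
# Sketch of vertical scaling on EC2: stop, change instance type, start.
# Instance ID and target type are placeholders; the plan is printed for
# review rather than executed.
INSTANCE_ID=${INSTANCE_ID:-i-0123456789abcdef0}
NEW_TYPE=${NEW_TYPE:-m2.4xlarge}

scale_plan() {
  cat <<EOF
aws ec2 stop-instances --instance-ids $INSTANCE_ID
aws ec2 wait instance-stopped --instance-ids $INSTANCE_ID
aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --instance-type Value=$NEW_TYPE
aws ec2 start-instances --instance-ids $INSTANCE_ID
EOF
}
scale_plan
```

Note that the instance must be fully stopped before `modify-instance-attribute` will accept a new type, which is why the `wait instance-stopped` step sits between stop and modify.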
    • 29. Management and Monitoring
    • 30. Management : CloudWatch and 3rd-party tools
      • Visibility into resource utilization, operational performance, and overall demand patterns
      • Accessible via the AWS Management Console, web service APIs, or command-line tools
      • Metrics such as CPU utilization, disk reads and writes, and network traffic
      • Custom metrics of your own: memory, Oracle-specific metrics
      • Basic monitoring metrics every 5 minutes; detailed metrics every minute
      • 3rd-party tools: Nagios, Zabbix
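The "custom metrics of your own" bullet might look like the following. The namespace, metric name, and value are illustrative assumptions, and the CLI call is composed and printed rather than executed, since a real `put-metric-data` call needs AWS credentials.

```shell
#!/bin/bash
# Sketch: pushing a custom, Oracle-specific metric to CloudWatch.
# Namespace, metric name, and value are illustrative; the command is
# printed so it can be reviewed before use.
NAMESPACE=${NAMESPACE:-Custom/Oracle}
METRIC=${METRIC:-BufferCacheHitRatio}
VALUE=${VALUE:-97.3}

metric_cmd() {
  echo "aws cloudwatch put-metric-data" \
       "--namespace $NAMESPACE --metric-name $METRIC" \
       "--value $VALUE --unit Percent"
}
metric_cmd
```

In practice a cron job or agent on the database host would compute the value (e.g. by querying V$ views) and invoke the printed command each interval.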
    • 31. Management : OEM 12c Plug-in
      • The on-premises Oracle Enterprise Manager (EM12c) acts as a single tool for a comprehensive view of your public AWS resources as well as your private cloud resources.
      • Monitor EBS, EC2, and RDS instances on Amazon Web Services:
      • Gather performance metrics and configuration details for AWS instances
      • Raise alerts and violations based on monitoring thresholds
      • Generate reports based on the gathered data
      • Leverage Enterprise Manager features such as system promotion, incident generation based on thresholds, and integration with 3rd-party ticketing applications
      • AWS monitoring via this plug-in uses the Amazon CloudWatch API; users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API.
      https://blogs.oracle.com/zerotocloud/entry/amazon_web_services_aws_plug
    • 32. Management : Hosting EM12c on AWS [diagram: a VPC spanning two Availability Zones, each with a public subnet (NAT and VPC-to-VPC tunnel instances) and a private subnet hosting OMS / EM12c and OMR; an Elastic Load Balancer; a VPN gateway with a VPN tunnel to the corporate network; Amazon SES; custom scripts and a staging area; database and OMS backups; and a common S3 repository for EM AMIs]
    • 33. Archiving [diagram: an on-premises archive application sends and receives data to Amazon Glacier, over AWS Direct Connect (step 1) or via AWS Import/Export of tapes (step 2); Amazon IAM provides central access control over the data, and Amazon SNS sends notifications back to the application (step 3)]
    • 34. Backup and Recovery via RMAN and the OSB Cloud Module [diagram: Oracle RMAN in the corporate data center backs up through the Oracle Secure Backup Module to S3 in an AWS region]
      http://www.oracle.com/technetwork/products/secure-backup/documentation/securebackup-094467.html
      http://www.oracle.com/technetwork/database/features/availability/twp-oracledbcloudbackup-130129.pdf
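The RMAN-to-S3 flow above might be scripted as follows. This is a sketch that only writes an RMAN command file for review: the SBT library path, the `OSB_WS_PFILE` location, and the output path are assumptions for a typical Linux install, not values from the slides or the linked papers.

```shell
#!/bin/bash
# Sketch: generate an RMAN command file that backs up to S3 through the
# Oracle Secure Backup (OSB) Cloud Module. Library and parameter-file
# paths are assumptions; review and adjust before use.
RMAN_SCRIPT=${RMAN_SCRIPT:-/tmp/osb_backup.rman}

cat > "$RMAN_SCRIPT" <<'EOF'
RUN {
  ALLOCATE CHANNEL sbt1 DEVICE TYPE sbt
    PARMS 'SBT_LIBRARY=/u01/app/oracle/lib/libosbws11.so,
           ENV=(OSB_WS_PFILE=/u01/app/oracle/dbs/osbws.ora)';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL sbt1;
}
EOF
echo "wrote $RMAN_SCRIPT"
# To run against a configured database:
#   rman target / cmdfile "$RMAN_SCRIPT"
```

The SBT channel is what routes the backup pieces through the OSB Cloud Module to an S3 bucket instead of to tape, which is the substitution the slide's diagram depicts.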
    • 35. Development
    • 36. Development : Oracle, AWS, and third-party tools
      • JDeveloper and SQL Developer (SQL*Plus) work just as they do today
      • Third-party tools such as Toad for the database
      • Third-party tools such as Chef for the application and database
      • AWS OpsWorks
    • 37. Deployment : Connecting to my database
      cd c:\ProgramData\Oracle\Software\instantclient_11_2
      sqlplus hr/edbaAWSlab@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ec2-23-20-225-111.compute-1.amazonaws.com)(PORT=1521)))(CONNECT_DATA=(SID=PROD)))
    • 38. End to End Architecture
    • 39. Architecture : End to End
    • 40. Architecture : Pilot Light Disaster Recovery [diagram: application servers behind an Elastic Load Balancer across two Availability Zones; the database ships redo via Data Guard to a standby in a private VPC subnet in a second region; the OSB Cloud Module backs up to an Amazon S3 bucket in each region; connectivity via customer gateway, VPN gateway, and internet gateway]
    • 41. Architecture : Direct Connect with Oracle RAC
    • 42. RDS
    • 43. Amazon RDS for Oracle
      Amazon RDS for Oracle is a fully managed Oracle database service.
      • Simple to deploy
      • Easy to operate and scale
      • Reliable
      • Secure
      • Cost effective
    • 44. Amazon RDS Drives Developer and IT Productivity
      • Focus on the app ("innovation"): schema design, query construction, query optimization
      • Offload the "muck" to RDS: frequent server upgrades, storage upgrades, backup and recovery, software upgrades, patching, hardware management, configuration management, migration
    • 45. High Performance, Availability & Security Features
      • High availability with Multi-AZ
      • Push-button scaling of compute and storage
      • 3 TB database size and 30K PIOPS per database instance
      • VPC and Transparent Data Encryption (using Oracle ASO)
      • Cross-region snapshots for disaster recovery
      [diagram: push-button scaling; Provisioned IOPS; Transparent Data Encryption with a master encryption key in an Oracle Wallet and encrypt/decrypt in the Oracle Database; cross-region snapshots; Multi-AZ across two Availability Zones]
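As a sketch of the push-button provisioning these features imply, the following composes an AWS CLI call (modern `aws rds` syntax, not the 2013-era tools) for a Multi-AZ Oracle instance with Provisioned IOPS. The identifier, instance class, storage figures, and credentials are placeholders, and the command is printed rather than executed.

```shell
#!/bin/bash
# Sketch: provision a Multi-AZ Oracle RDS instance with Provisioned IOPS.
# All names and credentials are placeholders; the command is printed for
# review (a real call needs AWS credentials and a proper password).
rds_create_cmd() {
  cat <<EOF
aws rds create-db-instance \
  --db-instance-identifier oracle-prod \
  --engine oracle-ee \
  --license-model bring-your-own-license \
  --db-instance-class db.m2.4xlarge \
  --allocated-storage 1000 \
  --iops 10000 \
  --multi-az \
  --master-username admin \
  --master-user-password CHANGE_ME
EOF
}
rds_create_cmd
```

The `--multi-az` and `--iops` flags correspond directly to the "High Availability with Multi-AZ" and "Provisioned IOPS" bullets above; `--license-model` selects between the License Included and Bring-Your-Own-License options covered on the next slide.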
    • 46. Amazon RDS Oracle – Licensing & Pricing Options
      Licensing options (contract or pay per hour; Multi-AZ available with either):
      • License Included: Standard Edition One, Standard Edition
      • Bring-Your-Own-License: Standard Edition One, Standard Edition, Enterprise Edition
      Pricing options:
      • On-Demand: no contract
      • Reserved Instances: 1-year or 3-year options
    • 47. Next Steps : Resources to review
      • Amazon Web Services: aws.amazon.com
      • Amazon Relational Database Service: aws.amazon.com/rds
      • Running Oracle on AWS: aws.amazon.com/oracle
      • Oracle Test Drive Labs: http://awstestdrive.com
      • Oracle Partner Accreditation: http://aws.amazon.com/partners/overview/partner-training/
      • Oracle FAQ: http://www.oracle.com/technetwork/topics/cloud/faq-098970.html
      • Oracle Secure Backup Cloud Module product page: http://www.oracle.com/us/products/database/secure-backup-066578.html
      • Oracle AWS cloud licensing document: oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
      • Oracle on AWS videos:
        • Migration case study: http://www.youtube.com/watch?v=t2UcCdnNsRc&feature=youtu.be
        • OEM12c as a hosted service: http://youtu.be/XSBND55sghc
        • Oracle on AWS introduction: http://mfile.akamai.com/23543/wmv/citrixvar.download.akamai.com/23543/www/729/667/988620541967729667/2-988620541967729667-13e6bb61c33.asx
    • 48. Call To Action
      Attend AWS re:Invent Oracle-related sessions:
      – DAT202 - Using Amazon RDS to Power Enterprise Applications
      – DAT401 - Advanced Data Migration Techniques for Amazon RDS
      – STG301 - AWS Storage Tiers for Enterprise Workloads - Best Practices
      – STG305 - Disaster Recovery Site on AWS - Minimal Cost Maximum Efficiency
      – STG303 - Running Microsoft and Oracle Stacks on Elastic Block Store
      – ENT303 - Migrating Enterprise Applications to AWS - Best Practices, Tools and Techniques
      Register here: https://portal.reinvent.awsevents.com/portal/newreg.ww?trk=PS_reinvent2013_Brand_AWS_Sessions_BMM