On x86 systems, using the Unbreakable Enterprise Kernel (UEK) is recommended over stock enterprise-distribution kernels because it provides better hardware support, faster security patches, and testing by the larger Linux community. Key configuration recommendations include enabling maximum CPU performance in the BIOS, using memory types validated by Oracle, ensuring proper NUMA and CPU frequency settings, and installing only Oracle-validated packages to avoid issues. Monitoring tools like top, iostat, sar, and ksar help identify CPU, memory, disk, or I/O bottlenecks.
Presentation from 2016 Austin OpenStack Summit.
The Ceph upstream community is declaring CephFS stable for the first time in the recent Jewel release, but that declaration comes with caveats: while we have filesystem repair tools and a horizontally scalable POSIX filesystem, we have default-disabled exciting features like horizontally-scalable metadata servers and snapshots. This talk will present exactly what features you can expect to see, what's blocking the inclusion of other features, and what you as a user can expect and can contribute by deploying or testing CephFS.
This presentation provides an overview of the Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of using Red Hat Ceph Storage on Dell servers with their proven hardware components that provide high scalability, enhanced ROI cost benefits, and support of unstructured data.
My experience with embedding PostgreSQL - Jignesh Shah
At my current company, we embed PostgreSQL-based technologies in various applications shipped as shrink-wrapped software. In this session we talk about the experience of embedding PostgreSQL where it is not directly exposed to the end user, the issues we encountered, and how they were resolved.
We will talk about the business reasons, the technical architecture of deployments, upgrades, and security processes for working with embedded PostgreSQL databases.
Red Hat Enterprise Linux OpenStack Platform on Inktank Ceph Enterprise - Red_Hat_Storage
This session describes how to get the most out of OpenStack Cinder volumes on Ceph.
We’ll discuss:
Performance configuration, tuning, and workloads.
Performance test results of Red Hat Enterprise Linux OpenStack Platform 5, Red Hat Enterprise Linux OpenStack Platform 6, Red Hat Ceph Storage 1.2.3, and Firefly.
Anticipated improvements in performance for Red Hat Ceph Storage 1.3.
Revisiting CephFS MDS and mClock QoS Scheduler - Yongseok Oh
This presents CephFS performance scalability and evaluation results. Specifically, it addresses technical issues such as multi-core scalability, cache size, static pinning, recovery, and QoS.
VMworld 2013: VMware Virtual SAN Technical Best Practices - VMworld
VMworld 2013
Cormac Hogan, VMware
Kiran Madnani, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Talk from Embedded Linux Conference, http://elcabs2015.sched.org/event/551ba3cdefe2d37c478810ef47d4ca4c?iframe=no&w=i:0;&sidebar=yes&bg=no#.VRUCknSQQQs
IRQs: the Hard, the Soft, the Threaded and the Preemptible - Alison Chaiken
The Linux kernel supports a diverse set of interrupt handlers that partition work into immediate and deferred tasks. The talk introduces the major varieties and explains how IRQs differ in the real-time kernel.
Comparing file system performance: Red Hat Enterprise Linux 6 vs. Microsoft W... - Principled Technologies
Understanding how your choice of operating system affects file system I/O performance can be extremely valuable as you plan your infrastructure. Using the IOzone Filesystem Benchmark in our tests, we found I/O performance of file systems on Red Hat Enterprise Linux 6 was better than the file systems available on Microsoft Windows Server 2012, with both out-of-the-box and optimized configurations. Using default native file systems, ext4 and NTFS, we found that Red Hat Enterprise Linux 6 outperformed Windows Server 2012 by as much as 65.2 percent out-of-the-box, and as much as 33.4 percent using optimized configurations. Using more advanced native file systems, XFS and ReFS, we found that Red Hat Enterprise Linux 6 outperformed Windows Server 2012 by as much as 31.9 percent out-of-the-box, and as much as 48.4 percent using optimized configurations.
Many applications are ultimately constrained by the I/O subsystems on which they reside, making it crucial to choose the best combination of file system and operating system to achieve peak I/O performance. As our testing demonstrates, with the file system performance that Red Hat Enterprise Linux 6 can deliver, you are less likely to see I/O bottlenecks and can potentially accelerate I/O performance in your datacenter.
There are many systems that handle heavy UDP transactions, such as DNS and RADIUS servers. Nowadays 10G Ethernet NICs are widely deployed, and even 40G and 100G NICs are out there, which makes it difficult for a single server to achieve enough performance to saturate the link with short-packet transactions. Since Linux is by default not tuned for dedicated UDP servers, we are investigating ways to boost UDP transaction performance.
This talk will show how we analyzed the bottlenecks and share the tips we found to improve performance. We also discuss the challenges of improving it even further.
This presentation was given at LinuxCon Japan 2016 by Toshiaki Makita
Administering a Hadoop cluster isn't easy. Many Hadoop clusters suffer from Linux configuration problems that can negatively impact performance. With vast and sometimes confusing config/tuning options, it can be tempting (and scary) for a cluster administrator to make changes to Hadoop when cluster performance isn't as expected. Learn how to improve Hadoop cluster performance and eliminate common problem areas, applicable across use cases, using a handful of simple Linux configuration changes.
Information technology has led us into an era where the production, sharing, and use of information are part of everyday life, and we are often almost unaware actors in it: it is now nearly impossible not to leave a digital trail of many of the actions we perform every day, for example through digital content such as photos, videos, and blog posts, and everything that revolves around social networks (Facebook and Twitter in particular). On top of this, with the "Internet of Things" we see a growing number of devices such as watches, bracelets, thermostats, and many other items that can connect to the network and therefore generate large data streams. This explosion of data justifies the birth of the term Big Data: data produced in large quantities, at remarkable speed, and in varied formats, which requires processing technologies and resources that go far beyond conventional data management and storage systems.
It is immediately clear that 1) data storage models based on the relational model and 2) processing systems based on stored procedures and computation on grids are not applicable in these contexts. Regarding point 1, RDBMSs, widely used for a great variety of applications, run into problems when the amount of data grows beyond certain limits. Scalability and implementation cost are only part of the disadvantages: very often, when dealing with big data, the variability of the data, that is, the lack of a fixed structure, is also a significant problem. This has given a boost to the development of NoSQL databases. The NoSQL Databases website defines them as "Next Generation Databases mostly addressing some of the points: being non-relational, distributed, open source and horizontally scalable." These databases are distributed, open source, horizontally scalable, schema-free (key-value, column-oriented, document-based, and graph-based), easily replicable, free of ACID guarantees, and able to handle large amounts of data.
These databases are integrated with processing tools based on the MapReduce paradigm proposed by Google in 2004. MapReduce, together with the open-source Hadoop framework, represents the new model for distributed processing of large amounts of data, supplanting techniques based on stored procedures and computational grids (point 2). The relational model, as taught in basic database design courses, has many limitations compared to the demands posed by new applications, which rely on NoSQL databases to store data and on MapReduce to process large amounts of it.
Course Website http://pbdmng.datatoknowledge.it/
Contact me to download the slides
Improving Hadoop Cluster Performance via Linux Configuration - Alex Moundalexis
Linux is usually at the leading edge of implementing new storage standards, and NVMe over Fabrics is no different in this regard. This presentation gives an overview of the Linux NVMe over Fabrics implementation on the host and target sides, highlighting how it influenced the design of the protocol through early prototyping feedback. It also covers the lessons learned while developing NVMe over Fabrics and how they helped reshape parts of the Linux kernel to better support NVMe over Fabrics and other storage protocols.
This presentation was delivered at LinuxCon Japan 2016 by Christoph Hellwig
SR-IOV ixgbe Driver Limitations and Improvement - LF Events
SR-IOV is a device virtualization technology mainly used to improve the network performance of virtual machines. However, SR-IOV has some limitations stemming from the hardware and/or the driver implementation. For certain use cases, such as Network Function Virtualization (NFV), those limitations are critical to providing services. Intel's 10Gb NIC, Niantic (82599), has such limitations (e.g., VLAN filtering, multicast promiscuous mode) for the NFV use case.
This presentation will show the limitations and issues and how they are being addressed, then explain how VF multicast promiscuous mode support was implemented in the ixgbe driver, along with the VF trust and iproute2 functionality enhancements.
This presentation was delivered at LinuxCon Japan 2016 by Hiroshi Shimamoto
Presentation on WebLogic basics and how to run WebLogic 12.2.1 in a Docker container, with a live demo included. The talk was given at DOAG 2015 in Nürnberg, Germany.
Container Storage Best Practices in 2017 - Keith Resar
Docker Storage Drivers are a rapidly moving target. Considering the addition of new graphdrivers and continued maturing of the existing set, we evaluate how each works, performance implications from their implementation architecture, and ideal use cases for each.
Best Practices & Performance Tuning - OpenStack Cloud Storage with Ceph. In this presentation, we discuss best practices and performance tuning for OpenStack cloud storage with Ceph to achieve high availability, durability, reliability, and scalability at any point in time. We also discuss best practices for failure domains, recovery, rebalancing, backfilling, scrubbing, deep-scrubbing, and operations.
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B... - Odinot Stanislas
An excellent document that describes step by step how to install, monitor, and above all correctly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key learning: how to measure real I/O activity under a real workload? How many read/write IOPS, what block size and throughput, and finally, what is the impact on SSD endurance and real-world lifetime? A must read, and a huge thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
OSDC 2016 - Tuning Linux for your Database by Colin Charles - NETWAYS
Many operations folks know that performance varies depending on which of the many Linux filesystems, like ext4 or XFS, you use. They also know of the available I/O schedulers, they see the OOM killer coming, and more. However, appropriate configuration is necessary when you're running your databases at scale.
Learn best practices for Linux performance tuning for MariaDB/MySQL (where MyISAM uses the operating system cache, and InnoDB maintains its own aggressive buffer pool), as well as PostgreSQL and MongoDB (more dependent on the operating system). Topics that will be covered include: filesystems, swap and memory management, I/O scheduler settings, using and understanding the tools available (like iostat/vmstat/etc), practical kernel configuration, profiling your database, and using RAID and LVM.
There is a focus on bare metal as well as on configuring your cloud instances.
Learn from practical examples from the trenches.
2. Who Am I
• Senior Consultant at Brillix
• Unix/Linux system admin since 1997
• Oracle and MySQL DBA since 2000
• Linux and security consultant
• Blogger at ildba.co.il
4. ◦ Use the Unbreakable Enterprise Kernel (UEK) rather than the Red Hat or SUSE kernels. Why?
Better optimized for large systems and workloads
Better hardware support
Modern Linux features
Patches required for correct Oracle product operation
5. • Bug Fixes from upstream
• Most Bug Fixes originate Upstream and are backported to Enterprise Distributions
• More code change from upstream means more time before patches are backported -- if it’s even possible to do so
• More time for security patches to be backported to the Enterprise versions
• Bug Fixes in Upstream apply cleanly to UEK
• Better testing
• Code is tested by the whole Linux community, not dependent on one OS vendor and their customers
• You run the same code that’s in upstream
• No backporting/scaffolding needed to use the latest Linux kernel features (e.g. NFSv4, TCP fast open, etc.)
• Better contributions
• Largest number of developer and company contributions
• Major Backports not required to provide cutting edge features
• New features seamlessly used by Oracle products
6. • Updates contain critical security and bug fixes.
• Most Enterprise Distribution updates contain new features.
• “Security Errata” can contain new features
• Updates often contain 1000s of lines of new code from upstream.
• Includes features, bug fixes, enhancements and other tweaks
• Install only security and bug fixes to avoid downtime from new and untested features
• The DB server is not your laptop: do not install unknown or new software.
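As a hedged sketch, on Red Hat or Oracle Linux the yum security plugin can restrict updates to security errata only (this assumes the yum-plugin-security package is available in your channels):
yum install yum-plugin-security
yum --security check-update      # list pending security errata only
yum update-minimal --security    # apply the smallest set of packages fixing them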
7. • Use Oracle Validated Configurations
• http://www.oracle.com/technetwork/server-storage/linux/validated-configurations-085828.html
• Install only the base packages plus the Oracle Validated packages
• Do not install games, applications, etc. on production servers
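As a hedged example, on Oracle Linux the validated setup is packaged: oracle-validated on OL5, with a preinstall RPM as the OL6 equivalent (assumes the Oracle public yum repository is configured):
yum install oracle-validated                        # Oracle Linux 5
yum install oracle-rdbms-server-11gR2-preinstall    # Oracle Linux 6 equivalent
# Both set kernel parameters and create the oracle user and groups.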
9. • BIOS
• CPU
• Memory: type, swap
• Disk (the more disks, the better)
• Virtualization
10. Motherboard
Handle 0x0003, DMI type 2, 16 bytes
Base Board Information
Manufacturer: Intel
Product Name: S5000PAL0
Processor
Processor Information
Version: Intel(R) Xeon(R) CPU X5355
Memory
Handle 0x0034, DMI type 17, 27 bytes
Memory Device
Data Width: 64 bits
Size: 2048 MB
Form Factor: DIMM
Set: 1
Locator: ONBOARD DIMM_A1
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz (1.5 ns)
• Motherboard
4 memory channels (S5000PAL0), 8 DIMM slots
(A1/A2/B1/B2/D1/D2)
• CPU
Intel Clovertown CPUs
1333MHz (dual independent FSBs)
Bandwidth of 10666 MB/s per FSB
21 GB/s maximum FSB bandwidth
• Memory
DDR2 667 = PC2-5300
4 memory channels at 5.3GB/s each
Memory bandwidth of 21 GB/s from all 4 channels
16GB memory in total
http://ark.intel.com/
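The hardware details above come from dmidecode; a minimal sketch to collect the same information (run as root):
dmidecode -t baseboard    # motherboard manufacturer and product name
dmidecode -t processor    # CPU model and speeds
dmidecode -t memory       # DIMM type, size, speed, and slot population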
11. • Always check for appropriate BIOS settings. Look out for:
• CPU features
• Enable maximum performance in the BIOS
• Memory
• Enable NUMA
• Power management
12. • HT will give you 35% better performance (tested on OLTP).
• SMT: Simultaneous Multi-Threading
• Runs 2 threads at the same time per core
• Do I have HT?
• Ensure that HT is enabled in the BIOS.
• grep -e "model name" /proc/cpuinfo
• http://ark.intel.com/
• Do not enable HT on an I/O-bound server; it will only make things worse.
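A minimal sketch for the HT check: compare hardware threads ("siblings") with physical cores ("cpu cores") in /proc/cpuinfo; more threads than cores means HT is active:
siblings=$(grep -m1 "^siblings" /proc/cpuinfo | awk '{print $3}')
cores=$(grep -m1 "^cpu cores" /proc/cpuinfo | awk '{print $4}')
if [ "$siblings" -gt "$cores" ]; then
  echo "HT enabled: $cores cores, $siblings threads per socket"
else
  echo "HT disabled or not supported"
fi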
13. cpufreq
You can dynamically scale processor frequencies through the CPUfreq subsystem.
◦ Enable maximum performance in the BIOS
◦ /sys/devices/system/cpu/cpu<n>/cpufreq/scaling_governor
◦ On Red Hat 5.x the default governor is performance
◦ On Red Hat 6.x the default governor is ondemand
◦ echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
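A minimal sketch that forces the performance governor on every CPU (assumes the cpufreq sysfs interface is available):
for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
  echo performance > "$g"    # stop dynamic frequency scaling on this CPU
done
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor    # verify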
15. • A load of more than n on an n-CPU server is bad
• A load of n on an n-CPU server is good
• On a single-CPU system: a load of 1.00 means fully utilized, 0.50 means half utilized, and 1.70 means overloaded, with 0.70 of runnable work waiting
16. • Use the load average to find out whether the server is CPU bound
• It is reported as one-, five-, and fifteen-minute averages
• On Linux, being CPU bound can also impact I/O
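A minimal sketch comparing the 1-minute load average against the CPU count:
cpus=$(grep -c ^processor /proc/cpuinfo)
load=$(awk '{print $1}' /proc/loadavg)    # 1-minute average
echo "load $load on $cpus CPUs"
awk -v l="$load" -v c="$cpus" 'BEGIN { if (l+0 > c+0) print "server may be CPU bound" }'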
17. cat /proc/meminfo
◦ Effective free memory = Cached + Free
◦ All free space on Linux is used for the page cache.
◦ This behavior can be controlled by cgroups.
◦ If PageTables is large, use HugePages.
/dev/shm
◦ An implementation of traditional shared memory (tmpfs)
◦ Used by Automatic Memory Management (AMM, MEMORY_TARGET)
◦ Does not work with HugePages; see Oracle Doc ID 1134002.1
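A minimal sketch that counts the page cache as reclaimable when judging free memory:
awk '/^MemFree:/ {f=$2} /^Cached:/ {c=$2} END { printf "effectively free: %d MB\n", (f+c)/1024 }' /proc/meminfo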
18. • Oracle will recognize NUMA systems and adjust memory and scheduling operations accordingly; NUMA allows faster access to memory distributed across a multi-processor server. See Oracle Doc ID 759565.1.
• !! Disabling or enabling NUMA will change application performance. !!
• Systems with 8 sockets and beyond may see gains of approximately 5%
• Enable NUMA at the BIOS level and remove numa=off from grub.conf
• _enable_NUMA_optimization=TRUE (check for known bugs in your version before enabling)
• dmesg | grep -i numa
NUMA: Initialized distance table, cnt=2
NUMA: Node 0 [0,c0000000) + [100000000,1040000000) -> [0,1040000000)
pci_bus 0000:00: on NUMA node 0 (pxm 0)
pci_bus 0000:80: on NUMA node 1 (pxm 1)
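The NUMA layout can also be verified from user space; a minimal sketch, assuming the numactl package is installed:
numactl --hardware    # nodes, CPUs per node, per-node free memory
numastat              # numa_hit / numa_miss allocation counters per node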
19. • Without HugePages, memory is divided into 4K pages
• With HugePages, the page size is increased to 2MB (configurable up to 1GB)
• HugePages reduce the total number of pages the kernel has to manage
• This reduces the amount of memory required to hold the page table in memory.
20. • Use HugePages; see Oracle Doc ID 749851.1
• Reduces the footprint of individual Oracle database connections.
• Increases performance and scalability through fewer TLB misses.
• Requires manual tuning after SGA changes, and does not work with AMM (/dev/shm).
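A minimal sizing sketch, assuming the default 2MB huge pages; SGA_MB below is a hypothetical SGA size, substitute your own:
SGA_MB=13210                                                 # ~12.9GB SGA
HPG_KB=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # usually 2048 kB
echo "vm.nr_hugepages = $(( SGA_MB * 1024 / HPG_KB + 1 ))"
# Put the printed value in /etc/sysctl.conf, then restart the instance.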
21. Without HugePages
o 200 connections to a 12.9GB SGA
o Before DB startup: PageTables: 7400 kB
o After DB startup: PageTables: 652900 kB
o After 200 PQ slaves run a query: PageTables: 6189248 kB
o Time to complete: 00:10:23.60
With HugePages
o 200 connections to a 12.9GB SGA
o Before DB startup: PageTables: 7748 kB
o After DB startup: PageTables: 21288 kB
o After 200 PQ slaves run a query: PageTables: 80564 kB
o Time to complete: 00:00:18.77
22. Use HugePages with VMs for non-swappable, shared page tables.
HugePages must be allocated in both the guest VM and the hypervisor.
Oracle VM 3.2.6 contains support for PV HugePages.
23. What about swap?
◦ Modern Linux distributions barely use swap (swappiness is set very low)
◦ Swap is for OS services only; I do not recommend swap = RAM.
◦ Check vmstat output: ensure the swap columns (si/so) stay at zero
◦ Do not use swap as memory, buy more memory
◦ If you have free memory:
echo 10 > /proc/sys/vm/swappiness
vm.swappiness in /etc/sysctl.conf
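A minimal sketch to confirm the server is not actively swapping and to make the lower swappiness persistent:
vmstat 5 5                                      # si/so columns should stay at 0 under load
echo "vm.swappiness = 10" >> /etc/sysctl.conf   # persist across reboots
sysctl -p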
24. • Disks: the more, the better
• Do not mix disk types.
• Use RAID
• Use hardware RAID
• RAID 1+0 is best for write performance (logs).
• RAID 5 is best for read performance.

RAID Level            Total array capacity  Fault tolerance  Read speed (4k)  Write speed (4k)
RAID-1+0 (500GB x 4)  1000 GB               1 disk           2X               2X
RAID-5 (500GB x 3)    1000 GB               1 disk           3X               depends on controller
25. • High "log file sync" event times.
• Do not use RAID 5 for redo logs (low write performance).
• Upgrading the CPU enabled more redo throughput (LGWR also requires CPU)
• Reducing the overall number of commits by batching transactions can have a very beneficial effect.
• See if any of the processing can use the COMMIT NOWAIT option.
• See if any activity can safely be done with the NOLOGGING / UNRECOVERABLE options.
• Enlarge the redo logs so that they switch every 15 to 20 minutes.
• See Oracle Doc ID 34592.1
26. On Linux, use ASM (block/raw devices, O_DIRECT)
Raw devices are deprecated by the OUI as of Oracle 11.2; see Doc ID 357492.1
Raw devices may still bring benefits for intensive redo and large redo log files
Use udev or ASMLib to control device naming, ownership, and permissions
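A hedged udev sketch that gives an ASM candidate partition a stable name and oracle:dba ownership; the ID_SERIAL value and device names are hypothetical, check yours with scsi_id first:
# /etc/udev/rules.d/99-oracle-asm.rules (RHEL/OL 6 style)
KERNEL=="sd?1", ENV{ID_SERIAL}=="36000c29f...", SYMLINK+="oracleasm/disk1", OWNER="oracle", GROUP="dba", MODE="0660"
# Find the serial with: scsi_id --whitelisted --device=/dev/sdb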
27. If using a file system, bypass journaling when you create it: use ext2, or ext4 with journaling turned off.
Turning journaling off eliminates double writes.
The "noatime" mount option eliminates the writes the system would otherwise issue when objects are only being read.
To create a partition with DOS compatibility disabled:
fdisk -c -u /dev/sda
To turn off journaling, execute:
tune4fs -O ^has_journal /dev/sda1
mount -t ext4 -o noatime /dev/sda1 /oradata
28. iostat data recorded during the ASM tests (SSD/raw vs. HDD/raw, 50GB, over a 5-minute period):

SSD:
Device  w/s       wMB/s   avgrq-sz  avgqu-sz  await  svctm  %util
sdb1    21357.33  167.86  16.10     1.51      0.07   0.02   44.53

HDD:
Device  w/s      wMB/s   avgrq-sz  avgqu-sz  await  svctm  %util
sdd1    3343.00  130.68  80.06     3.25      0.97   0.25   83.97

The redo on the 8 x SSD drives writes 1.28X more data per second and does 6.4X the writes per second, although avgrq-sz shows that the HDD configuration writes more data per operation. However, await, svctm, and %util show that the HDD configuration is busier and responding more slowly.
29. • Top 5 Timed Events (AWR) looked as follows:

SSD:
Event          Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                    19,832                  78.42
log file sync  6,700,242  4,059    1              16.05      Commit

HDD:
Event          Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                    14,255                  52.53
log file sync  5,366,376  12,709   2              46.83      Commit
30. • DB Smart Flash Cache is a new (11.2) extension of the buffer cache.
• It extends the SGA as an L2 cache; see Oracle Doc ID 1317950.1
db_flash_cache_file = <+FLASH/filename>
db_flash_cache_size = <flash pool size>
alter [table|index object_name] storage (flash_cache keep);
32. • Look for high I/O wait (%wa in top, await in iostat)
• Look at %util for disk saturation.
• In the AWR report, check whether most of the DB Time is I/O.
33. Virtualization performance is proportional to native performance.
VM drivers have roughly a 16% overhead versus native drivers.
34. • top
• iostat -Nx 1 100
• sar
• ksar
• Oracle Orion Calibration Tool
http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#PFGRF95244
35. • From Red Hat 6.x (6.2 is best) and UEK 3
• cgroups: control groups for Linux containers
• Provide fine-grained control over system resources
• Can be used to throttle page cache use by backup processes, often the reason why systems are slower after overnight backups
37. Cgroups: how to use
yum install libcgroup
/etc/init.d/cgconfig start
/etc/cgconfig.conf:
mount {
cpu = /cgroup/cpu;
memory = /cgroup/memory;
}
group http {
memory {
memory.limit_in_bytes = 10M;
}
}
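A minimal usage sketch with the http group defined above (the backup script path is hypothetical):
cgexec -g memory:http /usr/local/bin/backup.sh    # start a process inside the group
echo 1234 > /cgroup/memory/http/tasks             # or attach an already-running PID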
I will talk about two separate parts, software and distributions on one side and hardware on the other; they meet, of course, but in my presentation they form two separate parts.
Many people use CentOS or Fedora. The packages there were never validated and never went through QA, and it may well be that whoever compiled them did a poor job, and you will find out at three in the morning on Friday, the first of the month.
The DB server is not your laptop: do not install unknown or new software.
Use the Oracle Validated Configurations site; install only the base packages plus the Oracle Validated packages.
How much memory does the server have? What is /dev/shm?
Use block devices. Raw devices are also possible, and raw has advantages when there are large writes; raw is supported but deprecated.
We can load the SSDs more heavily. The main problem with SSDs is rewriting: erasing takes a long time.