This document discusses file size limits in Oracle databases. The 2GB file size limit arises because many systems use 32-bit integers to represent file sizes, which can only address up to 2GB of data. Using files larger than 2GB requires operating system and Oracle database versions that support 64-bit file APIs. The document outlines issues with tools like Export and SQL*Loader not fully supporting large files and provides platform-specific guidance.
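The arithmetic behind the 2GB limit is worth making concrete: a signed 32-bit integer tops out at 2^31 - 1 bytes, just under 2 GiB, so any file API that stores offsets in such an integer cannot address a larger file. A small illustrative Python sketch (not Oracle-specific):

```python
# A signed 32-bit integer can represent offsets up to 2**31 - 1.
# Any file API that stores a file offset in such an integer therefore
# cannot address bytes beyond ~2 GiB.
MAX_SIGNED_32 = 2**31 - 1          # 2147483647 bytes
GIB = 2**30

print(MAX_SIGNED_32)               # 2147483647
print(MAX_SIGNED_32 / GIB)         # just under 2.0 GiB

# A 64-bit offset removes the practical limit:
MAX_SIGNED_64 = 2**63 - 1
print(MAX_SIGNED_64 / 2**60)       # ~8 EiB, far beyond any datafile
```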
Mastering InnoDB Diagnostics (Harrison Fisk)
This document provides an overview of techniques for diagnosing and troubleshooting performance and other issues with the InnoDB storage engine in MySQL. It discusses sources of diagnostic information like SHOW ENGINE INNODB STATUS and various status variables. Common problems covered include data dictionary issues, crashing, locking, and performance problems related to disk I/O, tablespace usage, CPU usage, and thread thrashing. Interpreting diagnostic information and potential solutions are provided for each type of issue.
PLDC 2012: InnoDB Architecture and Internals (mysqlops)
InnoDB uses a traditional OLTP architecture with row-based storage and row-level locking. Data is stored in tablespaces made up of segments, and changes are recorded in circular redo log files. The buffer pool caches data pages and uses an LRU algorithm to manage them, flushing dirty pages back to disk. Multi-versioning allows transactions to read past versions of rows without locking, while write operations require row locks. A variety of helper threads perform background tasks such as flushing data from the buffer pool to disk.
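The buffer-pool behaviour described above can be sketched with a toy LRU cache. This Python sketch is a simplification for illustration only: InnoDB's real LRU uses a midpoint-insertion strategy, and the class and names here are made up.

```python
from collections import OrderedDict

class ToyBufferPool:
    """Toy LRU page cache: least-recently-used pages are evicted first.
    (A simplification; InnoDB's actual LRU uses midpoint insertion.)"""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page_id -> page data

    def get(self, page_id, load_from_disk):
        if page_id in self.pages:           # cache hit: mark most-recent
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        data = load_from_disk(page_id)      # cache miss: read the page in
        self.pages[page_id] = data
        if len(self.pages) > self.capacity: # evict the least-recently-used
            self.pages.popitem(last=False)
        return data

pool = ToyBufferPool(capacity=2)
pool.get(1, lambda p: f"page-{p}")
pool.get(2, lambda p: f"page-{p}")
pool.get(1, lambda p: f"page-{p}")   # touch page 1, making page 2 the LRU
pool.get(3, lambda p: f"page-{p}")   # evicts page 2
print(sorted(pool.pages))            # [1, 3]
```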
This document provides instructions for quickly installing Oracle Database 12c Release 1 on Windows x64 systems:
- It describes configuring the system to meet hardware and software requirements, installing the Oracle Database software, and validating a successful installation.
- The typical installation will require a minimum of 2GB RAM, 10GB disk space, and supported versions of Windows and compilers.
- The installation creates several OS groups like ORA_DBA and ORA_ASMADMIN to manage privileges and provides options for specifying an Oracle Home user.
This document provides an overview of Oracle 12c Sharded Database Management. It defines what sharding is, how it works, and the benefits it provides such as extreme scalability, fault isolation, and cost reduction. It discusses Oracle's implementation of sharding using database partitioning and Global Data Services (GDS). Key concepts covered include shards, chunks, consistent hashing, and how Oracle supports operations across shards through GDS request routing.
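Consistent hashing, one of the key concepts listed, can be sketched generically. The Python example below is illustrative only and unrelated to Oracle's internal implementation; the shard names and virtual-node count are made up.

```python
import bisect, hashlib

def ring_hash(key):
    """Map a string to a point on a 32-bit hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class HashRing:
    """Minimal consistent-hash ring: each key goes to the first shard
    clockwise from its hash point, so adding or removing a shard only
    remaps the keys adjacent to it rather than rehashing everything."""
    def __init__(self, shards, vnodes=64):
        # Several virtual nodes per shard smooth out the distribution.
        self.ring = sorted((ring_hash(f"{s}#{v}"), s)
                           for s in shards for v in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def shard_for(self, key):
        i = bisect.bisect(self.points, ring_hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["shard1", "shard2", "shard3"])
print(ring.shard_for("customer:42"))   # deterministic shard choice
```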
Microsoft SQL Server Data Warehouses for SQL Server DBAs (Mark Kromer)
The document discusses Microsoft SQL Server data warehousing solutions. It provides an agenda for a presentation that includes an overview of Microsoft's data warehousing offerings, how to establish baseline metrics for Fast Track reference configurations, and how to design balanced server and storage configurations for data warehousing workloads. It also discusses software and hardware best practices, such as data striping and storage configuration recommendations. Overall, the document outlines topics and solutions to help customers accelerate their data warehouse deployments using Microsoft SQL Server.
The document discusses techniques for compacting, compressing, and de-duplicating data in Domino applications to reduce storage usage and improve performance. It covers compacting databases, compressing design elements, documents, and attachments, using DAOS to store attachments externally, and tools for defragmenting files.
This document discusses new features in Oracle WebLogic Server 12c. It begins with an introduction of the presenters. It then outlines 12 key things to know about WebLogic 12c, including an updated installer, per-domain Node Manager, server templates, dynamic clusters, unicast groups, cluster-targeted JMS, Java Mission Control for monitoring, built-in WLDF diagnostic modules, and RESTful management APIs. The document provides information on why upgrading to WebLogic 12.1.3 would be beneficial.
The document summarizes new features in MySQL 5.7, including improvements to InnoDB performance for read-only and read-write workloads, faster connection handling, bulk data load improvements, statement timeouts, multiple user level locks, and other features to improve scalability, concurrency, and performance.
Know Your Competitor - Oracle 10g Express Edition (Ronald Bradford)
The document provides an overview of Oracle 10g Express Edition (XE) for MySQL developers. It discusses why developers should be familiar with Oracle as the largest RDBMS provider, describes what XE is and how to install and configure it. The key processes involved in running an XE database are also outlined, including the log writer, database writer, checkpoint process, and memory manager.
The document discusses Oracle 12c's multitenant architecture which introduces the concepts of a container database (CDB) and pluggable databases (PDBs). A CDB can host multiple PDBs that appear as independent databases but share resources. PDBs can be unplugged from one CDB and plugged into another, allowing for quick provisioning and cloning of databases. The multitenant architecture provides benefits like consolidation of databases, rapid provisioning and cloning using SQL, and easier patching and upgrades.
This document provides an overview of Oracle 12c Pluggable Databases (PDBs). Key points include:
- PDBs allow multiple databases to be consolidated within a single container database (CDB), providing benefits like faster provisioning and upgrades by doing them once per CDB.
- Each PDB acts as an independent database with its own data dictionary but shares resources like redo logs at the CDB level. PDBs can be unplugged from one CDB and plugged into another.
- Hands-on labs demonstrate how to create, open, clone, and migrate PDBs between CDBs. The document also compares characteristics of CDBs and PDBs and shows how a non-C
Red Stack Tech Ltd is a global Oracle technology brand specialising in the provision of Oracle software, hardware, and managed and professional services across the entire Oracle technology stack. Established in the mid-90s, Red Stack Tech has developed, through R&D and investment in new technologies, a brand that is highly regarded within the Oracle landscape. Red Stack Tech is able to deliver full end-to-end solutions encompassing all Oracle technologies, with a strong focus on Oracle Engineered Systems, Database Management Services and Business Analytics.
Stellar Toolkit for Exchange, a Toolkit for Every Exchange Administrator (Bharat Bhushan)
A 5-in-1 suite of specialized tools, highly recommended by MVPs and IT administrators, for repairing corrupt EDB files, extracting mailboxes from backups, and converting Exchange database (EDB) mailboxes into PST file format. It also offers tools for extracting mailbox data from inaccessible OST files and resetting Windows Server passwords.
The document discusses monitoring and tuning Oracle Real Application Clusters (RAC) databases. It focuses on the global buffer cache, which is shared across RAC instances. Key waits related to the buffer cache include gc cr request for retrieving data from a remote cache and gc buffer busy waits for a remote instance accessing requested data. Tuning queries to reduce blocks accessed and managing hot blocks can help address buffer cache-related waits. The document recommends using Confio Ignite software to monitor RAC performance at the query level and identify imbalances or excessive overhead causing degraded performance.
This document discusses the creation of a multitenant container database (CDB) and pluggable databases (PDBs) in Oracle Database 12c. It covers creating a CDB using Oracle Universal Installer, Database Configuration Assistant, or manually. The manual process involves setting enable_pluggable_database to true, adding clauses to the CREATE DATABASE command, and running a script that creates the root and seed PDBs. The document also provides commands to validate if a database is a CDB and view its containers.
This document provides information on parsing XML documents in J2ME applications. It discusses XML parser types, commonly used XML parsers like kXML for J2ME, and provides an example of using kXML to parse an XML document retrieved over HTTP and display the parsed data in a J2ME application.
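kXML is a pull parser for Java ME; as a rough analogue (in Python rather than Java, using the standard library instead of kXML), iterating over an XML document's elements looks like the sketch below. The XML snippet and element names are made up for illustration.

```python
# Illustrative only: this is NOT kXML, just the same idea of walking
# an XML document's elements, shown with Python's standard library.
import xml.etree.ElementTree as ET

doc = """<catalog>
  <item id="1"><name>alpha</name></item>
  <item id="2"><name>beta</name></item>
</catalog>"""

root = ET.fromstring(doc)
# Collect the <name> text of every <item>, in document order.
names = [item.findtext("name") for item in root.iter("item")]
print(names)   # ['alpha', 'beta']
```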
This presentation reviews the best ways to accomplish database load testing and analysis of database performance. It targets the major RDBMS systems, Oracle and SQL Server, as well as the tools necessary for database load testing, Oracle performance tuning, SQL Server performance tuning, and Windows and Linux performance optimization.
Once the 'Backup Database' command is executed, SQL Server automatically issues a few checkpoints to reduce recovery time and to ensure that, at the point of command execution, there are no dirty pages in the buffer pool. After that, SQL Server creates at least three workers, a 'Controller', a 'Stream Reader' and a 'Stream Writer', to read and buffer the data asynchronously into a buffer area (outside the buffer pool) and write those buffers to the backup device.
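The reader/writer split described above has the shape of a classic producer-consumer pipeline. A generic Python sketch of that shape (not SQL Server's actual implementation; the page range and buffer size are made up):

```python
# Generic producer-consumer sketch: one thread reads "data pages" into
# a bounded buffer while another drains the buffer to the "backup
# device". This mimics the stream-reader / stream-writer arrangement,
# not SQL Server internals.
import queue, threading

buf = queue.Queue(maxsize=4)        # bounded buffer outside the "pool"
written = []

def stream_reader(pages):
    for page in pages:
        buf.put(page)               # blocks when the buffer is full
    buf.put(None)                   # sentinel: no more data

def stream_writer():
    while (page := buf.get()) is not None:
        written.append(page)        # "write to the backup device"

r = threading.Thread(target=stream_reader, args=(range(10),))
w = threading.Thread(target=stream_writer)
r.start(); w.start(); r.join(); w.join()
print(written)                      # pages arrive in order: [0, 1, ..., 9]
```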
SQL Health in a SharePoint Environment (Enrique Lima)
This document discusses how to maintain a healthy SharePoint environment. It emphasizes the importance of properly configuring and managing the SQL Server database that SharePoint runs on. It provides guidance on capacity planning, hardware sizing, maintenance best practices, and understanding SharePoint limitations and thresholds. The goal is to ensure the SQL Server infrastructure can support the SharePoint implementation and meet performance requirements.
Microsoft SQL Server - Files and Filegroups (Naji El Kotob)
This document discusses files and filegroups in Microsoft SQL Server. It begins by explaining pages and extents, which are the basic units of data storage and management in SQL Server. It then defines files, filegroups, and their default extensions (.mdf, .ndf, .ldf). The document outlines the differences between primary and secondary filegroups and provides recommendations for using files and filegroups to improve performance, enable backup/restore strategies, and follow design rules. It also discusses read-only filegroups and compares the benefits of using filegroups versus RAID storage configurations.
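The page and extent units mentioned above have fixed sizes in SQL Server: a page is 8 KB, and an extent is 8 contiguous pages (64 KB). The bookkeeping is simple arithmetic:

```python
# SQL Server storage units: an 8 KB page, and an extent of 8 pages.
PAGE_BYTES = 8 * 1024               # 8 KB per page
PAGES_PER_EXTENT = 8
EXTENT_BYTES = PAGE_BYTES * PAGES_PER_EXTENT

print(EXTENT_BYTES)                 # 65536 bytes = 64 KB

# How many extents does a 1 GB data file span?
file_bytes = 1 * 1024**3
print(file_bytes // EXTENT_BYTES)   # 16384 extents
```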
This document outlines best practices for configuring and managing the tempDB database in SQL Server. It discusses using tempDB for sorting operations, querying size and location information with T-SQL, autogrowth settings, shrinking tempDB with DBCC, best practice guidelines, optimizing performance by adjusting size and placement, changing the location, adding new files, and demonstrates these tasks in a live demo.
The document outlines 25 steps to implement a physical standby database between two servers, a primary database on 10.10.1.248 and a standby on 10.10.1.249. The steps include configuring the primary for archiving, setting log archive parameters, backing up the primary, duplicating the backup on the standby, and enabling archiving and recovery to bring the standby up to date with the primary.
This document discusses the architecture of Oracle's Exadata Database Machine. It describes the key components which provide high performance and availability, including:
- Shared storage using Exadata Storage Servers and Automatic Storage Management (ASM) for redundancy.
- A shared InfiniBand network for fast, low-latency interconnect between database and storage servers.
- A shared cache within the Real Application Clusters (RAC) environment.
- A cluster of up to 8 database servers each with 80 CPU cores and 256GB memory.
Oracle Database 12c Release 2 - New Features On Oracle Database Exadata Expr... (Alex Zaballa)
The document discusses new features in Oracle Database 12c Release 2 when used with Oracle Database Exadata Express Cloud Service. It covers features like pluggable databases supporting up to 4096 databases, hot cloning of databases, sharding capabilities, in-memory column store, application containers, and more. The presentation provides examples demonstrating several of these new features, such as native JSON support, improved data conversion functions, and approximate query processing.
Microsoft Windows has long been an ideal platform for the Oracle database server. Oracle has always fully supported Microsoft Windows and has added Windows-only features to Oracle, a trend that has continued with Oracle 11g. In some respects the choice of operating system (OS) is irrelevant; in other situations it is very important. The choice may depend on several variables, some technical and others business oriented. One may find that Microsoft Windows provides advantages in some environments and disadvantages in others. Introducing a new OS into an existing Windows infrastructure for the purpose of running a single application such as Oracle is no small task and can cause unforeseen problems, such as requiring additional staff or training. Installing Oracle on Windows in an exclusively UNIX environment can pose similar problems.
This paper, written by Ed Whalen, Performance Tuning Corporation COO and Oracle ACE, highlights some of the advantages of running Oracle on the Windows 64-bit operating system. It also addresses some of the basic issues and factors to consider when choosing to deploy a new operating system, such as Windows.
Learn more at www.perftuning.com
This document provides a summary of testing done to compare the performance of Oracle Database 10gR2 Real Application Cluster (RAC) implementations on 64-bit Microsoft Windows Server 2003 and 64-bit Red Hat Enterprise Linux. The testing environment consisted of identical Oracle RAC configurations on each platform using the same hardware components. Benchmark testing was performed using the SwingBench tool to evaluate and compare the performance of the two implementations under stress testing and user load testing scenarios. The results and conclusions from these tests are presented.
The document discusses new features in version 0.9.4 of the DivConq file transfer software, including file tasks that can be triggered by uploads, scheduling, or file system events. It introduces dcScript, the scripting language that allows users to string together various file operations and tasks. Key points include that dcScript scripts can run asynchronously, optimize file operations through in-memory streaming rather than disk reads/writes, and offer features to simplify complex multi-step file tasks. The document provides examples of using dcScript to encrypt, compress, split and transfer files with just a few lines of code.
Whitepaper: Running Oracle e-Business Suite Database on Oracle Database Appli... (Maris Elsins)
This is the whitepaper for my Collaborate 13 presentation with the same title. It describes how Pythian completed a project migrating an eBS R12 database to ODA (Oracle Appliance Kit v2.2).
White Paper: Still All on One Server: Perforce at Scale (Perforce)
- The document discusses Perforce, the source control system, at Google scale. It describes how Google runs the largest single Perforce server supporting over 12,000 users and handling 11-12 million commands daily.
- The server runs on a high-powered machine but database locking still limits concurrency at times. Google has taken steps to optimize resources like CPU, memory, disk I/O, and reduce metadata to improve performance.
- Regular upgrades to Perforce software have provided gains, and Google also pursues other improvements like reducing metadata and optimizing hardware resources and usage patterns.
SOGo is a scalable groupware server that provides shared calendars, address books and emails through a web interface and native clients. This document outlines how to install and configure SOGo on Red Hat or CentOS. It describes downloading and installing SOGo and its dependencies using YUM. The configuration involves setting parameters in the GNUstep user defaults file to configure SOGo's general preferences, authentication method, and domains. The preferences hierarchy allows parameters to be defined at the system, domain, or user level.
Apache Hadoop 3.x State of the Union and Upgrade Guidance - Strata 2019 NY (Wangda Tan)
The document discusses Apache Hadoop 3.x updates and provides guidance for upgrading to Hadoop 3. It covers community updates, features in YARN, Submarine, HDFS, and Ozone. Release plans are outlined for Hadoop, Submarine, and upgrades from Hadoop 2 to 3. Express upgrades are recommended over rolling upgrades for the major version change. The session summarizes that Hadoop 3 is an eagerly awaited release with many successful production uses, and that now is a good time for those not yet upgraded.
A centralized open-source deployment to capture and deploy multi-platform images, providing centralized authentication and authorization with OpenLDAP for verification on client machines.
The document summarizes new features in MySQL 5.5 and 5.6. Some key points:
- MySQL 5.5 improved InnoDB performance, added new monitoring tools, and supported features like multi-buffer pools.
- MySQL 5.6 focused on improvements to replication like GTIDs for easier management, multi-threaded slaves for performance, and crash-safe replication.
- Other new features included online DDL support and transportable InnoDB tables to move data between servers.
The document discusses several case studies of using the DBAL (Database Abstraction Layer) in real-life TYPO3 projects with different database backends like Oracle 10g, MS SQL Server, and MySQL Cluster. It finds that while some issues remain, the DBAL works well enough to use in projects and allows different database backends. It encourages providing feedback to help improve DBAL and help address remaining problems.
A Technical Comparison: ISO/IEC 26300 vs Microsoft Office Open XML Alexandro Colorado
Two XML office file formats have been pressing upon our attention, the OASIS OpenDocument Format, recently standardized by ISO, and the Draft Ecma Office Open XML. This presentation will review history of each, the process that created them, and examine each format to compare and contrast how they deal with issues such extensibility, modularization, expressivity, performance, reuse of standards, programability, ease of use, and application/OS neutrality.
Sequential file programming patterns and performance with .netMichael Pavlovsky
Sequential file access is very common and critical for large files. The .NET framework and Windows provide default buffered I/O for sequential file access that achieves excellent performance of 50 MB/s without needing custom code. The document describes programming patterns for simple sequential text and binary file access in .NET, including opening files, reading and writing data, and handling exceptions. It also analyzes the performance impacts of parameters like block size and file fragmentation.
What Have We Lost - A look at some historical techniquesLloydMoore
In the last 40+ years that I have been developing software there has been a massive increase in computing capability, both in terms of performance and the speed and ease of which new software can be created.
If we look back , the level of benefit to the user hasn’t kept pace with either of the above metrics. 40+ years ago I would have used a word processor to write this abstract (albeit without some of the fancy formatting) on a computer that had a single 8 bit processor and ran at Mhz speeds (ie: Commodore 64). On my modern computer with 16x 64 bit processors each running at Ghz speed writing this abstract is much the same experience. (For the moment we’ll ignore that MABYE in the not too distant future I won’t be doing this at all as a ChatGPT descent will do it for me – which of course would justify the cost of my current computer!)
Let’s take a moment to look back at how things were done in the past, particularly at techniques which I do not find in common practice, to see how we can do better. This is not to say that 40+ years of progress should be thrown out the window, this is to say some 40+ year old techniques still have value today and should not be forgotten.
Windows 2000 is a 32-bit operating system designed for compatibility, reliability, and performance. It includes several key components like the kernel, executive services, and environmental subsystems. The kernel schedules threads and handles exceptions/interrupts. Executive services include the object manager, virtual memory manager, process manager, and I/O manager. Environmental subsystems allow running applications from other operating systems. The document also discusses disk structure, file systems, networking, and other OS concepts.
The document discusses new features in Apache Hadoop Common and HDFS for version 3.0. Key updates include upgrading the minimum Java version to Java 8, improving dependency management, adding a new Azure Data Lake Storage connector, and introducing erasure coding in HDFS to improve storage efficiency. Erasure coding in HDFS phase 1 allows for striping of small blocks and parallel writes/reads while trading off higher network usage compared to replication.
Galaxy Big Data with MariaDB 10 by Bernard Garros, Sandrine Chirokoff and Stéphane Varoqui.
Presented 26.6.2014 at the MariaDB Roadshow in Paris, France.
This document provides 10 tips for optimizing MySQL database performance at the operating system level. The tips include using SSDs instead of HDDs for faster I/O, allocating large amounts of memory, avoiding swap space, keeping the MySQL version up to date, using file systems without barriers, configuring RAID cards for write-back caching, and leveraging huge pages. Overall, the tips aim to improve I/O speeds and memory usage to enhance MySQL query processing performance.
This document provides instructions for installing Oracle Applications R12 (12.1.3) on a Linux (64-bit) system. It describes downloading and unzipping the installation files, performing pre-install tasks like configuring disk space and software requirements, and outlines the installation process including setting environment variables and the directory structure. It also covers upgrading an existing 12.1.3 installation with a patch and provides solutions for potential issues that may occur.
Hdg explains swapfile.sys, hiberfil.sys and pagefileTrường Tiền
The document discusses the pagefile.sys, hiberfil.sys, and swapfile.sys files in Windows 8. It explains that pagefile.sys is used for virtual memory when physical RAM is exhausted. Hiberfil.sys is used for hibernation and fast startup, and is only present if fast startup is enabled. Swapfile.sys is used specifically for suspending and resuming Metro apps, and may have other future uses. It is smaller than pagefile.sys. Fast startup results in hiberfil.sys being 75% of RAM size and pagefile.sys 25% of RAM size.
Hdg explains swapfile.sys, hiberfil.sys and pagefileTrường Tiền
The document discusses the pagefile.sys, hiberfil.sys, and swapfile.sys files in Windows 8. It explains that pagefile.sys is used for virtual memory when physical RAM is exhausted. Hiberfil.sys is used for hibernation and fast startup, and only exists if fast startup is enabled. Swapfile.sys is used specifically for suspending and resuming Metro apps, and may have other future uses. It is smaller than pagefile.sys.
Similar to 2 gb or not 2gb file limits in oracle (20)
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
2 gb or not 2gb file limits in oracle
1. https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&i...
1 of 6 1/19/2011 6:19 PM
2Gb or Not 2Gb - File limits in Oracle [ID 62427.1]
Modified 29-SEP-2010 Type BULLETIN Status ARCHIVED
Applies to:
Oracle Server - Enterprise Edition - Version: 7.0.16.0 to 8.1.7.4 - Release: 7.0 to 8.1.7
Information in this document applies to any platform.
Purpose
This document describes "2Gb" issues. It gives information on why 2Gb is a
magical number and outlines the issues you need to know about if you are
considering using Oracle with files larger than 2Gb in size. It also
looks at some other file related limits and issues.
Articles giving port specific limits are listed in the last section.
Topics covered include:
Why is 2Gb a Special Number ?
Why use 2Gb+ Datafiles ?
Export and 2Gb
SQL*Loader and 2Gb
Oracle and other 2Gb issues
Port Specific Information on "Large Files"
Scope and Application
This document has a Unix bias as this is where most of the 2Gb issues arise
but there is information relevant to other (non-unix) platforms.
Note: Questions regarding application specific issues (e.g., Concurrent Manager
in Oracle E-Business Suite R12.1.2 on OEL5-64-bit.) and file size limitations
will need to be addressed with the specific application team and are beyond the
scope of this document.
2Gb or Not 2Gb - File limits in Oracle
Why is 2Gb a Special Number?
Many CPUs and system call interfaces (APIs) in use today use a word
size of 32 bits. This word size imposes limits on many operations.
In many cases the standard APIs for file operations use a 32-bit signed
word to represent both file size and current position within a file (byte
displacement). A 'signed' 32-bit word uses the topmost bit as a sign
indicator, leaving only 31 bits to represent the actual value (positive or
negative). In hexadecimal the largest positive number that can be
represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
This is ONE less than 2Gb.
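The signed 32-bit limit above can be sketched in a few lines. Python is used here purely as illustration (it is not part of any Oracle tool); `struct.pack` with the signed 32-bit format `"<i"` rejects any value above 0x7FFFFFFF, just as a 32-bit file API cannot represent an offset of 2Gb:

```python
# Illustrate the 31-bit positive range of a signed 32-bit word,
# the limit behind the classic 2Gb file-size barrier.
import struct

MAX_SIGNED_32 = 0x7FFFFFFF  # 2147483647 decimal, one less than 2Gb

# Packing the maximum value as a signed 32-bit integer succeeds...
struct.pack("<i", MAX_SIGNED_32)

# ...but 2Gb itself (2147483648) no longer fits in a signed 32-bit word.
try:
    struct.pack("<i", MAX_SIGNED_32 + 1)
except struct.error:
    print("2147483648 (2Gb) overflows a signed 32-bit word")
```

This is exactly why a file size or seek offset of 2Gb is the first value a 32-bit signed file API cannot express.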
Files of 2Gb or more are generally known as 'large files'. As one might
Created with novaPDF Printer (www.novaPDF.com). Please register to remove this message.
expect, problems can start to surface once you try to use the number
2147483648 or higher in a 32bit environment. To overcome this problem
recent versions of operating systems have defined new system calls which
typically use 64-bit addressing for file sizes and offsets. Recent Oracle
releases make use of these new interfaces but there are a number of issues
one should be aware of before deciding to use 'large files'.
Another "special" number is 4Gb. 0xFFFFFFFF in hexadecimal can be
interpreted as an UNSIGNED value (4294967295 decimal) which is one less
than 4Gb. Adding one to this value yields 0x00000000 in the low order
4 bytes with a '1' carried over. The carried over bit is lost when using
32bit arithmetic. Hence 4Gb is another "special" number where problems
may occur. Such issues are also mentioned in this document.
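The 4Gb wrap-around can be demonstrated the same way; masking a result to 32 bits mimics 32-bit arithmetic, where the carried-over bit is simply lost (again, an illustrative Python sketch, not Oracle code):

```python
# Illustrate the 4Gb wrap-around: adding one to 0xFFFFFFFF carries
# out of 32 bits, and in 32-bit arithmetic that carry is lost.
UNSIGNED_MAX_32 = 0xFFFFFFFF       # 4294967295 decimal, one less than 4Gb
FOUR_GB = UNSIGNED_MAX_32 + 1      # 4294967296

# Masking to 32 bits simulates 32-bit arithmetic: the result is 0.
wrapped = FOUR_GB & 0xFFFFFFFF
print(wrapped)  # 0 -- a 4Gb quantity silently becomes zero
```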
What does this mean when using Oracle?
The 32bit issue affects Oracle in a number of ways. In order to use large
files you need to have:
1. An operating system that supports 2Gb+ files or raw devices
2. An operating system which has an API to support I/O on 2Gb+ files
3. A version of Oracle which uses this API
Today most platforms support large files and have 64bit APIs for such files.
Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs but
the situation is very dependent on platform, operating system version and
the Oracle version. In some cases 'large file' support is present by
default, while in other cases a special patch may be required.
At the time of writing there are some tools within Oracle which have not
been updated to use the new APIs, most notably tools like EXPORT and
SQL*LOADER, but again the exact situation is platform and version specific.
Why use 2Gb+ Datafiles?
In this section we will try to summarise the advantages and disadvantages
of using "large" files / devices for Oracle datafiles:
Advantages of files larger than 2Gb:
On most platforms Oracle7 supports up to 1022 datafiles.
With files < 2Gb this limits the database size to less than 2044Gb.
This is not an issue with Oracle8 and higher which supports many more files.
(Oracle8 supported 1022 files PER TABLESPACE).
In reality the maximum database size in Oracle7 would be less than
2044Gb due to maintaining separate data in separate tablespaces.
Some of these may be much less than 2Gb in size. Larger files
allow this 2044Gb limit to be exceeded.
Larger files can mean fewer files to manage for smaller databases.
Fewer file handle resources are required.
Disadvantages of files larger than 2Gb:
The unit of recovery is larger. A 2Gb file may take between 15 minutes
and 1 hour to backup / restore depending on the backup media and
disk speeds. An 8Gb file may take 4 times as long.
Parallelism of backup / recovery operations may be impacted.
There may be platform specific limitations - Eg: Asynchronous IO
operations may be serialised above the 2Gb mark.
As handling of files above 2Gb may need patches, special configuration
etc., there is an increased risk involved as opposed to smaller files.
Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
Important points if using files >= 2Gb
Check with the OS Vendor to determine if large files are supported
and how to configure for them.
Check with the OS Vendor what the maximum file size actually is.
Check with Oracle support if any patches or limitations apply
on your platform , OS version and Oracle version.
Remember to check again if you are considering upgrading either
Oracle or the OS in case any patches are required in the release
you are moving to.
Make sure any operating system limits are set correctly to allow
access to large files for all users.
Make sure any backup scripts can also cope with large files.
Note that there is still a limit to the maximum file size you
can use for datafiles above 2Gb in size. The exact limit depends
on the DB_BLOCK_SIZE of the database and the platform. On most
platforms (Unix, NT, VMS) the limit on file size is around
4194302*DB_BLOCK_SIZE.
See the details in the Alert which describes
problems with resizing files, especially to above 2Gb in size.
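As a quick arithmetic sketch of the 4194302*DB_BLOCK_SIZE ceiling quoted above (these figures assume the "most platforms" case; always check your port-specific article for the real limit):

```python
# Approximate per-datafile size ceiling: ~4194302 * DB_BLOCK_SIZE blocks
# on most platforms (Unix, NT, VMS), per the note above.
MAX_BLOCKS = 4194302  # 2**22 - 2

for block_size in (2048, 4096, 8192):
    limit_bytes = MAX_BLOCKS * block_size
    print(f"DB_BLOCK_SIZE={block_size}: ~{limit_bytes / 2**30:.0f}Gb per datafile")
```

So even with large-file support, a 2K-block database is capped at roughly 8Gb per datafile, a 4K-block database at roughly 16Gb, and so on.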
Important notes generally
Be careful when allowing files to automatically resize. It is
sensible to always limit the MAXSIZE for AUTOEXTEND files to less
than 2Gb if not using 'large files', and to a sensible limit
otherwise. Note that it is possible to specify
a value of MAXSIZE larger than Oracle can cope with, which may
result in internal errors after the resize occurs. (Errors
typically include ORA-600 [3292])
On many platforms Oracle datafiles have an additional header
block at the start of the file so creating a file of 2Gb actually
requires slightly more than 2Gb of disk space. On Unix platforms
the additional header for datafiles is usually DB_BLOCK_SIZE bytes
but may be larger when creating datafiles on raw devices.
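The extra header block is easy to account for with a trivial calculation. The sketch below assumes a single DB_BLOCK_SIZE-byte header, as is typical for Unix file-system datafiles; raw devices may need more:

```python
# Sketch: disk space actually consumed by a "2Gb" datafile when the
# platform adds one DB_BLOCK_SIZE-byte header block in front of the data.
TWO_GB = 2 * 2**30

def datafile_disk_bytes(datafile_size, db_block_size=8192):
    # One extra header block precedes the data blocks on many platforms.
    return datafile_size + db_block_size

needed = datafile_disk_bytes(TWO_GB)
print(needed - TWO_GB)  # 8192 extra bytes beyond the nominal 2Gb
```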
2Gb related Oracle Errors:
These are a few of the errors which may occur when a 2Gb limit
is present. They are not in any particular order.
ORA-01119 Error in creating datafile xxxx
ORA-27044 unable to write header block of file
SVR4 Error: 22: Invalid argument
ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
ORA-27070 skgfdisp: async read/write failed
ORA-02237 invalid file size
KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
file limit exceed.
Unix error 27, EFBIG
Export and 2Gb
2Gb Export File Size
At the time of writing most versions of export use the default file
open API when creating an export file. This means that on many platforms
it is impossible to export a file of 2Gb or larger to a file system file.
There are several options available to overcome 2Gb file limits with
export such as:
- It is generally possible to write an export > 2Gb to a raw device.
Obviously the raw device has to be large enough to fit the entire
export into it.
- By exporting to a named pipe (on Unix) one can compress, zip or
split up the output.
See: "Quick Reference to Exporting >2Gb on Unix"
- One can export to tape (on most platforms)
See "Exporting to tape on Unix systems"
(This article also describes in detail how to export to
a unix pipe, remote shell etc..)
- Oracle8i allows you to write an export to multiple export
files rather than to one large export file.
Other 2Gb Export Issues
Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
with EXPORT on many releases of Oracle such that if you export a large table
and specify COMPRESS=Y then it is possible for the NEXT storage clause
of the CREATE TABLE statement in the EXPORT file to contain a size above 2Gb. This
will cause import to fail even if IGNORE=Y is specified at import time.
This issue is reported in and is alerted in
An export will typically report errors like this when it hits a 2Gb
limit:
. . exporting table BIGEXPORT
EXP-00015: error on row 10660 of table BIGEXPORT,
column MYCOL, datatype 96
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
There is a secondary issue reported in which indicates that
a full database export generates a CREATE TABLESPACE command with the
file size specified in BYTES. If the filesize is above 2Gb this may
cause an ORA-2237 error when attempting to create the file on IMPORT.
This issue can be worked around by creating the tablespace prior to
importing, specifying the file size in 'M' instead of in bytes.
indicates a similar problem.
Export to Tape
The VOLSIZE parameter for export is limited to values less than 4Gb.
On some platforms it may be only 2Gb.
This is corrected in Oracle 8i. A separate article describes this problem.
SQL*Loader and 2Gb
Typically SQL*Loader will error when it attempts to open an input
file larger than 2Gb with an error of the form:
SQL*Loader-500: Unable to open file (bigfile.dat)
SVR4 Error: 79: Value too large for defined data type
The examples in can be modified for use with SQL*Loader
for large input data files.
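A defensive pre-check can catch this before SQL*Loader does. The helper below is hypothetical (it is not an Oracle utility, and `check_loader_input` is a name invented for this sketch); it simply compares the input file size against the signed 32-bit limit:

```python
# Hypothetical pre-check -- NOT an Oracle utility. Flags input files
# that a 32-bit SQL*Loader build may refuse to open with SQL*Loader-500.
import os

SIGNED_32_LIMIT = 2**31  # 2Gb: first size a signed 32-bit offset cannot hold

def check_loader_input(path):
    """Return the file size in bytes, warning if it is at or above 2Gb."""
    size = os.path.getsize(path)
    if size >= SIGNED_32_LIMIT:
        print(f"{path}: {size} bytes >= 2Gb; expect SQL*Loader-500 unless "
              "large-file support is present on this platform/release")
    return size
```

Running such a check before a load at least turns a cryptic SVR4 error into an early, explicit warning.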
Other 2Gb issues
This section lists miscellaneous 2Gb issues:
- From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
An extract from the 8.0.5 README file introduces these - see
- DBV (the database verification file program) may not be able to scan
datafiles larger than 2Gb reporting "DBV-100".
- "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
specified in 'M' or 'K' to create files larger than 2Gb otherwise the
error "ORA-02237: invalid file size" is reported. This is documented
in .
- Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
Eg: ALTER USER QUOTA 2500M ON
reports
ORA-2187: invalid quota specification.
This is documented in .
The workaround is to grant users UNLIMITED TABLESPACE privilege if they
need a quota above 2Gb.
- Tools which spool output may error if the spool file reaches 2Gb in size.
Eg: sqlplus spool output.
- Certain 'core' functions in Oracle tools do not support large files
- The UTL_FILE package uses the 'core' functions mentioned above and so is
limited by 2Gb restrictions in Oracle releases which do not contain this fix.
UTL_FILE is a PL/SQL package which allows file IO from within
PL/SQL.
Port Specific Information on "Large Files"
Below are references to information on large file support for specific
platforms. Although every effort is made to keep the information in
these articles up-to-date it is still advisable to carefully test any
operation which reads or writes from / to large files:
Platform See
~~~~~~~~ ~~~
AIX (RS6000 / SP)
Digital Unix
Sequent PTX
Sun Solaris
Windows NT Maximum 4Gb files on FAT
Theoretical 16Tb on NTFS
** See before using large files
on NT with Oracle8
*2 There is a problem with DBVERIFY on 8.1.6
See
*3 There is a problem with 8.1.6 / 8.1.7
where an autoextend to 4Gb can
cause a crash - see
Related Products
Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition