About Me
- Email: liu.maclean@gmail.com
- Blog: www.oracledatabase12g.com
- Oracle Certified Database Administrator Master, 10g and 11g
- Over 6 years of experience with Oracle DBA technology
- Over 7 years of experience with Linux technology
- Member, Independent Oracle Users Group
- Member, All China Users Group
- Presents on advanced Oracle topics: RAC, Data Guard, Performance Tuning, and Oracle internals
Here is how it started. A customer has a core 10g production database and needed to purge archived data more than 2 years old from a few large tables. These are ordinary heap tables, with no partitioning or other such techniques in use. Because the online business could not be disturbed, tricks like insert append/rename to speed up the purge were ruled out; the only option was to delete in batches by rowid inside an anonymous PL/SQL block. Slower, but acceptable. The PL/SQL block looks like this:
DECLARE
    CURSOR table_name_cur IS
        SELECT /*+ FULL(a) */ a.rowid
          FROM table_name a
         WHERE time_column < required_date;
    table_name_rec table_name_cur%ROWTYPE;
    row_number     NUMBER;
BEGIN
    row_number := 0;
    OPEN table_name_cur;
    LOOP
        FETCH table_name_cur INTO table_name_rec;
        IF table_name_cur%NOTFOUND THEN
            COMMIT;
            EXIT;
        END IF;
        DELETE FROM table_name WHERE rowid = table_name_rec.rowid;
        row_number := row_number + 1;
        IF MOD(row_number, 1000) = 0 THEN
            INSERT INTO delete_rows VALUES (row_number);
            COMMIT;
        END IF;
    END LOOP;
    INSERT INTO delete_rows VALUES (row_number);
    COMMIT;
    CLOSE table_name_cur;
END;
/
The block above uses a cursor to FULL SCAN the target table for the rowids of the rows to be deleted, then fetches those rowids in a loop and deletes the rows one by one.

The problem appeared on an LG table that is not especially large (under 10GB): deleting from it took more than 10 hours, while other, larger tables each finished in only 2-3 hours.

To investigate this anomaly, the customer traced the delete with event 10046 at level 8. Because the whole delete runs for so long, only a short window was traced; the trace is therefore incomplete and cannot be analyzed with tkprof, so there was nothing for it but to analyze the raw trace.

The dominant wait in that window is db file sequential read (DFSR for short), i.e. single-block reads of data files. My first thought was that chained/migrated rows on the table were causing the mass of DFSR, but the customer routinely runs a chained-rows analyze against this table, and no significant chained/migrated row problem has been found on it.
Looking at the p1/p2/obj# parameters of these DFSR events shows that the single-block reads mostly target two indexes on the LG table, and, strangest of all, one index is read far more often than the other, at a ratio of roughly 60:1. The contrast between the two indexes made me realize where the problem lay. The dba_indexes view shows they were last analyzed on 4/18, and the biggest difference between their statistics is not index size but clustering_factor, the clustering factor. The LG table holds about 60 million rows; index A has a clustering_factor of 1.7 million, while index B's reaches 34 million, a 20x gap between the two. Evidently the column under index A is well ordered (probably sequence-generated) while index B's column is fairly random, which produces this contrast.
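To make the statistic concrete: clustering_factor is essentially a count of how many times the table block changes as you walk the index in key order. A small Python sketch of that counting rule (an illustration of the idea, not Oracle's actual algorithm; the 10-rows-per-block layout is an assumption for the example):

```python
import random

def clustering_factor(table_blocks_in_key_order):
    """Count how often the table block changes while walking index
    entries in key order -- the idea behind the clustering_factor
    statistic (illustrative only)."""
    cf = 0
    prev = None
    for blk in table_blocks_in_key_order:
        if blk != prev:
            cf += 1
            prev = blk
    return cf

# 100 rows stored 10 per block: table block ids 0..9 in storage order
ordered = [row // 10 for row in range(100)]   # key order matches storage order
random.seed(1)
shuffled = ordered[:]
random.shuffle(shuffled)                      # key order unrelated to storage order

print(clustering_factor(ordered))    # 10 -- about the number of table blocks
print(clustering_factor(shuffled))   # close to the number of rows
```

A sequence-like column yields a value near the table's block count; a random column yields a value near the row count, exactly the pattern index A and index B show.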
Because the rowids are obtained up front by FULL SCANning the target table, the table blocks holding the rows have already been read into the buffer cache by the scan before the actual deletes run; the FULL SCAN uses scattered (multi-block) reads and is not the main bottleneck here. The real problem is this: suppose one table data block holds 20 rows. For index A, with its low clustering_factor, those 20 rows may all map into a single leaf block, while for index B, with its high clustering_factor, the same 20 rows may map into 20 different leaf blocks. Deleting the 20 rows in that data block then means visiting 1 leaf block of index A but 20 leaf blocks of index B. Since all of this is archived historical data, no ordinary workload touches those old leaf blocks any more, so they are almost never in the buffer cache, and the server process has to read them in from disk "one by one" (literally one block per read). And the worst part is still to come: the instance's buffer cache is finite, so index B's leaf blocks may be evicted soon after being read in, and because the table's physical order bears no relation to index B, the same leaf blocks may have to be visited again by later deletes, which greatly amplifies the physical-read cost on index B. On the Oracle wait interface this shows up as the user seeing frequent "db file sequential read" waits against a particular index.
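The eviction-and-reread effect just described can be sketched with a toy LRU buffer cache. This is a simulation under assumed numbers (20 rows per leaf block, a cache far smaller than index B's leaf set), not a model of Oracle's real buffer cache:

```python
import random
from collections import OrderedDict

def physical_reads(leaf_accesses, cache_size):
    """Count disk reads for a stream of leaf-block accesses
    going through a toy LRU cache."""
    cache = OrderedDict()
    reads = 0
    for blk in leaf_accesses:
        if blk in cache:
            cache.move_to_end(blk)          # cache hit: refresh LRU position
        else:
            reads += 1                      # cache miss: one single-block read
            cache[blk] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used block
    return reads

random.seed(1)
n_rows, rows_per_leaf = 20000, 20
n_leaves = n_rows // rows_per_leaf          # 1000 leaf blocks
# Index A: deleting in table order hits consecutive leaf blocks
index_a = [r // rows_per_leaf for r in range(n_rows)]
# Index B: table order unrelated to key order -> effectively random leaves
index_b = [random.randrange(n_leaves) for _ in range(n_rows)]

small_cache = 100                           # far fewer buffers than 1000 leaves
print(physical_reads(index_a, small_cache))  # 1000: each leaf read exactly once
print(physical_reads(index_b, small_cache))  # many times more: leaves re-read after eviction
```

With the well-clustered pattern every leaf is read once and stays hot while it is needed; with the random pattern most accesses miss, so the same leaves are read from disk over and over, which is precisely the flood of db file sequential read waits seen against index B.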
Let's reproduce the scenario to look more closely at how the clustering factor affects a delete.

First, build a table with one extremely ordered column and one extremely unordered column:
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> create table sample nologging tablespace users as select rownum t1 from
dual connect by level<=900000;
Table created.
SQL> alter table sample add t2 number;
Table altered.
SQL> update sample set t2=dbms_random.value(1,999999999999999);
900000 rows updated.
SQL> commit;
Commit complete.
SQL> create index ind_t1 on sample(t1) nologging;
Index created.
SQL> create index ind_t2 on sample(t2) nologging;
Index created.
SQL> exec dbms_stats.gather_table_stats('MACLEAN','SAMPLE',cascade=>TRUE);
PL/SQL procedure successfully completed.
SQL> select blocks,NUM_ROWS from dba_tables where table_name='SAMPLE';
BLOCKS NUM_ROWS
---------- ----------
13213 900000
SQL> select CLUSTERING_FACTOR,LEAF_BLOCKS,DISTINCT_KEYS,index_name from
dba_indexes where table_name='SAMPLE';
CLUSTERING_FACTOR LEAF_BLOCKS DISTINCT_KEYS INDEX_NAME
----------------- ----------- ------------- ------------------------------
899782 4148 896896 IND_T2
14975 2004 900000 IND_T1
/* The above builds a 900,000-row table occupying 13,213 blocks.
   Column T1 is generated from a sequence and is well ordered.
   Column T2 is generated from random numbers and is unordered.
   Index ind_t1, built on t1, has a low clustering_factor of 14975, close to the number of blocks in the table.
   Index ind_t2, built on t2, has a clustering_factor of 899782, close to the number of rows in the table.
*/
SQL> alter session set events '10046 trace name context forever,level 8';
Session altered.
SQL> set timing on;
DECLARE
    CURSOR table_name_cur IS
        SELECT /*+ FULL(a) */ a.rowid
          FROM sample a
         WHERE t1 <= 900000;
    table_name_rec table_name_cur%ROWTYPE;
    row_number     NUMBER;
BEGIN
    row_number := 0;
    OPEN table_name_cur;
    LOOP
        FETCH table_name_cur INTO table_name_rec;
        IF table_name_cur%NOTFOUND THEN
            COMMIT;
            EXIT;
        END IF;
        DELETE FROM sample WHERE rowid = table_name_rec.rowid;
        row_number := row_number + 1;
        IF MOD(row_number, 1000) = 0 THEN
            INSERT INTO delete_rows VALUES (row_number);
            COMMIT;
        END IF;
    END LOOP;
    INSERT INTO delete_rows VALUES (row_number);
    COMMIT;
    CLOSE table_name_cur;
END;
/
Elapsed: 00:03:28.52
Looking at its trace file, a db file sequential read wait follows close behind every few EXEC/FETCH pairs.

As is well known, for the db file sequential read event in dynamic performance views such as v$session / v$session_wait / v$active_session_history, p1 is the file number, p2 the block number, and p3 the number of blocks read, normally 1. In a 10046 trace we can see file#, block#, blocks and obj# directly, so to tell which object each single-block read hit we only need to look at obj#:
SQL> select object_id,object_name,object_type from dba_objects where object_name
in ('SAMPLE','IND_T1','IND_T2');
OBJECT_ID OBJECT_NAME OBJECT_TYPE
---------- -------------------- -------------------
1307548 IND_T1 INDEX
1307549 IND_T2 INDEX
1307547 SAMPLE TABLE
WAIT #3: nam='db file sequential read' ela= 283 file#=6 block#=3311 blocks=1
obj#=1307549 tim=1275797217728516
EXEC #3:c=999,e=349,p=1,cr=2,cu=8,mis=0,r=1,dep=1,og=1,tim=1275797217728552
FETCH #2:c=0,e=5,p=0,cr=1,cu=0,mis=0,r=1,dep=1,og=1,tim=1275797217728578
EXEC #3:c=0,e=49,p=0,cr=1,cu=8,mis=0,r=1,dep=1,og=1,tim=1275797217728641
FETCH #2:c=0,e=4,p=0,cr=1,cu=0,mis=0,r=1,dep=1,og=1,tim=1275797217728663
EXEC #3:c=0,e=36,p=0,cr=1,cu=8,mis=0,r=1,dep=1,og=1,tim=1275797217728712
FETCH #2:c=0,e=3,p=0,cr=1,cu=0,mis=0,r=1,dep=1,og=1,tim=1275797217728732
WAIT #3: nam='db file sequential read' ela= 205 file#=6 block#=3956 blocks=1
obj#=1307549 tim=1275797217728979
EXEC #3:c=0,e=265,p=1,cr=1,cu=8,mis=0,r=1,dep=1,og=1,tim=1275797217729010
FETCH #2:c=0,e=5,p=0,cr=1,cu=0,mis=0,r=1,dep=1,og=1,tim=1275797217729036
[oracle@rh2 udump]$ cat g10r2_ora_5190.trc|grep "db file sequential read"|wc -l
72395
[oracle@rh2 udump]$ cat g10r2_ora_5190.trc|grep "db file sequential read"|grep
1307549|wc -l
67721
[oracle@rh2 udump]$ cat g10r2_ora_5190.trc|grep "db file sequential read"|grep
1307548|wc -l
3878
/* object_id 1307549 above is IND_T2, the index with the higher clustering_factor;
   it accounts for the vast majority of the db file sequential read waits.
   object_id 1307548 is IND_T1, the index with the lower clustering_factor;
   it produced only a small number of db file sequential read waits.
*/
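The grep/wc counting above can also be scripted. This sketch pulls obj# out of raw 10046 WAIT lines and tallies the single-block reads per object (the field layout follows the trace excerpts shown above; the sample text here is a shortened stand-in for a real trace file):

```python
import re
from collections import Counter

# obj# may land on a continuation line, so match lazily across newlines
WAIT_RE = re.compile(r"nam='db file sequential read'.*?obj#=(\d+)", re.DOTALL)

def count_dfsr_by_obj(trace_text):
    """Count db file sequential read waits per obj# in raw 10046 trace text."""
    return Counter(m.group(1) for m in WAIT_RE.finditer(trace_text))

sample = """\
WAIT #3: nam='db file sequential read' ela= 283 file#=6 block#=3311 blocks=1
obj#=1307549 tim=1275797217728516
EXEC #3:c=999,e=349,p=1,cr=2,cu=8,mis=0,r=1,dep=1,og=1,tim=1275797217728552
WAIT #3: nam='db file sequential read' ela= 205 file#=6 block#=3956 blocks=1
obj#=1307549 tim=1275797217728979
WAIT #3: nam='db file sequential read' ela= 190 file#=6 block#=120 blocks=1
obj#=1307548 tim=1275797217729100
"""
print(count_dfsr_by_obj(sample))  # obj# 1307549 -> 2 waits, obj# 1307548 -> 1 wait
```

Joining the resulting obj# counts against dba_objects then gives the per-index breakdown directly, the same result the grep pipeline produced.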
SQL> select
sql_id,executions,disk_reads,user_io_wait_time/1000000,elapsed_time/1000000
2 from v$sql
3 where sql_text='DELETE FROM SAMPLE WHERE ROWID = :B1 ';
SQL_ID        EXECUTIONS DISK_READS USER_IO_WAIT_TIME/1000000 ELAPSED_TIME/1000000
------------- ---------- ---------- ------------------------- --------------------
31m4m2drt2t5m     900000      74936                 67.862581           147.743482
[oracle@rh2 udump]$ tkprof g10r2_ora_5190.trc 5190.tkf sys=no
DELETE FROM SAMPLE
WHERE
ROWID = :B1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 900000 78.67 147.73 74936 916440 6401613 900000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 900001 78.67 147.73 74936 916440 6401613 900000
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 21.99 21.99