The log is a sequential file that contains the complete history of changes made to the database. Techniques such as group commit and ping-pong writes make log writes efficient and reliable. The log anchor points to the end of the log and is carefully written so that, after a failure, recovery can find the last part of the log and replay transactions. Logs can be archived periodically to remove old transactions and advance the recovery start point.
10a: Log
1. Gray & Reuter Log
10a: 1
Log Manager
Jim Gray
Microsoft, Gray@Microsoft.com
Andreas Reuter
International University, Andreas.Reuter@i-u.de
        Mon         Tue            Wed           Thur            Fri
 9:00   Overview    TP mons        Log           Files&Buffers   B-tree
11:00   Faults      Lock Theory    ResMgr        COM+            Access Paths
 1:30   Tolerance   Lock Techniq   CICS & Inet   Corba           Groupware
 3:30   T Models    Queues         Adv TM        Replication     Benchmark
 7:00   Party       Workflow       Cyberbrick    Party
2. Log Concept
• Log is a history of all changes to the state.
• Log + old state gives new state
• Log + new state gives old state (not in this picture)
• Log is a sequential file.
• Complete log is the complete history
• Current state is just a "cache" of the log records.
[Figure: the old-master/new-master cycle. Each night's batch run applies that day's transactions (Monday, Tuesday, Wednesday, ...) to the previous master file to produce the next day's master; superseded masters and transaction files go to the archive.]
3. How Log is Used
• Recovery from faults
A redundant copy of the state and transitions
• Security audits:
Who did what to whom.
Often too low-level for this.
• Performance Monitor & Accounting:
But only records changes (not reads).
• ISSUES: Who should be allowed to read the log?
It is a security hole.
Must authorize access on a per-record basis.
4. The Log Manager in the Scheme of Things
Interesting thing is the cycle:
Need log to recover archive to recover log.
Break the cycle with a bootstrap file.
[Figure: the log manager in the scheme of things. SQL and other resource managers, the transaction manager, lock manager, and buffer manager all call the log manager; the log manager, archive manager, and media manager sit on the file manager, file system, and operating system.]
5. Log Is a Sequential File
Encapsulation of the log: it is a shared resource.
Startup: Log manager holds startup info for all others.
Careful writes: Log manager provides a
• High-performance
• Very reliable
• Semi-infinite
• Archived
sequential file.
Some RMs keep private logs anyway.
(Notably PORTABLE DB systems.)
Then the user or system has to manage multiple logs.
6. The Log Table
Log table is a sequential set (relation).
Log Records have standard part and then a log body.
Often want to query the table via one attribute or
another: RMID, TRID, timestamp, ...
create domain LSN unsigned integer(64); -- log sequence number (file #, rba)
create domain RMID unsigned integer; -- resource manager identifier
create domain TRID char(12); -- transaction identifier
create table log_table (
lsn LSN, -- the record’s log sequence number
prev_lsn LSN, -- the lsn of the previous record in log
timestamp TIMESTAMP, -- time log record was created
resource_manager RMID, -- resource mgr that wrote this record
trid TRID, -- id of transaction that wrote this record
tran_prev_lsn LSN, -- prev log record of this transaction (or 0)
body varchar, -- log data: rm understands it
primary key (lsn), -- lsn is primary key
foreign key (prev_lsn) -- previous log record in this table
references log_table(lsn), --
foreign key (tran_prev_lsn) -- transaction's prev log rec also in table
references log_table(lsn) --
) entry sequenced; -- inserts go at end of file
7. Log is complete history
Log anchor points at chain of each transaction.
May maintain other chains.
Log records map to sequence of N-plexed files
Old files are archived.
Eventually, archive files are discarded (weeks, months, never)
[Figure: the log table (lsn, prev_lsn, resource_mgr, trid, tran_prev_lsn, body) mapped onto a sequence of duplexed A and B files; old files go to the archive. The log anchor records trid, max_lsn, min_lsn, ...]
8. The Log LSN
Each log record has a logical sequence number.
This number (LSN for Log Sequence Number) plays a
key role in many algorithms.
Key property MONOTONICITY:
If action A happened after action B then
LSN(A) > LSN(B).
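The monotonicity property is easy to state in code. Below is a minimal sketch, assuming an LSN is the (file number, relative byte address) pair used in the log_table declaration; the names lsn_t and lsn_cmp are our own illustration, not part of the book's interface.

```c
#include <assert.h>
#include <stdint.h>

/* An LSN as a (file number, relative byte address) pair, per the
   log_table slide.  lsn_t and lsn_cmp are illustrative names. */
typedef struct { uint32_t file; uint64_t rba; } lsn_t;

/* Compare like strcmp: <0, 0, >0.  Files are numbered in order and the
   rba only grows within a file, so if action A happened after action B
   then lsn_cmp(LSN(A), LSN(B)) > 0: the monotonicity property. */
int lsn_cmp(lsn_t a, lsn_t b) {
    if (a.file != b.file) return a.file < b.file ? -1 : 1;
    if (a.rba  != b.rba)  return a.rba  < b.rba  ? -1 : 1;
    return 0;
}
```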
9. Reading The Log
long log_read_lsn( LSN lsn, /* lsn of record to be read */
log_record_header header, /* header fields of record to be read */
long offset, /* offset into body to start read */
pointer buffer, /* buffer to receive log data */
long n); /* length of buffer */
LSN log_max_lsn(void); /* returns the current maximum lsn of the log table.*/
Read with C (see next slide) or SQL:
long sql_count( RMID rmid) /* count log records written by this rmid */
{ long rec_count; /* count of records */
exec sql SELECT count (*) /* ask sql to scan log counting records */
INTO :rec_count /* written by the calling resource mgr and */
FROM log_table /* place count in the rec_count */
WHERE resource_manager = :rmid; /* */
return rec_count; /* return the answer. */
};
10. Reading the Log: SQL is easier than C
long c_count( RMID rmid)/* count log records written by this rmid */
{ log_record_header header; /* structure to receive log record header */
LSN lsn; /* log sequence number of next log rec */
char buffer[1];/* null buffer to receive log record body. */
long rec_count = 0; /* count of records */
int n = 1; /* size of log body returned */
if (!log_open(READ)) panic(); /* open the log (authorization check)*/
lsn = log_max_lsn( ); /* get most recent lsn */
while (lsn != NullLSN) /* scan backward through the log */
{ n = log_read_lsn( lsn, /* lsn of record to be read */
header, /* log record header fields */
0L, &buffer, 1L );/* log rec body ignored. */
if (header.rmid == rmid) /* if record written by this RMID then */
rec_count = rec_count + 1; /* increment count */
lsn = header.prev_lsn; /* go to previous LSN. */
}; /* loop over LSNs */
logtable_close( ); /* close log table */
return rec_count; /* return the answer. */
}; /* */
11. Writing The Log
Add a log record; the log manager fills in the header.
LSN log_insert( char * buffer, long n);
/* log body is buffer[0..n-1] */
Force log up to a certain LSN to persistent storage:
LSN log_flush( LSN lsn, Boolean lazy); /**/
(lazy waits for a batch write or timeout == boxcar)
Note: many real interfaces allow some of:
empty buffer: to allow RM to fill it in (avoids data copies)
incremental copy: build the "buffer" in steps.
gather: take log data from many buffers.
Few offer SQL access to the log.
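A sketch of how a resource manager uses this interface at commit: insert the commit record, then force the log up to its LSN before declaring the transaction durable. Everything here is a toy stand-in for the real duplexed-disk implementation: the in-memory record array, the simplified log_flush without the lazy flag, and the name commit_transaction are all our own.

```c
#include <assert.h>
#include <string.h>

#define MAXREC 100
typedef long LSN;

static char log_body[MAXREC][32]; /* toy log table: record bodies      */
static LSN  end_lsn     = 0;      /* lsn the next insert will get      */
static LSN  durable_lsn = -1;     /* records up to here are "on disk"  */

LSN log_insert(const char *buffer, long n) { /* add record, return lsn */
    memcpy(log_body[end_lsn], buffer, (size_t)n);
    return end_lsn++;
}
void log_flush(LSN lsn) {                    /* force log up to lsn    */
    if (lsn > durable_lsn) durable_lsn = lsn;
}
/* Commit path: write the commit record, then force the log to it. */
LSN commit_transaction(void) {
    LSN lsn = log_insert("commit", 7);
    log_flush(lsn);
    return lsn;
}
```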
12. Summary Of Log Structure And Verbs
Operations: Open/Close
Read(LSN),
Insert(body),
Flush(LSN)
SQL read operations.
[Figure: log pages in the buffer pool mapped to the duplexed A and B files on durable storage; each log record has a header and body, each log page a page header. Markers show the end of the durable log, the current end of log, and the pages written in the next write.]
13. Log Anchor Logging and Locking
Log records never updated: only inserted and read.
So no locks needed on log.
Semaphore (or something) needed on "end" of log
to manage space/growth/LSN for inserts
typedef struct {
filename tablename; /* name of log table */
struct log_files files; /* A & B file prefix names & active file # */
xsemaphore lock; /* semaphore regulates log write */
LSN prev_lsn; /* LSN of most recent write */
LSN lsn; /* LSN of next record */
LSN durable_lsn; /* max lsn in durable storage */
LSN TM_anchor_lsn; /* lsn of trans mgr's last ckpt */
struct { /* array of open log parts */
long partno; /* partition number */
int os_fnum; /* operating system file # */
} part [MAXOPENS]; /* */
} log_anchor ; /* */
14. Making Optimistic Log Reads Work
Log is duplexed.
Log manager reads only one copy of the page.
What if the "other" copy has more data?
Trick:
read BOTH copies of FIRST and LAST page in log.
Other pages have "full" flag and a timestamp.
IF not full or timestamp < prev_timestamp THEN
read other page and take highest timestamp
Torn log pages
Log page consists of disk sectors (512B).
Write may only write some sectors.
How to detect missing fragments?
1. Checksum?
2. Byte stuffing: stuff a “parity” byte on each page
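A sketch of option 2, byte stuffing, under illustrative sizes: the last byte of every 512-byte sector carries the page's write number, so a write that only reached some sectors leaves mismatched stamps. A full implementation must also save, in the page header, the data bytes the stamps displace; that bookkeeping is elided here.

```c
#include <assert.h>

#define SECTOR 512
#define SECTORS_PER_PAGE 8
#define PAGE (SECTOR * SECTORS_PER_PAGE)

/* Stamp the last byte of each sector with this write's number. */
void stamp_page(unsigned char page[PAGE], unsigned char write_no) {
    for (int s = 0; s < SECTORS_PER_PAGE; s++)
        page[s * SECTOR + SECTOR - 1] = write_no;
}
/* A torn write shows up as sectors carrying different write numbers. */
int page_is_torn(const unsigned char page[PAGE]) {
    for (int s = 1; s < SECTORS_PER_PAGE; s++)
        if (page[s * SECTOR + SECTOR - 1] != page[SECTOR - 1])
            return 1;   /* some sectors are from an older write */
    return 0;
}
```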
15. Log Insert
Log semaphore covers
Incrementing LSN
Finding the log end
filling in the page(s)
allocating space on a page, perhaps allocating new pages.
LSN log_insert( char * buffer, long n) /* insert a log record with body buffer[0..n]*/
/* Acquire the log lock (an exclusive semaphore on the log) */
Xsem_get(&log_anchor.lock); /* lock the log end in exclusive mode */
lsn = log_anchor.lsn; /* make a copy of the record’s lsn. */
/* find page and allocate space in it. */
/* fill in log record header & body */
/* update the anchors */
log_anchor.prev_lsn = lsn; /* log anchor lsn points past this record */
log_anchor.lsn.rba = log_anchor.lsn.rba + rec_len; /* */
Xsem_give(&log_anchor.lock); /* unlock the log end */
return lsn; }; /* return lsn of record just inserted */
16. Log Write Demon
Log Semaphore can be a hotspot so: No IO under semaphore
Allocation (OS requests), and Archiving is done in advance.
Flush to persistent storage (disc) is done asynchronously.
Demons driven by timers and by events (requests)
Demons need not touch end-of-log semaphore
[Figure: application programs and resource managers call the log code, which shares log data in memory and on disc with two daemons: one flushes (carefully writes) log pages as needed, the other allocates new log files in advance.]
17. Careful Writes
If partial pages may be written then
subsequent write may invalidate previous write.
Standard technique:
Serial Writes: write one page then write the second page.
Problem: ~ 1/2 disc bandwidth, 2x delay.
Ping-Pong technique:
Never overwrite the good page: ping-pong between pages i and i+1.
When complete, ensure that page i has the final data.
Never worse than serial write, generally 2x better.
Also note the careful techniques for optimistic reads and torn pages.
[Figure: disc pages i and i+1; new log data is ping-ponged between them, so parallel writes never overwrite the last good page.]
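The ping-pong rule can be sketched as follows, with slot contents and a version counter modeled in memory for illustration: the growing last page of the log is written alternately to slots i and i+1, so a failed write can never destroy the previous good copy, and restart takes whichever slot carries the newer version.

```c
#include <assert.h>

typedef struct { int version; int nbytes; } slot_t;
static slot_t slot[2];        /* disc pages i and i+1           */
static int    next_slot = 0;  /* which slot the next write uses */
static int    version   = 0;  /* grows with every rewrite       */

void pingpong_write(int nbytes) {      /* write the (partial) last page */
    slot[next_slot].version = ++version;
    slot[next_slot].nbytes  = nbytes;
    next_slot = 1 - next_slot;         /* alternate: never overwrite the
                                          most recent good copy         */
}
int newest_slot(void) {                /* at restart, take the higher version */
    return slot[0].version >= slot[1].version ? 0 : 1;
}
```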
18. Group Commit (Boxcaring)
Batch processing of log writes.
If receive 1,000 log force requests/second
why not just execute 50 of them?
Response time will be the same (~20ms).
IOs will be 20x fewer
CPU will be ~ 10x smaller (10x fewer dispatches, 20x fewer OS IO).
Without it, systems are limited to about
50 tps with no ping-pong,
100 tps with ping-pong.
With it, systems are limited only by disc bandwidth: >>10k tps.
Group commit threshold can be set automatically.
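The boxcar idea above can be sketched as bookkeeping: many transactions append commit records and ask for a force, but one physical write of everything up to the current end of log satisfies all of them at once. The names and counters here are illustrative, not the book's interface.

```c
#include <assert.h>

static long end_lsn     = 0;   /* lsn the next append will get        */
static long durable_lsn = -1;  /* all records up to here are durable  */
static long writes      = 0;   /* count of physical log writes        */

long append(void) { return end_lsn++; } /* each txn appends its commit rec */

void group_flush(void) {       /* one boxcar write per timer tick or batch */
    if (durable_lsn < end_lsn - 1) { durable_lsn = end_lsn - 1; writes++; }
}
int is_durable(long lsn) { return lsn <= durable_lsn; }
```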
19. WADS: Giving the Log Disc Zero Latency
Log disc is dedicated, so only has rotational latency.
Reserve some cylinders on the disc as scratch.
For each write:
Write at current position on next track (zero latency).
When have a full-track (or two) of log data
consolidate the write in ram
do a single LARGE write (100KB = 1 rotation) to the log.
cost of this is seek + rotation ~ 20ms.
This reserved area is called the Write Ahead Data Set (WADS).
At restart:
read cylinders
gather recent log data
rewrite end of log.
RAID Write Cache makes this obsolete (if it works).
20. Log: Normal Use
Transaction UNDO During Normal Operation
Transaction log anchor: needed during normal operation
Points to most recent log rec of that transaction.
Follow the transaction prev_lsn chain.
EASY!
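The chain walk above can be sketched directly, assuming the tran_prev_lsn convention from the log_table declaration (0 marks a transaction's first record); the array stands in for the log table, and the undo action itself is elided.

```c
#include <assert.h>

typedef struct { long tran_prev_lsn; } log_rec;
static log_rec log_table[100];    /* stand-in for the log table */

/* Follow the transaction's chain from its most recent record (found via
   the transaction's log anchor) back to 0, "undoing" each record on the
   way.  Returns the number of records visited. */
int undo_transaction(long max_lsn) {
    int undone = 0;
    for (long lsn = max_lsn; lsn != 0; lsn = log_table[lsn].tran_prev_lsn)
        undone++;                 /* a real RM applies the undo here */
    return undone;
}
```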
21. The Log Anchor: Where It All Starts
REDO/UNDO at System / RM Restart.
Need to bootstrap the most recent log state.
Log manager is the first to restart
Helps Transaction Manager recover
Transaction manager helps Resource managers recover.
Alternate design (each RM has its own log).
All this depends on rebuilding the log anchor.
[Figure: the log anchor points into the log at the transaction manager checkpoint record and the resource manager checkpoint records, which in turn point back to the previous transaction manager checkpoint record.]
22. Preparing For Restart: Careful Write of Log Anchor
Use the "standard" careful write techniques:
Put the anchor in a special well-known place(s)
Ping-Pong to 2 or more copies
Timestamp each copy
N-plex the copies on devices with independent failures.
Align copies so that writes are "atomic"
Accept most recent copy on pessimistic reads.
Now TM and RMs can bootstrap:
their anchors are in the log.
23. Finding the End of the Log
Find the anchor
If using WADS, go to the WADS area and write log end.
else scan forward from the most recent log-anchor lsn:
Read optimistically over all full pages.
At a half-full or bad page, read pessimistically.
Now have the end of log.
Finish any half-finished record at the end of the log and give it to the TM.
[Figure: scanning pages forward to the end of log; the scan stops at a half-finished record or an invalid page.]
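The scan can be sketched with a page-status array standing in for the duplexed files; the page states and starting point are illustrative.

```c
#include <assert.h>

enum page_state { FULL, HALF_FULL, INVALID };
static enum page_state pages[8];    /* stand-in for the log's pages */

/* Scan forward from the page named by the log anchor: optimistic reads
   suffice while pages are full; the first half-full or invalid page is
   where the end of log lives and must be read pessimistically. */
int find_log_end(int anchor_page, int npages) {
    int p = anchor_page;
    while (p < npages && pages[p] == FULL)
        p++;
    return p;
}
```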
24. Archiving The Log And "Old" Transactions
What if transaction/RM low water mark is 1-month old?
Abort?
Copy aside:
copy the undo/redo log records to a side file
Copy forward:
copy the undo/redo log records forward in the file.
Dynamic log:
copy undo records aside (so can online-undo if needed).
All advance the low water mark.
25. Archiving the Log Online
[Figure: staggered allocation of log tables on secondary storage; log partitions 1, 2, 3 are written in rotation and copied to the archive while the log keeps growing.]
26. The Safety Spectrum
Just UNDO
transactional storage (no durable log)
Just Online Restart:
keep simplexed durable log.
Online plus Off-line Archive (no single point of failure):
periodic copies of data
duplex log
Electronic vaulting:
archive copies and duplexing is done to remote site.
via fast communications links (or Federal Express).
27. Multiple Logs?
Transaction Manager has a log (DECdtm, MS-DTC,…)
Transaction Monitor has a log (CICS, Tuxedo, ACMS,...)
Each DB instance (3 Oracle, 2 Informix, 4 Rdb) has a log.
Some have 3 logs: UNDO, REDO, SNAPSHOT.
Cons
Lots of tapes/files.
Lots of IOs at commit
Lots of things to break.
Pros:
Portable
Performance (in the 1 RM case)
You decide
28. Client/Server Logging
One server design (can be process pair)
Well known log server in the net.
Client sends a BATCH of log records to the server.
Gets back a LSN
Uses "local" LSNs for his objects.
Log servers can be N-plexed processes.
Multi-server design
Client forms a quorum (majority of servers).
Client sends log batch to all, gets back N-LSNs.
If less than majority, client must poll ALL N servers
Servers synchronize their "logical" logs as "sum" of
physical logs (need a majority).
29. Summary
• Log is a sequential file
• Contains entire history of DB
• Many tricks to write it efficiently and
carefully
• Many tricks to archive and recover it