ADS_Lec1_Linear_lists_Stacks

INFORMATION STRUCTURES
LINEAR LIST PROCESSING
HANDLING MULTIPLE STACKS
GARWICK'S REPACKING ALGORITHM

Transcript

  • 1. ADVANCED DATA STRUCTURES: SEQUENTIAL ALLOCATION. Sub code: 12CSE102. Hemanth Kumar G, Assistant Professor, Department of CSE, NMAMIT, Nitte. http://hemanthglabs.wordpress.in http://veda-vijnana.blogspot.in
  • 2. INFORMATION STRUCTURES
  • 3. Introduction • Programs operate on tables of information. o They involve structural relations between the data elements. o Queries about the data. • Kinds of structures: o linear list of elements, o 2D array (matrix/grid), o nD array, o tree, o multi-link structures like those in the human brain. • Static & dynamic properties of different structures. • Storage allocation & representation of structured data. • Efficient algorithms for creating, altering, accessing, & destroying structural information. • Applications.
  • 4. List Processing • A number of systems, e.g. LISP, work with structures called lists. • Pros & cons of list processing. • MIX computer: the illustration model for information structures. • Information in a table is a set of nodes (records, entities, beads). o Item / element. o A node is one or more consecutive words of computer memory, divided into named parts called fields. o E.g., playing cards. • Address(node) = link, pointer, reference = memory location of the node's first word. • Usually relative addressing is used; we keep it simple with absolute addresses. o Contents(node): numbers, alphabetic characters, links, etc.
  • 5. Case Study: Solitaire game o TAG = 1 => card is face down, 0 => face up o SUIT = 1: clubs, 2: diamonds, 3: hearts, 4: spades o RANK = 1: Ace, 2: deuce, . . . , 13: King o NEXT = link to the card below this one in the pile o TITLE = 5-character alphabetic name of this card. • GND: the null link. • TOP: a link variable (pointer variable).
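The card diagram on this slide is not captured in the transcript. As a rough C rendering of the node just described (the lecture models nodes as MIX words with named fields; this struct is only an analogy, and the identifiers are my own):

```c
/* Hypothetical C sketch of the card node described on the slide. */
struct card {
    int tag;            /* 1 = face down, 0 = face up                     */
    int suit;           /* 1 clubs, 2 diamonds, 3 hearts, 4 spades        */
    int rank;           /* 1 = Ace, 2 = deuce, ..., 13 = King             */
    struct card *next;  /* link to the card below this one; NULL = ground */
    char title[6];      /* 5-character alphabetic name + '\0'             */
};

struct card *top = NULL;  /* TOP: link variable pointing to the pile */
```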
  • 6. Case Study: Solitaire game • Naming the fields. • Algorithm for placing a new card face up on top of the pile, assuming that NEWCARD is a link variable pointing to the new card. • Algorithm to count the number of cards in the pile.
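The two algorithms named on this slide are not reproduced in the transcript. A minimal C sketch, reusing the struct card and top variable assumed above:

```c
/* Place NEWCARD face up on top of the pile. */
void place_on_top(struct card **top, struct card *newcard) {
    newcard->tag  = 0;       /* face up                  */
    newcard->next = *top;    /* old top card lies below  */
    *top = newcard;          /* new card becomes the top */
}

/* Count the number of cards in the pile by following the links. */
int count_cards(const struct card *top) {
    int n = 0;
    for (const struct card *p = top; p != NULL; p = p->next)
        n++;
    return n;
}
```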
  • 7. Linear Lists
  • 8. Linear Lists
  • 9. Linear Lists • Stacks: o Aliases: push-down lists, reversion storages, cellars, nesting stores, piles, LIFO lists, yo-yo lists. o Removal: the youngest item. • Queues: o Aliases: circular stores, FIFO lists; • "shelf" for output-restricted deques, • "scroll" or "roll" for input-restricted deques. o Removal: the oldest item.
  • 10. Exercise • Idea: go through a set of data & keep a list of exceptional conditions or things to do later; after we're done with the original set, we can then do the rest of the processing by coming back to the list, removing entries until it becomes empty. • Which information structure would you suggest? Substantiate. *** Problems teach the Philosophy of Life ***
  • 11. 3 important classes of linear lists • Notation: A <= x (insert x onto list A); x <= A (delete the top item of A into x); top(A) (the top element of A).
  • 12. Exercises
  • 13. Sequential Allocation • In general
  • 14. General Representation
  • 15. Common action Specs
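The action specifications on this slide are not captured in the transcript. For orientation, a minimal sketch of insertion and deletion on one sequentially allocated stack, with OVERFLOW and UNDERFLOW checks; the names V, T, M and the 0-based indexing are my assumptions, not the slide's:

```c
#include <stdio.h>
#include <stdlib.h>

#define M 100        /* capacity of the sequentially allocated vector */

int V[M];            /* vector holding the stack                 */
int T = -1;          /* T indexes the top item; -1 means empty   */

/* Insert (push) x on top of the stack; report OVERFLOW if the table is full. */
void push(int x) {
    if (T == M - 1) { fprintf(stderr, "OVERFLOW\n"); exit(1); }
    V[++T] = x;
}

/* Delete (pop) the top item into *x; UNDERFLOW is reported, not fatal. */
int pop(int *x) {
    if (T == -1) return 0;   /* UNDERFLOW: stack is empty */
    *x = V[T--];
    return 1;
}
```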
  • 16. Exercises • 3. What really are the OVERFLOW and UNDERFLOW conditions? What are their effects? What do we do when they occur? Do you hate to give up on OVERFLOW?
  • 17. Excess or Deficiency of Items • What to do on UNDERFLOW or OVERFLOW? o Underflow: an attempt to remove a nonexistent item. Not an error. o Overflow: an error. The table is full, but more information is waiting to be put in. • Report that the program cannot go on because its storage capacity has been exceeded, & terminate the program. • What if a program has multiple stacks of varying size? o Don't impose a maximum size on each stack, since the size is usually unpredictable; and even if a maximum size has been determined for each stack, we will rarely find all stacks simultaneously filling their maximum capacity. o When there are just two variable-size lists, they can coexist together very nicely if we let them grow toward each other.
  • 18. Variable sized lists • There is no way to store three or more variable-size sequential lists in memory in such a way that: a. OVERFLOW occurs only when the total size of all the lists exceeds the total space, and b. each list has a fixed location for its "bottom" element. • Special case: each of the variable-size lists is a stack. o Since only the top element is relevant at any time, we can proceed as before. o What if you have n stacks?
  • 19. Multiple Stacks • It is not unusual to encounter applications in which several stacks of variable size are involved. • In this case, using a separate vector to store each individual stack is not practical, because of the following reasons: o We will have to allocate enough memory for the anticipated maximum size of each stack. o It is unlikely that all the stacks will be full simultaneously, yet they cannot share space when one of them overflows. • To make efficient use of our resources, we must find a way to store these multiple stacks in such a way as to allow the sharing of memory locations among them. • The solution lies in storing the stacks in a common sequential set of nodes. • We shall represent this set as a vector V of size m, which can store n stacks.
  • 20. Multiple Stacks • Initially, the stacks may be given the same number of nodes; the m nodes can be divided more or less equally among them. • When overflow occurs in one of the stacks, we can then preempt some available nodes from some other stack by reallocating memory. • We will consider the case for o n = 2 (two stacks sharing a vector V) and o n >= 3 (three or more stacks sharing a vector V) separately, because stack behavior is quite different in each case.
  • 21. Two Stacks Sharing a Sequentially allocated Vector • Fig. 2 shows two stacks whose bottoms are anchored at either end of the vector V and which grow toward each other. • We can see that overflow will not occur until both stacks use up all the allocated space.
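A minimal sketch of the two-stack arrangement just described: bottoms anchored at opposite ends of V, tops growing toward each other, and overflow only when the two tops meet. The 0-based indexing and int items are my choices, not the slide's:

```c
#define M 100
int V[M];
int T1 = -1;   /* top of stack 1, grows upward from index 0     */
int T2 = M;    /* top of stack 2, grows downward from index M-1 */

int push1(int x) { if (T1 + 1 == T2) return 0; V[++T1] = x; return 1; }  /* overflow only when tops meet */
int push2(int x) { if (T2 - 1 == T1) return 0; V[--T2] = x; return 1; }
int pop1(int *x) { if (T1 == -1) return 0; *x = V[T1--]; return 1; }     /* underflow: stack 1 empty */
int pop2(int *x) { if (T2 == M)  return 0; *x = V[T2++]; return 1; }     /* underflow: stack 2 empty */
```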
  • 22. Three or More Stacks Sharing a Sequentially Allocated Vector • With three or more stacks coexisting in V with bottoms anchored to a fixed position, overflow may occur in one of the stacks while there are still unused nodes. • If we want to maximize the space such that all nodes are used before overflow occurs, then we should allow the bottoms of the stacks to change position. This brings us to the problem of memory allocation.
  • 23. Memory Allocation problem in multiple stacks • Initially, we could set up the stack boundaries according to a policy that divides the m nodes more or less equally among the n stacks, o where B( i ) is the bottom of the ith stack, o T( i ) is the top of the ith stack, o m is the size of the vector, and o n is the number of stacks.
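The boundary formula itself was shown as an image and is not in the transcript. A plausible sketch of such an equal-division setup follows; the index conventions (stack i occupies V[B[i] .. T[i]-1], is empty when T[i] == B[i], and B[n+1] == m marks the end of V) and the example sizes are my own assumptions:

```c
#define MAXSTK 32
int V[1000], B[MAXSTK + 2], T[MAXSTK + 2];
int m = 1000, n = 4;                /* example sizes, not from the lecture */

/* Divide the m nodes of V roughly equally among the n stacks. */
void init_boundaries(void) {
    for (int i = 1; i <= n + 1; i++)
        B[i] = ((i - 1) * m) / n;   /* roughly equal slices; B[n+1] == m */
    for (int i = 1; i <= n; i++)
        T[i] = B[i];                /* each stack starts empty           */
}
```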
  • 24. Multiple Stack Push & Pop
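The push and pop procedures on this slide are not captured in the transcript. A minimal sketch, reusing the B/T arrays and conventions assumed above; MSTACKFULL is the overflow handler discussed on the next slide:

```c
void mstackfull(int i);      /* forward reference; a sketch appears below */

/* Push x onto stack i (1 <= i <= n). */
void mspush(int i, int x) {
    if (T[i] == B[i + 1])    /* stack i has hit the next stack's boundary */
        mstackfull(i);       /* try to make room; stops if none exists    */
    V[T[i]++] = x;
}

/* Pop the top of stack i into *x; returns 0 on underflow. */
int mspop(int i, int *x) {
    if (T[i] == B[i]) return 0;   /* underflow: stack i is empty */
    *x = V[--T[i]];
    return 1;
}
```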
  • 25. Multiple Stack Overflows • We now consider the problem of memory reallocation when a certain stack, say stack i, overflows. • Procedure MSPUSH calls MSTACKFULL in such an event. • How does MSTACKFULL look for additional space to give to stack i? • One simple method of obtaining more space for stack i is to look for the nearest stack above stack i which has unused nodes. • If such a stack can be found, then we shift it, along with the stacks in between, up by one node. • If no free nodes can be found above stack i, then we search for free nodes below, starting with the stack nearest to stack i. • If we find one, then we shift the stacks above it, up to and including stack i, down by one node. • If no free nodes can be found either above or below stack i, then all the allocated space is in use and we stop looking; the overflow cannot be avoided.
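A sketch of this simple one-node shifting method (not Garwick's algorithm), reusing the V/B/T/n globals assumed above. A single memmove suffices in each direction because every stack strictly between stack i and the nearest stack with a free node is full, so the data being shifted is one contiguous block:

```c
#include <string.h>

void mstackfull(int i) {
    /* Look above stack i for the nearest stack j with a free node. */
    for (int j = i + 1; j <= n; j++) {
        if (T[j] < B[j + 1]) {
            /* Shift stacks i+1 .. j up by one node. */
            memmove(&V[B[i + 1] + 1], &V[B[i + 1]],
                    (T[j] - B[i + 1]) * sizeof(int));
            for (int k = i + 1; k <= j; k++) { B[k]++; T[k]++; }
            return;
        }
    }
    /* No room above: look below stack i, nearest stack first. */
    for (int j = i - 1; j >= 1; j--) {
        if (T[j] < B[j + 1]) {
            /* Shift stacks j+1 .. i down by one node. */
            memmove(&V[B[j + 1] - 1], &V[B[j + 1]],
                    (T[i] - B[j + 1]) * sizeof(int));
            for (int k = j + 1; k <= i; k++) { B[k]--; T[k]--; }
            return;
        }
    }
    fprintf(stderr, "no more available nodes\n");   /* overflow cannot be avoided */
    exit(1);
}
```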
  • 26. Multiple Stacks
  • 27. Overflow w.r.t Stack i
  • 28. Garwick's Algorithm 1. Strip all the stacks of unused cells and consider all of the unused cells as comprising the available or free space. 2. Reallocate 1-10% of the available space equally among the stacks. 3. Reallocate the remaining available space among the stacks in proportion to recent growth, - where recent growth is measured as the difference T[i] - oldT[i], - where oldT[i] is the value of T[i] at the end of the last reallocation. - A negative (positive) difference means that stack i actually decreased (increased) in size since the last reallocation.
  • 29. Garwick’s Algorithm
  • 30. Garwick’s Algorithm
  • 31. Garwick's algorithm: procedure MSTACKFULL for >= 3 stacks [C Version]
  • 32. /* Test if there are still available nodes. if count < 0 then [ output “no more available nodes” stop ] Calculate allocation factors according to the following distribution policy: - 10% of unused nodes will be distributed equally among the n stacks; - the remaining 90% will be distributed in proportion to the amount of increase in stack size since last reallocation. */
  • 33. Garwick’s Algorithm [C Version]
  • 34. Garwick’s Algorithm [C Version]
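The C code on the "[C Version]" slides is not reproduced in the transcript. What follows is only a sketch of a Garwick-style repacking that implements the three steps and the 10%/90% policy quoted above; it reuses the V/B/T/m/n globals and the "stack j occupies V[B[j] .. T[j]-1]" convention of the earlier sketches, and the OLDT array, helper names, and rounding details are my assumptions, not the lecture's code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int OLDT[MAXSTK + 2];     /* T[j] as of the previous reallocation
                             (also set to T[j] at initialization)   */

void garwick_repack(int i)            /* stack i has just overflowed */
{
    int growth[MAXSTK + 2], newB[MAXSTK + 2];
    int freecells = m, totalgrowth = 0;

    /* Step 1: every cell not occupied by some stack is free space;
       one cell is reserved for the push that triggered this call.   */
    for (int j = 1; j <= n; j++)
        freecells -= T[j] - B[j];
    freecells -= 1;
    if (freecells < 0) {
        fprintf(stderr, "no more available nodes\n");
        exit(1);
    }

    /* Recent growth since the last reallocation; shrinking stacks get
       no proportional share, and the pending push counts for stack i. */
    for (int j = 1; j <= n; j++) {
        growth[j] = T[j] - OLDT[j];
        if (growth[j] < 0) growth[j] = 0;
        if (j == i) growth[j] += 1;
        totalgrowth += growth[j];
    }

    /* Steps 2 and 3: new boundaries = cells in use + 10% of the free
       space shared equally + 90% shared in proportion to growth.     */
    double cum = 0.0;
    newB[1] = 0;
    for (int j = 1; j <= n; j++) {
        double share = 0.10 * freecells / n
                     + 0.90 * freecells * growth[j] / totalgrowth;
        if (j == i) share += 1.0;        /* hand back the reserved cell */
        cum += (T[j] - B[j]) + share;
        newB[j + 1] = (int)cum;
    }
    newB[n + 1] = m;

    /* Move the stacks into their new regions without overwriting data:
       stacks that move down first (left to right), then stacks that
       move up (right to left).                                         */
    for (int j = 1; j <= n; j++)
        if (newB[j] < B[j])
            memmove(&V[newB[j]], &V[B[j]], (T[j] - B[j]) * sizeof(int));
    for (int j = n; j >= 1; j--)
        if (newB[j] > B[j])
            memmove(&V[newB[j]], &V[B[j]], (T[j] - B[j]) * sizeof(int));

    for (int j = 1; j <= n; j++) {
        T[j] = newB[j] + (T[j] - B[j]);
        B[j] = newB[j];
        OLDT[j] = T[j];
    }
}
```

In MSPUSH, a call to this routine could replace the one-node shifting sketch given earlier: instead of stealing a single cell from a neighbour, all free cells are repacked at once according to recent growth.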
  • 35. Exercises
  • 36. Reference: "The Art of Computer Programming", Volume 1: Fundamental Algorithms, by Donald E. Knuth, Stanford University. Pearson Education, © 1997.
