This Wednesday Wisdom deck goes deep into memory management in Go. Discover how Go's automatic memory allocation, stack, heap, and garbage collection work together. Learn strategies to optimize memory usage, reduce garbage, and restructure structs for efficiency, along with Go's unique memory model and its emphasis on stack allocation for performance.
Wednesday is a leading Product Engineering Agency. We have worked with over 50 Global Brands.
Due to our product thinking expertise, Wednesday has been the agency of choice for fast-growing startups. Over 10% of India's Unicorns are our customers.
We are known for our expertise in:
1. Data Engineering: Using DataOps principles, we build data pipelines that are cost-effective and performant and allow you to make strategic decisions.
2. Applied AI: We use large language models and your proprietary data to build data-centric intelligent apps for your customers.
3. App Development & Modernization: We use our expertise in strategy, product development & design to build web, mobile, TV & IoT Apps.
We offer our expertise via the following services:
- Launch: Idea from napkin sketch to product market fit.
- Amplify: Engineering & design for companies with product market fit.
- Control: Fast-paced, tight-deadline projects suited for enterprises.
- Catalyse: Staff Augmentation
If you want to explore an opportunity to work with Wednesday, send your resume to careers@wednesday.is.
2. Content
What we'll be covering:
Memory Management
Restructure the structs
Garbage Collector
Stack
Heap
Memory Allocation
Memory Allocators
How to reduce Garbage
3. wednesday.is
Memory Management
Memory management is a method in the operating system to manage
operations between main memory and disk during process execution.
Importance of memory management: RAM is finite. If a program keeps consuming memory without freeing it, it will run out of memory and crash. Software programs therefore can't just use RAM as they like; doing so would cause other programs and processes to run out of memory. Because of this, most programming languages (including Go) provide ways to do automatic memory management.
We can achieve efficient utilisation of memory by following good memory management practices.
5. Memory Management in Go
Go is a language that supports automatic memory management: automatic memory allocation and automatic garbage collection.
• Automatic memory allocation and automatic garbage collection avoid many lurking bugs.
• Go dynamically allocates portions of memory to programs at their request, and frees memory for reuse when it is no longer needed.
6. Memory Allocations
• Static Allocation is what happens when you declare a global variable. Each global variable defines one block of space, of a fixed size. The space is allocated once, when your program is started, and is never freed.
• Automatic Allocation happens when you declare an automatic variable, such as a
function argument or a local variable. The space for an automatic variable is
allocated when the compound statement containing the declaration is entered, and
is freed when that compound statement is exited.
• Dynamic Allocation is a technique in which programs determine as they are
running where to store some information. We need dynamic allocation when the
amount of memory we need, or how long we continue to need it, depends on
factors that are not known before the program runs.
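The three allocation kinds above can be sketched in a short Go program (a minimal illustration; the variable and function names here are ours, not from the deck):

```go
package main

import "fmt"

// Static allocation: a package-level variable gets one fixed
// block of space for the whole lifetime of the program.
var counter int

// Automatic allocation: parameters and locals exist only while
// the function (the compound statement) is executing.
func double(n int) int {
	local := n * 2 // conceptually freed when double returns
	return local
}

func main() {
	counter = 1

	// Dynamic allocation: the size is only known at run time,
	// so the backing array must be allocated while running.
	size := counter + 3
	nums := make([]int, size)
	for i := range nums {
		nums[i] = double(i)
	}
	fmt.Println(counter, nums) // 1 [0 2 4 6]
}
```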
7. Memory Allocators
• TCMalloc is faster than the glibc 2.3 malloc (available as a separate library called ptmalloc2).
• ptmalloc2 takes approximately 300 nanoseconds to execute a malloc.
• TCMalloc implementation takes approximately 50 nanoseconds for the same operation pair.
As Go doesn't use malloc to get memory but asks the OS directly (via mmap), it has to implement memory allocation and deallocation on its own (as malloc does). Go's memory allocator is originally based on TCMalloc (thread-caching memory allocation).
9. Stack
It is used for static memory allocation and works just like the stack data structure.
It follows the last in, first out (LIFO) approach.
Typically, function parameters and local variables are allocated on the stack.
Nothing really moves physically when data is pushed onto or popped off a stack in memory. Only the values stored in the memory managed by the stack are changed.
This makes the process of storing and retrieving data from the stack very fast, since there is no lookup required. We can just store and retrieve data from the topmost block.
Any data that is stored on the stack has to be finite and static. This means the size of the data is known at compile time.
Memory management of the stack is simple and straightforward and is done by the OS.
10. Heap
The name heap has nothing to do with the heap data structure.
Heap is used for dynamic memory allocation.
Whereas the stack only allows allocation and deallocation at the top,
programs can allocate or deallocate memory anywhere in a heap.
The program must return memory to the stack in the opposite
order of its allocation. But the program can return memory to the
heap in any order.
This means the heap is more flexible than the stack.
Pointers, arrays, and big data structures are usually stored on the heap.
11. Stack vs Heap
Heap is slower compared to stack because the process of looking up data is more involved.
Heap can store more data than the stack.
Heap stores data with dynamic size; stack stores data with static size.
Heap is shared among threads of an application.
Heap is trickier to manage because of its dynamic nature.
Heap memory allocation isn’t as safe as stack memory allocation, because the data stored
in this space is accessible or visible to all threads.
Whenever a function is called, its variables get memory allocated on the stack. And
whenever the function call is over, the memory for the variables is de-allocated.
The heap memory allocation scheme does not provide automatic de-allocation. We need to use a garbage collector to remove the unused objects in order to use the memory efficiently.
12. Memory model, the Go way!
• Go allocates memory in two places: a global heap for dynamic allocations and a local stack for each
goroutine.
• In Go, each goroutine (a lightweight, runtime-managed thread) has its own stack. When we start a goroutine, we allocate a block of memory to be used as that goroutine's stack.
• A goroutine starts with a 2 KB stack, which can grow and shrink as needed.
• Go prefers allocation on the stack.
• Stack allocation is cheaper because it only requires two CPU instructions: one to push onto the stack for allocation, and another to release (pop) from the stack.
• Unfortunately, not all data can use memory allocated on the stack. Stack allocation requires that the lifetime and memory footprint of a variable can be determined at compile time.
• If they can't be determined, a dynamic allocation onto the heap occurs at runtime.
• The Go compiler uses a process called escape analysis to find objects whose lifetime is known at compile time and allocates them on the stack rather than in garbage-collected heap memory.
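A minimal sketch of escape analysis in action (the function names here are illustrative, not from the deck). Compiling with `go build -gcflags="-m"` prints the compiler's allocation decisions:

```go
package main

import "fmt"

// Stays on the stack: the value does not outlive the call,
// so escape analysis can keep it on the goroutine's stack.
func stackAlloc() int {
	x := 42
	return x
}

// Escapes to the heap: the returned pointer outlives the call,
// so x's lifetime can't be bounded at compile time.
// -gcflags="-m" typically reports: "moved to heap: x".
func heapAlloc() *int {
	x := 42
	return &x
}

func main() {
	fmt.Println(stackAlloc(), *heapAlloc()) // 42 42
}
```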
13. Garbage Collector
It is a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program but is no longer referenced.
Go's garbage collector is a non-generational, concurrent, tri-color mark-and-sweep garbage collector.
Go's garbage collection works in two phases: the mark phase and the sweep phase. The GC uses the tri-color algorithm to analyse the use of memory blocks. The algorithm first marks objects that are still being referenced as "alive", and in the next phase the sweep frees the memory of objects that are not alive.
14. You don't have to collect your garbage, but you can reduce your garbage.
• Restructure your structs.
• Reduce the number of long-living objects.
• Remove pointers within pointers.
• Avoid unnecessary string/byte array allocations.
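One way to avoid unnecessary string allocations, sketched below (the helper names are ours): repeated `+=` builds a fresh string on every iteration and leaves the intermediates behind as garbage, while `strings.Builder` grows a single buffer:

```go
package main

import (
	"fmt"
	"strings"
)

// concat allocates a new string on every +=, leaving each
// intermediate string behind as garbage for the collector.
func concat(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// build appends into one internal buffer, so the intermediate
// allocations (and the resulting garbage) are avoided.
func build(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"go", "pher"}
	fmt.Println(concat(parts), build(parts)) // gopher gopher
}
```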
16. Why do we restructure structs?
While reading data, a modern computer CPU's internal data registers can hold and process 64 bits at a time. This is called the word size (it is usually 32 bits or 64 bits).
When we don't align our data to fit word sizes, padding is added to properly align fields in memory, so that the next field can start at an offset that is a multiple of the word size.