http://de.slideshare.net/nicolayludwig/7-cpp-memory-representationpointerarithmeticsexercises-38510699
- The octal and hexadecimal Numeral System
- Byte Order
- Memory Representation of Arrays and Pointer Arithmetics
- Array Addressing with "Cast Contact Lenses"
- The Heap: Segmentation and "Why do Programs crash?"
- How to understand and fix Bugs
- The Teddybear Principle
Check out these exercises: http://de.slideshare.net/nicolayludwig/6-cpp-numeric-representationexercises
- Mathematical Number Systems revisited
- The Decimal Numeral System
- Numeric Representation of Integers in Software
- C/C++ Integral Conversion in Memory
- Integer Overflow in Memory and Integer Division
- Bitwise Integral Operations
- Numeric Representation of Fractional Numbers in Software and IEEE 754
- Floaty <-> Integral Conversion and Reinterpretation of Bit Patterns
The document discusses different number systems including binary, octal, decimal, and hexadecimal. It provides details on each system such as the base, digits used, applications, and how to convert between them. Binary uses only 0s and 1s and is the most fundamental system used in computing. Octal uses digits 0-7, with applications including older computer architectures. Decimal uses 0-9 and is the most common. Hexadecimal uses 0-9 and A-F, with each digit representing 4 bits, making it convenient for displaying colors and memory addresses.
The document discusses various topics related to graphs including:
1. Graph representations such as adjacency matrices and linked lists.
2. Graph terminology like vertices, edges, paths, cycles, and traversals.
3. Graph traversal algorithms like depth-first search and breadth-first search.
4. Applications of graphs in areas like transportation networks, databases, and computer networks.
5. Minimum spanning trees and algorithms to find them such as Kruskal's algorithm and Prim's algorithm.
This document defines and explains the key concepts of binary trees. It begins by defining a binary tree as a collection of nodes where each node contains data and pointers to its left and right children. It describes the root node and how trees can be empty or non-empty. It then explains terms like subtrees, successors, leaf nodes, siblings, levels, and degrees of nodes. The document also covers traversing binary trees using pre-order, in-order, and post-order algorithms and representing binary trees in memory using linked and sequential structures. It concludes with an example of using a binary tree to represent an algebraic expression.
- Bits are the smallest units of data in computing, represented as 0s and 1s. 8 bits form a byte.
- The motherboard contains the CPU, RAM, ROM, and connections for expansion cards and peripherals. RAM is used for active programs and files while ROM contains startup instructions.
- An operating system manages hardware, allows software to interface with the CPU, and provides a user interface like graphical desktops. Common functions include file management, multitasking, and coordinating input/output.
- Arrays revisited
- Value and Reference Semantics of Elements
- A Way to categorize Collections
- Indexed Collections
-- Lists
-- Basic Features and Examples
-- Size and Capacity
Here are a few things you could try to address the increased executable size and performance impact on the CPU cache:
1. Recompile the executables to only use 64-bit pointers where needed, and use 32-bit pointers elsewhere to reduce the overall size.
2. Tune compiler options so instructions and data are packed more tightly, improving cache utilization.
3. Consider using position independent code (PIC) to allow sharing of common code segments between processes to reduce duplicated code.
4. Profile the applications to identify hot spots and optimize those sections first, such as improving data locality.
5. Consider using link-time optimizations (LTO) to better optimize across compilation units.
6. Upgrade CPU/
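One concrete instance of point 4 (improving data locality) is choosing between array-of-structures and structure-of-arrays layouts; a sketch with an assumed `Particle` record, not tied to any particular application:

```cpp
#include <vector>

// Two layouts for the same data. Summing only the x fields touches every
// cache line of the AoS layout, but only the x array in the SoA layout.
struct ParticleAoS { double x, y, z, mass; };  // array of structures

struct ParticlesSoA {                          // structure of arrays
    std::vector<double> x, y, z, mass;
};

double sum_x_aos(const std::vector<ParticleAoS>& p) {
    double s = 0;
    for (const auto& q : p) s += q.x;  // strides over 32 bytes per element
    return s;
}

double sum_x_soa(const ParticlesSoA& p) {
    double s = 0;
    for (double v : p.x) s += v;       // contiguous 8-byte reads, cache friendly
    return s;
}
```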
The document discusses optimizing code and data for CPU caches through various techniques like improving data locality, reducing unnecessary memory accesses, and reusing cached data. It covers optimizing code layout, data structures, prefetching, and addressing issues like aliasing.
The document discusses various techniques for optimizing memory usage and cache performance in code. It begins by justifying the need for memory optimization given trends in CPU and memory speeds. It then provides an overview of memory hierarchies and caches. The rest of the document discusses specific techniques for optimizing data structures, prefetching, layout of data in memory, reducing aliasing, and other strategies to improve cache utilization and performance.
The document discusses dynamic memory allocation in C and some of the challenges that can arise. Specifically, it notes that malloc(1) will not return a 1 byte block due to overhead in allocation structures. This overhead includes space for the size field and a pointer to the next free block. The document also provides an example of undefined behavior that can occur when freeing a pointer that is later accessed, and offers a solution using a temporary variable. Overall, the document outlines the basics of dynamic memory allocation and some issues programmers need to be aware of.
8086 Assembler Tutorial For Beginners (Part 1), by Brooke Heidt
This document provides an introduction to assembly language programming using 8086 assembler. It discusses basic concepts like registers, memory addressing, and variables. Registers include general purpose registers like AX, BX, CX and DX as well as segment registers. Memory can be addressed using registers and offsets. Variables are declared using directives like DB and DW and can be used to simplify memory references. The MOV instruction is demonstrated for moving data between registers and memory. Arrays are also introduced using directives like DUP.
The document discusses computer architecture and the MIPS instruction set architecture. It begins by defining computer architecture as the instruction set architecture (ISA), which is the boundary between hardware and software and defines what a machine can do. It also discusses machine organization, which is how the hardware works to implement the ISA. The document then covers various aspects of the MIPS ISA, including its register-based load/store architecture, instruction formats, common operations like data movement and arithmetic, and addressing modes.
Number Systems — Decimal, Binary, Octal, and Hexadecimal
Base 10 (Decimal) — Represent any number using 10 digits [0–9]
Base 2 (Binary) — Represent any number using 2 digits [0–1]
Base 8 (Octal) — Represent any number using 8 digits [0–7]
Base 16(Hexadecimal) — Represent any number using 10 digits and 6 characters [0–9, A, B, C, D, E, F]
This document provides definitions and explanations of common computer terms like bits, bytes, binary, hexadecimal and decimal numbering systems. It also describes various components of a computer like memory, storage drives, the central processing unit, buses and ports that allow connection of peripheral devices using standards like USB, Firewire, Ethernet and more.
This document provides an overview of a course on computer fundamentals. The course aims to give students an understanding of how computers work at the assembly level and introduce assembly language programming. Topics covered include the fetch-execute cycle, memory hierarchy, CPU components like the ALU and registers, number representation, and arithmetic and logical instructions. The course outline provides context on the history of computing and early computer designs to prepare students for future operating systems courses.
1) The document is an introduction to 8086 assembly language and provides information on 8086 architecture including registers, memory access, variables, arrays, and instructions like MOV.
2) It explains the main components of a computer like the CPU, RAM, and system bus and describes the different types of registers in the 8086 including general purpose, segment, and special purpose registers.
3) The tutorial provides examples of using directives, variables, memory addressing modes, and instructions like MOV to demonstrate basic 8086 programming concepts.
RAM (random-access memory) is a type of memory that can be accessed randomly; each byte stored at a RAM chip can be accessed directly without reading through consecutive locations. RAM is organized into words of a certain number of bits that are accessed via an address. Larger RAMs can be constructed by combining smaller RAM chips through an addressing scheme. Dynamic RAM must be regularly refreshed to maintain its data but provides higher density than static RAM.
This document discusses different memory addressing modes used in microprocessors, including real mode, protected mode, and flat mode addressing. It provides details on how addresses are calculated in real mode using segment registers and offsets. Protected mode addressing allows accessing memory above 1MB by using descriptor tables to map segment selectors to base addresses and limits. Various programming models for addressing program memory and stack memory are also covered.
This document discusses how computers represent different types of data using binary numbers. It explains that all data inside a computer is stored as binary digits (bits) that represent ON and OFF switches. Various data types like characters, pictures, sound, programs and integers are represented by grouping bits into bytes. The context determines how a computer interprets each byte. Standards like ASCII, JPEG and WAV define how different data is encoded into binary format and bytes. The document also covers number systems like binary, decimal, hexadecimal and their properties.
This document provides information about a computer architecture course taught at Velammal Engineering College. The course is aimed at teaching students the basic structure and operations of a computer. It will cover topics like the ALU, pipelined execution, parallelism, memory hierarchies, and virtual memory. The course outcomes include discussing computer basics, designing an ALU, analyzing pipelining and parallel architectures, and examining memory systems. The syllabus is divided into 5 units covering basic computer structure, arithmetic, processors, parallelism, and memory/I/O systems. Textbooks and an introduction to the course are also included.
The microprocessor is a programmable device that processes binary numbers according to instructions stored in memory. It contains arithmetic, logic, and control circuits on a single silicon chip. Early processors used discrete components but were large and slow. The invention of the microchip led to much smaller and faster processors by integrating all components onto a single silicon slice. Modern microprocessors manipulate 32-bit or 64-bit words and have instruction sets that define their capabilities. The 8085 was an 8-bit microprocessor that used multiplexed address/data lines, requiring external latching to separate addresses from data.
The document discusses data formats and machine level programming on Intel processors. It describes how Intel uses words, double words, and quad words to refer to 16-bit, 32-bit, and 64-bit data types. It also discusses common data types like integers, pointers, floating point numbers, and how they are stored. The document then covers general purpose registers, addressing modes, and instructions for data movement between registers and memory like MOV, PUSH, and POP.
The document discusses various computer components including input devices, processors, memory, storage devices and output devices. It describes the features, functions and uses of keyboards, mice, microphones, touchpads, digital cameras, scanners, webcams and other input devices. It also compares these input devices based on characteristics such as resolution, speed and cost. Output devices such as monitors, printers and speakers are also described along with comparisons of their characteristics. Storage devices including hard drives, floppy drives, CDs, DVDs and magnetic tape are outlined.
1) The document presents various algorithms for efficiently transposing matrices while minimizing memory accesses and cache misses.
2) It analyzes the algorithms under different memory models: RAM, I/O, cache, and cache-oblivious. The block transpose, half/full copying, and Morton layout algorithms improve performance by reusing data blocks.
3) Experimental results on a 300MHz system show the Morton layout and half copying algorithms have the fastest runtimes due to minimizing data references, L1 misses, and TLB misses. The relative performance of algorithms depends on cache miss latency.
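The block-transpose idea from point 2 can be sketched in a few lines: processing B x B tiles keeps both the source rows and destination columns resident in cache, unlike the naive element-by-element loop (the tile size 32 here is an arbitrary illustrative default, not a value from the paper):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Blocked (tiled) transpose of an n x n row-major matrix.
void transpose_blocked(const std::vector<double>& src, std::vector<double>& dst,
                       std::size_t n, std::size_t B = 32) {
    for (std::size_t ib = 0; ib < n; ib += B)
        for (std::size_t jb = 0; jb < n; jb += B)
            // Transpose one B x B tile; min() handles the ragged edge.
            for (std::size_t i = ib; i < std::min(ib + B, n); ++i)
                for (std::size_t j = jb; j < std::min(jb + B, n); ++j)
                    dst[j * n + i] = src[i * n + j];
}
```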
This document provides an introduction to a course on computer programming and data structures. It discusses storage devices used in computers, including primary storage devices like RAM and ROM, as well as secondary storage devices like hard disks, SSDs, and USB drives. It also covers number systems used in computing, such as binary, octal, decimal, and hexadecimal. Conversion between these number systems is demonstrated through examples of division and multiplication. Homework questions are provided at the end regarding cache memory and the need for octal and hexadecimal numbers in computer systems.
(8) cpp stack automatic_memory_and_static_memory, by Nico Ludwig
Check out these exercises: http://de.slideshare.net/nicolayludwig/8-cpp-stack-automaticmemoryandstaticmemory-38510742
- Introducing CPU Registers
- Function Stack Frames and the Decrementing Stack
- Function Call Stacks, the Stack Pointer and the Base Pointer
- C/C++ Calling Conventions
- Stack Overflow, Underflow and Channelling incl. Examples
- How variable Argument Lists work with the Stack
- Static versus automatic Storage Classes
- The static Storage Class and the Data Segment
From equations to functions
Overview of polynomial functions
Creating coordinate systems for graphs of polynomial functions with Excel's line charts
Basic analysis of polynomial functions based on their graphs
From linear to quadratic systems of equations
Different ways to solve quadratic systems of equations graphically
Creating coordinate systems for quadratic graphs with Excel's line charts and solving equations graphically with them
Graphical interpretation of the solutions of standard-parabola/line combinations
Charts
Introduction to systems of linear equations with two unknowns
Algebraic and graphical solution of systems of linear equations
Creating tables of values with Excel
Creating coordinate systems and linear graphs with Excel's line charts and solving equations graphically with them
Working with large tables
Sorting and filtering
Inserting objects
Formulas and calculating with Excel, especially number and text processing
Relative and absolute cell references
Functions: SUMME(), ANZAHL(), MIN(), MAX(), MITTELWERT(), JETZT(), HEUTE(), ZUFALLSZAHL(), PI(), and SUMMEWENN()
Mathematical problems in table form
Historical development
Basic concepts and terms in Excel
Selection, data entry, and data types
Cell formatting and content formatting
This document discusses the history and evolution of graphical user interfaces (GUIs) in web browsers. It begins by explaining the concepts of vector-based and raster-based graphics. It then describes early approaches using bare HTML pages, which felt like navigation rather than a true application. Plugins were introduced to provide richer content but had problems with resources, installation, and security. Finally, the document introduces HTML5 Canvas as a solution for continuous, event-driven drawing without plugins, allowing single-page web applications.
The document discusses different approaches to drawing graphics on a computer screen, including raster-based and vector-based graphics. It focuses on raster-based drawing APIs and their evolution from native platform-specific APIs to platform-independent APIs in web browsers. Key developments included the use of browser plugins, problems with plugins, and the introduction of HTML5 Canvas which enables interactive drawing in browsers without plugins.
- Wires and Bulbs
- Batch Processing
- Terminal and Mainframe
- From the Command-Line to Killer Applications
- Vector Displays and Raster Displays
- Color Displays
- The Mouse and the Takeoff of Interactivity
- The Desktop Metaphor
- Wires and Bulbs
- Batch Processing
- Terminal and Mainframe
- From the Command-Line to Killer Applications
- Vector Displays and Raster Displays
- Color Displays
- The Mouse and the Takeoff of Interactivity
- The Desktop Metaphor
This document discusses new features in C# 4 including home-brew dynamic dispatch using the DynamicObject class. It allows implementing custom dynamic behavior by overriding methods like TryInvokeMember. The document also covers hosting scripting languages with the Dynamic Language Runtime (DLR), including IronPython, IronRuby, and IronScheme. Dynamic dispatch enables seamless collaboration and controlled isolation between .NET and DLR-based languages.
This document discusses new features in C# 4 related to dynamic coding. It describes how dynamic typing allows easier COM automation without needing interop types. Anonymous type instances can now be passed to methods using dynamic parameters. The document also discusses dynamic objects in JavaScript and how ExpandoObjects in .NET allow objects to have dynamically added and modified properties at runtime similar to dynamic objects in JavaScript.
This document discusses the Dynamic Language Runtime (DLR) and dynamic coding features in C# 4. It provides an overview of the DLR and how it allows interoperability between statically and dynamically typed languages on the .NET framework. The DLR transforms dynamic operations in C# into calls to the DLR at compile time and handles dynamic dispatch at runtime. It uses expression trees to represent operations in a language-agnostic way and caches binding results for improved performance.
This document discusses new features related to dynamic programming in C# 4. It begins by explaining why dynamic typing is an important new feature in .NET 4 due to increasing needs for interoperability. It then provides an overview of dynamic typing concepts like late binding and duck typing. The document shows how these concepts are implemented in VB for dynamic programming and how C#4 introduces dynamic typing capabilities through the new "dynamic" keyword and Dynamic Language Runtime (DLR) while still being a statically typed language. It discusses the basic syntax for using dynamics in C#4 and some restrictions.
This document summarizes new generic types and features in C# 4, including Lazy<T> for deferred initialization, tuples for ad-hoc data structures, and generic variance. Generic variance allows covariant and contravariant conversions between generic types to promote substitutability. It was enabled for interfaces and delegates in C# 4 through explicit variance declarations like "out T" and "in T". This improves type safety over arrays, which allow unsafe covariance in C#.
This document discusses LINQ (Language Integrated Query) features in C#, including introducing LINQ to Objects, basic query expressions for projection and selection, and anonymous types. It provides examples of how LINQ to Objects maps to extension methods, functional and declarative expressibility using LINQ, details about the LINQ translation process from query expressions to method calls, and examples of using anonymous types.
The document discusses LINQ and C# algorithms. It compares the C++ STL algorithm model to the .NET algorithm model using IEnumerable<T> sequences and extension methods. Key points include:
- The .NET model uses deferred execution via IEnumerable<T> sequences and iterator blocks, avoiding intermediate collections and allowing multiple operations to be chained.
- Extension methods allow algorithms to be expressed via method chaining in a declarative way rather than using loops.
- IEnumerable<T> sequences are more readable, uniform, and composable than the STL model due to chaining and deferred execution.
This document summarizes new C# 3.0 features including implicit typing, lambda expressions, and extension methods. Implicit typing allows declaring variables without an explicit type using the 'var' keyword. Lambda expressions provide a concise way to pass code as arguments using the '=>' operator. Extension methods allow adding methods to existing types without modifying them by defining methods in static classes that take the extended type as the first parameter.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
2. 2
TOC
● (7) C++ Basics
– The octal and hexadecimal Numeral System
– Byte Order
– Memory Representation of Arrays and Pointer Arithmetics
– Array Addressing with "Cast Contact Lenses"
– The Heap: Segmentation and "Why do Programs crash?"
– How to understand and fix Bugs
– The Teddybear Principle
● Sources:
– Bjarne Stroustrup, The C++ Programming Language
– Charles Petzold, Code
– Rob Williams, Computer System Architecture
– Jerry Cain, Stanford Course CS 107
3. 3
The octal Numeral System
● The idea is to compact bit patterns into fewer digits per value:
– Binary values can be written in groups of 3b, i.e. triples.
– A triple can represent 2³ = 8 values, so it can simply be written as a single digit.
– => We need a numeral system with the base of 8 to get matching digits.
● The numeral system with the base 8 is called the octal numeral system.
– A single octal digit can hold one of the symbols [0, 7].
– To represent octal digits, no extra numeric symbols need to be added.
– The mathematical notation of octal literals looks like this: 454₈ or 454oct or 454eight
● In C/C++ integer literals can be written as octal numbers with the 0-prefix.
short s = 300;
short s = 0454; // s (300)
// bit pattern of s: 0000 0001 0010 1100, grouped in triples: 0 000 000 100 101 100 → octal 0 0 0 4 5 4
● Why do we introduce the octal numeral system here?
● Binary numbers can be quickly transformed into octal digits and vice versa. The conversion of groups of 3b makes this possible.
● It is used in some computer systems to enhance the readability of values under certain circumstances (24b data words) - It is esp. used in Unix' file permission system.
4. 4
The octal Numeral System – There is a Problem
– The octal system was used when computer systems applied 24b data words.
– It was used because 24 is a perfect multiple of three (bits).
● But today we have data words of 16b, 32b and 64b.
– None of these is a perfect multiple of three!
● Problem: triple-patterns can be applied in different ways on non-multiples of 3b.
– This leads to different possible representations of the same bit pattern:
● => The octal system is no longer suitable for modern data words...
– ...but a solution is in sight!
s (300) interpreted as complete pattern: 0 000 000 100 101 100 → octal 0 0 0 4 5 4
s (300) interpreted as 2x1B pattern: 00 000 001 | 00 101 100 → octal 0 0 1 | 0 5 4
● Sometimes the octal numeral system also plays a role in the encoding of (serial) data communication, where 3b-wise encoding can be found often.
● Notice that after the 2x1B pattern application only the rightmost triples retain their representation.
5. 5
The hexadecimal Numeral System
● In previous pictures, binary values have been written in groups of 4b very often.
– We call these "half bytes" nibbles or tetrads. Two tetrads make one octet – the byte!
– A tetrad can represent 2⁴ = 16 values, so it can simply be written as a single digit.
– => We need a numeral system with the base of 16 to get matching digits.
● The numeral system with the base 16 is called the hexadecimal (short "hex") numeral system.
– A single hexadecimal digit can hold one of the symbols [0, 9] or [A, F] (or [a, f], the case does not matter).
– And a single hexadecimal digit can be directly represented by a tetrad.
– The mathematical notation of literals looks like this: 12C₁₆ or 12Chex or 12Csixteen or 12Ch
● In C/C++ integer literals can be written as hex numbers with the 0x-prefix.
short s = 300;
short s = 0x12C; // s (300)
// memory view: 2C 01; bit pattern: 0000 0001 0010 1100 ("Em, 12C₁₆ ... Huh?")
● In the depicted memory view there is the 300 in hex! But... something is wrong: the digits are somehow "mirrored"?
– We'll discuss the "effect" of byte order shortly!
● Sometimes in long binary numbers the tetrads are separated by dashes to enhance the readability.
● Why do we introduce the hexadecimal numeral system here?
● Binary numbers can be quickly transformed into hexadecimal digits and vice versa. The conversion of groups of 4b makes this possible.
● Esp. we need it to understand the contents of the memory view (i.e. memory dumps or "raw" views of data (e.g. network traffic) in the IDE's debugger), which mostly uses a hexadecimal presentation.
● C++14 introduced binary integer literals with the 0b-prefix. - Before that, programmers could use special libraries like Boost, which provide macros to use binary literals. (Binary integer literals can be defined in Java 7 with the 0b-prefix as well.)
● Where do we need the hexadecimal and octal numeral system?
● Well, the decimal numeral system uses values that are often not divisible by a power of two. As binary numbers can be written in blocks whose sizes are powers of two, the hexadecimal and octal numeral system can compress the projection of numbers very nicely.
● Memory addresses are almost always written as hexadecimal numbers.
● Media access control (MAC) addresses are typically written as six two-digit hex numbers separated by dashes or colons.
● Do you know other numeral systems?
● Esp. important: the Roman system and the sexagesimal system, based on the value 60, used e.g. by the Babylonians, who also introduced the 360° (chosen because a year has about 360 days; still used today to display clock time and angles).
● The hexadecimal representation of numbers will be used occasionally in this course.
6. 6
Numbers and the Memory
● I've just overheard this statement "Numbers are stored hexadecimally in a computer's memory!" - No! That's nonsense!
● The next "correction" was "Oh, I mean numbers are stored binary in a computer's memory!" - No, it's the same nonsense!
● Folks! A computer stores data as tiny pieces of electric attributes, e.g. voltage, in electronic circuits. That's all!
– Permanent memory also uses magnetic (e.g. tape), optic (e.g. DVD) or meanwhile also electric (flash) attributes.
● If these attributes have discretely distinguishable values, these are interpreted as different states, often just "on" and "off".
● Computer scientists, programmers and software interpret "on" and "off" as information.
– Software can interpret series of "on" and "off" information-pieces and groups as numbers.
– Sure, the same number can be written in the binary, octal, hexadecimal or decimal numeral system!
Memory is an arrangement of electric, magnetic or optic attributes.
Representation as a number: 300
Representation in the memory view: 0000 0001 0010 1100 (binary), 000454 (octal), 2C 01 (hex)
7. 7
Why do I need to understand Memory Representation?
● Today, programming is very comfortable.
● Understanding "the metal" helps understanding pointers.
● Understanding "the metal" helps understanding errors.
● Understanding the past helps understanding the present and future.
● Don't believe in magic! Try to understand memory!
8. 8
Memory Representation – Byte Order – Part I
● Let's introduce some terms for the byte-components of a value:
– The byte storing the largest portion of the value is called most significant byte (MSB).
– The byte storing the smallest portion of the value is called least significant byte (LSB).
– But how is the value of s represented in memory?
● The value of s occupies 2B in memory. The type short is a multibyte type.
– But whether MSB or LSB is stored on the higher memory-address is not standardized!
● A big-endian architecture stores the LSB at the higher memory address.
● A little-endian architecture stores the MSB at the higher memory address.
short s = 300; // s (300) = 0x012C
// big-endian: 01 2C (higher addresses to the right)
// little-endian: 2C 01 (higher addresses to the right)
Aha! It's a hexadecimal little-endian representation of 300!
● Another example for the significance of the parts of a value: for the clock time represented as hh:mm:ss the hh-portion is the most significant part of the clock time value. → The clock time is encoded as big-endian.
● Interestingly, date values are written in completely different orders depending on the culture/country/locale:
● US: MM/DD/YYYY (middle-endian)
● Germany: DD.MM.YYYY (little-endian)
● ISO-8601: YYYY-MM-DD (big-endian)
● The terms big/little-endian stem from the book "Gulliver's Travels" (Jonathan Swift). In the story the folk of Lilliput is required to open boiled eggs at the small end (little-endians), whereas in the rival kingdom of Blefuscu the folk is required to open them at the big end (big-endians). - The picture was used by the engineer Danny Cohen to explain the difficulties of byte order.
9. 9
Memory Representation – Byte Order – Part II
● Benefits of big-endian (68K, PPC, SPARC):
– Better readability: the values are represented in the way they'd be written as literals.
● Benefits of little-endian (x86):
– Scalability: if a value in memory needs resizing, the memory address is kept (like short to int).
● Relevance of the byte order in C++:
– For the representation of multibyte values stored as sequential bytes (a memory-block).
● I.e. for values of size > 1B (e.g. int or short), but not for arrays, esp. c-strings, nor for pointer arithmetics.
– Functions like std::memcpy() operate on the exact underlying byte order.
● Irrelevance of the byte order in C++:
– The address-operator (&) always returns the address of the lowest-addressed byte.
– The assignment-operator (=) works always correctly.
– Bit-operations and conversions are effective on values, not their byte order!
● The war of byte orders of multibyte data has been fought since the 1970s, when Intel (x86) and Motorola (68K) presented their products.
● The readability of big-endians is relevant for memory dumps.
● There are also mixed byte orders (bi-endian) and (rare) completely different byte orders.
● Bi-endians (IA-64) can switch the byte order and can have a different byte order in different memory segments.
● Some architectures store integral and floating point numbers in different byte orders, or they store floating point values as a big/little-endian mix.
● The date format in the US uses middle-endian: MM/DD/YYYY.
● C-strings are always stored as big-endians: their characters reside byte by byte from the lowest address upwards.
● Relevance in other areas:
● For registers the byte order is not relevant, the rightmost byte is always the LSB.
● As the byte order influences the stack layout, certain sorts of bugs show up differently depending on the byte order. We'll discuss this in a future lecture.
● If systems having different byte orders exchange data via a network, the byte order conversion (just on the byte copy layer) takes place in the network drivers. The byte order of the network protocol needs to be fixed to make this work: the IP-family of protocols uses big-endian, so little-endian machines need to convert information to big-endian to communicate. On the same machine there is of course no problem.
● I/O on files from such different systems is implemented via a compatibility layer.
● A byte order mark (BOM, the 2B sequences "FE FF" for big-endian and "FF FE" for little-endian) is used at the beginning of a stream of text encoded as UTF-16 or UTF-32; it allows the receiver to interpret the text correctly.
10. 10
Memory Representation of Arrays
● The elements of a reside in memory as a sequential block with space for ten ints.
– The array symbol a represents the base address of the array.
– The 1st element of a (a[0]), which is also a's base address, resides at the lowest address.
● The size and count of elements of non-dynamic arrays:
● The address of the array (a) is the same as that of its first element (&a[0]).
– The symbol a does itself represent the address of the array a.
– W/o context, the address of a could be the address of an int or of an int-array.
int a[10]; // memory: ten ints with indeterminate values (?), 40B in total
a[0] = 44; // memory view: 2C
a[9] = 100; // memory view: 64
std::size_t arraySize = sizeof(a); // Get the size of the array a (40).
int nElements = arraySize/sizeof(a[0]); // Get the count of elements in a (10).
● In which "unit" does the sizeof operator return its result?
● In std::size_t; a std::size_t of value 1 represents the sizeof(char).
● The expression sizeof(a[0]) is ok for non-dynamic arrays, because they can not have zero elements.
● Instead of sizeof(a) we could also get the value of sizeof(int[10]), as both values need to be equal (in the latter form we are required to write the argument for sizeof in parentheses, because int[10] is a type).
● Writing &a is also valid: it yields the same address value as a, but with the distinct type int(*)[10] (pointer to array of ten ints) instead of int*. The symbol a itself, after decaying to a pointer, represents the address of the first element.
● When we pass an array to a function, we are really passing its address (which is also the address of the first element) to the function. - We need to pass the length of an array separately.
11. 11
Arrays and Pointer Arithmetics – Offset Addressing
● As a represents a pointer to the first element, the index can be seen as offset.
– It is possible to access elements via offset calculations against a.
– (Keep in mind that this is valid: a == &a[0])
– This allows so called pointer arithmetics. – This is a very important topic in C/C++!
– Pointer arithmetics is only meaningful in arrays.
● So, adding a number to the pointer a is interpreted as an offset like this:
● Pointer arithmetics for array offset addressing works like this:
– The sizeof(a[0]) * 1 is added to a's address to access the element at the index 1.
– This means that a + 1 is the calculated address of a[1]. => a + k == &a[k]
– In the example above the 1 is interpreted as an offset of 4B to the pointer a.
int a[] = {1, 2, 3, 4}; // memory: 1 2 3 4, each element 4B
int* ap = a;
*(a + 1) = 6; // same as a[1] = 6 /* ...or via the pointer: */ *(++ap) = 6;
● In C/C++ the argument of the subscript is virtually not an index, it is rather an offset from the address of the array, which is the address of the very first array element.
● The pointer arithmetics works, because the scalar number being added to or subtracted from a pointer is interpreted to scale with the type of the pointer we are operating on.
● Pointer arithmetics is an important topic in C++, because it abstracts the functionality of builtin types to be idiomatically compatible with the Standard Template Library (STL).
12. 12
Arrays and Pointer Arithmetics – Element Distance
● If we have the addresses of two elements of the array we can get their distance.
– So, subtracting two pointers to array elements is interpreted as their distance like this:
● Pointer arithmetics for array element distance works like this:
– The addresses of the elements a[1] (e1) and a[3] (e2) are pointers to int.
– Then just the count of ints "fitting between" both pointers is "measured".
– This means that (e1 + d) boils down to e2, or &a[1] + d == &a[3].
– The result of a pointer subtraction is a value of the (signed) type std::ptrdiff_t.
int a[] = {1, 2, 3, 4}; // memory: 1 2 3 4, each element 4B
int* e1 = &a[1];
int* e2 = &a[3];
std::ptrdiff_t d = e2 - e1; // d (2)
● The type std::ptrdiff_t is defined in <cstddef>.
13. 13
Pointer Arithmetics, Pointer and Value Identity
● Now we understand how the []-operator boils down to pointer arithmetics.
– This also explains why we can use the []-operator with dynamic arrays.
● Some additional facts/features make pointer arithmetics work correctly:
– All pointers can be compared against 0.
– Pointers of equal type can be compared to other pointers of equal type.
– Equal pointers represent the same address, thus the dereferenced values are equal.
● We call this feature pointer identity.
● Following "equations" are valid for pointer/value identity with pointer arithmetics:
– a == &a[0] (pointer identity)
– (a + k) == &a[k] && &a[k] == &k[a] (pointer identity)
– *a == a[0] (value identity after dereferencing)
– *(a + k) == a[k] && a[k] == k[a] (value identity after dereferencing)
int a[] = {1, 2, 3, 4};
int k = 2;
● The way pointer arithmetics works does also explain why the void* pointing to an array created on the heap needs to be cast to a concrete type. - Why?
● Because the operations with pointer arithmetics need to know the size of the elements to calculate correct offsets; a void* is just a generic pointer to a block of memory. It explains also why a void* can not be dereferenced.
14. 14
Arrays and foreign Memory
● We call the pointer to the array element a[length] the past-the-end-pointer.
– It points to the first "element" not belonging to the array.
– It has a special meaning in C/C++.
– C/C++ perform no bounds checking, but the past-the-end-pointer may be safely formed and compared - it must not be dereferenced.
● Writing memory, which we don't own is still illegal.
– The local variables of a function are packed into a so called stack frame (sf).
– Writing to array-foreign memory could modify the values of adjacent variables in the sf.
const int length = 10;
int a[length];
a[length] = 7; // writes past the end of a
a[-1] = 3;     // writes before the beginning of a
[Diagram: a's ten ints inside the stack frame; both writes land in array-foreign memory next to a]
15. 15
An Array with Cast Contact Lenses on
● The expression a[2] is calculated to be effectively at the address a + (2 * sizeof(int)).
– The value at this offset (in memory), which has space for one int, is set to 128.
● Then we put cast contact lenses of type short on:
– The expression reinterpret_cast<short*>(a)[5] is calculated to be effectively at the address a + (5 * sizeof(short)).
– The value at this offset, which is assumed to have space for one short, is set to 2.
● Taking the contact lenses off again: we've modified the upper 2B (higher addresses) of the value of a[2].
– The int at a[2] now has a completely different value from 128 or 2!
int a[4];
a[2] = 128;
reinterpret_cast<short*>(a)[5] = 2;
[Diagram: a's 16B of memory; a[2] at offset a + 2 * 4B first holds the bytes 80 00 00 00 (128); the short at offset a + 5 * 2B overwrites its upper half, after which a[2] reads 131200]
● In the following examples we'll have to use
reinterpret_casts (a static_cast can not convert from
int* to short* as in this example).
● Why are the memory portions modified in the
presented order?
● Because of the byte order. (We assume little
endian in this case, as the MSB is to the right of the
LSB in the diagram.)
● We could go really crazy with casting and
navigating through memory; the possible
combinations are endless. But it is often also
pointless and maybe dangerous.
16. 16
Memory Segmentation – Geometric Memory Properties
● Let's assume a pointer has a size of 4B.
● The stack segment
– stores local (auto) variables,
– manages the stack of function calls and
– is owned and managed by hardware.
● The heap segment
– is managed by the heap manager and
– is owned and managed by software.
● std::malloc(), std::realloc(), std::free() etc.
void* a = std::malloc(80);
void* b = std::malloc(40);
void* c = std::realloc(b, 100);
[Diagram: the address space from 0 to 2^32 - 1, split into stack segment, heap segment, data segment and code segment; the 80B, 40B and 100B blocks live in the heap]
● Why do we draw a memory of size 2^32?
● If the pointer's size is 4B, the width of the address bus (count of
address-"wires") must be 32b; this makes 2^32 different bytes
addressable.
● This memory model:
● Keeping data and code in the same memory is an important aspect of
the "von Neumann architecture".
● The depicted memory model has existed since the early 70s and was
introduced with the "real" processor mode. Safety was introduced with
the protected mode.
● The dimensions of the segments are not realistic. The stack is rather
small and the heap is rather big.
● In assembly languages it is possible to use direct segmentation to
define which data resides in the data and in the code segment.
● When a program is loaded the start and end address of the heap are
passed to the heap manager.
● What is the code segment (text segment)?
● In this portion of the memory the object code or assembly code
resides.
● The C/C++ types for which memory is allocated don't matter to the heap
manager; it rather manages requests for portions of bytes.
● The function std::realloc() may or may not extend the portion of memory
(in "higher address direction") that is designated by the passed pointer.
This means that the address returned by std::realloc() need not be the
same as the one passed.
● If a memory block is reallocated and can not be extended, because there
is not enough adjacent space, another matching memory block will be
allocated.
17. 17
Heap segment
Why do Programs crash?
● Segmentation fault:
– Happens, if we dereference a bad pointer.
– The 0-pointer is not part of any segment.
– This also happens on "dereferencing very small numbers..."
● Bus error:
– Happens, if we dereference a pointer having an unexpected location.
– Here the runtime may spot the error, as shorts usually reside on even addresses.
● But vp was pointing into a segment, so this is no segmentation fault.
char* ch = 0;
char c = *ch; // Dereferencing the 0-pointer...
void* vp; // The pointer vp will be initialized with rubbish.
*reinterpret_cast<short*>(vp) = 42; // Dereferencing will fail with a chance of 50% (see explanation).
● Why does the explained bus error appear in only
50% of the cases?
● With a chance of 50% the dereferenced address is
odd, and shorts usually reside on even addresses.
● Typically ints reside on addresses that are a
multiple of four.
● Typically there is no address restriction for
bytes/chars.
18. 18
Why do Programs crash? – A 20k Miles Perspective
● A crash is a fatal error, maybe due to
– a hardware malfunction or
– a logical software error (let's call this a software bug or simply a bug).
● Let's focus on bugs, why can we have bugs in our software?
– Bad values, e.g. invalid input or uninitialized memory.
– Unwarranted assumptions, e.g. infrastructure problems like "disc full".
– An otherwise faulty logic.
● This is relevant for C/C++, as there are no guarantees when features are misused!
● Help yourself: Trace a bug with tools:
– Analyze present stacktraces and logs. - Often customers can provide them, if the program produced such information.
– Use "printf()-debugging" and/or IDE-debugging.
– Create and run unit tests before and after the error was found.
● After a crash we may be forced to remove the
crashed process or to reboot the system.
● C/C++ is a programming language for
programmers: there are no guarantees when
features are misused. - There is no elaborate
exception strategy like in .Net or Java. In C++ we
can use/consume exceptions, but they are mainly
present in STL APIs; for other APIs we have to
cope with undefined behavior.
19. 19
Nailing down Bugs – Simple but useful Tips
● 1. Bug-finding needs to be done systematically.
● 2. Make assumptions about the bug or problem!
● 3. We should be utterly critical about our assumptions and take nothing for granted!
– We always have to check user input!
– Forgetting to program defensively is a major source of errors!
– Compile with highest warning level!
● 4. Don't panic!
● 5. Do it with pair programming.
● 6. Explain the bug or problem to another person, or...
20. 20
The Teddybear Principle
● This is a serious idea for problem solving!
● If we have a bug or problem: We should first talk to the teddybear!
– It is often helpful to reflect the problem in question with a peer.
– Notice that successful problem solvers often have sidekicks!
● Sherlock Holmes → Dr. John Watson
● Dr. Gregory House → Dr. James Wilson
● Only if the teddybear provides no answer: We should ask a trainer.
● The teddybear principle:
http://talkaboutquality.wordpress.com/2010/08/30/tell
-it-to-your-teddy-bear/
21. 21
Allocation from the Heap – normative Process
● The heap can be seen as a large array. (Here assuming that it is totally free.)
● Memory allocation from the heap works like so (simplified):
– Search from the beginning of the heap for a big enough free block of memory.
– Record this allocated space (address and size) somewhere.
– Return the address of the allocated space.
● The memory for b can be allocated right "after" a's memory.
– In principle the procedure is exactly like that for a's memory allocation.
● The heap manager may use any other heuristics to make it faster...
void* a = std::malloc(40);
void* b = std::malloc(60);
[Diagram: the 40B block at the beginning of the heap, the 60B block right after it]
● This is not exactly what happens, but close enough
to understand it basically.
22. 22
Freeing from the Heap and Fragmentation
● Freeing memory from the heap:
– Mark the memory addressed by a as free.
– The contents (bit pattern) of the memory addressed by a are not touched!
– But the contents no longer have a meaning!
● Next allocation:
– The heap manager will start to find a free space (at the beginning) of the heap.
– There is a free memory block at the beginning, but it is too small (40B).
– The next free memory block is right after b + 60.
● Since different block sizes get allocated the heap fragments during usage.
std::free(a);
void* c = std::malloc(45);
[Diagram: the freed 40B hole at the heap's beginning, b's 60B block, then the new 45B block after b]
● The heap fragmentation evolves like a parking bay
without markings: cars of different dimensions enter and
leave the bay. After a while, spaces vacated by
wide cars are reused by small cars. The surplus
space is wasted, leaving a fragmented parking bay.
● After a has been freed, the formerly occupied
memory can be reused by later allocations.
● Not all implementations of the heap manager start
searching for free memory at the beginning of the heap.
● The heap manager can record the available gaps of
free memory in a linked list. The nodes are called
"free nodes" and the linked list is called the "free list".
23. 23
Final Words on the Heap
● Every process thinks it owns the whole memory of the machine.
– At the start of an application the bounds of the heap are passed to the heap manager.
● There exist different heap managing and optimization strategies, e.g.:
– 1. The size of a heap block's memory is stored in a header field of the block.
● This means that the allocated block is indeed a little bit larger than required.
– 2. A set of void* to the heap are managed in the heap manager to check freeing.
– 3. Segmented heap with segments for blocks of different sizes -> less fragmentation.
● Don't rely on proprietary features of the heap.
void* v = std::malloc(4 * sizeof(int));
[Diagram: a 20B heap block - a 4B header storing the size, followed by the 16B payload that v points to]
● A concrete example of a segmented heap is the
"Low Fragmentation Heap" (LFH), which can be
used in Windows Vista and newer Windows
versions. It stores memory blocks of different sizes
in dedicated buckets to optimize the search time for
free memory and, of course, to lower heap
fragmentation.