The document describes the NVRAM Manager module which manages non-volatile data storage in devices like EEPROM or flash memory. The NVRAM Manager provides services for initializing, reading, writing, and controlling non-volatile data blocks. It uses a memory abstraction layer to access the underlying memory devices and presents a uniform addressing scheme to upper layers. The NVRAM Manager also supports different types of data blocks like native, redundant, and dataset blocks.
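The redundant block type mentioned above keeps two copies of the data so that a corrupted copy can be detected and the other used instead. A minimal Python sketch of that read-with-fallback logic (illustrative only, not the AUTOSAR NvM API; the CRC32 check and dict-based storage are assumptions for the example):

```python
import zlib

def write_block(storage, block_id, data: bytes):
    """Store two copies of the block, each protected by a CRC32 trailer."""
    record = data + zlib.crc32(data).to_bytes(4, "big")
    storage[(block_id, 0)] = record  # primary copy
    storage[(block_id, 1)] = record  # redundant copy

def read_block(storage, block_id) -> bytes:
    """Return the first copy whose CRC validates; raise if both fail."""
    for copy in (0, 1):
        record = storage.get((block_id, copy))
        if record is None:
            continue
        data, crc = record[:-4], int.from_bytes(record[-4:], "big")
        if zlib.crc32(data) == crc:
            return data
    raise IOError(f"block {block_id}: both copies corrupt or missing")

nv = {}
write_block(nv, 7, b"calibration")
nv[(7, 0)] = b"\x00garbled\x00\x00\x00\x00"  # simulate corruption of the primary copy
assert read_block(nv, 7) == b"calibration"    # falls back to the redundant copy
```

A dataset block would extend the same idea with an index selecting one of several stored instances of the block.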
Upgrading HDFS to 3.3.0 and deploying RBF in production #LINE_DM (Yahoo! Developer Network)
Slides from LINE Developer Meetup #68 - Big Data Platform, presenting the major version upgrade of HDFS and the adoption of Router-based Federation (RBF). Event page: https://line.connpass.com/event/188176/
Spark SQL Catalyst Code Optimization using Function Outlining with Kavana Bha... (Databricks)
The document discusses code optimization techniques in Spark SQL's Catalyst optimizer. It describes how function outlining can improve performance of generated Java code by splitting large routines into smaller ones. The document outlines a Spark SQL query optimization case study where outlining a 300+ line routine from Catalyst code generation improved query performance by up to 19% on a Power8 cluster. Overall, the document examines how function outlining and other code generation optimizations in Catalyst can help the Java JIT compiler better optimize Spark SQL queries.
CXL is enabling new memory architectures by connecting CPUs and GPUs to shared memory pools. Early CXL 1.1 focused on memory expansion by connecting processors to DRAM modules. CXL 2.0 allowed for small memory pools accessible by a few servers. CXL 3.0 supports larger shared memory fabrics by connecting thousands of nodes and enabling true shared memory regions accessible coherently by multiple hosts and accelerators. However, shared memory fabrics using CXL 3.0 may experience greater latency variability and congestion compared to single-host or small memory pooling configurations.
Apache Iceberg: An Architectural Look Under the Covers (ScyllaDB)
Data Lakes have been built with a desire to democratize data - to allow more and more people, tools, and applications to make use of data. A key capability needed to achieve this is hiding the complexity of underlying data structures and physical data storage from users. The de-facto standard, the Hive table format, addresses some of these problems but falls short at data, user, and application scale. So what is the answer? Apache Iceberg.
Apache Iceberg table format is now in use and contributed to by many leading tech companies like Netflix, Apple, Airbnb, LinkedIn, Dremio, Expedia, and AWS.
Watch Alex Merced, Developer Advocate at Dremio, as he describes the open architecture and performance-oriented capabilities of Apache Iceberg.
You will learn:
• The issues that arise when using the Hive table format at scale, and why we need a new table format
• How a straightforward, elegant change in table format structure has enormous positive effects
• The underlying architecture of an Apache Iceberg table, how a query against an Iceberg table works, and how the table’s underlying structure changes as CRUD operations are done on it
• The resulting benefits of this architectural design
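The layered metadata structure behind those points can be sketched as a toy Python model (plain dicts standing in for Iceberg's metadata files; the field names are illustrative, not Iceberg's actual schema). A read resolves snapshot → manifest list → manifests → data files, pruning files by their column statistics along the way:

```python
# Toy model of Iceberg's metadata tree: the table points at a current snapshot,
# the snapshot points at its manifests, and each manifest lists data files with
# column stats that let a reader prune files before opening them.
table = {
    "current_snapshot": "s2",
    "snapshots": {
        "s1": {"manifests": ["m1"]},
        "s2": {"manifests": ["m1", "m2"]},  # an append adds a manifest, keeps old ones
    },
    "manifests": {
        "m1": [{"path": "a.parquet", "min_id": 0,   "max_id": 99}],
        "m2": [{"path": "b.parquet", "min_id": 100, "max_id": 199}],
    },
}

def plan_scan(table, want_id):
    """Return data files that might contain rows with id == want_id."""
    snap = table["snapshots"][table["current_snapshot"]]
    files = []
    for m in snap["manifests"]:
        for f in table["manifests"][m]:
            if f["min_id"] <= want_id <= f["max_id"]:  # stats-based pruning
                files.append(f["path"])
    return files

assert plan_scan(table, 150) == ["b.parquet"]  # a.parquet pruned by its stats
```

Because a commit simply swaps the current-snapshot pointer, writers can build new metadata off to the side and readers always see a consistent tree.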
With Flink and Kubernetes, it's possible to deploy stream processing jobs with just SQL and YAML. This low-code approach can certainly save a lot of development time. However, there is more to data pipelines than just streaming SQL. We must wire up many different systems, thread through schemas, and, worst of all, write a lot of configuration.
In this talk, we'll explore just how "declarative" we can make streaming data pipelines on Kubernetes. I'll show how we can go deeper by adding more and more operators to the stack. How deep can we go?
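As a concrete illustration of the "just SQL and YAML" starting point, a spec in the style of the Flink Kubernetes Operator's FlinkDeployment resource might look like the following (the job name, jar path, and SQL file are illustrative assumptions, not taken from the talk):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: clickstream-enrichment        # hypothetical job name
spec:
  image: flink:1.17
  flinkVersion: v1_17
  jobManager:
    resource: {memory: "1024m", cpu: 1}
  taskManager:
    resource: {memory: "2048m", cpu: 1}
  job:
    jarURI: local:///opt/flink/usrlib/sql-runner.jar  # assumed SQL-runner jar
    args: ["/opt/flink/sql/enrich.sql"]               # the streaming SQL itself
    parallelism: 2
```

Everything beyond this - connector credentials, schema registries, topic provisioning - is the configuration sprawl the talk is about taming with further operators.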
CXL Memory Expansion, Pooling, Sharing, FAM Enablement, and Switching (Memory Fabric Forum)
The document discusses CXL, a new open standard protocol for efficient CPU and memory connectivity. CXL allows for memory disaggregation and pooling across devices by enabling high-bandwidth, low-latency connections between CPUs, GPUs, accelerators, and memory. This helps address the growing CPU-memory bottleneck by allowing expansion of memory capacity beyond what can physically connect to the CPU. CXL also enables memory tiering by providing different performance and cost options for "near" directly attached memory versus "far" switched or fabric attached memory.
This document provides an overview of IoT databases and time series data. It discusses different database types and popular IoT database solutions like InfluxDB and TimescaleDB. Implementations and demos of these databases are shown, including writing and querying time series data. Challenges of IoT databases are also mentioned.
During the CXL Forum at OCP Global Summit, Mahesh Wagh, CXL Consortium TTF Co-chair and Senior Fellow at AMD, presented an update on the CXL Consortium's mission and roadmap.
Apache Phoenix’s relational database view over Apache HBase delivers a powerful tool which enables users and developers to quickly and efficiently access their data using SQL. However, Phoenix only provides a Java client, in the form of a JDBC driver, which limits Phoenix access to JVM-based applications. The Phoenix QueryServer is a standalone service which provides the building blocks to use Phoenix from any language, not just those running in a JVM. This talk will serve as a general purpose introduction to the Phoenix QueryServer and how it complements existing Apache Phoenix applications. Topics covered will range from design and architecture of the technology to deployment strategies of the QueryServer in production environments. We will also include explorations of the new use cases enabled by this technology like integrations with non-JVM based languages (Ruby, Python or .NET) and the high-level abstractions made possible by these basic language integrations.
The document discusses implementing client quotas in Apache Kafka to ensure fair sharing of broker resources among clients. It describes enforcing quotas based on client IDs and overriding quotas for specific clients. Metrics are used to monitor client throughput and enforce quotas by delaying requests if clients exceed their limits. The approach aims to improve performance for well-behaved clients by preventing a few misbehaving clients from impacting others. Evaluation results show improved throughput and latency for clients when quotas are enforced.
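The delay-based enforcement described above can be sketched in a few lines of pure Python (an illustration of the idea, not Kafka's actual implementation): when a client's observed rate over a metrics window exceeds its quota, the broker computes a throttle time that brings the average rate back under the limit.

```python
def throttle_ms(bytes_in_window: int, window_ms: int, quota_bytes_per_sec: float) -> float:
    """Delay (ms) needed so the client's average rate drops back to its quota.
    Derived from: bytes_in_window == quota * (window_ms + delay) / 1000."""
    observed_rate = bytes_in_window / (window_ms / 1000.0)  # bytes/sec
    if observed_rate <= quota_bytes_per_sec:
        return 0.0  # well-behaved client, no delay
    ideal_window_ms = bytes_in_window / quota_bytes_per_sec * 1000.0
    return ideal_window_ms - window_ms

# A client that pushed 2 MB in a 1 s window against a 1 MB/s quota
# must be delayed 1 s before its next request is served.
assert throttle_ms(2_000_000, 1_000, 1_000_000) == 1000.0
assert throttle_ms(500_000, 1_000, 1_000_000) == 0.0
```

Holding the misbehaving client's response for `throttle_ms` is what frees broker capacity for everyone else, which is the improvement the evaluation results report.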
During the CXL Forum at OCP Global Summit 23, Rick Kutcipal and Sreeni Bagalkote of Broadcom presented their PCIe/CXL Roadmap and announced their Atlas 4 CXL switch.
How often have you read in forums: "You can't do that with the built-in tools; you have to use the C API"? Fair enough, but how does that work? Which tools do I need, and where do I get them? This session gives an overview of the application areas of the C/C++ API for Lotus Notes/Domino and explains how to set up a development environment. Besides building C programs, it also covers calling API functions directly from LotusScript.
Practical examples are meant to ease developers into programming with the C/C++ API for Lotus Notes/Domino. Level: beginners who, even in the age of XPages, Java, and SSJS, still dare to tackle the "old workhorse" C.
Apache Iceberg Presentation for the St. Louis Big Data IDEA (Adam Doyle)
Presentation on Apache Iceberg for the February 2021 St. Louis Big Data IDEA. Apache Iceberg is an open table format that works with engines such as Hive and Spark.
A Deep Dive into Apache Cassandra for .NET Developers (Luke Tillman)
.NET developers have a lot of options when it comes to databases these days. Apache Cassandra is a scalable, fault-tolerant database that has already found its way into more than 25% of the Fortune 100 and continues to grow in popularity. But what makes it different from the myriad of other options available? In this talk, we’ll take a deep dive into Cassandra and learn about:
- Cassandra’s internals and how it works
- CQL (the SQL-like query language for Cassandra)
- Data Modeling like a pro
- Tools available for developers
- Writing .NET code that talks to Cassandra
If there’s time and interest, we’ll finish up with how some companies are already using Cassandra to power services you probably interact with in your daily life. You’ll leave with all the tools you need to start building highly available .NET applications and services on top of Cassandra.
C* Summit 2013: How Not to Use Cassandra by Axel Liljencrantz (DataStax Academy)
At Spotify, we see failure as an opportunity to learn. During the two years we've used Cassandra in our production environment, we have learned a lot. This session touches on some of the exciting design anti-patterns, performance killers and other opportunities to lose a finger that are at your disposal with Cassandra.
Producer Performance Tuning for Apache Kafka (Jiangjie Qin)
Kafka is well known for high throughput ingestion. However, to get the best latency characteristics without compromising on throughput and durability, we need to tune Kafka. In this talk, we share our experiences to achieve the optimal combination of latency, throughput and durability for different scenarios.
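The knobs involved in that latency/throughput/durability trade-off can be illustrated with a few producer configuration profiles (the parameter names are standard Kafka producer configs; the specific values are illustrative, not recommendations from the talk):

```python
# Three illustrative producer profiles for the trade-off the talk explores.
low_latency = {
    "acks": "1",                 # only the leader acknowledges
    "linger.ms": 0,               # send immediately, no batching delay
    "batch.size": 16384,
    "compression.type": "none",
}
high_throughput = {
    "acks": "1",
    "linger.ms": 50,              # wait to fill larger batches
    "batch.size": 262144,
    "compression.type": "lz4",    # trade CPU for network/disk throughput
}
high_durability = {
    "acks": "all",                # wait for the full in-sync replica set
    "enable.idempotence": True,   # exactly-once per partition on retries
    "linger.ms": 10,
    "batch.size": 65536,
}

# Durability and throughput both cost latency: compare the settings.
assert high_throughput["linger.ms"] > low_latency["linger.ms"]
assert high_durability["acks"] == "all"
```

Tuning, as the talk argues, is about picking a point on this surface per workload rather than one global setting.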
During the CXL Forum at OCP Global Summit, MemVerge software architect Steve Scargall defines the CXL software stack and where the development is being done.
The document outlines topics covered in "The Impala Cookbook" published by Cloudera. It discusses physical and schema design best practices for Impala, including recommendations for data types, partition design, file formats, and block size. It also covers estimating and managing Impala's memory usage, and how to identify the cause when queries exceed memory limits.
Presentation slide for "In-Memory Storage Evolution in Apache Spark" at Spark+AI Summit 2019
https://databricks.com/session/in-memory-storage-evolution-in-apache-spark
Kernel modules allow adding and removing functionality from the Linux kernel while it is running. Modules are compiled as ELF binaries with a .ko extension and are loaded and unloaded using commands like insmod, rmmod, and modprobe. Modules can export symbols to be used by other modules and have dependencies on other modules that must be loaded first. The kernel tracks modules and their state using data structures like struct module to manage loading, unloading, and dependencies between modules.
Connecting and using PostgreSQL database with psycopg2 [Python 2.7] (Dinesh Neupane)
This presentation covers the basics of connecting to a PostgreSQL database from Python using the psycopg2 module.
Covered Topics:
1. Psycopg2 Installation
2. Connecting to PostgreSQL Database
3. Connection Parameters
4. Create and Drop Table
5. Adaptation of Python Values to SQL Types
6. SQL Transactions
7. DML
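psycopg2 implements the Python DB-API 2.0 interface, so the connect → cursor → execute → commit pattern from the topics above looks the same as in the stdlib's sqlite3 driver, which is used here so the sketch runs without a PostgreSQL server (with psycopg2 you would call `psycopg2.connect(...)` and use `%s` placeholders instead of `?`; table and column names are made up for illustration):

```python
import sqlite3

# DB-API 2.0 pattern shared by psycopg2 and sqlite3:
# connect -> cursor -> execute (with parameter adaptation) -> commit.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE sensors (id INTEGER PRIMARY KEY, name TEXT, temp REAL)")

# Parameters are passed separately so the driver adapts Python values to SQL
# types and prevents injection (topic 5 above). psycopg2 uses %s, not ?.
cur.execute("INSERT INTO sensors (name, temp) VALUES (?, ?)", ("lab", 21.5))
conn.commit()  # DML runs inside a transaction until committed (topics 6-7)

cur.execute("SELECT name, temp FROM sensors WHERE temp > ?", (20.0,))
rows = cur.fetchall()
assert rows == [("lab", 21.5)]

cur.execute("DROP TABLE sensors")  # topic 4: create and drop table
conn.close()
```

The only psycopg2-specific pieces are the connection parameters (host, dbname, user, password) and the `%s` placeholder style; everything else carries over.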
Operating and Supporting Delta Lake in Production (Databricks)
The document discusses strategies for optimizing and managing metadata in Delta Lake. It provides an overview of optimize, auto-optimize, and optimize write strategies and how to choose the appropriate strategy based on factors like workload, data size, and cluster resources. It also discusses Delta Lake transaction logs, configurations like log retention duration, and tips for working with Delta Lake metadata.
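The compaction idea behind those optimize strategies - rewriting many small files into fewer files near a target size - can be sketched as a greedy bin-packing pass (an illustration of the concept, not Delta Lake's actual algorithm; the target size is an assumed parameter):

```python
def plan_compaction(file_sizes, target_bytes):
    """Greedily group small files into rewrite bins of about target_bytes each."""
    bins, current, current_size = [], [], 0
    for size in sorted(file_sizes):
        if current and current_size + size > target_bytes:
            bins.append(current)          # close this bin, start a new one
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        bins.append(current)
    return bins

# Ten 100 MiB files against a 1 GiB target compact into a single rewrite job;
# against a 250 MiB target they become five smaller jobs.
sizes = [100 * 2**20] * 10
assert len(plan_compaction(sizes, 1024 * 2**20)) == 1
assert len(plan_compaction(sizes, 250 * 2**20)) == 5
```

Choosing the target and when to run it (manual OPTIMIZE vs. auto-optimize vs. optimized writes) is exactly the workload/data-size/cluster trade-off the talk walks through.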
This document discusses optimizing Spark write-heavy workloads to S3 object storage. It describes problems with eventual consistency, renames, and failures when writing to S3. It then presents several solutions implemented at Qubole to improve the performance of Spark writes to Hive tables and directly writing to the Hive warehouse location. These optimizations include parallelizing renames, writing directly to the warehouse, and making recover partitions faster by using more efficient S3 listing. Performance improvements of up to 7x were achieved.
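The "parallelizing renames" optimization amounts to fanning per-file rename calls out over a thread pool instead of issuing them one by one - on S3 each rename is a copy-plus-delete, so the per-call latency dominates. A minimal sketch, with the local filesystem standing in for the S3 client and made-up file names:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def parallel_rename(pairs, max_workers=16):
    """Rename (src, dst) pairs concurrently; on S3 each rename is a
    copy+delete, so running them in parallel hides per-call latency."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(lambda p: os.rename(p[0], p[1]), pairs))

with tempfile.TemporaryDirectory() as d:
    pairs = []
    for i in range(8):
        src = os.path.join(d, f"_temporary_part-{i}")  # committer's temp output
        dst = os.path.join(d, f"part-{i}")             # final location
        open(src, "w").close()
        pairs.append((src, dst))
    parallel_rename(pairs)
    done = sorted(os.listdir(d))
    assert done == [f"part-{i}" for i in range(8)]
```

Writing directly to the warehouse location, the other optimization mentioned, sidesteps these renames entirely rather than just parallelizing them.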
The document provides an introduction to the Yocto Project, including what it is, its main components, and workflow. It describes the Yocto Project as being comprised of Poky (the build system), tools, and upstreams. Poky contains BitBake (the build engine) and metadata (task configurations and definitions). It outlines the main components, including sub-projects, and compares the Yocto Project to OpenEmbedded. Finally, it summarizes the Yocto Project workflow, which involves configuring the build using recipes and layers then building packages, images, and cross-development toolchains.
Advanced performance troubleshooting using esxtop (Alan Renouf)
This document discusses using esxtop and resxtop tools to troubleshoot performance issues on VMware ESXi hosts. It provides 10 key things to know about esxtop counters and how they work. It then gives examples of using esxtop to troubleshoot common problems like CPU contention, memory issues, network throughput problems, and disk I/O latency. It also lists some other diagnostic tools that can be used along with esxtop.
Transparent Computing: A cloud service using Spatio-Temporal Extension on von... (aswingk)
The rapid advancements in hardware, software, and computer networks have facilitated the shift of the computing paradigm from mainframe to cloud computing, in which users can get their desired services anytime, anywhere, and by any means. However, cloud computing also presents many challenges, one of which is the difficulty in allowing users to freely obtain desired services, such as heterogeneous OSes and applications, via different light-weight devices. We have proposed a new paradigm by spatio-temporally extending the von Neumann architecture, called transparent computing, to centrally store and manage the commodity programs including OS codes, while streaming them to be run in non-state clients. This leads to a service-centric computing environment, in which users can select the desired services on demand, without concern for these services' administration, such as their installation, maintenance, management, and upgrade. In this paper, we introduce a novel concept, namely Meta OS, to support such program streaming through a distributed 4VP+ platform. Based on this platform, a pilot system has been implemented, which supports Windows and Linux environments.
Raisecom GPON Solution Training - Chapter 4 NView_V2.pptx (Jean Carlos Cruz)
- Check the version of upgrade package and current system
- Check the content of upgrade package
- Check the upgrade procedure
- Check the upgrade risk
- Check the upgrade time and impact
- Check the upgrade rollback plan
- Check the upgrade test plan
- Get approval before upgrade
- Record the upgrade process and result
- Handle upgrade failure properly
- Improve upgrade procedure continuously
The document provides an overview of the EMC VNX storage system. It includes 21 modules covering topics like unified management, storage configuration, block and file provisioning, host integration, data protection features like replication and snapshots, and file system configuration. It also lists the various VNX models that scale up to 1500 drives and 1 million IOPS and provides flexible connectivity. The document is intended as a training guide for EMC customers and includes internal links to additional learning resources.
z/OS Authorized Code Scanner (zACS) is a tool that tests PCs and SVCs and clients' authorized code, providing diagnostic information for subsequent investigation as needed.
Virtual SAN is VMware's hyper-converged infrastructure storage solution that is integrated with vSphere. It provides a software-defined, distributed storage platform that offers policy-based placement and management of virtual machine storage. Version 6.1 introduced new features like stretched clusters for disaster recovery between sites, support for high-density flash devices, and health monitoring and troubleshooting tools through integration with vRealize Operations. Future enhancements may include RAID 5 and 6 functionality over the network to improve storage efficiency as well as data deduplication and compression.
The CNBS server is a backup solution that can perform both selective online backups of workstations and servers, as well as bare metal restores of offline systems based on various operating systems like Linux, Windows, MacOS, and BSD. It supports backing up multiple filesystems, only saving used blocks to increase efficiency. Almost all steps can be done via commands and options, and it supports features like multicast, mass cloning, and automatic changing of hostnames and SIDs for Windows clients. However, it has some limitations like not supporting differential or incremental backups for offline backups, requiring equal or larger disks for restores, and potential network issues corrupting backups.
This document provides an overview of the EMC VNX storage system. It includes 14 modules that cover topics such as unified management, block and file storage provisioning, data protection features, host integration, and more. The document also lists system limits and specifications for the different VNX models. It aims to educate customers on the technical capabilities and features of the VNX platform.
The document discusses NetBackup SAN Client and Fibre Transport, which provide high-speed backups and restores over a SAN. SAN Client uses the SCSI protocol and has a smaller footprint than SAN Media Server. It also discusses NetBackup appliances - purpose-built backup appliances versus building your own solution. Appliances offer advantages like simplified management, security, and monitoring.
UNIT 1 ERTS-1.ppt
This document provides an overview of embedded and real-time systems. It discusses the key components of embedded systems including processors, memory, timers, and I/O devices. It also covers memory management methods, real-time operating systems, classifications of embedded systems, and considerations for selecting processors and memory. The document aims to introduce students to the structural units and design of embedded systems.
Project Slides for Website 2020-22.pptx (AkshitAgiwal1)
This document describes a resource efficient ternary content addressable memory (TCAM) architecture. It uses a pipelined layer architecture to increase operating frequency. The TCAM was implemented on an FPGA with a size of 64x32 and achieved a 10.52% higher speed, 7.62% lower power, and 50% lower resource utilization compared to existing designs. When implemented in ASIC using a 45nm process, the proposed TCAM achieved 121.10% higher speed, 70.12% lower power, and an 18.7x smaller area compared to existing designs.
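Functionally, a TCAM entry stores a value and a care-mask, and a lookup returns the highest-priority entry whose cared bits match the key. A small Python model of that behavior (an illustration of what a TCAM does, unrelated to the paper's pipelined hardware design):

```python
class TCAM:
    """Ternary CAM model: each entry is (value, mask); mask bit 1 = care,
    0 = don't-care. search() returns the index of the first matching entry."""
    def __init__(self):
        self.entries = []  # priority = insertion order

    def add(self, value: int, mask: int):
        self.entries.append((value, mask))

    def search(self, key: int):
        for idx, (value, mask) in enumerate(self.entries):
            if (key & mask) == (value & mask):  # compare only the cared bits
                return idx
        return None

t = TCAM()
t.add(0b1010_0000, 0b1111_0000)  # matches any key whose high nibble is 1010
t.add(0b0000_0001, 0b1111_1111)  # exact match on 0x01
assert t.search(0b1010_1111) == 0
assert t.search(0b0000_0001) == 1
assert t.search(0b0110_0000) is None
```

In hardware all entries are compared in parallel in one cycle; the sequential loop here is only for clarity, and it is exactly that parallel compare fabric that makes TCAM area and power expensive enough to motivate designs like the paper's.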
Analysis of Database Issues using AHF and Machine Learning v2 - AOUG2022 (Sandesh Rao)
The VP of AIOps for the Autonomous Database discusses AHF (Autonomous Health Framework) and how it is used for automatic issue detection, diagnostic collection, and analysis of database issues using machine learning. AHF includes components like EXAchk for compliance checking, TFA for issue notification and support, and the Cluster Health Monitor and Cluster Health Advisor for monitoring cluster and database health using techniques like anomaly detection, diagnostics, and prognosis. It also discusses how AHF is calibrated and used to check for health issues and potential failures in an autonomous database deployment.
This document provides an overview of z/OS virtual memory and the Virtual Storage Manager (VSM). It discusses basic memory management concepts including real, auxiliary, and virtual memory. It describes how VSM allocates and manages 31-bit virtual storage through subpools and uses storage keys for protection. The document also covers common, private, and shared storage areas, VSM services, and options in the VSM DIAGxx member to enable tracing and health checks.
Amoeba is a distributed operating system developed at Vrije University in Amsterdam in the 1980s. It connects multiple computers over a network to function as a single system. Amoeba uses a client-server model with remote procedure calls and represents resources as capabilities-protected objects. It features a small kernel design and was an early experiment in wide area networking between distributed sites. Amoeba aimed to provide transparency, distribution, parallelism, and high performance through its object-based architecture and processor pool design.
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...xKinAnx
The document provides instructions for installing and configuring Spectrum Scale 4.1. Key steps include: installing Spectrum Scale software on nodes; creating a cluster using mmcrcluster and designating primary/secondary servers; verifying the cluster status with mmlscluster; creating Network Shared Disks (NSDs); and creating a file system. The document also covers licensing, system requirements, and IBM and client responsibilities for installation and maintenance.
WebSphere Technical University: Top WebSphere Problem Determination FeaturesChris Bailey
Problem determination is an important focus area in the IBM WebSphere Application Server. Serviceability improvements have been added that have greatly improved the ability to find root causes of problems in both the full IBM WebSphere Application Server profile, and the newer Liberty profile. The session focuses on how to effectively use serviceability improvements added to the application server since V8.0. This includes high performance extensibe logging, cross-component trace, IBM Support Assistant data collector, timed operations, memory leak detection/prevention, and IBM Support Assistant 5.
Presented at the WebSphere Technical University 2014, Dusseldorf
This document summarizes a presentation on reverse engineering the Rocket-Chip SoC generator to develop a customized SoC called Aghaaz. The presentation covers deconstructing the Rocket-Chip software architecture, developing a Micro-Architecture and Software Specification (MASS) document, configuring an Aghaaz SoC using the MASS document, and generating the SoC from the Rocket-Chip generator. Key aspects included developing object-oriented representations of Rocket-Chip modules, flowcharts to explain the code, and configuring an RV32 core with caches and extensions.
Similar to Specification_of_NVRAM_Manager.pptx (20)
Dahua provides a comprehensive guide on how to install their security camera systems. Learn about the different types of cameras and system components, as well as the installation process.
Charging Fueling & Infrastructure (CFI) Program by Kevin MillerForth
Kevin Miller, Senior Advisor, Business Models of the Joint Office of Energy and Transportation gave this presentation at the Forth and Electrification Coalition CFI Grant Program - Overview and Technical Assistance webinar on June 12, 2024.
car rentals in nassau bahamas | atv rental nassau bahamasjustinwilson0857
At Dash Auto Sales & Car Rentals, we take pride in providing top-notch automotive services to residents and visitors alike in Nassau, Bahamas. Whether you're looking to purchase a vehicle, rent a car for your vacation, or embark on an exciting ATV adventure, we have you covered with our wide range of options and exceptional customer service.
Website: www.dashrentacarbah.com
Charging Fueling & Infrastructure (CFI) Program Resources by Cat PleinForth
Cat Plein, Development & Communications Director of Forth, gave this presentation at the Forth and Electrification Coalition CFI Grant Program - Overview and Technical Assistance webinar on June 12, 2024.
2. www.sasken.com | Copyright: Sasken Technologies Ltd. 2
Introduction to Memory Stack->
Memory Stack:-
• Memory Stack (MemStack) provides basic memory management services to the upper application layer and to the Basic Software (BSW) modules of the AUTOSAR layered architecture.
• These memory management services ensure access to the memory cluster, i.e. to the devices or software functions, for reading and writing data to non-volatile memory media such as Flash or EEPROM.
Memory Stack
Introduction to NVRAM Manager->
NVRAM Manager:-
• Manages the NV data of an EEPROM abstraction (EA) or a Flash EEPROM emulation (FEE) device
• Provides services to ensure the storage and maintenance of NV (non-volatile) data
• Provides the required synchronous/asynchronous services for the management and the maintenance of NV data (init/read/write/control)
Terms to understand
1. NV data:- Data whose values are changeable but must remain available across power cycles; it therefore needs to be stored in non-volatile memory.
2. EA:-
• The EEPROM driver provides services for reading, writing and erasing data to/from an EEPROM.
• It also provides a service for comparing a data block in the EEPROM with a data block in memory (e.g. RAM).
• The EA module abstracts from the addressing scheme of the underlying EEPROM driver and hence provides a uniform addressing scheme.
Continue->
3. Flash EEPROM Emulation (FEE) Module:-
• The Flash EEPROM Emulation (FEE) abstracts from the device-specific addressing scheme and segmentation
• It provides the upper layer (NvM) with a virtual addressing scheme and segmentation, as well as a "virtually" unlimited number of erase cycles
Dependencies to other modules->
1. Header file structure:-
• The NvM module shall include NvM.h, Dem.h and MemIf.h
• Only NvM.h shall be included by the upper layer
2. Memory Abstraction Modules:-
• Abstract the NvM module from the subordinate drivers, which are hardware dependent
• Provide a runtime translation of each block access initiated by the NvM module to select the corresponding driver functions
• The memory abstraction module is chosen via the NVRAM block device ID, which is configured for each NVRAM block
3. CRC module (configuration parameter):-
• The NvM module uses CRC generation routines (8/16/32-bit) to check and to generate CRCs for NVRAM blocks as a configurable option
• The CRC routines have to be provided externally
Continue->
4. Capability of the underlying drivers
• A set of underlying driver functions has to be provided for every configured NVRAM device as, for
example, internal or external EEPROM or FLASH devices
• The unique driver functions inside each set of driver functions are selected during runtime via a
memory hardware abstraction module
• A set of driver functions has to include all the needed functions to write to, to read from or to maintain
(e.g. erase) a configured NVRAM device.
Layer structure
• Communication interaction of module NvM
DEM
• The Diagnostic Event Manager (DEM) handles and stores the events detected by diagnostic monitors in both Software Components (SW-Cs) and Basic Software (BSW) modules. The stored event information is available through an interface to other BSW modules or SW-Cs.
• A diagnostic event is the status (pass/fail) of a system under test for a set of test conditions.
• If an event is qualified as failed, it becomes active; otherwise it is passive.
There are two types of events:
• BSW events: reported by basic software using the Dem_ReportErrorStatus API
• SW-C events: reported by the application using the Dem_SetEventStatus API
Det->
• Development Error Tracer (DET)
The Development Error Tracer provides functionality to support the detection and tracing of errors during the development of SW-Cs and other Basic Software modules.
For this purpose the Development Error Tracer receives and evaluates error messages from these components and modules.
MemIf & BswM
• The Memory Interface (MemIf) abstracts the upper layer from the lower-layer Flash and EEPROM modules and provides a uniform address space. This module allows the NVRAM Manager to access several memory abstraction modules (FEE or EA modules).
• The BswM interacts with different BSW and application modules. It receives application mode requests through the RTE, and the mode switch notifications from the BswM also pass through the RTE to the application.
SchM->
• BSW Scheduler is a part of system services and provides services for all modules of all layers.
• It provides services for Integration
• Embed BSW modules into AUTOSAR OS context
• Trigger main processing functions of the BSW modules
Limitations
• Limitations are given mainly by the finite number of "Block Management Types" and their individual treatment of NV data.
• These limits can be mitigated by enhanced user-defined management information, which can be stored as a structured part of the real NV data.
• In this case the user-defined management information has to be handled by the application.
Addressing scheme for the memory hardware abstraction
SWS_NvM_00051
• The Memory Abstraction Interface and the underlying Flash EEPROM Emulation and EEPROM Abstraction layers provide the NvM module with a virtual linear 32-bit address space, composed of a 16-bit block number and a 16-bit block address offset.
• The NvM module thus allows a (theoretical) maximum of 65536 logical blocks, each logical block having a (theoretical) maximum size of 64 KB.
Continue->
SWS_NvM_00122
• The NvM module shall further subdivide the 16-bit Fee/Ea block number into the following parts:
• NV block base number (NVM_NV_BLOCK_BASE_NUMBER, a configuration parameter that performs the link between the NVM_NVRAM_BLOCK_IDENTIFIER used by the SW-Cs and the FEE_BLOCK_NUMBER) with a bit width of (16 - NVM_DATASET_SELECTION_BITS)
• Data index with a bit width of (NVM_DATASET_SELECTION_BITS)
Continue->
• SWS_NvM_00343
Handling/addressing of redundant NVRAM blocks shall be done towards the memory hardware abstraction in the same way as for dataset NVRAM blocks, i.e. the redundant NV blocks shall be managed by use of the configuration parameter NvMDatasetSelectionBits
• SWS_NvM_00123
NVM_NV_BLOCK_BASE_NUMBER shall be located in the most significant bits of the Fee/Ea block number
Example->
• The configuration parameter NvMDatasetSelectionBits is configured to be 2. This leaves 14 bits as the range for the configuration parameter NvMNvBlockBaseNumber.
Range of NvMNvBlockBaseNumber: 0x1..0x3FFE
Range of data index: 0x0..0x3 (= 2^NvMDatasetSelectionBits - 1)
Range of FEE_BLOCK_NUMBER/EA_BLOCK_NUMBER: 0x4..0xFFFB
FEE/EA_BLOCK_NUMBER can be calculated using the following formula:
FEE/EA_BLOCK_NUMBER = (NvMNvBlockBaseNumber << NvMDatasetSelectionBits) + DataIndex
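The formula above can be sketched in C. This is a minimal illustration only (the function name is ours, not an AUTOSAR API); with NvMDatasetSelectionBits = 2 it reproduces the native (base 2), redundant (base 3) and dataset (base 4) block numbers used on the following slides.

```c
#include <stdint.h>

/* SWS_NvM_00122/00123 addressing split: the NV block base number sits in
 * the most significant bits, the data index fills the lower
 * NvMDatasetSelectionBits bits. Illustrative helper, not an NvM API. */
static uint16_t fee_block_number(uint16_t nv_block_base_number,
                                 uint16_t data_index,
                                 uint16_t dataset_selection_bits)
{
    return (uint16_t)((nv_block_base_number << dataset_selection_bits)
                      + data_index);
}
```

For example, with 2 dataset selection bits, a redundant block with base number 3 maps its two NV blocks to 12 and 13.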
Native NVRAM block->
• The Native NVRAM block is the simplest block management type. It allows storage to/retrieval from NV
memory with a minimum of overhead.
• For a native NVRAM block with NvMNvBlockBaseNumber = 2:
• NV block is accessed with FEE/EA_BLOCK_NUMBER = 8
Redundant NVRAM block ->
• Redundant NVRAM block provides enhanced fault tolerance, reliability and availability.
• It increases resistance against data corruption.
For a redundant NVRAM block with NvMNvBlockBaseNumber = 3
1st NV block with data index 0 is accessed with FEE/EA_BLOCK_NUMBER = 12
2nd NV block with data index 1 is accessed with FEE/EA_BLOCK_NUMBER = 13
Dataset NVRAM block ->
• The Dataset NVRAM block is an array of equally sized data blocks (NV/ROM). The application can at one
time access exactly one of these elements.
For a dataset NVRAM block with NvMNvBlockBaseNumber = 4, NvMNvBlockNum = 3:
• NV block #0 with data index 0 is accessed with FEE/EA_BLOCK_NUMBER = 16
• NV block #1 with data index 1 is accessed with FEE/EA_BLOCK_NUMBER = 17
• NV block #2 with data index 2 is accessed with FEE/EA_BLOCK_NUMBER = 18
Basic Storage objects
1. NV block:-
• The NV block is a basic storage object and represents a memory area consisting of NV user data and (optionally) a CRC value and (optionally) a NV block header.
2. RAM block:-
• The RAM block is a basic storage object and represents an area in RAM consisting of user data and (optionally) a CRC value and (optionally) a NV block header.
• The CRC is only available if the corresponding NV block(s) also have a CRC.
• The data area of a RAM block shall be accessible from the NVRAM Manager and from the application side (data passing from/to the corresponding NV block).
• RAM block data shall contain the permanently or temporarily assigned user data.
In case of permanent data, the address of the RAM block data is known at configuration time.
In case of temporary data, the address of the RAM block data is not known at configuration time and will be passed to the NvM module at runtime.
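The permanent/temporary distinction can be shown with a small mock. NvM_ReadBlock_mock is a simplified stand-in for a block read, not the real NvM service: here a NULL destination pointer means "use the permanently configured RAM block", while a non-NULL pointer is the temporary RAM block address supplied at runtime.

```c
#include <stdint.h>
#include <string.h>

typedef int Std_ReturnType;
#define E_OK 0

static uint8_t nv_memory[8] = {1, 2, 3, 4, 5, 6, 7, 8}; /* mock NV block   */
static uint8_t permanent_ram[8];          /* address fixed at config time  */

/* Mock read: copies the NV block either into the permanently configured
 * RAM block or into a temporary buffer whose address is only known now. */
static Std_ReturnType NvM_ReadBlock_mock(uint16_t block_id, void *dst)
{
    (void)block_id;
    uint8_t *target = (dst != NULL) ? (uint8_t *)dst : permanent_ram;
    memcpy(target, nv_memory, sizeof nv_memory);
    return E_OK;
}
```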
Continue->
3. ROM block:- Provides default data in case of an empty or damaged NV block.
4. Administrative block:-
• It is held in RAM and contains the block index (used for dataset NV blocks) as well as error/status information of the corresponding NVRAM block.
• The NvM module shall use state information of the permanent RAM block, or of the RAM mirror in the NvM module in case of explicit synchronization (invalid/valid), to determine the validity of the permanent RAM block user data.
• The administrative block shall be invisible to the application and is used exclusively by the NvM module for security and administrative purposes of the RAM block and the NVRAM block itself.
• There are two states which the RAM block can indicate:-
Invalid state:- the data area of the respective RAM block is invalid.
Valid state:- the data area of the respective RAM block is valid.
• The NvM module uses the error/status field to manage the error/status value of the last request.
5. NV block header:- If the mechanism "Static Block ID" is enabled, the first section of the NV block is the NV block header.
Block management types
The following types of NVRAM storage shall be supported by the NvM module implementation:
• NVM_BLOCK_NATIVE:- NVRAM storage shall consist of the following basic storage objects:
NV Blocks: 1
RAM Blocks: 1
ROM Blocks: 0..1
Administrative Blocks: 1
• NVM_BLOCK_REDUNDANT:-
NV Blocks: 2
RAM Blocks: 1
ROM Blocks: 0..1
Administrative Blocks: 1
• NVM_BLOCK_DATASET:-
NV Blocks: 1..(m<256)
RAM Blocks: 1
ROM Blocks: 0..n
Administrative Blocks: 1
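The three block management types differ mainly in how many NV blocks back one NVRAM block, which a short sketch can make concrete (the enum mirrors the names above; the helper function and its parameter names are illustrative, not ECUC parameters):

```c
#include <stdint.h>

typedef enum {
    NVM_BLOCK_NATIVE,     /* 1 NV block, minimum overhead            */
    NVM_BLOCK_REDUNDANT,  /* 2 NV blocks for fault tolerance         */
    NVM_BLOCK_DATASET     /* array of 1..(m<256) equally sized blocks */
} NvM_BlockManagementType;

/* Number of NV blocks behind one NVRAM block of the given type;
 * dataset_size corresponds to NvMNvBlockNum for dataset blocks. */
static uint16_t nv_block_count(NvM_BlockManagementType type,
                               uint16_t dataset_size)
{
    switch (type) {
    case NVM_BLOCK_NATIVE:    return 1u;
    case NVM_BLOCK_REDUNDANT: return 2u;
    case NVM_BLOCK_DATASET:   return dataset_size;
    default:                  return 0u;
    }
}
```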
NVRAM block structure
• A NVRAM block shall consist of the mandatory basic storage objects NV block, RAM block and Administrative block.
• All address offsets are given relative to the start addresses of RAM or ROM in the NVRAM block descriptor.
• The start address is assumed to be zero.
• A device-specific base address or offset will be added by the respective device driver if needed.
NVRAM block descriptor table:-
• The single NVRAM block to deal with is selected via the NvM module API by providing a subsequently assigned Block ID.
• All structures related to the NVRAM block descriptor table, and their addresses in ROM (FLASH), have to be generated during configuration of the NvM module.
NVRAM Manager API configuration classes
• The API configuration classes adapt the NvM module to limited hardware resources:
API configuration class 3: All specified API calls are available; the maximum functionality is supported.
API configuration class 2: An intermediate set of API calls is available.
API configuration class 1: Especially for systems with very limited hardware resources, this API configuration class offers only a minimum set of API calls.
Drawbacks:- no immediate data can be written, and the block management type NVM_BLOCK_DATASET is not supported.
Priority Scheme
• The NvM module shall support priority-based job processing.
• Priority-based job processing is enabled/disabled by the configuration parameter NvMJobPrioritization.
• In case of priority-based job processing, the NvM module shall use two queues: one for immediate write jobs (crash data), another for all other jobs (including immediate read/erase jobs).
• If NvMJobPrioritization is disabled, the NvM module:
processes all jobs in FCFS order
does not support immediate write jobs
• The job queue length for multi-block requests originating from NvM_ReadAll and NvM_WriteAll shall be one.
• The NvM module shall not interrupt jobs originating from a NvM_ReadAll request with other requests.
• A NvM_WriteAll request can be aborted by calling NvM_CancelWriteAll; in this case the current block is processed completely, but no further blocks are written.
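The two-queue scheme above can be sketched as follows. The ring-buffer queues and the job representation are our simplification, not NvM internals; the point is only that immediate write jobs are always drained before the FCFS standard queue, and that a full queue is reported to the caller (which in NvM maps to returning E_NOT_OK).

```c
#include <stdbool.h>

#define QUEUE_LEN 8

typedef struct { int jobs[QUEUE_LEN]; int head, tail; } JobQueue;

static bool q_push(JobQueue *q, int job)
{
    if ((q->tail + 1) % QUEUE_LEN == q->head)
        return false;                 /* queue overflow -> E_NOT_OK */
    q->jobs[q->tail] = job;
    q->tail = (q->tail + 1) % QUEUE_LEN;
    return true;
}

static bool q_pop(JobQueue *q, int *job)
{
    if (q->head == q->tail)
        return false;                 /* queue empty */
    *job = q->jobs[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    return true;
}

static JobQueue immediate_q, standard_q;

/* Fetch the next job: immediate write jobs first, then FCFS order. */
static bool next_job(int *job)
{
    return q_pop(&immediate_q, job) || q_pop(&standard_q, job);
}
```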
Continue->
• The preempted job shall subsequently be resumed / restarted by the NvM module. This behavior shall
apply for single block requests as well as for multi block requests.
Functional requirements:-
• For each asynchronous request, notification of the caller after completion of the job shall be a configurable option.
• The NvM module copies one or more blocks from NVRAM to RAM and the other way round.
• The application accesses the RAM data directly.
• The NvM module shall queue an asynchronous "single block" read/write/control request if the block with its specific ID is not already queued or currently in progress.
• Multiple asynchronous "single block" requests are accepted as long as no queue overflow occurs.
• The NvM module shall implement implicit mechanisms for consistency checks of data saved in NV memory.
• The highest-priority request shall be fetched from the queues by the NvM module and processed in a serialized order.
• If no default ROM data is available at configuration time and no callback is defined by "NvMInitBlockCallback", the application is responsible for providing the default initialization data.
• During processing of NvM_ReadAll, the NvM module shall be able to detect corrupted RAM data by performing a checksum calculation.
NVRAM manager startup
• NvM_Init shall be invoked by the BSW Mode Manager.
• Due to strong constraints concerning the ECU startup time, the NvM_Init request shall not contain the initialization of the configured NVRAM blocks.
• The NvM_Init request shall not be responsible for triggering the initialization of the underlying drivers and the memory hardware abstraction; this shall also be handled by the BSW Mode Manager.
• The initialization of the RAM data blocks shall be done by another request, namely NvM_ReadAll. NvM_ReadAll shall be called exclusively by the BSW Mode Manager.
• Software components which use the NvM module are responsible for checking the global error/status information resulting from the NvM module startup.
• The BSW Mode Manager shall poll using NvM_GetErrorStatus.
• If polling is used, the end of the NVRAM startup procedure shall be detected by the global error/status NVM_REQ_OK or NVM_REQ_NOT_OK.
• If callbacks are chosen for notification, software components shall be notified automatically when an assigned NVRAM block has been processed.
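The startup polling described above can be sketched with mocks. NvM_GetErrorStatus and NvM_MainFunction are real NvM services (block ID 0 is reserved for the multi-block request status), but the `_mock` implementations below are stand-ins that simply simulate a NvM_ReadAll job completing after a few main-function cycles.

```c
#include <stdint.h>

typedef int Std_ReturnType;
#define E_OK 0
typedef uint8_t NvM_RequestResultType;
#define NVM_REQ_OK      0u
#define NVM_REQ_PENDING 2u

static int cycles_left = 3; /* mock: job finishes after 3 cycles */

static void NvM_MainFunction_mock(void)
{
    if (cycles_left > 0) cycles_left--;
}

static Std_ReturnType NvM_GetErrorStatus_mock(uint16_t block_id,
                                              NvM_RequestResultType *result)
{
    (void)block_id; /* 0 = reserved ID for multi-block request status */
    *result = (cycles_left == 0) ? NVM_REQ_OK : NVM_REQ_PENDING;
    return E_OK;
}

/* Polling loop as the BSW Mode Manager might run it after NvM_ReadAll;
 * in a real ECU the main function is driven by the SchM, not a loop. */
static NvM_RequestResultType wait_for_readall(void)
{
    NvM_RequestResultType res = NVM_REQ_PENDING;
    while (res == NVM_REQ_PENDING) {
        NvM_MainFunction_mock();
        (void)NvM_GetErrorStatus_mock(0u, &res);
    }
    return res;
}
```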
Continue->
• The software component is responsible for setting the proper index position via "NvM_SetDataIndex".
• To get the current index position of a "Dataset Block", the software component shall use "NvM_GetDataIndex".
NVRAM manager shutdown
• The basic shutdown procedure shall be done by the request "NvM_WriteAll".
NVRAM block consistency check
• The data consistency check of a NVRAM block shall be done by CRC recalculations of its corresponding NV block.
Error recovery
• The NvM module shall provide error recovery on read for NVRAM blocks of block management type "NVM_BLOCK_REDUNDANT" by loading the RAM block with default values.
• The NvM module shall provide error recovery on write by performing write retries, regardless of the NVRAM block management type.
• The job of the function NvM_WriteBlock shall check the number of write retries using a write retry counter to avoid infinite loops.
• The configuration parameter "NVM_MAX_NUM_OF_WRITE_RETRIES" describes the maximum number of write retries for the job of the function "NvM_WriteBlock" when RAM block data cannot be written successfully to the corresponding NV block.
Recovery of a RAM block with ROM data
Implicit recovery of a RAM block with ROM default data
Data content of the corresponding NV block shall remain unmodified during the implicit recovery
The implicit recovery shall not be provided during startup
Explicit recovery of a RAM block with ROM default data
• For explicit recovery with ROM block data the NvM module shall provide functions
“NvM_RestoreBlockDefaults” and “NvM_RestorePRAMBlockDefaults” to restore ROM data to its
corresponding RAM block
• The function “NvM_RestoreBlockDefaults” or “NvM_RestorePRAMBlockDefaults” shall be used by the
application to restore ROM data to the corresponding RAM block every time it is needed
Detection of an incomplete write operation to a NV block
• The detection of an incomplete write operation to a NV block is out of scope of the NvM module.
• This is handled and detected by the memory hardware abstraction.
• The NvM module expects to get information from the memory hardware abstraction if a referenced NV block is
invalid or inconsistent and cannot be read when requested.
• SW-Cs may use “NvM_InvalidateNvBlock” to prevent lower layers from delivering old data
Termination of single block & multi block requests
• All asynchronous requests provided by the NvM module (except for "NvM_CancelWriteAll") shall indicate their result in the designated error/status field of the corresponding Administrative block.
• For communication with an application SW-C, the ECUC configuration parameter "NvMSingleBlockCallback" should be configured to the corresponding Rte_call_<p>_<o> API.
For multi-block requests:
• The NvM module shall use a separate variable to store the result of an asynchronous multi-block request (NvM_ReadAll, NvM_WriteAll including NvM_CancelWriteAll, NvM_ValidateAll).
• The function "NvM_GetErrorStatus" shall return the most recent error/status information of an asynchronous multi-block request (including NvM_CancelWriteAll) in conjunction with the reserved block ID value of '0'.
General handling of asynchronous requests/ job processing
• Every time when CRC calculation is processed within a request, the NvM module shall calculate the CRC
in multiple steps if the referenced NVRAM block length exceeds the number of bytes configured by the
parameter NvMCrcNumOfBytes
• For CRC calculation, the NvM module shall use initial values which are published by the CRC module
• Multiple concurrent single block requests shall be queueable
• The NvM module shall interrupt asynchronous request/job processing in favor of jobs with immediate
priority
• If the invocation of an asynchronous function on the NvM module leads to a job queue overflow, the
function shall return with E_NOT_OK
• On successful enqueuing a request, the NvM module shall set the request result of the corresponding
NVRAM block to NVM_REQ_PENDING
• If the NvM module has successfully processed a job, it shall return NVM_REQ_OK as request result
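The stepwise CRC requirement (NvMCrcNumOfBytes) can be sketched as follows. A simple CRC-8 routine stands in for the AUTOSAR Crc module here; the point of the sketch is that the block is fed to the routine NvMCrcNumOfBytes at a time, carrying the intermediate value between calls, so a single main-function cycle never has to process more than the configured number of bytes.

```c
#include <stdint.h>
#include <stddef.h>

#define NVM_CRC_NUM_OF_BYTES 4u /* example chunk size per processing step */

/* Streaming CRC-8 (polynomial 0x1D) as a stand-in for the Crc module;
 * the previous result is passed back in as the running value. */
static uint8_t crc8_update(uint8_t crc, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (uint8_t)((crc & 0x80u) ? (uint8_t)(crc << 1) ^ 0x1Du
                                          : (uint8_t)(crc << 1));
    }
    return crc;
}

/* CRC over a block computed in NVM_CRC_NUM_OF_BYTES-sized steps,
 * as the NvM job would do it across main-function cycles. */
static uint8_t crc8_in_steps(const uint8_t *block, size_t len)
{
    uint8_t crc = 0xFFu; /* initial value as published by the CRC module */
    for (size_t off = 0; off < len; off += NVM_CRC_NUM_OF_BYTES) {
        size_t chunk = (len - off < NVM_CRC_NUM_OF_BYTES)
                           ? (len - off) : NVM_CRC_NUM_OF_BYTES;
        crc = crc8_update(crc, block + off, chunk);
    }
    return crc;
}
```

Because the CRC routine is streaming, the chunked result is identical to computing it in one pass.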
NVRAM block write protection
• Enabling/disabling of the write protection via the "NvM_SetBlockProtection" function is only allowed when "NvMWriteBlockOnce" is FALSE.
• For all NVRAM blocks configured with NvMBlockWriteProt = TRUE (initial write protection of the NV block), the NvM module shall enable a default write protection.
• For NVRAM blocks configured with NvMWriteBlockOnce == TRUE (write protection after first write), the NvM module shall write to the associated NV memory only once, i.e. in case of a blank NV device. The NvM module shall not allow disabling the write protection explicitly via the "NvM_SetBlockProtection" function, and shall reject any write/erase/invalidate request made prior to the first read request.
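The write-once rule can be sketched with a simplified stand-in for the administrative block (the struct and helper names are illustrative): the first successful write locks the block, and the protection of a write-once block cannot be cleared afterwards.

```c
#include <stdbool.h>

/* Simplified administrative-block state for one NVRAM block. */
typedef struct {
    bool write_block_once;  /* NvMWriteBlockOnce configuration  */
    bool write_protected;
    bool written_once;
} BlockAdmin;

static bool try_write(BlockAdmin *b)
{
    if (b->write_protected)
        return false;             /* request rejected              */
    if (b->write_block_once) {
        b->written_once = true;
        b->write_protected = true; /* lock after the first write   */
    }
    return true;
}

static bool set_block_protection(BlockAdmin *b, bool protect)
{
    if (b->write_block_once)
        return false;             /* not allowed for write-once blocks */
    b->write_protected = protect;
    return true;
}
```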
Continue-> Description
• After initialization, the RAM block is in state INVALID/UNCHANGED until it is updated via "NvM_ReadAll", which causes a transition to state VALID/UNCHANGED.
• In this state WriteAll is not allowed. This state is left if "NvM_SetRamBlockStatus" is invoked.
• If a CRC error occurs, the RAM block changes to state INVALID again, which can then be left via the implicit or explicit error recovery mechanisms.
• After error recovery the block is in state VALID/CHANGED, as the content of the RAM differs from the NVRAM content.
• In case of an unsuccessful block read attempt, it is the responsibility of the application to provide valid data before the next write attempt.
• In case a RAM block is successfully copied to NV memory, the RAM block state shall be set to "valid/unmodified" afterwards.
The VALID / UNCHANGED state
To enter the VALID / UNCHANGED state, at least one of the following must occur:
1. NvM_ReadAll() read the block successfully
2. NvM_ReadBlock finished successfully for the block
3. NvM_WriteBlock finished successfully for the block
4. NvM_WriteAll() wrote the block successfully
The VALID / CHANGED state
To enter the VALID / CHANGED state, at least one of the following must occur:
1. NvM_SetRamBlockStatus called with TRUE for the block
2. NvM_WriteBlock is called for the block
3. NvM_WriteAll will also process the block
4. NvM_ReadBlock called for the block gives default data
5. NvM_RestoreBlockDefaults called for the block finishes successfully
6. NvM_ReadAll gives default data when processing the block
7. NvM_ValidateAll processed the block successfully
INVALID / UNCHANGED state
• This state implies that the NV block is invalid.
• For a DATASET block this means that the NV block contents are invalid for the last instance that was processed.
• To enter the INVALID / UNCHANGED state, at least one of the following must occur:
1. NvM_SetRamBlockStatus called with FALSE for the block
2. NvM_ReadBlock indicates invalidation by user request for the block
3. NvM_ReadBlock indicates corrupted data (if CRC is configured) for the block
4. NvM_ReadBlock indicates a wrong StaticID (if configured) for the block
5. NvM_WriteBlock finished unsuccessfully for the block
6. NvM_WriteAll wrote the block unsuccessfully
7. NvM_InvalidateNvBlock finished successfully for the block
8. NvM_EraseNvBlock finished successfully for the block
Communication and implicit synchronization between application and
NVRAM manager
• If several applications share a NVRAM block by using (different) temporary RAM blocks, synchronization between the applications becomes more complex, and this is likewise not handled by the NvM module.
• In case of using callbacks as the notification method, it can happen that an application gets a notification although the request was not initiated by that application.
• Write requests (NvM_WriteBlock or NvM_WritePRAMBlock) – refer PDF page 51
• Read requests (NvM_ReadBlock or NvM_ReadPRAMBlock) – refer PDF page 51
• Restore default requests (NvM_RestoreBlockDefaults and NvM_RestorePRAMBlockDefaults) – refer PDF page 52
• Multi-block read requests (NvM_ReadAll):-
This request may be triggered only by the BSW Mode Manager at system startup.
This request fills all configured permanent RAM blocks with the data necessary for startup.
If the request fails, or is handled only partially successfully, the NVRAM Manager signals this condition to the DEM and returns an error to the BSW Mode Manager.
The DEM and the BSW Mode Manager have to decide about further measures to be taken; these steps are beyond the scope of the NvM module and are handled in the specifications of the DEM and the BSW Mode Manager.
refer PDF page 53
Multi-block write requests (NvM_WriteAll):-
• This request must only be triggered by the BSW Mode Manager at shutdown of the system.
• This request writes the contents of all modified permanent RAM blocks to NV memory.
• By calling this request only during ECU shutdown, the BSW Mode Manager can ensure that no SW component is able to modify data in the RAM blocks until the end of the operation.
• For implicit synchronization between application and NVRAM Manager, applications have to adhere to the following rules during multi-block write requests:
1. The BSW Mode Manager issues the NvM_WriteAll request, which transfers control to the NvM module.
2. The BSW Mode Manager can use polling to get the status of the request, or can be informed via a callback function.
• Cancel operation (NvM_CancelWriteAll):-
• This asynchronous request cancels a pending NvM_WriteAll request.
• The NvM_CancelWriteAll request shall only be used by the BSW Mode Manager.
Modification of administrative blocks
• If there is a pending single-block operation for a NVRAM block, the application is not allowed to call any
operation that modifies the administrative block, like NvM_SetDataIndex, NvM_SetBlockProtection,
NvM_SetRamBlockStatus, until the pending job has finished.
API specifications
• Refer PDF page 76.
• NvM_ReadAll:- a single block callback (if configured) will be invoked after a NVRAM block has been completely processed. These callbacks enable the RTE to start each SW-C individually.
• NvM_WriteAll:- by calling this request only during ECU shutdown, the BSW Mode Manager can ensure that no SW component is able to modify data in the RAM blocks until the end of the operation.
• NvM_FirstInitAll:- a per-block configuration defines whether a block will be processed by "NvM_FirstInitAll" or not.
• NvM_SetBlockLockStatus:-
This function is to be used mainly by the DCM, but it can also be used by complex device drivers. The function is not included in the ServicePort interface.
Callback notification of the NvM module
• Refer PDF page 131.
Scheduled functions
NvM_MainFunction :-
Service for performing the processing of the NvM jobs.
Cancellation of a Multi Block Request
• The running NvM_WriteAll function completes the NVRAM block currently being processed and stops further writes.
BswM Interaction
• Interactions between NvM and BswM in terms of single block operation
• NvM interaction with BswM in case of a WriteAll cancellation
• NvM interaction with BswM in case of a single block cancellation
Thank You
Visit us at www.sasken.com
Queries: marketing@sasken.com