ok.ru is one of the top 10 internet sites in the world, according to similarweb.com. Under the hood it runs several thousand servers, each owning only a fraction of the data or business logic. A shared-nothing architecture is hard to apply to a social network by its very nature, so a great deal of communication, diverse in kind and volume, happens between these servers. This makes ok.ru one of the largest, most complicated, and most highly loaded distributed systems in the world.
This talk covers our experience building always-available, failure-resilient distributed systems in Java: their basic and not-so-basic failure and recovery scenarios, and our methods of failure testing and diagnostics. We'll also discuss possible disasters and how to prevent or recover from them.
This document provides an overview of ZooKeeper, describing it as a centralized service for maintaining configuration information, providing naming services, and enabling distributed synchronization and group services for distributed applications. It then discusses what ZooKeeper can do, how it is structured like a hierarchical file system, how it works using the Zab consensus algorithm, how to deploy it either on a single server or quorum cluster, how to interact with it using command line tools or APIs, and some key features like notifications, ordering, and high availability.
Node.js at Joyent: Engineering for Production — jclulow
Joyent is one of the largest deployers of Node.js in production systems. In order to successfully deploy large-scale, distributed systems, we must understand the systems we build! For us, that means having first-class tools for debugging our software, and understanding and improving its performance.
Come on a whirlwind tour of the tools and techniques we use at Joyent as we build out large-scale distributed software with Node.js: from mdb for Post-Mortem Debugging, to Flame Graphs for performance analysis; from DTrace for dynamic, production-safe instrumentation and tracing, to JSON-formatted logging with Bunyan.
Add a bit of ACID to Cassandra. Cassandra Summit EU 2014 — odnoklassniki.ru
OK.ru is one of the largest social networks for Russian-speaking audiences, with 80+ million unique visitors monthly. ok.ru has used Cassandra since 2010 and has made a number of improvements to the C* 2.0 and 2.1 codebase. Until recently, more than 50 TB of data in OK.ru's OLTP systems was managed by Microsoft SQL Server, which is very expensive, hard to scale, and cannot protect us from an outage if one of our data centers fails. We wanted new, fast, scalable, and reliable storage for this data. The data requires ACID transactions, so that we don't have to rewrite all the application code from scratch. C* does not support such transactions, only lightweight ones, so we implemented a new storage engine with ACID semantics and selected features of the SQL world ourselves. Still, it has C* at its heart. We'll discuss the internals of the new storage, which features of C* we had to alter, and which we rewrote from scratch. We'll also talk about our operational experience with it in production.
Zookeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and leader election. It allows for distributed applications to synchronize transactions and configuration updates. Zookeeper uses a data model of znodes that can be persistent, ephemeral, or sequential. Clients can set watches on znodes to receive notifications of changes. Zookeeper provides consistency guarantees including sequential consistency, atomicity, and a single system image. Many large companies and open source projects use Zookeeper for coordination across distributed systems.
The document evaluates Lustre 2.9 and OpenStack for providing isolated POSIX file systems to tenants in OpenStack. It finds that Lustre 2.9's UID mapping can isolate tenants while maintaining high performance, and that both physical and virtual Lustre routers can route traffic between tenants effectively, albeit with some increased east-west traffic when virtual routers are used.
The document discusses RabbitMQ boot steps, which take care of starting the many RabbitMQ subsystems in the proper order and respecting dependencies. It was created by @leastfixedpoint to handle ordering and allow modifying configuration at startup. The boot steps are defined using Erlang modules and module attributes, and their execution involves functions like rabbit_misc:all_module_attributes/1, rabbit:boot_steps/0, and rabbit_misc:build_acyclic_graph/3.
This document discusses issues related to low latency and mechanical sympathy in a Java application for routing orders. It addresses GC pauses, power management, NUMA, and cache pollution. To achieve sub-millisecond performance, the document recommends tuning the GC, BIOS, OS, NUMA configuration, and avoiding cache pollution through thread affinity and CPU isolation. The goal is to optimize performance without changing application code through mechanical sympathy with the hardware.
Thijs Feryn will give a presentation on using Varnish as an HTTP accelerator and caching proxy at the PHP Barcelona Conference on October 29, 2010. He discusses how Varnish can be used to cache content, protect servers from overload, and improve site performance, achieving cache hit ratios of over 50% on average.
Thinking outside the box, learning a little about a lot — Mark Broadbent
Being a SQL specialist is not enough. Windows and the interfaces SQL consumes are becoming ever more complex. Being aware of these technologies and beyond can make you a better DBA. You will learn how to become a DBA 2.5, SQL Enterprise name resolution strategies, create your own private VSAN & VCluster, fail instances BETWEEN separate clusters and use the CLR to manage the OS from SQL.
This document discusses improvements made to LXC container support in Ganeti through a Google Summer of Code project. Key areas worked on included fixing existing LXC integration, adding unit tests, setting up quality assurance, and adding features like migration. The project was mentored by Hrvoje Ribicic. Future work may include two-level isolation of containers within VMs and live migration of LXC instances.
Red Hat Enterprise Linux OpenStack Platform on Inktank Ceph Enterprise — Red_Hat_Storage
This document summarizes performance testing of OpenStack with Cinder volumes on Ceph storage. It tested scaling performance with increasing instance counts on a 4-node and 8-node Ceph cluster. Key findings include:
- Large file sequential write performance peaked with a single instance per server due to data striping across OSDs. Read performance peaked at 32 instances per server.
- Large file random I/O performance scaled linearly with increasing instances up to the maximum tested (512 instances).
- Small file operations showed good scaling up to 32 instances per server for creates and reads, but lower performance for renames and deletes.
- Performance tuning like tuned profiles, device readahead, and Ceph journal configuration improved both
Kernel Recipes 2015 - Porting Linux to a new processor architecture — Anne Nicolas
Getting the Linux kernel running on a new processor architecture is a difficult process. Worse still, there is not much documentation available describing the porting process.
After spending countless hours becoming almost fluent in many of the supported architectures, I discovered that a well-defined skeleton shared by the majority of ports exists. Such a skeleton can logically be split into two parts that intersect a great deal.
The first part is the boot code, meaning the architecture-specific code that is executed from the moment the kernel takes over from the bootloader until init is finally executed. The second part concerns the architecture-specific code that is regularly executed once the booting phase has been completed and the kernel is running normally. This second part includes starting new threads, dealing with hardware interrupts or software exceptions, copying data from/to user applications, serving system calls, and so on.
In this talk I will provide an overview of the procedure, or at least one possible procedure, that can be followed when porting the Linux kernel to a new processor architecture.
Joël Porquet – Joël was a post-doc at Pierre and Marie Curie University (UPMC) where he ported Linux to TSAR, an academic processor. He is now looking for new adventures.
Hands on Virtualization with Ganeti (part 1) - LinuxCon 2012 — Lance Albertson
This document is part 1 of a presentation on virtualization with Ganeti. It introduces Ganeti as virtual machine management software that manages clusters of physical machines running Xen, KVM, or LXC. It discusses Ganeti's components, architecture, features like live migration and failure recovery using DRBD, and how it is used at OSU Open Source Lab to power hundreds of VMs. The presentation then demonstrates initializing a Ganeti cluster, adding nodes and instances, and recovering from failures before opening for questions.
This document discusses NoSQL databases and provides examples of different types. It begins by discussing motivations for NoSQL like performance, scalability, and flexibility over traditional relational databases. It then categorizes NoSQL databases as key-value stores like Redis and Tokyo Cabinet, column-oriented stores like BigTable and Cassandra, document-oriented stores like CouchDB and MongoDB, and graph databases like Neo4J. For each category it provides comparisons on attributes and examples using different languages.
Checkpoint/Restore mostly in Userspace — Andrey Vagin
CRIU is a tool that provides checkpoint and restore capabilities in Linux. It allows saving the state of processes and restoring them later. This allows for failure recovery, live migration, rebootless upgrades, and other use cases. CRIU works by dumping kernel objects and process data to disk, then restoring that data to recreate the processes. It has support for Linux processes, threads, memory mappings, files, sockets, namespaces and more. CRIU is tested on real applications and works to integrate with the Linux kernel and distributions.
Nvidia® CUDA™ 5.0 Sample Evaluation Result Part 1 — Yukio Saito
This document summarizes the results of running CUDA 5.0 samples on a system with a GTX 560 Ti GPU. Many samples ran successfully but some failed because the GTX 560 Ti only supports up to compute capability 2.1, while some samples require capability 3.0 or higher. The samples that failed included those using CUDA Dynamic Parallelism or requiring capability 3.5 or higher. Overall the evaluation provided performance data for many samples and identified requirements for samples that did not run.
This document provides an overview of Proximal Data's AutoCache software and how it can accelerate storage performance in a virtualized environment using Nytro WarpDrive PCIe flash storage. It discusses how AutoCache works, benchmarks showing significant IOPS and latency improvements when using a Nytro WarpDrive 6203 card with AutoCache compared to a HDD baseline. It also shows nearly linear scaling of IOPS with additional Nytro cards under AutoCache 2.0. The document provides guidance on monitoring and optimizing performance further through settings like queue depth and discusses other related solutions and resources.
This document discusses various tools for monitoring and analyzing metaspace and class metadata in the Java virtual machine. It describes using -XX:+PrintGCDetails to print details of full GC collections including metaspace usage. It also discusses using MBeans, jstat -gc, and VisualVM to monitor memory pools like metaspace and class space. The document further explains using jmap -clstats to view statistics per class loader and GC.class_stats to view statistics on Java class metadata, which both require unlocking diagnostic VM options.
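Beyond the command-line tools above, the same memory pools can be read programmatically via the platform MBeans the summary mentions. The sketch below (class name is ours) iterates all memory pools and prints the metaspace-related ones; on HotSpot these pools are named "Metaspace" and "Compressed Class Space", though names can vary between JVM builds.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceProbe {
    public static void main(String[] args) {
        // Walk every memory pool exposed by the JVM and report the
        // class-metadata pools, analogous to what jstat -gc shows.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Metaspace") || name.contains("Class Space")) {
                System.out.println(name + ": used=" + pool.getUsage().getUsed()
                        + " bytes, committed=" + pool.getUsage().getCommitted() + " bytes");
            }
        }
    }
}
```

The same MBeans back what VisualVM displays, so this is a convenient way to put metaspace numbers into your own monitoring.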
APC & Memcache the High Performance Duo — Anis Berejeb
The document discusses Apc & Memcached as a high-performance caching duo for PHP applications. It provides an overview of APC for opcode caching and user data caching. It then discusses Memcached which allows distributed caching across multiple servers and provides more advanced features than APC. Examples are given for basic usage of storing, retrieving, and deleting data from the caches with both extensions.
The document discusses accessing the virtual router in a CloudStack KVM environment. It explains that the virtual router can be accessed by SSHing into its link local IP address from the compute node it is running on. When logging in, it is shown that the link local IP address may change if the virtual router is restarted. The internal interfaces, routing tables, and network configuration of the running virtual router are then displayed and examined.
The document discusses the use of APC (Alternative PHP Cache) and Memcached for caching and improving performance in PHP applications. APC is useful for opcode caching and basic user data caching, while Memcached provides a more robust distributed caching system and additional features like data segmentation across multiple servers and atomic counters. The document provides examples of using the basic caching functions of APC and Memcached, as well as more advanced techniques.
The document discusses the evolution of Cassandra's data modeling capabilities over different versions of CQL. It covers features introduced in each version such as user defined types, functions, aggregates, materialized views, and storage attached secondary indexes (SASI). It provides examples of how to create user defined types, functions, materialized views, and SASI indexes in CQL. It also discusses when each feature should and should not be used.
zfsday talk (a video is on the last slide). The performance of the file system, or disks, is often the target of blame, especially in multi-tenant cloud environments. At Joyent we deploy a public cloud on ZFS-based systems, and frequently investigate performance with a wide variety of applications in growing environments. This talk is about ZFS performance observability, showing the tools and approaches we use to quickly show what ZFS is doing. This includes observing ZFS I/O throttling, an enhancement added to illumos-ZFS to isolate performance between neighbouring tenants, and the use of DTrace and heat maps to examine latency distributions and locate outliers.
Application scalability is an important concern. Many scalability losses come from code containing locks, which produce significant contention under heavy load.
In this presentation we will cover various techniques (striping, copy-on-write, ring buffers, spinning, ...) that allow us to reduce this contention or obtain lock-free code. We will also explain the concepts of compare-and-swap and memory barriers.
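As a rough illustration of the compare-and-swap technique this abstract refers to, here is a minimal lock-free counter sketch in Java (the class name is ours). The CAS retry loop is the core pattern: read the current value, compute the new one, and attempt an atomic swap; if another thread got there first, the swap fails and we retry.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LockFreeCounter {
    private final AtomicLong value = new AtomicLong();

    // Classic CAS retry loop: no lock is ever taken, so there is
    // no lock contention; under conflict we simply loop and retry.
    public long incrementBy(long delta) {
        while (true) {
            long current = value.get();
            long next = current + delta;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public long get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter c = new LockFreeCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.incrementBy(1); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // No updates are lost even though no lock was used.
        System.out.println(c.get());
    }
}
```

In production code one would usually just call `AtomicLong.addAndGet`, which uses the same CAS loop internally; the explicit loop is shown here to make the mechanism visible.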
The document discusses using jcmd to troubleshoot Java applications. It provides an overview of the jcmd command and describes the various domains and suffixes that can be used with jcmd to obtain diagnostic information or control the JVM. These include getting thread dumps, heap details, JIT compiler data, and configuring Java logging. The document also demonstrates some example jcmd commands.
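Every jcmd invocation starts from the target JVM's process ID; the small sketch below (our own naming) prints the current JVM's PID, and its comments list a few representative diagnostic commands of the kinds the summary mentions (these are standard HotSpot jcmd commands; `jcmd <pid> help` lists exactly what a given JVM supports).

```java
public class PidForJcmd {
    public static void main(String[] args) {
        // The PID of this JVM is the handle jcmd needs.
        long pid = ProcessHandle.current().pid();
        System.out.println(pid);
        // From a shell, against this PID:
        //   jcmd <pid> Thread.print     -- thread dump
        //   jcmd <pid> GC.heap_info     -- heap details
        //   jcmd <pid> Compiler.queue   -- JIT compiler queue
        //   jcmd <pid> VM.log list      -- inspect Java logging configuration
        //   jcmd <pid> help             -- enumerate all supported commands
    }
}
```

Running `jcmd` with no arguments also lists all local JVMs and their PIDs, which is often the quickest starting point.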
- The document discusses the configuration and management of the Logical Volume Manager (LVM) and the XFS filesystem on Red Hat Enterprise Linux 6.x.
- It describes the components of LVM including physical volumes, volume groups, and logical volumes. It then shows commands to create a volume group from two physical volumes, extend the logical volume size by adding a new physical disk, and grow the XFS filesystem.
- The key steps shown are: creating physical volumes from disks, creating a volume group, making a logical volume, formatting it as XFS, mounting it, adding a new disk as physical volume to the volume group, extending the logical volume size, and growing the XFS filesystem.
Wed, August 24, 9:00am – 9:45am
Youtube: https://youtu.be/mCdoq7P5Zkw
First Name: Norm
Last Name: Green
Email where you can always be reached: norm.green@gemtalksystems.com
Type: Talk
Abstract: GemStone/64 product update and road map. A review of what's
new in version 3.3 and a preview to what we're working on for version
3.4. This year, I will also start with a few slides describing what
GemStone is and benefits of using it ("GemStone-101").
Bio: Norm Green started his career in 1989 at IBM in Toronto, Canada
as a quality assurance engineer. In 1993, he moved to the DACS (Data
Acquisition and Control System) team, where he helped design and build
a site-wide data collection system in VisualWorks and GemStone/S
Smalltalk.
In 1996, he joined GemStone Systems as a Senior Consultant and
traveled the world helping GemStone/S customers be successful.
Currently, Norm lives near Portland, Oregon and holds the position of
Chief Technical Officer at GemTalk Systems.
Infrastructure review - Shining a light on the Black Box — Miklos Szel
Scenario: You work as a consultant and a new client has just signed on. Their DBA left suddenly, leaving nothing but some outdated documentation in their wiki. After the kick-off meeting you realise that the operations and development teams know little to nothing about the databases. They have been encountering intermittent problems with the application’s performance and suspect it’s related to the databases. You are told: "Please fix it ASAP!” So you have your public key installed on their jumphost and they manage to provide you with a 6-character MySQL root password. This is where your journey begins! During this session you will learn some best practices for discovering a new environment, finding possible threats and weaknesses, and determining which key metrics to focus on for performance and reliability. We will cover architecture, replication, OS- and MySQL-level configuration, storage engines, failover strategies, backups and restores, monitoring, query tuning, and possible ways to save money. The goal at the end of the presentation is to have a prioritized action plan. I will also explain the usage and output of some tools/wrappers that help during an infrastructure review; examples include maximum integer usage reports, table fragmentation, and duplicate keys (we will leverage multiple Percona Toolkit scripts as well as some lesser-known tools). It is often easy to overlook underlying problems in the infrastructure during day-to-day operations, so this presentation aims to highlight how to identify and resolve potential bottlenecks in your systems.
This document discusses issues related to low latency and mechanical sympathy in a Java application for routing orders. It addresses GC pauses, power management, NUMA, and cache pollution. To achieve sub-millisecond performance, the document recommends tuning the GC, BIOS, OS, NUMA configuration, and avoiding cache pollution through thread affinity and CPU isolation. The goal is to optimize performance without changing application code through mechanical sympathy with the hardware.
Thijs Feryn will give a presentation on using Varnish as an HTTP accelerator and caching proxy at the PHP Barcelona Conference on October 29, 2010. He discusses how Varnish can be used to cache content, protect servers from overload, and improve site performance through caching, hitting ratios of over 50% on average.
Thinking outside the box, learning a little about a lotMark Broadbent
Being a SQL specialist is not enough. Windows and the interfaces SQL consumes are becoming ever more complex. Being aware of these technologies and beyond can make you a better DBA. You will learn how to become a DBA 2.5, SQL Enterprise name resolution strategies, create your own private VSAN & VCluster, fail instances BETWEEN separate clusters and use the CLR to manage the OS from SQL.
This document discusses improvements made to LXC container support in Ganeti through a Google Summer of Code project. Key areas worked on included fixing existing LXC integration, adding unit tests, setting up quality assurance, and adding features like migration. The project was mentored by Hrvoje Ribicic. Future work may include two-level isolation of containers within VMs and live migration of LXC instances.
Red Hat Enterprise Linux OpenStack Platform on Inktank Ceph EnterpriseRed_Hat_Storage
This document summarizes performance testing of OpenStack with Cinder volumes on Ceph storage. It tested scaling performance with increasing instance counts on a 4-node and 8-node Ceph cluster. Key findings include:
- Large file sequential write performance peaked with a single instance per server due to data striping across OSDs. Read performance peaked at 32 instances per server.
- Large file random I/O performance scaled linearly with increasing instances up to the maximum tested (512 instances).
- Small file operations showed good scaling up to 32 instances per server for creates and reads, but lower performance for renames and deletes.
- Performance tuning like tuned profiles, device readahead, and Ceph journal configuration improved both
Kernel Recipes 2015 - Porting Linux to a new processor architectureAnne Nicolas
Getting the Linux kernel running on a new processor architecture is a difficult process. Worse still, there is not much documentation available describing the porting process.
After spending countless hours becoming almost fluent in many of the supported architectures, I discovered that a well-defined skeleton shared by the majority of ports exists. Such a skeleton can logically be split into two parts that intersect a great deal.
The first part is the boot code, meaning the architecture-specific code that is executed from the moment the kernel takes over from the bootloader until init is finally executed. The second part concerns the architecture-specific code that is regularly executed once the booting phase has been completed and the kernel is running normally. This second part includes starting new threads, dealing with hardware interrupts or software exceptions, copying data from/to user applications, serving system calls, and so on.
In this talk I will provide an overview of the procedure, or at least one possible procedure, that can be followed when porting the Linux kernel to a new processor architecture.
Joël Porquet – Joël was a post-doc at Pierre and Marie Curie University (UPMC) where he ported Linux to TSAR, an academic processor. He is now looking for new adventures.
Hands on Virtualization with Ganeti (part 1) - LinuxCon 2012Lance Albertson
This document is part 1 of a presentation on virtualization with Ganeti. It introduces Ganeti as virtual machine management software that manages clusters of physical machines running Xen, KVM, or LXC. It discusses Ganeti's components, architecture, features like live migration and failure recovery using DRBD, and how it is used at OSU Open Source Lab to power hundreds of VMs. The presentation then demonstrates initializing a Ganeti cluster, adding nodes and instances, and recovering from failures before opening for questions.
This document discusses NoSQL databases and provides examples of different types. It begins by discussing motivations for NoSQL like performance, scalability, and flexibility over traditional relational databases. It then categorizes NoSQL databases as key-value stores like Redis and Tokyo Cabinet, column-oriented stores like BigTable and Cassandra, document-oriented stores like CouchDB and MongoDB, and graph databases like Neo4J. For each category it provides comparisons on attributes and examples using different languages.
Checkpoint/Restore mostly in UserspaceAndrey Vagin
CRIU is a tool that provides checkpoint and restore capabilities in Linux. It allows saving the state of processes and restoring them later. This allows for failure recovery, live migration, rebootless upgrades, and other use cases. CRIU works by dumping kernel objects and process data to disk, then restoring that data to recreate the processes. It has support for Linux processes, threads, memory mappings, files, sockets, namespaces and more. CRIU is tested on real applications and works to integrate with the Linux kernel and distributions.
Nvidia® cuda™ 5.0 Sample Evaluation Result Part 1Yukio Saito
This document summarizes the results of running CUDA 5.0 samples on a system with a GTX 560 Ti GPU. Many samples ran successfully but some failed because the GTX 560 Ti only supports up to compute capability 2.1, while some samples require capability 3.0 or higher. The samples that failed included those using CUDA Dynamic Parallelism or requiring capability 3.5 or higher. Overall the evaluation provided performance data for many samples and identified requirements for samples that did not run.
This document provides an overview of Proximal Data's AutoCache software and how it can accelerate storage performance in a virtualized environment using Nytro WarpDrive PCIe flash storage. It discusses how AutoCache works, benchmarks showing significant IOPS and latency improvements when using a Nytro WarpDrive 6203 card with AutoCache compared to a HDD baseline. It also shows nearly linear scaling of IOPS with additional Nytro cards under AutoCache 2.0. The document provides guidance on monitoring and optimizing performance further through settings like queue depth and discusses other related solutions and resources.
This document discusses various tools for monitoring and analyzing metaspace and class metadata in the Java virtual machine. It describes using -XX:+PrintGCDetails to print details of full GC collections including metaspace usage. It also discusses using MBeans, jstat -gc, and VisualVM to monitor memory pools like metaspace and class space. The document further explains using jmap -clstats to view statistics per class loader and GC.class_stats to view statistics on Java class metadata, which both require unlocking diagnostic VM options.
APC & Memcache the High Performance DuoAnis Berejeb
The document discusses Apc & Memcached as a high-performance caching duo for PHP applications. It provides an overview of APC for opcode caching and user data caching. It then discusses Memcached which allows distributed caching across multiple servers and provides more advanced features than APC. Examples are given for basic usage of storing, retrieving, and deleting data from the caches with both extensions.
The document discusses accessing the virtual router in a CloudStack KVM environment. It explains that the virtual router can be accessed by SSHing into its link local IP address from the compute node it is running on. When logging in, it is shown that the link local IP address may change if the virtual router is restarted. The internal interfaces, routing tables, and network configuration of the running virtual router are then displayed and examined.
The document discusses the use of APC (Alternative PHP Cache) and Memcached for caching and improving performance in PHP applications. APC is useful for opcode caching and basic user data caching, while Memcached provides a more robust distributed caching system and additional features like data segmentation across multiple servers and atomic counters. The document provides examples of using the basic caching functions of APC and Memcached, as well as more advanced techniques.
The document discusses the evolution of Cassandra's data modeling capabilities over different versions of CQL. It covers features introduced in each version such as user defined types, functions, aggregates, materialized views, and storage attached secondary indexes (SASI). It provides examples of how to create user defined types, functions, materialized views, and SASI indexes in CQL. It also discusses when each feature should and should not be used.
zfsday talk (a video is on the last slide). The performance of the file system, or disks, is often the target of blame, especially in multi-tenant cloud environments. At Joyent we deploy a public cloud on ZFS-based systems, and frequently investigate performance with a wide variety of applications in growing environments. This talk is about ZFS performance observability, showing the tools and approaches we use to quickly show what ZFS is doing. This includes observing ZFS I/O throttling, an enhancement added to illumos-ZFS to isolate performance between neighbouring tenants, and the use of DTrace and heat maps to examine latency distributions and locate outliers.
Application scalability is a major concern. Much of the loss in scalability comes from code containing locks that produce heavy contention under high load.
In this presentation we will cover various techniques (striping, copy-on-write, ring buffers, spinning, ...) that let us reduce this contention or obtain lock-free code. We will also explain the concepts of Compare-And-Swap and memory barriers.
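To make the Compare-And-Swap idea concrete, here is a hedged Java sketch (class and method names are illustrative): a manual CAS retry loop on `AtomicLong`, next to `LongAdder`, whose internal cell array is one instance of the striping technique the talk covers.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class LockFreeCounters {
    private final AtomicLong casCounter = new AtomicLong();
    private final LongAdder stripedCounter = new LongAdder(); // stripes updates internally

    // Classic Compare-And-Swap retry loop: read, compute, attempt to publish.
    long incrementAndGet() {
        while (true) {
            long current = casCounter.get();
            long next = current + 1;
            if (casCounter.compareAndSet(current, next)) {
                return next; // CAS succeeded: no lock was ever taken
            }
            // CAS failed: another thread won the race; retry with a fresh read.
        }
    }

    void recordEvent() {
        // LongAdder spreads increments across several cells under contention,
        // trading exact read cost for much cheaper concurrent writes.
        stripedCounter.increment();
    }

    long total() {
        return stripedCounter.sum();
    }
}
```

Under a single thread both counters behave identically; the difference only shows up as reduced contention when many threads hammer the same counter.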
The document discusses using jcmd to troubleshoot Java applications. It provides an overview of the jcmd command and describes the various domains and suffixes that can be used with jcmd to obtain diagnostic information or control the JVM. These include getting thread dumps, heap details, JIT compiler data, and configuring Java logging. The document also demonstrates some example jcmd commands.
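As a hedged illustration (assuming a JDK installation whose `bin` directory, with the `jcmd` launcher, is on the PATH), a Java process can even run jcmd against itself; `VM.version` and `Thread.print` are standard diagnostic commands:

```java
import java.io.IOException;

public class JcmdDemo {
    // Run a jcmd diagnostic command against this very JVM and return its output.
    static String jcmd(String command) throws IOException, InterruptedException {
        long pid = ProcessHandle.current().pid();
        Process p = new ProcessBuilder("jcmd", Long.toString(pid), command)
                .redirectErrorStream(true)
                .start();
        String out = new String(p.getInputStream().readAllBytes());
        p.waitFor();
        return out;
    }

    public static void main(String[] args) throws Exception {
        // VM.version is one of the cheapest diagnostic commands.
        System.out.println(jcmd("VM.version"));
        // Equivalent to `jcmd <pid> Thread.print` on the shell: a thread dump.
        System.out.println(jcmd("Thread.print"));
    }
}
```

The same commands can of course be issued from a terminal; the self-targeting variant is mainly useful for capturing diagnostics from inside an application.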
- The document discusses the configuration and management of the Logical Volume Manager (LVM) and the XFS filesystem on Red Hat Enterprise Linux 6.x.
- It describes the components of LVM including physical volumes, volume groups, and logical volumes. It then shows commands to create a volume group from two physical volumes, extend the logical volume size by adding a new physical disk, and grow the XFS filesystem.
- The key steps shown are: creating physical volumes from disks, creating a volume group, making a logical volume, formatting it as XFS, mounting it, adding a new disk as physical volume to the volume group, extending the logical volume size, and growing the XFS filesystem.
Wed, August 24, 9:00am – 9:45am
Youtube: https://youtu.be/mCdoq7P5Zkw
First Name: Norm
Last Name: Green
Email where you can always be reached: norm.green@gemtalksystems.com
Type: Talk
Abstract: GemStone/64 product update and road map. A review of what's
new in version 3.3 and a preview of what we're working on for version
3.4. This year, I will also start with a few slides describing what
GemStone is and the benefits of using it ("GemStone-101").
Bio: Norm Green started his career in 1989 at IBM in Toronto, Canada
as a quality assurance engineer. In 1993, he moved to the DACS (Data
Acquisition and Control System) team where he helped design and build a
site-wide data collection system in VisualWorks and GemStone/S
Smalltalk.
In 1996, he joined GemStone Systems as a Senior Consultant and
traveled the world helping GemStone/S customers be successful.
Currently, Norm lives near Portland, Oregon and holds the position of
Chief Technical Officer at GemTalk Systems.
Infrastructure review - Shining a light on the Black Box (Miklos Szel)
Scenario: You work as a consultant and a new client has just signed on. Their DBA left suddenly, leaving nothing but some outdated documentation in their wiki. After the kick-off meeting you realise that the operations and development teams know little to nothing about the databases. They have been encountering intermittent problems with the application’s performance and suspect it’s related to the databases. You are told: "Please fix it ASAP!” So you have your public key installed on their jumphost and they manage to provide you with a 6-character-long MySQL root password. This is where your journey begins! During this session you will learn some of the best practices around discovering a new environment, finding possible threats and weaknesses, and determining what key metrics to focus on for performance and reliability. We will cover architecture, replication, OS- and MySQL-level configuration, storage engines, failover strategies, backups and restores, monitoring, query tuning and possible ways to save money. The goal at the end of the presentation is to have a prioritized action plan. I will also explain the usage and the output of some tools/wrappers that help during an infrastructure review. Examples include reports on maximum integer usage, table fragmentation and duplicate keys (we will be leveraging multiple Percona Toolkit scripts as well as some lesser-known tools). It is often easy to overlook underlying problems in the infrastructure during day-to-day operations, so this presentation will aim to highlight how to identify and resolve potential bottlenecks in your systems.
Title: GemTalk Systems Product Roadmap
Speaker: Norm Green
Thu, August 21, 9:00am – 9:45am
Video Part1: https://www.youtube.com/watch?v=PTLzyjrml7g
Video Part2: https://www.youtube.com/watch?v=w9ya2-xRopM
Description
Norm Green started his career in 1989 at IBM in Toronto, Canada as a quality assurance engineer. In 1993, he moved to the DACS (Data Acquisition and Control System) team where he helped design and build a site-wide data collection system in VisualWorks and GemStone/S.
In 1996, he joined GemStone Systems as a Senior Consultant and traveled the world helping GemStone/S customers be more successful.
Within GemStone Systems, Norm held several positions including Director of Professional Services and Director of Engineering.
Gilt uses Puppet for infrastructure configuration management. They have around 1000 Puppet modules and typically run Puppetmaster with Apache and Passenger on CentOS. Changes go through code review and are deployed incrementally using a "canary" flag. An internal tool called Mothership acts as an external node classifier for Puppet and also handles provisioning, asset management, users/groups and DNS. Lessons learned include keeping modules small and simple, planning for change such as OS upgrades, and needing at least two views (logical and physical) for node management.
Nimble Storage is developing flash-enabled storage solutions including an accelerator appliance and storage server. The accelerator increases storage cache by 100x and reduces latency by 25x compared to disk-only solutions. It is targeted at the $15 billion networked storage market. Nimble's technology utilizes a new flash-optimized file system and compression to provide compelling price/performance advantages over competitors. The company is led by experienced engineers from Data Domain, NetApp, and other storage firms.
Ceph Day New York 2014: Ceph, a physical perspective (Ceph Community)
The document summarizes the results of testing a Ceph storage cluster configuration using Supermicro hardware. Key findings include:
- Using SSDs for journals improved sequential write bandwidth significantly.
- Erasure coded pools provided reasonable performance at a lower cost compared to replicated pools.
- A single client could saturate the network connection with two 36-bay OSD nodes.
- Network performance was critical as the cluster scaled to support more clients and objects.
- Further testing was needed on erasure coded performance under failure conditions and using newer Ceph and Linux versions.
Apache Cassandra 2.0 is out - now there's no reason not to ditch that ol' legacy relational system for your important online applications. Cassandra 2.0 includes big-impact features like Light Weight Transactions and Triggers. Do you know about the other new enhancements that got lost in the noise? Let's put the spotlight on all the things! Changes in memory management, file handling and internals. Low hype, but they pack a big punch. While we were at it, we also did a bit of house cleaning.
Cassandra Community Webinar | Cassandra 2.0 - Better, Faster, Stronger (DataStax)
Patrick McFadin presented on new features in Cassandra 2.0 including lightweight transactions, triggers, CQL improvements, and performance enhancements. Some key points are that lightweight transactions allow conditional updates in a single atomic operation, triggers allow custom Java code to modify mutations before writing, and various optimizations improve query performance, server performance, and operation throughput. Removed features include SuperColumns and on-heap row caching.
The document discusses best practices for deploying MongoDB including sizing hardware with sufficient memory, CPU and I/O; using an appropriate operating system and filesystem; installing and upgrading MongoDB; ensuring durability with replication and backups; implementing security, monitoring performance with tools, and considerations for deploying on Amazon EC2.
PuppetCamp SEA @ Blk 71 - Puppet: The Year That Was (Walter Heck)
Nigel Kersten started off the day with a very interesting and informative talk about the past, present and future of Puppet. He showed Puppet's link with the worldwide tech community and how they plan to make the Puppet experience even better. He also gave updates on what Puppet Labs has done recently, and elaborated on the improvements in Puppet 3.0, PuppetDB and Puppet Enterprise. Nigel also mentioned that Puppet Labs is still dedicated to fixing any issues that updates may introduce or the community may report, and that the company hopes to keep improving things moving into the future.
PuppetCamp SEA @ Blk 71 - Puppet: The Year That Was (OlinData)
The Puppet community and tools grew substantially in 2012. The mailing list and IRC channel memberships doubled, PuppetCamps increased from around 3 to 15, and PuppetConf had over 750 attendees. Puppet 3.0 was released with improved performance, Ruby 1.9 support, and data bindings. Hiera 1.x provided key-value configuration data storage. PuppetDB 1.x became the new store of Puppet-generated data. Puppet Enterprise 2.x offered a pre-configured complete stack including Puppet, MCollective, Hiera, and the Enterprise Console.
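As a hedged sketch of the key-value store mentioned above, a Hiera 1.x configuration file (`hiera.yaml`) could look like the following; the datadir path and hierarchy levels are illustrative, not taken from the talk:

```yaml
---
# Hiera 1.x configuration: which backends hold data and in what order
# the hierarchy is consulted when resolving a key.
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "nodes/%{::fqdn}"     # most specific: per-node overrides
  - "%{::environment}"    # then per-environment data
  - common                # finally, site-wide defaults
```

A Puppet manifest then resolves values with `hiera('key')`, walking the hierarchy top-down and returning the first match.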
This document discusses running JavaScript on microcontrollers. It evaluates two JavaScript engines: JavaScriptCore and Duktape. JavaScriptCore was too large, with initial attempts resulting in binaries over 2 MB. Duktape produced smaller binaries down to 108 KB and passed 96.5% of ECMAScript tests. While most SunSpider tests failed due to timeouts or memory limits, Duktape shows promise for running JavaScript on resource-constrained microcontroller devices.
The document provides guidance on deploying MongoDB in production environments. It discusses sizing hardware requirements for memory, CPU, and disk I/O. It also covers installing and upgrading MongoDB, considerations for cloud platforms like EC2, security, backups, durability, scaling out, and monitoring. The focus is on performance optimization and ensuring data integrity and high availability.
The Proto-Burst Buffer: Experience with the flash-based file system on SDSC's... (Glenn K. Lockwood)
Glenn K. Lockwood's document summarizes his professional background and experience with data-intensive computing systems. It then discusses the Gordon supercomputer deployed at SDSC in 2012, which was one of the world's first systems to use flash storage. The document analyzes Gordon's architecture using burst buffers and SSDs, experiences using the flash file system, and lessons learned. It also compares Gordon's proto-burst buffer approach to the dedicated burst buffer nodes on the Cori supercomputer.
Catching up with what has happened with logging in Docker from late 2014 all the way up to the recently released Docker 1.10. Also, presenting my view on a comprehensive approach to monitoring Docker using the API to get events, logs and stats, with a little bit of self-promotion in pointing out that we have recently released an implementation of comprehensive monitoring as part of a Sumo Logic collector source.
Galaxy Big Data with MariaDB 10 by Bernard Garros, Sandrine Chirokoff and Stéphane Varoqui.
Presented 26.6.2014 at the MariaDB Roadshow in Paris, France.
This document provides an introduction to Docker and Openshift including discussions around infrastructure, storage, monitoring, metrics, logs, backup, and security considerations. It describes the recommended infrastructure for a 3 node Openshift cluster including masters, etcd, and nodes. It also discusses strategies for storage, monitoring both internal pod status and external infrastructure metrics, collecting and managing logs, backups, and security features within Openshift like limiting resource usage and isolating projects.
Thousands of services running on hundreds of machines demand the admins' attention: at all times, on all channels, with all the software packages that openSUSE has to offer.
Lars not only keeps an eye on all devices inside SUSE R&D, he also maintains some of the biggest packages in the server:monitoring repository, such as Nagios, Icinga, Shinken, check_mk, mod_gearman, Naemon, PNP4Nagios and the monitoring-plugins (among others).
The integration of some of those packages into a complex, secure and highly available monitoring setup will be demonstrated, together with some tips and tricks and insights into the SUSE R&D infrastructure.
- The document provides guidance on deploying MongoDB including sizing hardware, installing and upgrading MongoDB, configuration considerations for EC2, security, backups, durability, scaling out, and monitoring. Key aspects discussed are profiling and indexing queries for performance, allocating sufficient memory, CPU and disk I/O, using 64-bit OSes, ext4/XFS filesystems, upgrading to even version numbers, and replicating for high availability and backups.
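As one hedged illustration of the replication advice in these MongoDB summaries, a minimal `mongod.conf` for a replica set member might look like this (the paths and the set name `rs0` are placeholders, not values from the documents):

```yaml
# Minimal replica-set member configuration (YAML config format).
storage:
  dbPath: /var/lib/mongodb          # put this on fast, dedicated storage
  journal:
    enabled: true                   # journaling for single-node durability
replication:
  replSetName: rs0                  # all members of the set share this name
net:
  bindIp: 127.0.0.1                 # restrict interfaces per the security advice
```

After starting each member with such a configuration, the set is initiated once with `rs.initiate()` from the mongo shell, and additional members are added with `rs.add()`.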
Workshop: Identifying concept inventories in agile programming (ESUG)
This document discusses the development of a concept inventory to identify common misconceptions in agile programming and object-oriented development. The project aims to strengthen collaboration between INRIA/Lille and ÉTS/UQAM by creating a concept inventory that can be used to improve teaching of agile development with object-oriented languages like TypeScript, JavaScript, and Pharo. The methodology involves identifying misconceptions, proposing a concept inventory, and validating it in courses by measuring understanding before and after instruction. A workshop will help identify initial misconceptions in Smalltalk/Pharo by capturing them in a collaborative tool.
This document proposes integrating documentation into the Pharo language metamodel and environment to improve documentation support. It suggests making documentation first-class citizens in Pharo by providing built-in support and a minimal API, which would allow tight integration with development tools and future extensions without requiring grammar changes or large efforts. This could improve documentation quality by enabling direct references between code and documentation and automatic logging of documentation usage.
The Pharo Debugger and Debugging tools: Advances and Roadmap (ESUG)
This document outlines advances and the roadmap for debugging tools in Pharo. It discusses recent improvements to the debugging infrastructure, including architectural changes and new debugging commands. It also describes upcoming work, such as additional infrastructure improvements, an emergency debugger, support for meta-object protocols, a redesigned user experience, a remote debugger, and improved documentation. The document concludes by inviting participants to help evaluate new debugging experiments.
The document describes Sequence, a pipeline modeling and discrete event simulation framework developed in Pharo Smalltalk. Sequence allows describing system resources, building blocks that use those resources, assembling scenarios from blocks, collecting information during simulated runs, and interactively exploring system traces. The framework implements a discrete event simulation engine with event streams that model periodic processes and resources. Sequence provides tools for evaluating system performance through simulation before complete hardware is available.
Migration process from monolithic to micro frontend architecture in mobile ap... (ESUG)
This document discusses migrating a monolithic mobile application called CARL Touch to a micro frontend architecture. It presents a migration process involving three steps: 1) analysis of the monolithic codebase, 2) identification of potential micro frontends, and 3) transformation of the codebase to implement the identified micro frontends. Previous experiments at Berger-Levrault involving two teams migrating CARL Touch provided insights. The proposed process uses static and dynamic analysis, code visualization and clustering techniques to help identify optimal micro frontends and transform the codebase in a semi-automated manner.
Analyzing Dart Language with Pharo: Report and early results (ESUG)
This document summarizes an analysis of the Dart programming language using tools in the Pharo environment. It describes generating a parser for Dart using SmaCC, which produces an AST. It also details defining a Famix meta-model for Dart and the Chartreuse-D importer that creates a FamixDart model from the AST. Future work is outlined, including improving SmaCCDart, continuing to develop the FamixDart meta-model, and handling dynamic types when importing associations. The goal is to analyze Dart and explore modeling Flutter applications.
Transpiling Pharo Classes to JS ECMAScript 5 versus ECMAScript 6 (ESUG)
This document summarizes research on transpiling Pharo classes to JavaScript using ECMAScript 5 versus ECMAScript 6. It finds that transpiling to ES6 provides benefits like significantly faster load times, improved benchmark performance up to 43%, and more idiomatic code compared to ES5. However, fully emulating Smalltalk semantics like metaclass inheritance remains challenging when targeting JavaScript.
The document presents an approach for automated test generation from software models and execution traces. Key aspects of the approach include using metamodels to represent the codebase, values, and desired unit test structure. Models are built from the codebase and traces, then transformations are applied to generate unit tests conforming to the test metamodel. Abstract syntax trees are used to export the generated tests to code. The approach aims to generate tests that are relevant, readable and maintainable without relying on existing tests. An example demonstrates generating a JUnit test from an application class.
Genetic programming is used to generate unit tests by evolving test code via genetic algorithms to maximize coverage. Tests are represented as chromosomes of object and message statements. The genetic algorithm selects tests based on coverage, combines tests through crossover, and replaces tests in the population over generations to find optimal test sequences. Future work includes improving path exploration and comparing with other test generation tools.
Threaded-Execution and CPS Provide Smooth Switching Between Execution Modes (ESUG)
Threaded execution and continuation-passing style (CPS) allow for smooth switching between execution modes in Zag Smalltalk. Threaded execution interprets code as a sequence of addresses like bytecode but is 2.3-4.7 times faster, while CPS passes continuations explicitly like in functional languages and is 3-5 times faster than bytecode. Both approaches allow fallback to debugging. The implementation shares context and stack between modes to easily switch with proper object structures.
Exploring GitHub Actions through EGAD: An Experience Report (ESUG)
This document summarizes an experience report on exploring GitHub Actions through EGAD, a tool for GitHub Action analysis. It discusses three key lessons learned: 1) Composing a story by documenting tasks and linking documentation to code, 2) Navigating custom views to conduct research, and 3) Supporting onboarding of researchers by assigning mentors, scheduling meetings, and encouraging use of resources. EGAD takes workflow YAML files, wraps them in a domain model to provide context, and allows inspecting examples to fully explore the GitHub Actions domain model.
Pharo: a reflective language - A first systematic analysis of reflective APIs (ESUG)
This document analyzes the reflective features and APIs in Pharo, a reflective programming language. It presents a catalog of Pharo's reflective APIs and analyzes how they relate to metaobjects. The analysis highlights areas for potential improvement, such as providing solutions for intercession on state reads/writes and addressing constraints when changing an object's class. The document contributes to understanding Pharo's reflective design and its evolution over time.
The document discusses garbage collector tuning for applications with pathological allocation patterns. It begins by explaining the motivation and issues caused by pathological patterns, such as applications taking over an hour and a half to run. It then provides an overview of garbage collection and how allocation patterns can impact performance. The document dives into two specific tuning techniques - increasing the full GC threshold to prevent premature full GCs from being triggered, and increasing the tenuring threshold to avoid large objects residing in the remembered set and slowing down scavenges. These tunings resulted in significant performance improvements for the sample DataFrame application, reducing the run time from over an hour and a half to around seven minutes.
Improving Performance Through Object Lifetime Profiling: the DataFrame Case (ESUG)
This document discusses improving garbage collection performance in Pharo through object lifetime profiling. It presents Illimani, a lifetime profiler developed for Pharo. Illimani was used to profile the lifetimes of objects created when loading a large DataFrame. The profiling revealed that most objects had short lifetimes, suggesting the garbage collector could be tuned. Tuning the garbage collector parameters based on the lifetime profiles improved the performance of loading the DataFrame.
This document discusses the past, present, and future of Pharo DataFrames. It began as a student project but has evolved into a mature project with dedicated engineers, improving performance and adding functionality. Future plans include further performance enhancements, adding more functionality, better integration with other Pharo projects, and support for big data. Evaluation of DataFrames is also planned.
This document discusses an issue where thisContext in the Pharo debugger did not correctly represent the execution context, referring to the DoIt context instead. This was fixed in Pharo 12 by making thisContext a variable object wrapped in a DoItVariable, so that the debugger's context is used. When inspecting or evaluating a DoIt, the DoItVariable is pushed and read to provide the proper execution context.
This document proposes using websockets to display fencing scores and a chronometer from an arena server to mobile phones over the internet in real-time. It includes links to video examples of a chronometer display and photos from fencing competitions.
ShowUs: PharoJS.org - Develop in Pharo, Run on JavaScript (ESUG)
This document discusses PharoJS, which allows developers to develop applications in Pharo and then export them to run as JavaScript applications. PharoJS enables 100% of Pharo code to be executed during development, and then 100% of that same code is exported to JavaScript to be executed in production. The document also briefly mentions deployment options for exported PharoJS applications like GitHub Pages and GitHub Actions.
The document contains testimonials from participants of the Pharo MOOC praising its effectiveness at teaching object-oriented design. It also announces an upcoming advanced design MOOC that will have over 60 lectures, slides, videos and an exercise booklet. Finally, it provides links to the course websites and encourages people to stay tuned for the new MOOC.
A New Architecture Reconciling Refactorings and Transformations (ESUG)
This document discusses reconciling refactorings and transformations in software engineering. It proposes a new architecture where refactorings decorate transformations by checking preconditions and composing multiple transformations. Refactorings ensure transformations are applied safely while transformations focus on model changes. Open questions remain around precondition handling and composition semantics. The goals are to reduce duplication, support custom refactorings/transformations, and provide a modern driver-based user interface.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to production.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in database for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder will introduce you to this new world. It will give you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
4. GemStone/S 32 Bit
Yes, it’s still in use, and in production!
Last Major Release: 6.6.2 in Feb 2012
6.6.x Summary
• Rudimentary multi-threaded garbage collection
  • Uses native threads to perform I/O
• Symbol Garbage Collection
  • Methods to locate and remove unreferenced symbols
  • Non-canonical symbol detection
• Methods to override commit-at-login behavior
  • lastLoginTime is committed at login time if UserProfile security is enabled
  • Commit-at-login can now be disabled on a per-UserProfile basis
4
Thursday, August 30, 12
5. GemStone/S 32 Bit
6.6.x Summary
• Backup and Restore to/from NFS
  • Full backups may now be made to an NFS file system
  • Full backups may be restored from an NFS file system
  • Tranlogs may be restored from an NFS file system
• POWER7 Certification (AIX only)
  • Adds additional memory barriers required by the POWER7 processor
6. GemStone/S 32 Bit
End of Life
• EOL for GemStone/S 32 bit is planned for Oct-2012.
• What does that mean exactly?
  • No more major releases.
  • No new product sales.
  • Support for existing customers will continue for 3 more years (2015).
  • Maintenance renewals beyond 2015 will not be accepted.
7. GemStone/S 64 2.4.x
Current Releases
• 2.4.4.7
  • Critical bug fixes
  • A few minor new features
• 2.4.5
  • Critical and non-critical bug fixes
  • Several minor new features
• 2.4.5.1
  • Minor bug fixes
  • POWER7 certification
Further Details
• Release Notes: http://community.gemstone.com/display/GSS64/Documentation
9. GemStone/64 3.0
3.0 shipped on June 15, 2011. It was a major step forward in features and performance. Adoption has been (predictably) slow.
Native Code Virtual Machine
• Converts byte codes to native instructions
• JIT design – conversion occurs at first method invocation
• 25% to 100% speed improvement
10. GemStone/64 3.0
Foreign Function Interface (FFI)
• Load shared libraries.
• Call C functions without writing C code.
• All coding can be done in Smalltalk
• Martin presented FFI at STIC 2011
11. GemStone/S 64 3.1.x
Current Releases
• 3.1 - July 6, 2012
• 3.1.0.1 - August 28, 2012
Next Release
• 3.1.1 - Q1, 2013
• Bug fixes
12. New Features
Mid-Level Shared Caches
• Caches that sit between the stone cache and a remote SPC
• Benefits
  • Better caching: 3 caches to look for pages in rather than 2
  • Reduced network I/O on the stone server
    • 500 remote caches can red-line a 10 Gb/s network!
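The benefit bullets above can be made concrete with a toy lookup chain: a page missing from a leaf cache is fetched from its parent cache rather than crossing the network to the stone. A minimal Python sketch (class and counter names are illustrative, not GemStone internals):

```python
class PageCache:
    """Toy page cache that falls back to a parent cache on a miss."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.pages = {}   # pageId -> page data
        self.misses = 0   # lookups not satisfied locally

    def lookup(self, page_id):
        if page_id in self.pages:
            return self.pages[page_id]
        self.misses += 1
        if self.parent is None:
            return None   # a real stone would read the page from disk
        page = self.parent.lookup(page_id)
        if page is not None:
            self.pages[page_id] = page   # populate on the way back down
        return page

# Two-tier configuration: a leaf SPC miss can be absorbed by a mid-level cache.
stone = PageCache("stone")
stone.pages[6] = "page-6-data"
mid = PageCache("mid", parent=stone)     # mid-level cache on its own host
leaf = PageCache("leaf", parent=mid)

leaf.lookup(6)            # leaf miss, mid miss, served by stone
print(leaf.lookup(6))     # now a local hit: page-6-data
print(mid.misses, stone.misses)  # 1 0
```

Once the mid-level cache holds a page, further leaf caches hitting it generate no traffic on the stone host at all, which is the reduced-network-I/O benefit the slide describes.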
13. Classic Remote SPC “Star” Configuration
[Diagram: the stone and its DB at the hub, with every remote SPC (shared page cache) connected directly to the stone in a star]
14. Mid-Level Cache Configuration
[Diagram: leaf SPCs connect to mid-level caches, which in turn connect to the stone and DB, forming a two-tier tree instead of a star]
15. New Features
Mid-Level Shared Caches
• Creating a Mid-Level Cache
  1. Start the netldi on the remote (leaf cache) host, the mid-level host, and the stone server
  2. From the leaf-cache host, do a normal remote login
  3. Run this code to create the mid-cache:
     System
         midLevelCacheConnect: midLevelHostName
         cacheSizeKB: aSize
         maxSessions: nSess
16. New Features
Mid-Level Shared Caches
• Connecting to an Existing Mid-Level Cache
  1. From the leaf-cache host, do a normal remote login
  2. Run this code to connect to the mid-cache:
     System midLevelCacheConnect: hostName
17. New Features
External Password Validation
• Passwords may now be stored and managed outside of
GemStone.
• 2 New Options
• LDAP = Lightweight (ha!) Directory Access Protocol
• UNIX password
18. New Features
External Password Validation
• User ID Aliases are Supported
• GemStone user ID may or may not match external user ID.
• Example:
• GemStone UID: ‘normg’
• LDAP UID: ‘norm.green’
19. New Features
External Password Validation
• LDAP – Uses OpenLDAP and OpenSSL libraries (shipped with GemStone)
• UNIX – Uses getpwnam() / getspnam() UNIX calls
20. New Features
External Password Validation
• Enabling LDAP authentication for a UserProfile:
  | u |
  u := AllUsers userWithId: 'normg'.
  u enableLDAPAuthenticationWithAlias: 'norm.green'
      baseDN: 'uid=%s,ou=LdapTesting,dc=gemstone,dc=com'
      filterDN: nil.
  System commitTransaction.
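The baseDN: argument above carries a %s placeholder that receives the external user id (the alias). A one-line Python sketch of how such a DN template expands (the function name is illustrative, not part of GemStone):

```python
def expand_base_dn(template, alias):
    """Substitute the external (LDAP) user id into a baseDN template,
    mirroring the '%s' placeholder shown in the slide's baseDN: argument."""
    return template % alias

dn = expand_base_dn("uid=%s,ou=LdapTesting,dc=gemstone,dc=com", "norm.green")
print(dn)  # uid=norm.green,ou=LdapTesting,dc=gemstone,dc=com
```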
21. New Features
Multi-threaded Operations
• The following operations are now multi-threaded:
• markForCollection
• objectAudit
• findDisconnectedObjects
• countInstances
• listInstances
• listReferences
• GsObjectInventory
• findObjectsLargerThan
• Epoch Garbage Collection
• Write Set Union Sweep (part of GC finalization)
• Native (OS) threads are used
  • Each thread is a unique GemStone session
  • Each thread takes a shared page cache slot
22. New Features
Multi-threaded Operations
• Aggressiveness Is Controlled By 2 Parameters:
  • Number of threads
  • Percent CPU Active
    • Means total CPU load on the server across all cores
    • Threads will start sleeping if the threshold is exceeded
• Both Parameters Can Be Adjusted On The Fly:
  SystemRepository
      setMultiThreadedScanThreads: numThreads
      maxCpu: aPercent
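The "percent CPU active" throttle described above can be sketched as a worker that backs off whenever measured load exceeds the threshold. A conceptual Python model (every name here is illustrative, not a GemStone internal):

```python
import time

def throttled_scan(pages, cpu_load, max_cpu_percent, sleep_s=0.001):
    """Scan pages, but sleep whenever total server CPU load exceeds the
    threshold -- the behavior the slide describes for scan threads.
    cpu_load is a callable returning a 0-100 load figure."""
    scanned = 0
    for _ in pages:
        while cpu_load() > max_cpu_percent:
            time.sleep(sleep_s)   # thread starts sleeping over threshold
        scanned += 1              # do the actual page-scan work here
    return scanned

# Simulated load readings: load spikes to 95%, then settles at 40%.
readings = iter([95, 95, 40] + [40] * 100)
print(throttled_scan(range(3), lambda: next(readings), max_cpu_percent=90))  # 3
```

Because both knobs are plain numbers consulted on every iteration, they can be adjusted on the fly without restarting the scan, which matches the slide's "adjusted on the fly" claim.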
23. New Features
Multi-threaded Operations
• The default methods are NOT aggressive
• If you want speed, use the fast* methods:
  • markForCollection – uses 2 threads
  • fastMarkForCollection – uses up to 16 threads
24. New Features
ProfMonitor Improvements
• New implementation supports sample intervals down to 1 microsecond (1000 ns)
• New Class Protocol:
  • ProfMonitor monitorBlock: aBlock downTo: hits intervalNs: ns
  • ProfMonitor monitorBlock: aBlock intervalNs: ns
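A sampling profiler of the kind ProfMonitor implements periodically records where the monitored block is executing. A toy Python sketch of the idea, in the spirit of monitorBlock:intervalNs: (everything here is an illustrative sketch, not the GemStone implementation):

```python
import collections
import sys
import threading
import time

def sample_block(block, interval_s=0.001):
    """Run block() while a background thread periodically records which
    function the calling thread is executing; returns the block's result
    and a Counter of sampled function names."""
    samples = collections.Counter()
    caller_id = threading.get_ident()
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            frame = sys._current_frames().get(caller_id)
            if frame is not None:
                samples[frame.f_code.co_name] += 1
            time.sleep(interval_s)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()
    result = block()          # run the monitored block
    stop.set()
    t.join()
    return result, samples
```

Shrinking interval_s gives finer-grained samples at higher overhead, which is why a 1-microsecond floor (as in the new ProfMonitor) is a meaningful improvement.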
25. New Features
Built-In Monticello Support
• Monticello support is now available “out of the box”
• Protocols:
  • GSPackageLibrary loadMczFile: aFile fromRepositoryPath: aPath
  • GSPackageLibrary loadLastVersionOf: packageName fromRepositoryPath: aPath
26. Smalltalk Changes
Array Literal Syntax
• Run-time Constructor
  • 2.x: #[ 1, 2, 3 ]
  • 3.0: { 1 . 2 . 3 }
• Compile-time Constructor (unchanged)
  • #( 1, 2, 3 )
• In 3.0, the old 2.x Array syntax can be enabled by running this code after login:
  System gemConfigurationAt: #GemConvertArrayBuilder put: true
ByteArray Literal Syntax (new)
• #[ 0 1 254 255 ]
27. Smalltalk Changes
Exception Handling
• In 3.0, ANSI exceptions are fully supported
• ANSI exceptions are class-based
• The old GemStone Exception class is deprecated, but still works in some cases
• GemStone error numbers are deprecated
28. Smalltalk Changes
New Special Selectors
• The following selectors are inlined by the compiler and may not be overloaded:
• ifNil:
• ifNotNil:
• ifNotNil:ifNil:
• ifNil:ifNotNil:
• _isSmallInteger
• _isInteger
• _isNumber
• _isFloat
• _isSymbol
• _isExceptionClass
• _isExecBlock
• _isArray
29. Smalltalk Changes
Segment is renamed GsObjectSecurityPolicy
• We’ve wanted to change that for a long time.
31. Smalltalk Changes
LargeInteger Changes
• New class: LargeInteger
  • Replaces both LargePositiveInteger and LargeNegativeInteger
• Existing instances:
  • LargePositiveInteger -> ObsoleteLargePositiveInteger
  • LargeNegativeInteger -> ObsoleteLargeNegativeInteger
• Instances are automatically converted when loaded into object memory
• To convert manually, send: anObsoleteInt convert
32. Smalltalk Changes
ScaledDecimal Changes
• New format:
  • mantissa (SmallInteger or LargeInteger)
  • scale (SmallInteger) – power of 10
• New Literal Notation (ANSI compliant): 1.23s2
  • Mantissa: 123
  • Scale: 2
33. Smalltalk Changes
ScaledDecimal Changes
• GS/64 v2.x ScaledDecimal is now FixedPoint in 3.0
  • Instances are automatically converted
• New Literal Syntax: 1.23p2
  • Numerator: 123
  • Denominator: 100
  • Scale: 2
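The two number models on these slides are easy to pin down with exact rational arithmetic. A short Python sketch using the stdlib Fraction type (the helper name is illustrative; Python here is only modeling the representation, not GemStone code):

```python
from fractions import Fraction

def scaled_decimal(mantissa, scale):
    """3.0 ScaledDecimal model: an integer mantissa plus a power-of-10
    scale, so the literal 1.23s2 is mantissa 123, scale 2, held exactly
    as mantissa / 10**scale."""
    return Fraction(mantissa, 10 ** scale)

sd = scaled_decimal(123, 2)
print(sd)         # 123/100
print(float(sd))  # 1.23
```

The renamed FixedPoint class (the old 2.x ScaledDecimal) stores the same value as an explicit numerator/denominator pair (123 / 100), which is exactly what the Fraction above holds internally.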
34. Smalltalk Changes
New Class/Metaclass Hierarchy

2.x:
• Object
  • Behavior
    • Metaclass
    • Class

3.0:
• Object
  • Behavior
    • ObsoleteMetaclass
    • Module
      • Metaclass3
      • Class

• After upgrade, existing Metaclasses become ObsoleteMetaclass
• Newly created classes will have a class of Metaclass3
• Applications which are sensitive to the superclass of Class or Metaclass will need to be adjusted
35. Performance Improvements
Aborting A Transaction
• Round trips to stone reduced from 2 to 1
36. Performance Improvements
Old SMC Design
[Diagram: several Gems place requests on a shared SMC queue protected by a spin lock; the Stone drains the queue]
Gem Procedure:
• Get SMC queue lock
• Add self to SMC queue
• Release SMC queue lock
• Wait on semaphore for stone’s response
Stone Procedure:
1. Get SMC queue lock
2. Remove all requests from SMC queue
3. Release SMC queue lock
4. Process requests
37. Performance Improvements
New SMC Design – No Lock or Queue
[Diagram: each Gem owns a bit in shared SMC words; the Stone scans the words]
Gem Procedure:
• Set my bit (atomic OR)
• Wait on semaphore for stone’s response
Stone Procedure:
1. For each 64-bit session word: read and clear all bits (atomic AND)
2. Process requests
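The bit-per-session design above can be sketched in a few lines. In the real system the OR and AND are single atomic hardware instructions; the Python lock below merely stands in for that atomicity, and all names are illustrative rather than GemStone internals:

```python
import threading

class SMCWords:
    """Sketch of the lock-free SMC design: each session owns one bit in a
    64-bit word. The lock models the atomicity of the hardware OR/AND."""
    def __init__(self):
        self.word = 0
        self._atomic = threading.Lock()

    def request_service(self, session_id):
        """Gem side: set my bit (models the atomic OR)."""
        with self._atomic:
            self.word |= (1 << session_id)

    def take_requests(self):
        """Stone side: read and clear all bits (models the atomic AND)."""
        with self._atomic:
            pending, self.word = self.word, 0
        return [s for s in range(64) if pending & (1 << s)]

smc = SMCWords()
smc.request_service(3)
smc.request_service(17)
print(smc.take_requests())  # [3, 17]
print(smc.take_requests())  # [] -- bits were cleared in one step
```

Note what the sketch makes visible: reading and clearing in one atomic step means the stone can never lose a request, but the word itself preserves no arrival order, which is exactly why session priority (next slide) was added.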
38. Performance Improvements
Session Priority
• Elimination of the SMC queue means the sequence in which sessions request service from the stone is not preserved
• Session Priority added to compensate
• Session Priority Values:
  0 – lowest (Reclaim and Admin GC gems)
  1 – low
  2 – medium (default)
  3 – high
  4 – highest
• The session holding (or about to receive) the commit token always has highest priority
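The priority rules above amount to a simple sort on the stone's side. A hedged Python model (the tuple shape and function name are assumptions for illustration, not a GemStone structure):

```python
def service_order(requests):
    """Order pending sessions by priority rather than arrival order,
    with the commit-token holder beating everything, as the slide
    describes. Each request is (priority, holds_commit_token, session_id)."""
    def key(req):
        priority, holds_token, session_id = req
        effective = 5 if holds_token else priority  # token outranks priority 4
        return (-effective, session_id)
    return [sid for _, _, sid in sorted(requests, key=key)]

# Arrival order is 11, 7, 30, 5 -- but service order follows priority:
reqs = [(2, False, 11), (4, False, 7), (0, True, 30), (2, False, 5)]
print(service_order(reqs))  # [30, 7, 5, 11]
```

Session 30 is a lowly priority-0 reclaim gem, yet it goes first because it holds the commit token; the two priority-2 sessions are serviced after the priority-4 one regardless of when they arrived.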
39. Performance Improvements
Session Priority Protocol
• System setSessionPriority: anInt forSessionId: aSessionId
• System priorityForSessionId: aSessionId
40. Performance Improvements
Shared Cache Hash Table Read Locks
• The SPC hash table is a dictionary in shared memory: pageId -> cache offset (address)
• 2.x Design
  • All hash table row accesses were serialized by a spin lock
  • Result: all lookups are serialized
• 3.0 Design
  • Each hash table row now has a reference count and a write lock
  • Result: lookups are done in parallel
41. Performance Improvements
Serialized Hash Table Lookups In 2.x
[Diagram: Gem1, Gem2, and Gem3 contend for a spin lock on an SPC hash table row whose collision chain holds pages 14, 9, 2, 6]
• 3 Gems want to look up page 6; each gem must hold the lock to perform the lookup
• If the lookup time is n, then: Gem1 finishes in n, Gem2 in 2n, Gem3 in 3n
42. Performance Improvements
Parallelized Hash Table Lookups In 3.0
[Diagram: the same three Gems read the hash table row concurrently under per-row reference counts]
• 3 Gems want to look up page 6; all 3 lookups can be done simultaneously!
• If the lookup time is n, then: Gem1, Gem2, and Gem3 each finish in n
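The n / 2n / 3n arithmetic on these two slides can be written down directly. A tiny, purely illustrative timing model:

```python
def completion_times(num_gems, lookup_time, serialized):
    """Per-gem completion times for a hash-table lookup, as in slides 41-42:
    under the 2.x spin lock each gem waits for all earlier gems; under the
    3.0 per-row reference counts the reads fully overlap."""
    if serialized:
        return [lookup_time * (i + 1) for i in range(num_gems)]  # n, 2n, 3n, ...
    return [lookup_time] * num_gems                              # n, n, n, ...

print(completion_times(3, 1, serialized=True))   # [1, 2, 3]
print(completion_times(3, 1, serialized=False))  # [1, 1, 1]
```

Under contention the serialized design is quadratic in total gem-seconds spent waiting, so the per-row read locks pay off most on hot rows shared by many gems.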
43. Performance Improvements
New Tranlog AIO System
• POSIX AIO calls (aio_write()) removed
  • Too many OS bugs
• Replaced with our own code, based on native OS threads
• New Config Parameter:
  • STN_NUM_AIO_WRITE_THREADS – number of native threads started to perform tranlog writes
44. Index Performance Improvement
[Bar chart: seconds (0–800) to build, audit, and remove indexes, comparing 2.4 and 3.0; 3.0 completes each operation in substantially less time]
Test configuration:
• Solaris 10 SPARC
• Tranlogs on solid state disks
• Extents on raw partitions
• 2M element collection
• 5 equality indexes
46. GemStone/S 64 3.1
Hot-standby database support
• Automatically synchronize 1 or more hot standby databases across a LAN or WAN
• Real-time tranlog replay
• Failover support
• Automatic reconnect support
Support for UTF & locale-specific collation using ICU
• Interface to the International Components for Unicode (ICU) libraries
• ICU is a widely used open-source set of libraries providing Unicode and globalization support for software applications
Secure RPC logins using SSL
• All RPC logins now use Secure Sockets Layer (SSL) and Secure Remote Password (SRP) to establish the initial connection between the GCI client and gem and to authenticate passwords
• Passwords are now stored in GemStone in the encrypted form used by SRP
47. GemStone/S 64 3.1
Backup and restore reimplemented as multi-threaded
• For multi-file backups, the protocol that created or restored backup files in a sequence of commands has been removed
• All backup files are created or restored simultaneously
• Multi-file backups are now created or restored by a single command rather than repeated calls to continue* methods
• Files are written in parallel, using separate threads to write to each file after initial material is written to the first file
Symbol garbage collection
• If enabled, this occurs automatically in the background and requires no management
Garbage Collection Enhancements
• Ability to defer reclaim under low free-space conditions
• Further optimizations to MFC (markForCollection)
The FFI has a number of changes and improvements
48. GemStone/S 64 3.1
Nested Transactions
• Create up to 16 sub-transactions which may be independently aborted or committed
• Protocol:
  • System beginNestedTransaction
    • Begin a new nested transaction
  • System abortNestedTransaction
    • Roll back changes made in this nested transaction only
  • System commitNestedTransaction
    • Commits the modifications made in the current nested transaction to the next outer-level transaction
• Changes are NOT seen by other sessions until the outermost transaction is committed
• Write-write conflicts with other sessions are not detected here
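The commit/abort semantics of nested transactions are easy to model as a stack of change sets, where committing a level folds its changes into the enclosing level. A hedged Python sketch (method names echo the slide's selectors, but this is a conceptual model, not GemStone code):

```python
class NestedTransactions:
    """Stack-of-changesets model of the 3.1 nested-transaction protocol:
    up to 16 nested levels; commit folds into the next outer level;
    nothing is visible to other sessions until the outermost commit."""
    MAX_DEPTH = 16

    def __init__(self):
        self.levels = [{}]            # levels[0] is the outermost transaction

    def begin_nested(self):
        if len(self.levels) > self.MAX_DEPTH:
            raise RuntimeError("nesting limit reached")
        self.levels.append({})

    def write(self, key, value):
        self.levels[-1][key] = value  # change is local to the current level

    def commit_nested(self):
        changes = self.levels.pop()
        self.levels[-1].update(changes)   # fold into the enclosing level

    def abort_nested(self):
        self.levels.pop()                 # discard this level only

tx = NestedTransactions()
tx.begin_nested()
tx.write("a", 1)
tx.commit_nested()        # 'a' is now pending in the outer transaction
tx.begin_nested()
tx.write("b", 2)
tx.abort_nested()         # 'b' is rolled back; 'a' survives
print(tx.levels[0])       # {'a': 1}
```

The model also shows why write-write conflicts are not detected at nested-commit time: a nested commit only merges into the session's own outer level, and other sessions' writes never enter the picture until the outermost commit.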
49. GemStone/S 64 3.1
IPv6 Support
• 3 address protocols now supported:
  • IPv4
  • IPv6
  • IPv4-mapped IPv6
• New Config File Option:
  • STN_LISTENING_ADDRESSES – a list of 0 to 10 addresses upon which the stone should listen for login connections
• For more info: RFC 2373, RFC 4291, RFC 4038
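The three address forms can be illustrated with Python's stdlib ipaddress module (the module is unrelated to GemStone; it simply demonstrates what an IPv4-mapped IPv6 address looks like):

```python
import ipaddress

# The three address forms the stone can listen on:
v4 = ipaddress.ip_address("10.1.2.3")            # plain IPv4
v6 = ipaddress.ip_address("2001:db8::1")         # plain IPv6
mapped = ipaddress.ip_address("::ffff:10.1.2.3") # IPv4-mapped IPv6 (RFC 4291)

print(v4.version, v6.version, mapped.version)    # 4 6 6
print(mapped.ipv4_mapped)                        # 10.1.2.3
```

An IPv4-mapped address is carried on the wire as IPv6 but embeds an IPv4 address in its low 32 bits, which is how a dual-stack listener can accept IPv4 clients on an IPv6 socket.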
50. GemStone/S 64 3.2
Target Date
• 3.2 – Q3, 2013
New Features
• Multi-threaded page reclaim sessions
• Additional multi-threaded garbage collection options
• Additional Unicode character features
• Thread-safe GCI (GemStone C Interface)
• Optimize GemStone/S to run on ESXi
• Add protection against DoS attacks
• Load Balancer – balance loads across multiple nodes
51. Questions?
Norman R. Green
Director, R&D
VMware Inc.
15220 NW Greenbrier Pkwy., Suite 150
Beaverton, Oregon 97006
normg@vmware.com
(503) 533-3710 direct
(503) 804-2041 mobile
www.vmware.com