Presentation delivered at LinuxCon China 2017 by Greg Kroah-Hartman.
The Linux kernel is one of the largest collaborative software development projects ever. This talk will discuss exactly how Linux is developed, how fast it is happening, who is doing the work, and how we all stay sane keeping up with it. It will discuss the development model used, and how it differs from almost all "traditional" models of software development.
Kernel Recipes 2015 - Kernel dump analysis (Anne Nicolas)
Cloud this, cloud that… It's making everything easier, especially for web-hosted services. But what about the servers that are not supposed to crash? For applications that assume the OS won't fault or go down, what can you write in your post-mortem once the server has frozen and been restarted? How do you track down the bug that led to the service unavailability?
In this talk, we'll see how to set up kdump and how to panic a server to generate a coredump. Once you have the vmcore file, we'll see how to track down the issue with the "crash" tool and find out why your OS went down. Last but not least: with "crash" you can also modify your live kernel, the same way you would with gdb.
Adrien Mahieux – System administrator obsessed with performance and uptime, tracking down microseconds from hardware to software since 2011. The application must be seen as a whole to efficiently provide the requested service. This includes searching for bottlenecks and tradeoffs, design issues, and hardware optimizations.
In order to understand the HAL layers of the Android Framework, Linux device driver knowledge is important. Hence Day 2 of the workshop focuses on exactly that.
In embedded systems, a toolchain is a set of tools used to perform a complex task or to create a product, which is typically another computer program or a system of programs. The tools are linked (or chained) together in stages: the output, or resulting environment state, of one tool becomes the input, or starting environment, of the next. By default the host contains some development tools, which are called the native toolchain. This presentation shares more details on the components of a toolchain and how to build them for your own embedded distribution.
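The chaining of stages described above can be illustrated with a toy Python sketch. The stage names and their behaviour here are hypothetical, purely to show how each tool's output becomes the next tool's input:

```python
from functools import reduce

# Hypothetical toolchain stages, like preprocess -> compile -> assemble -> link.
def preprocess(source):
    return source.replace("#include", "")   # strip directives (toy behaviour)

def compile_stage(source):
    return f"ASM({source.strip()})"         # pretend to emit assembly

def assemble(asm):
    return f"OBJ({asm})"                    # pretend to emit an object file

def link(obj):
    return f"ELF({obj})"                    # pretend to emit a final binary

def run_toolchain(source, stages):
    # The output of one tool becomes the input of the next.
    return reduce(lambda data, stage: stage(data), stages, source)

binary = run_toolchain("#include int main(){}",
                       [preprocess, compile_stage, assemble, link])
print(binary)  # ELF(OBJ(ASM(int main(){})))
```

A real cross-toolchain (binutils, gcc, libc) is built in exactly this spirit: each stage's artifacts form the environment the next stage starts from.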
This course gets you started with writing device drivers in Linux by providing real hardware exposure. It equips you with real-time tools, debugging techniques, and industry usage in a hands-on manner, using dedicated hardware from Emertxe's device driver learning kit, with special focus on character and USB device drivers.
Embedded Android System Development - Part II talks about the Hardware Abstraction Layer (HAL). HAL is an interfacing layer through which an Android service can place a request to a device. It uses functions provided by the Linux system to service requests from the Android framework. It is a C/C++ layer with a purely vendor-specific implementation, packaged into module (.so) files and loaded by the Android system at the appropriate time.
The Embedded Android system development workshop is focused on integrating a new device with the Android framework. Our hands-on approach makes Emertxe the best institute for Android system development training. This workshop deep-dives into Android porting, the Android Hardware Abstraction Layer (HAL), Android Services, and the Linux device driver ecosystem, and will enable you to efficiently integrate new hardware with the Android HAL/framework.
The second part of Linux Internals covers system calls, the process subsystem, and inter-process communication mechanisms. Understanding these services provided by Linux is essential for an embedded systems engineer.
There is a surge in the number of sensors and devices getting connected under the umbrella of the Internet of Things (IoT). These devices need to be integrated into the Android system and accessed via applications, which is covered in the course. Our weekend Android system development curriculum, with practicals, ensures you learn all the critical components to get started.
Note: When you view the slide deck via a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
Introduction to Linux Kernel (Quontra Solutions)
Course Duration: 30-35 hours Training + Assignments + Actual Project Based Case Studies
Training Materials: All attendees will receive:
* An assignment after each module and a video recording of every session
* Notes and study material for the examples covered
* Access to the training blog & repository of materials
Pre-requisites:
Basic Computer Skills and knowledge of IT.
Training Highlights
* Focus on hands-on training.
* 30 hours of Assignments, Live Case Studies.
* Video Recordings of sessions provided.
* One Problem Statement discussed across the whole training program.
* Resume prep, Interview Questions provided.
WEBSITE: www.QuontraSolutions.com
Contact Info: Phone +1 404-900-9988 or Email info@quontrasolutions.com
Linux has become an integral part of embedded systems. This three-part presentation gives a deeper perspective of Linux from a system programming perspective. Starting with the basics of Linux, it goes on to advanced aspects like thread and IPC programming.
Linux device drivers: Role of Device Drivers, Splitting the Kernel, Classes of Devices and Modules, Security Issues, Version Numbering, Building and Running Modules, Kernel Modules vs. Applications, Compiling and Loading, Kernel Symbol Table, Preliminaries, Initialization and Shutdown, Module Parameters, Doing It in User Space.
Various virtualization technologies have been on the market for more than a decade, but they typically occupied cloud platforms. Recently, virtualization began spreading to embedded platforms after ARM introduced the Virtualization Extensions for its recent processors. Various peripherals (like disks and network) have been easily virtualized for use by several operating systems at once, but devices like Graphics Processing Units (GPUs) remain among the most intricate parts to adapt, with very few vendors who have actually managed to do it.
Sergiy Kibrik (Software Engineer, GlobalLogic) explains how it was done at GlobalLogic. This presentation was delivered at GlobalLogic Embedded TechTalk Kyiv on July 22, 2015.
After returning from a recent trip that fell in the middle of a heat wave, I arrived home to find my apartment quite hot: at least 45C inside. Needless to say, it wasn't the most comfortable way to come home after 15 days out of town, so I decided it was time to do something about it so I wouldn't come home to that unpleasant surprise again. Normally, this problem is solved by having a thermostat which controls the air conditioning. However, my apartment did not have a thermostat, so I decided to build one using open source software.
This talk will cover how I went about solving my problem using existing software and protocols like home-assistant and MQTT, as well as some new software created for the purpose. It will also discuss how, using open software and home automation, I was able to not only solve my issue but also make cooling my apartment smarter.
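The core decision logic of such a DIY thermostat can be sketched as simple hysteresis control. The function name, thresholds, and setpoints below are illustrative assumptions, not taken from home-assistant:

```python
def thermostat_decision(current_temp, target_temp, cooling_on, hysteresis=0.5):
    """Return True if the air conditioner should run.

    Hysteresis keeps the A/C from rapidly toggling around the setpoint:
    start cooling above target + hysteresis, stop below target - hysteresis.
    """
    if current_temp > target_temp + hysteresis:
        return True
    if current_temp < target_temp - hysteresis:
        return False
    return cooling_on  # inside the deadband: keep the current state

# Coming home to a 45C apartment with a 24C setpoint:
print(thermostat_decision(45.0, 24.0, cooling_on=False))  # True
print(thermostat_decision(23.0, 24.0, cooling_on=True))   # False
print(thermostat_decision(24.2, 24.0, cooling_on=True))   # True (deadband)
```

In a home-automation setup, the temperature reading would arrive over MQTT and the return value would drive the A/C relay; the deadband is what keeps the compressor from short-cycling.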
Presentation delivered at LinuxCon China 2017. Rethinking the Operating System.
A new wave of operating systems optimized for containers has appeared on the horizon, making us excited and puzzled at the same time.
"Why do we need anything different for containers when traditional OSs served us well in the last 25+ years?" "Isn't Kubernetes just another package to install on top of my favorite distro?"" Will this obsolete my whole infrastructure?" are some of the questions this talk will shed some light on.
Explore the journey SUSE made in rethinking the OS: from a conservative Linux distribution to a platform that goes hand in hand with the needs of microservices.
You will get insight into the lessons learned during the intense development effort that led to the SUSE Containers as a Service Platform, how the obstacles along the way were lifted, and why "upstream first" is - and should always be - the rule.
Besides its huge success in mobile, ARM is also ambitious in the server field. The software ecosystem is currently a barrier to wide deployment of ARM servers in the data center. The ARM Shanghai Workloads team is working on cloud and big data software enablement and optimization on the ARM64 platform.
In this presentation, Yibo Cai will introduce the status and challenges of running OpenStack on ARM servers, with emphasis on OpenStack compute, storage and networking.
Presentation given at the 2017 LinuxCon China
With the booming of container technology come obvious advantages for the cloud: simpler and faster deployment, portability, and lightweight cost. But the networking challenges are significant: users need to restructure their networks to support container deployment within their current cloud framework, alongside existing workloads such as VMs.
In this presentation, we will introduce a new container networking solution, which provides one management framework that works with different network components through an open, friendly modeling mechanism. iCAN can simplify network deployment and management with most orchestration systems and a variety of data plane components, and provides an extensible architecture to define and validate Service Level Agreements (SLAs) for cloud-native applications, an important factor for enterprises delivering successful and stable services via containers.
Presentation delivered at LinuxCon China 2017.
The libvirt API is the cloud industry's standard API for managing virtualization hosts across platforms. It is widely used in renowned cloud systems such as OpenStack and opencloud. The compatibility of libvirt APIs, and the avoidance of fragmentation, will ultimately have a great impact on what libvirt can achieve as a whole. For this reason, libvirt API certification was introduced. Libvirt API certification focuses on testing a technology implementation to make sure that it operates consistently with all other implementations of the same libvirt technology specification.
In this paper, the author will review the current state of libvirt API certification, discuss the challenges it faces, and look forward to how the libvirt community may address those challenges.
Presentation delivered at the 2017 LinuxCon China.
Containers are a popular technology in the cloud, with the merits of fast provisioning, high density, and near-native performance. New requirements have been arising to further use GPUs to accelerate various applications in containers, e.g. media transcoding and machine learning. However, there is still a gap in managing GPUs in such container usages. This presentation will first review the status of GPU support in both native containers and Intel Clear Containers, then provide an anatomy of the technical gaps and enabling plans in major areas (the orchestration layer, resource isolation, and QoS performance isolation), and finally review our work on GPU virtualization in Clear Containers and our prototype work, through extensions in the Docker plugin, cgroups, and the GPU scheduler, to close those gaps.
In this talk, Tim Bird will discuss the recent status of Linux with regard to embedded systems. This will include a review of the last year's worth of mainline kernel releases, as well as topic areas specifically related to embedded, such as boot-up time, security, and system size. Tim will also present recent and planned work by the Core Embedded Linux Project of the Linux Foundation, and discuss the current status of Linux in various markets and fields. Tim will go over current areas of work and discuss the remaining challenges faced by Linux in embedded projects.
Presentation delivered at LinuxCon China 2017.
Operating systems need to move faster without sacrificing stability. New hardware, new software features, and bugfixes are making it into distribution components every day. To maintain stability, packagers and distribution developers are looking toward lessons learned in the DevOps movement to implement Continuous Integration/Continuous Delivery (CI/CD) workflows that provide quicker test feedback to developers.
This talk highlights some of the coming trends in Fedora such as: streamlined base package sets, userspace applications delivered as containers, continuous validation of individual distro components and the distro as a whole, and collaboration with the CentOS Project.
Presentation delivered at LinuxCon China 2017
The practices of Blockchain as a Service in Dianrong (Shiyuan Xiao, Dianrong.com) - Blockchain as a Service (BaaS) provides an easy, low-cost, and flexible platform for companies to enable blockchain-based businesses backed by a cloud platform. Shiyuan will share the experience of building such a BaaS platform: what the architecture is, what problems were met and solved, and the best practices the team has summarized.
Kdump is a long-established method for acquiring a dump of a crashed kernel; however, very little literature is available to understand its usage and internals. We receive a lot of queries on the kexec mailing list about different issues related to the kexec/kdump environment.
In this presentation, we talk about the basics of kdump usage and some internals of the kdump/kexec kernel implementation. It covers the end-to-end flow from kdump kernel configuration to crash analysis. We discuss some of the problems frequently faced by kdump users. It also includes related information about the ELF structure, so that one can debug cases where the vmcore itself gets corrupted because of an architecture-related issue.
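Since a vmcore is an ELF core file, the kind of sanity check the speakers allude to starts with the ELF identification bytes. The sketch below is a simplified illustration of reading `e_ident` (it is not the crash tool's actual logic, and the sample bytes are fabricated):

```python
def parse_elf_ident(data):
    """Parse the first 16 bytes (e_ident) of an ELF file such as a vmcore."""
    if len(data) < 16 or data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file: bad magic")
    ei_class = {1: "ELF32", 2: "ELF64"}.get(data[4], "invalid")   # EI_CLASS
    ei_data = {1: "little-endian", 2: "big-endian"}.get(data[5], "invalid")  # EI_DATA
    return {"class": ei_class, "data": ei_data, "version": data[6]}

# Fabricated e_ident bytes for a 64-bit little-endian core file,
# which is how a vmcore from an x86_64 kernel would begin:
ident = b"\x7fELF" + bytes([2, 1, 1]) + bytes(9)
print(parse_elf_ident(ident))  # {'class': 'ELF64', 'data': 'little-endian', 'version': 1}
```

If even these identification bytes are wrong, the vmcore is corrupted before any program headers are reached, which narrows the debugging to the capture path rather than the analysis tooling.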
Can we leverage the resources of the public cloud for gaming, streaming, transcoding, machine learning, and visualized CAD applications on demand? Yes, if it provides the capability and infrastructure to utilize GPUs. Can we get networking in the cloud as performant as in a bare-metal environment? Yes, with SR-IOV. How do we achieve this? In this presentation we describe Discrete Device Assignment (also known as PCI pass-through) support for GPUs and network adapters in Linux guests, and SR-IOV architectures for Linux guests with a near-native performance profile running on Hyper-V. We will also share how accelerated graphics and networking capabilities are integrated in the Microsoft Azure infrastructure.
Fully Automated Kubernetes Deployment and Management (Peng Jiang, Rancher Labs) - Kubernetes is rapidly gaining popularity as a powerful container orchestration and scheduling platform, but deploying and managing Kubernetes clusters is still a challenge for many organizations. How do you ensure Kubernetes clusters in different clouds and data centers can communicate with each other? How do you automate the deployment of multiple Kubernetes clusters? How do you incorporate the new Kubernetes Federation into multi-cloud and multi-datacenter deployments? How do you manage the health of the Kubernetes cluster itself?
In this talk, Peng will share his experience on how to automate and simplify Kubernetes deployments, and discuss how some of the latest community projects (such as kubeadm and self-hosting Kubernetes) will help address the problems in the future.
Kernel Recipes 2014 - The Linux Kernel, how fast it is developed and how we s... (Anne Nicolas)
This talk will go into the latest statistics for the development of the Linux kernel.
It will describe how the many thousand developers all work together and are able to release a stable kernel every 3 months with no planning.
Greg Kroah-Hartman, Linux Foundation
Kernel Recipes 2017 - Linux Kernel Release Model - Greg KH (Anne Nicolas)
This talk describes how the Linux kernel development model works, what a long-term supported kernel is, and why all Linux-based devices should be using all of the stable releases and not attempting to pick and choose random patches.
It also goes into how the kernel community approaches “security” patches, and the reasoning why they do it in the manner they do.
This is an expanded talk based on a lightning talk presented at Kernel Recipes 2015, and on the speaker's interactions with companies' reactions to the supposedly confusing development model on which they rely.
Greg KH, Linux Foundation
Kernel Recipes 2016 - The kernel report (Anne Nicolas)
The Linux kernel is at the core of any Linux system; the performance and capabilities of the kernel will, in the end, place an upper bound on what the system as a whole can do. This talk will review recent events in the kernel development community, discuss the current state of the kernel and the challenges it faces, and look forward to how the kernel may address those challenges. Attendees of any technical ability should gain a better understanding of how the kernel got to its current state and what can be expected in the near future.
Jonathan Corbet, LWN.net
A lot of Internet of Things devices use Linux at their core, even more so with the advent of DIY and Internet of Things projects. A lot of Raspberry Pi, BeagleBone, and Tessel boards are out there with default settings, all connected to the internet and ready to be taken over. With the recent Dyn DNS attack, it is of prime importance to know how we can keep these endpoint devices secure and out of the hands of botnet hoarders and attackers. In this presentation Rabimba Karanjai will show how to harden the security of these endpoint devices, taking a Raspberry Pi as an example. He will explain different techniques, with code examples, along with a toolkit made specifically for this demo, which will make devices considerably harder to compromise and, even when they are compromised, will allow you to locate and detect the breach. After all, protecting the device finally protects us all (and prevents another DDoS).
Kubernetes currently has two load-balancing modes: userspace and iptables. Both have limitations in scalability and performance. We introduced IPVS as a third kube-proxy mode, which scales the Kubernetes load balancer to support 50,000 services. Beyond that, the control plane needs to be optimized in order to deploy 50,000 services. We will introduce alternative solutions and our prototypes, with detailed performance data.
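The scalability gap between the two modes can be pictured with a toy model (illustrative Python, not kube-proxy code): iptables evaluates a linear rule list per packet, O(n) in the number of services, while IPVS resolves the virtual IP through a hash table, O(1). The service addresses below are fabricated:

```python
# iptables-style: scan a linear rule list, O(n) per packet.
def iptables_lookup(rules, vip):
    for rule_vip, backend in rules:
        if rule_vip == vip:
            return backend
    return None

# IPVS-style: hash-table lookup keyed on the virtual IP, O(1) per packet.
def ipvs_lookup(table, vip):
    return table.get(vip)

# 50,000 fabricated services, mirroring the scale quoted in the talk.
services = [(f"10.0.{i // 256}.{i % 256}", f"pod-{i}") for i in range(50000)]
rules = list(services)   # linear rule list
table = dict(services)   # hash table

vip = "10.0.195.79"  # the last service, i.e. the worst case for the linear scan
print(iptables_lookup(rules, vip))  # pod-49999, after scanning 50,000 rules
print(ipvs_lookup(table, vip))      # pod-49999, after one hash lookup
```

Both strategies return the same backend; the difference is that the linear scan's cost grows with every added service, which is exactly the ceiling IPVS mode removes.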
The blockchain for the Internet of Things (IoT) has been considered something that will "change the future." Despite a myriad of studies on blockchain IoT, few have investigated how an IoT blockchain system can be developed with open source technologies, open standards, web technologies, and a P2P network. In this presentation, Jollen will share the Flowchain case study, an open source IoT blockchain project in Node.js; he will discuss the practice, the technical challenges, and the engineering experiences. Furthermore, to provide real-time data transaction capabilities for current IoT requirements, he will use the "virtual block" idea to address these technical challenges.
The Secure Container solution enhances container security by isolating memory between Docker containers inside one VM with Intel VT-x EPT hardware, which is highly effective at protecting a container's memory and at the same time defends against ret2user privilege escalation attacks that exploit kernel vulnerabilities (e.g. the CVE-2017-6074 use-after-free (UAF) vulnerability). It extends the KVM interfaces, which the guest OS can leverage to isolate container memory from other containers; the interfaces rely on the Intel VT-x EPT hardware extension and provide memory access protection for a container that sits in an isolated memory region. Each secure container has a dedicated EPT table rather than sharing one EPT table with the guest OS, which enforces cross-EPT memory access protection. The whole solution is user-friendly and fits into existing cloud server infrastructure with very limited changes.
There are best practices to understand when building products from open source software, but there are a number of anti-patterns that crop up along the way. Product teams (from engineering to marketing) need to understand these patterns and practices to participate best in open source project communities and deliver products and services to their customers at the same time. These patterns hold regardless of whether the vendor created and owns the project or participates in projects outside their control.
Enterprise data centers have to support a diverse set of workloads: cloud-native, big data, high performance computing, and legacy applications. While cloud-native applications are ideal to run in Docker clusters, bare metal and virtualization infrastructures must still be supported in the data center. The result is a proliferation of clusters and technologies running in individual silos, resulting in high management costs and low utilization. This talk describes the challenges and experiences in implementing a shared cluster infrastructure based on Kubernetes to support big data, high performance computing, and VM-based workloads. The talk will show the deployment and scaling of a high performance computing workload manager, Spark, and OpenStack, and how VM and Docker management can be integrated together.
Resource placement is a policy-rich problem, particularly across multi-cluster, multi-geography and multi-cloud environments. Placement may be based on company conventions, external regulation, pricing, performance requirements, or complex combinations of those. Furthermore, placement policies evolve over time and vary across organizations. As a result, it is very difficult to anticipate the policy requirements of all users.
In this presentation, Torin Sandall (Lead Engineer of Open Policy Agent) will present, along with Irfan Ur Rehman, and demonstrate the work they've done integrating OPA into the Kubernetes Cluster Federation control plane. This enables high-level policies to be expressed in an easy-to-understand policy language and automatically enforced across federations of Kubernetes clusters.
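The flavor of such placement policies can be sketched in plain Python. Note that real OPA policies are written in Rego, not Python, and the cluster attributes and thresholds below are hypothetical:

```python
# Toy placement-policy evaluator (illustrative only; OPA policies
# are written in Rego, and clusters carry richer metadata than this).
def allowed_clusters(app, clusters):
    """Return the names of clusters where the app may be placed."""
    def allowed(cluster):
        # Regulatory constraint: keep data in its required region.
        if app.get("data_residency") and cluster["region"] != app["data_residency"]:
            return False
        # Cost constraint: never exceed the app's price ceiling.
        if cluster["price_per_hour"] > app.get("max_price", float("inf")):
            return False
        return True
    return [c["name"] for c in clusters if allowed(c)]

clusters = [
    {"name": "eu-1", "region": "eu", "price_per_hour": 1.0},
    {"name": "us-1", "region": "us", "price_per_hour": 0.5},
    {"name": "eu-2", "region": "eu", "price_per_hour": 2.0},
]
app = {"data_residency": "eu", "max_price": 1.5}
print(allowed_clusters(app, clusters))  # ['eu-1']
```

The point of externalizing this logic into a policy engine like OPA is that the residency and pricing rules can change per organization and over time without touching the federation control plane itself.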
Failure injection is somewhat analogous to a vaccine: we want to inject these bad behaviours so our developers can build immunity to them. Can we inject failure scenarios into deployed systems to reduce platform risk? Demonstrations of the Simian Army, Chaos Lemur, and Locust.io tools will be presented.
Even when all of the individual services in a distributed system are functioning properly, the interactions between those services can cause unpredictable outcomes. Unpredictable outcomes, compounded by rare but disruptive real-world events that affect production environments, make these distributed systems inherently chaotic.
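The vaccine analogy boils down to deliberately injecting faults and checking that callers survive them. Here is a minimal fault-injection wrapper as a sketch; it is far simpler than the Simian Army tools, and the function names are hypothetical:

```python
import random

def inject_failures(func, failure_rate, rng):
    """Wrap func so it randomly raises, simulating a flaky dependency."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise RuntimeError("injected failure")
        return func(*args, **kwargs)
    return wrapper

def fetch_user(user_id):
    # Stand-in for a call to some remote service.
    return {"id": user_id, "name": "alice"}

def fetch_user_with_retry(flaky, user_id, attempts=5):
    # A resilient caller: retries instead of falling over.
    for _ in range(attempts):
        try:
            return flaky(user_id)
        except RuntimeError:
            continue
    return None

rng = random.Random(42)  # seeded so the demo is reproducible
flaky_fetch = inject_failures(fetch_user, failure_rate=0.5, rng=rng)
print(fetch_user_with_retry(flaky_fetch, 7))
```

Running the resilient caller against the wrapped dependency is the "immunity" being built: code that only ever saw the happy path would crash at the first injected failure.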
Presentation delivered at LinuxCon China 2017
Real-time is used for deadline-oriented applications and time-sensitive workloads. Real-Time KVM is an extension of KVM (the Linux kernel-based virtual machine) that allows virtual machines (VMs) to behave as truly real-time operating systems. Users sometimes need to run low-latency applications (such as audio/video streaming and highly interactive systems) to meet their requirements in clouds. NFV is a new network concept which uses virtualization and software instead of dedicated network appliances. For some telecommunications use cases, network latency must stay within a certain range of values; Real-Time KVM can help NFV meet these requirements.
In this presentation, Pei Zhang will talk about:
(1) Real-Time KVM introduction
(2) Real-Time cloud building
(3) Real-Time KVM in NFV: VMs with Open vSwitch, DPDK, and QEMU's vhost-user
(4) Performance testing results
Presentation delivered at LinuxCon China 2016
UEFI HTTP/HTTPS Boot is a new feature of UEFI 2.5+. However, this feature is not yet implemented in any Linux bootloader. This Birds of a Feather session will give an introduction to UEFI HTTP/HTTPS Boot and share a proof-of-concept implementation based on grub2 that works on both the emulator (QEMU/OVMF) and HPE ProLiant Gen10 servers.
For HTTPS, the experience of, and comparison between, the purely software-based and UEFI-based implementations will be shared, in terms of ease of implementation, security strength, and limitations.
In the container ecosystem, there is perhaps no technology that has received more focus and attention than orchestration and scheduling. Mesos, Kubernetes, and Swarm have established themselves as the leading technology choices in this space.
In this talk, Sheng will discuss what he learned from working directly with hundreds of users who have deployed one of these frameworks. He will look at how these frameworks will continue to evolve and whether there are any gaps and opportunities in container orchestration and scheduling. Sheng will make the case that there is still room for innovation and that new orchestration and scheduling frameworks will be created in the future. He will discuss what new frameworks might look like: the features, functionalities, and attributes that differentiate them from the mainstream frameworks of today.
Presentation delivered at LinuxCon China 2017.
Zephyr is an upstream open source project for places where Linux is too big to fit. This talk will review the progress we've made in the first year towards the project's goals of incorporating best-of-breed technologies into the code base and building up the community to support multiple architectures and development environments. We will share our roadmap and plans, discuss the challenges ahead of us, and give an overview of the major technical challenges we want to tackle in 2017.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI AppGoogle
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI App
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-fusion-buddy-review
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to Build high-converting Converting Sales Video Scripts, ad copies, Trending Articles, blogs, etc.100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
See My Other Reviews Article:
(1) AI Genie Review: https://sumonreview.com/ai-genie-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
#AIFusionBuddyReview,
#AIFusionBuddyFeatures,
#AIFusionBuddyPricing,
#AIFusionBuddyProsandCons,
#AIFusionBuddyTutorial,
#AIFusionBuddyUserExperience
#AIFusionBuddyforBeginners,
#AIFusionBuddyBenefits,
#AIFusionBuddyComparison,
#AIFusionBuddyInstallation,
#AIFusionBuddyRefundPolicy,
#AIFusionBuddyDemo,
#AIFusionBuddyMaintenanceFees,
#AIFusionBuddyNewbieFriendly,
#WhatIsAIFusionBuddy?,
#HowDoesAIFusionBuddyWorks
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
19. Top developers by quantity
Chris Wilson 1093
Mauro Carvalho Chehab 845
Johan Hovold 754
Arnd Bergmann 714
Viresh Kumar 537
Geert Uytterhoeven 473
Christoph Hellwig 456
Wei Yongjun 451
Ville Syrjälä 446
Linus Walleij 411
Greg Kroah-Hartman 409
Kernel releases 4.7.0 – 4.11.0
20. Top Signed-off-by:
Greg Kroah-Hartman 7734
David S. Miller 7107
Mauro Carvalho Chehab 2317
Linus Torvalds 2144
Mark Brown 1966
Andrew Morton 1930
Ingo Molnar 1809
Alex Deucher 1529
Linus Walleij 1202
Chris Wilson 1199
Kalle Valo 1196
Kernel releases 4.7.0 – 4.11.0
21. Who is funding this work?
Kernel releases 4.7 – 4.11
1. “Amateurs” 14.4%
2. Intel 13.4%
3. Red Hat 7.3%
4. Linaro 6.4%
5. IBM 3.4%
6. Samsung 3.4%
7. Consultants 3.0%
8. SuSE 2.9%
9. Google 2.7%
10. AMD 2.3%
36. 3,781 developers
≈400 companies
Kernel releases 4.7.0 – 4.11.0
May 2016 – April 2017
This makes the Linux kernel the largest
contributed body of software out there that
we know of.
This is just the number of companies that we know about; there are more that we do not, and as the responses to our inquiries come in, this number will go up.
We have now surpassed 400 companies for 4 years running.
37. 7,300 lines added
2,400 lines removed
2,000 lines modified
Kernel releases 4.7.0 – 4.11.0
May 2016 – April 2017
38. 7,300 lines added
2,400 lines removed
2,000 lines modified
Every day
Kernel releases 4.7.0 – 4.11.0
May 2016 – April 2017
39. 8 changes per hour
Kernel releases 4.7.0 – 4.11.0
May 2016 – April 2017
This is 24 hours a day, 7 days a week, for a full year.
We went this fast the year before as well; this is an amazing rate of change.
Interestingly, these changes are spread proportionally across the whole kernel. For example, the core kernel is only 5% of the code, and 5% of the changes were to the core kernel. Drivers are 55% of the code, and 55% of the changes were to them; it is completely proportional across the whole kernel.
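The daily and hourly figures above compose into striking yearly totals; here is a quick back-of-the-envelope sketch (input figures taken straight from the slides, totals rounded):

```python
# Back-of-the-envelope arithmetic behind the slides' headline numbers
# (kernel releases 4.7.0 - 4.11.0, May 2016 - April 2017).
lines_added_per_day = 7_300
changes_per_hour = 8
days_per_year = 365

lines_added_per_year = lines_added_per_day * days_per_year
changes_per_year = changes_per_hour * 24 * days_per_year
print(f"~{lines_added_per_year:,} lines added per year")   # ~2,664,500
print(f"~{changes_per_year:,} changes per year")           # ~70,080

# The change is proportional across the tree: drivers are ~55% of the
# code and received ~55% of the changes; the core kernel ~5% and ~5%.
driver_changes = round(changes_per_year * 0.55)
print(f"~{driver_changes:,} of those changes were to drivers")
```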
40. 9.7 changes per hour
4.9 release
2.6.20 to 2.6.24-rc8
4.9 was the “largest” in number of changes
that we have ever accepted. After 4.9,
things went down a bit for 4.10 and 4.11,
but 4.12 is getting very big.
Now this is just the patches we accepted, not all of the patches that were submitted; lots of patches are rejected, as anyone who has ever tried to submit one can attest.
41. 4.12 release July 7th?
2nd largest release
4.12 should be released on July 7th and is on track to be the 2nd largest release by number of changes we have ever done.
And it will be the largest ever by number of lines of code added, due to some very large drivers being added to the tree.
42. How we stay sane
Time-based releases
Incremental changes
43. New release every
2½ months
Kernel releases 3.10.0 – 3.14.0
67 days to be exact; a very regular cadence.
44. How a kernel is developed.
Linus releases a stable kernel
- 2-week merge window for pulls from subsystem maintainers
- rc1 is released
- bugfixes only from now on
- a week later, rc2
- bugfixes and regression fixes only
- a week later, rc3
And so on until all major bugfixes and regressions are resolved, and then the cycle starts over again.
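The cadence described above can be sketched as a toy model. The constants below (a 14-day merge window and weekly release candidates) are assumptions drawn from this description, not anything the kernel community literally computes:

```python
# Toy model of the release cycle: a 2-week merge window, then roughly
# weekly release candidates until the major regressions are fixed.
MERGE_WINDOW_DAYS = 14
RC_INTERVAL_DAYS = 7

def cycle_length_days(num_rcs):
    """Days from one stable release to the next, given how many
    release candidates the cycle needed (typically around 7)."""
    return MERGE_WINDOW_DAYS + num_rcs * RC_INTERVAL_DAYS

print(cycle_length_days(7))  # 63 -- close to the ~67-day average
print(cycle_length_days(8))  # 70 -- a longer cycle with one extra rc
```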
45. Greg takes the stable releases from Linus,
and does stable releases with them,
applying only fixes that are already in
Linus's tree.
Requiring fixes to be in Linus's tree first
ensures that there is no divergence in the
development model.
After Linus releases a new stable release, the old stable series is dropped, with the exception of the “longterm” stable releases; those are special, and they stick around for much longer...
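The "must be in Linus's tree first" rule can be sketched as a tiny predicate; the commit ids below are made up for illustration:

```python
# Toy model of the stable rules described above: a patch is only
# acceptable for a stable release if it is a fix and the same change
# is already in Linus's tree (identified here by made-up commit ids).
linus_tree = {"a1b2c3d", "d4e5f6a"}   # commits already upstream

def acceptable_for_stable(upstream_commit, is_bugfix):
    """A stable candidate must be a fix and must already be upstream."""
    return is_bugfix and upstream_commit in linus_tree

print(acceptable_for_stable("a1b2c3d", is_bugfix=True))    # True
print(acceptable_for_stable("0000000", is_bugfix=True))    # False: not upstream
print(acceptable_for_stable("d4e5f6a", is_bugfix=False))   # False: not a fix
```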
46. “Longterm kernels”
One picked per year
Maintained for two years
4.4 4.9
I pick one kernel release per year to maintain for
longer than one release cycle. This kernel I will
maintain for at least 2 years.
This means there are 2 longterm kernels being
maintained at the same time.
4.4 and 4.9 are the longterm kernel releases I am currently maintaining.
The LTSI project is based on the longterm kernels.
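The two-at-a-time overlap follows from the schedule itself; a small sketch (the 4.4 and 4.9 dates below are their actual release dates, and the two-year window is the stated minimum):

```python
from datetime import date

# Toy model of the longterm schedule above: one kernel is picked per
# year and maintained for at least two years, so two overlap at a time.
lts_picks = {"4.4": date(2016, 1, 10), "4.9": date(2016, 12, 11)}

def active_longterm(on, picks, years=2):
    """Longterm kernels still in their maintenance window on a date."""
    return sorted(k for k, start in picks.items()
                  if start <= on <= start.replace(year=start.year + years))

print(active_longterm(date(2017, 6, 1), lts_picks))   # ['4.4', '4.9']
```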
47. As mentioned before, we have almost 3,000 individual contributors, and they all create patches: a single change to the Linux kernel. A change could be something small, like a spelling correction, or something larger, like a whole new driver.
Every patch that is created does only one thing, and it cannot break the build; complex changes to the kernel get broken up into smaller pieces.
48. The developers send their patch to the
maintainer of the file(s) that they have
modified.
We have about 700 different
driver/file/subsystem maintainers
49. After reviewing the code, and adding their
own signed-off-by to the patch, the
file/driver maintainer sends the patch to the
subsystem maintainer responsible for that
portion of the kernel.
We have around 150 subsystem maintainers
50. Linux-next gets created every night from all
of the different subsystem trees and build
tested on a wide range of different
platforms.
We have about 150 different trees in the
linux-next release.
Andrew Morton picks up patches that cross
subsystems, or are missed by others, and
releases his -mm kernels every few weeks.
This includes the linux-next release at that
time.
51. Every 3 months, when the merge window
opens up, everything gets sent to Linus from
the subsystem maintainers and Andrew
Morton.
The merge window is 2 weeks long, and
thousands of patches get merged in that
short time.
All of the patches merged to Linus should
have been in the linux-next release, but that
isn't always the case for various reasons.
Linux-next cannot simply be sent to Linus as-is, as it sometimes contains things that are not yet good enough to be merged; it is up to each individual subsystem maintainer to decide what to merge.
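One visible consequence of this funnel: every maintainer who forwards a patch adds a Signed-off-by line, so a patch usually arrives at Linus carrying several sign-offs, which is why the top sign-off counts far exceed authored-patch counts. A small illustrative sketch (the patch and names are hypothetical):

```python
# Sketch of the sign-off chain described in the slides: each person
# who handles a patch on its way upstream appends a Signed-off-by.
def sign_off(patch, name):
    patch["signed_off_by"].append(name)
    return patch

patch = {"subject": "staging: fix spelling in foo driver",
         "signed_off_by": []}
for name in ["Original Author",        # wrote the patch
             "Driver Maintainer",      # one of ~700 file/driver maintainers
             "Subsystem Maintainer"]:  # one of ~150 subsystem trees
    sign_off(patch, name)

print(patch["signed_off_by"])   # three sign-offs by the time Linus pulls it
```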
52. Top developers by quantity
Chris Wilson 1093
Mauro Carvalho Chehab 845
Johan Hovold 754
Arnd Bergmann 714
Viresh Kumar 537
Geert Uytterhoeven 473
Christoph Hellwig 456
Wei Yongjun 451
Ville Syrjälä 446
Linus Walleij 411
Greg Kroah-Hartman 409
Kernel releases 4.7.0 – 4.11.0
Chris – intel graphics drivers
Mauro – Video 4 Linux (media drivers)
Johan – greybus, usb-serial, drivers
Arnd – janitorial cleanups and arch-generic
Viresh – greybus
Geert – janitorial
Christoph – vfs, filesystems, xfs, everywhere
Wei – janitorial
Ville – intel graphics
Linus – gpio, pin, arm drivers
Greg – greybus
53. Top Signed-off-by:
Greg Kroah-Hartman 7734
David S. Miller 7107
Mauro Carvalho Chehab 2317
Linus Torvalds 2144
Mark Brown 1966
Andrew Morton 1930
Ingo Molnar 1809
Alex Deucher 1529
Linus Walleij 1202
Chris Wilson 1199
Kalle Valo 1196
Kernel releases 4.7.0 – 4.11.0
Greg – driver core, usb, staging, greybus
David – networking, isa
Mauro – video 4 linux (media)
Linus – everything
Mark – embedded sound
Andrew – everything
Ingo – x86
Alex – radeon graphics
Linus – gpio and pinctl
Chris – intel graphics
Kalle – wireless drivers
54. Who is funding this work?
Kernel releases 4.7 – 4.11
1. “Amateurs” 14.4%
2. Intel 13.4%
3. Red Hat 7.3%
4. Linaro 6.4%
5. IBM 3.4%
6. Samsung 3.4%
7. Consultants 3.0%
8. SuSE 2.9%
9. Google 2.7%
10. AMD 2.3%
So you can view this as either 14% is done by
non-affiliated people, or 86% is done by
companies.
Now to be fair, if you show any skill in kernel
development you are instantly hired.
Why this all matters: If your company relies
on Linux, and it depends on the future of
Linux supporting your needs, then you
either trust these other companies are
developing Linux in ways that will benefit
you, or you need to get involved to make
sure Linux works properly for your
workloads and needs.
61. Getting involved
Google “write your first kernel patch”
This is a video of a talk I gave at FOSDEM,
going through the steps, showing exactly
how to create, build, and send a kernel
patch.
62. Getting involved
kernelnewbies.org/KernelJanitors/Todo
So you know how to create a patch, but what should you do? The Kernel Janitors project has a great list of tasks to start with: cleaning up the kernel and making easy patches that will be accepted.
63. Linux Driver Project
drivers/staging/*/TODO
Getting involved
The staging tree also needs a lot of help; these are lists of things that need to be done before each driver can move out of the staging area.
Please always work off of the linux-next tree if
you want to do these tasks, as sometimes
they are already done by others by the time
you see them in Linus's tree.
64. Getting involved
http://eudyptula-challenge.org/
Eudyptula Challenge
(little penguin)
Google “Linux kernel challenge” to find the
site, if you can't remember Eudyptula.
It is a series of programming challenges, all run through email, that starts out with a “Hello World” kernel module and gets more complex from there. Over 4,000 people are currently taking the challenge, and it is a lot of fun if you don't know where to start.
You need knowledge of C, but that's about it.