This document provides an overview of the OpenNaaS project, which aims to create an open source Network as a Service (NaaS) software stack. The key goals of OpenNaaS are to enable long-term network research, leverage past research assets, and create an open community of stakeholders. OpenNaaS is based on OSGi and uses a lightweight abstraction model centered around resources, capabilities, actions, and profiles. It provides reusable platform components and aims to be embeddable, interoperable, and provide a foundation for NaaS plugins. The roadmap discusses ongoing and future extensions related to different network domains.
Bruno Guedes - Hadoop real time for dummies - NoSQL matters Paris 2015 (NoSQLmatters)
There are many frameworks that can offer real time on top of Hadoop. This talk will show you how to use Pivotal HAWQ and how easy it is to query your Hadoop data with SQL. Come and see the power and ease of use that can help you get started with the Hadoop ecosystem.
Having used Apache Pulsar in production for a year for our pub/sub use cases, such as stream analytics and event sourcing, this slide deck presents the lessons learned: understanding the architecture, tuning the cluster, keeping it highly available and fault tolerant, and much more.
While the slides are framed in terms of Apache Pulsar, many of the concepts extend readily to other distributed systems.
The views here are my own and do not represent the views of Nutanix Corporation.
Developing Real-Time Data Pipelines with Apache Kafka - Joe Stein
Developing Real-Time Data Pipelines with Apache Kafka (http://kafka.apache.org/) is an introduction for developers to why and how to use Apache Kafka. Apache Kafka is a publish-subscribe messaging system rethought as a distributed commit log. Kafka is designed to allow a single cluster to serve as the central data backbone. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients, and a cluster can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines, allowing streams larger than any single machine can handle as well as clusters of coordinated consumers. Messages are persisted on disk and replicated within the cluster to prevent data loss; each broker can handle terabytes of messages. For the Spring user, Spring Integration Kafka and Spring XD provide integration with Apache Kafka.
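The partitioned commit-log model described above can be sketched in a few lines. This is a hypothetical, in-memory toy (names like `PartitionedLog` are invented here, not Kafka's API) illustrating only how keyed partitioning preserves per-key order and how a consumer reads from an offset:

```python
# A toy, in-memory sketch of Kafka's core abstraction: a partitioned,
# append-only commit log with per-partition offsets. Illustrative only;
# real Kafka brokers persist and replicate partitions across a cluster.

class PartitionedLog:
    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def send(self, key: str, value: str) -> tuple[int, int]:
        """Append a message; messages with the same key land in the same
        partition, preserving per-key ordering. Returns (partition, offset)."""
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1

    def poll(self, partition: int, offset: int) -> list[str]:
        """Read every message at or after `offset` in one partition, the way
        a consumer tracks its own position in the log."""
        return self.partitions[partition][offset:]

log = PartitionedLog()
p1, o1 = log.send("user-42", "page_view")
p2, o2 = log.send("user-42", "click")
assert p1 == p2                                   # same key, same partition
assert log.poll(p1, o1) == ["page_view", "click"]  # ordered replay from offset
```

Because consumers track their own offsets, the same log can be replayed independently by many consumer groups, which is what makes the commit-log framing different from a classic message queue.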
Meet Kafka, the distributed log, by Florian Garcia - La Cuisine du Web
Kafka is something of a new star on the message-queue scene. Yet Kafka does not present itself as a message queue: it is a distributed log!
So what is it? How does it work? And above all, how and why would I use it?
In this session, we take the beast apart and explain everything! On the menu: concepts, use cases, streaming, and a field report!
Kafka's basic terminologies, its architecture, its protocol and how it works.
Kafka at scale: its caveats, guarantees, and the use cases it supports.
How we use it @ZaprMediaLabs.
Introducing Kafka-on-Pulsar: bring native Kafka protocol support to Apache Pu... - StreamNative
Kafka-on-Pulsar has been one of the most anticipated features in the Pulsar ecosystem. The Kafka-on-Pulsar project was initiated by StreamNative and the OVHCloud team quickly joined the project to collaborate on its development. Kafka-on-Pulsar enables Kafka applications to leverage Pulsar’s powerful features, such as streamlined operations with enterprise-grade multi-tenancy, without modifying code.
In this webinar, Sijie Guo, from StreamNative, and Pierre Zemb, from OVHCloud, will introduce KoP and discuss the following:
1. What are the key benefits?
2. What is the protocol handler and how does it work?
3. How is KoP implemented?
4. What are the new use cases it unlocks?
5. Watch a Live Demo!
Many enterprises are implementing Hadoop projects to manage and process large datasets. The big question is: how do you configure Hadoop clusters to connect to an enterprise directory containing 100k+ users and groups for access management? Several large enterprises have complex directory servers for managing users and groups, and many advanced features have recently been added to Hadoop user management to support various complex directory server structures.
In this session attendees will learn about: setting up a Hadoop node with users from Active Directory for executing Hadoop jobs, setting up authentication for enterprise users, and setting up authorization for users and groups using Apache Ranger. Attendees will also learn about the common challenges faced in enterprise environments when interacting with Active Directory, including filtering out users to be brought into Hadoop from Active Directory, restricting access to a set of users from Active Directory, handling users from nested group structures, etc.
Speakers
Sailaja Polavarapu, Staff Software Engineer, Hortonworks
Velmurugan Periasamy, Director - Engineering, Hortonworks
ApacheCon 2021: Apache BookKeeper Key-Value Store and Use Cases - Shivji Kumar Jha
In order to get the best performance characteristics out of your data or stream backend, it is important to understand the nitty-gritty details of how your backend stores and computes: how data is stored, how it is indexed, and what the read path looks like. Understanding this empowers you to design your solution to make the best use of the resources at hand, and to get the optimum consistency, availability, latency and throughput for a given amount of resources.
With this underlying philosophy, in this slide deck we will get to the bottom of Pulsar's storage tier (Apache BookKeeper): the bare bones of BookKeeper's storage semantics, how it is used in different use cases (even beyond Pulsar), the object models of storage in Pulsar, the different kinds of data structures and algorithms Pulsar uses, and how those map to the semantics of the storage class shipped with Pulsar by default. And yes, you can change the storage backend too, with some additional code!
The focus will be on the storage backend, so the material is not tailored to Pulsar specifically and can be applied to other data stores and streams.
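The ledger semantics at the heart of BookKeeper's storage tier can be illustrated with a small sketch. This is an assumption-laden toy (the `Ledger` class here is invented, not the BookKeeper API): a ledger as an append-only, write-once sequence of entries that is sealed on close; replication across an ensemble of bookies is omitted entirely:

```python
# A minimal sketch of BookKeeper-style ledger semantics (not the real API):
# a ledger is an append-only sequence of entries, written once, read by
# entry id, and immutable after close. Real BookKeeper stripes entries
# across an ensemble of bookies for replication; that part is omitted.

class Ledger:
    def __init__(self, ledger_id: int):
        self.ledger_id = ledger_id
        self.entries: list[bytes] = []
        self.closed = False

    def add_entry(self, data: bytes) -> int:
        """Append on the write path; returns the new entry id."""
        if self.closed:
            raise RuntimeError("ledger is closed and immutable")
        self.entries.append(data)
        return len(self.entries) - 1

    def read_entries(self, first: int, last: int) -> list[bytes]:
        """Read path: fetch a contiguous range by entry id."""
        return self.entries[first:last + 1]

    def close(self):
        self.closed = True   # sealed: readers can now trust the length

ledger = Ledger(ledger_id=1)
eid = ledger.add_entry(b"event-0")
ledger.add_entry(b"event-1")
ledger.close()
assert ledger.read_entries(0, 1) == [b"event-0", b"event-1"]
```

The write-once, seal-on-close discipline is what lets many readers consume a ledger safely without coordinating with the writer.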
Introducing HerdDB - a distributed JVM embeddable database built upon Apache ... - StreamNative
We will introduce HerdDB, a distributed database written in Java.
We will see how a distributed database can be built using Apache BookKeeper as a write-ahead commit log.
Pulsar Summit Asia - Structured Data Stream with Apache Pulsar - Shivji Kumar Jha
Type safety is extremely important in any application built around a message bus like Pulsar. Type definition and evolution can either be built into the application or delegated to the data layer, which supports it out of the box, letting the application concentrate on business logic rather than on how data is stored and evolved. Apache Pulsar offers both server-side and client-side support for structured streaming.
We have been using Pulsar for asynchronous communication among microservices in our Nutanix Beam app for over a year in production.
This talk presents the technical details of what is available on the Apache Pulsar server and client side, how we have used Pulsar's schema support to build our use cases, and what we learned along the way.
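As a rough illustration of what server-side schema support buys an application, here is a hypothetical sketch (not Pulsar's actual API; the `Topic` class and its methods are invented for this example) of a topic that validates every message against a stored schema and accepts only additive schema evolution:

```python
# Toy model of data-layer schema enforcement: the topic owns the schema,
# rejects non-conforming messages, and only permits additive evolution.
# Invented for illustration; Pulsar's real schema registry is far richer.

class Topic:
    def __init__(self, schema: dict):
        self.schema = schema          # field name -> required Python type
        self.messages = []

    def produce(self, msg: dict):
        for field, ftype in self.schema.items():
            if not isinstance(msg.get(field), ftype):
                raise TypeError(f"message violates schema on field {field!r}")
        self.messages.append(msg)

    def upgrade_schema(self, new_schema: dict):
        # Additive evolution only: every existing field keeps its type.
        if any(new_schema.get(f) is not t for f, t in self.schema.items()):
            raise ValueError("incompatible schema change rejected")
        self.schema = new_schema

events = Topic({"user_id": int})
events.produce({"user_id": 7})
events.upgrade_schema({"user_id": int, "country": str})  # additive: allowed
events.produce({"user_id": 8, "country": "FR"})
assert len(events.messages) == 2
```

The point of pushing this check into the data layer is that every producer and consumer gets the same contract for free, instead of each service re-implementing validation.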
This is an overview of interesting features of Apache Pulsar. Keep in mind that at the time I gave this presentation I had not yet used Pulsar; these are just my first impressions from the list of features.
Using Apache Spark with IBM SPSS Modeler with Dr. Steve Poulin.
An introduction to Apache Spark and its relevant integration with IBM SPSS Modeler. Why integrate? What type of benefits?
A high-level review of the integration process, with advice on which enhanced features to pay attention to and which common pitfalls to avoid.
OGh Oracle Fusion Middleware Experience 2016 at FIGI Zeist
By Maarten Smeets and Robbrecht van Amerongen, 16-02-2016
OGh FMW Experience, 16 February 2016
How Pulsar Stores Your Data - Pulsar Summit NA 2021 - StreamNative
In order to get the best performance characteristics out of your stream backend, it is important to understand the nitty-gritty details of how Pulsar stores your data. Understanding this empowers you to design your solution to make the best use of the resources at hand, and to get the optimum consistency, availability, latency and throughput for a given amount of resources.
With this underlying philosophy, in this talk we will get to the bottom of Pulsar's storage tier (Apache BookKeeper): the bare bones of BookKeeper's storage semantics, how it is used in different use cases (even beyond Pulsar), the object models of storage in Pulsar, the different kinds of data structures and algorithms Pulsar uses, and how those map to the semantics of the storage class shipped with Pulsar by default. And yes, you can change the storage backend too, with some additional code!
This session will give you the right background to map your data correctly onto Pulsar.
Happy Family: istruzioni per l'uso (Happy Family: a user's guide) is a presentation used during the Month of Psychological Culture held in Parma in May 2015.
The evening sessions were held in collaboration with Dr. Luana Randis and Dr. Ivano Ceriati.
Livet på Marginen (Life on the Margin). FIVAS, 2004. This thematic report on development and marginalization in Bengal was produced in connection with Ann-Elin Wang's exhibition of pictures from the coastal areas of Bangladesh. Many people there are forced further and further out onto the margin: they live on the edge of both the river delta and society, and must compete for ever scarcer resources.
11 Tips for Slimming Down Naturally. If you would like to see this presentation as a PowerPoint animation, please visit:
http://www.youtube.com/watch?v=UgT2MomiDJ4
To get health tips, add BB Pin: 25 E3E 5CC
Slimming down is not just about cutting your food portions; it should be an enjoyable process, not a punishing one :)
Reducing portions does not mean reducing nutrition, and certainly not letting your body go hungry.
FIVAS (Association for International Water Studies) and FENTAP, 2009. The Peruvian comic "Water for All!" takes a humorous approach to serious topics such as water privatization, the fight for the right to water, and how new models of public water management can be a good alternative. The booklet was produced in collaboration with FENTAP, a national organization for water and sanitation workers in Peru.
Many people experience this: you exercise regularly, yet you are still overweight.
There is certainly a reason for it. Watch this presentation carefully :)
Losing Weight: Nutrishake Step by Step - Vega Aminkusumo
If you have decided to lose weight and take Nutrishake, this presentation is a simple step-by-step guide :)
And now... it can be downloaded... *yay*
Watch the animated version on YouTube as well:
http://www.youtube.com/watch?v=6aIoJPXuDbs
Enjoy! :)
FIVAS and ForUM, 2009. This booklet covers various aspects of the world's water crisis: the need for fresh water, the crisis as a global challenge, water as a human right, and the importance of the right to, and access to, sanitation services. It was produced in collaboration with ForUM's group for fresh water and sanitation.
Sanger, upcoming OpenStack for Bio-informaticians - Peter Clapham
Delivery of a new bio-informatics infrastructure at the Wellcome Trust Sanger Center. We include how to programmatically create, manage and provide provenance for images used both at Sanger and elsewhere, using open source tools and continuous integration.
OpenStack - An introduction/Installation - Presented at Dr Dobb's conference... - Rahul Krishna Upadhyaya
These slides were presented at Dr. Dobb's Conference in Bangalore.
They cover an introduction to OpenStack in general,
projects under OpenStack,
and contributing to OpenStack.
This was presented jointly by CB Ananth and Rahul at Dr. Dobb's Conference Bangalore on 12th Apr 2014.
Redfish is an IPMI replacement standardized by the DMTF. It provides a RESTful API for out-of-band server management and a lightweight data model specification that is scalable, discoverable and extensible (cf. http://www.dmtf.org/standards/redfish). This presentation will start by detailing its role and the features it provides, with examples. It will demonstrate the benefits it brings to system administrators by providing a standardized open interface to multiple servers, and also to storage systems.
We will then cover various tools, such as the DMTF ones and the python-redfish library (cf. https://github.com/openstack/python-redfish), offering Redfish abstractions.
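To make the discoverable, hypermedia style of the Redfish data model concrete, here is a minimal sketch. A static dictionary stands in for the BMC's HTTP endpoints so the example is self-contained; a real client would issue authenticated GETs (for instance via python-redfish), and the resource paths below are illustrative:

```python
# A self-contained sketch of navigating a Redfish-style resource tree.
# Each resource links to others via "@odata.id", so a client discovers
# everything from the service root instead of hard-coding endpoints.

SERVICE = {
    "/redfish/v1": {"Systems": {"@odata.id": "/redfish/v1/Systems"}},
    "/redfish/v1/Systems": {
        "Members": [{"@odata.id": "/redfish/v1/Systems/1"}],
    },
    "/redfish/v1/Systems/1": {"PowerState": "On", "Model": "Example-1U"},
}

def get(path: str) -> dict:
    """Stand-in for an authenticated HTTP GET against the BMC."""
    return SERVICE[path]

# Discover systems from the service root, following links as a client would:
root = get("/redfish/v1")
systems = get(root["Systems"]["@odata.id"])
first = get(systems["Members"][0]["@odata.id"])
assert first["PowerState"] == "On"
```

This link-following pattern is why the same client code can manage servers from different vendors: only the entry point is fixed, and everything else is discovered.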
Tips Tricks and Tactics with Cells and Scaling OpenStack - May, 2015 - Belmiro Moreira
Tips Tricks and Tactics with Cells and Scaling OpenStack
OpenStack Design Summit, Paris - May, 2015
Belmiro Moreira - CERN
Matt Van Winkle - Rackspace
Sam Morrison - NeCTAR, University of Melbourne
HPC and cloud distributed computing, as a journey - Peter Clapham
Introducing an internal cloud brings new paradigms, tools and infrastructure management. When placed alongside traditional HPC, the new opportunities are significant. But getting to the new world of micro-services, autoscaling and autodialing is a journey that cannot be achieved in a single step.
"OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind with a goal to provide re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions, supercomputing sites, and others."
Watch the video: http://wp.me/p3RLHQ-gKz
Learn more: http://openhpc.community/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Real-Time Distributed and Reactive Systems with Apache Kafka and Apache Accumulo - Joe Stein
In this talk we will walk through how Apache Kafka and Apache Accumulo can be used together to orchestrate a de-coupled, real-time distributed and reactive request/response system at massive scale. Multiple data pipelines can perform complex operations for each message in parallel at high volumes with low latencies. The final result will be in line with the initiating call. The architecture gains are immense: they allow the requesting system to receive a response without the need for direct integration with the data pipeline(s) that messages must go through. By utilizing Apache Kafka and Apache Accumulo, these gains sustain at scale and allow complex operations on different messages to be applied to each response in real time.
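The decoupled request/response pattern described above can be condensed into a toy sketch. In-process queues stand in for Kafka topics and a plain dict stands in for an Accumulo table (all names here are invented for illustration); the point is only that requester and pipelines never call each other directly, and the response is matched back by a correlation id:

```python
# Toy illustration of correlation-id based request/response over a log:
# the requester publishes with an id, independent pipelines consume and
# write enriched results keyed by that id, and the requester looks the
# result up without ever integrating with the pipelines directly.

import queue
import uuid

requests_topic = queue.Queue()   # stands in for a Kafka topic
results_table = {}               # stands in for an Accumulo table

def requester(payload: str) -> str:
    corr_id = str(uuid.uuid4())
    requests_topic.put((corr_id, payload))
    return corr_id               # caller keeps only the id, not a callback

def pipeline_worker():
    """One of possibly many parallel pipelines consuming the topic."""
    corr_id, payload = requests_topic.get()
    results_table[corr_id] = payload.upper()   # the 'complex operation'

cid = requester("hello")
pipeline_worker()                # in production this runs elsewhere, in parallel
assert results_table[cid] == "HELLO"
```

Because the only shared contract is the correlation id, pipelines can be added, removed, or reordered without the requesting system changing at all.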
Accumulo Summit 2015: Real-Time Distributed and Reactive Systems with Apache ... - Accumulo Summit
Talk Abstract
In this talk we will walk through how Apache Kafka and Apache Accumulo can be used together to orchestrate a de-coupled, real-time distributed and reactive request/response system at massive scale. Multiple data pipelines can perform complex operations for each message in parallel at high volumes with low latencies. The final result will be in line with the initiating call. The architecture gains are immense: they allow the requesting system to receive a response without the need for direct integration with the data pipeline(s) that messages must go through. By utilizing Apache Kafka and Apache Accumulo, these gains sustain at scale and allow complex operations on different messages to be applied to each response in real time.
Speaker
Joe Stein
Principal Consultant, Big Data Open Source Security, LLC
Joe Stein is an Apache Kafka committer and PMC member. Joe is the Founder and Principal Architect of Big Data Open Source Security LLC, a professional services and product solutions company. Joe has been a developer, architect and technologist professionally for 15 years, having built back-end systems that supported over one hundred million unique devices a day, processing trillions of events. He blogs and hosts a podcast about Hadoop and related systems at All Things Hadoop and tweets @allthingshadoop.
The Apereo OAE Bootcamp offers an introduction into back-end and front-end development for the Apereo OAE project.
The back-end development part focuses on learning the different extension points behind the scenes in the service layer of OAE. A back-end component for OAE that exposes a REST API is built from scratch.
Back-end development topics include:
- Node.js NPM module system
- OAE back-end application life-cycle
- Data-modelling with Apache Cassandra and writing CQL queries from Node.js
- Using the OAE APIs to expose back-end functionality for the web via RESTful APIs
- Writing back-end unit tests using Grunt and Mocha. If time permits, the following will also be covered:
- Integrating with OAE's ElasticSearch query and index functionality
- Integrating with OAE's Activity and Notifications functionality
- Integration with OAE's Admin Configuration functionality
The front-end development part focuses on writing a UI widget using the REST APIs developed in the back-end development part.
Front-end development topics include:
- Integrating with the OAE Widget loading system
- Writing internationalizable templates with TrimPath and the widget i18n and l10n functionality
- Interacting with the core OAE UI APIs
- Using Bootstrap 3 to design responsive UI layouts for your widgets
- Writing front-end unit tests using Grunt and CasperJS
The new trend of open source based hardware and software is disrupting the data center market, unleashing unprecedented value for customers. Come join us to see how the community-based open source networking software OpenSwitch completes the all-open-source data center solution end to end, from the hardware layer to the cloud stack, and how HPE and its partners are accelerating its adoption with the industry-leading HPE Altoline and OpenSwitch product line.
Mr. Mohan Babu, HPC @ AMD, presented on Spack basics and HPC containers. He covered the basics of Spack, its concepts and creation, and containers in HPC.
Get DevOps training in Chennai with real-time experts at Besant Technologies, OMR. We believe that learning DevOps with both practical and theoretical components is the easiest way to understand the technology quickly. We designed this DevOps course to run from the basic level to the latest advanced level.
http://www.traininginsholinganallur.in/devops-training-in-chennai.html
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
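To give a feel for what the power-flow tools mentioned above actually compute, here is a toy DC power flow on a three-bus network in plain Python. This is not PowSyBl code (the network, matrices and injections are invented for the example); PowSyBl's load-flow engines solve much richer AC formulations:

```python
# A toy DC power flow: solve B' * theta = P for the non-slack bus voltage
# angles, then read line flows off the angle differences. Three buses,
# bus 0 is the slack; every line has susceptance 1 p.u.

lines = [(0, 1), (0, 2), (1, 2)]
B = [[2.0, -1.0],          # reduced susceptance matrix over buses {1, 2}
     [-1.0, 2.0]]
P = [-0.5, -0.5]           # buses 1 and 2 each draw 0.5 p.u. of power

# Cramer's rule for the 2x2 linear system B * [t1, t2] = P
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
t1 = (P[0] * B[1][1] - B[0][1] * P[1]) / det
t2 = (B[0][0] * P[1] - P[0] * B[1][0]) / det
theta = [0.0, t1, t2]      # slack angle is the 0-rad reference

flows = {(i, j): theta[i] - theta[j] for i, j in lines}
assert abs(flows[(0, 1)] - 0.5) < 1e-9   # slack supplies each load equally
assert abs(flows[(1, 2)]) < 1e-9         # symmetric loads: no transfer
```

Security and sensitivity analyses build on exactly this kind of solve, repeated under contingencies or with perturbed injections.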
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
2. Objective
• A software project, that feeds from research.
– Enable long-term research to happen on top of it.
• And leverage past research and assets.
• Create a community that allows several stakeholders to
contribute and benefit from a common NaaS software
stack.
• Open Source, Open community.
– Solid base functionality that can be used in a production environment.
• Users increase life span.
• Faster research output adoption.
8. get & compile & install
• Build it from scratch (fetch the code, optionally the unstable branch, and build it):
$ git clone git://github.com/dana-i2cat/opennaas.git
$ cd opennaas
$ git checkout develop # optional: unstable branch
$ mvn install
• Update from source (clean the past build, fetch updates, and rebuild):
$ cd opennaas
$ mvn clean
$ git pull git://github.com/dana-i2cat/opennaas.git
$ mvn install
• Run it, and enjoy:
$ cp -r platform/target/opennaas-0.10 /srv
$ cd /srv/opennaas-0.10
$ ./bin/opennaas.sh
11. OpenNaaS Key Requirements
• On demand (commonly user-triggered) provisioning of
network resources.
• Recursive delegation of access right over managed resources.
• Lightweight, abstracted operational model.
– Decoupled from actual vendor-specific details.
– Flexible enough to accommodate different designs and orientations.
– Fixed enough so common tools can be built and reused across plugins.
• Security.
• Lifecycle.
• Monitoring.
• Deployment and upgrade.
• Service orchestration.
12. OpenNaaS Stakeholders
• Network Operators with an interest in NaaS:
– NRENs.
– Cloud datacenters.
– New services for ISPs.
• ISVs and integrators:
– middleware-network orchestration.
• Developers and network researchers.
[Stack diagram: Third Party plugins on top of Extensions, on top of the Platform, on top of the FUSE ServiceMix OTS distribution.]
13. OpenNaaS Platform
• For developers and researchers:
– Modern IDEs available.
– Maven-based build system and dependency management.
– Plugin how-to documentation.
– Several available open source plugins as reference.
– An open OpenNaaS community.
– Commercial support for underlying technologies.
• Leverage building blocks, both for using existing resources and for creating new ones:
– Resource Repository and Manager.
– Protocol Session Manager.
– Standard Capabilities.
– Protocol Endpoints for remoting (SOAP, REST, etc.).
– Platform manager.
– *.apache.org deployment-ready libraries.
• While plugins can choose to use technologies like Hibernate, Spring or an ESB, they don't have to.
18. OpenNaaS Architecture
[Architecture diagram: Platform building blocks (CLI, Persistence, Queue, Resource Manager, Security, Protocol Session Manager, Resource Lifecycle, Remoting, Scripting, GUI) supporting a Resource Layer (RouterResource, NetworkResource, BoDResource, OpticalSwitchResource, ...), with third-party extensions and middleware (OpenNebula, OpenStack NS, NSA (NSI), ...) on top, all over the managed infrastructure (e.g. BoD).]
Network Intelligence
• Integration with northbound middleware.
• IaaS/cloud managers.
• Other NMSs.
• The user.
NaaS Layer
• Network HAL abstraction to the infrastructure.
• Resources manageable by the user.
• Access controlled by the Security Manager.
Platform
• Reusable building blocks, common to all extensions.
• Controls access to the infrastructure.
• Integrity, policy, etc.
19. OpenNaaS Platform Base Components
• ResourceManager.
– Manages the persistence and lifecycle of Resources.
– There is a ResourceManager repository implementation for
each ResourceType.
• Which acts as a Factory for that type.
– It also implements Profiles, which we'll see later.
– This brings us to the reusable NaaS abstraction concepts:
• Resource
• Resource Type
• Capability
• Action
• ActionSet
• Profile
20. OpenNaaS Platform Base Components
• Reusable concepts:
– A Resource represents a manageable unit inside the NaaS
concept.
• A Resource can be a switch, a router, a link, a logical router, a
network, etc…
– Instantiations of a Resource Type.
• Resources share a simple lifecycle:
– Initialized, loaded in memory.
– Active, accepts calls.
[Diagram: a Router Resource exposing a Capability via RPC.]
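The Resource/Capability split above can be sketched in plain Java. This is an illustrative model only; the class and method names are assumptions, not the actual OpenNaaS API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Resource/Capability model; not the OpenNaaS API.
interface Capability {
    String getName();
}

class Resource {
    private final String type;                       // the Resource Type, e.g. "router"
    private final List<Capability> capabilities = new ArrayList<>();
    private boolean active;                          // lifecycle: initialized -> active

    Resource(String type) { this.type = type; }

    void activate() { active = true; }               // "Active, accepts calls"

    void addCapability(Capability c) { capabilities.add(c); }

    // The Resource can be interrogated for its callable Capabilities,
    // but only once it is active.
    List<Capability> getCapabilities() {
        if (!active) throw new IllegalStateException(type + " is not active");
        return capabilities;
    }
}
```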
21. OpenNaaS Platform Base Components
• Reusable concepts:
– A Resource represents a
manageable unit inside the NaaS
concept.
• A Resource is decomposed in:
– A model
– An array of Capabilities.
• The ResourceType defines:
– The model.
– Which Capabilities are allowed.
• Which Capabilities are actually
callable will depend on that actual
Resource instance.
» The Resource can be
interrogated.
[Diagram: a Router Resource with Chassis, GRE and OSPF Capabilities exposed via RPC.]
22. OpenNaaS Platform Base Components
• Reusable concepts:
– A Capability is an interface to a given
Resource functionality.
• E.g. for a router: OSPF, IPv6, create/manage logical routers, etc.
• Callable by the user.
– This interface is, like the Model, abstracted and vendor neutral.
– Internally, the Capability is implemented for each kind of device.
• Hence, some capabilities might not be
available for some vendors.
– The Capability is the HAL limit for
OpenNaaS.
23. OpenNaaS Platform Base Components
• Internally, Capabilities need a way to
abstract implementation details of the
devices.
– They use Actions.
• An Action is a vendor (and protocol)
specific implementation of a
configuration modification.
– It can be queued.
– It can be undone (rollback).
• Actions are grouped into an ActionSet.
• On Action.execute(), the action usually asks the ProtocolSessionManager for an appropriate ProtocolSession to communicate with the device.
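As a sketch of the Action idea (a vendor-specific modification that can be executed and undone, grouped into an ActionSet), one might write something like the following. All names here are hypothetical, and the device is simulated with a string buffer; in OpenNaaS the Action would talk to a ProtocolSession instead:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a vendor-specific Action that can be executed and
// rolled back. The "device" is simulated by a StringBuilder holding config.
abstract class Action {
    abstract void execute(StringBuilder device);
    abstract void rollback(StringBuilder device);
}

class SetHostnameAction extends Action {
    private final String newName;
    private String oldName;                    // state captured for rollback

    SetHostnameAction(String newName) { this.newName = newName; }

    void execute(StringBuilder device) {
        oldName = device.toString();           // remember the previous config
        device.setLength(0);
        device.append(newName);
    }

    void rollback(StringBuilder device) {      // undo the modification
        device.setLength(0);
        device.append(oldName);
    }
}

// An ActionSet groups the vendor-specific implementations by action id.
class ActionSet {
    private final Map<String, Action> actions = new HashMap<>();
    void put(String id, Action a) { actions.put(id, a); }
    Action get(String id) { return actions.get(id); }
}
```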
24. OpenNaaS Platform Base Components
• An Action can be implemented
from scratch:
– Just fill the execute() method with
some code.
• Or reused from some adaptors we have.
– Most importantly, NETCONF actions are very XML-intensive.
– They use a Digester rule set for XML parsing, and Velocity templates for XML creation.
25. OpenNaaS Platform Base Components
• A Profile is an alternative set of
ActionSets.
• They can be deployed at runtime to
the container.
• At creation time, a Profile can be specified for a given Resource.
• When looking for an Action to execute
(or queue), Capabilities will first check
the Profile for an alternative Action.
– If found, it will be executed instead of
the default one.
• This is a mechanism for OpenNaaS
administrators to modify behaviour of
default capabilities.
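The lookup order described above (Profile override first, default ActionSet second) can be sketched like this; the names are illustrative, not the actual OpenNaaS API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch of Profile-based action resolution: when a Capability
// looks up an action id, an override from the Profile wins over the default.
class ActionResolver {
    private final Map<String, Supplier<String>> defaults = new HashMap<>();
    private final Map<String, Supplier<String>> profile = new HashMap<>();

    void addDefault(String id, Supplier<String> action) { defaults.put(id, action); }
    void addProfileOverride(String id, Supplier<String> action) { profile.put(id, action); }

    // First check the Profile for an alternative Action, then fall back
    // to the default one.
    Supplier<String> resolve(String id) {
        return profile.getOrDefault(id, defaults.get(id));
    }
}
```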
26. OpenNaaS Platform Base Components
• The QueueManager is used to
stack all Actions to be executed.
– All modifications can be done over
the network at once.
– Allows rollback of Actions.
– Objective: the network-wide
rollback of actions.
– It is both a Capability and an OSGi service.
• The user can check and manipulate the Queue as a Capability.
• Other Capabilities can work with it via the OSGi registry.
– Saves a lot of serialization.
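A minimal sketch of the queue-then-execute behaviour, with rollback in reverse order when an action fails (hypothetical names, not the OpenNaaS QueueManager API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative QueueManager: stack actions, execute them all at once,
// and roll back the already-executed ones (in reverse order) on failure.
class SimpleQueueManager {
    interface QueuedAction {
        void execute() throws Exception;
        void rollback();
    }

    private final List<QueuedAction> queue = new ArrayList<>();

    void queue(QueuedAction a) { queue.add(a); }

    // Apply all modifications over the network at once.
    boolean executeAll() {
        Deque<QueuedAction> done = new ArrayDeque<>();
        for (QueuedAction a : queue) {
            try {
                a.execute();
                done.push(a);
            } catch (Exception e) {
                // One action failed: undo everything executed so far.
                while (!done.isEmpty()) done.pop().rollback();
                return false;
            }
        }
        queue.clear();
        return true;
    }
}
```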
28. Fuse ServiceMix
• Standards based
• Open Source
• State of the art technologies
– OSGi, Java 6, Apache SF, Scala, etc
– Roll your own
• Componentized compilation of Apache libraries.
• Documented.
• Commercial support.
• Portable
– Linux, Windows, Mac.
• Not always the latest library versions…
29. Platform
• Based on a component container:
– OSGi R4 (Apache Felix’s implementation)
• Mainly, this allows:
– The application is split into components, and they are:
• Started and stopped at runtime.
– Which can be explored and manipulated via the CLI
– Which can be handled programmatically (via events, RPC, etc).
• Installed and updated from a (remote) repository.
– Components are isolated from each other.
• Classes from a bundle cannot import from other bundles.
• Unless explicitly allowed to.
• There is a service publication/consumption registry.
• On OSGi, these components are called bundles.
– A bundle is a jar + some special lines on the MANIFEST.
– A features.xml file allows specifying a platform version plus an initial set of bundles.
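As an illustration of those "special lines", a bundle MANIFEST looks roughly like this (the bundle name and packages below are made up for the example):

```
Bundle-SymbolicName: org.example.opennaas.extension
Bundle-Version: 0.10.0
Import-Package: org.osgi.framework;version="[1.5,2)"
Export-Package: org.example.opennaas.extension.capability
```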
32. OpenNaaS Platform
• Embeddable and interoperable.
– Component of a bigger middleware
• i.e. a cloud management infrastructure.
– L-GPLv3 for the platform.
• Foundation of the NaaS layer.
• Reusable concepts across plugins
– Resource, Capability, Action, Lifecycle.
– A command toolset and remoting layer are built around these concepts.
– Etc.
• Shared but defined roadmap.
33. OpenNaaS Platform Base Components
• Leverage building blocks:
– Resource Repository and Manager
• Handles lifecycle and persistence.
– Protocol Session Manager
• Maintains protocol session lifecycle, with an eye on session reusability.
• Additional protocols can be added
– Standard Capabilities
• Queue (for configuration deployment).
– Protocol Endpoints for remoting (SOAP, REST, etc).
– Platform manager
– *.apache.org deployment ready libraries.
• While plugins can choose to use technologies like Hibernate, Spring or an ESB, they don't have to.
34. OpenNaaS Platform Base Components
• Protocol Session Manager
– Implements the ProtocolSession abstraction
• Currently we have these implementations:
– NETCONF (IETF).
– Onesys (EMS Module).
– CLI (Telnet, SSH).
– TL1 (TCP, SSL).
– Manages ProtocolSession lifecycle.
• Performs pooling, if possible.
• Reuses sessions (keeps them alive for some minutes).
• ProtocolSession events.
– Isolates ProtocolSession usage from credentials.
• Loads and pairs ProtocolSessionContexts with the appropriate device.
• transport://user:password@ip:port/subsystem
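The session context URI format above can be unpacked with the standard java.net.URI class; a small sketch (the class and field names are illustrative):

```java
import java.net.URI;

// Illustrative parsing of transport://user:password@ip:port/subsystem
class SessionDescriptor {
    final String transport, user, password, host, subsystem;
    final int port;

    SessionDescriptor(String spec) {
        URI u = URI.create(spec);
        transport = u.getScheme();                 // e.g. "ssh"
        String[] creds = u.getUserInfo().split(":", 2);
        user = creds[0];
        password = creds[1];
        host = u.getHost();
        port = u.getPort();
        subsystem = u.getPath().substring(1);      // drop the leading '/'
    }
}
```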
53. Roadmap
• Extensions and platform upgrades are performed according to:
– Research projects
– Internal initiatives from i2CAT
– Initiatives from third party extensions
– Privately funded projects from industry
• The roadmap is open to discussion on the usual project forums
(i.e. mailing lists).
54. Extensions Roadmap
(Original table columns: Done | Current | Short-term (<6m) | Mid-term (>6m))
• L1 ROADM
• L2 BoD: Domain client (AutoBAHN); Domain Server (porting Harmony IDB); Domain Server (NSI interface)
• L2/L3 Router
• L3 Network
• Manager GUI
• Security Manager (SAML IdP)
• Cloud Manager connectors: OpenStack NetworkService drop-in replacement; OpenNebula 3.0
• Energy consumption metrics
• Infrastructure Marketplace
• OpenFlow Controller
55. Extensions Roadmap by Project
(Timeline columns: 2012–2016)
• Mantychore: UC1, UC2
• NOVI: SFA Adapter
• GEYSERS: MAC Bridge
• CONTENT
• OFERTIE
• SODALES
• GN3+: Wifi/TDM Resources, OpenFlow SLA Manager, Wifi/TDM Orchestrator
• GN3: ARN Resource
56. Third Party Extensions
• Mantychore extensions are ASLv2, so they can be used as a foundation for additional extensions.
– New extensions can have any license.
– They may be hosted on private repositories.
– And both can be installed with a well-known platform command:
• feature:install http://net.biz/3rd.party.feature
• Extensions can leverage both platform functionality and the default extensions.