Talk at the MrDevSummit 2019 @ Microsoft Reactor in London.
Showing the journey from the idea to the final app with all the design challenges when going from 2D to 3D.
Shared on 5th Dec at SGInnovate with Swirlds' Mance Harmon, Jordan Fried and Edgar Seah.
Hashgraph consensus, demo apps in the Swirlds Java SDK, babble (an unofficial Go implementation of Hashgraph), and their implications for distributed ledger technology.
Blockchain is currently the hottest tech buzzword. Yet is it just hype, or is it a fundamental piece of technology that will truly change the world we live in, much like the internet did 25 years ago?
This presentation initially explains the fundamentals of blockchain and how it enables a new breed of business models.
We will then delve into how you can run a blockchain app on Azure, followed by a demo.
The presentation analyses Microsoft's strategy with blockchain and how it is enabling Azure support for a number of DLTs, including Ethereum, Hyperledger Fabric, R3 Corda, Quorum and Chain Core, by offering easy-to-deploy templates for these ledgers. More importantly, it shows how Microsoft is integrating these DLTs with the existing rich Azure ecosystem to enable the building of truly scalable, distributed enterprise applications using cryptlets and the Coco Framework.
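As a minimal illustration of the hash-chaining idea such blockchain fundamentals talks typically cover (a toy sketch, not code from the presentation): each block commits to its predecessor's hash, so tampering with any block invalidates every block after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(items):
    # Link each block to the hash of the previous one.
    chain, prev = [], "0" * 64  # genesis predecessor
    for item in items:
        block = {"data": item, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain):
    # Re-derive every link; any edit upstream breaks the chain.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["pay Alice 5", "pay Bob 3"])
assert is_valid(chain)
chain[0]["data"] = "pay Mallory 500"   # tamper with history
assert not is_valid(chain)
```

Real ledgers add consensus, signatures and proof-of-work or BFT voting on top; the integrity property above is the common core.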
Whose Cloud Is It Anyway? Exploring Data Security, Ownership and Control - David Etue
Whose cloud is it anyway? Exploring data security, ownership and control as presented at ISSE EU 2014
Forget the geeky analysis of cloud security; risk is driven by the people involved and the approach to adoption. Cloud security conversations often focus on technical risk from other users of the cloud’s pooled resources, or vulnerabilities in the application and virtualization layers. The more important conversation is likely around data control, ownership, and identity management, since resource pooling and abstraction expose data to risks from cloud users, cloud administrators, law enforcement, intelligence agencies and a pantheon of adversaries. Organizations are adopting the latest technologies at an increasing pace, and trends such as insecure cloud services and bring-your-own-device often leave them vulnerable. In today’s technological world it is not a matter of if the data will be compromised but when, and what these groups can do to protect the data when it happens.
This discussion will tackle the complex issues around data ownership and control. If data is destiny, then too many people are in charge of your fate. We discuss how to get it back.
The Art of Evading Anti-Virus
There are estimates that security analysts, including penetration testers, are approximately five years behind malicious actors. Anti-virus by itself isn’t enough to stop a malicious individual from gaining access to your servers or computers anymore; in fact, many attackers have devised ways to evade it. We as security professionals should understand how these individuals are doing this, and what tools are available for us to replicate these attacks. Tools such as the Veil-Framework assist us with this. This talk will go over this tool and how malicious individuals evade anti-virus with ease.
Quentin Rhoads-Herrera is a security analyst for State Farm. In this position he is responsible for risk analysis and application security assessments. He is accountable for ensuring risks are identified and properly mitigated throughout the organization.
He previously served as the Information Security Director for Clearview Energy and Solarview. In this position he oversaw all information security activities. These included development of company-wide cyber security standards, development of layered defense approaches and the hardening and defense of all company systems.
Mr. Rhoads-Herrera has worked in the Information Security space for a total of seven years serving in roles ranging from Security Consultant to Information Security Director.
Red, Amber, Green Status: The Human Dashboard
This session will outline the importance of presenting actionable metrics for the Security Awareness program. Oftentimes security programs are presented while omitting the most constant threat to Information Systems: the human. From a security awareness perspective, we will review analytics that include key performance indicators that may already be available to you; they just need to be added to the new human dashboard.
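A hypothetical sketch of the "human dashboard" idea: turning awareness KPIs into a red/amber/green status. The two metrics and all thresholds below are invented for illustration and are not taken from the session.

```python
def rag_status(phish_click_rate, training_completion):
    """Both inputs are fractions in [0, 1]; thresholds are illustrative."""
    if phish_click_rate > 0.15 or training_completion < 0.60:
        return "RED"
    if phish_click_rate > 0.05 or training_completion < 0.90:
        return "AMBER"
    return "GREEN"

# A low click rate and high training completion rolls up to GREEN.
assert rag_status(0.02, 0.95) == "GREEN"
assert rag_status(0.10, 0.95) == "AMBER"
assert rag_status(0.20, 0.95) == "RED"
```

The point of the dashboard is exactly this roll-up: KPIs you already collect (phishing simulation results, training records) condensed into one status executives can act on.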
Laurianna Callaghan currently serves as a security consultant for Ana Academy, a Dallas based security training company. Previously, Laurianna worked with Dell where she was the creator of security analytics for a major healthcare customer which were presented at the 2016 IASAP conference. In addition, Laurianna has more than 21 years experience in various IT domains. She has served as the Director of Systems Engineering for a telemarketing firm, the UNIX/MVS Manager for a major airline and has IT experience in the healthcare, communications, transportation, education, retail, and other industry sectors. Laurianna holds both the CCNA Security and CISSP designations.
Blockchain Scalability - Architectures and Algorithms - Gokul Alex
My presentation on 'Blockchain Scalability - Architectures and Algorithms' for the TechAthena Digital Community Webinar.
Blockchain scalability is one of the most significant concerns for a minimum viable blockchain implementation. It is one of the key aspects determining the relevance and feasibility of blockchain technology for a particular use case. This session will cover the fundamental aspects of distributed computing that determine the contours of scalability.
Subsequently, the session will outline the parameters and metrics related to blockchain scalability in detail. In this context, it will take a deep dive into the architectural and algorithmic techniques that enable a scalable blockchain. Architectural techniques such as vertical scaling and horizontal scaling will be explained in detail, along with design techniques such as state channels, sharding, sidechains, off-chain computation, and block size and time optimization.
In summary, the session will conclude with the implications of, and trade-offs among, blockchain scalability, security, simplicity and interoperability. Looking forward to your views and thoughts!
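One of the scaling techniques the session lists, sharding, can be sketched in a few lines: accounts are deterministically assigned to shards (here by hashing), so each shard validates only its own slice of the transaction load. This is a toy illustration, not any particular chain's sharding scheme.

```python
import hashlib

NUM_SHARDS = 4

def shard_of(account):
    # Deterministic assignment: same account always lands on the same shard.
    digest = hashlib.sha256(account.encode()).digest()
    return digest[0] % NUM_SHARDS

def partition(transactions):
    # Route each transaction to the shard owning its sender account.
    shards = {i: [] for i in range(NUM_SHARDS)}
    for tx in transactions:
        shards[shard_of(tx["from"])].append(tx)
    return shards

txs = [{"from": f"acct-{i}", "to": "merchant", "amount": i} for i in range(8)]
shards = partition(txs)
assert sum(len(v) for v in shards.values()) == len(txs)  # nothing lost
assert shard_of("acct-1") == shard_of("acct-1")          # deterministic
```

The hard part the session alludes to is what this sketch omits: cross-shard transactions, which reintroduce coordination and are where the scalability/security/simplicity trade-offs bite.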
Build Blockchain Prototype using Azure Workbench and Manage data on ledger - Mohammad Asif
In this session we show how to create a blockchain prototype using Azure Blockchain Workbench, integrate it with existing applications, and scale your blockchain apps with Azure Blockchain as a Service.
Introducing the Vulnerability Management Maturity Model - VM3
The information security landscape has evolved significantly during the last five years with the emergence and wider use of new technologies such as cloud, BYOD, mobile and the Internet of Things. Alongside this landscape, corporate organizations' key defense leaders (CIOs, CSOs and CISOs) have evolved in their information security defense strategies, as well as in how they think about and approach information security. This different and evolved landscape, combined with defense leaders' new mindset, has influenced key information security processes and, in particular, has resulted in a greater understanding of the process of vulnerability management.
This session presents a Vulnerability Management Maturity Model, referred to as VM3, which identifies six different levels of vulnerability management maturity within which different organizations operate. Detailed findings and lessons learned from a recent study on vulnerability management maturity are shared.
The session covers the six high-level activities, as well as the surrounding business environment, which characterize an organization's execution of the vulnerability management process. Key challenges present within each of the six high-level activities, as well as challenges imposed by the organization's surrounding business environment, are identified and described. Attendees will learn how these key challenges impede one's ability to achieve higher levels of maturity, as well as strategies for overcoming them. Attendees will also learn how they can help their organization evolve to higher levels of vulnerability management maturity, with the goal of achieving lower levels of information security risk.
Gordon MacKay, CISSP, Software/Systems Guru with a dash of security hacking, serves as CTO for Digital Defense, Inc. He applies mathematical modeling and engineering principles in investigating solutions to many of the challenges within the information security space. His solution to matching network discovered hosts within independent vulnerability assessments across time resulted in achieving patent-pending status for the company’s scanning technology.
He has presented at many conferences including ISC2 Security Summit, Cyber Texas, BSides Detroit, BSides San Antonio, BSides Austin, BSides DFW, RSA and more, and has been featured by top media outlets such as Fox News, CIO Review, Softpedia and others.
He holds a Bachelor's in Computer Engineering from McGill University and is a Distinguished Ponemon Institute Fellow.
Business Geekdom: 1 = 3 = 5
Each year a security team participates in several audits, meetings with the business, and strategy meetings. Oftentimes, security is seen as imposing requirements that are too difficult, impossible to manage, or flat-out ridiculous.
This is similar to a geek. A geek is defined as "an unfashionable or socially inept person." Is this social ineptness actually just an inability to translate the passion of the security professional to the business professional?
In this presentation, I would like to cover how to create, establish and evangelize a framework that has one backend with several frontends. The backend is a common security control framework (not the UCF), and the frontends translate to the various business units, audits and business strategies a security professional encounters each year.
Grant Gilliam is an Enterprise and Solutions Architect for CHRISTUS Health. Previously, Gilliam has been a security architect, senior security engineer and senior data security analyst, working in industries including healthcare, insurance, software and news media. Gilliam has also established his own business focused on outsourcing non-competitive business tasks, giving clients a strategic advantage over competitors by minimizing FTE and contractor headcount.
His educational background includes a Master of Science in Information Systems, focusing in Information Security, and a Bachelor of Business Administration in Management Information Systems, both from Baylor University. The focus of his master's degree research was IT law and intellectual property. Gilliam is also a Certified Information Systems Security Professional, Certified Information Security Manager and Certified Information Systems Auditor.
Getting started with Azure Event Grid - Webinar with Steef-Jan Wiggers (Codit)
Azure Event Grid is one of the latest Microsoft Azure solutions. It enables you to build reactive, event-driven apps with a fully managed event routing service. The result? It simplifies your event consumption, while you can build reliable cloud apps and focus on product innovation.
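One concrete piece of wiring behind "reactive, event-driven apps": when you register a webhook endpoint, Event Grid first delivers a SubscriptionValidationEvent and expects the validation code echoed back. The sketch below shows just that handshake logic as a plain function, leaving out HTTP plumbing and real event processing.

```python
def handle_event_grid(events):
    # Event Grid posts a JSON array of events. The validation
    # handshake must echo the validationCode back as validationResponse.
    for event in events:
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            return {"validationResponse": code}
    # Normal delivery: process and acknowledge (stubbed here).
    return {"received": len(events)}

handshake = [{
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "data": {"validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"},
}]
assert handle_event_grid(handshake) == {
    "validationResponse": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
}
```

In a real endpoint this function would sit behind an HTTPS handler returning 200 with that JSON body; everything after the handshake is your application's event consumption.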
A list of action items you want to keep in mind when you're doing DevSecOps for your cloud-native environments. Given as part of a talk in the Modern Security series (https://info.signalsciences.com/securing-cloud-native-ten-tips-better-container-security).
Hackbama Presentation
Presenter: Jason Cuneo
Abstract: The revolution of blockchain centered technologies provides security practitioners with a unique opportunity to participate in shaping the future of secure networking and has the potential to redefine how organizations and society transact and determine value. The objective of this discussion is to introduce how blockchains are disrupting the status quo and how they can be used to improve the Cybersecurity landscape.
Webinar: Fighting Fraud with Graph Databases - DataStax
Modern fraud detection poses significant engineering challenges, from managing ingestion at scale to analyzing patterns in real time. We'll first take a look at how DataStax Enterprise Graph, powered by the industry’s best version of Apache Cassandra™, can meet those requirements to help you save the day.
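The classic graph pattern behind fraud detection can be sketched without any graph database: link accounts that share identifiers (phone numbers, devices) and flag connected groups above a size threshold, the "fraud ring" shape. This toy version uses a union-find over an in-memory dict; a production system would run the same idea as graph traversals over streamed data.

```python
from collections import defaultdict

def fraud_rings(accounts, min_size=3):
    """accounts: dict mapping account -> set of shared identifiers."""
    # Index which accounts share each identifier.
    by_identifier = defaultdict(set)
    for account, identifiers in accounts.items():
        for ident in identifiers:
            by_identifier[ident].add(account)

    # Union-find to group transitively connected accounts.
    parent = {a: a for a in accounts}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    for group in by_identifier.values():
        members = list(group)
        for other in members[1:]:
            union(members[0], other)

    rings = defaultdict(set)
    for account in accounts:
        rings[find(account)].add(account)
    return [r for r in rings.values() if len(r) >= min_size]

accounts = {
    "a1": {"phone:555-0100"},
    "a2": {"phone:555-0100", "device:X"},
    "a3": {"device:X"},
    "a4": {"phone:555-0199"},   # unconnected, not a ring
}
assert fraud_rings(accounts) == [{"a1", "a2", "a3"}]
```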
Soft-Shake 2013: Enabling Realtime Queries to End Users - Benoit Perroud
Since it became an Apache Top Level Project in early 2008, Hadoop has established itself as the de-facto industry standard for batch processing. The two layers composing its core, HDFS and MapReduce, are strong building blocks for data processing. Running data analysis and crunching petabytes of data is no longer fiction. But the MapReduce framework does have two major drawbacks: query latency and data freshness.
At the same time, businesses have started to exchange more and more data through REST APIs, leveraging HTTP verbs (GET, POST, PUT, DELETE) and URIs (for instance http://company/api/v2/domain/identifier), pushing the need to read data in a random-access style, from simple key/value lookups to complex queries.
Enhancing the big data stack with real-time search capabilities is the next natural step for the Hadoop ecosystem, because the MapReduce framework was not designed with synchronous processing in mind.
There is a lot of traction today in this area, and this talk will try to answer the question of how to fill this gap with specific open-source components, ultimately building a dedicated platform that enables real-time queries on Internet-scale data sets. After discussing the evolution of common Hadoop platform deployments, a hybrid approach called the lambda architecture will be proposed. It will be demonstrated with concrete examples, discussing which technologies could be a good match and how they would interact together.
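The essence of the lambda architecture the talk proposes is that queries are answered by merging a precomputed batch view with a small realtime view covering events since the last batch run. A minimal sketch of that merge step (invented page-view numbers for illustration):

```python
from collections import Counter

def merge_views(batch_view, realtime_view):
    # Both views map key -> count; the realtime (speed-layer) view
    # only covers events newer than the batch view's cutoff, so the
    # serving layer can simply sum them.
    merged = Counter(batch_view)
    merged.update(realtime_view)
    return dict(merged)

batch_view = {"page_a": 1000, "page_b": 400}   # e.g. from a nightly MapReduce job
realtime_view = {"page_a": 7, "page_c": 2}     # e.g. from a streaming layer
assert merge_views(batch_view, realtime_view) == {
    "page_a": 1007, "page_b": 400, "page_c": 2
}
```

The batch layer gives accuracy and reprocessability over the full history; the speed layer gives freshness; the merge hides the seam from the end user, which is exactly how the architecture addresses MapReduce's latency and data-freshness drawbacks.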
Apache Kafka and the Data Mesh | Michael Noll, Confluent (Hosted by Confluent)
Data mesh is a relatively recent term that describes a set of principles that good modern data systems uphold; a kind of “microservices” for the data-centric world. While the data mesh is not a technology-specific pattern, building systems that adopt and implement data mesh principles has a relatively long history under different guises.
In this talk, we share our recommendations and picks of what every developer should know about building a streaming data mesh with Kafka. We introduce the four principles of the data mesh: domain-driven decentralization, data as a product, self-service data platform, and federated governance. We then cover topics such as the differences between working with event streams versus centralized approaches and highlight the key characteristics that make streams a great fit for implementing a mesh, such as their ability to capture both real-time and historical data. We’ll examine how to onboard data from existing systems into a mesh, modelling the communication within the mesh, how to deal with changes to your domain’s “public” data, give examples of global standards for governance, and discuss the importance of taking a product-centric view on data sources and the data sets they share.
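The key stream property the talk highlights, capturing both real-time and historical data, comes from the append-only, offset-addressed log model. A toy, Kafka-style sketch (not Kafka's actual API) of why a late-joining domain consumer can replay history and then keep up:

```python
class EventLog:
    """Minimal append-only log with offset-based reads, Kafka-style."""

    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1     # offset of the new event

    def read_from(self, offset):
        # Historical replay and live tailing use the same API.
        return self._events[offset:]

log = EventLog()
for order_id in (1, 2, 3):
    log.append({"order_id": order_id})

# A consumer that joins late still sees the full history...
assert [e["order_id"] for e in log.read_from(0)] == [1, 2, 3]
# ...and can resume from a checkpointed offset as new events arrive.
checkpoint = 2
log.append({"order_id": 4})
assert [e["order_id"] for e in log.read_from(checkpoint)] == [3, 4]
```

In a mesh, each domain publishes its "data as a product" as such a stream; consumers in other domains choose their own offset, which is what decouples producers from consumers.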
Streaming Cyber Security into Graph: Accelerating Data into DataStax Graph an... - Keith Kraus
Traditional security tools like security information and event managers (SIEMs) are struggling to keep up with the terabytes of event data (250M to 2B events) being generated each day from an ever-growing number of devices. Cybersecurity has become a data problem, and enterprises need to respond with scalable solutions to enable effective hunting and combat evolving attacks. Rethinking the cybersecurity problem as a data-centric problem led Accenture Labs’s Cybersecurity team to use emerging big data tools along with new approaches such as graph databases and analysis to exploit the connected nature of the data to its advantage. Joshua Patterson, Michael Wendt, and Keith Kraus explain how Accenture Labs’s Cybersecurity team is using Apache Kafka, Spark, and Flink to stream data into Blazegraph and DataStax Graph to accelerate cyber defense.
Leveraging Datastax Graph and Blazegraph allows Accenture Labs to greatly accelerate query and analysis performance compared to traditional security tools like SIEM. Josh, Michael, and Keith share the challenges of fitting cybersecurity data into each of the graph structures, as well as the ways they exploited the connectedness of events to discover new threats that would have been missed in traditional SIEM tools. In addition, they explain how they use GPUs to accelerate graph analysis by using Blazegraph DASL. Josh, Michael, and Keith end by demonstrating how to efficiently and effectively stream data into these graph databases using best-in-breed technologies such as Apache Kafka, Spark, and Flink and touch on why Kudu is becoming an integral part of Accenture’s technology stack. Utilizing these technologies, clients have supercharged their security analysts’ cyber-hunting abilities and are uncovering threats faster.
Most of us already have a virtual infrastructure in place. We’re running virtual machines atop a hypervisor, and for the most part enjoying the experience. But there’s always room for improvement. One of those improvements that you can implement today is elevating your simple virtual environment to a real Private Cloud. It’s not difficult, and it leans on the same tools you probably already have. But it does require a different approach to management, and a hard look at supply and demand for resources. Can you quantify how many resources you have? Do you know the exact number your virtual machines are demanding? Is your hardware suited for expansion, or even for the types of high availability a Private Cloud requires? Get the answers to these and many other questions when you attend this half-day workshop with noted VMware guru Greg Shields. In it, you’ll learn exactly how to construct your own vSphere Private Cloud that meets your needs.
Mitigating One Million Security Threats With Kafka and Spark With Arun Janart... – HostedbyConfluent
Mitigating One Million Security Threats With Kafka and Spark With Arun Janarthnam | Current 2022
Citrix Analytics (Security), a user behavior analytics service, protects hundreds of companies from risks and threats posed by users. The service processes 3 billion events per day and can identify security threats in under a minute.
Kafka is the backbone of our real-time platform. It seamlessly glues together the numerous stages required for ETL, feature extraction, model training and serving, data access, etc., and enables us to develop new products faster.
In this session, we will talk about how, in the last six months, 7M risk indicators were triggered and 1M threat-mitigating actions were taken, and the integral role Kafka played in achieving that. We would also like to share some interesting ways Kafka is used at Citrix: how topics are auto-provisioned, how security is handled in a multi-tenant, public-facing “northbound” Kafka cluster, and the Kafka + Spark optimizations that reduced the cost of running hundreds of streaming jobs.
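One plausible shape for the topic auto-provisioning mentioned above, sketched here as an in-memory stand-in (the actual Citrix implementation is not public; a production version would call Kafka’s AdminClient to create topics): derive a deterministic tenant-scoped topic name and provision it on first use.

```typescript
// Illustrative sketch (not Citrix's actual implementation) of per-tenant
// topic auto-provisioning in a multi-tenant Kafka cluster. The set below
// stands in for the cluster's topic metadata.

const existingTopics = new Set<string>();

// Prefixing by tenant keeps ACLs and quotas easy to scope per tenant.
function tenantTopic(tenantId: string, stream: string): string {
  return `tenant.${tenantId}.${stream}`;
}

// Returns true if the topic had to be provisioned; a real cluster would
// call the Kafka AdminClient's createTopics here instead of Set.add.
function ensureTopic(tenantId: string, stream: string): boolean {
  const topic = tenantTopic(tenantId, stream);
  if (existingTopics.has(topic)) return false;
  existingTopics.add(topic);
  return true;
}

const created = ensureTopic("acme", "risk-indicators"); // provisions on first use
const again = ensureTopic("acme", "risk-indicators");   // already exists
```

The deterministic naming scheme is what makes auto-provisioning safe: any producer can compute the topic name without coordination, and creation is idempotent.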
Modern systems in production rely on decades of computer science research. Over time, new architectural patterns emerge that enable more resilient and robust systems. In this talk, we'll discuss some of these patterns from systems I've worked on at Google and the related work that provide insights into the motivations behind them.
DEVNET-1010 Using Cisco pxGrid for Security Platform Integration – Cisco DevNet
This session will cover: functional and architectural basics of Cisco Platform Exchange Grid (pxGrid), the new publish/subscribe/query contextual information exchange framework for creating integrations between DevNet partner platforms and Cisco security products; integration use cases such as utilizing pxGrid for executing threat response actions on the network and using identity, endpoint device, and user access privilege context to enhance our DevNet partners’ analytics, forensics, and reporting; and a first-hand developer perspective from DevNet partner ID/IP, who used pxGrid to integrate Ping Identity and Cisco Identity Services Engine.
Big Data Fabric: A Necessity For Any Successful Big Data Initiative – Denodo
Watch this webinar in full here: https://buff.ly/2IxM8Iy
Watch all webinars from the Denodo Packed Lunch webinar series here: https://buff.ly/2IR3q6w
While big data initiatives have become necessary for any business to generate actionable insights, a big data fabric has become a necessity for any successful big data initiative. Best-of-breed big data fabrics should deliver actionable insights to business users with minimal effort, provide end-to-end security for the entire enterprise data platform, and provide real-time data integration, while delivering a self-service data platform to business users.
Attend this session to learn how big data fabric enabled by data virtualization:
• Provides lightning fast self-service data access to business users
• Centralizes data security, governance and data privacy
• Fulfills the promise of data lakes to provide actionable insights
Despite its notoriously poor user experience for both users and admins, the remote access VPN has remained the standard for remote access to internally managed applications. The tool, which dates back to the 1990s, extends the corporate network to users and exposes it to malware that may be running on mobile devices.
Using Cisco pxGrid for Security Platform Integration: a deep dive – Cisco DevNet
A session in the DevNet Zone at Cisco Live, Berlin. This session will cover: functional and architectural basics of Cisco Platform Exchange Grid (pxGrid), the new publish/subscribe/query contextual information exchange framework for creating integrations between DevNet Zone partner platforms and Cisco security products; integration use cases such as utilizing pxGrid for executing threat response actions on the network and using identity, endpoint device, and user access privilege context to enhance our DevNet Zone partners’ analytics, forensics, and reporting; and a first-hand developer perspective from DevNet Zone partner ID/IP, who used pxGrid to integrate Ping Identity and Cisco Identity Services Engine.
Similar to Talk @FH Hagenberg - Data viz in a collaborative mixed reality space
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... – Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their quality of enabling complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... – University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
This presentation briefly explores the structural and functional attributes of nucleotides, the structure and function of genetic materials, and the impact of UV rays and pH upon them.
Phenomics assisted breeding in crop improvement – IshaGoswami9
The population is increasing and will reach about 9 billion by 2050; with climate change added, it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... – Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our energy-conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes’ equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
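As a small illustration of the sampling strategies named in this talk, here is a hedged sketch of uniform random sampling over a space of boolean configuration options. Real feature models also carry cross-option constraints, which this toy version ignores; the seeded generator keeps the sample itself reproducible, in the spirit of the talk.

```typescript
// Toy uniform sampling of a boolean configuration space (a stand-in for
// feature-model sampling; real feature models also encode constraints).

function sampleConfig(options: string[], rand: () => number): Record<string, boolean> {
  const config: Record<string, boolean> = {};
  for (const opt of options) {
    config[opt] = rand() < 0.5; // each option enabled with probability 1/2
  }
  return config;
}

// Deterministic linear congruential generator so the sample is reproducible,
// which is itself a nod to the theme of frictionless reproducibility.
function lcg(seed: number): () => number {
  let s = seed;
  return () => {
    s = (s * 1664525 + 1013904223) % 4294967296;
    return s / 4294967296;
  };
}

// Hypothetical compile-time options, purely for illustration.
const config = sampleConfig(["O2", "debug", "lto"], lcg(42));
```

With a fixed seed, the sampled configuration is identical on every run, so a measurement taken on it can be replicated exactly.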
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... – Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
7. Confidential
„Which line of code caused a frustrating user experience on an iOS 11, with LTE, in Indonesia, for our web checkout, at 11:15 am, on which (virtual) host, hosted in which datacenter, ...“
Dynatrace
26. Confidential
Next steps
• Use real data from the public Dynatrace API
• Stable multi-user sessions, extend to virtual reality
• EAP ready – customers should be able to see their data
• Rethink interactions, use cases, visualizations, ...
47. Confidential
Lessons learned
• Data viz in 3D is quite challenging
• Use real data as soon as possible
• Real data is beautiful
• Use space to create a mental model for complex information
• Fluid navigation through data dimensions. Context is king.
• High acceptance of holograms. Immersion in data.
• Collaboration for distributed teams
• Space and time independent
• Give the user SUPERPOWERS!
61. Confidential
Infrastructure for Multi-User
User A – The first client acts as the „Master Client“. It can be made responsible for handling logic that should only be executed by one client in a room.
• Requests user details, workspace list and details; triggers the data query (Dynatrace MXR API)
• Connects to the cloud service and opens a room to sync e.g. player and entity positions and events (PUN)
• Connects to the cloud service and opens a VoIP room (PUN Voice)
User B – Joins the workspace as a default client.
• Requests user details, workspace list and details; triggers the data query
• Joins the VoIP room
Dynatrace MXR API – Requests user details (auth, tenant, ...); requests data from the Dynatrace API via the Dynatrace Assistant
Microsoft Azure Spatial Anchors – Requests/saves anchor features
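The “master client” role described above can be sketched as follows. This is a simplified stand-in for how a networking layer like Photon (PUN) assigns a master client, not the actual PUN API: the first client to join a room holds the role, and on leave it passes to the longest-connected remaining client.

```typescript
// Minimal sketch of the "master client" pattern: one client per room is
// responsible for room-wide logic (e.g. triggering the data query).
// Hypothetical stand-in, not the Photon/PUN API.

class Room {
  private clients: string[] = []; // join order preserved

  join(clientId: string): void {
    this.clients.push(clientId);
  }

  leave(clientId: string): void {
    this.clients = this.clients.filter((c) => c !== clientId);
  }

  // The head of the join-ordered list holds the master role; when it
  // leaves, the role implicitly falls to the next-longest-connected client.
  masterClient(): string | undefined {
    return this.clients[0];
  }
}

const room = new Room();
room.join("userA");  // userA becomes master
room.join("userB");  // userB joins as default client
room.leave("userA"); // master role falls to userB
```

Centralizing one-time logic in a single elected client avoids duplicate data queries when several headsets share a workspace.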
66. Confidential
OpenKit – Monitoring multi-user sessions to pinpoint issues
• Need for exception logging
• Tracing user actions to understand/reproduce bugs
Photo: GFDL, CC-BY-SA-3.0 granted by photographer
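The session monitoring idea on this slide can be illustrated with a toy tracer. The real OpenKit API differs (this is not its interface), but the principle is the same: every multi-user session records named user actions and errors so a frustrating experience can be traced back and reproduced.

```typescript
// Toy stand-in for the kind of user-action tracing OpenKit provides
// (the actual OpenKit API differs; this only illustrates the concept).

interface TraceEntry {
  kind: "action" | "error";
  name: string;
  timestamp: number;
}

class MonitoredSession {
  readonly trace: TraceEntry[] = [];
  constructor(readonly userId: string) {}

  // Record a named user action, e.g. opening a workspace in the headset.
  reportAction(name: string): void {
    this.trace.push({ kind: "action", name, timestamp: Date.now() });
  }

  // Record an exception so bugs can be reproduced from the action trail.
  reportError(name: string): void {
    this.trace.push({ kind: "error", name, timestamp: Date.now() });
  }
}

const session = new MonitoredSession("userA");
session.reportAction("open workspace");
session.reportAction("query timeseries");
session.reportError("exception in chart renderer");
```

Because the error entry sits after the actions that led to it, a developer can replay the exact sequence that produced the bug.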
Focus on web frontend, data viz, and IxD
Web applications, D3.js
Amazon, hosts
Basic description
Other use cases:
Security (Fraud, Intrusion Detection)
Log Analytics
User Experience Mgmt
Autonomous Cloud Management (self-healing apps)
Innovation lab:
New ideas on how users consume data -> adapt to new challenges
Business use case,
Simple: How‘s it going?
Focus on mgmt level
CTO Bernd Greifeneder’s closing talk at DevOne:
Create the possibility to find issues faster / more precisely
IT Guys: „have you tried to turn it off and on again?“
Create some wow for vegas show
Let‘s do this!
Feature creep deluxe
Iron Man UI, futuristic stuff
A-frame, d3js
A-Frame see-through WebVR,
iPhone SE
Las Vegas, Bellagio, 2000 visitors, big main stage, excitement