An exposé on human-centered design, as related to data science and “medium data”. Examples of great API design will be showcased, as well as other end-user facing tools that can enable data scientists to share their observations with the world.
7 tips for managing software development in the age of agile - Growing Agile
Seven tips to focus on as the leader or manager of agile teams. This talk was given at the ITWEB conference in South Africa in 2015 and at #SGPRG (Scrum Gathering Prague).
I’ll present the new knowledge discovery tools we are building at Diffeo. Unlike traditional search engines that use keywords, Diffeo provides an in-browser knowledge base that accelerates information gathering about people, companies, chemical compounds, cyber events, or other real world entities. I’ll describe how Diffeo uses active learning to encourage long and deep user interactions in order to recommend new content for in-progress articles. As you write, the search results get better and more interesting, because the system can see more precisely which entity you mean and which you don’t (disambiguation) and also what you don’t know yet about the entity (discovery).
Finally, in this presentation I’ll describe our experience organizing the Text REtrieval Conference (TREC) tracks on Knowledge Base Acceleration (KBA) and Dynamic Domain (DD), which are pushing the state of the art in knowledge discovery on large streams. I’ll show you how to access the largest corpus of streaming text data ever released for public evaluations.
API Driven Applications - An ecosystem architecture - WSO2
Today people connect to information sources through many disparate means, and the PC is the least used among them. Through powerful mobile devices, smart televisions, wearable electronics, and other ubiquitous computing equipment, an entire generation is wired together, creating and consuming information. If a business wants to reach a market segment today, simply taking the business online is not good enough; it has to innovate in how it reaches customers across the dozens of available streams. Merely launching a modern e-commerce portal will bring almost no revenue; the business has to build an ecosystem around the consumer that delivers value.
For this reason the developer community is increasingly focusing on API design and architecture practices, as opposed to application design and development. Applications are now driven by APIs and widely exist as thin but rich layers of user interface. The API-first approach has paid off when it comes to creating multiple information streams for delivering and acquiring information. Today a successful business model means not only selling the product to the customer but also understanding the customer, and API-driven design supports this business perspective.
On the other hand, consumers today are far more computer literate than before; they are concerned about online identity, privacy, and secure communication. Application developers need to focus on federated identity, privacy policies, and establishing trusted, secure communications, and on sharing these mechanisms with users, building trust while keeping the user experience seamless.
This talk will focus on these aspects of API-driven application design and development. Nuwan will discuss and demonstrate the key elements of an API-driven application ecosystem.
As the importance of having a data strategy in place is sinking in, many organizations have added a chief data officer (CDO) to their executive team to help create and implement that strategy. But every organization is doing this a little bit differently. This talk will describe how a variety of industries and organizations are using CDOs and will make recommendations for best practices.
Mobile Technology Usage by Humanitarian Programs: A Metadata Analysis - odsc
CommCare, developed by Dimagi Inc., is an open-source mobile technology platform that supports hundreds of humanitarian frontline programs worldwide. The objective of this analysis is to demonstrate how CommCare metadata contains a wealth of information that can inform humanitarian programs in their use of mobile technology. This understanding can help programs determine the most effective way to implement CommCare or other mobile technology in resource-poor settings. A typical CommCare user is a frontline worker, such as a community health worker who provides outreach to pregnant women and children. An important feature of CommCare is that it supports case management, allowing users to register, update, and close cases in their CommCare application. A case is usually a user’s client, e.g., a pregnant woman who is supported by the CommCare user. While using CommCare, the user fills out electronic forms which eventually get submitted to the CommCare cloud server. The cumulative number of forms submitted by CommCare users as of December 2014 was just over 10 million. Metadata for each form submitted through CommCare are stored in Dimagi’s data platform; included in a form’s metadata are date and time stamps for when each form was started and ended by the user and when the form was eventually received by the cloud server.
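The start, end, and receipt timestamps described above are enough to derive two simple usage metrics: how long a worker spent filling out a form, and how long the form waited before reaching the cloud server. A minimal sketch in Python, assuming illustrative field names and timestamp format (not CommCare's actual schema):

```python
from datetime import datetime

def form_metrics(started, ended, received, fmt="%Y-%m-%dT%H:%M:%S"):
    """Compute completion time and sync delay (in seconds) from the
    three form timestamps. Field order and format are illustrative."""
    t_start = datetime.strptime(started, fmt)
    t_end = datetime.strptime(ended, fmt)
    t_received = datetime.strptime(received, fmt)
    return {
        # How long the frontline worker spent in the form.
        "completion_seconds": (t_end - t_start).total_seconds(),
        # How long the finished form waited before reaching the server,
        # a rough proxy for connectivity in resource-poor settings.
        "sync_delay_seconds": (t_received - t_end).total_seconds(),
    }

m = form_metrics("2014-12-01T09:00:00", "2014-12-01T09:04:30", "2014-12-01T11:04:30")
```

Aggregating these two numbers across millions of submitted forms is the kind of analysis the metadata supports.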
Big Data Infrastructure: Introduction to Hadoop with MapReduce, Pig, and Hive - odsc
The main objective of this workshop is to give the audience hands-on experience with several Hadoop technologies and jump-start their Hadoop journey. In this workshop, you will load data and submit queries using Hadoop! Before jumping into the technology, the founders of DataKitchen review Hadoop and some of its technologies (MapReduce, Hive, Pig, Impala, and Spark), look at performance, and present a rubric for choosing which technology to use when.
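The map-shuffle-reduce pattern at the heart of MapReduce can be illustrated without a cluster at all. A toy in-memory word count in Python, mirroring the three phases the framework runs at scale:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between
    # the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big pipelines", "big queries"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
```

Hive and Pig ultimately compile declarative queries down to jobs with this same shape.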
We’ve all been told to “work smarter, not harder.” But what does working smarter really mean? In the world of finance and trading, working smarter means working differently. None of us can compete against computers stacked inches away from the stock exchange or blue-chip companies with multi-million dollar marketing campaigns. The key to winning is to go where the big guys haven’t, and the way to do that is through diverse datasets. In this talk, you will discover the theory and tools to find new datasets from unexpected sources in order to gain an upper hand in both finance and business. So whether you’re a quant who trades in his bedroom or a restaurateur looking to grow his business, you’ll learn how the diversity of data can be the sharpest knife in your set.
Data Science at Dow Jones: Monetizing Data, News and Information - odsc
In this presentation I will describe the way Data Science supports the business of information and news at Dow Jones. Specifically, I will describe how we are introducing innovative and advanced large-scale information mining and analytic approaches not only into Dow Jones’ products but also into our strategy and decision making processes. Our goal is to impact every aspect of Dow Jones: from the way journalism is produced in the newsroom, to the way we create and deliver institutional products, to the way we improve retention and acquisition of subscribers. While the task seems broad and daunting, we have already achieved various successes through the application of machine learning, data mining, advanced analytics and big data approaches. In this presentation I will describe how we have achieved this, including our tools, data, approaches and mechanisms, as well as describe what our plans are going forward.
Have you been in the situation where you’re about to start a new project and ask yourself, what’s the right tool for the job here? I’ve been in that situation many times and thought it might be useful to share with you a recent project we did and why we selected Spark, Python, and Parquet. My plan is to take you through a use case that involves loading, transforming, aggregating, and persisting the dataset. We’ll use an open dataset consisting of full fund holdings graciously provided by Morningstar. My goals in presenting this use case are to have the audience learn how these technologies can be applied to a real-world problem and to inspire members of the audience to start learning these technologies and applying them to their own projects.
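The load, transform, aggregate, persist arc can be sketched in miniature with only the standard library. The toy rows below stand in for the fund-holdings dataset (column names are illustrative); the comments note the rough Spark/Parquet equivalent of each step:

```python
import csv, io, json
from collections import defaultdict

# Toy "fund holdings" rows standing in for the real dataset.
raw = io.StringIO("fund,ticker,weight\nAlpha,AAPL,0.5\nAlpha,MSFT,0.5\nBeta,AAPL,1.0\n")

# Load (in Spark, roughly spark.read.csv(...)).
rows = list(csv.DictReader(raw))

# Transform: cast the weight column from strings to floats.
for r in rows:
    r["weight"] = float(r["weight"])

# Aggregate: total weight held in each ticker across funds
# (in Spark, roughly df.groupBy("ticker").sum("weight")).
totals = defaultdict(float)
for r in rows:
    totals[r["ticker"]] += r["weight"]

# Persist: serialize the result (the role Parquet plays in the
# real pipeline, here approximated with JSON).
persisted = json.dumps(dict(sorted(totals.items())))
```

On a real dataset the same four steps apply unchanged; only the engine and storage format scale up.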
Building a Predictive Analytics Solution with Azure ML - odsc
Create and operationalize a predictive model using Microsoft Azure Machine Learning.
– Perform the typical steps involved in building a predictive analytics solution, such as data ingestion, data cleansing, data exploration, feature engineering, model selection, and evaluation of model results
– Learn how to use machine learning in big data scenarios, using tools like Hadoop and SQL Server to process and work with such data.
Finding and classifying the mentions of the things named in text, often called Named Entity Recognition or NER, is a fundamental task in many search and analysis applications. Mature, robust NER technology is available for many languages and domains, from people, places, and products, to diseases, genes, and molecules. However, for emerging tasks like knowledge-base construction, mentions alone are insufficient.
In this presentation we’ll explore techniques that go beyond names to:
link mentions to one another and to rich knowledge sources like Wikidata
discover and characterise the relationships between entities that are explicit in the text
And we’ll discuss some of the most important practical implications of these advancements for open data science.
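Linking a mention to a knowledge source usually means choosing among candidate entities that share a surface form. A minimal disambiguation sketch, assuming a hypothetical toy knowledge base (the IDs are shaped like Wikidata identifiers, but the entries and context profiles here are illustrative):

```python
# Toy knowledge source: a surface form mapped to candidate entity IDs,
# each with a small profile of context words.
KB = {
    "cambridge": [
        ("Q350", {"uk", "england", "cam", "university"}),   # Cambridge, England
        ("Q49111", {"massachusetts", "boston", "mit"}),     # Cambridge, MA
    ],
}

def link(mention, context_words):
    """Pick the candidate whose context profile overlaps most with
    the words surrounding the mention."""
    candidates = KB.get(mention.lower(), [])
    if not candidates:
        return None  # NIL: mention not in the knowledge source
    return max(candidates, key=lambda cand: len(cand[1] & context_words))[0]

entity = link("Cambridge", {"boston", "startups", "massachusetts"})
```

Production linkers replace the set-overlap score with learned similarity models, but the candidate-generation-then-disambiguation structure is the same.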
According to Credit Suisse’s Gender 3000 report, at the end of 2013, women accounted for 12.9% of top management in 3000 companies across 40 countries. However, since 2009, companies with women as 25-50% of their management team
returned 22-29%. If companies with women in management outperform so dramatically, what would happen if you invested in women-led companies? Karen Rubin will explore this question and share her findings after running a 12-year investment simulation.
Data science allows us to turn a dark forest into a world of
perpetual twilight by giving us the tools to better understand the data that surrounds us. Unfortunately, in this world of twilight we still need a flashlight to get a clean, crisp image of our immediate surroundings. We will talk about how to use deep domain expertise as that flashlight, shedding light on our understanding of data. Our focus will be on using text analysis as a means to examine qualitative information in a structured, quantitative way. We will draw heavily from examples in complex central bank policy and financial regulation.
Open Source Tools & Data Science Competitions - odsc
This talk shares the presenter’s experience with open-source tools in data science competitions. Over the past several years, Kaggle and other competitions have created a large online community of data scientists. In addition to competing with each other for fame and glory, members of this community also generously share knowledge, insights, and open-source code through forums. The open competition and sharing have resulted in rapid progress in the sophistication of the entire community. This presentation will briefly cover this journey from a competitor’s perspective and share hands-on tips on open-source tools that have proven popular and useful in recent competitions.
scikit-learn has emerged as one of the most popular open source machine learning toolkits, now widely used in academia and industry.
scikit-learn provides easy-to-use interfaces to perform advanced analysis and build powerful predictive models.
The tutorial will cover basic concepts of machine learning, such as supervised and unsupervised learning, cross validation, and model selection. We will see how to prepare data for machine learning, and go from applying a single algorithm to building a machine learning pipeline.
We will also cover how to build machine learning models on text data, and how to handle very large datasets.
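The progression from a single algorithm to a pipeline that the tutorial describes is compact in scikit-learn: a `Pipeline` chains a text vectorizer and a classifier behind one `fit`/`predict` interface. A minimal sketch on a tiny made-up sentiment dataset (the texts and labels are illustrative):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set: 1 = positive, 0 = negative.
texts = ["great product", "loved it", "terrible service",
         "awful experience", "really great", "truly awful"]
labels = [1, 1, 0, 0, 1, 0]

# Pipeline: raw text -> bag-of-words counts -> linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

pred = model.predict(["great service"])
```

Because the pipeline is a single estimator, it drops directly into scikit-learn’s cross-validation and model-selection utilities, which is exactly the workflow the tutorial builds toward.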
Bridging the Gap Between Data and Insight using Open-Source Tools - odsc
Despite the proliferation of open-source tools for analysis (such as Python and R) and for visualization (such as JavaScript / D3), there often exist significant gaps between these areas, and those of us trying to navigate the complete arc from data to insight can encounter many obstacles along the way. Fortunately, in recent years there have been many efforts to fill these gaps, and today distilling a meaningful visualization from raw data is faster and easier than ever before.
In this talk we will use examples in geospatial analysis and visualization to illustrate how open-source tools like Python, geopandas, and TileMill work together. Using examples from the RunKeeper mobile app, we will show how we currently use these tools to better understand our customers and their data, and to communicate with our colleagues, external partners, and the data community at large.
Human-generated text may be the next frontier for big data analysis, but we humans are complicated beasts and the text we generate is messy and complicated in ways that can confound analysis. We’ll describe the top ten mistakes people make when they start doing text analysis, and hopefully save you from making a few of these mistakes yourself.
One of the most important, yet often overlooked, aspects of predictive modeling is the transformation of data to create model inputs, better known as feature engineering (FE). This talk will go into the theoretical background behind FE, showing how it leverages existing data to produce better modeling results. It will then detail some important FE techniques that should be in every data scientist’s tool kit.
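A few of the workhorse FE techniques can be shown in a few lines: log transforms for heavy-tailed quantities, binning for continuous values, and ratio (interaction) features. A sketch in Python with illustrative field names, not any particular dataset's schema:

```python
import math

def engineer(record):
    """Derive model inputs from raw fields (names are illustrative)."""
    features = {}
    # Log transform tames heavy-tailed quantities like income;
    # log1p handles zero safely.
    features["log_income"] = math.log1p(record["income"])
    # Binning turns a continuous age into coarse categories
    # (0-19 -> 0, 20-39 -> 1, 40-59 -> 2, 60+ -> 3).
    features["age_band"] = min(record["age"] // 20, 3)
    # Interaction feature: a ratio of two raw inputs, which a
    # linear model could not learn from the inputs separately.
    features["income_per_dependent"] = record["income"] / (1 + record["dependents"])
    return features

f = engineer({"income": 54000, "age": 47, "dependents": 2})
```

Each derived feature encodes domain knowledge the raw columns do not express directly, which is the leverage the talk describes.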
To rephrase an old saying: ‘It takes a village to raise an Analyst.’ Data analysts and scientists are working in teams, delivering insight and analysis on an ongoing basis. So how do you get the team to support experimentation and insight delivery without ending up in an IT Engineer vs Analyst vs Data Governance war? We present five shocking steps, practical and doable, that can help you get these teams working together and achieve data agility. The speaker has decades of hands-on and executive management experience in data, analytics, and software development.
Using your powers for good: Data science in the social sector - odsc
Just like every major corporation today, nonprofits and governments have more data than ever before. And just like those corporations, they are eager to tap into the power of their data. But the social sector doesn’t have the same resources to attract talent. Jeff Hammerbacher, Chief Scientist at Cloudera, put it best: “The best minds of my generation are thinking about how to make people click ads. That sucks.” At DrivenData our goal is to make the world suck a little less by empowering impact organizations to get the most from their data.
Peter Bull, co-founder at DrivenData, will speak on the ways in which statistics, computer science, and machine learning can be applied to the challenges in the social sector. The talk will address both the big-picture context of the data for good movement, and an in-depth case study of the methods which won DrivenData’s recent machine learning competitions.
It’s an exciting time for people who love data: methods are improving, computational costs are decreasing, storage and transport are cheaper, and the talent pool is growing. It’s up to the data geeks to use these powers for good.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has left gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
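The idea behind a deployment bill of materials is simply recording what was deployed where, bound to the exact artifact bytes. A minimal sketch in Python; the field names are illustrative, not a formal DBOM schema:

```python
import hashlib, json

def deployment_bom(environment, artifacts):
    """Record a minimal deployment bill of materials: which artifact
    versions went to which environment (fields are illustrative)."""
    entries = []
    for name, version, content in artifacts:
        entries.append({
            "name": name,
            "version": version,
            # The digest ties the record to the exact bytes deployed,
            # so the entry can later be verified against production.
            "sha256": hashlib.sha256(content).hexdigest(),
        })
    return json.dumps({"environment": environment, "artifacts": entries})

bom = deployment_bom("prod", [("payments-svc", "1.4.2", b"artifact-bytes")])
```

A real implementation would also capture provenance such as the pipeline run, commit, and signatures, but the digest-per-artifact record is the core of making deployments auditable.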
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Enhancing Performance with Globus and the Science DMZ - Globus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Mobile technology Usage by Humanitarian Programs: A Metadata Analysisodsc
CommCare, developed by Dimagi Inc., is an open-source mobile technology platform that supports hundreds of humanitarian frontline programs worldwide. The objective of this analysis is to demonstrate how CommCare metadata contains a wealth of information that can inform humanitarian programs in their use of mobile technology. This understanding can help programs determine the most effective way to implement CommCare or other mobile technology in resource-poor settings. A typical CommCare user is a frontline worker, such as a community health worker who provides outreach to pregnant women and children. An important feature of CommCare is that it supports case management, allowing users to register, update, and close cases in their CommCare application. A case is usually a user’s client, e.g., a pregnant woman who is supported by the CommCare user. While using CommCare, the user fills out electronic forms which eventually get submitted to the CommCare cloud server. The cumulative number of forms submitted by CommCare users as of December 2014 was just over 10 million. Metadata for each form submitted through CommCare are stored in Dimagi’s data platform; included in a form’s metadata are date and time stamps for when each form was started and ended by the user and when the form was eventually received by the cloud server.
Big Data Infrastructure: Introduction to Hadoop with MapReduce, Pig, and Hiveodsc
The main objective of this workshop is to give the audience hands on experience with several Hadoop technologies and jump start their hadoop journey. In this workshop, you will load data and submit queries using Hadoop! Before jumping in to the technology, the Founders of DataKitchen review Hadoop and some of its technologies (MapReduce, Hive, Pig, Impala and Spark), look at performance, and present a rubric for choosing which technology to use when.
We’ve all been told to “work smarter, not harder.” But what does working smarter really mean? In the world of finance and trading, working smarter means working differently. None of us can compete against computers stacked inches away from the stock exchange or blue chip companies with multi-million dollar marketing campaigns. The key to winning is to go where the big guys haven’t and the way to do that is through diverse datasets. In this talk, you will discover the theory and tools to discover new datasets from unexpected sources in order to gain an upper-hand in both finance and business. So whether you’re a quant that trades in his bedroom or a restaurateur looking to grow his business, you’ll learn how the diversity of data can be the sharpest knife if your set.
Data Science at Dow Jones: Monetizing Data, News and Informationodsc
In this presentation I will describe the way Data Science supports the business of information and news at Dow Jones. Specifically, I will describe how we are introducing innovative and advanced large-scale information mining and analytic approaches not only into Dow Jones’ products but also into our strategy and decision making processes.Our goal is to impact every aspect of Dow Jones: from the way journalism is produced in the newsroom, to the way we create and deliver institutional products, to the way we improve retention and acquisition of subscribers. While the task seems broad and daunting, we have already achieved various successes through the application of machine learning, data mining, advanced analytics and big data approaches.In this presentation I will describe how we have achieved this, including our tools, data, approaches and mechanisms as well as describe what our plans are going forward.
Have you been in the situation where you’re about to start a new project and ask yourself, what’s the right tool for the job here? I’ve been in that situation many times and thought it might be useful to share with you a recent project we did and why we selected Spark, Python, and Parquet. My plan is take you through a use case that involves loading, transforming, aggregating, and persisting the dataset. We’ll use an open dataset consisting of full fund holdings graciously provided by Morningstar. My goal in presenting this use case are to have the audience learn about how these technologies can be applied to a real world problem and to inspire members of the audience to start learning these technologies and applying them to their own projects.
Building a Predictive Analytics Solution with Azure MLodsc
Create and operationalize a predictive model using Microsoft Azure Machine Learning.
– Perform the typical steps involved in building a predictive analytics solution such as data ingestion, data cleansing, data exploration, feature engineering, model selection and evaluation of model results
–learn how to use machine learning with big data scenarios using tools like Hadoop and SQL Server to process and work with such data.
Finding and classifying the mentions of the things named in text, often called Named Entity Recognition or NER, is a fundamental task in many search and analysis applications. Mature, robust NER technology is available for many languages and domains, from people, places, and products, to diseases, genes, and molecules. However, for emerging tasks like knowledge-base construction, mentions alone are insufficient.
In this presentation we’ll explore techniques that go beyond names to:
link mentions to one another and to rich knowledge sources like Wikidata
discover and characterise the relationships between entities that are explicit in the text
And we’ll discuss some of the most important practical implications of these advancements for open data science.
According to Credit Suisse’s Gender 3000 report, at the end of 2013, women accounted for 12.9% of top management in 3000 companies across 40 countries. However, since 2009, companies with women as 25-50% of their management team
returned 22-29%. If companies with women in management outperform so dramatically, what would happen if you invested in women-led companies? Karen Rubin will explore this question and share her findings after running a 12 year investment simulation.
Data science allows us to turn a dark forest into a world of
perpetual twilight by giving us the tools to better understand the data that surrounds us. Unfortunately, in this world of twilight we still need a flashlight to get a clean crisp image of our immediate surroundings. We will talk about how to use deep domain expertise as that flashlight shedding light on our understanding of data. Our focus will be on using text analysis as a means to examine qualitative information in a structured, quantitative way. We will draw heavily from examples in complex central bank policy and financial regulation.
Open Source Tools & Data Science Competitions odsc
This talk shares the presenter’s experience with open source tools in data science competitions. In the past several years Kaggle and other competitions have created a large online community of data scientists. In addition to competing with each other for fame and glory, members of this community also generously share knowledge, insights using forum and open source code. The open competition and sharing have resulted in rapid progress in the sophistication of the entire community. This presentation will briefly cover this journey from a competitor’s perspective, and share hands on tips on some open source tools proven popular and useful in recent competitions.
scikit-learn has emerged as one of the most popular open source machine learning toolkits, now widely used in academia and industry.
scikit-learn provides easy-to-use interfaces to perform advanced analysis and build powerful predictive models.
The tutorial will cover basic concepts of machine learning, such as supervised and unsupervised learning, cross validation, and model selection. We will see how to prepare data for machine learning, and go from applying a single algorithm to building a machine learning pipeline.
We will also cover how to build machine learning models on text data, and how to handle very large datasets.
Bridging the Gap Between Data and Insight using Open-Source Toolsodsc
Despite the proliferation of open-source tools for analysis (such as Python and R) and those used for visualization
(such as Javascript / D3), there often exist significant gaps between these areas, and those of us trying to navigate the complete arc from data to insight can encounter many obstacles along the way. Fortunately, in recent years there have been many efforts to fill these needs, and today distilling a meaningful visualization from raw data is faster and easier than ever before.
In this talk we will use will use examples in geospatial analysis and visualization to illustrate how to open-source tools like Python, geopandas, and TileMill work together. Using examples from the RunKeeper mobile app we will show how we currently use these tools to understand better our customers and their data, and to communicate
with our colleagues, external partners, and the data community at large.
Human-generated text may be the next frontier for big data analysis, but we humans are complicated beasts and the text we generate is messy and complicated in ways that can confound analysis. We’ll describe the top ten mistakes people make when they start doing text analysis, and hopefully save you from making a few of these mistakes yourself.
One of the most important, yet often overlooked, aspects of predictive modeling is the transformation of data to create model inputs, better known as feature engineering (FE). This talk will go into the theoretical background behind FE, showing how it leverages existing data to produce better modeling results. It will then detail some important FE techniques that should be in every data scientist’s tool kit.
To rephrase an old saying: ‘It takes a village to raise an Analyst.’ Data Analysts and Scientists are working in teams delivering insight and analysis on an ongoing basis. So how do you get the team to support experimentation and insight delivery without ending up in an IT Engineer vs Analyst vs Data Governance war? We present 5 shocking steps to get these teams of people working together with practical, doable steps that can help you achieve data agility. The speaker has decades of hands on and executive management experience in data, analytics, and software development.
Using your powers for good: Data science in the social sectorodsc
Just like every major corporation today, nonprofits and governments have more data than ever before. And just like those corporations, they are eager to tap into the power of their data. But the social sector doesn’t have the same resources to attract talent. Jeff Hammerbacher, Chief Scientist at Cloudera, put it best: “The best minds of my generation are thinking about how to make people click ads. That sucks.” At DrivenData our goal is to make the world suck a little less by empowering impact organizations to get the most from their data.
Peter Bull, co-founder at DrivenData, will speak on the ways in which statistics, computer science, and machine learning can be applied to the challenges in the social sector. The talk will address both the big-picture context of the data for good movement, and an in-depth case study of the methods which won DrivenData’s recent machine learning competitions.
It’s an exciting time for people who love data: methods are improving, computational costs are decreasing, storage and transport are cheaper, and the talent pool is growing. It’s up to the data geeks to use these powers for good.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall and DBOM (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its delivery process to avoid vulnerabilities and security breaches, and this needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
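The talk does not prescribe a DBOM format, but the idea of a deployment bill of materials can be sketched as a structured record written at deploy time, hashing the deployed artifacts so they can later be matched against known vulnerabilities. Everything below (field names, the hash scheme) is an illustrative assumption, not the speakers' actual format:

```python
import hashlib
import json
from datetime import datetime, timezone

def dbom_entry(service, version, artifacts, environment):
    """Build a minimal deployment-bill-of-materials record.

    artifacts: mapping of artifact name -> bytes content; each is hashed
    so the deployed bits can later be checked against advisories.
    """
    return {
        "service": service,
        "version": version,
        "environment": environment,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": [
            {"name": name, "sha256": hashlib.sha256(blob).hexdigest()}
            for name, blob in sorted(artifacts.items())
        ],
    }

entry = dbom_entry(
    service="payments",
    version="2.4.1",
    artifacts={"app.jar": b"...jar bytes...", "config.yaml": b"debug: false"},
    environment="production",
)
print(json.dumps(entry, indent=2))
```

A deployment firewall in this scheme would refuse to promote any release whose DBOM contains an artifact hash with an unresolved advisory.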
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, continuous software delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Enhancing Performance with Globus and the Science DMZ (Globus)
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
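PowSyBl's power-flow engines are far more capable than anything shown here, but the core idea behind the simulation tools listed above can be illustrated with a toy DC power flow: fix a slack bus, solve the reduced susceptance system B'θ = P for bus angles, and read line flows off the angle differences. The 3-bus network below is invented for illustration and is not a PowSyBl API example:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def dc_power_flow(lines, injections, slack=0):
    """Toy DC power flow.

    lines: list of (from_bus, to_bus, susceptance) tuples.
    injections: net power injection per bus (MW); must sum to ~0.
    Returns (from_bus, to_bus, flow_in_MW) for each line.
    """
    n = len(injections)
    # Build the bus susceptance matrix B from the line list.
    B = [[0.0] * n for _ in range(n)]
    for i, j, b in lines:
        B[i][i] += b
        B[j][j] += b
        B[i][j] -= b
        B[j][i] -= b
    # Drop the slack bus row/column and solve B' * theta = P.
    idx = [k for k in range(n) if k != slack]
    theta_red = solve([[B[i][j] for j in idx] for i in idx],
                      [injections[i] for i in idx])
    theta = [0.0] * n
    for k, i in enumerate(idx):
        theta[i] = theta_red[k]
    # Line flow is susceptance times the angle difference across the line.
    return [(i, j, b * (theta[i] - theta[j])) for i, j, b in lines]

# 3-bus ring: bus 0 is slack, bus 1 injects 100 MW, bus 2 consumes 100 MW.
lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 10.0)]
flows = dc_power_flow(lines, injections=[0.0, 100.0, -100.0])
for i, j, f in flows:
    print(f"line {i}-{j}: {f:+.1f} MW")
```

In the webinar itself the same kind of study is done through PowSyBl's Python binding on realistic AC network models, with security and sensitivity analyses layered on top.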
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Attendees can expect to deepen their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
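The heatmap details in the session are specific to UiPath Test Manager and SAP, but the prioritization idea behind a testing heatmap is general: score each transaction on usage frequency and failure impact, then bucket the portfolio into a grid so the high-usage, high-impact cell gets tested first. A generic sketch, with invented transaction names and an assumed 1-9 scoring scale (not UiPath's implementation):

```python
def risk_heatmap(transactions, buckets=("low", "medium", "high")):
    """Bucket SAP-style transactions into a usage x impact grid.

    transactions: mapping of name -> (usage 1-9, impact 1-9).
    Returns {(usage_bucket, impact_bucket): [names]}; the
    ('high', 'high') cell holds the transactions to test first.
    """
    def bucket(score):  # 1-3 -> low, 4-6 -> medium, 7-9 -> high
        return buckets[min((score - 1) // 3, 2)]

    grid = {}
    for name, (usage, impact) in transactions.items():
        grid.setdefault((bucket(usage), bucket(impact)), []).append(name)
    return grid

grid = risk_heatmap({
    "VA01 create sales order": (9, 8),
    "MM60 material list":      (4, 2),
    "FB60 vendor invoice":     (7, 9),
    "SU01 user maintenance":   (2, 5),
})
print(grid.get(("high", "high")))
```

Rendered as a colored matrix, this is exactly the shape of artifact the webinar uses to steer testing effort and resource allocation.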
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Transcript: Selling digital books in 2024: Insights from industry leaders (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.