Machine Learning for Images: what is specific to images, various implementations of image recognition, and what can be built using image recognition via machine learning. Image recognition frameworks: CNNs, cuda-convnet2, Caffe, Torch7, Theano, etc.
DN18 | The Evolution and Future of Graph Technology: Intelligent Systems | Ax... | Dataconomy Media
Abstract of the Presentation:
The field of graph technology has developed rapidly in recent years and established itself as an independent technology sector that will probably even receive its own query language standard (GQL). As almost any business benefits from graph platforms it is no wonder that adoption is broad and fast. There must be good reasons for that. In his talk Axel will give an overview of the evolution of technology and products in the Graph Space from the early beginnings up to current developments in machine learning and artificial intelligence. He will also give some examples and explain why graph technology is so well suited for most use cases and to build intelligent systems.
About the Author:
Axel Morgner started Structr in 2010 to create the next-gen CMS. Previously, he worked for Oracle and founded an ECM company. Axel loves Open Source. As CEO, he’s responsible for the company behind Structr and the project itself, with focus on the front end.
Gary Hope - Machine Learning: It's Not as Hard as You Think | Saratoga
Gary Hope is currently the Data Platform Technical Specialist at Microsoft South Africa, having previously worked for several large organisations including American Express and Siemens Business Solutions.
Slides from talks presented at Mammoth BI in Cape Town on 17 November 2014.
Visit www.mammothbi.co.za for details on the event. Follow @MammothBI on twitter.
Edge Intelligence: The Convergence of Humans, Things and AI | Thomas Rausch
Edge AI and Human Augmentation are two major technology trends, driven by recent advancements in edge computing, IoT, and AI accelerators. As humans, things, and AI continue to grow closer together, systems engineers and researchers are faced with new and unique challenges. In this paper, we analyze the role of edge computing and AI in the cyber-human evolution, and identify challenges that edge computing systems will consequently be faced with. We take a closer look at how a cyber-physical fabric will be complemented by AI operationalization to enable seamless end-to-end edge intelligence systems.
Synthesizing Plausible Infrastructure Configurations for Evaluating Edge Comp... | Thomas Rausch
This paper proposes a framework for synthesizing infrastructure configurations for evaluating edge computing systems under different conditions. There are a number of tools to simulate or emulate edge systems, and while they typically provide ways of modeling infrastructure and network topologies, they lack reusable building blocks common to edge scenarios. Consequently, most edge computing systems evaluations to date rely on either highly application-specific testbeds, or abstract scenarios and abstract infrastructure configurations. We analyze four existing or emerging edge infrastructure scenarios, from which we elicit common concepts. The scenarios serve as input to synthesize plausible infrastructure configurations that are parameterizable in cluster density, device heterogeneity, and network topology. We demonstrate how our tool can generate synthetic infrastructure configurations for the reference scenarios, and how these configurations can be used to evaluate aspects of edge computing systems.
How to get your engineers to care about the AWS Bill | Gil Zellner
Your engineers need to be able to start their own machines, set up their own services, and be independent in the cloud. That is nice, but that also means relinquishing control of costs to some extent. To do that and not go bankrupt, you need to get your engineers to care about costs. This is how.
DN18 | From Counting to Connecting: A Networked and Data-Driven Approach to M... | Dataconomy Media
Abstract of the Presentation:
This talk will focus on applications of knowledge graphs and network science to the exploration of distributed industrial capabilities relevant to support work on United Nations Sustainable Development Goals. The core message and tools presented during the talk are relevant beyond sustainable development and can be applied to inter-organisational collaboration projects, information flows within companies and innovation management.
About the Author:
Pedro Parraguez is the Co-founder of Dataverz, a data analytics company based in Copenhagen, and Postdoctoral Researcher at DTU Management Engineering. Pedro’s research and applied work focuses on complex socio-technical systems, with emphasis on network science and data-driven analyses. This includes the study and development of decision-making support for industrial clusters, complex organisations, and large engineering projects.
DN18 | Applied Machine Learning in Cybersecurity: Detect malicious DGA Domain... | Dataconomy Media
Abstract of the Presentation:
Malware like the GameOver Zeus and CryptoLocker botnets are a massive threat to organizations. They use domain generation algorithms (DGAs) to create URLs that host malicious websites or command-and-control servers. Traditional approaches fail to detect and stop them early. In this talk you will learn, in a live demo, how to use machine learning to detect malicious domains in your environment, and how to implement a full end-to-end data science use case leveraging the Splunk Machine Learning Toolkit.
About the Author:
Philipp works as Staff Machine Learning Architect at Splunk. His background is in data science, visualization and analytics, with experience in the automotive, transportation and software industries. He enjoys working with Splunk customers and partners across EMEA.
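As a concrete taste of the DGA-detection idea described above, one classic signal is that algorithmically generated domain labels tend to have unusually high character entropy. The sketch below is a toy illustration only, not the Splunk Machine Learning Toolkit approach from the talk; the function names and the 3.5-bit threshold are assumptions chosen for demonstration.

```python
import math
from collections import Counter

def char_entropy(domain: str) -> float:
    """Shannon entropy (in bits) of the character distribution in a domain's first label."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain: str, threshold: float = 3.5) -> bool:
    # Human-chosen names repeat common characters; generated labels look near-random.
    return char_entropy(domain) > threshold

print(looks_like_dga("google.com"))             # human-chosen: low entropy
print(looks_like_dga("xj4kqpz0v2nw8rty13.com")) # random-looking: high entropy
```

A real detector would combine features like this (entropy, n-gram likelihood, label length) inside a trained classifier rather than a fixed threshold.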
The edge computing market today includes consumer apps and devices, and the industrial sector, where increasingly powerful CPUs drive everything from wind turbines to autonomous vehicles, robots, drones and equipment. The device market is growing explosively:
These devices gather a wealth of data from a broad array of sensors – and have the potential to optimize efficiency, safety and performance, and revolutionize productivity and user experiences. But to deliver these benefits they need to become truly smart, performing analysis, training and inference on high volumes of sensor data on-the-fly.
There is an urgent need for software that simplifies and automates data analysis and inference at the edge, helping devices and systems learn from and make predictions about their environment: Cameras that recognize and track their targets; self-driving cars that choose the least congested routes using real-time predictions for intersections ahead; and drones that dynamically swarm, find their targets and gather intelligence without human oversight.
These examples require each device to make decisions based on a real-time analysis of its own sensor data fused with the analysis and predictions from other systems: Drones in a swarm need to collaborate or they will collide; they must gossip their insights to each other to enable the swarm to perform effectively. Today, the software to enable each of these complex scenarios must be developed from scratch, starting with raw data feeds and network protocols. To unlock the potential of an edge environment rich in sensors and power-efficient computing platforms, developers need a simple way to get from vast amounts of raw data to insights and predictions.
What's needed is a new architecture for the intelligent edge – one that consumes raw data from devices at the edge, and automatically creates a “digital twin” for each real-world system from its data. Digital twins statefully process their own data at the edge, analyzing, learning and predicting in real time. Digital twins can find anomalies or correlations in their own data, and self-train powerful neural network models that enable them to predict their future performance, then share semantically enriched insights with other digital twins to solve system problems. The architecture helps application developers by dynamically creating digital twins that learn from their own data – automatically building a model of the real world that is always up to date, executes in real time, and makes accurate predictions of the behavior of complex systems.
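To make the stateful digital-twin idea above a little more concrete, here is a minimal sketch (class and method names are ours, purely for illustration) of a twin that tracks a rolling window of one sensor stream and flags readings that deviate sharply from recent behavior. A real platform would add learned models and twin-to-twin messaging on top of this.

```python
import math
from collections import deque

class DigitalTwin:
    """Minimal stateful twin: keeps a rolling window of one sensor stream
    and flags readings that deviate sharply from recent behavior."""

    def __init__(self, name: str, window: int = 50, z_threshold: float = 3.0):
        self.name = name
        self.readings = deque(maxlen=window)  # bounded history of recent values
        self.z_threshold = z_threshold

    def ingest(self, value: float) -> bool:
        """Process one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:  # wait for enough history
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9  # guard against zero variance
            anomalous = abs(value - mean) / std > self.z_threshold
        self.readings.append(value)
        return anomalous

twin = DigitalTwin("turbine-7")
for v in [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.1]:
    twin.ingest(v)          # build up a picture of normal behavior
print(twin.ingest(25.0))    # far outside the recent range: flagged
```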
"How Pirelli uses Domino and Plotly for Smart Manufacturing" by Alberto Arrig... | Data Science Milan
"How Pirelli uses Domino and Plotly for Smart Manufacturing" by Alberto Arrigoni, Senior Data Scientist, Pirelli (pirelli.com)
Abstract:
Pirelli, a global performance tire manufacturer, uses data science in its 20 factories to improve quality and efficiency, and reduce energy consumption. For this “Smart Manufacturing” initiative, Pirelli’s data science team has developed predictive models and analytics tools to monitor processes, machines and materials on the factory floors. In this talk we will show some of the solutions we deploy, demonstrate how we used Domino’s data science platform and Plot.ly to build these solutions, and discuss the next steps in this journey towards predictive maintenance.
Bio:
Alberto Arrigoni is a data scientist at Pirelli, where he works to process sensors and telemetry data for IoT, Smart Factories and connected-vehicle applications.
He works closely with all major business units such as R&D, industrial engineering and BI to develop tailored machine learning algorithms and production systems.
He holds a PhD in biostatistics from the University of Milan Bicocca and prior to joining Pirelli was a staff data scientist at the National Institute of Molecular Genetics (Milan), as well as a Fulbright student at the Santa Clara University and visiting PhD student at Pacific Biosciences (Menlo Park, CA).
Nurturing Digital Twins: How to Build Virtual Instances of Physical Assets to... | Cognizant
To embark on the digital twin journey, assess your readiness, define and communicate a vision, set common data management rules and build in flexibility for intelligence.
Feature selection for Big Data: advances and challenges by Verónica Bolón-Can... | Big Data Spain
In an era of growing data complexity and volume and the advent of Big Data, feature selection has a key role to play in helping reduce high-dimensionality in machine learning problems.
https://www.bigdataspain.org/2017/talk/feature-selection-for-big-data-advances-and-challenges
Big Data Spain 2017
November 16th - 17th, Kinépolis Madrid
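As a concrete taste of the feature-selection topic above, one of the simplest baselines is a low-variance filter: drop features that barely vary across samples, since they carry almost no information for a model. The sketch below (function names are ours, not the speaker's) shows the idea in plain Python.

```python
def variance(column):
    """Population variance of a list of numbers."""
    mean = sum(column) / len(column)
    return sum((x - mean) ** 2 for x in column) / len(column)

def select_features(rows, threshold=0.0):
    """Return indices of columns whose variance exceeds `threshold`."""
    n_cols = len(rows[0])
    columns = [[row[j] for row in rows] for j in range(n_cols)]
    return [j for j, col in enumerate(columns) if variance(col) > threshold]

# Column 1 is constant across samples, so it is dropped; columns 0 and 2 vary.
data = [
    [1.0, 7.0, 0.1],
    [2.0, 7.0, 0.4],
    [3.0, 7.0, 0.9],
]
print(select_features(data))
```

Production pipelines would use richer criteria (mutual information, wrapper methods) on top of this kind of cheap first pass, which is exactly where the talk's "advances and challenges" come in at Big Data scale.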
Challenges of Deep Learning in Computer Vision Webinar - Tessellate Imaging | Adhesh Shrivastava
Slides from the webinar on Challenges of Deep Learning in Computer Vision presented by Tessellate Imaging and powered by E2E Networks.
The webinar discusses the growth and applications of computer vision in modern-day life, and the challenges of implementing and developing deep learning and computer vision projects for both enterprises and developers.
We introduce MonkAI (https://monkai.org), an open-source deep learning wrapper library for computer vision development, and talk about features that tackle some of the challenges in deep learning.
Data analytics plays a significant role in all the latest technologies, such as Artificial Intelligence, the Internet of Things, Blockchain, Digital Twins, etc.
Code Camp Auckland 2015 - DEV1 Microsoft API Approaches 101 | Nikolai Blackie
Overview of how organisations can design, build, deploy and manage APIs, as well as engage API consumers, utilising the current Microsoft Azure integration platform offerings. A 101 walkthrough of Azure API Management, Azure App Services and Team Foundation Server Online capabilities, and how organisations can leverage these for cost-effective and scalable APIs.
Code Samples: https://github.com/nikolaiblackie/AKL2015CodeCampAppServices/blob/master/README.md
Custom Image Classifier with Visual Recognition: Building with Watson | IBM Watson
In this Building with Watson presentation, learn how to train a classifier to identify images relevant to you and your company. View the on-demand version of the demonstration and listen to the Q&A with our Watson Vision engineers. http://www.ibm.com/watson/building-with-watson-webinar.html
This is a presentation I use to explain the new Microsoft to partners: where we have come from, what issues we have faced, and what our strategy is going forward.
“Retail Rebooted” bundles three trends JWTIntelligence has outlined in recent years that spotlight how retailers are evolving for an increasingly sophisticated digital and data-centric world: Retail As the Third Space, Predictive Personalization and Everything Is Retail. We’ve updated and revised these trends since their initial publication.
The report also maps out 20-plus Things to Watch in Retail, spotlighting a range of developments, from innovative business models to shifting consumer behaviors to the latest tech developments.
This report is the result of quantitative, qualitative and desk research conducted by JWTIntelligence throughout the year. It includes input from experts and influencers in retail and data from a survey JWTIntelligence conducted in the U.S. and the U.K. in November 2012, using SONAR™, JWT’s proprietary online tool.
Neo4j is a powerful and expressive tool for storing, querying and manipulating data. However, modeling data as graphs is quite different from modeling data in a relational database. In this talk, Michael Hunger will cover modeling business domains using graphs and show how they can be persisted and queried in Neo4j. We'll contrast this approach with the relational model, and discuss the impact on complexity, flexibility and performance.
Optimization of Neural Networks for Mobile Devices | Matthias Trapp
Recent developments of mobile-device technology enable the processing of neural networks on-device using specialized hardware, e.g., so-called neural processing units. This allows for the application of neural networks directly on these devices without requiring server-based processing. However, using this hardware often requires adaptation and optimization of neural networks to operate in a power-efficient way. In a first approach, the Visual Media Analysis and Processing Group at HPI’s Chair of Computer Graphics Systems in collaboration with Digital Masterpieces GmbH successfully adapted and optimized convolutional neural networks for image and video analysis and stylization.
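One widely used optimization of the kind mentioned above is post-training quantization, which maps 32-bit float weights to small integers so neural processing units can run them efficiently. The following is a minimal sketch of symmetric linear quantization under our own assumed function names, not the specific method used by the HPI group or Digital Masterpieces.

```python
def quantize(weights, num_bits=8):
    """Symmetric linear quantization of float weights to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1             # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]    # integers in [-qmax, qmax]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.07, -0.99, 0.33]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Each restored weight lies within one quantization step of the original,
# while the stored representation shrinks from 32 bits to 8 bits per weight.
```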
Scene recognition using Convolutional Neural Network | DhirajGidde
Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success.
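At the core of the CNNs mentioned above is the convolution operation, which slides a filter over an image to produce a feature map. A minimal pure-Python version (technically cross-correlation, the convention used in deep learning frameworks) shows how a hand-picked kernel responds to a vertical edge; in a CNN, the kernel values would be learned from data rather than chosen by hand.

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2D cross-correlation of a grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):        # multiply-accumulate over the window
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector responds strongly where intensity changes left-to-right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1],
               [-1, 1]]
print(conv2d(image, edge_kernel))
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what lets CNNs build up from edges to textures to whole-scene features.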
Machine Learning with Data Science Online Course | Learn and Build
You are just one step away from becoming a data science engineer. Gain a foundational understanding of machine learning techniques in one place. Get an online machine learning certification at Learn and Build.
Vertex has invested in companies across geographies that address different industry applications, leveraging AI to transform their service offerings. Read more on the observed trends and waves of AI development.
Kaz Sato, Evangelist, Google at MLconf ATL 2016 | MLconf
Machine Intelligence at Google Scale: TensorFlow and Cloud Machine Learning. The biggest challenge of deep learning technology is scalability. As long as you are using a single GPU server, you have to wait hours or days to get the result of your work. This doesn't scale for a production service, so you eventually need distributed training on the cloud. Google has been building infrastructure for training large-scale neural networks on the cloud for years, and has now started to share the technology with external developers. In this session, we will introduce new pre-trained ML services such as the Cloud Vision API and Speech API that work without any training. We will also look at how TensorFlow and Cloud Machine Learning accelerate custom model training by 10x - 40x with Google's distributed training infrastructure.
Traditional machine learning used handwritten features and modality-specific algorithms to classify images and text or recognize voices. Deep learning / neural networks identify features and find patterns automatically. Advances in deep learning have drastically reduced the time to build these complex systems and exponentially increased accuracy. Neural networks were partly inspired by how the 86 billion neurons in a human brain work, and have become more of a mathematical and computational problem. By the end of the blog we will see how neural networks can be intuitively understood and implemented as a set of matrix multiplications, a cost function, and optimization algorithms.
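The "matrix multiplications, cost function, and optimization algorithms" view can be made concrete with a tiny two-layer network in plain Python. The weights below are assumed, untrained values chosen purely for illustration; training would adjust them by gradient descent on the cost.

```python
import math

def matmul(A, B):
    """Plain matrix multiply: (n x m) @ (m x p) -> (n x p)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def sigmoid(x):
    """Nonlinearity squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, W2):
    """Two-layer net: hidden = sigmoid(x @ W1), output = sigmoid(hidden @ W2)."""
    hidden = [[sigmoid(v) for v in row] for row in matmul(x, W1)]
    return [[sigmoid(v) for v in row] for row in matmul(hidden, W2)]

def mse(pred, target):
    """Cost function an optimizer would minimize during training."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# Assumed, untrained weights for a 2-input, 2-hidden, 1-output network.
W1 = [[0.5, -0.3], [0.8, 0.2]]
W2 = [[0.7], [-0.4]]
y = forward([[1.0, 0.0]], W1, W2)[0][0]
print(y, mse([y], [1.0]))  # prediction and its cost against the target 1.0
```

Everything here is just matrix products, an elementwise nonlinearity, and a scalar cost; backpropagation would compute the gradient of `mse` with respect to `W1` and `W2` and step the weights downhill.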
A practical talk by Anirudh Koul on how to run deep neural networks on memory- and energy-constrained devices like smartphones. Highlights some frameworks and best practices.
Top 5 Artificial intelligence [AI]
Hello, today we are going to briefly discuss the top 5 artificial intelligence technologies.
Go through this website for more details: Top 5 Artificial Intelligence [AI] (theknowledge.cloud)
What is artificial intelligence?
Artificial Intelligence (AI) refers to the simulation of human intelligence in computers and other machines. It involves creating algorithms and systems that enable machines to perform tasks that would normally require human intelligence. AI systems aim to replicate cognitive functions such as learning, reasoning, problem-solving, perception, language understanding, and decision-making.
Here's a list of five influential AI technologies and areas that have been making significant strides:
1. Natural Language Processing (NLP) Models.
2. Computer Vision.
3. Reinforcement Learning.
4. Generative Adversarial Networks (GANs).
5. Autonomous Vehicles.
Below, we discuss each of the above topics in detail.
1. Natural Language Processing (NLP) Models:
I. What are NLP Models?
NLP models are a subset of artificial intelligence that focuses on enabling computers to understand, interpret, and generate human language. These models are designed to bridge the gap between human communication and computer understanding, allowing machines to process and generate text in a way that's meaningful and contextually relevant.
II. Key Components of NLP Models:
a. Tokenization: The process of breaking down text into individual units called tokens, which can be words, subwords, or characters. Tokenization is the first step in converting text into a format that computers can understand.
b. Word Embeddings: A technique that maps words or tokens into numerical vectors in a way that captures semantic relationships. Word embeddings help models understand the context and relationships between words.
c. Sequences and Context: NLP models consider the order of words in a sentence as well as the surrounding context. This allows them to understand nuances, idiomatic expressions, and various language structures.
d. Attention Mechanism: A mechanism used in transformer-based models that assigns different weights to different parts of a text sequence, allowing the model to focus on relevant information and capture long-range dependencies.
e. Transformer Architecture: A neural network architecture that has become a foundational structure for many NLP models. It uses self-attention mechanisms to process input data in parallel, capturing global context and enabling efficient training.
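The attention mechanism described in (d) can be sketched in a few lines: a query vector is compared against all keys, the scaled dot products are turned into weights with a softmax, and the values are averaged under those weights. The vectors below are toy embeddings chosen for illustration, not output from any real model.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.
    Each value is weighted by how similar its key is to the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]
    return context, weights

# Toy embeddings: the query matches the first key most closely,
# so the first value dominates the weighted average.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
context, weights = attention(query, keys, values)
print(weights)  # highest weight on the first key
```

A transformer runs this in parallel for every token's query against every other token's key, which is how it captures the long-range dependencies mentioned above.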
III. Notable NLP Models:
a. BERT (Bidirectional Encoder Representations from Transformers): BERT revolutionized NLP by pre-training a model on a massive amount of text data, allowing it to capture context from both left and right directions. This bidirectional understanding greatly improved performance on various tasks.
b. GPT (Generative Pre-trained Transformer) Series: These models, including GPT-2 and GPT-3, are designed to generate coherent, contextually relevant text.
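The tokenization and embedding-lookup steps described earlier can be sketched as follows. The whitespace tokenizer and the `build_vocab` helper are simplifications invented for illustration; real NLP models typically use subword tokenizers:

```python
def tokenize(text):
    # Lowercase and split on whitespace: the simplest word-level tokenizer.
    return text.lower().split()

def build_vocab(tokens):
    # Map each distinct token to an integer id, in order of first appearance.
    vocab = {}
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

tokens = tokenize("The cat sat on the mat")
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]
# tokens -> ['the', 'cat', 'sat', 'on', 'the', 'mat']
# ids    -> [0, 1, 2, 3, 0, 4]
```

In a real model, each integer id then indexes a row of a learned embedding matrix, turning the token sequence into the numerical vectors the network operates on.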
Inspiring young women to code and to actively participate in AI projects. Understanding upcoming technologies and frameworks is critical for their professional development.
The Practical Evolution of the Auto-Tagging Technology as a ServiceImagga Technology
Georgi Kadrev - CEO at Imagga - spoke about image recognition at an Nvidia event in Silicon Valley. Learn the latest developments in the industry as well as the future technological focuses of the company.
Imagga helps businesses extract meaning from photos by offering easy-to-implement image recognition APIs for color detection, categorization, and automated keywording.
Imagga - democratizing image understanding technologies in a cloud platform of APIs and tools: a three-layer image understanding platform of core technologies, commercial APIs, and end-user tools.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users want to take full advantage of the features available on their devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect their personal devices and information.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
6. What is specific: ML for images
- pixels are lots of data themselves, in spatial relationships
- multiple levels and scales of interest: from low-level features such as texture to high-level features such as composition
- needs data augmentation to compensate for sensitivity: training with blurred, cropped, scaled, noised, etc. variants
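The augmentation idea above can be sketched in plain Python. Treating an image as a nested list of pixel values, the helpers below (invented for illustration; real pipelines use library transforms) produce randomly cropped, flipped, and noised variants of a training image:

```python
import random

def random_crop(img, size):
    # Cut a size x size window out of the image at a random offset.
    h, w = len(img), len(img[0])
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    return [row[left:left + size] for row in img[top:top + size]]

def horizontal_flip(img):
    # Mirror each pixel row left-to-right.
    return [row[::-1] for row in img]

def add_noise(img, scale=0.1):
    # Perturb every pixel with small uniform noise.
    return [[p + random.uniform(-scale, scale) for p in row] for row in img]

random.seed(0)
img = [[float(r * 8 + c) for c in range(8)] for r in range(8)]  # toy 8x8 "image"
augmented = add_noise(horizontal_flip(random_crop(img, 5)))
```

Each pass over the training set can apply a fresh random combination of such transforms, so the network sees slightly different versions of every image and learns to be robust to those variations.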
7. What is specific: ML for images
- huge architectures (both deep and wide) require massive amounts of memory and processing power
- inter- and intra-class variety: the training data needs to describe the universe (huge and diverse datasets are required to feed the data-greedy CNNs)
- takes time: 10+ days for large architectures, even after a 10x reduction thanks to GPUs
8. Convnet implementations for images
- cuda-convnet: Python interface, Fermi-generation NVIDIA GPUs, no multi-GPU support
- cuda-convnet2: an upgrade to cuda-convnet, optimized for the newer Kepler-generation NVIDIA GPUs, with multi-GPU support
- caffe: deep learning framework developed by the Berkeley Vision and Learning Center, with a big community of contributors
9. Convnet implementations for images
- torch7: ML algorithms, with CNN extensions (fbcunn by Facebook); used by Google DeepMind
- theano: Python library, open-ended in terms of network architecture and transfer functions
- ...and many others
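At the core of all these convnet frameworks is the same convolution operation. The sketch below is a minimal, unoptimized plain-Python version for a single channel and kernel; the frameworks above implement heavily optimized (and GPU-accelerated) equivalents, and the `conv2d` helper name here is invented for the example:

```python
def conv2d(img, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as most
    deep-learning frameworks implement it) of a 2D grid with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(img) - kh + 1       # output height shrinks by kernel height - 1
    ow = len(img[0]) - kw + 1    # output width shrinks by kernel width - 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            acc = 0.0
            # Multiply the kernel against the patch under position (i, j).
            for di in range(kh):
                for dj in range(kw):
                    acc += img[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 edge-detecting kernel applied to a flat 4x4 image of ones yields a
# 2x2 output of zeros, since the kernel weights sum to 0 (no edges present).
edge = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
ones = [[1.0] * 4 for _ in range(4)]
result = conv2d(ones, edge)
```

A CNN stacks many such kernels per layer and learns their weights from data, which is exactly the compute-heavy inner loop that the GPU support in cuda-convnet2, caffe, and torch7 accelerates.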
13. Imagga Auto Tagging
- semantic expansion: 'car' -> 'vehicle' -> 'means of transportation'
- feedback loop: instant learning from user feedback, to be released in May
- custom training: with a specific set of tags
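The semantic expansion idea can be sketched as a walk up a tag hierarchy: each specific tag also implies its more general parents. The hierarchy below is a hypothetical stand-in for illustration, not Imagga's actual taxonomy:

```python
# Hypothetical hypernym map: each tag points to its more general parent.
HYPERNYMS = {
    "car": "vehicle",
    "vehicle": "means of transportation",
}

def expand(tag):
    """Walk up the hierarchy, collecting increasingly general tags."""
    chain = [tag]
    while chain[-1] in HYPERNYMS:
        chain.append(HYPERNYMS[chain[-1]])
    return chain

# expand("car") -> ["car", "vehicle", "means of transportation"]
```

This lets a search for a broad term like 'vehicle' match photos that were only ever tagged with the specific term 'car'.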
15. Personal Photo Applications
- Apps for mobile photo organization
- Integration in telecom solutions
- Cloud services for consumers
- Device manufacturers
http://getsliki.com
19. Use Cases
- Big image data management and organization: image-driven platforms (DAMs)
- Contextual advertising: interactive/behavioural campaigns (AdSense-like)
- User profiling: insight market, profiling based on image content
- Interactive campaigns for brands: new ways to interact with customers