The Briefing Room with Robin Bloor and Cirro
Live Webcast on Dec. 11, 2012
As the information landscape expands with all kinds of Big Data, businesses are searching for ways to unite their traditional analytics with this new source of insight. One ambitious approach involves federating access to multiple data sources, even across various operating systems. The idea is to take analytic processing to the data, then intelligently assemble the results for a business user. Could this be the long-awaited alternative to data virtualization?
Check out this episode of The Briefing Room to hear veteran Analyst Robin Bloor explain how federated access to data sources can pave the way for a truly integrated data fabric. Bloor will be briefed by Mark Theissen of Cirro, who will tout his company's patent-pending Data Hub, which simplifies data access by federating queries across multiple sources of structured, semi-structured, and unstructured data. He'll discuss Cirro's cost-based optimizer, smart caching, dynamic query plan re-optimization, normalization of cost estimates, and a metadata repository for unstructured data sources.
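The abstract mentions normalization of cost estimates across heterogeneous sources. Cirro's actual optimizer is proprietary; the sketch below only illustrates the general idea under invented assumptions: each source reports cost in its own native unit, a calibration factor converts everything to estimated seconds, and the cheapest plan wins.

```python
# Minimal sketch of cost-based plan selection across federated sources.
# The sources, units, and calibration factors are hypothetical, chosen only
# to illustrate normalizing heterogeneous cost estimates onto one scale.

# Each source estimates cost in its own native unit; a calibration factor
# converts it to estimated seconds so plans become comparable.
CALIBRATION = {
    "warehouse": 0.002,   # native unit: pages read   -> seconds
    "hadoop":    0.5,     # native unit: map tasks    -> seconds
    "nosql":     0.0001,  # native unit: key lookups  -> seconds
}

def normalized_cost(plan):
    """Total estimated seconds for a plan spanning several sources."""
    return sum(CALIBRATION[src] * units for src, units in plan["costs"].items())

def choose_plan(plans):
    """Pick the candidate plan with the lowest normalized cost estimate."""
    return min(plans, key=normalized_cost)

plans = [
    {"name": "push-down join",   "costs": {"warehouse": 12000, "hadoop": 10}},
    {"name": "ship-to-hub join", "costs": {"warehouse": 3000, "nosql": 200000}},
]
best = choose_plan(plans)
```

A real federated optimizer would also re-optimize mid-query as actual row counts arrive (the "dynamic query plan re-optimization" the abstract names); this sketch covers only the static comparison step.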
Visit: http://www.insideanalysis.com
The Perfect Storm: The Impact of Analytics, Big Data and the Cloud – Inside Analysis
The Briefing Room with Barry Devlin and NuoDB
Live Webcast on Oct. 23, 2012
Three major factors in enterprise computing are combining to rewrite how data is stored, accessed and managed: 1) the demand for analytics that now spreads across hundreds, even thousands, of users; 2) the pervasiveness of Big Data in all its forms and sizes; and 3) the rise of the commodity data center, aka Cloud computing. The convergence of these forces calls for a new data foundation, one that can handle the scalability and workload issues that face today's information managers.
Check out this episode of The Briefing Room to learn from veteran Analyst Barry Devlin, one of the very first architects of data warehousing, who will explain how today's information architectures require a radically different approach. He'll be briefed by Barry Morris, Founder and CEO of NuoDB, who will tout his company's product, described as a peer-to-peer messaging system that acts as a database. It behaves just like a traditional relational database, but was designed with a completely distributed and scalable architecture.
http://www.insideanalysis.com
Talend Open Studio and Hortonworks Data Platform – Hortonworks
Data integration is a key step in a Hadoop solution architecture. It is the first obstacle encountered once your cluster is up and running: OK, I have a cluster…now what? Complex scripts? For wide-scale adoption of Apache Hadoop, an intuitive set of tools that abstracts away the complexity of integration is necessary.
This session offers a brief introduction to Fusion Applications and then progresses into a discussion of some highlights of the Fusion MDM for Customer application.
Webinar | Using Hadoop Analytics to Gain a Big Data Advantage – Cloudera, Inc.
Learn about:
Why big data matters to your business: realize revenue, increase customer loyalty, and pinpoint effective strategies
The business and technical challenges of big data solutions
How to leverage big data for competitive advantage
The “must haves” of an effective big data solution
Real-world examples of Cloudera, Pentaho and Dell big data solutions in action
Introduction to Hortonworks Data Platform for Windows – Hortonworks
According to IDC, Windows Server runs on more than 50% of servers in the enterprise data center. Hortonworks has worked closely with Microsoft to port Apache Hadoop to Windows, enabling organizations to take advantage of this emerging Big Data technology. Join us for this informative webinar to hear about the new Hortonworks Data Platform for Windows.
In less than an hour, you’ll learn:
-Key capabilities available in Hortonworks Data Platform for Windows
-How HDP for Windows integrates with Microsoft tools
-Key workloads and use cases for driving Hadoop today
Hadoop World 2011: Big Data Architecture: Integrating Hadoop with Other Enter... – Cloudera, Inc.
Recent research has pointed out the complementary nature of Hadoop and other data management solutions and the importance of leveraging existing systems, SQL, engineering, and operational skills, as well as incorporating novel uses of MapReduce to improve analytic processing. Come to this session to learn how companies optimize the use of Hadoop with other enterprise systems to improve overall analytical throughput and build new data-driven products. This session covers: ways to achieve high-performance integration between Hadoop and relational-based systems; Hadoop+NoSQL vs Hadoop+SQL architectures; high-speed, massively parallel data transfer to analytical platforms that can aggregate web log data with granular fact data; and strategies for freeing up capacity for more explorative, iterative analytics and ad hoc queries.
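One integration pattern the session describes, Hadoop as a preprocessing engine feeding a relational system, can be simulated in-process. This is a hedged sketch, not the session's actual architecture: the log format is invented, the "MapReduce job" is a plain Python loop, and SQLite stands in for the relational target that a real deployment would load via a bulk transfer tool.

```python
# Sketch of the "preprocess in Hadoop, load aggregates into the relational
# system" pattern. Raw log lines are reduced to small per-day/per-path
# aggregates first, so only the compact result crosses into the warehouse.
import sqlite3
from collections import Counter

raw_logs = [
    "2011-11-08 /home 200",
    "2011-11-08 /cart 200",
    "2011-11-08 /home 404",
    "2011-11-09 /home 200",
]

# "Map/reduce": count page hits per (day, path) before loading.
counts = Counter()
for line in raw_logs:
    day, path, _status = line.split()
    counts[(day, path)] += 1

# Load the (much smaller) aggregates into the relational side for BI joins.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE page_hits (day TEXT, path TEXT, hits INTEGER)")
db.executemany(
    "INSERT INTO page_hits VALUES (?, ?, ?)",
    [(d, p, n) for (d, p), n in counts.items()],
)
home_total = db.execute(
    "SELECT SUM(hits) FROM page_hits WHERE path = '/home'"
).fetchone()[0]
```

The design point is the one the abstract makes: aggregation where the raw data lives frees warehouse capacity for exploratory and ad hoc queries over granular fact data.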
Calpont CTO Jim Tommaney provides an overview of InfiniDB 3, Calpont’s analytic data platform.
Discussion Topics
•How InfiniDB is architected for Big Data analytics
•How InfiniDB is provisioned for Amazon EC2 with an AMI
•How to quickly create a small or large cluster
•How InfiniDB’s parallel load capabilities deliver linear load scaling
Teradata Aster: Big Data Discovery Made Easy
Brad Elo, VP, Aster Data, Teradata
ANALYTICS AND VISUALIZATION FOR THE FINANCIAL ENTERPRISE CONFERENCE
June 25, 2013 The Langham Hotel Boston, MA
Hadoop Summit 2012 | Integrating Hadoop Into the Enterprise – Cloudera, Inc.
The power of Hadoop lies in its ability to help users cost effectively analyze all kinds of data. We are now seeing the emergence of a new class of analytic applications that can only be enabled by a comprehensive big data platform. Such a platform extends the Hadoop framework with built-in analytics, robust developer tools, and the integration, reliability, and security capabilities that enterprises demand for complex, large scale analytics. In this session, we will share innovative analytics use cases from actual customer implementations using an enterprise-class big data analytics platform.
Analytic Platforms in the Real World with 451 Research and Calpont, July 2012 – Calpont Corporation
Matt Aslett, 451 Research, and Bob Wilkinson, VP Engineering for Calpont, discuss the emergence of the analytic platform, its place in the new ecosystem for Big Data, considerations for selection, and applied use cases of Calpont’s analytic platform, InfiniDB, in Telco and Mobile Advertising.
Big Data World Forum (BDWF http://www.bigdatawf.com/) is specially designed for data-driven decision makers, managers, and data practitioners who are shaping the future of big data.
IT @ Intel: Preparing the Future Enterprise with the Internet of Things – Intel IT Center
The Internet of Things (IoT) is the concept of diverse machines, devices, and technologies connecting, interacting, and negotiating with each other to help improve and enrich our lives. No longer is this limited to computer or smartphone technology. Everyday items such as household appliances, cars, and even toys can connect to the internet to integrate with other computing things, processes, and services. This new paradigm is changing how data is collected and used, and introducing new challenges for enterprises.
Solving the Really Big Tech Problems with IoT – Eric Kavanagh
The Briefing Room with Dr. Robin Bloor and HPE Security
The Internet of Things brings new technological problems: sensor communications are bi-directional, the scale of data generation points has no precedent and, in this new world, security, privacy and data protection need to go out to the edge. Likely, most of that data lands in Hadoop and Big Data platforms. With the need for rapid analytics never greater, companies try to seize opportunities in tighter time windows. Yet, cyber-threats are at an all-time high, targeting the most valuable of assets—the data.
Register for this episode of The Briefing Room to hear Analyst Dr. Robin Bloor explain the implications of today's divergent data forces. He’ll be briefed by Reiner Kappenberger of HPE, who will discuss how a recent innovation -- NiFi -- is revolutionizing the big data ecosystem. He’ll explain how this technology dramatically simplifies data flow design, enabling a new era of business-driven analysis, while also protecting sensitive data.
Hortonworks and Red Hat Webinar - Part 2 – Hortonworks
Learn more about creating reference architectures that optimize the delivery of the Hortonworks Data Platform. You will hear more about Hive and JBoss Data Virtualization security, and you will also see in action how to combine sentiment data from Hadoop with data from traditional relational sources.
MomentumInfoCare – An official partner of Citrix, Microsoft, HP, VMware, Websense, SonicWall, Trend Micro, Symantec, Atempo, EMC, RSA, and IBM in Delhi. An IT infrastructure specialist.
SQL Server Data Mining - Taking your Application Design to the Next Level – Mark Ginnebaugh
Presentation to the Silicon Valley SQL Server User Group on July 21, 2009. Microsoft MVP Peter Myers talks about Microsoft SQL Server 2005 and 2008 Data Mining and demonstrates how to develop data mining models that can be embedded into applications.
You Will Learn:
•What business problems SQL Server 2008 data mining can solve
•SQL Server’s data mining capabilities
•The data mining development cycle
•How to create, train, test and query mining models
•How to Embed data mining into reports and applications
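The create/train/test/query cycle the list describes can be shown in miniature. SQL Server data mining itself is driven by DMX statements (CREATE MINING MODEL, INSERT INTO, PREDICT); the stdlib-only sketch below mirrors the same lifecycle with a toy nearest-mean classifier on invented data, purely to illustrate the workflow, not the SSAS API.

```python
# The mining-model lifecycle in miniature: create/train on labeled rows,
# test on a holdout set, then query the model for predictions.

def train(rows):
    """'Create + train': compute the mean feature value per class label."""
    sums, counts = {}, {}
    for value, label in rows:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """'Query' the model: the nearest class mean wins."""
    return min(model, key=lambda label: abs(model[label] - value))

training = [(1.0, "low"), (2.0, "low"), (9.0, "high"), (10.0, "high")]
holdout = [(1.5, "low"), (9.5, "high")]

model = train(training)                                                # train
accuracy = sum(predict(model, v) == y for v, y in holdout) / len(holdout)  # test
answer = predict(model, 8.0)                                           # query
```

Embedding the result in a report or application then amounts to calling the query step (in SSAS, a DMX PREDICTION JOIN) from the application's data layer.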
Webinar | From Zero to Big Data Answers in Less Than an Hour – Live Demo Slides – Cloudera, Inc.
Slides describing Cloudera and Karmasphere, and how their combined products can install a Hadoop cluster, import data, run queries, and generate results.
Making your Analytics Investment Pay Off - StampedeCon 2012 – StampedeCon
At StampedeCon 2012 in St. Louis, Bill Eldredge of Nokia presents: At Nokia, we expect to save millions on avoided license fees this year on a single “Big Data” project by creating a symbiotic relationship between our traditional RDBMS storage and our newer Hadoop cluster. Our hybrid approach to data enables us to manage the convergence of structured and unstructured data, and save money. In our case we use Hadoop to process and import data into traditional systems. We have found that this use of Hadoop as a preprocessing engine has enabled maximum value to be derived from our systems, our data and our people.
Gena M. San Nicholas, a taxonomist and biology subject-matter expert (SME) at Access Innovations, Inc., shows how Data Harmony's machine-aided indexing (M.A.I.) module produces tagged subject terms within bodies of text for XML and other repositories. This aids in search and leverages subject metadata, resulting in added value to data collections.
When architecting for Big Data, you have to think outside the box.
This talk describes how an IBM project at CUNA Mutual Group in 1998 morphed into MIOsoft and planted the seeds of innovative ideas that eventually yielded a novel database paradigm, the Context Database, which in turn facilitated the creation of a tightly integrated distributed application development ecosystem for Big Data.
We contend that MIOdb is capable of storing and analyzing Big Data while simultaneously being capable of scaling to meet the demands of high-throughput transactional applications because of a unique combination of features, such as:
• Lock-free operations
• Logical and physical clustering of data
• Context-oriented concurrency
• Parallel reporting requiring only a single physical pass across each disk
• First-order modeling of complex, deeply-nested structures
• Automatic management of denormalization
• Sophisticated data integration concepts like transitive matching of like data, auto-relationship discovery, and conservation of data provenance.
The talk concludes by looking to the future, the “Internet of Things” and real time contextual analytics, and detailing how a platform like MIOedge makes understanding individual entities in real time possible.
Detailed information on the operation of the Data Harmony Thesaurus Master module from Access Innovations, Inc. Presented by Alice Redmond-Neal and Jack Bruce at the 2012 Data Harmony User Group meeting on February 7, 2012 at the Access Innovations, Inc. offices.
Data Lake vs. Data Warehouse: Which is Right for Healthcare? – Health Catalyst
The data lake style of data warehouse architecture is a flexible alternative to a traditional data warehouse: it allows for unstructured data. When a warehousing approach requires that data be in a structured format, the analyses that can be performed are constrained, because not all of the data can be structured early. The data lake concept is very similar to our Late-Binding approach, in that data lakes are our source marts. We increase their efficiency and effectiveness through: 1. Metadata, 2. Source Mart Designer, and 3. Subject Area Mart Designer.
Enterprise Data Hub: The Next Big Thing in Big Data – Cloudera, Inc.
If you missed Strata + Hadoop World, you missed quite a bit. This year's event was packed with Big Data practitioners across industries who shared their experiences and how they are driving new innovations like never before. Just because you weren't there doesn't mean you have to miss out.
In this session, we'll touch on a few of the key highlights from the show, including:
Key trends in Big Data adoption
The enterprise data hub
How the enterprise data hub is used in practice
All Grown Up: Maturation of Analytics in the Cloud – Inside Analysis
The Briefing Room with Wayne Eckerson and Birst
Live Webcast on Nov. 6, 2012
The desire for analytics today extends far beyond the traditional domain of Business Intelligence. The challenge is that operational systems come in countless shapes and sizes. Furthermore, each application treats data somewhat differently. But there are patterns of data flow and transformation that pervade all such systems. And there's one big place where all these data types and use cases have come together architecturally: the Cloud.
Watch this episode of the Briefing Room to hear veteran Analyst Wayne Eckerson explain how Cloud computing is ushering in a new era of analytics and intelligence. He'll be briefed by Brad Peters of Birst who will tout his company's purpose-built analytics platform. He'll discuss how the Birst engine processes and delivers raw data from disparate systems, offering the deployment flexibility of Software-as-a-Service, together with the capabilities of enterprise-class BI.
The Anywhere Enterprise – How a Flexible Foundation Opens Doors – Inside Analysis
The Briefing Room with Dr. Robin Bloor and InfiniDB
Live Webcast on August 12, 2014
Watch the archive:
https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=1e562c3a4b9e9cb9a054f0ec216d578b
Today’s organizations need all kinds of data, from a wide and growing array of sources. Marshaling all that data into one location can be difficult, even unrealistic. Increasingly, innovative companies are taking a much more distributed approach to storing and processing data. The end result is an information architecture that supports a broader range of business activities, and reduces dependence on costly data movement.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor, as he explains how a distributed approach to data management can open doors to new business opportunities. He’ll be briefed by Jim Tommaney of InfiniDB, who will explain how his company’s database has the flexibility to run on-prem, in the cloud, on cluster file systems, or even on Hadoop’s HDFS. He’ll also show how InfiniDB can serve as a conduit for companies looking to transform their information architecture to better satisfy changing market demands.
Visit InsideAnalysis.com for more information.
A Strategic View of Enterprise Reporting and Analytics: The Data Funnel – Inside Analysis
The Briefing Room with Colin White and Jaspersoft
Slides from the Live Webcast on June 12, 2012
As the corporate appetite for analytics and reporting grows, companies must find a way to secure a strategic view of their information architecture. End users with varying degrees of expertise need a wide range of data and reports delivered in a timely fashion. As the audience for analytics expands, that puts pressure on IT infrastructure and staff. And now with the promise of Hadoop and MapReduce, the organization's desire for business insight becomes even more significant.
In this episode of The Briefing Room, veteran Analyst Colin White of BI Research will explain the value of being strategic with enterprise reporting. White will be briefed by Karl Van den Bergh of Jaspersoft, who will tout his company's “data funnel” concept, which is designed to strategically manage an organization's information architecture. By aligning information assets along this funnel, IT can effectively address the spectrum of analytical needs – from simple reporting to complex, ad hoc analysis – without over-taxing personnel and system resources.
The Cloudera Impala project is pioneering the next generation of Hadoop capabilities: the convergence of interactive SQL queries with the capacity, scalability, and flexibility of a Hadoop cluster. In this webinar, join Cloudera and MicroStrategy to learn how Impala works, how it is uniquely architected to provide an interactive SQL experience native to Hadoop, and how you can leverage the power of MicroStrategy 9.3.1 to easily tap into more data and make new discoveries.
Hadoop Reporting and Analysis - Jaspersoft – Hortonworks
Hadoop is deployed for a variety of uses, including web analytics, fraud detection, security monitoring, healthcare, environmental analysis, social media monitoring, and other purposes.
Hadoop World 2011: How Hadoop Revolutionized Business Intelligence and Advanc... – Cloudera, Inc.
"Amr Awadallah served as the VP of Engineering of Yahoo's Product
Intelligence Engineering (PIE) team for a number of years. The PIE
team was responsible for business intelligence and advanced data
analytics across a number of Yahoo's key consumer facing properties (search, mail, news, finance, sports, etc). Amr will share the data architecture that PIE had implementted before Hadoop was deployed and the headaches that architecture entailed. Amr will then show how most, if not all of these headaches were eliminated once Hadoop was deployed. Amr will illustrate how Hadoop and Relational Database complement each other within the traditional business intelligence data stack, and how that enables organizations to access all their data under different
operational and economic constraints."
All data accessible to all my organization - Presentation at OW2con'19, June... – OW2
It is clear that all employees must have access to data wherever they are in order to make decisions. However, the tools used should allow sharing data just as easily as the best collaborative tools, such as a Google Doc or Office 365.
Open source projects, driven by the big data ecosystem and a number of large companies, have provided solutions that allow organizations to federate data systems and secure access to them.
After a quick overview of existing open source solutions and how such projects can be organized, the presentation details a Dremio implementation, a unified and centralized interface to all your data. Real-world feedback concludes the presentation.
What it takes to bring Hadoop to a production-ready stateClouderaUserGroups
While Hadoop may be a hot topic and is probably the buzziest big data term, the fact is that many Hadoop projects get stuck in pilot mode. We hear a number of reasons for this.
• “It’s too complicated.”
• “I don’t have the right resources.”
• “Security and compliance are never going to approve this.”
This session digs deep into why certain projects seem destined to remain in development. We’ll also cover what it takes to bring Hadoop to a production-ready state and convince management that it’s time to start using Hadoop to store and analyze real business data.
Webinar: The Future of Data Integration – Data Mesh and GoldenGate/Kafka – Jeffrey T. Pollock
The Future of Data Integration: Data Mesh, and a Special Deep Dive into Stream Processing with GoldenGate, Apache Kafka and Apache Spark. This video is a replay of a Live Webinar hosted on 03/19/2020.
Join us for a timely 45-minute webinar to see our take on the future of Data Integration. As the global industry shift toward the “Fourth Industrial Revolution” continues, outmoded styles of centralized batch processing and ETL tooling continue to be replaced by realtime, streaming, microservices and distributed data architecture patterns.
This webinar will start with a brief look at the macro-trends happening around distributed data management and how that affects Data Integration. Next, we’ll discuss the event-driven integrations provided by GoldenGate Big Data, and continue with a deep-dive into some essential patterns we see when replicating Database change events into Apache Kafka. In this deep-dive we will explain how to effectively deal with issues like Transaction Consistency, Table/Topic Mappings, managing the DB Change Stream, and various Deployment Topologies to consider. Finally, we’ll wrap up with a brief look into how Stream Processing will help to empower modern Data Integration by supplying realtime data transformations, time-series analytics, and embedded Machine Learning from within data pipelines.
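Two of the patterns named above, table/topic mappings and transaction consistency, can be sketched without any Kafka infrastructure. Everything below is invented for illustration (the event shape, table names, and routing rule); a real GoldenGate pipeline emits change events through its Kafka handler rather than plain dictionaries.

```python
# Sketch of two CDC-to-Kafka patterns: routing each change event to a topic
# derived from its source table, and grouping events so a consumer never
# sees half of a source transaction.

TOPIC_MAP = {"ORDERS": "cdc.orders", "CUSTOMERS": "cdc.customers"}

change_stream = [
    {"txid": 1, "table": "ORDERS",    "op": "INSERT"},
    {"txid": 1, "table": "CUSTOMERS", "op": "UPDATE"},
    {"txid": 2, "table": "ORDERS",    "op": "DELETE"},
]

def route(event):
    """Table/topic mapping: pick the destination topic for one event."""
    return TOPIC_MAP[event["table"]]

def group_by_transaction(stream):
    """Transaction consistency: deliver events transaction-by-transaction,
    in commit order, rather than interleaved across transactions."""
    txs = {}
    for event in stream:
        txs.setdefault(event["txid"], []).append(event)
    return [txs[txid] for txid in sorted(txs)]

batches = group_by_transaction(change_stream)
topics = [route(e) for e in change_stream]
```

The deployment-topology question the webinar raises then becomes where this grouping runs: in the replication tool before produce, or in a consumer that reassembles transactions from per-table topics.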
GoldenGate: https://www.oracle.com/middleware/tec...
Webinar Speaker: Jeff Pollock, VP Product (https://www.linkedin.com/in/jtpollock/)
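The table/topic mapping pattern mentioned in the deep dive above can be sketched in a few lines of plain Python. This is a simplified, hypothetical illustration of the general idea (exact-match rules beating schema wildcards, and primary-key message keys preserving per-row ordering), not GoldenGate's actual configuration syntax or API:

```python
# Hypothetical sketch of mapping database change events to Kafka topics.
# Rule names and event shape are illustrative only.

FALLBACK_TOPIC = "db.changes.misc"

def resolve_topic(schema, table, rules):
    """Map a (schema, table) pair to a Kafka topic name.

    `rules` maps "SCHEMA.TABLE" or "SCHEMA.*" patterns to topics;
    exact matches win over schema-level wildcards.
    """
    exact = f"{schema}.{table}"
    if exact in rules:
        return rules[exact]
    wildcard = f"{schema}.*"
    if wildcard in rules:
        # One topic per schema still preserves per-table ordering,
        # since all events for a table land on the same topic.
        return rules[wildcard]
    return FALLBACK_TOPIC

def partition_key(event):
    """Use the row's primary key as the Kafka message key so that all
    changes to one row go to one partition, keeping their order."""
    return "|".join(str(event["key"][k]) for k in sorted(event["key"]))

rules = {
    "SALES.ORDERS": "sales.orders.cdc",
    "HR.*": "hr.cdc",
}

event = {"schema": "SALES", "table": "ORDERS", "op": "UPDATE",
         "key": {"order_id": 42}}
topic = resolve_topic(event["schema"], event["table"], rules)
```

In a real pipeline the resolved topic and key would be handed to a Kafka producer; the point here is only how the mapping and ordering decisions interact.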
The flexibility of Apache Hadoop is one of its biggest assets – enabling businesses to generate value from data that was previously considered too expensive to be stored and processed in traditional databases – but also results in Hadoop meaning different things to different people. In this session 451 Research’s Matt Aslett will explore the impact that Hadoop is having on the traditional data processing landscape, examining the expanding ecosystem of vendors and their relationships with Apache Hadoop, investigating the increasing variety of Hadoop use-cases, and exploring adoption trends around the world.
The New Frontier: Optimizing Big Data Exploration (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Cirro
Live Webcast on February 11, 2014
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=0ec1fa381886313cc06d841015c65898
As information ecosystems continue to expand, businesses are searching for ways to combine traditional analytics with a new source of insight: Big Data. But with data flooding in from all kinds of sources, fast access and performance at scale can easily become an issue. One effective approach for solving this challenge is data federation, a method that involves taking the analytical processing to the data, allowing streamlined access to multiple data sources without the expensive ETL overhead or building of semantic layers.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor as he explains how the prevalence of distributed data calls for a new approach to Big Data. He will be briefed by Mark Theissen of Cirro, who will tout his company’s Data Hub, a data federation solution that provides a single point of access to all enterprise data assets without excessive data movements, preprocessing or staging. He will discuss how data federation differs from virtualization and ETL approaches, and demonstrate how a Cirro deployment solves the analytics challenge of integrating data silos across the data center – and the cloud – using the BI tools you already have on your desktop for real-time distributed analytics.
Visit InsideAnalysis.com for more information.
Infochimps: #1 Big Data Platform for the Cloud (Brian Krpec)
The Infochimps Platform is the simplest, fastest, and most flexible way to implement proven big data infrastructure in the cloud. Scalably and affordably ingest data from wherever you need — your in-house systems, external data feeds, data from the web, or our Data Marketplace. Make it useful with in-stream data decoration and augmentation. Store and analyze it in the best place for your application. Hadoop, NoSQL, real-time analytics — how do you tie it all together? The Infochimps Platform takes the mystery and difficulty out of big data and seamlessly integrates it with your existing environment, so you can focus on gaining business insights from your data fast.
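The "in-stream data decoration and augmentation" mentioned above is a common streaming pattern: each record is enriched with reference data as it flows through the pipeline, rather than in a separate batch join. Below is a minimal, generic sketch of that pattern in plain Python; the function and field names are hypothetical and not part of the Infochimps Platform's API:

```python
# Illustrative sketch of in-stream decoration: enrich each record
# with reference data as it flows through the pipeline.

def decorate(stream, reference, key="user_id"):
    """Yield each record augmented with fields looked up by `key`.

    Records whose key is unknown pass through undecorated, so the
    stream never blocks on missing reference data.
    """
    for record in stream:
        extra = reference.get(record.get(key), {})
        yield {**record, **extra}

# Reference data (e.g. from an in-house system or external feed).
users = {"u1": {"country": "US", "segment": "pro"}}

events = [{"user_id": "u1", "action": "click"},
          {"user_id": "u2", "action": "view"}]

enriched = list(decorate(events, users))
```

Because `decorate` is a generator, it composes with other stream stages and processes records one at a time, which is what makes decoration viable at ingest rather than as a downstream batch step.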
Smart companies know that business intelligence surfaces insights. With complex analytics, data mining and everything in between, it takes many moving parts to serve up the big picture. The key is to provide full-stack visibility into the entire BI environment, ensuring solid service and system performance.
Learn more at http://www.insideanalysis.com
Agile, Automated, Aware: How to Model for Success (Inside Analysis)
The Briefing Room with David Loshin and Embarcadero
Live Webcast October 27, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=eea9877b71c653c499c809c5693eae8fe
Data management teams face some tough challenges these days. Organizations need business-driven visibility that enables understanding and awareness of enterprise data assets – without worrying about definitions and change management. But with information architectures evolving into a hybrid mix of data objects and data services built over relational databases as well as big data stores, serving up accurately defined, reusable data can become a complex issue.
Register for this episode of The Briefing Room to learn from veteran Analyst David Loshin as he explains the importance of agile, automated workflows in today’s enterprise. He’ll be briefed by Ron Huizenga of Embarcadero, who will discuss how his company’s ER/Studio suite approaches data modeling and management from a modern architecture standpoint. He will explain that unifying the way information is represented can not only eliminate the need for costly workarounds, but also foster collaboration between data architects, developers and business users.
Visit InsideAnalysis.com for more information.
First in Class: Optimizing the Data Lake for Tighter Integration (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Teradata RainStor
Live Webcast October 13, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=012bb2c290097165911872b1f241531d
Hadoop data lakes are emerging as peers to corporate data warehouses. However, successful data management solutions require a fusion of all relevant data, new and old, which has proven challenging for many companies. With a data lake that’s been optimized for fast queries, solid governance and lifecycle management, users can take data management to a whole new level.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he discusses the relevance of data lakes in today’s information landscape. He’ll be briefed by Mark Cusack of Teradata, who will explain how his company’s archiving solution has developed into a storage point for raw data. He’ll show how the proven compression, scalability and governance of Teradata RainStor combined with Hadoop can enable an optimized data lake that serves as both reservoir for historical data and as a “system of record” for the enterprise.
Visit InsideAnalysis.com for more information.
Fit For Purpose: Preventing a Big Data Letdown (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and RedPoint Global
Live Webcast October 6, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=9982ad3a2603345984895f279e849d35
Gartner recently placed Big Data in its “trough of disillusionment,” reflective of many leaders’ struggle to prove the value of Hadoop within their organization. While the promise of enhanced data integration and enrichment is obvious, measurable results have remained elusive. This episode of The Briefing Room will outline how to successfully tie Big Data to existing business applications, preventing your next Hadoop project from being another “Big Data letdown.”
Register today to learn from veteran Analyst Dr. Robin Bloor as he discusses the importance of converging enterprise data integration with intelligence and scalability. He’ll be briefed by George Corugedo of RedPoint Global, who will provide concrete examples of how the convergence of scalable cloud platforms, ever-expanding data sources and intelligent execution can turn the Big Data hype into demonstrable business value.
Visit InsideAnalysis.com for more information.
To Serve and Protect: Making Sense of Hadoop Security (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and HP Security Voltage
Live Webcast September 22, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=45ece7082b1d7c2cc8179bc7a1a69ea5
Hadoop is rapidly becoming a development platform and dominant server environment, and organizations are keen to take advantage of its massively scalable – and relatively inexpensive – resources. It is not, however, without its limitations, and it often requires a contingent of complementary components in order to behave within an information architecture. One area often overlooked is security, a factor that, if not considered from the outset, can introduce great risk when putting sensitive data in Hadoop.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he discusses how security was never a design point for Hadoop and what organizations can do about it. He’ll be briefed by Sudeep Venkatesh of HP Security Voltage, who will explain the intricacies surrounding a secure Hadoop implementation. He will show how techniques like format-preserving and partial-field encryption can allow for analytics over protected data, with zero performance impact.
Visit InsideAnalysis.com for more information.
The Hadoop Guarantee: Keeping Analytics Running On Time (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Pepperdata
Live Webcast September 15, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=32f198185d9d0c4cf32c27bdd1498b2a
Industry researchers agree: the importance of Hadoop will continue to grow as more companies recognize the range of benefits they can reap, from lower-cost storage to better business insights. At the same time, advances in the Hadoop ecosystem are addressing many of the key concerns that have hampered adoption, including performance and reliability. As a result, Hadoop is fast becoming a first-class citizen in the world of enterprise computing.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor explain how the Hadoop ecosystem is evolving into a mature foundation for managing enterprise data. He’ll be briefed by Sean Suchter of Pepperdata, who will explain how his company’s software brings predictability and reliability to Hadoop through dynamic, policy-based controls and monitoring. He’ll show how to guarantee service-level agreements by slowing down low-priority tasks as needed. He’ll also discuss the holy grail of Hadoop: how to enable mixed workloads.
Visit InsideAnalysis.com for more information.
Special Edition with Dr. Robin Bloor
Live Webcast September 9, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=e8b9ac35d8e4ffa3452562c1d4286a975
Do the math: algebra will transform information management. Just as the relational database revolutionized the information landscape, so will a just-released, complete algebra of data overhaul the industry itself. So says Dr. Robin Bloor in his new book, The Algebra of Data, which he’ll outline in this special one-hour webcast.
Once organizations learn how to express their data sets algebraically, the benefits will be significant and far-reaching. Data quality problems will slowly subside; queries will run orders of magnitude faster; integration challenges will fade; and countless tedious jobs in the data management space will bid their farewell. But first, software companies must evolve, and that will take time.
Visit InsideAnalysis.com for more information.
The Role of Data Wrangling in Driving Hadoop Adoption (Inside Analysis)
The Briefing Room with Mark Madsen and Trifacta
Live Webcast September 1, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=eb655874d04ba7d560be87a9d906dd2fd
Like all enterprise software solutions, Hadoop must deliver business value in order to be a success. Much of the innovation around the big data industry these days therefore addresses usability. While there will always be a technical side to the Hadoop equation, the demand for user-friendly data management tools keeps the focus on business users. That’s why self-service data preparation or "data wrangling" is a serious and growing trend, one which promises to move Hadoop beyond the early adopter phase and more into the mainstream of business.
Register for this episode of The Briefing Room to hear veteran Analyst Mark Madsen of Third Nature explain why business users will play an increasingly important role in the evolution of big data. He’ll be briefed by Trifacta's Will Davis and Alon Bartur, who will demonstrate how Trifacta's solution empowers business users to “wrangle" data of all shapes and sizes faster and easier than ever before. They’ll discuss why a new approach to accessing and preparing diverse data is required and how it can accelerate and broaden the use of big data within organizations.
Visit InsideAnalysis.com for more information.
Ahead of the Stream: How to Future-Proof Real-Time Analytics (Inside Analysis)
Business seems to move faster by the day, with the most cutting-edge companies taking advantage of real-time data streams for heavy-duty analytics. But with so much innovation happening in so many places, how can companies stay ahead of the game? One answer is to future-proof your analytics architecture with an abstraction layer that can translate your business use case or workflow to any of several leading technologies, addressing the growing number of use cases in this dynamic field.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor, as he explains how a data flow architecture can harness a wide range of streaming solutions. He'll be briefed by Anand Venugopal of Impetus Technologies, who will showcase his company's StreamAnalytix platform, which was designed from the ground up to leverage multiple major streaming engines available today, including Apache Spark, Apache Storm and others. He'll demonstrate how StreamAnalytix provides enterprise-class performance while incorporating best-of-breed open-source components.
View the archive at: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=925d1e9b639b78c6cf76a1bbbf485b2b
All Together Now: Connected Analytics for the Internet of Everything (Inside Analysis)
The Briefing Room with Mark Madsen and Cisco
Live Webcast August 18, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=0eff120f8b2879b582b77f4ff207ee54
Today's digital enterprises are seeing an explosion of data at the edge. The Internet of Everything is fast approaching a critical mass that will demand a sea change in how companies process data. This new world of information is widely distributed, streaming, and overall becoming too big to move. Experts predict that within two to three years, the bulk of analytic processing will take place on the fringes of information architectures. As a result, forward-thinking companies are dramatically shifting their analytic strategies.
Register for this episode of The Briefing Room to hear veteran Analyst Mark Madsen of Third Nature explain how a new era of information architectures is now unfolding, paving the way to much more responsive and agile business models. He'll be briefed by Kim Macpherson of the Cisco Data and Analytics Business Unit, who will explain how her company's platform is uniquely suited for this new, federated analytic paradigm. She'll demonstrate how edge analytics can help companies address opportunities quickly and effectively.
Visit InsideAnalysis.com for more information.
Goodbye, Bottlenecks: How Scale-Out and In-Memory Solve ETL (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Splice Machine
Live Webcast August 11, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=e1b33c9d45b178e13784b4a971a4c1349
The ETL process was born out of necessity, and for decades it has been the glue between data sources and target applications. But as data growth soars and increased competition demands real-time data, standard ETL has become brittle and often unmanageable. Scaling up resources can do the trick, but it’s very costly and only a matter of time before the processes hit another bottleneck. When outmoded ETL stands in the way of real-time analytics, it might be time to consider a completely new approach.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he explains how modern, data-driven architectures must adopt an equally capable data integration strategy. He’ll be briefed by Rich Reimer of Splice Machine, who will discuss how his company solves ETL performance issues and enables real-time analytics and reports on big data. He will show that by leveraging the scale-out power of Hadoop and the in-memory speed of Spark, users can bring both analytical and operational systems together, eventually performing transformations only when needed.
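The idea of "performing transformations only when needed" can be sketched in a few lines: instead of an up-front ETL pass over every row, transformations are attached as lazy views and applied on read. This is a generic, illustrative pattern in plain Python, not Splice Machine's implementation:

```python
# Minimal sketch of transform-on-read: raw data stays untouched, and
# the transformation runs at query time, only for rows actually read.

class LazyView:
    def __init__(self, rows, transform):
        self.rows = rows            # raw, untransformed source data
        self.transform = transform  # applied per row, on demand

    def __iter__(self):
        # A fresh generator per iteration: no materialized copy exists,
        # so the "ETL" cost is paid only when the view is queried.
        return (self.transform(r) for r in self.rows)

raw = [{"amt": "12.50"}, {"amt": "3.00"}]
view = LazyView(raw, lambda r: {**r, "amt": float(r["amt"])})
total = sum(r["amt"] for r in view)  # transformation applied on read
```

The design choice is the same one the briefing describes at scale: keeping the source raw avoids a brittle batch pipeline, at the price of doing conversion work at query time, which is where scale-out compute and in-memory engines earn their keep.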
Visit InsideAnalysis.com for more information.
The Biggest Picture: Situational Awareness on a Global Level (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Modus Operandi
Live Webcast July 28, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=efc4082d9b0b0adfcd753a7435d2d6a1b
The analytic bottlenecks of yesterday no longer apply, and the boundaries are falling thanks in large part to the abundance of third-party data. The most data-driven companies these days are finding creative ways to dynamically incorporate data from within and beyond the firewall, thus building highly accurate, multidimensional views of their business, customer, competition or other subject areas.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor as he explains the magnitude of change that's occurring in the world of data, why it's happening now, and how you can take advantage. He'll be briefed by Mike Gilger and Boris Pelakh of Modus Operandi, who will showcase their company's enterprise analytics platform, which combines a range of battle-tested functionality to deliver dynamic situational awareness that can leverage a comprehensive array of data sets. They'll explain how the platform's reasoner benefits from a highly scalable rules engine, and a flexible modeling capability that can optimize data storage virtually on the fly.
Visit InsideAnalysis.com for more information.
Structurally Sound: How to Tame Your Architecture (Inside Analysis)
The Briefing Room with Krish Krishnan and Teradata
Live Webcast July 21, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=602b2a8413e8719d39465f4d6291d505
Technology changes all the time, but the basic needs of the business are the same: BI and analytics. With new types of data, various analytics engines and multiple systems, giving business users seamless access to enterprise data can be a rather daunting process. One solution is to provide a complete fabric that spans the organization, touching all data points and masking the complexity behind disparate sources.
Register for this episode of The Briefing Room to learn from veteran Analyst Krish Krishnan as he explores how and why architectures have changed over the years. He’ll be briefed by Imad Birouty of Teradata, who will discuss his company’s QueryGrid, an analytics solution designed to provide access to data across all systems. He will show how QueryGrid essentially creates a logical data warehouse and enables users to leverage SQL over multiple data types.
Visit InsideAnalysis.com for more information.
SQL In Hadoop: Big Data Innovation Without the Risk (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Actian
Live Webcast July 14, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=bbd4395ea2f8c60a03cfefc68c7aa823
Innovation often implies risk, which is why businesses have many issues to weigh when considering change. Yet the remarkable growth of data is driving many traditional systems into the ground, forcing information workers to take a critical look at their existing tools. Technologies like Hadoop offer economical solutions to big data management, but to truly take advantage of its capabilities, organizations must modernize their infrastructure.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he explains how and why organizations should improve legacy systems. He’ll be briefed by Todd Untrecht of Actian, who will tout his company’s Actian Vortex, a SQL-in-Hadoop solution. He will show how integrating a SQL engine directly in the Hadoop cluster can lead to faster analytics and greater control, while still maintaining existing investments.
Visit InsideAnalysis.com for more information.
The Briefing Room with Dr. Robin Bloor and SYSTAP
Live Webcast June 30, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=0ff3889293f6c090483295fd7362c5a4
There's a reason why the biggest Web companies these days leverage graph technology: it is incredibly powerful for revealing a wide range of insights. Unlike other analytical databases, graph databases can very quickly identify the kinds of patterns that lead to better business decisions. Though relatively nascent in existing data centers, graph databases are proving to be well-suited for all kinds of business use cases, from clustering and hypothesis generation to failure detection and cyber analytics.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he discusses how semantic technology fits in the spectrum of database and discovery solutions. He’ll be briefed by Brad Bebee of SYSTAP, who will showcase his company’s Blazegraph products and Mapgraph technology. He will explain how SYSTAP’s approach overcomes the challenge of scalability, and how graph technology’s powerful data management capabilities can deliver better enterprise performance and analytics using GPUs and other approaches.
Visit InsideAnalysis.com for more information.
A Revolutionary Approach to Modernizing the Data Warehouse (Inside Analysis)
Hot Technologies with Rick Sherman, Dr. Robin Bloor and Snowflake Computing
Live Webcast June 25, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=e6e6de6cdfa8926e7a9d52e099a1a08e2
Enterprise software tends to advance in one of two ways: evolutionary and revolutionary. Evolutionary advances happen through incremental improvements made to an existing code base over a long period of time. Revolutionary advances happen when a new solution is designed from scratch, breaking cleanly from legacy approaches to take advantage of technology innovations that can span from hardware to software and methodologies.
Register for this episode of Hot Technologies to hear veteran analysts Rick Sherman of Athena IT Solutions and Dr. Robin Bloor along with Bob Muglia, CEO of Snowflake Computing, explain how a confluence of advances in the data world have opened up new doors for revolutionary advances in data warehousing. They will discuss new technology innovations and how they can be used to create data warehouses with the power, flexibility, and resiliency that modern enterprises need without the complexities and latencies inherent to traditional approaches.
Visit InsideAnalysis.com for more information.
The Maturity Model: Taking the Growing Pains Out of Hadoop (Inside Analysis)
The Briefing Room with Rick van der Lans and Think Big, a Teradata Company
Live Webcast on June 16, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=197f8106531874cc5c14081ca214eaff
Hadoop is arguably one of the most disruptive technologies of the last decade. Once lauded solely for its ability to transform the speed of batch processing, it has marched steadily forward and promulgated an array of performance-enhancing accessories, notably Spark and YARN. Hadoop has evolved into much more than a file system and batch processor, and it now promises to stand as the data management and analytics backbone for enterprises.
Register for this episode of The Briefing Room to learn from veteran Analyst Rick van der Lans, as he discusses the emerging roles of Hadoop within the analytics ecosystem. He’ll be briefed by Ron Bodkin of Think Big, a Teradata Company, who will explore Hadoop’s maturity spectrum, from typical entry use cases all the way up the value chain. He’ll show how enterprises that already use Hadoop in production are finding new ways to exploit its power and build creative, dynamic analytics environments.
Visit InsideAnalysis.com for more information.
Rethinking Data Availability and Governance in a Mobile World (Inside Analysis)
The Briefing Room with Malcolm Chisholm and Druva
Live Webcast on June 9, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=baf82d3835c5dfa63202dcbe322a3ad7
The emergence of the mobile workforce has left an indelible mark on the enterprise; every employee is now mobile, and business data continues to be dispatched to the far reaches of the enterprise. While this has added enormous opportunity for increased productivity, it has also muddied the waters when it comes to controlling and protecting valuable data assets. As companies quickly evolve to address the new set of challenges posed by this shift in data usage, IT must ensure that all data, no matter where it’s generated or stored, is available and governed just as if it were still safely behind the corporate firewall.
Register for this episode of The Briefing Room to hear veteran Analyst Malcolm Chisholm as he explains the myriad challenges that mobile data introduces when addressing regulations and compliance needs, requiring new approaches to data governance. He’ll be briefed by Dave Packer of Druva, who will outline his company’s converged data protection strategy, which brings data center class capabilities to backup, availability and governance for the mobile workforce. He will share strategies to meet regional data residency, data recovery, legal hold and eDiscovery requirements and more.
Visit InsideAnalysis.com for more information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides from my talk with Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs (Alex Pruden)
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we walk through the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
2. Welcome
Host:
Eric Kavanagh
eric.kavanagh@bloorgroup.com
Twitter Tag: #briefr The Briefing Room
3. Mission
! Reveal the essential characteristics of enterprise software, good and bad
! Provide a forum for detailed analysis of today's innovative technologies
! Give vendors a chance to explain their product to savvy analysts
! Allow audience members to pose serious questions... and get answers!
4. December: Innovators
January: Big Data
February: Analytics
March: Data in Motion
5. Innovators
! Charles Babbage conceived the Analytical Engine in 1834.
! Automation and ease of use have driven innovation in computing ever since.
! The Cloud and Big Data are raising the bar.
6. Analyst: Robin Bloor
Robin Bloor is Chief Analyst at The Bloor Group
robin.bloor@bloorgroup.com
7. Cirro
! Cirro provides a single method to access any type of data, on any platform, in any environment.
! Its product suite consists of Cirro Data Hub, Analyst for Excel and Multi Store, all designed to remove complexity from Big Data analytics.
! Cirro's products are cloud-based and can run in public, private and on-premise environments.
8. Mark Theissen
Mark is CEO at Cirro. He is a respected analytics and data warehousing expert with more than 22 years in the industry. Most recently Mark was the worldwide data warehousing technical lead at Microsoft following the acquisition of DATAllegro. At DATAllegro Mark was the COO and a member of the board of directors. Prior to joining DATAllegro, Mark was Vice President and Research Lead at META Group (Gartner Group) for Enterprise Analytics Strategies, covering the data warehousing, business intelligence and data integration markets. Before META, Mark was VP of Professional Services at Accruent, where he was responsible for domestic and overseas services and operations. Mark has a BS in Computer Information Systems from Chapman University and an MBA from the University of California, Irvine.
28. Hadoop & The Big Data Dynamic
Hadoop has become the de facto reservoir for data
The Bloor Group
29. Hadoop & The Big Data Dynamic
– We witnessed something like this a long time ago, with ISAM files - before the advent of RDBMS
– The difference this time is that Hadoop has an ecosystem and it is growing
– Big Data (usually caught first by Hadoop) is mostly new data and mostly event data
– Hadoop is not (yet) a performance engine. It is an all-purpose capability
– It is delivering business benefits in a big way: it is hot….
30. BI Categories
HINDSIGHT: Regular reporting/operational BI, Excel
OVERSIGHT: Dashboards, OLAP, BPM, Excel
INSIGHT: Data mining, statistical analysis (trends and relationships)
FORESIGHT: Predictive analytics
32. Data Sources
[Diagram: a prospective hub, with metadata, linking data sources: standard DBMS (SQL), graph DBMS, XML, NoSQL, flat files, Hadoop and more - "Hub?"]
33. Problems Of The Data Layer
Hadoop is capable of ETL and often used for ETL, but that usually involves coding of a kind
Hadoop is multi-role and hence can spawn multiple instances
BI tools, which had good-enough interfaces to RDBMS, don't link to Hadoop directly, and probably shouldn't
The data layer is more complicated than it was and its complexity is increasing
Point to point connectivity usually was, is and may always be a bad idea
A connectivity architecture is needed
IT REQUIRES SIMPLE CONNECTORS
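The "simple connectors" idea can be sketched as a single small interface that every source adapter implements, with a hub fanning a query out and merging the results - the alternative to point-to-point links between every tool and every source. A minimal sketch in Python; the class and source names are illustrative assumptions, not Cirro's actual API:

```python
# Sketch of a hub-and-connector architecture: each source implements
# one small interface, the hub dispatches a query to all of them and
# merges the results. Names are hypothetical, not Cirro's API.
from typing import Callable, Dict, List

Row = Dict[str, object]

class Connector:
    """Common interface every data source adapter implements."""
    def query(self, predicate: Callable[[Row], bool]) -> List[Row]:
        raise NotImplementedError

class InMemorySource(Connector):
    """Stand-in for an RDBMS, flat-file or Hadoop connector."""
    def __init__(self, name: str, rows: List[Row]):
        self.name = name
        self.rows = rows

    def query(self, predicate: Callable[[Row], bool]) -> List[Row]:
        # Push the filter down to the source instead of shipping all rows.
        return [dict(r, _source=self.name) for r in self.rows if predicate(r)]

class DataHub:
    """One logical access point over many physical sources."""
    def __init__(self, sources: List[Connector]):
        self.sources = sources

    def query(self, predicate: Callable[[Row], bool]) -> List[Row]:
        results: List[Row] = []
        for source in self.sources:
            results.extend(source.query(predicate))
        return results

hub = DataHub([
    InMemorySource("warehouse", [{"region": "EU", "sales": 120}]),
    InMemorySource("hadoop", [{"region": "EU", "sales": 80},
                              {"region": "US", "sales": 200}]),
])
eu_sales = hub.query(lambda r: r["region"] == "EU")
```

With N tools and M sources, the hub needs N + M adapters instead of the N × M links that point-to-point connectivity implies.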
34. ! How would one use the Cirro Multi Store?
! Which companies/products do you regard as competitors (either directly or close competitors)?
! How does a Cirro implementation proceed, i.e., where do you start, what are the medium term goals, what do you replace?
! Conceptually a hub for the data layer is attractive. But how well does it scale out?
35. ! Can the hub be physically distributed, i.e., one logical instance with multiple physical instances?
! How does your proprietary MapReduce differ from Hadoop MapReduce?
! Is there any aspect of BI that you don't or can't cater for (CEP, data governance, MDM, etc.)?
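For readers unfamiliar with the model the MapReduce question refers to: conceptually, a map step emits key/value pairs, a shuffle groups them by key, and a reduce step folds each group. A minimal word-count sketch of that model in Python (illustrative only; it says nothing about Hadoop's or Cirro's actual implementation):

```python
# Minimal illustration of the MapReduce model: map emits (key, value)
# pairs, shuffle groups them by key, reduce folds each group.
# Illustrative only - not Hadoop's or Cirro's implementation.
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def map_phase(docs: Iterable[str]) -> List[Tuple[str, int]]:
    # Emit one (word, 1) pair per word in every document.
    return [(word, 1) for doc in docs for word in doc.split()]

def shuffle_phase(pairs: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    # Group all values emitted under the same key.
    groups: Dict[str, List[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups: Dict[str, List[int]]) -> Dict[str, int]:
    # Fold each group of values into a single result per key.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle_phase(map_phase(["big data", "big hadoop"])))
# counts -> {"big": 2, "data": 1, "hadoop": 1}
```

In a real engine each phase runs in parallel across many nodes; the sketch only shows the data flow that any MapReduce variant shares.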
37. Upcoming Topics
January: Big Data
February: Analytics
March: Data in Motion
2013 Editorial Calendar
www.insideanalysis.com
38. Thank You for Your Attention