DKAN Drupal Distribution Presentation at Drupal Gov Days 2013 – Andrew Hoppin
Slides from presentation at Drupal Gov Days 2013 (http://drupalgovdays2013.org/content/dkan-drupal-distro-open-data) about the DKAN Open Data Distribution of Drupal
Open Data Portals: 9 Solutions and How They Compare – Safe Software
Get a comparison of CKAN, Socrata, ArcGIS Open Data and other top open data solutions. Plus get answers to best practice questions such as: Which datasets are important to share? What are the approximate costs? Which file formats should the data be shared in? How often should the data get updated? And overall, how can we ensure success with our open data portal?
Enforcing Schemas with Kafka Connect | David Navalho, Marionete and Anatol Lu... – HostedbyConfluent
Applying some measure of governance over how schemas are managed helps ensure good-quality data, as well as better lineage tracking.
At Saxo, we have been on a journey to take control of how we manage our data through rich, governed schemas. We hit a challenge when we wanted to ingest data with Kafka Connect: there was no way to ensure that incoming data matched these existing schemas. We were left having to either build a second step of manual transformations simply to match generic data into our internal schemas, or play a lengthy game of cat and mouse with Connect exceptions and complex per-field transformations.
In this talk, we present how we tackled this issue by developing our own Schema Matching transformation. Our SMT can automatically match fields into a referenced schema. We will walk through our experience designing the solution and some key findings from developing the SMT for both Avro and Protobuf.
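The core idea behind such a schema-matching transformation can be sketched outside of Connect: given a generic record and a referenced target schema, match fields by name and coerce values to the declared types. The sketch below is illustrative Python, not Saxo's actual Java SMT; the schema format, field names, and matching rules are all assumptions.

```python
# Toy sketch of schema matching: project a generic record onto a
# referenced target schema, keeping only declared fields and coercing
# each value to the declared type. A real Kafka Connect SMT would be
# written in Java against the Connect Transformation API.

TARGET_SCHEMA = {            # hypothetical referenced schema
    "trade_id": str,
    "quantity": int,
    "price": float,
}

def match_to_schema(record: dict, schema: dict) -> dict:
    """Keep only schema fields, coercing each value to the declared type."""
    matched = {}
    for field, ftype in schema.items():
        if field not in record:
            raise ValueError(f"missing required field: {field}")
        matched[field] = ftype(record[field])   # coerce, e.g. "250" -> 250
    return matched

generic = {"trade_id": "T-1001", "quantity": "250", "price": "99.5", "extra": "x"}
print(match_to_schema(generic, TARGET_SCHEMA))
```

The same idea generalizes to Avro and Protobuf records, where the "referenced schema" would come from a schema registry rather than an inline dict.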
Artificial intelligence and machine learning are currently all the rage. Every organisation is trying to jump on this bandwagon and cash in on their data reserves. At ThoughtWorks, we’d agree that this tech has huge potential — but as with all things, realising value depends on understanding how best to use it.
How Semantics Solves Big Data Challenges – DATAVERSITY
Today, organizations want both IT simplicity and innovation, but reliance on traditional databases only leads to more complexity, longer development cycles, and more silos. In fact, organizations report that the #1 impediment to big data success is having too many silos. In this webinar, we will discuss how a new database technology, semantics, solves this problem by providing a new approach to modeling data that focuses on relationships and context, making it easier for data to be understood, searched, and shared. With semantics, world-leading organizations are integrating disparate data faster and easier and building smarter applications with richer analytic capabilities—benefits that we look forward to diving into during the webinar.
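The relationship-and-context modeling that semantic databases use can be illustrated with a toy triple store. In production this would be RDF queried with SPARQL; the sketch below uses plain Python and invented entity names to stay self-contained.

```python
# Toy subject-predicate-object triple store, illustrating the
# relationship-centric modeling behind semantic databases. Entity and
# predicate names are made up; a real system would store RDF and
# answer SPARQL queries.

triples = {
    ("acme", "isA", "Supplier"),
    ("acme", "locatedIn", "Berlin"),
    ("widget9", "suppliedBy", "acme"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Who supplies widget9, and where are they located?
supplier = next(iter(query(s="widget9", p="suppliedBy")))[2]
print(query(s=supplier, p="locatedIn"))
```

Because every fact is a triple, new sources can be merged by simply adding more triples, which is the point the webinar makes about avoiding silos.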
Using the Semantic Web Stack to Make Big Data Smarter – Matheus Mota
This presentation will discuss how just a few parts of the Semantic Web Cake can already boost your analytics by making your (big) data smarter and even more connected.
Towards Digital Twin standards following an open source approach – FIWARE
Digital Twins are gaining momentum in the design of smart solutions across application domains. However, there is a lack of open standards that guarantee interoperability and portability of solutions and avoid vendor lock-in.
During the presentation, we will review major developments in this area, focusing on the adoption of a standard API for accessing Digital Twin Data and Smart Data Models. We will review how a Digital Twin approach enables data integration at different levels: architecting vertical smart solutions, within smart organizations, and across organizations, interfacing at all levels with IoT, Big Data, AI/ML, Blockchain, or Robotics technologies.
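The standard API FIWARE promotes for this is NGSI-LD, where each twin is an entity with typed attributes. The sketch below assembles an NGSI-LD-style entity payload in Python; the entity URN and attribute names are invented for illustration, not taken from a real deployment.

```python
import json

# Hedged sketch of an NGSI-LD-style entity, the kind of payload a
# standards-based Digital Twin API exchanges. Entity id/type and the
# attribute names are illustrative only.

def make_entity(entity_id: str, entity_type: str, **properties) -> dict:
    """Build a minimal NGSI-LD-style entity with Property attributes."""
    entity = {"id": entity_id, "type": entity_type}
    for name, value in properties.items():
        entity[name] = {"type": "Property", "value": value}
    return entity

twin = make_entity("urn:ngsi-ld:Pump:001", "Pump", rpm=1450, temperature=61.5)
print(json.dumps(twin, indent=2))
```

Because the payload shape is standardized, any NGSI-LD-aware consumer (analytics, ML, dashboards) can read the twin without vendor-specific adapters, which is the portability argument the talk makes.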
Diyotta DataMover is an intuitive and easy-to-use tool built to develop, monitor, and schedule data-movement jobs that load data into Hadoop from disparate sources, including RDBMS, flat files, mainframes, and other Hadoop instances. DataMover enables users to graphically design data import or export jobs by identifying source objects and then schedule them for execution.
Enabling Digital Transformation: API Ecosystems and Data Virtualization – Denodo
Watch the full webinar here: https://buff.ly/2KBKzLJ
Digital transformation, as cliché as it sounds, is on top of every decision maker’s strategic initiative list. And at the heart of any digital transformation, no matter the industry or the size of the company, there is an application programming interface (API) strategy. While API platforms enable companies to manage large numbers of APIs working in tandem, monitor their usage, and establish security between them, they are not optimized for data integration, so they cannot easily or quickly integrate large volumes of data between different systems. Data virtualization, however, can greatly enhance the capabilities of an API platform, increasing the benefits of an API-based architecture. With data virtualization as part of an API strategy, companies can streamline digital transformations of any size and scope.
Join us for this webinar to see these technologies in action in a demo and to get the answers to the following questions:
* How can data virtualization enhance the deployment and exposure of APIs?
* How does data virtualization work as a service container, as a source for microservices and as an API gateway?
* How can data virtualization create managed data services ecosystems in a thriving API economy?
* How are GetSmarter and others leveraging data virtualization to facilitate API-based initiatives?
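Conceptually, data virtualization resolves one logical view over several sources at query time, without copying the data into a central store. A toy sketch of that idea, with invented sources and fields:

```python
# Toy sketch of the idea behind data virtualization: a unified logical
# record assembled on demand from independent sources, with nothing
# replicated into a central store. Source names and fields are made up.

crm = {"c1": {"name": "Acme", "segment": "Retail"}}           # "CRM" source
billing = {"c1": {"balance": 120.0}, "c2": {"balance": 5.0}}  # "billing" source

def customer_view(customer_id: str) -> dict:
    """Resolve a unified customer record on demand from both sources."""
    record = {"id": customer_id}
    record.update(crm.get(customer_id, {}))
    record.update(billing.get(customer_id, {}))
    return record

print(customer_view("c1"))
```

An API layer in front of `customer_view` is, in miniature, the "data virtualization as a source for microservices" pattern the webinar discusses: the API consumer never knows how many systems were queried.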
A possible future role of schema.org for business reporting – sopekmir
The presentation outlines a vision for a schema.org “reporting extension” that could enhance business-reporting processes, and the role it could play in the SBR vision.
This is Part 4 of the GoldenGate series on Data Mesh - a series of webinars helping customers understand how to move off of old-fashioned monolithic data integration architecture and get ready for more agile, cost-effective, event-driven solutions. The Data Mesh is a kind of Data Fabric that emphasizes business-led data products running on event-driven streaming architectures, serverless, and microservices based platforms. These emerging solutions are essential for enterprises that run data-driven services on multi-cloud, multi-vendor ecosystems.
Join this session to get a fresh look at Data Mesh; we'll start with core architecture principles (vendor agnostic) and transition into detailed examples of how Oracle's GoldenGate platform is providing capabilities today. We will discuss essential technical characteristics of a Data Mesh solution, and the benefits that business owners can expect by moving IT in this direction. For more background on Data Mesh, Part 1, 2, and 3 are on the GoldenGate YouTube channel: https://www.youtube.com/playlist?list=PLbqmhpwYrlZJ-583p3KQGDAd6038i1ywe
Webinar Speaker: Jeff Pollock, VP Product (https://www.linkedin.com/in/jtpollock/)
Mr. Pollock is an expert technology leader for data platforms, big data, data integration and governance. Jeff has been CTO at California startups and a senior exec at Fortune 100 tech vendors. He is currently Oracle VP of Products and Cloud Services for Data Replication, Streaming Data and Database Migrations. While at IBM, he was head of all Information Integration, Replication and Governance products; previously, Jeff was an independent architect for the US Department of Defense, VP of Technology at Cerebra and CTO of Modulant – he has been engineering artificial-intelligence-based data platforms since 2001. As a business consultant, Mr. Pollock was a Head Architect at Ernst & Young’s Center for Technology Enablement. Jeff is also the author of “Semantic Web for Dummies” and “Adaptive Information”, a frequent keynote speaker at industry conferences, an author for books and industry journals, formerly a contributing member of W3C and OASIS, and an engineering instructor with UC Berkeley’s Extension for object-oriented systems, software development process and enterprise architecture.
Overview of the RDF graph database-as-a-service (GraphDB-based) on the Self-Service Semantic Suite (S4): http://s4.ontotext.com
Presentation for the AKSW Group of the University of Leipzig.
Yelp operates a connector ecosystem to feed vital data to domain-specific teams and data stores. We share some of our learnings and experiences from operating such a system, and touch on the next phase of the system's evolution.
Promote the Good of the People of the United Kingdom by Maintaining Monetary ... – DataWorks Summit
The Bank of England is the central bank of the United Kingdom, established in 1694. Representatives from the Bank’s Data Analytics & Modelling team will discuss the Bank of England's journey to delivering a Big Data capability and how the Hortonworks HDP platform is helping us deliver on our mission statement of “promote the good of the people of the United Kingdom by maintaining monetary and financial stability”. We will explore the challenges we've faced, how we have overcome some of these and those that remain to be conquered. We will also present our strategy for the Bank’s future Big Data platform as we look to scale up further in the coming years.
We will focus in particular on our first successful ‘Big Data’ production system. This exists in response to the financial crisis of 2008 and the subsequent push to make the derivative markets safer by reducing systemic risk. In Europe this was delivered through the European Market Infrastructure Regulation (EMIR). We will explain the Bank of England’s role in monitoring UK entities within this important market and describe the significant challenges facing our team in building a data analytics platform to facilitate this.
Speakers
Nick Vaughan, Domain SME - Data Analytics & Modelling
Bank of England
Adrian Waddy, Technical Lead
Bank of England
Linked Data from a Digital Object Management System – Uldis Bojars
Lightning talk about generating Linked Data from a digital object management system at the National Library of Latvia. Conference: http://swib.org/swib12/programme.php
Data Management Systems for Government Agencies, with CKAN – Steven De Costa
Over the last two days (5th and 6th of November 2015) I was very happy to present to a range of Victorian Government agencies and give them some context on what data management can do for their organisations.
From first principles we went through why data was important and what infrastructure was already in place via data.vic.gov.au for them to leverage. We covered examples of how other agencies, such as the Office of Environment and Heritage in NSW, are rebuilding their data management system to provide a more efficient pipeline for publishing internal and public data.
As always, I could not help highlighting the awesome leadership of WA Parks and Wildlife and the work done by Florian Mayer as the best case example for reducing the costs and friction often involved with publishing data as contextually marked up knowledge.
We covered a number of scenarios in which the concept of resource containers for data was considered. This generated valuable feedback, which has further galvanized my thoughts on how to extend CKAN to meet the needs of both private and open data portals, and of other forms of realtime or unstructured data.
Getting to Know CKAN, 24 June 2015, Singapore – Steven De Costa
Presented in Singapore on 24 June 2015 as part of the Infocomm Development Authority Data 101 series.
Provides an overview on what CKAN is and what organisations are using it for. The session also covered a number of topics related to the organisation of published data.
CKAN foo – CKAN Association overview at CKANcon 2015, Ottawa – Steven De Costa
Presenting the statement of purpose for the CKAN Association, running through a SWOT analysis for the CKAN project and providing a short overview of Link Digital's activities.
Cloud Asia presentation in Singapore, 29 October 2015 – Steven De Costa
'The Perfect Storm' - covering service oriented Government, data classification and public cloud.
Presented as part of the Data as a Service track and hosted by iDA.
Drupal, CKAN and Public Data. DrupalGov, 08 February 2016 – Steven De Costa
Main points from this presentation are:
DKAN is not CKAN
CKAN is owning Australian Government
Data.Vic, Data.NSW, Data.SA and Data.Brisbane use Drupal and CKAN together
Single Sign on – https://github.com/ckan/ckanext-drupal7
Taxonomies and CKAN - pulling from CKAN into Drupal to enhance content for Government websites.
Webforms to CKAN - for an 'open data' form collection process.
Resource Views for Drupal - configured for a CKAN portal and organisation.
Telling stories with data...
CKAN is a powerful data management system that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available.
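For developers, CKAN's publishing and search tools are exposed through its Action API; `/api/3/action/package_search` is the standard dataset-search endpoint. The sketch below only constructs the request URL (no network call); the host and query are illustrative.

```python
from urllib.parse import urlencode

# Build a CKAN Action API request URL for dataset search. The endpoint
# path (/api/3/action/package_search) is standard CKAN; the portal host
# and the query below are just examples.

def package_search_url(base_url: str, query: str, rows: int = 10) -> str:
    """Return the package_search URL for a free-text dataset query."""
    params = urlencode({"q": query, "rows": rows})
    return f"{base_url}/api/3/action/package_search?{params}"

print(package_search_url("https://demo.ckan.org", "water quality"))
```

Fetching that URL with any HTTP client returns a JSON envelope whose `result.results` list holds the matching dataset records, which is how the Drupal integrations mentioned above pull CKAN content into a site.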
We will introduce key concepts for a data lake and present aspects related to its implementation, discussing critical success factors, pitfalls to avoid, operational aspects, and insights into how AWS enables a serverless data lake architecture.
Speaker: Sebastien Menant, Solutions Architect, Amazon Web Services
An introduction to the free and open source software for data catalogs, CKAN (Comprehensive Knowledge Archive Network). Presented at the IV Moscow Urban Forum, Russia, in December 2014. http://mosurbanforum.com/forum2014/
Transparency to Innovation Civic Technology Keynote Case Study at Drupalcamp ... – Andrew Hoppin
Case study given as a keynote at Drupalcamp Ottawa on 2/22/13, discussing NYSenate.gov, the new Drupal AppCatalog distribution, and the role of Drupal in the civic technology ecosystem.
OpenStack in the Enterprise – Interop Las Vegas 2014 – Seth Fox
OpenStack has been making tremendous progress, with production deployments proliferating globally. But is OpenStack hardened and ready for the Enterprise? Is it mature enough to run production and mission critical workloads? Does it adequately address security and compliance requirements? We believe that the answer is a resounding “yes”.
This session will deliver the insights you need to fully embrace OpenStack by addressing:
Common Pitfalls - common reasons why OpenStack deployments typically fail in enterprise environments
Economics - total cost of ownership of a typical OpenStack footprint within the enterprise, and highlight the areas where benefits are primarily achieved
Ecosystem - the importance of the OpenStack ecosystem, and why this helps the enterprise in the short and long-term
Private, Public or Hybrid - where to deploy one of the models, and explain why OpenStack is the right choice for all of them
Real world enterprise case studies - successful deployment models
An overview of datonixOne, a new, evolutionary, and effective data preparation platform.
We introduce a new disruptive technology into the data management space: the Data Scanner.
Using the Data Scanner, data science becomes more accurate and feasible.
datonixOne is a perfect satellite of any Enterprise Data Hub.
Databases: The Neglected Technology in DevOps – DevOps.com
Much has been written about software delivery in DevOps, with much less focus on the database. However, DevOps can—and should—play an equally critical role in both software and database development. In this ebook, we examine how DevOps can be used for database development and delivery, factors influencing DevOps’ role in database delivery, and some of the technologies designed to help.
Join us for this lively panel discussion!
Roman Pavlyuk, Yaroslav Ravlinko, Intellias. Enterprise IT Transformation and... – IT Arena
With more than 17 years of experience in IT, Roman has outstanding expertise in top-of-the-line IT consulting and advisory practices and in delivering high-value services. He is proficient in technologies, product management and business analysis, as well as in leading large-scale programs for emerging and enterprise markets. Roman’s experience in driving transformation projects for US and Middle East clients is extremely valuable for Intellias in reaching the company’s goal of becoming an advisory partner for its clients and partners. Currently, Roman holds a VP Technology role and leads the Technology Office organization with a focus on IT advisory and excellence.
Co-speaker – Yaroslav Ravlinko, Head of IT Advisory and DevOps Group, Technology Office at Intellias.
Yaroslav has been working in the IT industry since 2008, delivering more than 50 projects in different domains across the globe. In the last few years, he has been cooperating with high-profile clients to provide guidance and support on their organizations’ journey to digital transformation. He has worked with the biggest retailers in North America and tech giants such as Cisco, Dell, SUSE, and Canonical. His most significant recent project is a Data Science Platform (ML) developed in collaboration with Dell, Canonical, SUSE, and Intel (officially announced in February 2020). Today, he is the head of the IT Advisory Group at the Intellias Technology Office.
Speech Overview:
Why distributed SQL databases will dominate the Big Data world and how technologies like Kubernetes can help in achieving that.
Part 1: IT Organization Transformation
The main pillars of IT organization transformation;
The main trends and finding the right balance between hype and actual usefulness;
Technologies and platforms that will have the biggest impact on our lives in the future
Part 2: Big Data is dead, long live big data?
“Did Google Send the Big Data Industry on a 10-Year Head Fake?”
Spanner vs Hadoop
Spanner: Google’s Globally-Distributed Database;
Spanner and BigQuery (design and architecture);
Why distributed SQL databases will dominate the “big” data world;
So, what’s next?
Enabling Self-Service Analytics with Logical Data Warehouse – Denodo
Watch full webinar here: https://buff.ly/2GNO8PC
What makes data scientists happy? Data, of course. They want it fast and flexible, and they want to do it themselves. But most classic data warehouses (DW) and data lakes are not easy to deal with for agile data access. A more practical solution is the logical data warehouse (LDW), which has proven to be a more agile foundation for delivering and transforming data and makes it easy to quickly plug in new data sources.
Attend this webinar to learn:
* How easily new data sources can be made available for analytics and data science
* How your organization can successfully migrate to a flexible LDW architecture in a step-by-step fashion
* How LDWs help integrate self-service analytics with classic forms of business intelligence
9. Beechwood Reverse Pitch FINALISTS 6.14.17 – Joel Bennett
Open data initiatives promise to liberate data isolated in local government data stores, but most of these programs are tackled independently, without the benefit of collective thought and innovation. We believe a centralized, common collaboration platform that hosts open data, metadata cross-walking, and applications will facilitate community-supported application development, standardization, code sharing, and regional data views across borders. This regional sharing will allow staff and the community to focus on refining existing applications and on new innovation rather than replicating standard solutions independently, and will yield a higher return on investment.
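The metadata cross-walking mentioned above maps one agency's local metadata fields onto a shared regional schema. A minimal sketch, with field names on both sides invented for illustration:

```python
# Minimal sketch of a metadata crosswalk: translate one local
# government's dataset metadata into a shared regional schema.
# All field names here are invented for illustration.

CROSSWALK = {            # local field -> shared field
    "dataset_title": "title",
    "dept": "publisher",
    "updated": "modified",
}

def crosswalk(record: dict, mapping: dict) -> dict:
    """Translate a local metadata record into the shared schema,
    dropping fields the shared schema does not define."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

local = {"dataset_title": "Street Trees", "dept": "Parks", "updated": "2017-06-14"}
print(crosswalk(local, CROSSWALK))
```

With one crosswalk table per participating government, the shared platform can present all datasets under a single schema, which is what makes the regional data views across borders possible.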
To build great sites and digital experiences, teams require easy-to-use tools focused on key capabilities: rapid development tools, radically simple page-building components, drag-and-drop widgets to manage and use rich media and content assets, and intuitive tools to review, approve and publish quickly.
Acquia Lightning, a Drupal 8 open source distribution from the Drupal experts at Acquia, is the content management foundation that provides these capabilities and more to developers and experience teams in thousands of organizations worldwide.
Join this webinar, led by Drupal Project Lead and Acquia CTO Dries Buytaert, and Jeff Beeman, Acquia Lightning Product Manager, to discover how Lightning is the best content management foundation for developers and next-generation experience builders.
Attend this webinar to:
- Learn about Acquia Lightning and how to make it the best starting point for your Drupal projects
- Discover Lightning’s benefits and learn how developers are reducing development time by 30% while creating better, more usable sites
- Understand, through demos, how Lightning makes it easier for experience builders to create page layouts, embed rich media, and create, approve, and publish content to engage audiences faster
- Discover why Lightning, on Acquia Cloud, is the secret to building better sites faster and delivering more engaging experiences that drive outcomes
Watch a replay of the webinar: https://www.youtube.com/watch?v=BtzPgLBy56w
451 Research and NuoDB outline the key database criteria for cloud applications. Explore how applications deployed in the cloud require a combination of standard functionality, such as ANSI SQL, and new capabilities specifically required to take full advantage of cloud economics, such as elastic scalability and continuous availability.
Watch here: https://bit.ly/3i2iJbu
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received the most attention from the software community in recent years. From artificial intelligence and machine learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
Join us for an exciting session that will cover:
- The most interesting trends in data management.
- Our predictions on how those trends will change the data management world.
- How these trends are shaping the future of data virtualization and our own software.
The standard 'Getting to know CKAN' deck used to give people interested in the CKAN project an overview of what it is and how they might like to get involved.
Presentation by Bart Gielen (DataSense) at the Data Vault Modelling and Data ...Patrick Van Renterghem
DataSense is, together with the internal team, implementing a new Enterprise Data Warehouse (EDW) at Bank Degroof Petercam. The goal of the EDW is to integrate and historicize all data changes of the Belgian and Luxembourg entities in the medium term, and those of the other entities in the long term. All of this is done while complying with regulations (legal reporting) and service obligations (tax certificates, reports, taxes on funds, return on portfolio, ...). The implementation follows the principles of Data Vault 2.0, with automation of the RAW Data Vault layer. For this purpose they use Talend (enterprise open-source data integration platform), Vaultspeed (Data Vault automation), and Oracle (database platform).
Bridging the Last Mile: Getting Data to the People Who Need ItDenodo
Watch full webinar here: https://bit.ly/3cUA0Qi
Many organizations are embarking on strategically important journeys to embrace data and analytics. The goal can be to improve internal efficiencies, improve the customer experience, drive new business models and revenue streams, or – in the public sector – provide better services. All of these goals require empowering employees to act on data and analytics and to make data-driven decisions. However, getting data – the right data at the right time – to these employees is a huge challenge and traditional technologies and data architectures are simply not up to this task. This webinar will look at how organizations are using Data Virtualization to quickly and efficiently get data to the people that need it.
Attend this session to learn:
- The challenges organizations face when trying to get data to the business users in a timely manner
- How Data Virtualization can accelerate time-to-value for an organization’s data assets
- Examples of leading companies that used data virtualization to get the right data to the users at the right time
Big Data Fabric: A Necessity For Any Successful Big Data InitiativeDenodo
Watch this webinar in full here: https://buff.ly/2IxM8Iy
Watch all webinars from the Denodo Packed Lunch webinar series here: https://buff.ly/2IR3q6w
While big data initiatives have become necessary for any business to generate actionable insights, a big data fabric has become a necessity for any successful big data initiative. A best-of-breed big data fabric should deliver actionable insights to business users with minimal effort, provide end-to-end security across the entire enterprise data platform, and provide real-time data integration, all while delivering a self-service data platform to business users.
Attend this session to learn how big data fabric enabled by data virtualization:
• Provides lightning fast self-service data access to business users
• Centralizes data security, governance and data privacy
• Fulfills the promise of data lakes to provide actionable insights
Similar to DKAN: The Drupal Open Data Distribution (presented at SANDCamp San Diego Drupalcamp)
OpenCivic Drupal Distribution presentation at CapitalCampAndrew Hoppin
This presentation at CapitalCamp 2013 in Washington DC discusses the Drupal "OpenCivic" distribution and its role in supporting the burgeoning civic software ecosystem with an open platform to run app catalogs and hackathons.
Open Collaboration in New York City DoITTAndrew Hoppin
Presentation about NYC Dept of Information Technology & Telecommunications move towards open-source software collaboration to solve civic software needs.
New York State Senate NCSL20 PresentationAndrew Hoppin
Presentation given at "Opening Up State Legislatures With Social Media" session at National Conference of State Legislatures conference July 21, 2009 in Philadelphia.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We then held a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent way of using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
17. Why Open-Source Matters...
• No vendor lock-in / choice of consultants / ability to build in-house capacity
• Collaborate w/ our peers (White House)
• Security transparency (US DoD is a major consumer for this reason)
• Open-source platforms often pay more heed to open formats and standards (e.g.: DCAT, RDFa, OData, JSON vs Shapefiles, PDF, etc.)
• Innovation: healthy open-source projects can aggregate more engineering effort than proprietary alternatives, and propagate great new extensions faster
• Freedom of hosting options: consume as a cloud-hosted service today, change our mind and host in-house tomorrow, etc.
18. But Data.gov.uk, Data.gov, HealthData.gov, OpenGovPlatform, etc. all added Drupal to CKAN
http://flickr.com/photos/rocketqueen/1573565705/
20. With DKAN Distro, Drupal Itself Now Also Becoming a Public Sector Data Management System (“DMS”)
21. Why DKAN Instead of Drupal+CKAN?
• Manage content, data, permissions through same platform
• Single software stack to maintain
• Single site to design & theme
• Easy to extend with social features
• Transparent, well-governed upgrade path of Drupal
• Extensive Drupal ecosystem of civil service talent, consultants, hosting, support
22. Why Drupal?
• MATURE: >1 million sites (2% of all sites), 3,718 code commits/wk, 6,388 issue comments/wk
• IN-HOUSE SKILLS: 24% of .gov sites
• EXTENSIBLE: 18,489 modules, 1,512 themes, 21,009 contributors
• FISMA-Certified Cloud Hosting Options
• INTEGRATES easily w/ public websites; lots of de facto data is already published as content
34. Open Data is Just “Sharing Your Files”
• Datasets are collections of resources, with some descriptive metadata
• Resources are just files. They can be any kind of file, but often they are CSV files, spreadsheets, or some other kind of tabular data file.
• Organizations create datasets and upload resources.
• Data consumers can browse datasets and sometimes see visualizations of resources.
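The dataset/resource model described on this slide can be sketched in a few lines. This is an illustrative data model only; the class and field names are assumptions for the example, not DKAN's or CKAN's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A resource is just a file attached to a dataset."""
    name: str
    file_url: str
    file_type: str = "csv"  # often CSV, a spreadsheet, or other tabular data

@dataclass
class Dataset:
    """A dataset is a collection of resources plus descriptive metadata."""
    title: str
    description: str
    organization: str  # organizations create datasets and upload resources
    resources: list = field(default_factory=list)

    def add_resource(self, resource: Resource) -> None:
        self.resources.append(resource)

# Hypothetical example dataset, as an organization might publish it.
permits = Dataset("Building Permits", "Monthly permit filings", "City Hall")
permits.add_resource(Resource("2013 permits", "https://example.gov/permits-2013.csv"))
print(permits.title, len(permits.resources))
```

Data consumers would then browse `Dataset` records and render visualizations from each `Resource` file.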
35. DKAN
• Fully functional data portal housing datasets, Solr search, accessible via JSON and RDF; CSV or XML files uploaded through Drupal, stored in *SQL, visualized through Recline.js
• Seeks to replicate CKAN 2.0 functionality, design, standards, & API
• Reuses CKAN components wherever possible (e.g.: Recline.js)
• Built with support and input from the Open Knowledge Foundation
• Fully open project, with code on Drupal.org/project/DKAN
49. Ongoing Development
• Adding feedback on datasets, other social features
• Support for additional file types
• Adding DKAN_DataSet & DKAN_DataStore modules to other distros like OpenCivic
• Offering enterprise support & hosted OpenSaaS DKAN
50. “NuData DKAN” OpenSaaS Offering
• NuData = our DKAN as a turnkey, hosted, 24/7-supported software-as-a-service
• Governments like SaaS like Socrata because it’s quick, affordable, and puts no technology burden on existing staff
• Governments like open source (e.g.: CKAN) because they’re in control: no vendor lock-in, ability to customize, innovate
• OpenSaaS = the best of both worlds; SaaS but truly open: you can take your app and your data with you with minimal switching cost
• Drupal is exceptionally well positioned to enable OpenSaaS businesses
51. Drupal Open Data Policy Compliance Recipes
• Add /data.html & /data.json pages to an existing Drupal site with the new Open Data Module? (sandbox project)
• Add data management & publishing features to a Drupal site with the DKAN Data Set & DKAN Data Store modules
• Deploy a new Open Data Catalog / Portal with the DKAN Distribution, on your own, or as SaaS
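For the /data.json recipe above, the page is a machine-readable catalog of dataset entries in the Project Open Data metadata schema. A minimal sketch of generating one; the field values are illustrative, and a real catalog would carry additional required fields (e.g. `publisher` and `contactPoint`):

```python
import json

def make_catalog(datasets):
    """Serialize a list of dataset entries as a data.json catalog document."""
    return json.dumps(datasets, indent=2)

# Hypothetical dataset entry; values are made up for illustration.
entry = {
    "title": "Street Tree Inventory",
    "description": "Location and species of every street tree.",
    "keyword": ["trees", "parks"],
    "modified": "2013-05-01",
    "accessLevel": "public",
    "distribution": [
        {"downloadURL": "https://example.gov/trees.csv", "mediaType": "text/csv"}
    ],
}

print(make_catalog([entry]))
```

A Drupal module following this recipe would serve this document at `/data.json` and a human-readable rendering of the same entries at `/data.html`.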