dashDB Enterprise MPP is a new, fully managed cloud data warehouse service with massive scale and performance. Powered by IBM's network cluster architecture, dashDB MPP is an easy-to-use, self-service solution for building standalone data warehouses, data science data marts, hybrid warehousing, development and QA environments, and analytics for NoSQL. It is available through IBM Bluemix along with IBM's other Cloud Data Services, including Cloudant and SQL DB.
Many Oracle pros are looking to take their data warehousing strategy to the cloud, but have been waiting for a cloud solution that offers both compatibility and ease of use. Well, the wait is over. With IBM dashDB, you can leverage your existing Oracle (as well as SQL) application skills and get all the cost, scalability, and performance advantages of a fully managed data warehousing service in the IBM Cloud.
IBM® dashDB™ is a fast, fully managed cloud data warehouse that uses integrated analytics to deliver answers rapidly. dashDB's unique in-database analytics, R predictive modeling, and business intelligence tools free you to analyze your data and get precise insights more quickly. dashDB is simple to get up and running, with rapid provisioning in IBM Bluemix™. You can test the solution or start using dashDB at no charge for up to one gigabyte of data, then pay just US$50 per month for 20 gigabytes of data storage. Larger instance sizes with multi-terabyte capacity are available as your data grows and as your users require a dedicated environment. Massively Parallel Processing (MPP) enables even faster query speeds and larger-scale data sets.
Cloud and Software as a Service (SaaS) can make a huge impact on a business. Unfortunately, most start the evaluation of SaaS from an IT perspective and traditional data center concerns (e.g., on-premises costs, staffing, and savings). While savings are important, cloud is about agility and speed. For these reasons, line-of-business (LOB) leaders have been more interested in SaaS solutions. Learn how Cognos Business Intelligence on Cloud and IBM dashDB make it simple to get started with collaboration, reporting, and analytics.
John Park, Offering Manager for IBM Cloud Data Services, covers the touchstones for tomorrow’s information systems: data and integration. Stovepipe applications are no longer acceptable, and siloed data sources must evolve and open up to the full enterprise. All this in an environment where more is expected faster, and at a lower cost. If your GIS doesn’t watch out, it will be replaced by less capable alternatives that “fit better” into mainstream IT. But dashDB, a cloud-native offspring of DB2, can provide a bridge that keeps both sides happy. This session introduces this popular cloud data warehousing solution and illustrates how it works in concert with ArcGIS. You will learn about the built-in geospatial functions in dashDB and how you can easily use them to build applications rapidly. You’ll see an application that uses weather data and mobile application data to calculate insurance risk, detect potential fraud, and prevent damage.
Technical overview of the IBM SmartCloud offering (Private Cloud - IaaS) for IBM i customers: the reference architecture leverages IBM PowerVM virtualisation and IBM VMControl virtualisation and automation management capabilities. A self-service portal, VM provisioning, metering, and billing are provided by IBM SmartCloud Entry.
IBM Cloud Pak for Data is a single platform that unifies and simplifies the collection, organization, and analysis of data. Enterprises can turn data into insights through an integrated cloud-native architecture. IBM Cloud Pak for Data is extensible and easily customized to unique client data and AI landscapes through an integrated catalog of IBM, open source, and third-party microservice add-ons.
Snowflake + Power BI: Cloud Analytics for Everyone - Angel Abundez
Learn how Power BI and Snowflake can work together to bring a best-in-class data and analytics experience to your enterprise. You can combine Snowflake’s easy-to-use, robust, and scalable data platform with Power BI’s data visualization, built-in AI, and collaboration platform to create a data-driven culture for everyone.
New! Real-Time Data Replication to Snowflake - Precisely
Your business is adopting the Snowflake cloud data platform to rapidly deliver data insights and lower the costs of your data warehouse. But you have a problem – what happens when data changes on your mainframe and IBM i systems? How do you make sure Snowflake is always up-to-date and in sync with these systems of record?
If you can’t integrate changes occurring on your mainframe and IBM i systems to Snowflake, your business will miss the critical data it needs to drive real-time insights and decision making.
Join us to learn how the latest enhancements to Precisely Connect help your business meet its data-driven goals by sharing changes made on legacy, mainframe, and IBM i systems to Snowflake in real time.
During this webinar, you will learn more about:
- How to easily support data replication from mainframe and IBM i to Snowflake
- Connect’s enhanced data replication capabilities for cloud data platforms
- How customers are using Connect to support their cloud data platform strategies
RightScale Webinar: August 25, 2009 – In this webinar we introduced the first business intelligence solution stack running on the cloud. This cutting-edge solution has been created by business intelligence industry leaders Jaspersoft, Talend and Vertica with the leader in cloud computing management, RightScale.
This presentation covers IBM Cloud: what it is, why we use it, and its key features, along with the tools and services IBM Cloud provides and its pricing. It also shows the basics of getting started with the IBM Cloud dashboard.
IBM THINK 2019 - What? I Don't Need a Database to Do All That with SQL? - Torsten Steinbach
You don't necessarily have to set up a relational database, create tables, and load data in order to use a surprisingly rich set of SQL capabilities on your data in the cloud. IBM SQL Query lets you analyze terabytes of distributed data in heterogeneous formats with a complete ANSI SQL dialect in a completely serverless usage model, elegantly ETL data between formats and partitioning layouts as needed, and run complex time series transformations, analysis, and correlations with advanced built-in time series SQL algorithms that are differentiating in the entire industry. It also supports a complete PostGIS-compliant geospatial SQL function set. Come explore the stunningly advanced world of SQL without a database in IBM Cloud.
Connections in AWS with cloud native services - Martin Schmidt
HCL currently only publishes a guide for installing HCL Connections Component Pack in a private reference installation of Kubernetes. One should not underestimate the effort and knowledge required to provide and operate this basic infrastructure. This presentation shows which AWS services can be used to run HCL Connections and its Component Pack, drawing on AWS's managed expertise. The services used include, among others, EKS, EFS, Elasticsearch, CloudFormation, and RDS.
Democratizing AI/ML with GCP - Abishay Rao (Google) at GoDataFest 2019 - GoDataDriven
Every company today is talking about AI/ML, but when most companies talk about AI/ML in their transformation journey, you hear terms like Proof of Concept, Feasibility Study, Pilot, and A/B Test. We are at the peak of AI's hype, yet only 12% of enterprises have deployed AI in production. Google aims to make big data processing available to everyone, and the possibilities of BigQuery ML are endless: marketing, retail, industrial and IoT, media, gaming, and so forth.
Smart application on Azure at Vattenfall - Rens Weijers & Peter van 't Hof - GoDataDriven
During GoDataFest 2019, Rens Weijers, manager data & strategy, and Peter van 't Hof, data engineer, share the story of how Vattenfall develops smart applications on Azure. Vattenfall has the ambition to transition to fossil-free living within one generation. But what about decentral energy solutions in the Customers & Solutions business unit? Data is key to helping customers reduce their CO2 footprint. Azure enables Vattenfall to be personal and relevant towards customers.
For those contemplating re-architecting, or building greenfield data lakes/data hubs/data warehouses in a cloud environment, talk to our Altis AWS Practice Lead - Guillaume Jaudouin - about why you should be considering the "tour de force" combination of AWS and Snowflake.
Emerging Trends in Hybrid-Cloud & Multi-Cloud Strategies - Chaitanya Atreya
As Cloud Computing rapidly evolves, newer deployment strategies such as Hybrid-Cloud, Multi-Cloud, and On-Prem Cloud are emerging. More and more enterprise solution providers are offering support for a combination of these deployment targets. It is imperative that larger organizations have a clear Hybrid-Cloud and Multi-Cloud strategy to avoid cloud lock-in and to de-risk business decisions.
What does each of these terms mean? What is the scope of each, and what overlap is there, if any? We will discuss the emerging best practices across these interdisciplinary trends, especially in the context of Modern Data and Analytics Platforms and Enterprise Self-Service.
This is the presentation I gave at the Hadoop User Group Ireland meetup in Dublin. It covers the main ideas of MPP, Hadoop, and distributed systems in general, and also how to choose the best option for you.
A presentation from the Surfing The Cloud Tsunami 2016 event in Padua: riding the wave of digital transformation, and the importance of making the cloud a lever for your business.
Top 10 ways BigInsights BigIntegrate and BigQuality will improve your life - IBM Analytics
BigInsights BigIntegrate and BigQuality offer a cost-effective opportunity to fully leverage the scale and promise of Hadoop. Here are 10 ways BigIntegrate and BigQuality are making it easier for organizations to harness the power of their entire data ecosystems. Learn more at ibm.co/datagovernance
2011.06.24. - Cloud Services Solution Provider - Forum des Partenaires du Clo... - Club Alliances
Deck used at the IBM Cloud Partners Forum - "Cloud Services Solution Provider" workshop - Loic Simon [Club Cloud des Partenaires, Club Alliances, Cloud Channel Development]
The webcast will cover the benefits of assessment, diagnostics, analysis and measurement in human capital management. There will be discussion of employee engagement, data driven decision making, implementation and ROI.
The content will start with why Text Analytics needs a special session on convincing your boss, followed by a role play summarizing current mistakes, a sample elevator pitch for your boss, and a proposed execution plan. The content is tailored for mid- to senior-level managers trying to convince leaders/executives/heads. It doesn’t provide any technical details - methodologies, tools, vendors, or hardware investments.
This was presented at Text Analytics West Summit 2014 at San Francisco. Questions? Reach out at Ramkumar Ravichandran @ Linkedin.
Three Steps to a Hard Dollar ROI from Talent Management - Infor HCM
Organizations need HR, but often regard it as a tactical necessity rather than a strategic essential. In this webinar, Infor HCM’s Michael Brandt explains how to build on HR’s daily transactional activity to create great strategic impact with a solid dollar value. He’ll examine how great value depends on doing the daily work of HR well, and then sharing the results for wider, deeper impact. Using real-life examples, Michael will explore:
• The crucial importance of data
• Getting hard dollar impact from soft cultural change
• Why systems usability and integration are key to success
• How HR can build value across the employee life cycle
• The technology and systems you’ll need
IBM InfoSphere Data Replication for Big Data - IBM Analytics
How do you balance the need for business agility against the real-time availability of essential big data insights – without impacting your mission critical systems? Review this slideshare and learn how InfoSphere Data Replication can help enable your big data environment.
2012.02.09 - Leveraging the IBM Cloud Partner Ecosystem - Cloud Top Gun - Loi... - Club Cloud des Partenaires
This is presentation material I prepared about IBM Cloud Partner Ecosystem.
The target audience for the 45-minute session it supports is mostly IBM sellers from SWG [Software] and STG [Hardware] attending the Cloud Top Gun education.
A complete cognitive approach comprises three components: a method, an ecosystem, and a platform. In this session we will see how to realize this approach with the help of Watson Data Platform, which helps data scientists and business analytics experts put their data to work from a cognitive perspective. In this way you can drive business growth and change. We will focus on analyzing data from social media to assess how the administration is perceived by students, parents, the press, bloggers, and others.
At the heart of the solution are a set of services designed around business roles (developers, data scientists, data engineers, communication/marketing) and the learning capability inherent in cognitive technology, which complete the architecture and help "compose" new business solutions.
Learn about IBM's Hadoop offering called BigInsights. We will look at the new features in version 4 (including a discussion on the Open Data Platform), review a couple of customer examples, talk about the overall offering and differentiators, and then provide a brief demonstration on how to get started quickly by creating a new cloud instance, uploading data, and generating a visualization using the built-in spreadsheet tooling called BigSheets.
Making the Most of Data in Multiple Data Sources (with Virtual Data Lakes) - DataWorks Summit
Most organizations today implement different data stores to support business operations. As a result, data ends up stored across a multitude of often heterogeneous systems, like RDBMS, NoSQL, data warehouses, data marts, Hadoop, etc., with limited interaction and/or interoperability between them. The end result is often a vast ecosystem of data stores with different "temperature" data, some level of duplication and no effective way of bringing it all together for business analytics. With such disparate data, how can an organization exploit the wealth of information? This opens up the need for proven techniques to quickly and easily deliver the data to the people who need it. In this session, you'll see how to modernize your enterprise by making data accessible with enterprise capabilities like querying using SQL, granular security for data access, and maintaining high query performance and high concurrency.
The Future of Data Warehousing, Data Science and Machine Learning - ModusOptimum
Watch the on-demand recording here:
https://event.on24.com/wcc/r/1632072/803744C924E8BFD688BD117C6B4B949B
Evolution of Big Data and the Role of Analytics | Hybrid Data Management
IBM, Driving the future Hybrid Data Warehouse with IBM Integrated Analytics System.
Since GeoJSON is a standard for storing geographic data in JSON format, it is a best practice to adhere to this format when storing geo-coordinates in Cloudant and CouchDB.
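As a sketch of this practice, here is what a GeoJSON Feature might look like as a Cloudant/CouchDB document. The coordinates and properties are invented for illustration; the "type", "geometry", and "properties" keys follow the GeoJSON structure described above.

```python
import json

# A minimal GeoJSON Feature, as it might be stored in a Cloudant/CouchDB
# document. The location and properties are illustrative only.
doc = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        # GeoJSON coordinates are ordered [longitude, latitude]
        "coordinates": [-71.0589, 42.3601],
    },
    "properties": {"name": "Boston"},
}

print(json.dumps(doc, indent=2))
```

Storing the coordinates in this shape means geospatial indexers that expect GeoJSON can consume the document without any transformation.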
The concept of data movement lies at the heart of Apache CouchDB. CouchDB’s replication protocol lets developers synchronize copies of their data to remote CouchDB-based systems – including Cloudant – at the push of a button. Replication jobs can also run continuously, and in both directions.
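A minimal sketch of what triggering such a replication looks like: the JSON body below is the kind of document you would write to CouchDB's `_replicator` database (or POST to the `_replicate` endpoint). The URLs and database names are placeholders.

```python
import json

# Replication request body for CouchDB. "continuous": True keeps the
# target in sync as new changes arrive; "create_target" makes the target
# database if it does not already exist. Hosts and db names are examples.
replication_doc = {
    "source": "http://localhost:5984/orders",
    "target": "https://example.cloudant.com/orders",
    "continuous": True,
    "create_target": True,
}

# This JSON would be POSTed to http://localhost:5984/_replicator
print(json.dumps(replication_doc))
```

Running a second job with source and target swapped gives the bidirectional synchronization mentioned above.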
Mango allows users to declaratively define and query Apache CouchDB indexes. Mango leverages Lucene not only to perform text search, but also to enable ad-hoc querying capabilities.
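A sketch of a declarative Mango query, assuming hypothetical document fields `type`, `title`, and `year`; this is the JSON you would POST to a database's `/_find` endpoint.

```python
import json

# A Mango query: a declarative selector plus projection, sort and limit.
# "$gte" is one of Mango's standard comparison operators.
query = {
    "selector": {
        "type": "movie",
        "year": {"$gte": 2010},
    },
    "fields": ["_id", "title", "year"],
    "sort": [{"year": "asc"}],
    "limit": 10,
}
print(json.dumps(query))
```

The database answers with the matching documents, projected down to the requested fields.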
CouchDB is a document database. It stores JSON objects with a few special field names. The _id field represents a unique identifier for a document. The _rev field is the revision marker for a document. The _rev field is used for Multi-Version Concurrency Control, a form of optimistic concurrency.
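To illustrate how `_rev`-based optimistic concurrency behaves, here is a toy in-memory store that mimics the rule: an update is accepted only when the caller supplies the document's current revision. This is a simulation for illustration, not CouchDB's actual implementation (CouchDB's revision markers are `N-<hash>` strings).

```python
class ConflictError(Exception):
    """Stands in for CouchDB's HTTP 409 Conflict response."""

class TinyStore:
    def __init__(self):
        self._docs = {}

    def put(self, doc):
        existing = self._docs.get(doc["_id"])
        # Reject writes that don't carry the current revision.
        if existing is not None and existing["_rev"] != doc.get("_rev"):
            raise ConflictError("409 Conflict: stale _rev")
        # Bump the revision generation, like CouchDB's "N-" prefix.
        gen = int(existing["_rev"].split("-")[0]) + 1 if existing else 1
        doc = dict(doc, _rev=f"{gen}-demo")
        self._docs[doc["_id"]] = doc
        return doc

store = TinyStore()
v1 = store.put({"_id": "user:1", "name": "Ada"})
v2 = store.put({"_id": "user:1", "_rev": v1["_rev"], "name": "Ada L."})
print(v1["_rev"], v2["_rev"])  # 1-demo 2-demo
```

A second writer still holding `v1["_rev"]` would now get a conflict, which is exactly the behavior that lets CouchDB avoid locking.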
Apache CouchDB is accessed through an HTTP API. HTTP Basic authentication is a simple way to authenticate with an HTTP server. Other approaches, such as cookies and OAuth, are often used as well.
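As a minimal sketch, HTTP Basic authentication is just a base64-encoded `user:password` pair in the `Authorization` header; the credentials below are placeholders.

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value CouchDB's HTTP API accepts."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Sent with any request, e.g.:
#   GET /mydb/_all_docs HTTP/1.1
#   Authorization: Basic <token>
print(basic_auth_header("admin", "secret"))  # Basic YWRtaW46c2VjcmV0
```

Because the pair is only encoded, not encrypted, Basic auth should always travel over HTTPS.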
For more than 10 years, developers have relied on Apache® CouchDB™ - a versatile and highly scalable open source database - to build apps for web, mobile and IoT platforms.
The release of CouchDB 2.0 in 2016 has generated even more interest in the freely available JSON database, which now includes clustering capabilities contributed from IBM Cloudant for high availability and performance.
In the world of NoSQL, each database has its own strengths and weaknesses. Understanding which open source database is "the right tool for the job" is half the battle if you want to start building better applications quickly. IBM developer advocate Glynn Bird explores practical examples of how two popular NoSQL databases - the Cloudant JSON document store and the Redis in-memory key-value store - can be used together to create performant and scalable Web applications. It also includes real world use cases you can try today, for free, using the IBM Cloud Data Services suite of fully managed NoSQL databases-as-a-service.
IBM Cloudant describes the geospatial tools used in its database-as-a-service (DBaaS) offering. Based upon Apache CouchDB, the geospatial extensions used by IBM Cloudant rely on a number of well-known open source libraries to provide geospatial indexing, query, and projection support to Apache CouchDB. Discussion topics include:
- Overview of the architecture & tools
- Best practices for building geospatial apps with NoSQL doc stores
- Use cases for leveraging geospatial capabilities of a NoSQL doc store
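As a sketch of what querying such a geospatial index can look like, the snippet below builds a radius query URL in the style of Cloudant's geospatial query API. The account, database, design document, and index names are all placeholders.

```python
from urllib.parse import urlencode

# Parameters for a "find everything within 2 km of this point" query.
# format=geojson asks for results as a GeoJSON FeatureCollection.
params = {
    "lat": 42.3601,
    "lon": -71.0589,
    "radius": 2000,      # metres
    "format": "geojson",
}

# ACCOUNT, the database (crimes), design doc (geodd) and index (geoidx)
# are hypothetical names for illustration.
url = (
    "https://ACCOUNT.cloudant.com/crimes/_design/geodd/_geo/geoidx?"
    + urlencode(params)
)
print(url)
```

An authenticated GET against a URL of this shape would return the matching documents' geometries and ids.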
IBM Cloudant is a NoSQL database-as-a-service. Discover how you can outsource the data layer of your mobile or web application to Cloudant to provide high availability, scalability, and tools to take you to the next level.
SQL-based databases have been around for decades and they power a wide range of applications. So what exactly do NoSQL databases bring to the table? In this webcast, you'll find out how NoSQL can liberate your development cycle, allow your application to scale and improve your system's uptime.
Our March 2, 2016 event featured Billy Beane, Executive Vice President of Baseball Operations at the Oakland A's, and Derek Schoettle, GM of Analytics Platform Services at IBM. Billy and Derek shared their experiences of how professional sports teams and businesses alike are gaining hidden insights and competitive advantages by using the latest data discovery techniques and platforms.
Find out how NoSQL can help your application with practical examples and use-cases from our Cloud Data Services Developer Advocate Glynn Bird. This webinar won't dwell on the science behind the database, but will walk you through real-life use-cases for NoSQL technologies that you can start using today.
Webinar: https://youtu.be/M_Jqw
Learn what you need to consider when moving from the world of relational databases to a NoSQL document store.
Hear from Developer Advocate Glynn Bird as he explains the key differences between relational databases and JSON document stores like Cloudant, as well as how to dodge the pitfalls of migrating from a relational database to NoSQL.
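One of the recurring differences when moving from relational to a document store is denormalization: rows joined across tables become a single self-contained JSON document. The sketch below uses invented table and field names to illustrate the shape change.

```python
import json

# Hypothetical rows as they might come from ORDERS and ORDER_LINES tables.
order_row = {"order_id": 42, "customer": "Acme"}
line_rows = [
    {"order_id": 42, "sku": "A1", "qty": 2},
    {"order_id": 42, "sku": "B7", "qty": 1},
]

# The document-store shape: child rows become an embedded array, and the
# primary key becomes a readable _id, so no join is needed at read time.
doc = {
    "_id": f"order:{order_row['order_id']}",
    "type": "order",
    "customer": order_row["customer"],
    "lines": [{"sku": r["sku"], "qty": r["qty"]} for r in line_rows],
}
print(json.dumps(doc))
```

The trade-off is that data repeated across documents must now be kept consistent by the application, which is one of the pitfalls the talk covers.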
Presented by David Taieb, Architect, IBM Cloud Data Services
Along with Spark Streaming, Spark SQL and GraphX, MLLib is one of the four key architectural components of Spark. It provides easy-to-use (even for beginners), powerful Machine Learning APIs that are designed to work in parallel using Spark RDDs. In this session, we’ll introduce the different algorithms available in MLLib, e.g. supervised learning with classification (binary and multi class) and regression but also unsupervised learning with clustering (K-means) and recommendation systems. We’ll conclude the presentation with a deep dive on a sample machine learning application built with Spark MLLib that predicts whether a scheduled flight will be delayed or not. This application trains a model using data from real flight information. The labeled flight data is combined with weather data from the “Insight for Weather” service available on IBM Bluemix Cloud Platform to form the training, test and blind data. Even if you are not a black belt in machine learning, you will learn in this session how to leverage powerful Machine Learning algorithms available in Spark to build interesting predictive and prescriptive applications.
About the Speaker: For the last 4 years, David has been the lead architect for the Watson Core UI & Tooling team based in Littleton, Massachusetts. During that time, he led the design and development of a Unified Tooling Platform to support all the Watson Tools including accuracy analysis, test experiments, corpus ingestion, and training data generation. Before that, he was the lead architect for the Domino Server OSGi team responsible for integrating the eXpeditor J2EE Web Container in Domino and building first class APIs for the developer community. He started with IBM in 1996, working on various globalization technologies and products including Domino Global Workbench (used to develop multilingual Notes/Domino NSF applications) and a multilingual Content Management system for the Websphere Application Server. David enjoys sharing his experience by speaking at conferences. You’ll find him at various events like the Unicode conference, Eclipsecon, and Lotusphere. He’s also passionate about building tools that help improve developer productivity and overall experience.
Mobile web apps shouldn't stop working when there's no network connection. Offline-enabled apps built using PouchDB can provide a better, faster user experience while potentially reducing battery and bandwidth usage.
Hear from Developer Advocate Glynn Bird to find out how to use the HTML5 Offline Application Cache, PouchDB, IBM Cloudant and Cordova/PhoneGap to develop fully-featured and cross-platform native apps and responsive mobile web apps that work just as well offline as they do online.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Hi, and welcome everyone to another dashDB webcast from IBM Cloud Data Services. My name is Doug Flora, and today I’ll be joined by Sam Lightstone, distinguished engineer and chief architect for IBM Data Warehousing. Sam will be providing an in-depth overview of dashDB Enterprise MPP, a new offering from IBM that brings even more scale and performance to the dashDB cloud data warehousing service through a network cluster architecture.
But before diving into the nitty-gritty of dashDB MPP, I’m going to briefly introduce our new viewers to IBM Cloud Data Services and the dashDB cloud data warehouse service.
Launched in 2014, IBM Cloud Data Services provides a comprehensive set of rich, integrated cloud data services covering content, data and analytics that let you innovate faster, reduce your risk of failure and reduce costs. With CDS, all of IBM’s renowned, trusted technical leadership is now available to you via the cloud, instantly provisioned and managed for you, so you can stay focused on creating new systems that wow your customers. Our fully managed services run on the IBM SoftLayer bare metal cloud platform, with industry-leading performance that gives you faster throughput, lower latency and more consistency. And we offer the flexibility to deploy your cloud data on public, private or hybrid cloud environments in order to maximize cost efficiency, control and performance. All IBM Cloud Data Services are available on IBM Bluemix, and you can get started with them by just heading over to IBM.com/Bluemix and signing up for a free account.
IBM Cloud Data Services include IBM Cloudant, the NoSQL database as a service optimized for handling workloads for web and mobile apps; IBM SQL DB, a relational database service that handles web and transactional workloads; and IBM BigInsights on Cloud, an enterprise Hadoop as a service that simplifies the adoption and scaling of Hadoop so you can build big data stores to analyze relational and non-relational data.
And, last but not least, IBM dashDB is a fully managed cloud data warehouse service designed for performance and scale, and compatible with a wide range of business intelligence tools and analytics. With dashDB, you have a 24/7 fully managed service that enables a wide variety of data warehousing operations for today’s builders – including developers, database administrators, solutions architects, data scientists and many more.
dashDB lets you extend your on-premises data warehouse to the cloud, so you can bring the cloud’s scale and agility to the simplicity and performance of a data warehouse appliance.
dashDB also lets you build entirely new, self-service, on-demand data warehousing infrastructure in the cloud, optimal for ingesting born-on-the-cloud data like web and mobile data.
dashDB is also a staging ground for analytics, and can ingest both structured and unstructured data from a wide variety of sources, on-premises and in the cloud.
And like all Cloud Data Services, dashDB is available through IBM Bluemix, starting for free. IBM manages the setup, configuration, tuning and disaster recovery operations, so you can get straight to building your newest solutions, applications and architectures and bringing them to market, without having to invest in costly new on-premises data warehousing infrastructure.
The keys to dashDB’s performance lie in its secret sauce – a combination of powerful technologies including the IBM BLU Acceleration dynamic in-memory column store, which minimizes input/output operations and achieves an order-of-magnitude speedup compared to conventional row-store databases. dashDB is also directly integrated with Cloudant, making it easy to ingest JSON documents and incorporate unstructured data into your analytics and business intelligence operations; and it embeds IBM Netezza in-database analytics, allowing you to run analytics natively in the database, where the data resides, and gain huge efficiencies. Underneath all of this is IBM SoftLayer bare metal infrastructure, providing a high degree of performance, control and flexibility.
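To make the column-store idea concrete: a column store lays each column out contiguously, so a query that aggregates one column avoids reading the rest of each row. Here is a minimal pure-Python sketch of that layout difference — illustrative only, not dashDB’s BLU Acceleration internals; the table and values are made up:

```python
# Illustrative sketch only -- not dashDB's BLU Acceleration internals.
# A column store keeps each column contiguous, so an aggregate over one
# column touches a fraction of the data a row store would read.

rows = [  # row-store layout: one tuple per record
    (1, "east", 100.0),
    (2, "west", 250.0),
    (3, "east", 175.0),
]

# Column-store layout: one list per column.
columns = {
    "id":     [r[0] for r in rows],
    "region": [r[1] for r in rows],
    "amount": [r[2] for r in rows],
}

# Row store: every field of every row is visited to sum one column.
row_total = sum(r[2] for r in rows)

# Column store: only the "amount" column is scanned.
col_total = sum(columns["amount"])

assert row_total == col_total == 525.0
```

The savings grow with the number of columns: a 50-column table queried on one column reads roughly 2% of the data in columnar form.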
What’s more, because no cloud services should exist in isolation, dashDB is designed with the greater business intelligence ecosystem in mind, and is highly compatible with the technologies and toolsets you already know.
That means you can easily integrate dashDB with your existing data management solutions, ranging from on-premises Oracle systems to IBM PureData for Analytics appliances to Cloudant NoSQL as a service.
And you can seamlessly connect to analytics toolkits including Aginity, Watson Analytics, Esri and R, to realize more value from your data.
And finally, you can integrate with a wide variety of business intelligence tools, including IBM solutions like Cognos and third-party offerings like Looker and Tableau.
Let’s take a quick look at why these integrations are so essential to getting the most out of dashDB. IBM’s own developer advocates recently built an open-source application, called the Simple Data Pipe, which enables you to easily move Salesforce CRM data to dashDB, and then build reports on that data using Looker. This solution brings together today’s leading cloud-based services to provide a simple way to get a daily readout of Salesforce data – and it’s all made possible because IBM experts are constantly working to build new integrations that help accelerate your business.
The Simple Data Pipe is available for free on GitHub, and we’ll be announcing more about it soon. In the meantime, if you’d like to learn more, head on over to the URL displayed here and connect with our IBM Cloud Data Services developer advocates.
Of course, you don’t have to take our word – there are many innovative companies that are already redefining their businesses using dashDB and other solutions in the Cloud Data Services portfolio.
These companies are finding that dashDB’s ease of use, performance, great support and advanced analytics capabilities are reducing the time and resources they spend building and managing data warehousing infrastructure, leaving them with more bandwidth to build the systems that will give them a competitive edge in today’s data-driven world.
Now that you have a feel for the basics of dashDB, I’m going to hand the mic to Sam Lightstone. Sam is going to take a deeper dive into the newly available dashDB Enterprise MPP offering. Welcome, Sam.
One dashDB MPP customer maintains customer data for sports and entertainment venues in a large number of small on-premises SQL Server data marts, and they leverage this data for analytics to help improve operations. The company is now incrementally moving their data marts to the cloud, but they need to scale beyond the confines of a single server in short order.
The dashDB MPP service is enabling them to seamlessly migrate their data, and to accelerate their analytics and reporting, without having to manage the system themselves. Long term, the customer plans to consolidate their data on dashDB and move away from their legacy on-premises environment entirely.
dashDB’s scalability and performance mean you can use it as a standalone, fully managed cloud data warehousing service, regardless of your size and whether you want to build a data mart, a development environment, or an enterprise data warehouse.
If you have a powerful on-premises data warehouse that you’re using for critical workloads, you may not want your developers testing new code there. With dashDB, your developers can experiment, build new application code and test it in the cloud without disrupting on-premises operations. Because dashDB is compatible with Oracle, DB2, Netezza and PostgreSQL, you can have confidence that code developed and tested on dashDB will run well on premises, too. For data sets up to about 50 GB, you can even use dashDB for development and QA free of charge.
With dashDB you can build your hybrid information management strategy and extend on-premises data warehouse environments to the cloud. Since you pay for capacity as you need it, the platform is elastic and can grow with your business needs.
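Because that compatibility rests on standard SQL, one practical habit for dev and QA work is to keep queries to portable ANSI SQL. Below is a minimal sketch of that idea, using Python’s built-in SQLite in place of a live dashDB connection; the `sales` table and its data are hypothetical:

```python
import sqlite3

# SQLite stands in for a live dashDB connection here; the "sales"
# table and its rows are hypothetical example data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("west", 250.0), ("east", 175.0)],
)

# Plain ANSI SQL -- an aggregate with GROUP BY -- runs unchanged here
# and would also run on dashDB, DB2, Oracle, or PostgreSQL.
result = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(result)  # → [('east', 275.0), ('west', 250.0)]
```

The same query text would run unchanged against dashDB through its own drivers; only the connection setup differs per engine.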
You can easily synchronize JSON documents from IBM Cloudant into structured data within dashDB, providing a way to bring BI and analytics to your unstructured data.
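The Cloudant-to-dashDB sync essentially maps JSON documents onto relational rows. Here is a minimal pure-Python sketch of that flattening step; the document shape and the underscore column-naming convention are hypothetical illustrations, not the actual sync implementation:

```python
import json

# Hypothetical Cloudant-style JSON documents.
docs = [
    '{"_id": "d1", "user": {"name": "Ana", "city": "Lima"}, "score": 7}',
    '{"_id": "d2", "user": {"name": "Bo", "city": "Oslo"}, "score": 9}',
]

def flatten(doc: dict, prefix: str = "") -> dict:
    """Flatten nested JSON into column-name -> value pairs."""
    row = {}
    for key, value in doc.items():
        col = f"{prefix}{key}"
        if isinstance(value, dict):
            # Nested objects become prefixed columns, e.g. user_name.
            row.update(flatten(value, col + "_"))
        else:
            row[col] = value
    return row

rows = [flatten(json.loads(d)) for d in docs]
print(rows[0])
# → {'_id': 'd1', 'user_name': 'Ana', 'user_city': 'Lima', 'score': 7}
```

Each flattened dictionary then corresponds to one row in a structured dashDB table, ready for SQL-based BI tools.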
The dashDB service maintains a robust set of predictive analytic algorithms for data scientists and analysts, and includes a built-in R runtime and RStudio. This makes dashDB an optimal data warehouse to support data analysis and statistical software development.
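As a small taste of the kind of predictive model this supports, here is an ordinary least-squares line fit. In practice you would run this in-database via dashDB’s R support; this pure-Python version is only a language-neutral illustration, not dashDB’s analytics API:

```python
# Ordinary least squares for y = slope*x + intercept -- the simplest
# of the predictive models dashDB's in-database analytics and embedded
# R runtime can fit. Illustration only, not dashDB's actual API.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The sample data follow y = 2x + 1 exactly, so the fit recovers it.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # → 2.0 1.0
```

The value of running such models in-database is that the data never leaves the warehouse; only the fitted coefficients come back.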
An IBM internal data warehouse benchmark* shows that a dashDB MPP instance (3-server configuration) yields a performance speedup of 10 times when compared to a dashDB Enterprise single-server instance (4 TB server configuration; sizes are approximate pre-load uncompressed data) in a throughput run of 60 users.
This stunning speedup is achieved by dashDB MPP scale-out technology; the newest generation of the Intel E5 v3 chipset; 2 times more cores and 3 times more memory; a better I/O subsystem with higher IOPS; and more.
* Internal benchmark: composed of 30 deep analytic queries that a sales report analyst would run to generate reports or dashboard analyses, with 60 concurrent user streams, each running a random order of these 30 queries.
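The shape of that benchmark workload — 60 streams, each running the same 30 queries in its own random order — is easy to sketch. This is a hypothetical simulation of the workload shape, not IBM’s actual benchmark harness:

```python
import random

# Hypothetical sketch of the benchmark's workload shape: 60 concurrent
# user streams, each running the same 30 analytic queries in a random
# order. Not IBM's actual benchmark harness.

N_STREAMS, N_QUERIES = 60, 30
rng = random.Random(42)  # fixed seed so the sketch is reproducible

queries = [f"Q{i:02d}" for i in range(1, N_QUERIES + 1)]

streams = []
for _ in range(N_STREAMS):
    order = queries[:]   # each stream runs every query...
    rng.shuffle(order)   # ...in its own random order
    streams.append(order)

# Every stream runs all 30 queries exactly once.
assert all(sorted(s) == queries for s in streams)
print(len(streams), len(streams[0]))  # → 60 30
```

Randomizing the order per stream prevents all 60 users from hitting the same query (and the same cached plan or data) at the same moment, which is what makes it a throughput test rather than a single-query race.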
Thanks for joining us today, and please visit us at dashdb.com, where you’ll be just a few clicks away from trying dashDB free of charge. And make sure to follow us on Twitter at getdashDB to get the latest updates and resources on dashDB – see you next time.