Using Bluemix and dashDB for Twitter Analysis
This document discusses using IBM's Bluemix and dashDB services for Twitter analysis. It provides an overview of the IBM Insights for Twitter service in Bluemix, which allows querying and searching over enriched Twitter data stored in dashDB. Examples are given of queries that can be performed, such as searching for tweets about an upcoming movie within a time frame or searching for tweets with positive sentiment about a product. The document also discusses loading Twitter data into dashDB using a Bluemix app and performing predictive analytics on the data using built-in R and Python capabilities in dashDB.
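The kind of query the abstract describes — positive-sentiment tweets about a topic within a time frame — can be sketched with plain SQL. This is a minimal, hypothetical sketch: sqlite3 stands in for dashDB, and the column names are made up, loosely modeled on the enriched Twitter schema rather than taken from it.

```python
import sqlite3

# sqlite3 stands in for dashDB; column names are illustrative, not the
# actual enriched Twitter schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TWEETS (
        MESSAGE_BODY      TEXT,
        MESSAGE_SENTIMENT TEXT,   -- e.g. 'POSITIVE' / 'NEGATIVE'
        MESSAGE_POSTED    TEXT    -- ISO-8601 timestamp
    )
""")
conn.executemany(
    "INSERT INTO TWEETS VALUES (?, ?, ?)",
    [
        ("Loving the new trailer!", "POSITIVE", "2016-05-01T10:00:00"),
        ("Not impressed so far.",   "NEGATIVE", "2016-05-02T12:30:00"),
        ("Best movie of the year?", "POSITIVE", "2016-07-15T09:15:00"),
    ],
)

# Positive tweets within a time frame -- the query shape described above.
rows = conn.execute("""
    SELECT MESSAGE_BODY
    FROM TWEETS
    WHERE MESSAGE_SENTIMENT = 'POSITIVE'
      AND MESSAGE_POSTED BETWEEN '2016-05-01' AND '2016-06-01'
""").fetchall()
print(rows)  # [('Loving the new trailer!',)]
```

The point of the enrichment step is exactly this: once sentiment is a column, time-boxed sentiment questions become ordinary SQL.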
Analyze Twitter data completely in Bluemix. Collect data, add sentiment, copy to in-memory database, analyze with R or WatsonAnalytics. All in the cloud.
Leveraging IBM Bluemix for Conversation and Personality Insights - Handly Cameron
An overview of the IBM Bluemix service and how to get started leveraging the Watson APIs for Conversations and Personality Insights. Presented to the Atlanta Collaboration Users Group (ATLUG) for their virtual meeting on August 11, 2016.
(BDT302) Real-World Smart Applications With Amazon Machine Learning - Amazon Web Services
Have you always wanted to add predictive capabilities to your application, but haven’t been able to find the time or the right technology to get started? In this session, learn how an end-to-end smart application can be built in the AWS cloud. We demonstrate how to use Amazon Machine Learning (Amazon ML) to create machine learning models, deploy them to production, and obtain predictions in real-time. We then demonstrate how to build a complete smart application using Amazon ML, Amazon Kinesis, and AWS Lambda. We walk you through the process flow and architecture, demonstrate outcomes, and then dive into the code for implementation. In this session, you learn how to use Amazon ML as well as how to integrate Amazon ML into your applications to take advantage of predictive analysis in the cloud.
Vortrag "Real-World Smart Applications with Amazon Machine Learning" von Alex Ingerman beim AWS Machine Learning Web Day. Alle Videos und Präsentationen finden Sie hier: http://amzn.to/1XP3dz9
Data Transformation Patterns in AWS - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to accelerate common data transformations from a variety of data
- Learn how to efficiently orchestrate transformation jobs
- Learn best practices and methodologies in data preparation for analytics
AWS January 2016 Webinar Series - Building Smart Applications with Amazon Mac... - Amazon Web Services
In this presentation, learn how an end-to-end smart application can be built in the AWS cloud. We will demonstrate how to use Amazon Machine Learning (Amazon ML) to create machine learning models, deploy them to production, and obtain predictions in real-time. We will then demonstrate how to build a complete smart application using Amazon ML, Amazon Kinesis, and AWS Lambda. We will walk you through the process flow and architecture, demonstrate outcomes, and then dive into the code for implementation. In this session, you will learn how to use Amazon ML as well as how to integrate Amazon ML into your applications to take advantage of predictive analysis in the cloud.
Learning Objectives:
Learn about AWS services needed to build smart applications on AWS, e.g. Amazon Kinesis, AWS Lambda, Amazon Mechanical Turk, Amazon SNS
Learn how to deploy such an implementation
Get the code on GitHub for you to use immediately
Who Should Attend:
Developers, Engineers, Solutions Architects
This session is recommended for anyone interested in building real-time streaming applications using AWS. In this session, you will get a deep understanding of how data can be ingested by Amazon Kinesis and made available for real-time analysis and processing. We’ll also show how you can leverage the Kinesis client to make your applications highly available and fault tolerant. We’ll explore various design considerations in implementing real-time solutions and explain key concepts against the backdrop of an actual use case. Finally, we’ll situate stream processing in the broader context of your big data applications.
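The per-key ordering the session relies on comes from how Kinesis routes records: each record's partition key is MD5-hashed into a 128-bit value, and the shard whose hash-key range contains it receives the record. A minimal sketch of that routing, assuming evenly split hash ranges (real streams can have uneven ranges after resharding):

```python
import hashlib

def shard_for(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard the way Kinesis does:
    MD5 the key into a 128-bit integer, then find the shard whose
    hash-key range contains it (evenly split ranges assumed here)."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * num_shards // 2 ** 128

# The same partition key always lands on the same shard, which is what
# preserves per-key ordering for consumers.
assert shard_for("user-42", 4) == shard_for("user-42", 4)
for key in ("user-1", "user-2", "user-3"):
    print(key, "-> shard", shard_for(key, 4))
```

Choosing a high-cardinality partition key (user ID, device ID) spreads load across shards while keeping each key's records ordered.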
Azure Enterprise Data Analyst (DP-500) Exam Dumps 2023.pdf - SkillCertProExams
• For a full set of 340+ questions, go to
https://skillcertpro.com/product/azure-enterprise-data-analyst-dp-500-exam-questions/
• SkillCertPro offers detailed explanations for each question, which help you understand the concepts better.
• It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro assures a 100% pass guarantee on the first attempt.
AWS November Webinar Series - Advanced Analytics with Amazon Redshift and the... - Amazon Web Services
Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology, and Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. The combination of the two can power advanced analytics, not only for what has happened in the past, but also to make intelligent predictions about the future. Please join this webinar to learn how to get the most value from your data for your data-driven business.
Learning Objectives:
How to scale your Redshift queries with user-defined functions (UDFs)
How to apply machine learning to historical data in Amazon Redshift
How to visualize your data with Amazon QuickSight
A reference architecture for advanced analytics
Who Should Attend:
Application developers looking to add UDFs or predictive analytics to their applications; database administrators who need to meet the demands of data-driven organizations; and decision makers looking to derive more insight from their data
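Redshift scalar UDFs are written in Python, so they can be developed and tested locally before being registered. A sketch of what such a UDF might look like — the function name and logic are illustrative, not taken from the webinar:

```python
# In Redshift the function body below would be registered with DDL like
# (illustrative):
#
#   CREATE FUNCTION f_normalize_email(addr VARCHAR)
#   RETURNS VARCHAR IMMUTABLE AS $$
#       ...body below...
#   $$ LANGUAGE plpythonu;

def f_normalize_email(addr):
    """Lower-case an email address and strip a '+tag' suffix from the
    local part -- a typical data-cleanup scalar UDF."""
    if addr is None:
        return None
    local, _, domain = addr.strip().lower().partition("@")
    local = local.split("+", 1)[0]
    return local + "@" + domain

print(f_normalize_email("Ada.Lovelace+news@Example.COM"))  # ada.lovelace@example.com
```

Once registered, the function can be used in any SELECT just like a built-in, which is how UDFs let the queries scale with the warehouse.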
DownUnder Dreaming - 5 steps to dreamy data - Clive Astbury
At the recent Salesforce User group, DownUnder Dreaming, I presented 5 key steps to ensuring your Data is Dreamy, an issue I feel is close to any Salesforce Admin user’s heart.
I covered key tips, including migration, maintenance and data quality.
Check out the presentation and contact me if you'd like more details.
Analyze Amazon CloudFront and Lambda@Edge Logs to Improve Customer Experience... - Amazon Web Services
Nowadays, web servers are often fronted by a global content delivery network, such as Amazon CloudFront, to accelerate delivery of websites, APIs, media content, and other web assets. In this hands-on workshop, learn to improve website availability, optimize content based on devices, browsers, and user demographics, identify and analyze CDN usage patterns, and perform end-to-end debugging by correlating logs from various points in a request-response pipeline. Build an end-to-end serverless solution to analyze Amazon CloudFront logs using AWS Glue and Amazon Athena, generate visualizations to derive deeper insights using Amazon QuickSight, and correlate with other logs such as CloudWatch logs to provide a finer debugging experience. Discuss how you can extend the pipeline you just built to generate the deeper insights needed to improve the overall experience for your users.
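The pipeline above starts with parsing CloudFront's standard access logs, which are tab-separated with a `#Fields:` header naming the columns. A minimal sketch, using a made-up two-line sample with only a subset of the real fields:

```python
import io

# A tiny, made-up sample of a CloudFront standard log; real logs have
# many more fields, all named in the '#Fields:' header.
SAMPLE = """\
#Version: 1.0
#Fields: date time x-edge-location sc-status cs-uri-stem
2024-05-01\t10:00:00\tLHR62-C1\t200\t/index.html
2024-05-01\t10:00:01\tLHR62-C1\t404\t/missing.png
"""

def parse_cloudfront_log(stream):
    """Yield one dict per request, keyed by the #Fields header."""
    fields = None
    for line in stream:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line and not line.startswith("#") and fields:
            yield dict(zip(fields, line.split("\t")))

records = list(parse_cloudfront_log(io.StringIO(SAMPLE)))
errors = [r for r in records if r["sc-status"].startswith("4")]
print(len(records), "requests,", len(errors), "client errors")
```

In the workshop's architecture this parsing is what AWS Glue crawlers and Athena do at scale; the sketch just shows the record shape they produce.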
• For a full set of 340+ questions, go to
https://skillcertpro.com/product/aws-data-analytics-questions/
• SkillCertPro offers detailed explanations for each question, which help you understand the concepts better.
• It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro assures a 100% pass guarantee on the first attempt.
Any structure expected to stand the test of time and change needs a strong foundation! Software is no exception. Engineering your code to grow in a stable and effective way is critical to your ability to rapidly meet the growing demands of users, new features, technologies, and platform capabilities. Join us to obtain architect-level design patterns for use in your Apex code to keep it well-factored, easy to maintain, and in line with platform best practices. You'll follow a Force.com interpretation of Martin Fowler's Enterprise Application Architecture patterns and the practice of Separation of Concerns.
Social Media and the Customer-centric Data Strategy #data17 - Alexander Loth
With over three billion active social media users, establishing an active presence on social media networks is becoming increasingly essential to getting your business in front of your ideal audience. These days, more and more consumers are looking to engage, connect, and communicate with their favorite brands on social media. Adding social media to your customer-centric data strategy will help boost brand awareness, increase followership, drive traffic to your website, and generate leads for your sales funnel. In 2017, no organization should be without a plan that actively places their brand on social media and analyzes their social media data. Once you've started diving into social media analytics, how do you bring it to the next level? This session covers a customer-centric data strategy for scaling a social media data program.
AWS Neptune - A Fast and Reliable Graph Database Built for the Cloud - Amazon Web Services
Dickson Yue, Solutions Architect, AWS
Amazon Neptune is a fully managed graph database service built from the ground up for handling rich, highly connected data. Come learn how to transform your business with Amazon Neptune and hear about diverse use cases such as recommendation engines, knowledge graphs, fraud detection, social networks, network management, and life sciences.
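The recommendation-engine use case boils down to graph traversals. A toy in-memory sketch of the "people who bought X also bought Y" pattern that a graph database like Neptune serves at scale — the data and names are entirely made up:

```python
from collections import defaultdict

# Edges of a tiny bipartite purchase graph (made-up data).
purchases = [
    ("alice", "book"), ("alice", "lamp"),
    ("bob",   "book"), ("bob",   "mug"),
    ("carol", "mug"),
]

bought_by = defaultdict(set)   # item  -> set of buyers
bought    = defaultdict(set)   # buyer -> set of items
for user, item in purchases:
    bought_by[item].add(user)
    bought[user].add(item)

def also_bought(item):
    """Two-hop traversal: item -> buyers -> their other items."""
    out = set()
    for user in bought_by[item]:
        out |= bought[user] - {item}
    return out

print(sorted(also_bought("book")))  # ['lamp', 'mug']
```

A graph database keeps such multi-hop traversals fast even when the adjacency sets no longer fit in one process's memory.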
As You Seek – How Search Enables Big Data Analytics - Inside Analysis
The Briefing Room with Robin Bloor and MarkLogic
Live Webcast on June 18, 2013
http://www.insideanalysis.com
The heart and soul of Big Data Analytics revolves around search. That's why we keep hearing about NoSQL database vendors aligning themselves with third-party search engines. Because these purpose-built database engines do not leverage the Structured Query Language, search is the means by which valuable insights are gleaned from them. But bolted-on search engines typically don't offer the kind of deep functionality that built-in engines can.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor explain how search functionality provides a window into the possibilities for Big Data Analytics. He'll be briefed by David Gorbet of MarkLogic who will tout his company's object database offering, which boasts more than 10 years of use in production. He'll discuss how search can be used to expose relationships in Big Data and thus help generate insights. He'll also provide details on MarkLogic's enterprise-caliber capabilities, such as ACID compliance, its SQL interface, and where semantics fit in the roadmap.
MongoDB.local Austin 2018: Pissing Off IT and Delivery: A Tale of 2 ODS's - MongoDB
Long live RDBMSs! For years they have been a staple of large data set storage, manipulation, and retrieval. But what if I told you that we were able to simplify every aspect of our new ODS, from data maintenance and implementation to API design, scalability, and maintainability, by doing one simple thing?
Presented by: Scott Jones, Acxiom Fellow, Acxiom
Best Practices for Running SQL Server on Amazon RDS (DAT323) - AWS re:Invent ... - Amazon Web Services
Amazon Relational Database Service (Amazon RDS) provides a managed service to run SQL Server databases in AWS. While Amazon RDS handles provisioning and maintaining the SQL Server instance, there are things you can do to ensure that the SQL Server instance is healthy. We'll review some best practices involved in configuring the Amazon RDS SQL Server instance, focusing on availability, security, and migration. We'll also hear from our customer Allstate, sharing details about their use of Amazon RDS.
IBM THINK 2019 - A Sharing Economy for Analytics: SQL Query in IBM Cloud - Torsten Steinbach
Cloud is a sharing economy that reduces your spending. But does this also apply to data and analytics? Doesn't this require you to provision dedicated data warehouse systems to run analytics SQL queries on terabytes of data? With IBM Cloud, the answer is no. By using serverless analytics via IBM Cloud SQL Query, you can analyze your data directly where it sits, be it in IBM Cloud Object Storage or in your NoSQL databases. Due to the serverless nature of SQL Query, you pay for your queries only, based on the data volume they process. There are no standing costs, and you do not need to provision and wait for a data warehouse. Yet you can still run SQL on terabytes of data.
IBM THINK 2019 - What? I Don't Need a Database to Do All That with SQL? - Torsten Steinbach
You don't necessarily have to set up a relational database, create tables, and load data in order to use a surprisingly rich set of SQL capabilities on your data in the cloud. IBM SQL Query lets you analyze terabytes of distributed data in heterogeneous formats with a complete ANSI SQL dialect in a completely serverless usage model, elegantly ETL data between formats and partitioning layouts as needed, and run complex time-series transformations, analysis, and correlations with advanced built-in time-series SQL algorithms that set it apart in the industry. It also supports a complete PostGIS-compliant geospatial SQL function set. Come explore the stunningly advanced world of SQL without a database in IBM Cloud.
IBM THINK 2019 - Cloud-Native Clickstream Analysis in IBM Cloud - Torsten Steinbach
Agile user and workload insights are one of the key elements of a cloud-native solution. When done well, this represents a real competitive advantage. In this session, we show you how to run cloud-native clickstream analysis with IBM Cloud. By combining serverless mechanisms like object storage for affordable and scalable persistency with SQL Query for serverless analysis of your clickstream data, you can establish a very cost-effective clickstream analysis pipeline easily and quickly.
IBM THINK 2019 - Self-Service Cloud Data Management with SQL - Torsten Steinbach
SQL is a powerful language for expressing data transformations. But did you know that you can also use IBM Cloud SQL Query to convert data between various data formats and layouts on disk? In this session, you will see the full power of using SQL Query to move and transform your cloud data in an entirely self-service fashion. You can specify any data format, layout, or partitioning with a simple SQL statement. See how you can move and transform terabytes of data in the cloud in a very scalable fashion while being charged only for the individual SQL movement and transformation jobs, with no standing costs.
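The format-and-partitioning conversion described above can be illustrated by hand: read one format, write another, partitioned Hive-style by a column. This is only an analogy to what SQL Query does with a single statement; the columns and path layout here are made up.

```python
import csv, io, json
from collections import defaultdict

# Made-up source data in CSV form.
CSV_DATA = """country,city,sales
DE,Berlin,120
DE,Munich,90
US,Austin,200
"""

# Convert CSV -> JSON lines, partitioned Hive-style by 'country'.
partitions = defaultdict(list)              # "country=DE" -> JSON lines
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    key = "country=" + row.pop("country")   # partition column becomes a dir
    partitions[key].append(json.dumps(row))

for part, lines in sorted(partitions.items()):
    print(part + "/part-0000.json", "->", len(lines), "rows")
```

With SQL Query the same reshaping is one declarative statement (target format and partition columns in the SQL), and it scales to terabytes with no cluster to manage.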
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptx - rickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... - Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis - Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Enhancing Research Orchestration Capabilities at ORNL.pdf - Globus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
In the ever-evolving landscape of technology, enterprise software development is undergoing a significant transformation. Traditional coding methods are being challenged by innovative no-code solutions, which promise to streamline and democratize the software development process.
This shift is particularly impactful for enterprises, which require robust, scalable, and efficient software to manage their operations. In this article, we will explore the various facets of enterprise software development with no-code solutions, examining their benefits, challenges, and the future potential they hold.
May Marketo Masterclass, London MUG May 22 2024.pdf - Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... - Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Navigating the Metaverse: A Journey into Virtual Evolution - Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms."
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
2. Please Note:
• IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
• Information regarding potential future products is intended to outline our general product direction and should not be relied on in making a purchasing decision.
• The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. Information about potential future products may not be incorporated into any contract.
• The development, release, and timing of any future features or functionality described for our products remain at our sole discretion.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
5. Social Application using the IBM Insights for Twitter Service
• Query exactly the data that your social application needs.
• Get IBM analytics enrichments in addition to base Twitter data.
• Whenever needed, check whether previously received Tweets are still valid (compliance).
[Architecture diagram: the Twitter GNIP APIs feed the IBM Insights for Twitter system on SoftLayer, which ingests, enriches, curates, and governs Decahose data over time via PowerTrack collection rules & filters, and receives & processes compliance events. Twitter data is enriched through IBM Analytics, and up to a 2-year history of enriched Tweets is stored and indexed, point-in-time compliant. The social application uses the IBM Insights for Twitter Service to search over the enriched Decahose data.]
6. Queries
The query language mimics the Gnip PowerTrack query language; a subset of the PowerTrack operators is available. See the documentation in Bluemix as more query operators are rolled out.

Boolean Operators
Operator precedence: “-” binds more tightly than “AND”, and “AND” binds more tightly than “OR”. You can (and should) use parentheses to make operator precedence explicit.
Example: ibm twitter -(lame OR boring) searches for tweets that contain both the terms “ibm” and “twitter” but neither “lame” nor “boring”.
• term1 AND term2 – Returns tweets that contain both term1 and term2. Whitespace between two terms is treated as AND, so the operator can be omitted. Examples: cat dog; cat AND dog; #cutecat food
• term1 OR term2 – Returns tweets that contain either term1 or term2. Example: #money OR broke
• -term1 – Returns tweets that do not contain term1. Example: ibm -apple

Query Terms
All of the following query terms can be freely combined with the boolean operators introduced above, e.g. ibm apple followers_count:500
• keyword – Matches tweets that have “keyword” in their body. The search is case-insensitive. Example: cat
• "exact phrase match" – Matches tweets that contain the exact keyword sequence <“exact”, “phrase”, “match”>. Example: "cats and dogs"
• #hashtag – Matches tweets with the hashtag “#hashtag”. Example: #insight2014
• from:twitterHandle – Returns tweets from authors with the preferredUsername twitterHandle. Must not contain the @ sign. Example: from:alexlang11
• followers_count:lower or followers_count:lower,upper – Matches tweets of authors that have at least “lower” followers. The upper bound is optional, and both limits are inclusive. Example: followers_count:500
• posted:startTime or posted:startTime,endTime – Matches tweets that have been posted at or after “startTime”. The “endTime” bound is optional and inclusive. Timestamps must be in one of two formats, “yyyy-mm-dd” or “yyyy-mm-dd'T'HH:MM:SS'Z'”; the timezone is UTC. Example: posted:2014-12-01T00:00:00Z,2014-12-12T00:00:00Z
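As a rough illustration of how the boolean operators above compose, a few small Python helpers can assemble query strings with explicit parentheses. These helper names are my own, not part of the service; only the resulting query syntax comes from the slide.

```python
def q_and(*terms):
    # Whitespace between terms is treated as AND, but writing it out keeps intent explicit.
    return " AND ".join(terms)

def q_or(*terms):
    return "(" + " OR ".join(terms) + ")"

def q_not(term):
    # "-" binds tighter than AND/OR, so compound terms are parenthesized.
    return "-" + (term if term.startswith("(") else "(" + term + ")")

# Tweets mentioning both ibm and twitter, but neither "lame" nor "boring":
query = q_and("ibm", "twitter", q_not(q_or("lame", "boring")))
# -> 'ibm AND twitter AND -(lame OR boring)'
```

The result can be passed as the q parameter of the count and search endpoints shown on the next slides.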
7. Count: /messages/count?q=QUERY
• Use to find out how many Tweets match a given query.
HTTP Code – Description – Example Response
• 200 – Number of results at json_path(“search.results”); URL to retrieve the documents at json_path(“related.search.href”). Note: add your client_id and your client_secret to this URL.
{
  "search": { "results": 21695 },
  "related": {
    "search": { "href": "https://server.bluemix.net/api/v1/messages/search?q=ibm" }
  }
}
• 4xx – There was a problem with your query. Please have a look at json_path(“error”) to identify the problem.
• 5xx – There was a problem with the service. Please have a look at json_path(“error”) and contact support.
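A minimal Python sketch of calling the count endpoint. The base URL is the placeholder host from the example response above; the client_id/client_secret query parameters follow the note on the slide, and actually fetching the URL requires valid credentials.

```python
import urllib.parse

BASE = "https://server.bluemix.net/api/v1"  # placeholder host from the example response

def count_url(query, client_id, client_secret):
    """Build the /messages/count URL, appending credentials as the slide notes."""
    params = urllib.parse.urlencode(
        {"q": query, "client_id": client_id, "client_secret": client_secret})
    return f"{BASE}/messages/count?{params}"

# Fetching it (needs the `requests` package and real credentials):
#   import requests
#   n = requests.get(count_url("ibm -apple", CID, SECRET)).json()["search"]["results"]
print(count_url("ibm -apple", "my-id", "my-secret"))
```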
8. Search: /messages/search?q=QUERY&size=NUMBER
• Search & retrieve up to NUMBER Tweets matching QUERY.
HTTP Code – Description – Example Response
• 200 – Number of overall results at json_path(“search.results”); first batch of results at json_path("tweets"); URL to retrieve the next batch of documents (if available) at json_path(“related.next.href”). Note: add your client_id and your client_secret to this URL.
{ "search": { "results": 16283624 },
  "tweets": [ { "message": {
    …
    "body": "this is a nice tweet",
    …
    "actor": { "followersCount": 456, "displayName": "IBM Tweeter", … },
    "cde": {
      "sentiment": { "polarity": "POSITIVE", … },
      "author": { "gender": "male", … } } } } ] }
• 4xx – There was a problem with your query. Please have a look at json_path(“error”) to identify the problem.
• 5xx – There was a problem with the service. Please have a look at json_path(“error”) and contact support.
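A paging sketch for the search endpoint, following related.next.href as described above. This assumes the `requests` package when run against the real service; the function name and the injectable `get` parameter are my own additions for testability, and the response shape is taken from the slide.

```python
def search_tweets(base, query, client_id, client_secret, size=100,
                  max_batches=5, get=None):
    """Yield tweets batch by batch, following related.next.href when present."""
    if get is None:                      # default to requests.get when installed
        import requests
        get = requests.get
    url = f"{base}/messages/search"
    params = {"q": query, "size": size,
              "client_id": client_id, "client_secret": client_secret}
    for _ in range(max_batches):
        data = get(url, params=params).json()
        for tweet in data.get("tweets", []):
            yield tweet
        nxt = data.get("related", {}).get("next", {}).get("href")
        if not nxt:
            break
        # next.href already encodes query and paging; only credentials are re-added
        url, params = nxt, {"client_id": client_id, "client_secret": client_secret}
```

Capping the loop at max_batches keeps a broad query (16 million results in the example) from paging forever.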
9. Example Queries
• Get Tweets about an upcoming movie for a given time frame to sense interest & reactions to the trailer:
  search?q="posted:2015-02-01T00:00:00Z AND #starwars"&size=5
• Get Tweets with positive/negative sentiment about a product to learn what customers like / dislike about the product:
  search?q="IBM Bluemix sentiment:positive"
• Get Tweets about a product being marketed and compare over time to sense audience reaction to the campaign:
  search?q="posted:2015-02-01T00:00:00Z,2015-02-15T00:00:00Z AND #IBM"
12. Predictive Analytics With R In dashDB 1/3
• Built-in R runtime & RStudio
• ibmdbR package
Data frames logically representing data physically residing in dashDB tables:
> con <- idaConnect("BLUDB", "", "")
> idaInit(con)
> sysusage <- ida.data.frame('DB2INST1.SHOWCASE_SYSUSAGE')
> systems <- ida.data.frame('DB2INST1.SHOWCASE_SYSTEMS')
> systypes <- ida.data.frame('DB2INST1.SHOWCASE_SYSTYPES')
Push down of R data preparation to dashDB:
> sysusage2 <- sysusage[sysusage$MEMUSED > 50000, c("MEMUSED","USERS")]
> mergedSys <- idaMerge(systems, systypes, by='TYPEID')
> mergedUsage <- idaMerge(sysusage2, mergedSys, by='SID')
Push down of analytic algorithms to in-db execution:
> lm1 <- idaLm(MEMUSED~USERS, mergedUsage)
[Architecture diagram: a browser running RStudio, any R runtime with the ibmdbR package, or a REST client connects to the R runtime inside dashDB.]
13. Predictive Analytics With R In dashDB 2/3
Dynamite-native implementation of statistical functions:
• colnames, cor, cov, dim, head, length, max, mean, min, names, print, sd, summary, var
Logically derived columns pushed down to Dynamite:
> myDF <- ida.data.frame('DB2INST1.SHOWCASE_SYSUSAGE')
> myDF$MemPerUser <- myDF$MEMUSED / myDF$USERS
Sampling of tables in Dynamite:
> idaSample(myDF, 3)
  SID                       DATE USERS MEMUSED ALERT MemPerUser
1   8 2014-02-14 23:39:00.000000    34    5015     f        147
2   5 2014-01-22 07:52:00.000000    96   11512     f        119
3   7 2013-09-12 05:17:00.000000    39    5592     t        143
Statistics about tables in Dynamite:
> summary(myDF)
      SID            USERS            MEMUSED            ALERT          MemPerUser
 Min.   :0.000   Min.   :  3.000   Min.   :  350.000   f   :3655563   Min.   :105.000
 1st Qu.:2.000   1st Qu.: 35.000   1st Qu.: 5113.000   t   :1344437   1st Qu.:135.000
 Median :4.500   Median : 64.000   Median : 9455.000   NA's:     NA   Median :150.000
 Mean   :   NA   Mean   :     NA   Mean   :       NA                  Mean   :    NA
 3rd Qu.:7.000   3rd Qu.:111.000   3rd Qu.:16517.000                  3rd Qu.:165.000
 Max.   :9.000   Max.   :347.000   Max.   :62379.000                  Max.   :209.000
Statistics about categorical values:
> idaTable(myDF)
ALERT
      f       t
3655563 1344437
15. Running R in dashDB via REST API
• Create your R script with RStudio, storing it in your home dir inside dashDB.
• POST <dashdb-server>/dashdb-api/rscript/<fileName>
  – Run the specified R script.
• GET <dashdb-server>/dashdb-api/home
  – List all files under the user home (recursively), e.g. the output written by your R script.
• GET <dashdb-server>/dashdb-api/home/<fileName>
  – Download the specified file.
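The REST calls above can be collected into a small Python helper. This is a sketch, not official client code: the server and file names are placeholders, the URL paths simply restate the slide, and the basic-auth hint in the comment is an assumption about the instance's auth scheme.

```python
def rscript_endpoints(server, file_name):
    """Map the slide's REST calls to concrete (method, URL) pairs."""
    base = f"https://{server}/dashdb-api"
    return {
        "run_script": ("POST", f"{base}/rscript/{file_name}"),  # run the R script
        "list_home":  ("GET",  f"{base}/home"),                 # list user home files
        "download":   ("GET",  f"{base}/home/{file_name}"),     # fetch one file
    }

# Driving it (assumes the `requests` package; check your instance's auth scheme):
#   import requests
#   method, url = rscript_endpoints("mydash.example.com", "model.R")["run_script"]
#   requests.request(method, url, auth=(user, password))
```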
16. Predictive Analytics With Python In dashDB
• Bluemix Analytic Notebooks
• ibmdbPy package: https://pypi.python.org/pypi/ibmdbpy
Data frames logically representing data physically residing in dashDB tables:
from ibmdbpy import IdaDataBase, IdaDataFrame
idadb = IdaDataBase(dsn="BLUDB")  # connect to the database first
idadf = IdaDataFrame(idadb, "IRIS", indexer="ID")
idadf = idadf[["ID", "sepal_length", "sepal_width"]]
idadf['new'] = idadf['sepal_width'] + idadf['sepal_length'].mean()
idadf.head()
Push down of analytic algorithms to in-db execution:
from ibmdbpy.learn import KMeans
kmeans = KMeans(3)  # clustering with 3 clusters
kmeans.fit_predict(idadf).head()
[Architecture diagram: a browser running an Analytics for Spark notebook in Bluemix, or any Python runtime, uses the ibmdbPy package to connect to dashDB.]
17. Loading Twitter Data to dashDB with Bluemix App
Show Case for box office analysis with Twitter:
www.youtube.com/watch?v=9yVNwOs9L4c
Twitter loader app for dashDB: hub.jazz.net/project/torsstei/Twitter-Loader/overview
(www.youtube.com/watch?v=ANakSSGM4zU)
18. Movie Analysis Show Case
• Tweets about movies from the Bluemix service
• Box office stats from the-numbers.com
• Public map data for US counties: https://www.census.gov/geo/maps-data/data/tiger-line.html
• In Bluemix: the dashDB service for analytics and correlation between Tweets and box office data, with analysis using built-in R & RStudio
• Interactive app for visualization using Node.js and the D3.js library
https://hub.jazz.net/project/torsstei/movie-analysis
19. Movie Analysis Show Case https://hub.jazz.net/project/torsstei/movie-analysis
21. Populating dashDB with Data
Sources that can feed dashDB:
• Bluemix cloud storage (S3, Swift)
• Geodata in Esri Shapefiles
• On-premise databases
• Mobile app data in Cloudant
• GeoJSON
• Twitter
• The Weather Company
• CSVs
• Open Data: data.gc.ca, data.gov, data.gov.uk, datahub.io, openAFRICA
25. dashDB: Key Use Cases
• DR in the Cloud: minimize the capital expense of a DR solution
26. We Bring a Netezza-Compatible Analytic Platform to the Cloud
• Analytic Extension Framework: UDX C++ API; AE Framework with In-DB R, In-DB Python, In-DB LUA, and In-DB Perl
• Canned Analytics: Linear Regression, K-means Clustering, Decision Tree, Association Rules, Naive Bayes
• OLAP Functions: ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD, STDDEV, COVAR, …
• Spatial Operators: Contains, Touches, Within, Intersects, Crosses, Overlaps
• Application Integration: R Wrapper, Watson Analytics, ESRI ArcGIS Connector, …
• On top: analytics applications of ISVs and customers
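For a taste of how the OLAP functions listed above are invoked, here is a hedged sketch using standard SQL window syntax over the SHOWCASE_SYSUSAGE table from the earlier R example. The ibm_db driver usage in the comment is an assumption about the client setup, not part of this deck.

```python
# Rank systems by memory used, computed entirely inside the database.
RANK_SQL = """
SELECT SID, MEMUSED,
       RANK() OVER (ORDER BY MEMUSED DESC) AS MEM_RANK
FROM DB2INST1.SHOWCASE_SYSUSAGE
"""

# With a live connection (requires the ibm_db package and credentials):
#   import ibm_db
#   conn = ibm_db.connect("BLUDB", user, password)
#   stmt = ibm_db.exec_immediate(conn, RANK_SQL)
#   row = ibm_db.fetch_assoc(stmt)
```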
27. Analytics of Warehouse Data
This is where we start from: all analytic processing is done on the application side.
• Analytic code & algorithms: live in the analytic applications.
• Analytic data: pulled out of the warehouse and processed in the analytic application.
28. Accelerate Analytics for Warehouse Data
Push-down step 1: BLU tables are only logically represented in the analytic application.
• Analytic code & algorithms: remain in the analytic applications, issuing SQL.
• Analytic data: simple data lookup & massage operations are pushed down as SQL operations.
Benefit: acceleration with no SQL skills required.
29. Accelerate Analytics for Warehouse Data
Push-down step 2: typical and popular algorithms are pushed down to canned UDFs in the db.
• Analytic code & algorithms: analytic applications and cloud tooling call built-in functions via SQL to execute typical algorithms inside the db.
Benefit: bring standard analytics to the data.
30. Accelerate Analytics for Warehouse Data
Push-down step 3: execute entire customer analytic programs inside the db.
• Analytic code & algorithms: deploy customer code via the language framework (UDX & AE) and call it through special SQL function interfaces, alongside the canned algorithms.
Benefit: bring custom analytics to the data.
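The push-down progression above can be contrasted in miniature. This is an illustrative Python sketch, not dashDB code: mean_client_side stands in for application-side processing (the starting point), while mean_pushed_down returns the kind of single-row aggregate SQL a push-down layer such as ibmdbR or ibmdbPy would emit instead of pulling rows out. Function names are mine; the table and column reuse the earlier SHOWCASE_SYSUSAGE example.

```python
def mean_client_side(rows):
    """Starting point: pull every row out and aggregate on the application side."""
    values = [r["MEMUSED"] for r in rows]
    return sum(values) / len(values)

def mean_pushed_down(table="DB2INST1.SHOWCASE_SYSUSAGE", column="MEMUSED"):
    """Push-down: ship the aggregation to the db; only one row comes back."""
    return f"SELECT AVG({column}) FROM {table}"

rows = [{"MEMUSED": 5015}, {"MEMUSED": 11512}, {"MEMUSED": 5592}]
print(mean_client_side(rows))   # 7373.0, computed after transferring every row
print(mean_pushed_down())       # the same answer would need only one result row
```

The payoff grows with table size: the client-side path transfers millions of rows, while the pushed-down query moves a single number.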
31. We Value Your Feedback!
Don’t forget to submit your Insight session and speaker feedback! Your feedback is very important to us; we use it to continually improve the conference.
Access your surveys at insight2015survey.com to quickly submit your surveys from your smartphone, laptop, or a conference kiosk.
33. Notices and Disclaimers (cont’d)
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly
available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance,
compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights,
trademarks or other intellectual property right.
•IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DB2® , DOORS®, Emptoris®, Enterprise
Document Management System™, FASP®, FileNet®, Global Business Services ®, Global Technology Services ®, IBM ExperienceOne™, IBM
SmartCloud®, IBM Social Business®, IMS™, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON,
OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®,
pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ, Tealeaf®,
Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International
Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or
other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at:
www.ibm.com/legal/copytrade.shtml.