This tutorial was presented at a Boston KNIME User Meetup in 2014 and offers a crash course in KNIME, text processing, text mining, and topic classification.
Containers: Don't Skeu Them Up. Use Microservices Instead. - Gordon Haff
from LinuxCon Japan 2016
Skeuomorphism usually means retaining existing design cues in something new that doesn't actually need them. But the basic idea is far broader. For example, containers aren't legacy virtualization with a new spin. They're part and parcel of a new platform for cloud apps including containerized operating systems like Project Atomic, container packaging systems like Docker, container orchestration like Kubernetes and Mesos, DevOps continuous integration and deployment practices, microservices architectures, "cattle" workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source.
In this session, Red Hat's Gordon Haff and William Henry will discuss how containers can be most effectively deployed together with these new technologies and approaches -- including the resource management of large clusters with diverse workloads -- rather than mimicking legacy server virtualization workflows and architectures.
Creating a customer segmentation workflow with KNIME - Knoldus Inc.
KNIME is a drag-and-drop-based platform that allows its users to focus on the data at hand and how to transform it, without worrying about how to code that transformation. It has thousands of nodes that do a variety of jobs, from reading data from any source to transforming it, analyzing it, and reporting the final result. Not only does KNIME provide easy workflow creation, it also allows easy deployment with KNIME Server. Any workflow can be deployed as an analytics service and application. Among its many use cases, KNIME can be used for guided analytics as well. Using the KNIME Analytics Platform, a user can create a workflow for customer segmentation and deploy it as an application on the KNIME Server. Other business stakeholders can then use the deployed web form to configure the workflow parameters and generate and visualize the final reports. Based on those reports, final business decisions can be made easily. Thus, every stakeholder in the data process focuses on what he or she does best.
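In KNIME this segmentation is built visually from nodes rather than written as code, but the underlying logic of a simple rule-based segmentation can be sketched in plain Python. All thresholds, field names, and segment labels below are illustrative assumptions, not taken from the talk:

```python
# Illustrative rule-based customer segmentation (RFM-style).
# Thresholds and segment labels are arbitrary examples.

def segment(customer):
    """Assign a segment from recency (days), frequency (orders), monetary (spend)."""
    r, f, m = customer["recency"], customer["frequency"], customer["monetary"]
    if r <= 30 and f >= 10 and m >= 1000:
        return "champion"
    if r <= 90 and f >= 3:
        return "loyal"
    if r > 180:
        return "at_risk"
    return "regular"

customers = [
    {"id": 1, "recency": 10,  "frequency": 12, "monetary": 2500},
    {"id": 2, "recency": 60,  "frequency": 5,  "monetary": 400},
    {"id": 3, "recency": 200, "frequency": 1,  "monetary": 50},
]

for c in customers:
    print(c["id"], segment(c))
```

In a KNIME workflow the same rules would typically live in a Rule Engine or scripting node, with the resulting segment column feeding the report nodes.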
The HDF Group provides support for NPP/NPOESS in a number of ways, including development and maintenance of software capabilities in HDF5 libraries and tools that help NPP/NPOESS data producers and users, software testing on platforms of importance to NPP/NPOESS, high quality rapid response user support for NPP/NPOESS, and performance of special projects. The purposes of this presentation are to apprise attendees of the areas of emphasis for FY 2010, and to solicit ideas and opinions that will help the project understand how best to use its resources in order to best serve the needs of NPP/NPOESS.
Are you curious about KNIME Software?
Do you know the difference between KNIME Analytics Platform and KNIME Server?
Which data sources can KNIME connect to?
Can you run an R script from within a KNIME workflow? A Python script? Which other integrations are available?
How can KNIME help with ETL, data preparation, and general data manipulation? Which machine learning algorithms can KNIME offer?
This webinar answers all of these questions! There’s also information about connecting to big data clusters and how you can run all or part of your analysis on a big data platform. It also covers everything you need to know about Microsoft Azure and Amazon AWS.
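The Python integration mentioned above embeds a script as a node inside a workflow. The exact KNIME scripting API varies by version, so this stand-in sketch uses plain Python lists of dicts in place of KNIME's input and output tables; the column names and threshold are invented for illustration:

```python
# Stand-in for a KNIME Python Script node: a table in, a transformed table out.
# In KNIME the table arrives via the node's scripting API; here it is a plain list.

input_table = [
    {"name": "alice", "score": 0.92},
    {"name": "bob",   "score": 0.41},
]

# Example transformation: keep high-scoring rows and add a derived column.
output_table = [
    {**row, "passed": True}
    for row in input_table
    if row["score"] >= 0.5
]

print(output_table)
```

The same pattern applies to the R integration: the node hands your script the table, the script returns the transformed table to the rest of the workflow.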
The New Platform: You Ain't Seen Nothing Yet - Gordon Haff
The now mainstream platform changes stemming from the first Internet boom brought many changes but didn’t really change the basic relationship between servers and the applications running on them. In fact, that was sort of the point. Today’s workloads require a new model and a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker, distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed and scalable applications, take advantage of innovation stemming from a wide variety of open source projects, span hybrid environments, and be adaptable to equally fundamental changes happening in hardware and elsewhere in the stack.
A True Story About Database Orchestration - InfluxData
Gianluca shared the architecture of the project, described the criticalities of the infrastructure and how the team strives to make this powerful service secure, fast, and reliable for all customers using InfluxCloud.
We developed this application for industry developers so they can handle their own multiple projects separately. Using this application, they can easily manage each project for each client.
Building IoT Apps in the Cloud Webinar - DreamFactory
Ben Busse of DreamFactory and Nat Frampton of FramTack talk about architecting IoT apps in the cloud, including:
- How FramTack is architecting IoT apps for the cloud
- The importance of open standards for IoT
- How DreamFactory helps FramTack develop and deploy IoT apps in the cloud
- Demo of FramTack's Solution Family product for IoT
You can also view the webinar recording here: https://www.youtube.com/watch?v=SYd6wcMt_aQ
Infrastructure as Code in Large Scale Organizations - XebiaLabs
The adoption of tools for provisioning and automatically configuring "Infrastructure as Code" (e.g. Terraform, CloudFormation, or Ansible) reduces the cost, time, errors, violations, and risks involved in provisioning and configuring the infrastructure our software needs to run.
However, those who have begun to use this technology intensively at the business level agree that a critical problem emerges around the orchestration and governance of provisioning requests: needs such as security, compliance, scalability, integrity, and more.
Learn how the Digital.ai DevOps Platform (formerly the XebiaLabs DevOps Platform) responds to all these problems and many more, allowing you to continue working with your favorite tools.
A DevOps Approach for Building 100 iOS Apps - TechWell
Apple and IBM forged a global partnership to transform enterprise mobility, which includes delivering 100 applications built exclusively for iOS devices. There are myriad challenges involved in producing that many mobile apps quickly—and with excellent user experience and quality. The team had to work smarter rather than simply throw more people at the project. Join Leigh Williamson as he discusses the DevOps techniques they implemented to accelerate their huge mobile development project: cloud hosted services for Xcode-driven continuous integration; an extended quality cycle for the mobile app once in production; and linked front-end/back-end deployments. Because integrating multiple tools from multiple vendors was unavoidable, they employed an automated pipeline for testing and integrating the code for 100 mobile apps. As the mobile landscape continues to evolve, the importance of continuously delivering engaging mobile apps integrated with your enterprise remains critical to everyone's success. Hear how one team met the challenge at scale.
PaaS Anywhere - Deploying an OpenShift PaaS into your Cloud Provider of Choice - Isaac Christoffersen
Choice matters. And OpenShift Enterprise by Red Hat is the only Platform-as-a-Service (PaaS) product that runs in the hosting environment of your choice.
While true PaaS products provide a level of infrastructure abstraction so developers can be more productive, choosing the right environment to host your PaaS product is critical. You must take many factors into consideration, like cost, availability, scalability, and manageability. Having a PaaS product that’s truly environment-agnostic can maximize your options and help you make the choice that best fits your needs.
This talk has a demo of deploying OpenShift Enterprise to multiple public cloud providers and within local virtual machines. By using common provisioning tools and the various cloud vendor APIs, you’ll see the realization of open, hybrid PaaS. You’ll also learn about considerations when running OpenShift Enterprise in these different environments, and monitoring strategies for proactively maintaining the health of an OpenShift Enterprise environment.
DevOps on Steroids Featuring Red Hat & Alantiss - Pop-up Loft Tel Aviv - Amazon Web Services
Session 1: "Git, Bitbucket, Jira, Chef - an ALM DevOps demo by ManageWare"
In this session we will explore the methods and tools for an end-to-end workflow, from story to code deployment.
Session 2: "DevOps with Red Hat OpenShift and our offering on AWS"
Integrating the workflow between Development & Operations in an agile way through automation
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2019-alliance-vitf-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, delivers the presentation "Current and Planned Standards for Computer Vision and Machine Learning" at the Embedded Vision Alliance's December 2019 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization activities aimed at streamlining the deployment of embedded vision and AI.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-trevett
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group and Vice President at NVIDIA, presents the "APIs for Accelerating Vision and Inferencing: An Industry Overview of Options and Trade-offs" tutorial at the May 2019 Embedded Vision Summit.
The landscape of SDKs, APIs and file formats for accelerating inferencing and vision applications continues to evolve rapidly. Low-level compute APIs, such as OpenCL, Vulkan and CUDA are being used to accelerate inferencing engines such as OpenVX, CoreML, NNAPI and TensorRT, being fed by neural network file formats such as NNEF and ONNX.
Some of these APIs, like OpenCV, are vision-specific, while others, like OpenCL, are general-purpose. Some engines, like CoreML and TensorRT, are supplier-specific, while others such as OpenVX, are open standards that any supplier can adopt. Which ones should you use for your project? Trevett answers these and other questions in this presentation.
Leverage Progress Technologies for Telerik Developers - Abhishek Kant
Telerik Developers are Ninjas in their software development capabilities. Now, they have new tools/technologies to leverage in their quest for better solutions. These exciting enterprise grade technologies range from Business Rules Engine to Drag and Drop Application Development.
This session will be an overview of the Progress tools.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Smart TV Buyer Insights Survey 2024 - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
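As one hedged example of the kind of setup the webinar demonstrates: JMeter ships a Backend Listener that streams test results to InfluxDB, which Grafana then reads as a data source. The host names and database below are placeholders, not values from the webinar:

```text
Backend Listener implementation:
  org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient

Parameters:
  influxdbUrl    = http://influxdb.example.local:8086/write?db=jmeter   # placeholder host/db
  application    = my-webapp          # tag used to filter dashboards in Grafana
  measurement    = jmeter             # measurement name InfluxDB stores samples under
  summaryOnly    = false              # also send per-sampler metrics
  samplersRegex  = .*                 # which samplers to report
  percentiles    = 90;95;99           # response-time percentiles to emit
```

A Grafana dashboard then queries the `jmeter` measurement to chart throughput, error rate, and percentile latencies while the load test runs.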
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
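The core idea behind the approaches these articles review (retrieve a graph neighborhood as context for an LLM prompt) can be sketched minimally in Python. The tiny graph, entity matching, and prompt format here are invented for illustration and are far simpler than the community-summarization pipeline in Microsoft's GraphRAG paper:

```python
# Minimal GraphRAG-style retrieval sketch: find entities mentioned in the
# question within a toy knowledge graph and serialize their edges as context.

graph = {  # toy knowledge graph: entity -> list of (relation, target)
    "FalkorDB": [("is_a", "graph database"), ("founded_by", "Guy Korland")],
    "GraphRAG": [("combines", "knowledge graphs"), ("combines", "LLMs")],
}

def retrieve_context(question):
    """Collect facts about every graph entity named in the question."""
    facts = []
    for entity, edges in graph.items():
        if entity.lower() in question.lower():
            facts += [f"{entity} {rel} {target}" for rel, target in edges]
    return facts

def build_prompt(question):
    """Assemble the retrieved facts and the question into an LLM prompt."""
    context = "\n".join(retrieve_context(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Who founded FalkorDB?"))
```

The prompt would then be sent to the language model, which answers grounded in the retrieved graph facts rather than its parametric memory alone.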
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
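By way of illustration only, deploying via a Helm chart typically comes down to supplying values and installing. The chart name, repo, and value keys below are placeholders and not the actual Varnish chart's schema; the embedded VCL is a minimal valid backend definition:

```yaml
# Hypothetical values.yaml for a Varnish Helm chart (keys are illustrative)
replicas: 2
backend:
  host: my-app-service     # the Kubernetes Service Varnish caches in front of
  port: 8080
vclConfig: |
  vcl 4.1;
  backend default {
    .host = "my-app-service";
    .port = "8080";
  }

# Install roughly as (names are placeholders):
#   helm repo add <varnish-repo> <repo-url>
#   helm install varnish <varnish-repo>/<chart> -f values.yaml
```

Clients inside the cluster would then point at the Varnish Service instead of the application Service, getting cached responses for repeat requests.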
1. The HDF Group
www.hdfgroup.org
July 8, 2014 - 2014 Summer ESIP Federation Meeting / HDF and HDF-EOS Workshop XVII
HDF and ENVI Services Engine
Hyo-Kyung Joe Lee - hyoklee@hdfgroup.org
Thomas Harris - Thomas.Harris@ExelisInc.com
2. What is ESE?
ENVI Services Engine: Image and Data Analytics for the Cloud and Enterprise
A cloud- and/or enterprise-based processing engine that leverages the power of IDL, ENVI, and ENVI LiDAR analytics to create products for consumption by desktop, thin, or mobile clients. Because ESE analytics are performed server-side, these applications can retrieve, process, and publish data to the cloud without ever running any data I/O or processing on the client computer or device.
A client can be anything that speaks HTTP, such as a webpage, IDL, C++, etc.
This allows developers to write applications in the language of their choice, then use ESE to do analytics using IDL and the ENVI and ENVI LiDAR Application Programming Interface (API).
Because it is deployed on a server, it is always "on", giving you on-demand distributed processing over an enterprise network or on the web.
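Since any HTTP-speaking client can drive ESE, a request might be composed as in the sketch below. The `/ese/services/<task>` path, task name, and parameters are hypothetical examples, not the documented ESE API; the actual network call is left commented out so the sketch runs without a server:

```python
# Sketch of an HTTP client for an ESE-style REST service.
# The /ese/services/<task> path and query parameters are hypothetical examples.
from urllib.parse import urlencode

def build_task_url(host, task, params):
    """Compose a REST URL for submitting a processing task to the engine."""
    return f"http://{host}/ese/services/{task}?{urlencode(params)}"

url = build_task_url("ese.example.local:8080", "SpectralIndex",
                     {"dataset": "scene42", "index": "NDVI"})
print(url)
# urllib.request.urlopen(url) would submit the request against a live server,
# and the response body would carry the product for display or further analysis.
```

The same request could equally be issued from IDL, C++, or a browser, which is the point of putting the analytics behind HTTP.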
3. ENVI Services Engine Architecture
(Architecture diagram: the application developer builds a customized application leveraging ENVI image analysis algorithms; the developed services are deployed to a server architecture in the cloud; HTTP REST requests are made from middleware to the server; ENVI Services Engine executes the application; results are delivered back from the Engine and can be saved out, utilized in further analysis, or displayed in a variety of web, desktop, or mobile clients.)
• Application developer builds custom applications
• Developed services are deployed to a server architecture
• ENVI Services Engine executes the application
• Results are delivered back from the Engine for further analysis, or...
• Results can be displayed on a desktop, across an enterprise, or on any device that has access to the world wide web
4. HDF ESE Architecture
(Architecture diagram: we build a customized NASA HDF-EOS visualization application, leveraging the existing IDL examples; the developed services are deployed to a server architecture in the cloud; HTTP REST requests are made from middleware to the server; ENVI Services Engine executes the application; results are delivered back from the Engine and can be displayed in a variety of web, desktop, or mobile clients.)
• The HDF Group builds custom visualization applications for NASA HDF-EOS data
• Developed NASA HDF-EOS services are deployed to a server architecture at The HDF Group
• ENVI Services Engine executes the NASA HDF-EOS visualization application
• Results can be displayed on a desktop, across an enterprise, or on any device that has access to the world wide web
5. IDL Comprehensive Examples
NASA product-specific complete code for visualization at http://hdfeos.org/zoo
6. How can we serve users better?
• Try IDL codes without purchase & installation
• Run everything with a web browser
• Mobile platforms are welcome!
• UX - simple and interactive
8. We need your input.
ESE can make your existing IDL program run as a web service. Super!
What kind of web services would you like to see most at hdfeos.org?
Send us your favorite IDL codes for NASA HDF at eoshelp@hdfgroup.org.
9. Acknowledgement
This work was supported by Subcontract number 114820 under Raytheon Contract number NNG10HP02C, funded by the National Aeronautics and Space Administration (NASA). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Raytheon or the National Aeronautics and Space Administration.