This document provides an overview of RESTful SCA using Apache Tuscany. It discusses what Apache Tuscany and Apache Wink are, and how they enable RESTful SCA. It then covers several usage scenarios: exposing an SCA service as REST, allowing HTTP GET access to RPC services, accessing external REST services from SCA, and using JAX-RS applications within SCA. The document concludes with a summary and discussion of future plans.
1. RESTful SCA with Apache Tuscany
IBM Software Group
http://tuscany.apache.org
Luciano Resende (lresende@apache.org, http://lresende.blogspot.com)
Jean-Sebastien Delfino (jsdelfino@apache.org, http://jsdelfino.blogspot.com)
2. Agenda
• What is Apache Tuscany
• What is Apache Wink
• RESTful SCA overview
• Usage and architecture walk-through
  - Including a look at various scenarios
• What's next
• Getting involved
4. Apache Tuscany
• Apache Tuscany provides a component-based programming model that simplifies the development, assembly, deployment, and management of composite applications.
• Apache Tuscany implements the SCA standards defined by OASIS OpenCSA, plus extensions based on real user feedback.
5. Building Applications using SCA
• Business functions are defined as SCA Components
  - that expose services, using different communication protocols (bindings)
  - and have dependencies on other services through references
[Diagram: a Store application assembled from Store, Fruit Catalog, Currency Converter, and ShoppingCart components, wired over http, jsonrpc, and atom bindings; the catalog is configured with a currencyCode=USD property, and the ShoppingCart exposes Collection and Total services]
6. SCA Overview
[Diagram: Composite A containing Component A and Component B joined by a wire. Each component declares services and references with interfaces (Java or WSDL), properties with property settings, and an implementation (Java, BPEL, PHP, SCA composite, Spring, EJB module, ...). Services and references are promoted to the composite boundary, where service and reference bindings include Web Service, JMS, SLSB, REST, JSONRPC, JCA, ...]
8. Apache Wink
• Apache Wink is a simple yet solid open source framework for building RESTful Web services. It comprises a Server module and a Client module for developing and consuming RESTful Web services.
• The Wink Server module is a complete implementation of the JAX-RS v1.0 specification. On top of this implementation, the Wink Server module provides a set of additional features designed to facilitate the development of RESTful Web services.
• The Wink Client module is a Java-based framework that provides functionality for communicating with RESTful Web services. The framework is built on top of the JDK HttpURLConnection and adds essential features that facilitate the development of such client applications.
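
To give a feel for the Client module, here is a minimal sketch of fetching a resource with the Wink RestClient; the endpoint URL is a placeholder, not one from this deck:

  import org.apache.wink.client.Resource;
  import org.apache.wink.client.RestClient;

  public class WinkClientSketch {
      public static void main(String[] args) {
          // Create a Wink client and point it at a RESTful resource
          RestClient client = new RestClient();
          Resource resource = client.resource("http://localhost:8080/Catalog");

          // Issue an HTTP GET and read the entity as a String
          String response = resource.accept("application/json").get(String.class);
          System.out.println(response);
      }
  }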
10. REST-related user stories
• As a developer, I want to expose new or existing services over HTTP in REST style, with the flexibility to use different wire formats including (but not limited to) JSON and XML.
• As a developer, I want to allow RPC services to be accessed via the HTTP GET method in REST style to take advantage of HTTP caching.
• As a developer, I want to configure a service exposed using REST to use different cache control mechanisms, to maximize the performance of static resources and/or cache dynamic content.
• As a developer, I want to invoke existing RESTful services via a business interface without dealing with HTTP client APIs.
• As a developer, I want to compose services (both RESTful and non-RESTful) into a solution using SCA.
11. A win-win situation using SCA and JAX-RS
• SCA gives us the power to declare, configure, and compose services in a technology-neutral fashion.
• REST is an important aspect of the Web 2.0 world. Building RESTful services can be a challenge because REST is just an architectural style; JAX-RS has emerged as the standard REST programming model.
• Combining the two offers a balanced middle ground: declarative services using SCA plus annotation-based REST/HTTP mapping using JAX-RS, without tying business logic to the technology specifics (avoid calling JAX-RS APIs directly).
12. Tuscany's offerings for REST
• The Tuscany Java SCA runtime provides integration with REST services out of the box via several extensions.
  - Tuscany REST binding type (binding.rest)
    • Leverages JAX-RS annotations to map business operations to HTTP operations such as POST, GET, PUT, and DELETE, providing a REST view of SCA services.
    • Supports RPC over HTTP GET.
    • Allows SCA components to invoke existing RESTful services via JAX-RS annotated interfaces without messing around with HTTP clients.
  - Tuscany JAX-RS implementation type (implementation.jaxrs)
    • JAX-RS applications and resources can be dropped into the SCA assembly as a JAX-RS implementation (implementation.jaxrs).
  - Tuscany also enriches the JAX-RS runtime with more databindings, providing support for data representations and transformations without intervention from application code.
13. Runtime overview
• Related Tuscany modules
  - interface-java-jaxrs (introspection of JAX-RS annotated interfaces)
  - binding-rest (binding.rest XML/Java model)
  - binding-rest-runtime (runtime provider)
  - implementation-jaxrs (implementation.jaxrs XML/Java model)
  - implementation-jaxrs-runtime (runtime provider)
• Tuscany uses Apache Wink 1.1.1-incubating as the JAX-RS runtime
  - Contributions were made to the Wink runtime to facilitate embedding Wink; they will ship in Wink 1.1.2, which should be available in the near future.
15. Use Case #1: Exposing an SCA service to HTTP using REST
• We have an existing SCA service and want to make it RESTful
  1. Define a Java interface with JAX-RS annotations
     • Provide the URI
     • Map business operations to HTTP methods
     • Map input/output (path segments, query parameters, headers, etc.)
     • Configure wire formats
  2. Each method should have a compatible operation in the SCA service
16. REST Services
• Supports exposing existing SCA components as RESTful services
• Exposing a service as a RESTful resource:

  <component name="Catalog">
    <implementation.java class="services.FruitsCatalogImpl" />
    <service name="Catalog">
      <t:binding.rest uri="http://localhost:8080/Catalog">
        <t:wireFormat.json />
        <t:operationSelector.jaxrs />
      </t:binding.rest>
    </service>
  </component>

• Consuming REST services
  - Multiple JavaScript frameworks such as Dojo (XHR)
  - Regular Web browsers
  - Regular utilities such as cURL, wget, etc.
  - SCA components
[Diagram: a Fruit Catalog component exposed as a RESTful service with a JSON wire format]
17. Mapping business interfaces using JAX-RS

  @Remotable
  public interface Catalog {
      @GET
      Item[] getItem();

      @GET
      @Path("{id}")
      Item getItemById(@PathParam("id") String itemId);

      @POST
      void addItem(Item item);

      @PUT
      void updateItem(Item item);

      @DELETE
      @Path("{id}")
      void deleteItem(@PathParam("id") String itemId);
  }
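
For completeness, a minimal sketch of a compatible component implementation (each method lines up with an operation on the interface above); the Item bean with a getId() accessor and the in-memory map are illustrative assumptions, not part of the original deck:

  import java.util.HashMap;
  import java.util.Map;

  // Hypothetical implementation backing the Catalog interface above;
  // an in-memory map stands in for a real data store.
  public class FruitsCatalogImpl implements Catalog {
      private final Map<String, Item> items = new HashMap<String, Item>();

      public Item[] getItem() {
          return items.values().toArray(new Item[0]);
      }

      public Item getItemById(String itemId) {
          return items.get(itemId);
      }

      public void addItem(Item item) {
          items.put(item.getId(), item);
      }

      public void updateItem(Item item) {
          items.put(item.getId(), item);
      }

      public void deleteItem(String itemId) {
          items.remove(itemId);
      }
  }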
19. Use Case #2: Allowing HTTP GET access to an SCA RPC service
• We have an existing RPC-style SCA service and want to allow remote access over HTTP GET
  - No standard JAX-RS client is defined by the JAX-RS spec
  - We need to figure out the URI and parameter passing (positional or name/value pairs, headers, etc.)
20. RPC Services over HTTP GET
• The REST binding provides a mapping of your RPC-style calls over HTTP GET
• Exposing a service as a RESTful resource:

  <component name="Catalog">
    <implementation.java class="services.FruitsCatalogImpl" />
    <service name="Catalog">
      <t:binding.rest uri="http://localhost:8080/Catalog">
        <t:wireFormat.json />
        <t:operationSelector.rpc />
      </t:binding.rest>
    </service>
  </component>

• Client invocation
  - http://localhost:8085/EchoService?method=echo&msg=Hello RPC
  - http://localhost:8085/EchoService?method=echoArrayString&msgArray=Hello RPC1&msgArray=Hello RPC2
[Diagram: a Fruit Catalog component exposed over the REST binding with a JSON wire format]
21. Mapping queryString to RPC method parameters

  @Remotable
  public interface Echo {
      String echo(@QueryParam("msg") String msg);

      int echoInt(int param);

      boolean echoBoolean(boolean param);

      String[] echoArrayString(@QueryParam("msgArray") String[] stringArray);

      int[] echoArrayInt(int[] intArray);
  }
22. Design note
• As of today, we have a special path in the REST binding runtime to handle RPC over HTTP GET (operationSelector.rpc). Potentially, we could unify this approach with the runtime design that implements the logic for Use Case #1.
  - Imagine that you are writing a JAX-RS resource method that supports the RPC style (a GET with a list of query parameters, one of them being the method name)
• For a service method that is not annotated with JAX-RS HTTP method annotations, we would generate a JAX-RS resource class with a mapping such that each method gets:
  - @GET
  - @Path
  - @QueryParam for each of its arguments
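
As an illustration of that idea, a generated resource for the echo method above might conceptually look like the sketch below; this is a hypothetical rendering of the proposed unification, not code Tuscany emits today:

  import javax.ws.rs.GET;
  import javax.ws.rs.Path;
  import javax.ws.rs.QueryParam;
  import javax.ws.rs.WebApplicationException;

  // Hypothetical generated JAX-RS resource implementing RPC over HTTP GET:
  // GET /EchoService?method=echo&msg=Hello maps to Echo.echo("Hello")
  @Path("/EchoService")
  public class EchoRpcResource {
      private final Echo delegate;

      public EchoRpcResource(Echo delegate) {
          this.delegate = delegate;
      }

      @GET
      public String invoke(@QueryParam("method") String method,
                           @QueryParam("msg") String msg) {
          if ("echo".equals(method)) {
              return delegate.echo(msg);
          }
          throw new WebApplicationException(404); // unknown method name
      }
  }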
23. Use Case #3: Access external RESTful services using SCA
• We want to access external RESTful services from an SCA component using SCA references (with dependency injection) instead of calling technology APIs such as an HTTP client
24. Tuscany SCA client programming model for JAX-RS
• Model the target as an SCA reference configured with binding.rest in the client component
• Use a JAX-RS annotated interface to describe the outbound HTTP invocations (note that we use the same approach to handle inbound HTTP invocations for RESTful services exposed by SCA)
• A proxy is created by Tuscany to dispatch the outbound invocation according to the metadata provided by the JAX-RS annotations (such as @Path for the URI, @GET for the HTTP method, @QueryParam for parameters, etc.)
  - If no HTTP method is mapped, we fall back to RPC over HTTP GET
25. Sample configuration
• To invoke a RESTful resource such as getPhotoById(String id):

  @GET
  @Path("/photos/{id}")
  InputStream getPhotoById(@PathParam("id") String id);

• SCA reference:

  <reference name="photoService">
    <interface.java interface="…"/>
    <tuscany:binding.rest uri="http://example.com/"/>
  </reference>
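
To show how the reference is consumed, here is a minimal sketch of a client component; the PhotoService interface name (the JAX-RS annotated interface above) and the PhotoGallery class are illustrative assumptions:

  import java.io.InputStream;

  import org.oasisopen.sca.annotation.Reference;

  // Hypothetical client component: Tuscany injects the photoService proxy,
  // which dispatches getPhotoById(id) as GET http://example.com/photos/{id}
  public class PhotoGallery {

      @Reference
      protected PhotoService photoService;

      public InputStream fetchPhoto(String id) {
          return photoService.getPhotoById(id);
      }
  }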
27. Use Case #4: Drop a JAX-RS application into SCA
• We already have a JAX-RS application written to the JAX-RS programming model (taking full advantage of JAX-RS for the HTTP protocol, beyond just business logic).
• We want to encapsulate it as an SCA component so that it can be used in an SCA composite application (and potentially use SCA @Reference or @Property to inject service providers or property values).
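
A minimal sketch of what such a component might look like in the composite file; the attribute name and the services.GalleryApplication class are assumptions for illustration, since the deck does not show the exact implementation.jaxrs syntax:

  <component name="GalleryApplication">
    <!-- Hypothetical: wraps an existing javax.ws.rs.core.Application subclass -->
    <tuscany:implementation.jaxrs application="services.GalleryApplication"/>
  </component>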
32. Summary
• What do we support today?
  - Expose SCA services as RESTful services
  - Expose RPC-style SCA services via HTTP GET
  - Declaratively configure RESTful services
    • Cache-control headers and general HTTP headers
  - Consume RESTful services as SCA references
  - Reuse JAX-RS resources in your composite applications
34. What's next?
• What's on my "next" todo list?
  - WADL support via a ?wadl query on the service endpoint
  - Support for partial response and partial updates
• What's on yours?
  - Submit your suggestions to our user / development list