- Apache Thrift is a cross-language services framework that allows for the easy definition of data types and remote procedure calls (RPCs).
- It uses an interface definition language (IDL) to define data types and services, and generates code in various languages to implement clients and servers.
- Apache Thrift supports a wide range of languages and transports, making it useful for building high-performance, scalable distributed applications and microservices.
2. polyglotism
• Modern distributed applications are rarely composed of modules written in a single language
• Weaving together innovations made in a range of languages is a core competency of successful enterprises
• Cross language communications are a necessity, not a luxury
3. thrift
• Apache Thrift – A high performance, scalable cross language serialization and RPC framework
• What?
– Full RPC Implementation – Apache Thrift supplies a complete RPC solution: clients, servers, everything but your business logic
– Modularity – Apache Thrift supports plug-in serialization protocols: binary, compact, JSON, or build your own
– Multiple End Points – Plug-in transports for network, disk and memory end points, making Thrift easy to integrate with other communications and storage solutions like AMQP messaging and HDFS
– Performance – Apache Thrift is fast and efficient, with solutions for minimal parsing overhead and minimal message size
– Reach – Apache Thrift supports a wide range of languages and platforms: Linux, OSX, Windows, embedded systems, mobile, browser; C++, Go, PHP, Erlang, Haskell, Ruby, Node.js, C#, Java, C, OCaml, ObjectiveC, D, Perl, Python, SmallTalk, …
– Flexibility – Apache Thrift supports interface evolution; that is to say, CI/CD environments can roll out new interface features incrementally without breaking existing infrastructure
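The modularity point can be sketched with a toy in pure Python. This is not the real Thrift runtime: the `JsonProtocol` and `BinaryProtocol` classes and their wire layouts are invented for illustration. The idea it shows is real, though: the same field-ID-tagged struct can be written through interchangeable protocol objects, trading readability for compactness.

```python
# Toy sketch of Thrift's plug-in protocol idea (NOT the real Thrift wire
# formats): the same typed struct is written through interchangeable
# protocol implementations, here a JSON-like one and a compact binary one.
import json
import struct

class JsonProtocol:
    def serialize(self, fields):  # fields: {field_id: (name, value)}
        # Human-readable form: drops field IDs, keeps names.
        return json.dumps({name: value for name, value in fields.values()}).encode()

class BinaryProtocol:
    def serialize(self, fields):
        # Compact form: (field id, payload length, payload bytes) records.
        out = b""
        for field_id, (_name, value) in fields.items():
            data = str(value).encode()
            out += struct.pack(">HI", field_id, len(data)) + data
        return out

trade = {1: ("symbol", "IBM"), 2: ("price", 125)}

for proto in (JsonProtocol(), BinaryProtocol()):
    # The binary form comes out smaller than the JSON form.
    print(type(proto).__name__, len(proto.serialize(trade)), "bytes")
```

Real Thrift works the same way at a higher level of polish: the generated struct code calls an abstract protocol interface, and you choose binary, compact, or JSON when wiring up the client or server.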
4. rpcservices
• Service interfaces are described with the Apache Thrift Interface Definition Language (IDL)
• Client/server stubs are generated with the Apache Thrift IDL Compiler
• The IDL Compiler can generate stubs in over 15 languages
• Existing code modules are easily converted into RPC services using the Apache Thrift Server Library
5. performance & reach
• Thrift provides excellent performance for all but the most demanding solutions
• Thrift provides broad reach, supporting a wide range of languages and platforms

[Chart: a performance/reach spectrum comparing Custom, Thrift and REST approaches – custom compact frameworks (C, etc., embedded) deliver extreme performance; Thrift delivers high performance with broad reach (enterprise SOA, RPC servers, Java, C#, C++, etc.); REST (web tech, scripting) delivers extreme reach.]
6. thriftidl
1. Define the service interface in IDL
2. Compile the IDL to generate client/server stubs
3. Connect the server stubs to the desired implementation
4. Choose an Apache Thrift server to host your service
5. Call RPC functions like local function using the client stubs
#sail_stats.thrift
service SailStats {
double GetSailorRating(1: string SailorName)
double GetTeamRating(1: string TeamName)
double GetBoatRating(1: i64 BoatSerialNumber)
list<string> GetSailorsOnTeam(1: string TeamName)
list<string> GetSailorsRatedBetween(1: double MinRating,
2: double MaxRating)
string GetTeamCaptain(1: string TeamName)
}
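The core of steps 3–5 can be sketched without running the compiler: a generated server-side processor is essentially a dispatch table that routes a named RPC to a user-supplied handler object. The handler below, and its rating values, are invented purely for illustration:

```python
# Minimal sketch of what a generated Thrift processor does: dispatch a
# named RPC to a user-supplied handler (step 3 of the workflow above).
# The handler's data is made up for illustration.
class SailStatsHandler:
    def GetSailorRating(self, sailor_name):
        ratings = {"Ann": 4.2, "Bob": 3.8}
        return ratings.get(sailor_name, 0.0)

    def GetSailorsOnTeam(self, team_name):
        return ["Ann", "Bob"] if team_name == "Windward" else []

class Processor:
    def __init__(self, handler):
        self.handler = handler

    def process(self, method, *args):
        # Real generated code also deserializes arguments from the wire
        # and serializes the result; here we just look up and call.
        return getattr(self.handler, method)(*args)

proc = Processor(SailStatsHandler())
proc.process("GetSailorRating", "Ann")        # -> 4.2
proc.process("GetSailorsOnTeam", "Windward")  # -> ["Ann", "Bob"]
```

In the real framework the generated processor is hosted by a server from the Apache Thrift library, which handles connections and concurrency, so the handler class above is the only code the service author writes.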
7. Hello World
~/thrift/hello $ ls -l
-rw-r--r-- 1 dev dev 95 Mar 26 16:28 hello.thrift
~/thrift/hello $ thrift -gen py hello.thrift
~/thrift/hello $ ls -l
drwxr-xr-x 3 dev dev 4096 Mar 26 16:31 gen-py
-rw-r--r-- 1 dev dev 95 Mar 26 16:28 hello.thrift
~/thrift/hello $ ls -l gen-py
drwxr-xr-x 2 dev dev 4096 Mar 26 16:31 hello
-rw-r--r-- 1 dev dev 0 Mar 26 16:31 __init__.py
~/thrift/hello $ ls -l gen-py/hello
-rw-r--r-- 1 dev dev 248 Mar 26 16:31 constants.py
-rw-r--r-- 1 dev dev 5707 Mar 26 16:31 HelloSvc.py
-rwxr-xr-x 1 dev dev 1896 Mar 26 16:31 HelloSvc-remote
-rw-r--r-- 1 dev dev 46 Mar 26 16:31 __init__.py
-rw-r--r-- 1 dev dev 398 Mar 26 16:31 ttypes.py
~/thrift/hello $
• Compiling IDL for an RPC service
Generated code:
• constants.py – interface constants
• HelloSvc.py – HelloSvc stubs
• HelloSvc-remote – sample client
• __init__.py – package init file
• ttypes.py – interface types
13. • Apache Thrift uses a compiled IDL implementation
– IDL is compiled generating stubs used at run time
– Runtime serialization code allows for interface evolution
• Apache Thrift IDL supports
– Service Interface Definition
– Type Definition
• Service interfaces are exposed by Servers
– Servers can implement many service interfaces
– Interfaces can inherit from other interfaces
• Types define serialization schemas
– Types can be serialized to memory, disk or networks
– Collections are supported (map, list, set)
– Structures and Unions support composite types
– DAGs are supported
• A newer addition, not yet widely implemented
More than RPC
Interface Evolution
Apache Thrift allows fields and parameters to be added and
removed incrementally without breaking pre-existing code,
allowing systems to grow incrementally over time (just like
businesses). This is particularly effective in dynamic CI/CD
environments.
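The interface-evolution claim can be made concrete: because every serialized field carries a numeric id, a reader built against an older IDL can simply skip fields it does not recognize. The sketch below assumes a simplified, i32-only stand-in for the binary layout (type byte, big-endian i16 field id, i32 value, zero stop byte); it is not the real library code:

```python
import struct

# Sketch of how field-id tagging enables interface evolution: a reader
# keeps the field ids it knows and skips the rest.
# Simplified i32-only layout: type byte, i16 field id, i32 value, 0x00 stop.
T_I32, T_STOP = 8, 0

def decode_known(data, known_ids):
    fields, pos = {}, 0
    while data[pos] != T_STOP:
        ftype, fid, value = struct.unpack_from(">bhi", data, pos)
        pos += 7
        if fid in known_ids:       # an old reader simply ignores new fields
            fields[fid] = value
    return fields

# A "new" writer sends fields 1 and 2; an "old" reader only knows field 1.
wire = (struct.pack(">bhi", T_I32, 1, 42) +
        struct.pack(">bhi", T_I32, 2, 99) + b"\x00")
decode_known(wire, {1})   # field 2 is skipped; nothing breaks
```

The same mechanism works in the other direction: a new reader given an old message simply finds some known field ids absent, which is why added fields should be optional or carry defaults.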
14. Abstract and Isolated
• Crafting an effective IDL requires understanding
some of the most important things about your
system:
– What are the key entities in your system, and how are they
described?
– What are their cardinalities?
– What are their keys?
– Which are immutable?
– Can you define idempotent interfaces for mutations?
– What are the operational affinity groups in your system?
– What is your system model, and how will state and services
be distributed across it?
• All of these things bear directly on IDL design
• Free of implementation, you can get the concepts
right and then choose the best languages and tools
to implement them
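One of the design questions above, idempotency, is worth a concrete example: an absolute "set" mutation can be retried safely after a network timeout, while a relative "increment" cannot. The rating store below is a made-up illustration, not part of any real interface:

```python
# Illustration of the idempotency question above, using a made-up
# rating store: "set to X" survives a retry unchanged, "add X" does not.
ratings = {"Ann": 4.0}

def set_rating(name, value):      # idempotent: retrying is harmless
    ratings[name] = value

def add_to_rating(name, delta):   # not idempotent: a retry double-counts
    ratings[name] += delta

set_rating("Ann", 4.5)
set_rating("Ann", 4.5)            # retried call: still 4.5

add_to_rating("Ann", 0.5)
add_to_rating("Ann", 0.5)         # retried call: now 5.5, double-counted
```

A client that times out on `set_rating` can safely resend it; a client that times out on `add_to_rating` cannot know whether the first call was applied, which is why idempotent mutations simplify distributed error handling.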
15. Entities & IDL
• Modern IDLs, like Apache Thrift, provide a rich set of
tools for describing system entities (a.k.a. messages)
• Capturing and codifying the key system entities is a
prerequisite to effective interface specification
• In some settings, crafting the entities (e.g. an order)
is all that the IDL need do
Services are optional!
16. Communication Schemes
• Streaming – Communications
characterized by an ongoing flow of
bytes from a server to one or more
clients.
– Example: An internet radio broadcast
where the client receives bytes over time
transmitted by the server in an ongoing
sequence of small packets.
• Messaging – Message passing
involves one-way, asynchronous, often
queued, communications, producing
loosely coupled systems.
– Example: Sending an email message
where you may get a response or you
may not, and if you do get a response you
don’t know exactly when you will get it.
• RPC – Remote Procedure Call systems
allow function calls to be made
between processes on different
computers.
– Example: An iPhone app calling a service
on the Internet which returns the
weather forecast.
Apache Thrift is an efficient cross platform
serialization solution for streaming interfaces
Apache Thrift provides a complete RPC framework
17. Architecture
• User Code
– client code calls RPC
methods and/or
[de]serializes objects
– service handlers
implement RPC service
behavior
• Generated Code
– RPC stubs supply client
side proxies and server
side processors
– type serialization code
provides serialization for
IDL defined types
• Library Code
– servers host user
defined services,
managing connections
and concurrency
– protocols perform
serialization
– transports move bytes
from here to there
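The protocol/transport split described above can be sketched in a few lines: the protocol decides how a typed value becomes bytes, and the transport decides where those bytes go. Both classes below are simplified stand-ins for illustration, not the library's actual API:

```python
import struct

# Simplified stand-ins for Thrift's layering: a transport moves bytes,
# a protocol turns typed values into bytes. Not the real library API.
class MemoryTransport:
    def __init__(self):
        self.buffer = bytearray()

    def write(self, data):
        # A file or socket transport would differ only in this method.
        self.buffer += data

class BinaryishProtocol:
    def __init__(self, transport):
        self.transport = transport

    def write_i32(self, value):
        self.transport.write(struct.pack(">i", value))

    def write_string(self, s):
        data = s.encode("utf-8")
        self.transport.write(struct.pack(">i", len(data)) + data)

trans = MemoryTransport()
proto = BinaryishProtocol(trans)
proto.write_i32(7)
proto.write_string("hi")
# trans.buffer now holds 4 + (4 + 2) = 10 bytes
```

Because the protocol only ever calls `write`, swapping the memory transport for a network or disk transport (or the binary-style protocol for a JSON one) requires no change to the code on the other side of the boundary, which is exactly the plug-in property the architecture slide describes.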
18. • The Thrift framework was originally developed at Facebook and
released as open source in 2007. The project became an Apache
Software Foundation incubator project in 2008, after which four
early versions were released.
• 0.2.0 released 2009-12-12
• 0.3.0 released 2010-08-05
• 0.4.0 released 2010-08-23
• 0.5.0 released 2010-10-07
• In 2010 the project was moved to Apache top level status where
several additional versions have been released.
• 0.6.0 released 2011-02-08
• 0.6.1 released 2011-04-25
• 0.7.0 released 2011-08-13
• 0.8.0 released 2011-11-29
• 0.9.0 released 2012-10-15
• 0.9.1 released 2013-07-16
• 0.9.2 released 2014-06-01
• 1.0.0 released 2015-01-01
Versions
"It is difficult to make predictions,
particularly about the future."
– attributed to Mark Twain, Yogi Berra, etc.
Open Source
Community Developed
Apache License Version 2.0
19. resources
• Web
– thrift.apache.org
– github.com/apache/thrift
• Mail
– Users: user-subscribe@thrift.apache.org
– Developers: dev-subscribe@thrift.apache.org
• Chat
– #thrift
• Book
– Abernethy (2014), The Programmer’s Guide to
Apache Thrift, Manning Publications Co.
[http://www.manning.com/abernethy/]
Chapter 1 is free
Randy Abernethy
ra@apache.org