Visualforce Remote Objects, Visual Workflow, and Developer Console received new features. Canvas Apps can now be added to page layouts and support SAML single sign-on. The Apex Flex Queue pilot allows submitting more batch jobs simultaneously. Push notifications can now be configured for Mobile SDK connected apps. Change Sets and deployment tools gained additional monitoring capabilities.
Why you care about relational algebra (even though you didn’t know it) - Julian Hyde
A talk given by Julian Hyde at Enterprise Data World in Washington, DC, on April 2nd, 2015.
With data in different systems, in different formats, and accessed via different tools, we need a lingua franca for data. Not all tools speak SQL, and data cannot be moved into a single convenient location.
Relational algebra underpins SQL and many other DB languages. It is also perfect for optimizing, caching and mediating.
Apache Calcite (formerly Optiq) is a framework for building and optimizing expressions in relational algebra. We show how to write queries, optimize queries using rewrite rules, and write adapters for back-end systems. We also show how to configure Calcite to materialize queries, so your interactive analytics are effectively running against a fast in-memory database.
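To make the idea concrete, here is a minimal sketch (plain Python, not Calcite's actual API) of relational algebra operators over lists of dicts, plus one rewrite rule of the kind an optimizer applies: pushing a filter below a join so that fewer rows are processed.

```python
# Illustrative sketch only: relational algebra as plain Python functions.

def select(rows, pred):          # sigma: keep rows matching a predicate
    return [r for r in rows if pred(r)]

def project(rows, cols):         # pi: keep only the named columns
    return [{c: r[c] for c in cols} for r in rows]

def join(left, right, key):      # natural join on a single key column
    index = {}
    for r in right:
        index.setdefault(r[key], []).append(r)
    return [{**l, **r} for l in left for r in index.get(l[key], [])]

emps = [{"empno": 1, "name": "Ada", "deptno": 10},
        {"empno": 2, "name": "Bob", "deptno": 20}]
depts = [{"deptno": 10, "dname": "Eng"}, {"deptno": 20, "dname": "Ops"}]

# Original plan: project(select(join(...)))
result = project(select(join(emps, depts, "deptno"),
                        lambda r: r["dname"] == "Eng"),
                 ["name"])

# A rewrite rule ("push the filter below the join") yields an
# equivalent plan that joins against fewer rows:
rewritten = project(join(emps,
                         select(depts, lambda r: r["dname"] == "Eng"),
                         "deptno"),
                    ["name"])

assert result == rewritten == [{"name": "Ada"}]
```

Both plans return the same relation; the rewritten one touches fewer rows, which is exactly the kind of equivalence-preserving transformation a cost-based optimizer searches for.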
Streaming is necessary to handle IoT data rates and latency but SQL is unquestionably the lingua franca of data. Apache Samza and Apache Storm have new high-level query interfaces based on standard SQL with streaming extensions, both powered by Apache Calcite. Calcite's relational algebra allows query optimization and federation with data-at-rest in databases, memory, or HDFS.
A talk given by Julian Hyde at Hadoop Summit, San Jose, on 2016/06/29.
Streaming is necessary to handle data rates and latency, but SQL is unquestionably the lingua franca of data. Is it possible to combine SQL with streaming, and if so, what does the resulting language look like? Apache Calcite is extending SQL to include streaming, and Apache Apex is using Calcite to support streaming SQL. In this talk, Julian Hyde describes streaming SQL in detail and shows how you can use streaming SQL in your application. He also describes how Calcite’s planner optimizes queries for throughput and latency.
Julian Hyde gave this talk at Apex Big Data World, Mountain View, on April 4, 2017.
Streaming is a paradigm for data processing that is rapidly growing in popularity, because it allows high throughput, low latency responses, and efficiently manages multitudes of IoT devices. Is it an alternative to database processing, or is it complementary? Julian Hyde argues for applying the database paradigm to streaming systems, using SQL as a high-level language for streaming. He presents streaming SQL, a super-set of standard SQL developed in collaboration with several Apache projects, and the use cases it can solve, such as combining data in flight with historic data at rest. He also shows how query optimization techniques can make streaming applications more efficient.
A talk given by Julian Hyde at 9th XLDB conference at SLAC, Menlo Park, on 2016/05/25.
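As a minimal illustration of what the streaming SQL described above computes, here is a tumbling-window aggregate in plain Python (in Calcite's streaming syntax this corresponds roughly to a `GROUP BY TUMBLE(rowtime, ...)` query; the Python function and names below are illustrative, not any project's API).

```python
# Sketch: count events per fixed, non-overlapping (tumbling) window.
from collections import Counter

def tumbling_counts(events, width):
    """events: iterable of (timestamp, key); width: window size in seconds.
    Returns counts keyed by (window_start, key)."""
    counts = Counter()
    for ts, key in events:
        counts[(ts - ts % width, key)] += 1   # bucket into its window
    return dict(counts)

events = [(0, "a"), (10, "a"), (70, "b"), (75, "a")]
print(tumbling_counts(events, 60))
# {(0, 'a'): 2, (60, 'b'): 1, (60, 'a'): 1}
```

A real streaming engine emits each window's counts as soon as the window closes, rather than collecting everything first; the grouping logic, however, is the same.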
Cost-based query optimization in Apache Hive - Julian Hyde
Tez is making Hive faster, and now cost-based optimization (CBO) is making it smarter. A new initiative in Hive 0.13 introduces cost-based optimization for the first time, based on the Optiq framework.
Optiq’s lead developer Julian Hyde shows the improvements that CBO is bringing to Hive 0.13. For those interested in Hive internals, he gives an overview of the Optiq framework and shows some of the improvements that are coming to future versions of Hive.
Enterprise data is moving into Hadoop, but some data has to stay in operational systems. Apache Calcite (the technology behind Hive’s new cost-based optimizer, formerly known as Optiq) is a query-optimization and data federation technology that allows you to combine data in Hadoop with data in NoSQL systems such as MongoDB and Splunk, and access it all via SQL.
Hyde shows how to quickly build a SQL interface to a NoSQL system using Calcite. He shows how to add rules and operators to Calcite to push down processing to the source system, and how to automatically build materialized data sets in memory for blazing-fast interactive analysis.
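The "push down processing to the source system" idea can be sketched in a few lines: instead of fetching all rows and filtering locally, an adapter translates a relational predicate into the source's native query language. The function below is a hypothetical example targeting MongoDB-style operators; it is not Calcite's adapter API.

```python
# Hypothetical sketch of filter pushdown to a NoSQL source.

def push_down_filter(column, op, value):
    """Translate a SQL-style predicate into a MongoDB-style find() filter.
    The operator mapping below is illustrative and incomplete."""
    mongo_ops = {"=": "$eq", ">": "$gt", "<": "$lt",
                 ">=": "$gte", "<=": "$lte"}
    return {column: {mongo_ops[op]: value}}

# WHERE age >= 21 becomes a native filter executed at the source,
# so only matching documents cross the wire:
print(push_down_filter("age", ">=", 21))   # {'age': {'$gte': 21}}
```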
What is SamzaSQL, and what might I use it for? Does this mean that Samza is turning into a database? What is a query optimizer, and what can it do for my streaming queries?
How does Apache Calcite parse, validate and optimize streaming SQL queries? How is relational algebra extended to handle streaming?
(DAT311) Large-Scale Genomic Analysis with Amazon Redshift - Amazon Web Services
Genomics analysis is one of the biggest data problems out there. With DNA sequencing finally down to an affordable cost, the current bottleneck is shifting from sequencing genomes to deriving meaning from genomes at a large scale. Learn how Human Longevity, Inc., uses Amazon Redshift to analyze thousands of whole genomes every month. Dive into their detailed architecture, including how they ingest terabytes of genomic information each day. Learn how they optimize their schema, rapidly analyzing thousands of genomes in a single query using a "select, aggregate, annotate" paradigm. Finally, learn best practices for using Amazon Redshift to accelerate research.
Apache Calcite: A Foundational Framework for Optimized Query Processing Over ... - Julian Hyde
A talk given at ACM SIGMOD 2018 in support of the paper <a href="https://arxiv.org/abs/1802.10233"> Calcite: A Foundational Framework for Optimized Query Processing Over Heterogeneous Data Sources</a>.
Apache Calcite is a foundational software framework that provides query processing, optimization, and query language support to many popular open-source data processing systems such as Apache Hive, Apache Storm, Apache Flink, Druid, and MapD. Calcite's architecture consists of a modular and extensible query optimizer with hundreds of built-in optimization rules, a query processor capable of processing a variety of query languages, an adapter architecture designed for extensibility, and support for heterogeneous data models and stores (relational, semi-structured, streaming, and geospatial). This flexible, embeddable, and extensible architecture is what makes Calcite an attractive choice for adoption in big-data frameworks. It is an active project that continues to introduce support for new types of data sources, query languages, and approaches to query processing and optimization.
Gaining actionable insights in real time enables organizations to seize opportunities and avert threats. Sensing the world, detecting actionable insights, and acting upon them has become far easier than ever with the advancements of streaming SQL. Below are the topics discussed in this slide deck.
- Building stream processing applications using streaming SQL
- Deploying and monitoring streaming applications
- Scaling streaming applications
- Building domain-specific business UIs
- Visualizing stream processing outputs via dashboards
Apache Calcite is a dynamic data management framework. Think of it as a toolkit for building databases: it has an industry-standard SQL parser, validator, highly customizable optimizer (with pluggable transformation rules and cost functions, relational algebra, and an extensive library of rules), but it has no preferred storage primitives. In this tutorial, the attendees will use Apache Calcite to build a fully fledged query processor from scratch with very few lines of code. This processor is a full implementation of SQL over an Apache Lucene storage engine. (Lucene does not support SQL queries and lacks a declarative language for performing complex operations such as joins or aggregations.) Attendees will also learn how to use Calcite as an effective tool for research.
Querying the Internet of Things: Streaming SQL on Kafka/Samza and Storm/Trident - Julian Hyde
A talk given at Hadoop Summit 2016, Dublin.
The internet of things (IoT) generates data at an unprecedented rate and requires results at low latency. Only streaming technologies can keep up. But IoT applications must be integrated with existing applications, such as Tableau, whose lingua franca is SQL. Samza and Storm are adding support for standard SQL with extensions for streaming, using Calcite for parsing and planning. We show, with examples, how you would accomplish typical tasks in Kafka/Samza or Storm/Trident using SQL. Streaming SQL allows you to work at a higher level. For example, many queries need to combine streams with historic or reference data in databases or HDFS, or recent data in memory. We show how to define multiple data source systems, and how to write queries that compute joins or aggregations on streams.
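The stream-to-table combination described above can be sketched as a simple per-event lookup: each event in flight is enriched with reference data at rest. The shapes and names below are assumptions for illustration, not Samza or Storm APIs.

```python
# Sketch: enriching a stream of orders with a reference table.

reference = {"p1": "widget", "p2": "gadget"}   # historic/reference data

def enrich(stream):
    """Join each in-flight event against the reference table."""
    for order in stream:                        # events arrive one by one
        name = reference.get(order["product"], "unknown")
        yield {**order, "product_name": name}

orders = [{"id": 1, "product": "p1"}, {"id": 2, "product": "p3"}]
print(list(enrich(orders)))
# [{'id': 1, 'product': 'p1', 'product_name': 'widget'},
#  {'id': 2, 'product': 'p3', 'product_name': 'unknown'}]
```

In SQL terms this is a join between a stream and a table; a planner can choose whether the lookup hits a cache, a database, or HDFS.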
AWS June Webinar Series - Getting Started: Amazon Redshift - Amazon Web Services
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. In this presentation, you'll get an overview of Amazon Redshift, including how it uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. Learn how, with just a few clicks in the AWS Management Console, you can set up a fully functional data warehouse, ready to accept data without learning any new languages, and easily plug in the existing business intelligence tools and applications you use today. This webinar is ideal for anyone looking to gain deeper insight into their data without the usual challenges of time, cost, and effort. In this webinar, you will learn how to:
• Understand what Amazon Redshift is and how it works
• Create a data warehouse interactively through the AWS Management Console
• Load data into your new Amazon Redshift data warehouse from S3
Who should attend: IT professionals, developers, line-of-business managers
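A toy sketch of why the columnar layout mentioned above speeds up analytics: an aggregate over one column only needs to read that column's contiguous array, not every field of every row. This is illustrative Python, not anything Redshift-specific.

```python
# Sketch: row layout vs. column layout for a single-column aggregate.

rows = [{"id": i, "price": i * 1.0, "note": "x" * 50} for i in range(1000)]

# Row store: every row object (including the wide 'note' field) is visited.
row_total = sum(r["price"] for r in rows)

# Column store: the same query scans one contiguous column built at load time.
price_column = [r["price"] for r in rows]
col_total = sum(price_column)

assert row_total == col_total == sum(range(1000))
```

In a real columnar engine the saving is in I/O: unrelated columns are never read from disk, and a single column compresses far better than mixed row data.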
Planning with Polyalgebra: Bringing Together Relational, Complex and Machine ... - Julian Hyde
A talk given by Julian Hyde and Tomer Shiran at Hadoop Summit, Dublin.
Data scientists and analysts want the best API, DSL or query language possible, not to be limited by what the processing engine can support. Polyalgebra is an extension to relational algebra that separates the user language from the engine, so you can choose the best language and engine for the job. It also allows the system to optimize queries and cache results. We demonstrate how Ibis uses Polyalgebra to execute the same Python-based machine learning queries on Impala, Drill and Spark. And we show how to build Polyalgebra expressions in Calcite and how to define optimization rules and storage handlers.
In this performance-oriented session, we will cover tuning techniques that take advantage of Amazon Redshift's columnar technology and massively parallel processing architecture. We will also discuss best practices for migrating from existing data warehouses, optimizing your schema, loading data efficiently, and using workload management and interleaved sorting.
Apache Spark is a tool for large-scale data processing. Using it in a distributed environment to process large data sets brings enormous benefits.
But what about a fast feedback loop while developing applications with Apache Spark? Testing applications on a cluster is essential, but it does not seem to be what most developers are used to when practicing TDD.
In this talk, Łukasz shared several tips on how to write unit and integration tests, and how Docker can be used to test Spark on a local machine.
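One common tip of this kind, sketched below under assumed function names: keep transformation logic in plain functions, so a unit test needs no cluster at all; the same callables are later handed to `rdd.map()` and friends.

```python
# Sketch: cluster-free unit tests for Spark-style transformation logic.

def parse_line(line):
    """Pure transformation: parse 'user,amount' into a (user, float) pair."""
    user, amount = line.split(",")
    return user, float(amount)

def total_per_user(pairs):
    """Pure aggregation: sum amounts per user."""
    totals = {}
    for user, amount in pairs:
        totals[user] = totals.get(user, 0.0) + amount
    return totals

# The unit test runs on a local machine in milliseconds, no Spark required:
lines = ["ann,10.5", "bob,2.0", "ann,1.5"]
assert total_per_user(map(parse_line, lines)) == {"ann": 12.0, "bob": 2.0}
```

Integration tests against a real (Dockerized) cluster then only need to verify the wiring, not the logic.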
Everyday I'm Shuffling - Tips for Writing Better Spark Programs, Strata San J... - Databricks
Watch video at: http://youtu.be/Wg2boMqLjCg
Want to learn how to write faster and more efficient programs for Apache Spark? Two Spark experts from Databricks, Vida Ha and Holden Karau, provide some performance tuning and testing tips for your Spark applications.
SQL on Big Data is not "one size fits all". Optiq is a framework that allows you to build a data management system on top of any back-end system, including NoSQL and Hadoop, with rules that optimize query processing for the capabilities of the data source. We show how Optiq is used in the Apache Drill and Cascading Lingual projects, and how we plan to combine Optiq materialized views, Mondrian, and a data grid to create next-generation in-memory analytics.
This presentation was given at the Real-Time Big Data meetup at RichRelevance in San Francisco, 2013-04-09.
Explore DynamoDB capabilities and benefits in detail and learn how to get the most out of your DynamoDB database. We go over schema design best practices with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others.
My iLeverage Business Opportunity Presentation SALI NA!!! - Nesnorman Felicen
MANY HAVE ALREADY EARNED. WILL YOU BE LEFT BEHIND? JOIN NOW...!!
REGISTER: http://www.ileverage.biz/nesnormanfelicen01
FACEBOOK: NesnormanLazarteFelicen
CONTACT NUMBER: 0921-746-0046
At Yahoo, our Salesforce developers are thinking 'above and beyond' to create innovative solutions with Apex and Visualforce. Join us as we discuss patterns for deep clone, mass and bulk edit, and walk through the data import wizard we built to allow our sales team to synchronously modify 10,000 records at a time.
Learn tuning best practices for taking advantage of Amazon Redshift's columnar technology and parallel processing capabilities to speed up queries and improve overall database performance. This session explains how to migrate from existing data warehouses, create an optimized schema, efficiently load data, use workload management, tune your queries, and use Amazon Redshift's interleaved sorting features. Finally, learn how to use these best practices to give your entire organization access to analytic insights at scale.
Presented by: Alex Sinner, Solutions Architecture PMO, Amazon Web Services
Customer Guest: Luuk Linssen, Product Manager, Bannerconnect
Informatica Power Center - Workflow Manager - ZaranTech LLC
50-55 hours Training + Assignments + Actual Project Based Case Studies
All attendees will receive:
- Assignment after each module
- Video recording of every session
- Notes and study material for examples covered
- Access to the Training Blog & Repository of Materials
Training Highlights
Focus on Hands on training
30-35 hours of Assignments, Live Case Studies
Video Recordings of sessions provided
Demonstration of Concepts using different tools
One Problem Statement discussed across the Whole training program
Informatica Certification Guidance
Resume prep, Interview Questions provided
Introduction to Data Warehousing, Informatica Designer
Understand the Transformation, Mapping and Qualifier
Informatica Advanced Features
Tuning and Optimizing WebCenter Spaces Application White Paper - Vinay Kumar
This white paper focuses on Oracle WebCenter Spaces performance problems and their analysis after post-production deployment. We will tune the JVM (JRockit), WebCenter Portal, WebCenter Content, and ADF task flows.
Serverless technologies and capabilities are here, and they are more accessible now than ever.
The power of infinite scale and rich system capabilities is within easy reach. This also affects traditional front-end development, as serverless technologies allow for straightforward construction of backend support for any frontend.
In this talk, we will demonstrate how to build a fully functional GraphQL endpoint for FE applications using the Apollo Server and Client libraries, utilizing different cloud providers. We will also demonstrate the use of the Serverless.com framework to set up the required infrastructure as code to simplify and support this setup.
The video of the presentation (Hebrew):
https://youtu.be/8ba4cpdtK-8
SQL Performance Tuning and New Features in Oracle 19c - RachelBarker26
What's new in Oracle 19c (and CMiC R12) and the reporting software Jaspersoft Studio. If you are not interested in Jasper, go ahead and skip to page 26. Explains how to read an execution plan and what to look for in an optimized execution plan.
Online SAP Testing Training is an experienced SAP consulting and training institute that delivers the highest-quality solutions to our clients, meeting their requirements with consistency.
We are committed to helping you train a handful of employees or your entire organization on software essentials and advanced techniques. Our comprehensive online virtual Training libraries cover hot topics related to SAP Testing Techniques.
Our flexible and scalable options are well-suited for companies of any size. We work with leading global organizations to positively impact workforce productivity and efficiency. Our solutions are proven to increase utilization of software investments and provide the confidence to continue to invest as new software applications become available.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam - takuyayamamoto1800
In this slide deck, we show a simulation example and the way to compile this solver.
In this solver package, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR - Tier1 app
Even though at surface level ‘java.lang.OutOfMemoryError’ appears to be one single error, there are in fact 9 types of OutOfMemoryError underneath. Each type has different causes, diagnosis approaches, and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... - Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Globus Connect Server Deep Dive - GlobusWorld 2024 - Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
2. 1. Visualforce Remote Objects Enhancements
2. Canvas Updates
3. Visual Workflow Enhancements
4. Developer Console Additions
5. Change Sets & Deployment Tools
6. New Apex Enhancements
7. Configure Push Notifications for Your Salesforce
Mobile SDK Connected Apps
3. Visualforce Remote Objects Enhancements
Remains in Developer Preview for this release
● Remote method overrides
● An upsert() operation
● The orderby query condition for specifying sort order
● Geolocation fields
● New query operators for where conditions
4. ★ Remote method overrides:
Extend or replace the built-in behavior of the basic CRUD operations. Overrides provide new flexibility by letting you add your own custom logic to the basic operations; with your own Apex code, you can extend or customize the behavior of Remote Objects.
★ An upsert() operation
A shortcut for saving data: it updates an object if it exists, or creates it if it doesn't.
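The upsert() call can be sketched as follows. This is illustrative only: it assumes a Visualforce page that declares a Contact remote object (with FirstName, LastName, and Phone fields) via the apex:remoteObjects component, and the callback shape follows the general Remote Objects pattern of (error, results).

```javascript
// Sketch only: assumes a Contact remote object is declared on the
// Visualforce page via <apex:remoteObjects>, e.g.
//   <apex:remoteObjectModel name="Contact"
//       fields="Id,FirstName,LastName,Phone"/>
var ct = new SObjectModel.Contact();
ct.set('FirstName', 'Jane');
ct.set('LastName', 'Doe');

// upsert(): creates the record when no Id is set, updates it otherwise
ct.upsert(function(err, ids) {
    if (err) {
        console.error(err.message);
    } else {
        console.log('Upserted record Id(s): ' + ids);
    }
});
```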
5. ★ The orderby query condition for specifying sort
order:
You can sort on up to three fields.
orderby: [ {Phone: "DESC NULLS LAST"} ,
{FirstName: "ASC"} ]
★ Where Condition Operators
1. ne: not equals
2. lte: less than or equals
3. gte: greater than or equals
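The new where operators combine naturally with orderby in a retrieve() call. The sketch below is illustrative only: it assumes a Contact remote object declared on the page, and the field names and filter values are placeholders.

```javascript
// Sketch only: assumes a Contact remote object declared on the page
// via <apex:remoteObjects>. Field names and values are placeholders.
var ct = new SObjectModel.Contact();
ct.retrieve({
    where: {
        LastName: { ne: 'Smith' }            // ne: not equals
    },
    orderby: [ { Phone: 'DESC NULLS LAST' },
               { FirstName: 'ASC' } ],
    limit: 10
}, function(err, records) {
    if (err) { console.error(err.message); return; }
    records.forEach(function(rec) {
        console.log(rec.get('FirstName') + ': ' + rec.get('Phone'));
    });
});
```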
6. Canvas Updates
Canvas is a platform for integrating external apps into Salesforce
from their native environment.
★ Canvas Apps in Page Layouts
★ Request a Signed Request on Demand
★ User Approved Signed Canvas Apps
★ SAML -Single Sign-on for Canvas Apps
★ Custom App Life Cycle
7. Canvas Apps in Page Layouts
Go to the Canvas App Settings section of the canvas app you created.
Add Layouts and Mobile Cards to the Locations field.
8. Request a Signed Request on Demand
● Useful for refreshing an expired session without interrupting the user.
● Use the JavaScript SDK (include canvas-all.js in your JavaScript code).
● Use the refreshSignedRequest() and repost() methods in the SDK.
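A minimal sketch of refreshing a signed request from JavaScript, assuming canvas-all.js is loaded in the page; the success-handling logic is a placeholder for whatever your app does with the new signed request.

```javascript
// Sketch only: assumes canvas-all.js is included in the page.
// Ask Salesforce for a fresh signed request without reloading the app.
Sfdc.canvas.client.refreshSignedRequest(function(data) {
    if (data.status === 200) {
        // The payload carries the new signed request string
        var signedRequest = data.payload.response;
        console.log('New signed request received');
        // ...verify the signature server-side and resume work...
    } else {
        // Fall back to reposting the page to obtain a new session
        Sfdc.canvas.client.repost();
    }
});
```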
9. User Approved Signed Canvas Apps
● A canvas app can be accessed in two ways: signed request or OAuth.
● Previously, an app was user-approved only if OAuth was used for access.
● Now an app accessed via signed request can also be user-approved.
10. SAML Single Sign-On for Canvas Apps
● SAML (Security Assertion Markup Language)
● Provides app users with a seamless authentication flow (whether OAuth or
signed request).
11. Using the Custom App Life Cycle
In the Canvas App Settings, add a custom Apex class under Lifecycle Class.
Implement the CanvasLifecycleHandler interface in the Apex class to:
● Control which context data gets sent to your app.
● Alter the behavior of the canvas app, based on the retrieved data, while the
app is rendered.
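A lifecycle handler might look like the sketch below. The class name is a placeholder; the interface and method shapes follow the Canvas.CanvasLifecycleHandler contract, but treat the specific context accessors as assumptions to verify against the Canvas developer guide.

```apex
// Sketch only: MyCanvasLifecycleHandler is a placeholder name.
public class MyCanvasLifecycleHandler
        implements Canvas.CanvasLifecycleHandler {

    // Exclude context data the app does not need, trimming the
    // signed request payload that gets sent to the canvas app.
    public Set<Canvas.ContextTypeEnum> excludeContextTypes() {
        return new Set<Canvas.ContextTypeEnum>{
            Canvas.ContextTypeEnum.ORGANIZATION
        };
    }

    // Called while the app is rendered; inspect the retrieved
    // context and alter app behavior accordingly.
    public void onRender(Canvas.RenderContext renderContext) {
        Canvas.ApplicationContext app =
            renderContext.getApplicationContext();
        System.debug('Rendering canvas app version: ' + app.getVersion());
    }
}
```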
12. Visual Workflow Enhancements
Visual Workflow, built using Force.com's Cloud Flow Designer, lets you visually
string together one or more forms, business rules, and calls to back-end APIs to
implement a complete business process without writing code. New features for this
release include:
● Changes to trigger ready flows
● Manipulate Multiple Salesforce Fields and Records at One Time in a Flow
(Generally Available)
● Cross-Object Field References in Flows
● Use an Object-Specific or Global Action in a Flow
● Send Email from a Flow
● Governor Limits Enforced on All Flows
13. ★ Changes to trigger-ready flows:
A trigger-ready flow is a flow that can be launched without user interaction. The list of elements
and resources that trigger-ready flows can contain has also been updated.
★ Manipulate Multiple Salesforce Fields and Records at One Time in a Flow (Generally
Available)
Collect the values for multiple fields for Salesforce records with a single query, and manipulate
that data with a single DML statement by using sObject variables, sObject collection variables,
and loops in a flow.
★ Cross-Object Field References in Flows
When building a flow, you can now reference fields for records that are related to the values that
are stored in an sObject variable. To do so, you must manually enter the references. You can
reference cross-object fields to use their values anywhere you can reference a flow resource or
enter a value.
14. ★ Use an Object-Specific or Global Action in a Flow
As an alternative to Record or Fast elements, flows can now use object-specific and global
actions to create and update Salesforce records. Set the input assignments to transfer data from
the flow to the action.
★ Send Email from a Flow
You now have two options for sending email from a flow: call an existing workflow email alert or
configure the email within the flow.
★ Governor Limits Enforced on All Flows
Previously, flows could potentially consume more resources than our governor limits allow.
Governor limits are now enforced on all flows.
15. Developer Console Additions
● Search and Edit Files with the Edit Menu
● Find Files By Name
● Speed Up Queries with the Query Plan Tool
● View Color-Coded Logging Expiration Data
★ Search and Edit Files with the Edit Menu
➔ Find
➔ Find Next
➔ Find/Replace
➔ Search in Files
➔ Fix Indentation
16. ★ Find Files By Name
In addition to the type-based Open dialog, the new Open Resource dialog allows you to search for a file by
name by entering a simple string, or by using regular expressions (prefix with “re:”). Click File > Open
Resource or press SHIFT+CTRL+O.
★ Speed Up Queries with the Query Plan Tool
Use the Query Plan tool to optimize and speed up queries done over large volumes. To enable the tool,
click Help > Preferences and set Enable Query Plan to true. To use the tool, enter your query and click the
Query Plan button in the Query Editor.
★ View Color-Coded Logging Expiration Data
The Change Log Levels dialog in the Developer Console now includes a color-coded Expiration field that
displays the current logging status.
GREEN (10 or more minutes) > YELLOW (less than 10 minutes) > RED (expired)
17. Change Sets & Deployment Tools
● Change Sets in the Deployment Status Page
● Force.com Migration Tool Support for rollbackOnError
★ Change Sets in the Deployment Status Page
You can monitor the status of all deployments in one place. The page has been improved to
include more information about deployments, including errors and test failures.
★ Force.com Migration Tool Support for rollbackOnError
The Force.com Migration Tool now accepts the rollbackOnError parameter in build.xml for
deployment targets and no longer ignores this value. You can now specify whether a complete
rollback of a deployment is performed if a failure occurs by setting this parameter to true or false.
This parameter must be set to true if you're deploying to a production organization. If the
rollbackOnError parameter is not specified, the default is true.
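A deploy target using this parameter might look like the fragment below. This is a sketch only: the target name, credentials, and deployRoot are placeholder properties you would define in your own build files.

```xml
<!-- Sketch only: property names and deployRoot are placeholders. -->
<target name="deployCode">
    <sf:deploy username="${sf.username}"
               password="${sf.password}"
               serverurl="${sf.serverurl}"
               deployRoot="src"
               rollbackOnError="true"/>
</target>
```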
18. New Apex Enhancements
● Describe Limits Removed
● Script Statement Limits Methods Removed
● Submit More Batch Jobs with Apex Flex Queue (Pilot)
● Monitoring and Reordering the Apex Flex Queue
● Submitting Jobs by Calling Database.executeBatch
● AsyncApexJob Status Field
● Run Future Methods with Higher Limits (Pilot)
19. ★ Describe Limits Removed
You’re no longer limited to describing 100 objects or to executing 100 describe statements.
Schema.DescribeSobjectResult[] results = Schema.describeSObjects(new String[]{'Account','Contact',...});
You're no longer bound by the limit of 100 fields or fieldSets statements in your code for
describing fields and field sets, respectively.
Schema.DescribeFieldResult fieldResult = Schema.sObjectType.Account.fields.Name;
Schema.FieldSet fs1 = Schema.SObjectType.Account.fieldSets.fieldset1;
The affected methods are:
getChildRelationshipsDescribes(), getFieldsDescribes(), getFieldSetsDescribes(), getPicklistDescribes(),
getRecordTypesDescribes(), getLimitChildRelationshipsDescribes(), getLimitFieldsDescribes(),
getLimitFieldSetsDescribes(), getLimitPicklistDescribes(),getLimitRecordTypesDescribes()
20. ★ Script Statement Limits Methods Removed
These methods are deprecated and available only in API version 30.0 and earlier. The script
statement limit is no longer enforced, so the associated Limits methods are no longer needed.
The affected methods in the Limits class are:
getScriptStatements(), getLimitScriptStatements()
★ Submit More Batch Jobs with Apex Flex Queue (Pilot)
You can submit more batch jobs simultaneously and actively manage the order of the queued
jobs. The Apex Flex Queue pilot enables you to submit batch jobs beyond the allowed limit of five
queued or active jobs. Any jobs that are submitted for execution but aren't processed
immediately by the system are in holding status and are placed in the Apex flex queue. Up to 100
batch jobs can be in the holding status. When system resources become available, the system
picks up jobs from the Apex flex queue and moves them to the batch job queue. The status of
these moved jobs changes from Holding to Queued.
Without administrator intervention, jobs are processed first-in first-out, in the order in which
they're submitted. Administrators can modify the order of jobs that are held in the Apex flex
queue to control when they get processed by the system.
21. ★ Monitoring and Reordering the Apex Flex Queue
From Setup, click Jobs > Apex Flex Queue.
You can monitor moved jobs on the Apex Jobs page by clicking Apex Jobs.
★ Submitting Jobs by Calling Database.executeBatch
To submit a batch job, call Database.executeBatch. The resulting outcome of Database.executeBatch
depends on whether your organization has reached the five queued or active batch job limit.
● If system resources are available for processing jobs, the batch job is queued for execution and
its status is Queued.
● If no system resources are available, the batch job is placed in the Apex flex queue and its status
is Holding.
● If the Apex flex queue has the maximum number (100) of jobs, this method returns an error and
doesn’t place the job in the queue.
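The submit-and-check flow above can be sketched in Apex. This is illustrative only: MyBatchJob is a placeholder for any class implementing Database.Batchable&lt;sObject&gt;.

```apex
// Sketch only: MyBatchJob is a placeholder for any class that
// implements Database.Batchable<sObject>.
Id jobId = Database.executeBatch(new MyBatchJob(), 200);

// Check whether the job went straight to the batch queue or was
// parked in the Apex flex queue.
AsyncApexJob job = [SELECT Id, Status FROM AsyncApexJob
                    WHERE Id = :jobId];
if (job.Status == 'Holding') {
    System.debug('Job is waiting in the Apex flex queue.');
} else if (job.Status == 'Queued') {
    System.debug('Job is queued for execution.');
}
```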
22. ★ AsyncApexJob Status Field
The AsyncApexJob object, which represents a batch job, has a new status field value of Holding. This new
status indicates that the job is placed in the Apex flex queue and is waiting to be picked up when system
resources are available.
★ Run Future Methods with Higher Limits (Pilot)
One of the following limits can be doubled or tripled for each future method: heap size, CPU
timeout, number of SOQL queries, number of DML statements issued, number of records
processed as a result of DML operations, Approval.process, or Database.emptyRecycleBin.
Note: Running future methods with higher limits might slow down the execution of all your future methods.
To run your method with double or triple capacity, use the syntax: @future(limits='2x|3xlimitName')
Example: @future(limits='2xHeap')
public static void myFutureMethod() {
}
Tip: Keep in mind that you can specify only one higher limit per future method. Decide which of the
modifiable limits you need the most for your method.
23. Configure Push Notifications for Your Salesforce
Mobile SDK Connected Apps
With these mobile app settings, developers of Salesforce Mobile SDK connected apps can
configure their users’ mobile devices to receive push notifications from Salesforce.
If you provide a native Mobile SDK app for your organization, you can use the new Mobile Push
Notifications feature to alert your users to important changes and events in your business. The
Mobile Push Notifications feature supports Android and iOS devices and requires additional
configuration:
● With the mobile OS provider (Apple or Google)
● In your Mobile SDK app
● Using the new Messaging.PushNotification and Messaging.PushNotificationPayload
classes, or with the Chatter REST API using the new push notifications resource
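Sending a notification with the new Apex classes might look like the sketch below. It is illustrative only: the connected app name and user Id are placeholders, and it targets iOS via the apple() payload builder; Android payloads are built differently.

```apex
// Sketch only: 'My_Connected_App' and the user Id are placeholders.
Messaging.PushNotification msg = new Messaging.PushNotification();

// Build a standard APNS payload: alert text, sound, badge count,
// and optional custom data (null here).
Map<String, Object> payload = Messaging.PushNotificationPayload.apple(
    'Case status changed', 'default', 1, null);
msg.setPayload(payload);

// Send to the given users via the named connected app
msg.send('My_Connected_App', new Set<String>{ '005D0000001AbcD' });
```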
24. In addition, we've provided a push notification test page. On this page, you can quickly test your
push notification setup before the feature goes live in your mobile app.
To reach the test page:
1. In Setup, go to Create > Apps.
2. Click the name of your connected app.
3. Click Send test notification next to Supported Push Platform. This link appears only if
you’ve configured your connected app to support mobile push notifications.
Note: Push notifications for connected apps are available only for custom mobile connected
apps, such as Mobile SDK apps. This feature does not apply to Salesforce1 or SalesforceA
apps.