Have you ever been involved in developing a strategy for loading, extracting, and managing large amounts of data in salesforce.com? Join us to learn multiple solutions you can put in place to help alleviate large data volume concerns. Our architects will walk you through scenarios, solutions, and patterns you can implement to address large data volume issues.
Salesforce API Series: Fast Parallel Data Loading with the Bulk API Webinar - Salesforce Developers
Can you load 20 million records into Salesforce in under an hour? If not, this webinar is for you.
You want to load tons of data into Salesforce. No problem, right? Just use the Bulk API and turn on parallel loading. Think again. Unless you carefully plan how to break your big data loads into parallel operations, those loads can turn out more like slow, serial ones.
In this webinar, Sean and Steve will teach you how to realize awesome throughput in your parallel data loads on the Salesforce1 Platform. After learning from the webinar's demos and code samples, you'll be able to apply your new deep knowledge of platform internals to measure load performance, recognize problems that slow your loads down, and work around these roadblocks.
Key Takeaways
:: Learn what parallelism is and how much optimizing it matters for performance
:: Learn how to architect an integration or load tool to optimize parallelism, and obtain the maximum possible throughput
:: Learn how to manage locks to avoid lock exceptions that can significantly reduce the throughput in your loads and integrations
Intended Audience
:: Salesforce architects or Force.com developers with a working understanding of data loading and integration concepts. A high-level understanding of the Bulk API and Java is also useful.
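The lock-management takeaway above can be made concrete with a small sketch. Records that share a parent (for example, Contacts under one Account) can cause row-lock contention when they land in different parallel batches; grouping by parent before batching avoids that. This is a minimal illustration, not the webinar's actual tooling, and the record shape and `AccountId` field are hypothetical:

```python
from collections import defaultdict

def batches_by_parent(records, batch_size, parent_field="AccountId"):
    """Group records by parent lookup, then pack whole groups into
    batches so no two parallel batches touch the same parent row."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[parent_field]].append(rec)

    batches, current = [], []
    for group in groups.values():
        # Start a new batch if adding this whole group would overflow it.
        # (A single oversized group still stays together in one batch.)
        if current and len(current) + len(group) > batch_size:
            batches.append(current)
            current = []
        current.extend(group)
    if current:
        batches.append(current)
    return batches

records = [{"Id": i, "AccountId": f"A{i % 3}"} for i in range(9)]
batches = batches_by_parent(records, batch_size=4)
# Each parent's records end up in exactly one batch, so parallel
# batches never contend for the same parent row lock.
```

Submitting each resulting batch as a separate parallel Bulk API batch then avoids the lock exceptions the takeaways describe.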
Following best practices can help ensure your success. This is especially true for Force.com applications or large Salesforce orgs that have the potential to push platform limits.
Salesforce allows you to easily scale up from small to large amounts of data. Mostly this is seamless, but as data sets get larger, the time required for certain operations may grow too. Join us to learn different ways of designing and configuring data structures and planning a deployment process to significantly reduce deployment times and achieve operational efficiency.
Watch this webinar to:
:: Explore best practices for the design, implementation, and maintenance phases of your app's lifecycle.
:: Learn how seemingly unrelated components can affect one another and determine the ultimate scalability of your app.
:: See live demos that illustrate innovative solutions to tough challenges, including the integration of an external data warehouse using Force.com Canvas.
:: Walk away with practical tips for putting best practices into action.
Intended Audience
This webinar is perfect for Salesforce or Force.com architects and developers who want to better understand data management best practices to ensure both short- and long-term implementation success. Although many topics focus on large data volumes, the recommendations in this presentation are equally relevant to smaller orgs.
The Salesforce Platform allows developers to build enterprise applications using Visualforce, Apex and SOQL. To ensure that your applications perform and scale as your business grows, you'll want to write efficient and selective queries. The Force.com query optimizer uses several algorithms to determine the best SQL to generate from your SOQL. Some factors involved in this process include multitenancy, metadata and indexes.
Watch this webinar to:
:: Get an overview of multitenancy and metadata
:: Understand how to write selective and scalable SOQL queries
:: Learn how the Force.com query optimizer converts SOQL to SQL
:: See examples of the performance impact of indexes
:: Find out how skinny tables work
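The "performance impact of indexes" point can be illustrated with a toy simulation (not the actual Force.com internals, which the webinar covers): an unindexed filter must examine every row, while an index on the filtered field narrows the work to matching rows only:

```python
def full_scan(rows, field, value):
    """Unindexed filter: every row must be examined."""
    examined = len(rows)
    matches = [r for r in rows if r[field] == value]
    return matches, examined

def build_index(rows, field):
    """Map each field value to the row positions holding it."""
    index = {}
    for i, r in enumerate(rows):
        index.setdefault(r[field], []).append(i)
    return index

def indexed_scan(rows, index, value):
    """Indexed filter: only rows the index points at are examined."""
    hits = index.get(value, [])
    return [rows[i] for i in hits], len(hits)

# 10,000 rows, of which only 1% match the selective filter value.
rows = [{"Status": "Open" if i % 100 == 0 else "Closed"}
        for i in range(10_000)]
idx = build_index(rows, "Status")
scan_result, scanned = full_scan(rows, "Status", "Open")
idx_result, examined = indexed_scan(rows, idx, "Open")
# Same matches either way; the index examines 100 rows, not 10,000.
```

This is also why selectivity matters: an index on a field where most rows share one value (like `Status = 'Closed'` here) barely narrows the scan at all.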
Visualforce and Apex allow developers to extend the Force.com platform and build custom applications. There are, however, some common themes in customer implementations that can adversely affect performance. Join us to learn best practices and debugging techniques for tuning performance of your Apex and Visualforce code.
Salesforce Integration: Taking the Pain out of Data Loading - Darren Cunningham
Webinar delivered by Salesforce.com and Informatica, featuring an Informatica Cloud Express customer case study. The focus of the presentation is to clearly outline the options for loading data. Salesforce has free tools and there are many options available on the AppExchange. Informatica introduced a new pay-per-use service called Informatica Cloud Express.
If you can build an app on Force.com, you can build a mobile app today. Join us to learn more about what developers can do on this mobile platform. We'll cover Visualforce pages, JavaScript APIs, integration, and how to enable your admins by building declarative tools.
Redshift is a petabyte-scale data warehouse that is a lot faster, a lot less expensive, and a whole lot simpler to use. How can you get your data into Amazon Redshift? In this webinar, hear from representatives of Attunity (Amazon Redshift Partner) and AWS as they present many of the options available for data integration. Whether your data is in an on-premises platform or a cloud-based database like DynamoDB, we will show you how you can easily load your data into Redshift.
Reasons to attend:
- Learn about best practices to efficiently integrate data into Redshift
- Attend a Q&A session with Redshift experts
Highly Available and Scalable Architectures - Phil Wicklund
SharePoint 2010 has many new service applications. This presentation takes a look at how those services impact performance and sizing, as well as some availability strategies for SharePoint 2010.
Wave Analytics: Developing Predictive Business Intelligence Apps - Salesforce Developers
Apttus has been working with Wave Analytics since it was launched at Dreamforce 14 - collecting customer requirements, building and piloting prototypes, and reworking apps based on customer feedback. Join us as we showcase the Wave apps we built with some of the world's largest companies. More importantly, we'll share how we got there and the lessons we learned along the way. We will discuss business requirements definition for predictive business intelligence, advanced analytics, visualization, data management and ETL processes, external integrations, and APIs. Learn how to drive business results with machine learning.
SAP HANA - The Foundation of Real Time, Now on the AWS Cloud Computing Platform - Amazon Web Services
The future holds many transformational opportunities. Capitalize on the new technology frontier and realize value previously unattainable with SAP HANA. Learn how SAP and AWS are collaborating to empower businesses, developers, startups, and even sports fans to take advantage of real-time computing in the cloud.
SAP and AWS experts will outline:
• How do SAP and AWS align to support your cloud strategy, large or small?
• What is SAP HANA and how does it enable you to run your business in real-time?
• Use cases for developers and the enterprise, with capabilities including analytics, business processes, sentiment data processing, and predictive analytics.
• How to get immediate value by deploying SAP HANA on the AWS cloud computing platform.
• Best practices and reference guides to help you.
• Customer success stories, including the Women’s Tennis Association (WTA).
Ludwig: A code-free deep learning toolbox | Piero Molino, Uber AI - Data Science Milan
The talk will introduce Ludwig, a deep learning toolbox that lets you train models and use them for prediction without writing code. It is unique in its ability to make deep learning easier to understand for non-experts and to enable faster model-improvement iteration cycles for experienced machine learning developers and researchers alike. By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures.
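To give a flavor of the code-free workflow: a Ludwig model is described declaratively rather than programmed. A minimal sketch of a configuration for a text classifier might look like the following (the feature names are illustrative, and the exact schema depends on the Ludwig version):

```yaml
input_features:
  - name: review_text
    type: text
output_features:
  - name: sentiment
    type: category
```

Training is then a single CLI invocation such as `ludwig train --dataset reviews.csv --config config.yaml`, with no model code written by hand.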
Bio:
Piero Molino is a Senior Research Scientist at Uber AI with focus on machine learning for language and dialogue. Piero completed a PhD on Question Answering at the University of Bari, Italy. Founded QuestionCube, a startup that built a framework for semantic search and QA. Worked for Yahoo Labs in Barcelona on learning to rank, IBM Watson in New York on natural language processing with deep learning and then joined Geometric Intelligence, where he worked on grounded language understanding. After Uber acquired Geometric Intelligence, he became one of the founding members of Uber AI Labs.
Modern data is massive, quickly evolving, unstructured, and increasingly hard to catalog and understand from multiple consumers and applications. This session will guide you through the best practices for designing a robust data architecture, highlighting the benefits and typical challenges of data lakes and data warehouses. We will build a scalable solution based on managed services such as Amazon Athena, AWS Glue, and AWS Lake Formation.
Database Week at the San Francisco Loft
Amazon Aurora
A closer look at the MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. We’ll explore how Aurora uses the AWS cloud to provide high reliability, high durability, and high throughput.
Level: 200
Speakers:
Mahesh Pakala - Solutions Architect, AWS
Arabinda Pani - Partner Solutions Architect, Database Specialist, AWS
Abstract: Data preparation and modelling are the activities that take up most of the time in a typical data scientist's workday. In this session we'll see how AWS services for analytics and data management can be effectively used and integrated into AI/ML pipelines. We'll focus on AWS Glue, AWS Glue DataBrew, and AWS Data Wrangler, with a bit of theory and hands-on demos.
Bio:
Francesco Marelli is a senior solutions architect at Amazon Web Services. He has lived and worked in the UK, Italy, Switzerland, and other countries in EMEA. He specializes in the design and implementation of analytics, data management, and big data systems. Francesco also has strong experience in systems integration and in the design and implementation of applications.
Topics: machine learning pipelines, AWS, cloud.
Query tuning within SQL Server can be a tough skill to master. The new tooling released with SQL Server 2016 and 2017 makes identifying poorly performing queries, troubleshooting their behavior, and tuning them all a little easier.
This workshop teaches new techniques for tuning queries using all the new tools introduced in SQL Server 2016 and 2017. You'll be able to put this knowledge to work immediately, not only in your 2016 or better instances but also in your Azure SQL Database databases. You will be tuning your queries faster and more accurately using the new tools available.
Good development habits should include lots and lots of testing. And testing needs data. But loading data by hand can be a real pain. Join us for a deep dive on integrating the Salesforce Data Loader tool with ANT to create repeatable automated scripts for loading data. Data exports and imports are a breeze to script and incorporate into your CI strategy once you know the secrets and tricks.
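As a sketch of the kind of wiring the session covers: the Data Loader ships with a command-line runner that can be driven from an Ant target, so a data load becomes one repeatable build step. The target name, paths, and process name below are placeholders you would adapt to your own project, and the `process-conf.xml` bean it references must define the actual load:

```xml
<!-- Hypothetical Ant target invoking the Salesforce Data Loader CLI. -->
<target name="load-test-data">
  <java classname="com.salesforce.dataloader.process.ProcessRunner"
        classpath="lib/dataloader.jar" fork="true" failonerror="true">
    <!-- Directory containing process-conf.xml with a named process bean. -->
    <sysproperty key="salesforce.config.dir" value="config"/>
    <arg value="process.name=accountInsert"/>
  </java>
</target>
```

Once exports and imports are Ant targets, your CI server can call them like any other build step.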
Integrating SIS's with Salesforce: An Accidental Integrator's Guide - Salesforce.org
Join our next Success webinar, Integrating Student Information Systems with Salesforce: Strategies and Best Practices, to explore the many ways system integration benefits your school. Whether you want an aggregated view of your students, the ability to trigger actions based on status changes, or the automation of manual work, you will learn the three simple steps to successful integration. By highlighting how higher education institutions have integrated with the most popular Student Information Systems, Grant Miller, Director of Alliances, and Jill Kenney, Director of Sales Engineering at the Salesforce Foundation, will explain the layers of integration and discuss considerations like synchronous-versus-asynchronous and buy-versus-build options.
SharePoint Databases: What You Need to Know (201512) - Alan Eardley
Presented at SQL Saturday Southampton (2015)
An introduction to the different databases that SharePoint uses, with recommendations for High Availability, Disaster Recovery and configuration settings for SQL Server, including the constraints imposed in a single farm, a stretched farm between data centres and a separate DR farm.
Big Data Testing: Automate the Testing of Hadoop, NoSQL & DWH without Writing... - RTTS
Testing of Hadoop, NoSQL and Data Warehouses Visually
We just made automated data testing really easy. Automate your Big Data testing visually, with no programming needed.
See how to automate Hadoop, NoSQL, and Data Warehouse testing visually, without writing any SQL or HQL. See how QuerySurge, the leading Big Data testing solution, provides novices and non-technical team members with a fast & easy way to be productive immediately while speeding up testing for team members skilled in SQL/HQL.
This webinar is geared towards:
- Big Data & Data Warehouse Architects, ETL Developers
- ETL Testers, Big Data Testers
- Data Analysts
- Operations teams
- Business Intelligence (BI) Architects
- Data Management Officers & Directors
You will learn how to:
• Improve your Data Quality
• Accelerate your data testing cycles
• Reduce your costs & risks
• Realize a huge ROI
Top 5 ETL Tools for Salesforce Data Migration - Intellipaat
Top 5 ETL tools for Salesforce data migration:
1. Apex Data Loader
2. Progress DataDirect
3. Talend Open Studio
4. Jitterbit Data Loader
5. Pentaho Community Edition
Source URL - https://intellipaat.com/etl-testing-training/
Are you hitting your governor limits? Is your system performance not up to expectations? Are you worried about your capacity to grow or merge multiple orgs? Then this session is for you. Join us as we line up the suspects, find out who's guilty, and learn how you can avoid being a victim in the closest thing to a murder mystery at this year's Dreamforce. We'll walk you through real situations and, most importantly, how we solved them.
Muvr is a real-time personal trainer system. It must be highly available, resilient, and responsive, and so it relies heavily on Spark, Mesos, Akka, Cassandra, and Kafka, the quintet also known as the SMACK stack. In this talk, we are going to explore the architecture of the entire muvr system, looking in particular at the challenges of ingesting very large volumes of data, applying trained models to the data to provide real-time advice to our users, and training and evaluating new models using the collected data. We will specifically emphasize how we have used Cassandra to consume lots of fast incoming biometric data from devices and sensors, and how to securely access the big data sets from Cassandra in Spark to compute the models.
We will finish by showing the mechanics of deploying such a distributed application. You will get a clear understanding of how Mesos and Marathon, in conjunction with Docker, are used to build an immutable infrastructure that lets us provide reliable service to our users and a great environment for our engineers.
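One common pattern behind consuming fast incoming sensor data in Cassandra is time-bucketed partition keys: readings for one user are split across day-sized partitions so no single partition grows without bound. The sketch below illustrates the bucketing idea with a plain dictionary standing in for the table; the key shape is hypothetical, not muvr's actual schema:

```python
from datetime import datetime, timezone

def partition_key(user_id, ts):
    """Bucket a reading into a (user, day) partition so one user's
    writes spread across many bounded partitions instead of one
    ever-growing wide row."""
    day = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d")
    return (user_id, day)

def ingest(table, user_id, ts, reading):
    """Append a reading under its time-bucketed partition key."""
    table.setdefault(partition_key(user_id, ts), []).append((ts, reading))

table = {}
DAY = 86_400
for i in range(5):
    ingest(table, "user-1", i * DAY, {"hr": 60 + i})
# Five daily readings land in five separate partitions.
```

In real CQL, `(user_id, day)` would be the partition key and the timestamp a clustering column, which keeps reads for a day range cheap.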
06 August Meetup - Enterprise Integration Architecture - Aldo Fernandez
Salesforce Enterprise Integration Architecture: Lessons learned along the way.
What are the components of a good Salesforce integration architecture? The Salesforce1 Platform offers architects and developers a wide array of integration technologies and recommended patterns. However, without the correct integration architecture and technology infrastructure, your projects and solutions will be at risk of performance, scalability, data integrity, and many other problems.
On this session we are going to talk about the different lessons learned working on different enterprise integration scenarios.
How to Position Enterprise Architects in Today's Business Salesforce
At salesforce.com, we are committed to thinking differently and constantly pursuing fresh new ideas. Salesforce Services, the world’s highest group of SFDC experts & innovators, hosts knowledge share events to share ideas with those within the professionals services community.
SFDC thought leaders Vijay Sood and Jonathan Brayshaw facilitated a discussion on "How to Position Enterprise Architects in Today's Business", focusing on both the Sales and Delivery perspectives.
Here is the slide deck from their presentation...enjoy!
Salesforce Application Lifecycle Management presented to EA Forum by Sam Garf...Sam Garforth
Sam Garforth presented this at the Salesforce Enterprise Architect Forum on January 12th 2017. It covers governance and best practices for developing, deploying and supporting applications running on the Salesforce platform, whether these be apps or configurations of Sales or Service Cloud or Communities.
How to Setup Continuous Integration With Git, Jenkins, and Force.comSalesforce Developers
Join us we walk through setting up a continuous integration system for Salesforce development, from scratch, using Git, Jenkins, the Force.com Migration Tool, and the Apex Data Loader, following a proven, step-by-step approach that you can use with your own project. You'll learn how to manage code using feature-specific sandboxes and feature-specific branches. We'll present the actual configuration scripts we use to make all this work for our group of eight developers, working together on the same managed product, spanning 65+ objects, 350+ classes, and 600+ Apex tests.
Big Data Day LA 2015 - Event Driven Architecture for Web Analytics by Peyman ...Data Con LA
As integrated web analytics evolves to both a service oriented and event based model, there will be higher emphasis on moving toward event based analytics. Business analytics is moving from purely counts of analytics to time-series, relationship and usage analytics. Examples of web analytics that can take advantage of this architecture are conversions analytics or cross channel marketing.
The advantage of storing raw event data is that you have maximum flexibility for analysis. For example, you can trace the sequence of pages that one person visited over the course of their session. You can’t do that if you’ve squashed all the events into e.g. counters. That sort of analysis is really important for some offline processing tasks, such as training a recommender system (“people who bought X also bought Y”, that sort of thing). For such use cases, it’s best to simply keep all the raw events, so that you can later feed them all into your shiny new machine learning system.
In this session we are going to elaborate on using Kafka, an Event Processing framework (e.g. Storm or Spark Streaming) and either Hadoop or EDW for building an Event Driven Architecture.
Talk about Salesforce REST API: how to perform query, search or single-record CRUD operations; how to retrieve versions, list of custom object and object metadata and field metadata and presentation of demo page performing these requests
Understanding the Salesforce Architecture: How We Do the Magic We DoSalesforce Developers
Join us for a deep dive into the architecture of the Salesforce1 Platform. We'll explain how multitenancy actually works and how it affects you as a Salesforce customer. By understanding the technology we use and the design principles we adhere to, you'll see how our platform teams manage three major upgrades a year without causing any issues to existing development. We'll cover the performance and security implications around the platform to give you an understanding of how limits have evolved. By the end of the session, you'll have a better grasp of the architecture underpinning Force.com and understand how to get the most out of it.
Enterprise Data World 2018 - Building Cloud Self-Service Analytical SolutionDmitry Anoshin
This session will cover building the modern Data Warehouse by migration from the traditional DW platform into the cloud, using Amazon Redshift and Cloud ETL Matillion in order to provide Self-Service BI for the business audience. This topic will cover the technical migration path of DW with PL/SQL ETL to the Amazon Redshift via Matillion ETL, with a detailed comparison of modern ETL tools. Moreover, this talk will be focusing on working backward through the process, i.e. starting from the business audience and their needs that drive changes in the old DW. Finally, this talk will cover the idea of self-service BI, and the author will share a step-by-step plan for building an efficient self-service environment using modern BI platform Tableau.
Managing Large Amounts of Data with SalesforceSense Corp
Critical "design skew" problems and solutions - Engaging Big Objects, MuleSoft, Snowflake and Tableau at the right time
Salesforce’s ability to handle large workloads and participate in high-consumption, mobile-application-powering technologies continues to evolve. Pub/sub-models and the investment in adjacent properties like Snowflake, Kafka, and MuleSoft, has broadened the development scope of Salesforce. Solutions now range from internal and in-platform applications to fueling world-scale mobile applications and integrations. Unfortunately, guidance on the extended capabilities is not well understood or documented. Knowing when to move your solution to a higher-order is an important Architect skill.
In this webinar, Paul McCollum, UXMC and Technical Architect at Sense Corp, will present an overview of data and architecture considerations. You’ll learn to identify reasons and guidelines for updating your solutions to larger-scale, modern reference infrastructures, and when to introduce products like Big Objects, Kafka, MuleSoft, and Snowflake.
Optimizing Query is very important to improve the performance of the database. Analyse query using query execution plan, create cluster index and non-cluster index and create indexed views
Moyez Dreamforce 2017 presentation on Large Data Volumes in SalesforceMoyez Thanawalla
As enterprises continue to push more or their data to the cloud, Salesforce has seen data volumes in its tenant orgs grow at an exponential rate. How do you manage such volumes efficiently? How do you build queries and reports that respond in a timely manner?
OLAP performs multidimensional analysis of business data and provides the capability for complex calculations, trend analysis, and sophisticated data modeling.
2. Agenda
1. Overview
2. Handling of Large Data by Salesforce
• The Underlying Concept
• Infrastructure – Salesforce Components and Capabilities
3. Techniques for Optimizing Performance
4. Best Practices
5. Case Studies
3. Overview
Salesforce automatically enables customers to easily scale their applications up from small to large amounts of data. However, if there is a large data volume, the time required for certain processes might grow. The processing time depends on the architecture and the design of the application.
Main processes that get affected:
• Loading large numbers of records.
• Extracting data through reports, dashboards, and list views.
Strategies for optimizing these processes:
• Following industry-standard practices for accommodating schema changes and operations in database-enabled applications.
• Deferring or bypassing business rule and sharing processing.
• Choosing the most efficient operation for accomplishing a task.
5. The Underlying Concept
Multitenancy of Metadata
• Multitenancy is a means of providing a single application to multiple organizations.
• Multitenancy requires that applications behave reliably, even when architects are making Salesforce-supported customizations.
• When organizations create custom objects, the platform tracks metadata about the objects and their fields, relationships, and other object definition characteristics. (Diagram: Multitenancy of Metadata.)
• As a customer, you cannot modify the SQL underlying many application operations, because it is generated by the system, not written by each tenant.
Search Architecture
• Search is the capability to query records based on free-form text.
• The Salesforce search architecture is based on its own data store, which is optimized for searching that data.
• For data to be searched, it must first be indexed.
• Salesforce search capabilities include:
– The sidebar
– Advanced and global searches
– Find boxes and lookup fields
– Suggested Solutions and Knowledge Base
– Web-to-Lead and Web-to-Case
– Duplicate lead processing
– Salesforce Object Search Language (SOSL) for Apex and the API
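As an illustration of the SOSL capability listed above (the search term, object, and field names here are hypothetical, not from the slides), a single-object SOSL statement can be assembled like this:

```python
def build_sosl(term, sobject, fields):
    """Build a single-object SOSL statement.

    SOSL runs against Salesforce's pre-built search indexes, which makes
    it the right tool for free-form text lookups, as opposed to SOQL
    filters on field values.
    """
    field_list = ", ".join(fields)
    return f"FIND {{{term}}} IN NAME FIELDS RETURNING {sobject}({field_list})"

# A specific, single-object search keeps the lookup fast and accurate.
query = build_sosl("Acme", "Account", ["Id", "Name"])
```

The resulting string, `FIND {Acme} IN NAME FIELDS RETURNING Account(Id, Name)`, would be passed to the Search resource of the REST API or to `Search.query()` in Apex.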
6. Infrastructure – Salesforce Components and Capabilities
Force.com Query Optimizer
• Helps the database system's optimizer produce effective execution plans for Salesforce queries.
• Works on the automatically generated queries that handle reports, list views, and SOQL queries, as well as the other queries that piggyback on them.
Database Statistics
• The platform must keep its own set of statistical information to help the database understand the best way to access the data.
• As a result, when large amounts of data are created, updated, or deleted using the API, the database must gather statistics before the application can efficiently access data.
Indexes
• Salesforce supports custom indexes to speed up queries; custom indexes can be created by contacting Salesforce Customer Support.
Index Tables
• The Salesforce multitenant architecture makes the underlying data table for custom fields unsuitable for indexing.
• To overcome this limitation, the platform creates an index table that contains a copy of the data, along with information about the data types.
Skinny Tables
• Salesforce creates skinny tables to contain frequently used fields and to avoid joins, and it keeps the skinny tables in sync with their source tables when the source tables are modified.
• Skinny tables can be created on custom objects, and on Account, Contact, Opportunity, Lead, and Case objects.
Divisions
• Divisions are a means of partitioning the data of large deployments to reduce the number of records returned by queries and reports.
7. Techniques for Optimizing Performance
• Using mashups
• Deferring sharing calculation
• Deleting data
• Search
• Data archiving
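Of these techniques, data archiving lends itself to a short sketch. The record layout and the `last_activity` key below are assumptions for illustration only; the point is simply that cold records are split out of the live data set so queries and reports stay selective:

```python
from datetime import date, timedelta

def partition_for_archive(records, today, keep_days=365):
    """Split records into (active, archive) by age.

    Archiving cold records shrinks the live data set, which keeps
    reports, list views, and queries fast. The dict shape with a
    'last_activity' date is purely illustrative.
    """
    cutoff = today - timedelta(days=keep_days)
    active = [r for r in records if r["last_activity"] >= cutoff]
    archive = [r for r in records if r["last_activity"] < cutoff]
    return active, archive
```

In practice the archive half would be exported (e.g. via the Bulk API) to external storage or a Big Object before deletion from the live org.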
9. Best Practices
Reporting
• Reduce the number of records to query: use values in the data to segment the query.
• Reduce the number of objects queried and the number of relationships used to generate the report.
• Reduce the number of fields queried: only add fields to a report, list view, or SOQL query that are required.
• Reduce the amount of data by archiving unused records.
• Use report filters that emphasize standard or custom indexed fields.
Loading Data from the API
• Use the Salesforce Bulk API when you have more than a few hundred thousand records.
• Use the fastest operation possible: insert() is fastest, update() is next, and upsert() is next after that.
• Ensure that data is clean before loading when using the Bulk API.
• When updating, send only the fields that have changed.
• For custom integrations: authenticate once per load, not on each record.
• If possible for initial loads, populate roles before populating sharing rules.
• When changing child records, group them by parent.
• When using the SOAP API, use as many batches as possible.
Extracting Data from the API
• Use the getUpdated() and getDeleted() SOAP API calls to sync an external system with Salesforce at intervals greater than 5 minutes.
• When a query returns more than one million results, consider using the query capability of the Bulk API.
Searching
• Keep searches specific and avoid wildcards, if possible.
• Use single-object searches for greater speed and accuracy.
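The "group child records by parent" tip deserves a sketch, because it is what keeps parallel Bulk API loads from degenerating into serial ones. The record shape (dicts with a `parent_id` key) is an assumption for illustration:

```python
def batches_grouped_by_parent(child_records, batch_size=200):
    """Order child records by parent before slicing them into batches.

    When parallel Bulk API batches touch children of the same parent,
    each batch takes a lock on the parent row; if a parent's children
    are scattered across many concurrent batches, those batches contend
    for the same lock and the load serializes. Sorting by parent keeps
    each parent's children in as few adjacent batches as possible.
    """
    ordered = sorted(child_records, key=lambda r: r["parent_id"])
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]
```

The same pre-sort applies whether the file is then fed to the Bulk API, Data Loader, or a custom integration.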
10. Best Practices
SOQL and SOSL
• Decompose the query: break the query into two queries and join the results.
• If querying on formula fields is required, make sure that they are deterministic formulas.
• Use values such as N/A to replace NULLs.
• Use SOQL and SOSL where appropriate, and minimize the amount of data being queried or searched.
• Tune the SOQL query by reducing its scope and using selective filters; consider using the Bulk API with bulk query.
Deleting Data
• When deleting large volumes of data, use the Bulk API's hard delete option.
• When deleting records that have many children, delete the children first.
General
• Avoid having any user own more than 10,000 records.
• Use a data-tiering strategy that spreads data across multiple objects and brings in data on demand.
• When creating copies of production sandboxes, exclude field history if it isn't required, and don't change a lot of data until the sandbox copy is created.
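The "decompose the query" tip can be sketched as a client-side intersection of two selective queries. The SOQL strings, object, and field names below are hypothetical, and `run_query` stands in for whatever API call executes SOQL:

```python
def decompose_and_join(run_query):
    """Replace one query with two non-selective filters by two
    selective queries whose results are intersected client-side.

    Each individual query can use an index, whereas the combined
    filter might have forced a full scan. `run_query` is a stand-in
    for a real SOQL execution call; names are illustrative.
    """
    q1 = "SELECT Id FROM Account WHERE Type = 'Customer'"
    q2 = "SELECT Id FROM Account WHERE Region__c = 'EMEA'"
    ids1 = {row["Id"] for row in run_query(q1)}
    ids2 = {row["Id"] for row in run_query(q2)}
    return ids1 & ids2
```

The trade-off is two round trips and some client-side memory in exchange for two index-friendly queries.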
12. Case Study: Indexing with Nulls
Requirement
• The customer needed to allow nulls in a field and be able to query against them.
• Because single-column indexes for picklists and foreign key fields exclude rows in which the index column is equal to null, an index could not have been used for the null queries.
Solution
• Use some other string, such as N/A, in place of NULL.
• If you cannot do that, possibly because records already exist in the object with null values, create a formula field that displays text for nulls, and then index that formula field.
Example
1. Create a formula field:
Status_Value__c = IF(ISBLANK(Status__c), "blank", Status__c)
2. Contact Salesforce to get the formula field indexed.
3. Update the query:
SELECT Name FROM Object__c WHERE Status_Value__c = 'blank'
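The preferred fix, storing N/A instead of NULL, can be applied as a pre-load cleanup step. A minimal sketch, assuming records arrive as dicts and `Status__c` is the field in question:

```python
def fill_nulls(records, field, placeholder="N/A"):
    """Substitute a real string for blank values before loading.

    Single-column indexes on picklists and foreign keys skip rows where
    the column is NULL, so queries filtering on NULL cannot use them;
    storing a placeholder keeps those rows indexable. Like ISBLANK in a
    formula, this treats both None and the empty string as blank.
    """
    return [{**r, field: r.get(field) or placeholder} for r in records]
```

After loading, queries filter on `Status__c = 'N/A'` instead of `Status__c = null`, so the index can be used.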
13. Formula Field Indexing Considerations
Custom indexes can be created on a formula field, provided that the formula field is deterministic. Non-deterministic Force.com formula fields can:
• Reference other entities
• Include other formula fields that span over other entities
• Use dynamic date and time functions
• Reference standard fields with special functionality, e.g. Opportunity Amount
• Contain references to fields that Force.com cannot index:
– Currency fields in a multicurrency organization
– Long text area fields
– Blob, file, or encrypted text fields
Note: If the formula is modified after the index is created, the index is disabled. To re-enable the index, contact Salesforce Customer Support.
14. Case Studies
Data Aggregation
Requirement:
• Aggregate monthly and yearly metrics using standard reports.
• Data is stored across two objects.
Solution:
• Create an aggregation custom object.
• Use Apex batch jobs.
Custom Search
Requirement:
• Search large data volumes across multiple objects using specific values and wildcards.
• 1–20 fields.
Solution:
• Use only essential search fields.
• De-normalize the data.
• Dynamically determine the use of SOQL or SOSL.
Related Lists
Requirement:
• The detail page takes a long time to load due to large data volume in related lists.
Solution:
• Enable separate loading of related lists.
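"Dynamically determine the use of SOQL or SOSL" in the custom-search case study could be implemented as a small routing heuristic. This is only a sketch of one possible rule, not Salesforce guidance:

```python
def choose_query_language(term):
    """Route a search term to SOSL or SOQL.

    Heuristic only: SOSL leverages the search indexes and handles
    wildcards and multi-word text well, while a selective SOQL filter
    on an indexed field is typically faster for an exact value.
    """
    if "*" in term or "?" in term or " " in term:
        return "SOSL"
    return "SOQL"
```

A production router would also weigh the number of objects searched and whether the target field is indexed.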
15. Case Studies
Sharing Rules
Requirement:
• A large number of changes need to be made to roles, territories, groups, users, portal account ownership, or public groups participating in sharing.
Solution:
• Defer sharing calculations (contact Salesforce).
API Performance
Requirement:
• Synchronize Salesforce data with external customer applications.
• The integration used a specific API user that was part of the sharing hierarchy.
• Queries were taking minutes to complete.
Solution:
• Give the API user access to all data.
• Create a delta extraction, lowering the volume of data that needs to be processed.
Data Deletion
Requirement:
• Delete millions of records.
Solution:
• Use the Bulk API's hard delete function.
• Truncate the custom object (contact Salesforce).
• If records have many children, delete the children first.
Sharing as per Region
Requirement:
• Share data with users on the basis of location.
Solution:
• Divisions (contact Salesforce).
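The delta-extraction fix from the API performance case study can be sketched as a timestamp filter: each run processes only records modified since the previous sync (Salesforce exposes the SystemModstamp field for this). The record shape below is illustrative:

```python
from datetime import datetime

def delta_filter(records, last_sync):
    """Return only records modified since the previous sync run.

    A delta extraction keyed on a last-modified timestamp avoids
    re-processing the full data set on every integration run; the
    'modified' key stands in for Salesforce's SystemModstamp.
    """
    return [r for r in records if r["modified"] > last_sync]
```

The integration would persist the high-water mark from each run and pass it as `last_sync` the next time.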
16. Zen4orce Service Offerings
SALESFORCE CUSTOMIZATION • SALESFORCE AUTOMATION • ADVISORY SERVICES • INTEGRATED SOLUTIONS
Skillset: jQuery, Lightning, Bootstrap, Visualforce, AppExchange, Communities, Service Cloud, Sales Cloud, GitHub, Apex, Web Services
Visit www.zen4orce.com for further details about Zen4orce Services & Offerings.