Businesses make critical decisions using key data assets, but stakeholders often find it difficult to navigate the complex data landscape to ensure they have the right data and understand it correctly. Companies are dealing with a number of different technologies, multiple data formats, and high data volumes, along with the requirements for data security and governance.
Watch the companion webinar at:
Join John Sterrett, Senior Advisor at Linchpin People, and Scott Walz, Director of Software Consultants, to learn how execution plans get invalidated and why data skew could be the root cause of seeing different execution plans for the same query. We will look at options for forcing a query to use a particular execution plan. Finally, you will learn how this complex problem can be identified and resolved simply using a new feature in SQL Server 2016 called Query Store.
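As background, the plan-forcing workflow the session covers can be sketched in T-SQL roughly like this (the database name and the ids are placeholders; the catalog views and procedure are standard SQL Server 2016+ objects):

```sql
-- Enable Query Store on a database (SQL Server 2016 and later)
ALTER DATABASE SalesDB SET QUERY_STORE = ON;

-- Inspect the plans Query Store has captured for each query
SELECT q.query_id, p.plan_id, p.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_plan  AS p ON p.query_id = q.query_id;

-- Pin a known-good plan for a query (ids taken from the query above)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```

Forcing a plan this way is how Query Store addresses the skew-driven plan changes the session discusses: the optimizer keeps compiling, but the forced plan is used until you unforce it.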
Watch the companion webinar at: http://forms.embarcadero.com/AgileAutomatedAware
Data management teams face some tough challenges these days. Organizations need business-driven visibility that enables understanding and awareness of enterprise data assets – without worrying about definitions and change management. But with information architectures evolving, serving up accurately defined, reusable data can become a complex issue.
In this episode of The Briefing Room hosted by the Bloor Group, veteran analyst David Loshin explains the importance of agile, automated workflows in today’s enterprise data architectures. Ron Huizenga of Embarcadero discusses how the ER/Studio suite approaches data modeling and management from a modern architecture standpoint. He explains that unifying the way information is represented can not only eliminate the need for costly workarounds, but also foster collaboration between data architects, developers and business users.
Learn more about data modeling and data architecture with ER/Studio at http://www.embarcadero.com/products/er-studio.
Register for the companion webinar:
http://forms.embarcadero.com/Dealing-with-New-Datatypes
Data modeling is going back to the future! No, it doesn’t include a hoverboard (yet), but it does include some new datatypes that capture temporal and spatial information. In the past, datatypes were used to classify various types of data, whether integers, characters, or alphanumeric strings. With the technologies introduced in recent years, these basic datatypes can’t address everything – data modelers now need more specialized datatypes for specific needs and new formats.
Multiple database platforms have introduced new datatypes that can make it easier to support more advanced data concepts in physical data models. If you don't know what's new in the physical data modeling world, or what to do with it, Karen Lopez will discuss using a variety of new datatypes, including:
• Temporal, such as period, with keywords
• Spatial, including geospatial
• Others, incorporating JSON/BSON/UBJSON usage
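To make those datatype families concrete, here is a hedged sketch in one platform's dialect (SQL Server syntax; the table and column names are invented for illustration, not from the talk):

```sql
-- One table touching all three families of "new" datatypes
CREATE TABLE store_visit (
    visit_id   int IDENTITY PRIMARY KEY,
    location   geography,                  -- spatial: a lat/long point
    details    nvarchar(max)               -- JSON document, validated by constraint
        CONSTRAINT chk_details_json CHECK (ISJSON(details) = 1),
    valid_from datetime2 GENERATED ALWAYS AS ROW START,
    valid_to   datetime2 GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (valid_from, valid_to)   -- temporal period
) WITH (SYSTEM_VERSIONING = ON);
```

Other platforms expose the same ideas with different spellings (e.g. native `JSON` or `JSONB` column types, or `PERIOD` clauses on application-time columns), which is exactly why the physical model has to know its target.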
Learn more about ER/Studio at:
http://www.embarcadero.com/products/er-studio
Smart companies know that business intelligence surfaces insights. With complex analytics, data mining and everything in between, it takes many moving parts to serve up the big picture. The key is to provide full-stack visibility into the entire BI environment, ensuring solid service and system performance.
Learn more at http://www.insideanalysis.com
Presentation by Mark Rittman, Technical Director, Rittman Mead, on ODI 11g features that support enterprise deployment and usage. Delivered at BIWA Summit 2013, January 2013.
The Convergence of Reporting and Interactive BI on Hadoop – DataWorks Summit
Since the early days of Hive, SQL on Hadoop has evolved from being a SQL wrapper on top of MapReduce to a viable replacement for the traditional EDW. In the meantime, while SQL on Hadoop vendors were busy adding enterprise capabilities and comparing their TPC-DS prowess against Hive, a niche industry emerged on the side for OLAP (a.k.a. "Interactive BI") on Hadoop data. Unlike general-purpose SQL on Hadoop engines, which deal with the multiple aspects of warehousing, including reporting, OLAP on Hadoop engines focus almost exclusively on answering OLAP queries fast by using implementation techniques that had not been part of the SQL on Hadoop toolbox so far.
But SQL on Hadoop engines are not standing still. After having made huge progress in catching up to traditional EDWs for reporting workloads, SQL on Hadoop engines are now setting their sights on Interactive BI. This is great news for enterprises: as the line between reporting and OLAP gets blurred, enterprises can now start considering using a single engine for both reporting and interactive BI on their Hadoop data, as opposed to having to host, manage and license two separate products.
Can a single engine satisfy both your reporting and Interactive BI needs? This may be a hard question to answer. Vendors use inconsistent terminology to describe their products and make ambitious and sometimes conflicting claims. This makes it very hard for enterprises to compare products, let alone decide which is the product that best matches their needs.
In this presentation, we'll provide an overview of the different approaches to OLAP on Hadoop, and explain the key technologies behind each of them. We'll use consistent terminology to describe what you get from multiple proprietary and open source products, and outline advantages and disadvantages. You'll come out equipped with the knowledge you need to read past marketing and sales pitches; you'll be able to compare products and make an informed decision on whether a single engine for both reporting and Interactive BI on Hadoop is right for you.
Speaker
Gustavo Arocena, Big Data Architect, IBM
Using OBIEE and Data Vault to Virtualize Your BI Environment: An Agile Approach – Kent Graziano
First we interview the users, then we design a reporting model based on those interviews. We follow that up with mounds of ETL development to load the new model, basically keeping the user community in the dark during all that development. Does this sound familiar?
This presentation will demonstrate an alternative approach using the Data Vault Data Modeling technique to build a flexible, easily-extensible “Foundation” layer in our data warehouse with an Agile, iterative methodology. Relying on the Business Model and Mapping (BMM) functionality of OBIEE, we can rapidly virtualize a dimensional reporting model using the pattern-based Data Vault Foundation layer to decrease the time, and money, it takes to get BI content in front of end users. Attendees will see a sample Data Vault model designed iteratively and deployed to the semantic model of OBIEE.
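The "Foundation" layer described above is built from a small set of repeatable Data Vault table patterns, which is what makes it pattern-based and easy to virtualize over. A minimal sketch (table and column names are illustrative, not from the presentation):

```sql
-- Hub: one row per business key, nothing else
CREATE TABLE hub_customer (
    customer_hk   char(32)     NOT NULL PRIMARY KEY,  -- hash of the business key
    customer_bk   varchar(50)  NOT NULL,              -- natural/business key
    load_dts      timestamp    NOT NULL,
    record_source varchar(50)  NOT NULL
);

-- Satellite: descriptive attributes, historized by load timestamp
CREATE TABLE sat_customer_detail (
    customer_hk   char(32)     NOT NULL REFERENCES hub_customer,
    load_dts      timestamp    NOT NULL,
    customer_name varchar(100),
    record_source varchar(50)  NOT NULL,
    PRIMARY KEY (customer_hk, load_dts)
);
```

Because every hub, link, and satellite follows the same shape, the dimensional views mapped in OBIEE's BMM layer can be generated from the same patterns rather than hand-built per subject area.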
Worst Practices in Data Warehouse Design – Kent Graziano
This presentation was given at OakTable World 2014 (#OTW14) in San Francisco. After many years of designing data warehouses and consulting on data warehouse architectures, I have seen a lot of bad design choices by supposedly experienced professionals. A sense of professionalism, confidentiality agreements, and some sense of common decency have prevented me from calling people out on some of this. No more! In this session I will walk you through a typical bad design like many I have seen. I will show you what I see when I reverse engineer a supposedly complete design, walk through what is wrong with it, and discuss options to correct it. This will be a test of your knowledge of data warehouse best practices: can you recognize these worst practices?
A few months back I spoke with some graduate students about "what is data warehousing". In this talk I covered the past, present, and probably future of what data warehousing is and how it can add value to a company.
HOW TO SAVE PILES of $$$ BY CREATING THE BEST DATA MODEL THE FIRST TIME (Ksc... – Kent Graziano
A good data model, done right the first time, can save you time and money. We have all seen the charts on the increasing cost of finding a mistake/bug/error late in a software development cycle. Would you like to reduce, or even eliminate, your risk of finding one of those errors late in the game? Of course you would! Who wouldn't? Nobody plans to miss a requirement or make a bad design decision (well nobody sane anyway). No data modeler or database designer worth their salt wants to leave a model incomplete or incorrect. So what can you do to minimize the risk?
In this talk I will show you a best practice approach to developing your data models and database designs that I have been using for over 15 years. It is a simple, repeatable process for reviewing your data models. It is one that even a non-modeler could follow. I will share my checklist of what to look for and what to ask the data modeler (or yourself) to make sure you get the best possible data model. As a bonus I will share how I use SQL Developer Data Modeler (a no-cost data modeling tool) to collect the information and report it.
Top Five Cool Features in Oracle SQL Developer Data Modeler – Kent Graziano
This is the presentation I gave at OUGF14 in Helsinki, Finland in June 2014.
Oracle SQL Developer Data Modeler (SDDM) has been around for a few years now and is up to version 4.x. It really is an industrial-strength data modeling tool that can be used for any data modeling task you need to tackle. Over the years I have found quite a few features and utilities in the tool that I rely on to make me more efficient (and agile) in developing my models. This presentation will demonstrate at least five of these features, tips, and tricks. I will walk through things like modifying the delivered reporting templates, how to create and apply object naming templates, how to use a table template and transformation script to add audit columns to every table, and how to use the new metadata export tool, plus several other cool things you might not know are there. Since there will likely be patches and new releases before the conference, there is a good chance there will be some new things for me to show you as well. This might be a bit of a whirlwind demo, so get SDDM installed on your device and bring it to the session so you can follow along.
Learn how you can create Tableau dashboards for OBIEE data that provide valuable insight from business-critical data without wasting a ton of time.
How to Handle DEV&TEST&PROD for Oracle Data Integrator – Gurcan Orhan
Most of us have development teams separate from the test and operations teams, each using different repository environments. There are generally three different ODI installations and repositories, one for each team to use separately. Chaos usually ensues over who will test which development and what to deploy into production.
In this session, hear how ODI can handle your development hierarchy with ease of use, in a simplified, synchronized way, for successful deployments.
A simple project will be built up and then enlarged to enterprise level, step by step.
Automating Data Quality Processes at Reckitt – Databricks
Reckitt is a fast-moving consumer goods company with a portfolio of famous brands and over 30k employees worldwide. At that scale, small projects can quickly grow into big datasets, and processing and cleaning all that data can become a challenge. To solve that challenge we have created a metadata-driven ETL framework for orchestrating data transformations through parametrised SQL scripts. It allows us to create various paths for our data as well as easily version control them. The approach of standardising incoming datasets and creating reusable SQL processes has proven to be a winning formula. It has helped simplify complicated landing/stage/merge processes and allowed them to be self-documenting.
But this is only half the battle; we also want to create data products: documented, quality-assured datasets that are intuitive to use. As we move to a CI/CD approach, increasing the frequency of deployments, keeping documentation and data quality assessments up to date becomes increasingly challenging. To solve this problem, we have expanded our ETL framework to include SQL processes that automate data quality activities. Using the Hive metastore as a starting point, we have leveraged this framework to automate the maintenance of a data dictionary and reduce documenting, model refinement, testing data quality, and filtering out bad data to a box-filling exercise. In this talk we discuss our approach to maintaining high-quality data products and share examples of how we automate data quality processes.
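The core of a metadata-driven framework like this can be sketched in a few lines: each step is a parametrised SQL template plus its parameter values, so the transformation paths live in version-controlled metadata rather than in code. All names below are hypothetical illustrations, not Reckitt's actual framework:

```python
# Each step: a name, a parametrised SQL template, and the values to fill in.
# The metadata list itself can be version controlled alongside the SQL.
STEP_METADATA = [
    {"name": "load_stage",
     "sql": "INSERT INTO {stage_table} SELECT * FROM {landing_table}",
     "params": {"stage_table": "stg_sales", "landing_table": "lnd_sales"}},
    {"name": "merge_core",
     "sql": "MERGE INTO {target_table} USING {stage_table} ON ({key})",
     "params": {"target_table": "core_sales", "stage_table": "stg_sales",
                "key": "sale_id"}},
]

def render_steps(metadata):
    """Fill each SQL template with its parameters, preserving declared order."""
    return [step["sql"].format(**step["params"]) for step in metadata]

rendered = render_steps(STEP_METADATA)
```

An orchestrator would then execute each rendered statement in order against the warehouse; adding a data-quality check becomes just another templated step in the same metadata.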
Enabling a Data Mesh Architecture with Data Virtualization – Denodo
Watch full webinar here: https://bit.ly/3rwWhyv
The Data Mesh architectural design was first proposed in 2019 by Zhamak Dehghani, principal technology consultant at Thoughtworks, a technology company that is closely associated with the development of distributed agile methodology. A data mesh is a distributed, de-centralized data infrastructure in which multiple autonomous domains manage and expose their own data, called “data products,” to the rest of the organization.
Organizations adopt a data mesh architecture when they experience shortcomings in highly centralized architectures, such as the lack of domain-specific expertise in data teams, the inflexibility of centralized data repositories in meeting the specific needs of different departments within large organizations, and the slowness of centralized data infrastructures in provisioning data and responding to changes.
In this session, Pablo Alvarez, Global Director of Product Management at Denodo, explains how data virtualization is your best bet for implementing an effective data mesh architecture.
You will learn:
- How data mesh architecture not only enables better performance and agility, but also self-service data access
- The requirements for “data products” in the data mesh world, and how data virtualization supports them
- How data virtualization enables domains in a data mesh to be truly autonomous
- Why a data lake is not automatically a data mesh
- How to implement a simple, functional data mesh architecture using data virtualization
In this session, you will see a demo of Oracle Business Intelligence Visual Analyzer, taking a real-world business use case from end to end, to learn how straightforward it is to tell a compelling story with data and prototype with greater speed, while gaining insights into your information with this cutting-edge new data visualization tool.
Building the Artificially Intelligent Enterprise – Databricks
This session looks at where we are today with data and analytics and what is needed to transition to the Artificially Intelligent Enterprise.
How do you mobilise developers to exploit what data scientists and business analysts have built? How do you align it all with business strategy to maximise business outcomes? How do you combine BI, predictive and prescriptive analytics, automation and reinforcement learning to get maximum value across the enterprise? What is the blueprint for building the artificially intelligent enterprise?
• Data and analytics – Where are we?
• Why is the journey only half-way done?
• 2021 and beyond – The new era of AI usage and not just build
• The requirement – event-driven, on-demand and automated analytics
• Operationalising what you build – DataOps, MLOps and RPA
• Mobilising the masses to integrate AI into processes – what needs to be done?
• Business strategy alignment – the guiding light to AI utilisation for high reward
• Agility step change – the shift to no-code integration of AI by citizen developers
• Recording decisions, and analysing business impact
• Reinforcement learning – transitioning to continuous reward
Enable the business and make Artificial Intelligence accessible for everyone! – Marc Lelijveld
Microsoft is doing a great job of enabling every user to apply Artificial Intelligence in their daily business by implementing AI functionality in Power BI, Microsoft's end-user BI and analytics tool. Finding insights can be challenging with the massive volumes of data generated today. This is where AI can help: automatically finding patterns, helping users understand what the data means, and predicting future outcomes. But most important of all, it enables the business to make data-driven decisions!
In this session I will tell you all about the AI capabilities which Microsoft offers and made available for each and every user within the organization. I'll show you how business users will be able to work with this, without writing a line of code. A session with an overview of AI and a bunch of live demos on how you can implement AI to your daily business.
In this session:
- Azure Cognitive Services
- Auto ML (Machine Learning)
- Power BI Dataflows
INFORMATICA ONLINE TRAINING BY QUONTRA SOLUTIONS WITH PLACEMENT ASSISTANCE
We offer online IT training with placement and project assistance across different platforms, with real-time industry consultants providing quality training for IT professionals, corporate clients, and students. Special features from Quontra Solutions include extensive training in both Informatica online training and placement. We help you with resume preparation and conduct mock interviews.
Emphasis is given to important topics that are essential and most used in real-time projects. Quontra Solutions is an online training leader when it comes to high-end, effective and efficient IT training. We have always been, and still are, focused on providing the most effective and competent training to both students and professionals who are eager to enrich their technical skills.
Training Features at Quontra Solutions:
We believe that online training should be measured by three major aspects: quality, content, and the relationship between trainer and student. Not only are our online training classes important, but the material we provide is in tune with the latest IT training standards, so students need not worry whether the training imparted is outdated.
Course content:
• Basics of data warehousing concepts
• PowerCenter components
• Informatica concepts and overview
• Sources
• Targets
• Transformations
• Advanced Informatica concepts
Please Visit us for the Demo Classes, we have regular batches and weekend batches.
QUONTRASOLUTIONS
204-226 Imperial Drive,Rayners Lane, Harrow-HA2 7HH
Phone : +44 (0)20 3734 1498 / 99
Email: info@quontrasolutions.co.uk
Slide deck used during the May 19, 2016 Embarcadero RAD Server Launch Webinar.
RAD Server is a turn-key application foundation for rapidly building and deploying services-based applications. RAD Server provides automated Delphi and C++ REST/JSON API publishing and management, enterprise database integration middleware, IoT edgeware, and an array of application services such as user directory and authentication services, push notifications, indoor/outdoor geolocation, and JSON data storage. RAD Server enables developers to quickly build new application back ends or migrate existing Delphi or C++ client/server business logic to a modern services-based architecture that is open, stateless, secure and scalable. RAD Server is easy to develop, deploy and operate, making it ideally suited for ISVs and OEMs building re-deployable solutions.
“Oh my goodness! What did I do?” Chances are you have heard, or even uttered this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and could even bring back memories of your own early DBA days.
Using OBIEE and Data Vault to Virtualize Your BI Environment: An Agile ApproachKent Graziano
First we interview the users, then we design a reporting model based on those interviews. We follow that up with mounds of ETL development to load the new model, basically keeping the user community in the dark during all that development. Does this sound familiar?
This presentation will demonstrate an alternative approach using the Data Vault Data Modeling technique to build a flexible, easily-extensible “Foundation” layer in our data warehouse with an Agile, iterative methodology. Relying on the Business Model and Mapping (BMM) functionality of OBIEE, we can rapidly virtualize a dimensional reporting model using the pattern-based Data Vault Foundation layer to decrease the time, and money, it takes to get BI content in front of end users. Attendees will see a sample Data Vault model designed iteratively and deployed to the semantic model of OBIEE.
Worst Practices in Data Warehouse DesignKent Graziano
This presentation was given at OakTable World 2014 (#OTW14) in San Francisco. After many years of designing data warehouses and consulting on data warehouse architectures, I have seen a lot of bad design choices by supposedly experienced professional. A sense of professionalism, confidentiality agreements, and some sense of common decency have prevented me from calling people out on some of this. No more! In this session I will walk you through a typical bad design like many I have seen. I will show you what I see when I reverse engineer a supposedly complete design and walk through what is wrong with it and discuss options to correct it. This will be a test of your knowledge of data warehouse best practices by seeing if you can recognize these worst practices.
A few months back I spoke with some graduate students about "what is data warehousing". In this talk I covered the past, present, and probably future of what data warehousing is and how it can add value to a company.
HOW TO SAVE PILEs of $$$BY CREATING THE BEST DATA MODEL THE FIRST TIME (Ksc...Kent Graziano
A good data model, done right the first time, can save you time and money. We have all seen the charts on the increasing cost of finding a mistake/bug/error late in a software development cycle. Would you like to reduce, or even eliminate, your risk of finding one of those errors late in the game? Of course you would! Who wouldn't? Nobody plans to miss a requirement or make a bad design decision (well nobody sane anyway). No data modeler or database designer worth their salt wants to leave a model incomplete or incorrect. So what can you do to minimize the risk?
In this talk I will show you a best practice approach to developing your data models and database designs that I have been using for over 15 years. It is a simple, repeatable process for reviewing your data models. It is one that even a non-modeler could follow. I will share my checklist of what to look for and what to ask the data modeler (or yourself) to make sure you get the best possible data model. As a bonus I will share how I use SQL Developer Data Modeler (a no-cost data modeling tool) to collect the information and report it.
Top Five Cool Features in Oracle SQL Developer Data ModelerKent Graziano
This is the presentation I gave at OUGF14 in Helsinki, Finland in June 2014.
Oracle SQL Developer Data Modeler (SDDM) has been around for a few years now and is up to version 4.x. It really is an industrial strength data modeling tool that can be used for any data modeling task you need to tackle. Over the years I have found quite a few features and utilities in the tool that I rely on to make me more efficient (and agile) in developing my models. This presentation will demonstrate at least five of these features, tips, and tricks for you. I will walk through things like modifying the delivered reporting templates, how to create and applying object naming templates, how to use a table template and transformation script to add audit columns to every table, and using the new meta data export tool and several other cool things you might not know are there. Since there will likely be patches and new releases before the conference, there is a good chance there will be some new things for me to show you as well. This might be a bit of a whirlwind demo, so get SDDM installed on your device and bring it to the session so you can follow along.
Learn how can you create Tableau dashboards for OBIEE data that provide you valuable insight from business critical data without wasting a ton of time.
How to Handle DEV&TEST&PROD for Oracle Data IntegratorGurcan Orhan
Most of us have development teams apart from test and operation teams using the different repository environments. And there are generally 3 different ODI installations and repositories which each of the teams use separately. Chaos is usually expected and happened who will test which development and what to deploy into production.
In this session hear how ODI can handle your development hierarchy with ease of usage and in simplified/synchronized way for successful deployments.
A simple project will be built up and will be enlarged to enterprise level step by step.
Automating Data Quality Processes at ReckittDatabricks
Reckitt is a fast-moving consumer goods company with a portfolio of famous brands and over 30k employees worldwide. With that scale small projects can quickly grow into big datasets, and processing and cleaning all that data can become a challenge. To solve that challenge we have created a metadata driven ETL framework for orchestrating data transformations through parametrised SQL scripts. It allows us to create various paths for our data as well as easily version control them. The approach of standardising incoming datasets and creating reusable SQL processes has proven to be a winning formula. It has helped simplify complicated landing/stage/merge processes and allowed them to be self-documenting.
But this is only half the battle, we also want to create data products. Documented, quality assured data sets that are intuitive to use. As we move to a CI/CD approach, increasing the frequency of deployments, the demand of keeping documentation and data quality assessments up to date becomes increasingly challenging. To solve this problem, we have expanded our ETL framework to include SQL processes that automate data quality activities. Using the Hive metastore as a starting point, we have leveraged this framework to automate the maintenance of a data dictionary and reduce documenting, model refinement, testing data quality and filtering out bad data to a box filling exercise. In this talk we discuss our approach to maintaining high quality data products and share examples of how we automate data quality processes.
Enabling a Data Mesh Architecture with Data VirtualizationDenodo
Watch full webinar here: https://bit.ly/3rwWhyv
The Data Mesh architectural design was first proposed in 2019 by Zhamak Dehghani, principal technology consultant at Thoughtworks, a technology company that is closely associated with the development of distributed agile methodology. A data mesh is a distributed, de-centralized data infrastructure in which multiple autonomous domains manage and expose their own data, called “data products,” to the rest of the organization.
Organizations leverage data mesh architecture when they experience shortcomings in highly centralized architectures, such as the lack of domain-specific expertise in data teams, the inflexibility of centralized data repositories in meeting the specific needs of different departments within large organizations, and the slowness of centralized data infrastructures in provisioning data and responding to changes.
In this session, Pablo Alvarez, Global Director of Product Management at Denodo, explains how data virtualization is your best bet for implementing an effective data mesh architecture.
You will learn:
- How data mesh architecture not only enables better performance and agility, but also self-service data access
- The requirements for “data products” in the data mesh world, and how data virtualization supports them
- How data virtualization enables domains in a data mesh to be truly autonomous
- Why a data lake is not automatically a data mesh
- How to implement a simple, functional data mesh architecture using data virtualization
In this session, you will see a demo of Oracle Business Intelligence Visual Analyzer, taking a real-world business use case from end to end, to learn how straightforward it is to tell a compelling story with data and prototype with greater speed, while gaining insight into your information with this cutting-edge data visualization tool.
Building the Artificially Intelligent Enterprise - Databricks
This session looks at where we are today with data and analytics and what is needed to transition to the Artificially Intelligent Enterprise.
How do you mobilise developers to exploit what data scientists and business analysts have built? How do you align it all with business strategy to maximise business outcomes? How do you combine BI, predictive and prescriptive analytics, automation and reinforcement learning to get maximum value across the enterprise? What is the blueprint for building the artificially intelligent enterprise?
• Data and analytics – Where are we?
• Why is the journey only halfway done?
• 2021 and beyond – The new era of using AI, not just building it
• The requirement – event-driven, on-demand and automated analytics
• Operationalising what you build – DataOps, MLOps and RPA
• Mobilising the masses to integrate AI into processes – what needs to be done?
• Business strategy alignment – the guiding light to AI utilisation for high reward
• Agility step change – the shift to no-code integration of AI by citizen developers
• Recording decisions and analysing business impact
• Reinforcement learning – transitioning to continuous reward
Enable the business and make Artificial Intelligence accessible for everyone! Marc Lelijveld
Microsoft is doing a great job of enabling every user to apply Artificial Intelligence in his or her daily business by implementing AI functionality in Power BI, Microsoft's end-user BI and analytics tool. Finding insights can be challenging with the massive volumes of data generated today. This is where AI can help: automatically finding patterns, helping users understand what the data means, and predicting future outcomes. Most important of all, it enables the business to make data-driven decisions!
In this session I will tell you all about the AI capabilities Microsoft offers and has made available to each and every user within the organization. I'll show you how business users can work with this without writing a line of code. A session with an overview of AI and a bunch of live demos showing how you can apply AI in your daily business.
In this session:
- Azure Cognitive Services
- Auto ML (Machine Learning)
- Power BI Dataflows
INFORMATICA ONLINE TRAINING BY QUONTRA SOLUTIONS WITH PLACEMENT ASSISTANCE
We offer online IT training with placement and project assistance on different platforms, with real-time industry consultants providing quality training for IT professionals, corporate clients and students. Special features from Quontra Solutions include extensive Informatica training along with placement support. We also help you with resume preparation and conduct mock interviews.
Emphasis is given to important topics that are essential and most used in real-time projects. Quontra Solutions is an online training leader when it comes to high-end, effective and efficient IT training. We have always focused on providing the most effective and competent training to both students and professionals who are eager to enrich their technical skills.
Training Features at Quontra Solutions:
We believe that online training should be measured by three major aspects: quality, content, and the relationship between trainer and student. Our online classes matter, but so does the material we provide, which is kept in tune with the latest IT training standards, so students need not worry about whether the training they receive is outdated.
Course content:
• Basics of data warehousing concepts
• Power center components
• Informatica concepts and overview
• Sources
• Targets
• Transformations
• Advanced Informatica concepts
Please visit us for the demo classes; we have regular batches and weekend batches.
QUONTRASOLUTIONS
204-226 Imperial Drive, Rayners Lane, Harrow HA2 7HH
Phone: +44 (0)20 3734 1498 / 99
Email: info@quontrasolutions.co.uk
Slide deck used during the May 19, 2016 Embarcadero RAD Server Launch Webinar.
RAD Server is a turn-key application foundation for rapidly building and deploying services-based applications. RAD Server provides automated Delphi and C++ REST/JSON API publishing and management, Enterprise database integration middleware, IoT Edgeware and an array of application services such as User Directory and Authentication services, Push Notifications, Indoor/Outdoor Geolocation and JSON data storage. RAD Server enables developers to quickly build new application back-ends or migrate existing Delphi or C++ client/server business logic to a modern services-based architecture that is open, stateless, secure and scalable. RAD Server is easy to develop, deploy and operate, making it ideally suited for ISVs and OEMs building re-deployable solutions.
“Oh my goodness! What did I do?” Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days.
Build & test once, deploy anywhere - Vday.hu 2016 - Zsolt Molnar
This talk is about a packaging workflow for custom-made Linux applications that can help us get rid of heavy and error-prone installation guides. Why couldn't those applications become as easy to install and upgrade as any mobile app on a smartphone? After elaborating on the problem space, Zsolt will show how we can build application packages for any production platform on every git commit in a predictable and easy manner. The aim is to test your application with all its dependencies only once and then wrap the verified code into multiple deployment formats that can be consumed directly without any major installation process. This is a demo-heavy presentation touching automation tools like vagrant, packer, saltstack, docker, jenkins and cloudformation/terraform. The challenge is to build docker packages, OVA/OVF bundles and AWS AMI images from a relatively simple application within 30 minutes.
Learn about the latest features of C++11 that you can take advantage of today in C++Builder 10.1 Berlin.
David Millington, Embarcadero's new C++Builder Product Manager, shows cool C++11 code in the IDE that can be compiled for Windows, macOS, iOS and Android using the Embarcadero C++Builder Clang-enhanced compiler.
C++11 language features covered include:
Auto typed variables
Variadic templates
Lambda expressions
Atomic operations
Unrestricted unions
and more
ER/Studio is the complete business-driven data architecture solution that combines data modeling, business process, and application modeling and reporting with cross-organizational team collaboration for data architectures and enterprises of all sizes.
Are you still using FTP to deploy your code? Are you still manually performing the same deployment steps, again and again? How many hours have you spent ssh-ing into the server, pulling the repo, migrating the database, reloading the web server and so on, for each deployment? Ever wondered if there is a process as simple as a single click that performs all these steps for you?
Automated deployment does exactly that. It takes over the burden of remembering all the steps required in each deployment and executes them smoothly.
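The manual steps listed above are exactly what a deployment script automates. Here is a minimal, hypothetical sketch — the host, paths, and service names are invented, and real teams typically reach for a tool such as Fabric, Ansible, or a CI/CD runner instead:

```python
# Sketch of scripting the classic deploy sequence: pull, migrate, reload.
import subprocess

STEPS = [
    "cd /srv/myapp && git pull origin main",   # pull the repo
    "cd /srv/myapp && ./manage.py migrate",    # migrate the database
    "sudo systemctl reload nginx",             # reload the web server
]

def deploy(host, steps=STEPS, dry_run=True):
    """Run each deployment step over ssh; with dry_run=True,
    just return the commands that would be executed."""
    commands = [["ssh", host, step] for step in steps]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # stop on the first failure
    return commands

if __name__ == "__main__":
    for cmd in deploy("deploy@example.com"):
        print(" ".join(cmd))
```

Even this toy version shows the payoff: the steps are written down once, run in the same order every time, and the whole sequence aborts on the first failure instead of leaving a half-deployed server.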
Quicker Insights and Sustainable Business Agility Powered By Data Virtualizat... - Denodo
Watch full webinar here: https://bit.ly/3xj6fnm
Presented at Chief Data Officer Live 2021 A/NZ
The world is changing faster than ever, and for companies to compete and succeed they need to be agile enough to respond quickly to market changes and emerging opportunities. Data plays an integral role in achieving this business agility. However, given the complex nature of the enterprise data architecture, finding and analysing data is an increasingly challenging task. Data virtualization is a modern data integration technique that integrates data in real time, without having to physically replicate it.
Watch this on-demand session to understand what data virtualization is and how it:
- Delivers data in real-time, and without replication
- Creates a logical architecture to provide a single view of truth
- Centralises the data governance and security framework
- Democratises data for faster decision making and business agility
Data Fabric - Why Should Organizations Implement a Logical and Not a Physical... - Denodo
Watch full webinar here: https://bit.ly/3fBpO2M
Data fabric has been a hot topic, and Gartner has termed it one of the top strategic technology trends for 2022. Noticeably, many mid-to-large organizations are starting to adopt this logical data fabric architecture while others are still curious about how it works.
With a better understanding of data fabric, you will be able to architect a logical data fabric to enable agile data solutions that honor enterprise governance and security, support operations with automated recommendations, and ultimately, reduce the cost of maintaining hybrid environments.
In this on-demand session, you will learn:
- What is a data fabric?
- How is a physical data fabric different from a logical data fabric?
- Which one should you use and when?
- What’s the underlying technology that makes up the data fabric?
- Which companies are successfully using it and for what use case?
- How can I get started and what are the best practices to avoid pitfalls?
How Data Virtualization Puts Enterprise Machine Learning Programs into Produc... - Denodo
Watch full webinar here: https://bit.ly/3offv7G
Presented at AI Live APAC
Advanced data science techniques, like machine learning, have proven to be an extremely useful tool for deriving valuable insights from existing data. Platforms like Spark and complex libraries for R, Python and Scala put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Watch this on-demand session to learn how companies can use data virtualization to:
- Create a logical architecture to make all enterprise data available for advanced analytics exercises
- Accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- Integrate popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc.
Your Data is Waiting. What are the Top 5 Trends for Data in 2022? (ASEAN) - Denodo
Watch full webinar here: https://bit.ly/3saONRK
COVID-19 has pushed every industry and organization to embrace digital transformation at scale, upending the way many businesses will operate for the foreseeable future. Organizations no longer tolerate monolithic and centralized data architecture; they are embracing flexibility, modularity, and distributed data architecture to help drive innovation and modernize processes.
The pandemic has compelled organizations to accelerate their digital transformation initiatives and look for smarter and more agile ways to manage and leverage their corporate data assets. Data governance has become challenging in the ever-increasing complexity and distributed nature of the data ecosystem. Interoperability, collaboration and trust in data are imperative for a business to succeed. Data needs to be easily accessible and fit for purpose.
In this session, Denodo experts will discuss 5 key trends that are expected to be top of mind for CIOs and CDOs:
- Distributed Data Environments
- Decision Intelligence
- Modern Data Architecture
- Composable Data & Analytics
- Hyper-personalized Experiences
A Logical Architecture is Always a Flexible Architecture (ASEAN) - Denodo
Watch full webinar here: https://bit.ly/3joZa0a
The current data landscape is fragmented, not just in location but also in terms of processing paradigms: data lakes, IoT architectures, NoSQL, and graph data stores, SaaS applications, etc. are found coexisting with relational databases to fuel the needs of modern analytics, ML, and AI. The physical consolidation of enterprise data into a central repository, although possible, is both expensive and time-consuming. A logical data warehouse is a modern data architecture that allows organizations to leverage all of their data irrespective of where the data is stored, what format it is stored in, and what technologies or protocols are used to store and access the data.
Watch this session to understand:
- What is a logical data warehouse and how to architect one
- The benefits of logical data warehouse – speed with agility
- Customer use case depicting logical architecture implementation
When and How Data Lakes Fit into a Modern Data Architecture - DATAVERSITY
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Build the data lake, but avoid building the data swamp! The tool ecosystem is building up around the data lake, and soon many organizations will have a robust lake alongside the data warehouse. We will discuss policies to keep them straight, send data to its best platform, and keep users’ confidence in their data platforms high.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
Top 10 guidelines for deploying modern data architecture for the data driven ... - LindaWatson19
Enterprises are facing a new revolution, powered by the rapid adoption of data analytics with modern technologies like machine learning and artificial intelligence (AI).
Is your big data journey stalling? Take the Leap with Capgemini and Cloudera - Cloudera, Inc.
Transitioning to a Big Data architecture is a big step, and the complexity of moving existing analytical services onto modern platforms like Cloudera can seem overwhelming.
Data Science Operationalization: The Journey of Enterprise AI - Denodo
Watch full webinar here: https://bit.ly/3kVmYJl
As we move into a world driven by AI initiatives, we find ourselves facing new and diverse challenges when it comes to operationalization. Creating a solution and putting it into practice are certainly not the same thing. The challenges span various organizational and data facets. In many instances, data scientists may be working in silos, and connecting to the live data may not always be possible. But how does one guarantee that a model developed in a silo is still relevant to live data? How can we manage the data flow and data access across the entire AI operationalization cycle?
Watch on-demand to explore:
- The journey and challenges of the Data Scientist
- How Denodo data virtualization with data movement streamlines operationalization
- The best practices and techniques when dealing with siloed data
- How customers have used data virtualization in their data science initiatives
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat... - Denodo
This content was presented during the Smart Data Summit Dubai 2015 in the UAE on May 25, 2015, by Jesus Barrasa, Senior Solutions Architect at Denodo Technologies.
In the era of Big Data, IoT, Cloud and Social Media, Information Architects are forced to rethink how to tackle data management and integration in the enterprise. Traditional approaches based on data replication and rigid information models lack the flexibility to deal with this new hybrid reality. New data sources and an increasing variety of consuming applications, like mobile apps and SaaS, add more complexity to the problem of delivering the right data, in the right format, and at the right time to the business. Data Virtualization emerges in this new scenario as the key enabler of agile, maintainable and future-proof data architectures.
Watch full webinar here: https://bit.ly/3mdj9i7
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received the most attention from the software community in recent years. From Artificial Intelligence and Machine Learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
In this webinar, we will discuss the technology trends that will drive the enterprise data strategies in the years to come. Don't miss it if you want to keep yourself informed about how to convert your data to strategic assets in order to complete the data-driven transformation in your company.
Watch this on-demand webinar as we cover:
- The most interesting trends in data management
- How to build a data fabric architecture?
- How to manage your data integration strategy in the new hybrid world
- Our predictions on how those trends will change the data management world
- How can companies monetize the data through data-as-a-service infrastructure?
- What is the role of voice computing in future data analytics?
Watch full webinar here: https://bit.ly/2SaBj5l
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received the most attention from the software community in recent years. From Artificial Intelligence and Machine Learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
In this webinar we will discuss the technology trends that will drive the enterprise data strategies in the years to come. Don't miss it if you want to keep yourself informed about how to convert your data to strategic assets in order to complete the data-driven transformation in your company.
Join us for an exciting session that will cover:
- The most interesting trends in data management
- How to build a logical data fabric architecture?
- How to manage your data integration strategy in the new hybrid world?
- Our predictions on how those trends will change the data management world
- How can companies monetize the data through data-as-a-service infrastructure?
- What is the role of voice computing in the future of data analytics?
The Shifting Landscape of Data Integration - DATAVERSITY
Enterprises and organizations from every industry and scale are working to leverage data to achieve their strategic objectives — whether they are to be more profitable, effective, risk-tolerant, prepared, sustainable, and/or adaptable in an ever-changing world. Data has exploded in volume during the last decade as humans and machines alike produce data at an exponential pace. Also, exciting technologies have emerged around that data to improve our abilities and capabilities around what we can do with data.
Behind this data revolution, there are forces at work, causing enterprises to shift the way they leverage data and accelerate the demand for leverageable data. Organizations (and the climates in which they operate) are becoming more and more complex. They are also becoming increasingly digital and, thus, dependent on how data informs, transforms, and automates their operations and decisions. With increased digitization comes an increased need for both scale and agility at scale.
In this session, we have undertaken an ambitious goal of evaluating the current vendor landscape and assessing which platforms have made, or are in the process of making, the leap to this new generation of Data Management and integration capabilities.
Data Democratization for Faster Decision-making and Business Agility (ASEAN) - Denodo
Watch full webinar here: https://bit.ly/3ogsO7F
Presented at 3rd Chief Digital Officer Asia Summit
The idea behind data democratization is to enable every type of user in a company to have access to data and to ensure that there is no dependency on any single party that might create a bottleneck to data access. But this is easier said than done, especially given the complex data management landscape that most organizations have today. Data virtualization is a modern data integration technique that not only delivers data in real time without replication but also simplifies data discovery, data exploration and navigating between related data sets.
In this on-demand session, you will understand how data virtualization enables enterprises to:
- Reduce by up to 80% the time required to deliver data to the business, adapted to the needs of each user
- Apply consistent security and governance policies across the self-service data delivery process
- Seamlessly implement the concept of 'Data Marketplace'
What Does Data Governance Have in Common with an Amusement Park? - Denodo
Watch full webinar here: https://bit.ly/3Ab9gYq
Imagine arriving at an amusement park with your family and starting your day without the typical map that lets you plan which shows to see, which rides to go on, and where the kids can and cannot ride... You probably won't get the most out of your day and will have missed many things. Some people like to go exploring and discover things little by little, but when we are talking about business, going in unprepared can be fatal...
In the era of exploding information spread across different sources, data governance is key to guaranteeing the availability, usability, integrity and security of that information. Likewise, the set of processes, roles and policies it defines allows organizations to achieve their objectives while ensuring the efficient use of their data.
Data virtualization, a strategic tool for implementing and optimizing data governance, allows companies to create a 360º view of their data and establish security controls and access policies over the entire infrastructure, regardless of format or location. In this way, it brings together multiple data sources, makes them accessible from a single layer, and provides traceability capabilities to monitor changes in the data.
In this webinar you will learn how to:
- Accelerate the integration of data from fragmented data sources across internal and external systems and obtain a comprehensive view of the information.
- Activate a single data access layer, with protection measures, across the entire company.
- Understand how data virtualization provides the pillars for complying with current data protection regulations through auditing, a data catalog and data security.
Myth Busters VII: I’m building a data mesh, so I don’t need data virtualization - Denodo
Watch full webinar here: https://bit.ly/3DBA4EP
A data mesh architecture offers a lot of promise to change the way we manage data – and for the better. But there’s a lot of confusion about a data mesh. People will tell you that you can build a data mesh on top of a data lake or on top of a data warehouse, and that you don’t need data virtualization to build a data mesh.
Many vendors are jumping on to the data mesh bandwagon and are claiming that they inherently support a data mesh architecture. But do they? How much of this is hype versus reality? Is it true that you don’t need data virtualization to build a scalable, enterprise-grade data mesh?
This is the myth we will attempt to bust in this next Myth Busters webinar.
Watch this session on-demand to learn about the concepts and components of a data mesh, and hear how the logical approach to data management and integration – powered by data virtualization - is critical for a data mesh.
Replay and more: https://blogs.embarcadero.com/pytorch-for-delphi-with-the-python-data-sciences-libraries/
The next installment of the Embarcadero Open Source Live Stream takes a look at the Delphi side of the Python ecosystem with the new Python Data Sciences Libraries and related projects that make it super easy to write Delphi code against Python libraries and easily deploy on Windows, Linux, macOS, and Android. It includes specific examples with the Python Natural Language Toolkit and PyTorch, the library that powers projects like Tesla Autopilot, Uber's Pyro, and Hugging Face's Transformers.
This is part of a series of regular live streams discussing the latest in Embarcadero open source projects. Hosted by Jim McKeeth and joined by members of the community and developers involved in these open source projects, as well as members of Embarcadero and Idera’s Product Management. A great opportunity to see behind the scenes and help shape the future of Embarcadero’s Open Source projects.
Android on Windows 11 - A Developer's Perspective (Windows Subsystem For Andr... - Embarcadero Technologies
The Windows Subsystem for Android (WSA) brings native Android applications to the Windows 11 desktop. Learn how to set up and configure Windows Subsystem for Android for use in software development. See what is required to run WSA as well as what is required to target it from your Android development. Windows Subsystem for Android is available for public preview on Windows 11.
Webinar replay and more: https://blogs.embarcadero.com/?p=134192
This webinar covers the Windows Subsystem for Linux (WSL2) with full GUI and X Windows support. Join this webinar to better understand WSL2, how it works, proper setup, configuration options, and learn to target it in your application development. Test your Linux applications on your Windows desktop without the need for a second computer or the overhead of a virtual machine. Learn to leverage additional Linux features and APIs from your applications.
Examples with Delphi 11 Alexandria and FMXLinux
Introduction to Python GUI development with Delphi for Python - Part 1: Del...Embarcadero Technologies
Learn how Embarcadero’s newly released free Python modules bring the power and flexibility of Delphi’s GUI frameworks to Python. VCL and FireMonkey (FMX) are mature GUI libraries. VCL is focused on native Windows development, while FireMonkey brings a powerful flexible GUI framework to Windows, Linux, macOS, and even Android. This webinar will introduce you to these new free Python modules and how you can use them to build graphical users interfaces with Python. Part 2 will show you how to target Android GUI applications with Python!
Join Jim McKeeth as he introduces you to FMXLinux, and shows how you can bring the power of FireMonkey to Linux.
Outline:
Installation via GetIt Package Manager
Linux, PAServer, SDK, & Package Installation
FMXLinux usage and Samples
FireDAC Database Access on Linux
Migrating from Windows VCL to FMXLinux
3rd Party FMXLinux Support
Deploying rich web apps via Broadway
https://embt.co/FMXLinuxIntro
Combining the Strengths of Python and Delphi
Links replay and more
https://blogs.embarcadero.com/combining-the-strengths-of-delphi-and-python/
Python4Delphi repository
https://github.com/pyscripter/python4delphi
Part 1
https://blogs.embarcadero.com/webinar-replay-python-for-delphi-developers-part-1-introduction/
Webinar by Kiriakos Vlahos (aka PyScripter)
and Jim McKeeth (Embarcadero)
Replay https://youtu.be/aCz5h96ObUM
Find out more, and register for part 2
https://embt.co/3hSAKrg
Check out the library
https://github.com/pyscripter/python4delphi
Agenda
Motivation and Synergies
Introduction to Python
Introduction to Python for Delphi
Simple Demo
TPythonModule
TPyDelphiWrapper
Embeddable Databases for Mobile Apps: Stress-Free Solutions with InterBase - Embarcadero Technologies
When it comes to developing mobile applications, keeping data on your device is a must-have feature, but can still be risky. With embedded InterBase, you can deploy high-performance multi-device applications that maintain 256-bit encryption, have a small footprint and need little, if any, administration.
What can participants expect to learn: Using InterBase in your mobile apps is easier than you may expect. Learn to develop mobile applications using InterBase, and how to take advantage of some of the convenient features of InterBase, such as Change Views and 256-bit security.
Join Mary Kelly, InterBase Engineer & RAD Software Consultant, and Jim McKeeth, Chief Developer Advocate & Engineer, for this webinar replay.
Replay: https://embt.co/2qUPwWY
TMS Software's Map Packs make it easy to integrate mapping into your applications, based on the Google Maps and OpenStreetMap sources. Join us for this webinar to learn how to take your mapping to the next level.
Works on VCL, FireMonkey (FMX), Windows, Android, iOS, macOS, Delphi and C++Builder.
Applications built with Delphi and C++Builder for the Windows platform have proven to be indispensable instruments for businesses, but rewriting them for the cloud is often cost-prohibitive. rollApp offers a cloud platform that can run existing desktop applications in the cloud without any need to modify them. In this webinar you will learn how to move your application to the cloud and offer the benefits of a cloud solution to your users in a matter of a few weeks.
Slide deck for the June 2, 2016 Embarcadero Webinar
This webinar will show you how to build mobile applications for iOS and Android using Delphi and C++Builder 10.1 Berlin. We will cover getting started, best practices for mobile UI/UX, building your first app, using FireUI Live Preview, creating custom design views and Live Previews, a real world example of creating, submitting and getting store acceptance for an iOS and Android app, working with databases, what’s new for mobile development and more.
This webinar will also give advice to Windows VCL desktop application developers who want to migrate as much of their existing code as possible to the iOS and Android mobile platforms.
In this webinar we take a deeper dive into:
• How to get started building Mobile Apps if you are a Windows VCL desktop developer
• Building Mobile Apps using the different target platforms configurations
• Best practices and Apple/Google UI/UX guidelines for mobile applications – you’ll need to follow these to get your apps accepted.
• Creating FireUI Designer Custom IDE Views for other Mobile Devices
• FireUI Live Preview – extending the App to support custom component viewing
• Accessing Local and Remote Databases from your mobile apps
• Submitting apps to the Apple App Store, Google Play
Technical demonstrations will be presented by the team. Live Q&A will be done during and at the end of the webinar.
This is a presentation from the DBArtisan and Rapid SQL 2016 product launch. See what's new in these tools for database administrators (DBAs) and database developers. And learn about the revolutionary new Performance IQ tool. See the companion webinar at: http://forms.embarcadero.com/DBArtisan-RapidSQL-2016-Release
Is This Really a SAN Problem? Understanding the Performance of Your IO Subsystem – Embarcadero Technologies
Learn more about Embarcadero database tools at: http://www.embarcadero.com/products/database-tools
Nearly 80% of performance issues appear to be related to the performance of storage. In reality, only about half of those are actual storage bottlenecks; frequently, things like missing indexes, poor database design, or misuse of features can either negatively impact the performance of the storage or make it look like the root cause of the issue.
Join Microsoft MVP, Joseph D’Antoni and Embarcadero Director of Software Consultants, Scott Walz as they shed light on diagnosing your IO subsystem.
In this session, you will learn:
+ Where to look in SQL Server to gather information
+ How to use Windows Performance Monitor to analyze storage performance
+ What a "false positive" storage problem might look like
There are only so many times you can yell at the SAN admin before they get cranky and start giving you 1GB drives, so attend this session and learn when the time is right.
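The latency numbers at the heart of this kind of diagnosis come from cumulative counters; in SQL Server, `sys.dm_io_virtual_file_stats` exposes `io_stall_read_ms` and `num_of_reads` per database file. A minimal sketch of the delta-based calculation, using hypothetical sample values:

```python
# Toy latency calculation from cumulative IO counters, as exposed by
# SQL Server's sys.dm_io_virtual_file_stats. Real monitoring samples
# the DMV twice and diffs the counters; the values below are made up.

def avg_latency_ms(stall_ms_start, reads_start, stall_ms_end, reads_end):
    """Average read latency over a sampling interval, in milliseconds."""
    reads = reads_end - reads_start
    if reads == 0:
        return 0.0  # no reads issued during the interval
    return (stall_ms_end - stall_ms_start) / reads

# Two hypothetical samples of (io_stall_read_ms, num_of_reads) taken
# 60 seconds apart for one database file:
latency = avg_latency_ms(120_000, 50_000, 126_000, 50_500)
print(latency)  # 6000 ms of stall over 500 reads -> 12.0 ms per read
```

As a rough rule of thumb, sustained read latency well above ~20 ms is the usual signal to start looking at storage; single-digit values point the investigation back at the database itself.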
Learn more at: http://www.embarcadero.com/products/rad-studio?cid=701G0000000WLhl
Embarcadero® RAD Studio™ 10 Seattle is the fastest way to build and update data-rich, hyper-connected, visually engaging applications for Windows 10, Mac, Mobile, IoT and more using Object Pascal and C++. Quickly and easily bring your apps and customers to Windows 10 with a wide range of Windows 10 enabling features such as new Windows 10 VCL Controls, VCL and FMX UI Styles, and UWP (Universal Windows Platform) services like notifications.
Watch the companion webinar at: http://embt.co/1hjDU8s
Many DBAs may only know enough about data modeling to be dangerous. There are a number of challenges that DBAs face when trying to do data modeling, as well as some preconceived notions of what they think data modeling can (or can’t) do for them, such as generating useful DDL code.
This 90-minute session will provide specific insights and examples to show DBAs how a data modeling tool can help them improve database performance. Data modeling can simplify routine tasks and provide valuable context for a database implementation. Karen Lopez and John Sterrett will debunk seven dangerous myths that DBAs believe about data modeling, and also discuss and demonstrate:
+ Challenges DBAs encounter with data modeling
+ What data modeling really means and how it adds value
+ Why data modeling is key to successful agile projects
+ How data model-driven development saves time and money
+ Why data modeling should be done throughout the development lifecycle
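The DDL-generation point above can be illustrated with a toy sketch (not ER/Studio's actual output) that derives a CREATE TABLE statement from a minimal entity definition; the entity and column names are hypothetical:

```python
# Hypothetical sketch: derive CREATE TABLE DDL from a tiny logical model.
# A real modeling tool also handles constraints, indexes, relationships,
# and platform-specific data types.

def entity_to_ddl(name, columns, primary_key):
    """columns: list of (column_name, sql_type) tuples."""
    lines = [f"    {col} {sql_type}" for col, sql_type in columns]
    lines.append(f"    PRIMARY KEY ({', '.join(primary_key)})")
    body = ",\n".join(lines)
    return f"CREATE TABLE {name} (\n{body}\n);"

ddl = entity_to_ddl(
    "Customer",
    [("customer_id", "INT"), ("name", "VARCHAR(100)")],
    ["customer_id"],
)
print(ddl)
```

The value of model-driven DDL is that the model stays the single source of truth: regenerate the script and the schema change flows through, instead of being hand-edited in two places.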
View the companion webinar at: http://embt.co/1L8V6dI
Some claim that, in the age of Big Data, data modeling is less important or even not needed. However, with the increased complexity of the data landscape, it is actually more important to incorporate data modeling in order to understand the nature of the data and how they are interrelated. In order to do this effectively, the way that we do data modeling needs to adapt to this complex environment.
One of the key data modeling issues is how to foster collaboration between new groups, such as data scientists, and traditional data management groups. There are often different paradigms, and yet it is critical to have a common understanding of data and semantics between different parts of an organization. In this presentation, Len Silverston will discuss:
+ How Big Data has changed our landscape and affected data modeling
+ How to conduct data modeling in a more ‘agile’ way for Big Data environments
+ How we can collaborate effectively within an organization, even with differing perspectives
About the Presenter:
Len Silverston is a best-selling author, consultant, and a fun and top-rated speaker in the fields of data modeling, data governance, and human behavior in the data management industry, where he has pioneered new approaches to effectively tackle enterprise data management. He has helped many organizations worldwide to integrate their data, systems and even their people. He is well known for his work on "Universal Data Models", which are described in The Data Model Resource Book series (Volumes 1, 2, and 3).
Understanding Hardware: The Right Fights for the DBA to Pick with the Server Team – Embarcadero Technologies
Watch the companion webinar at:
http://forms.embarcadero.com/Right-Fights-for-DBAs
Whether it’s the cloud, virtual machines, or storage area networks (SANs), your databases need to run on hardware – it may be your hardware, it may be a cloud vendor’s, or it may be in a co-located data center. In this webinar, you will learn about infrastructure and how it interacts with your databases.
Through detailed examples, you’ll gain an understanding of how virtualization can negatively impact your performance if configured incorrectly, the different types of storage, and a little bit about the cloud. Most importantly, you will learn the right questions to ask your server team for each environment. Finally, you will learn how to identify hardware bottlenecks and work with the server team to resolve them. In this session, you will learn the wait types associated with these issues that will help you quickly identify the root cause of a performance bottleneck, what to monitor for inconsistencies (e.g., page life expectancy for SQL Server), and what questions to ask of your infrastructure team.
Join Joseph D’Antoni, SQL expert and Evangelist and Scott Walz, Director of Embarcadero Software Consultants as they provide insight on techniques and tools to help you pick your battles.
In this presentation, you will learn:
+ What you really need from your infrastructure team in terms of server configuration
+ A solid understanding of virtualization and how it can impact databases
+ How to troubleshoot a hardware-related problem through wait statistics and operating system monitoring
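The wait-statistics approach in the last bullet can be sketched with a few well-known SQL Server wait types; the mapping below is illustrative and far from exhaustive:

```python
# Illustrative mapping of common SQL Server wait-type prefixes to the
# resource they usually implicate. Real triage also weighs wait times
# against a baseline and filters out benign background waits.

WAIT_CATEGORIES = {
    "PAGEIOLATCH_": "storage (data file reads)",
    "WRITELOG": "storage (transaction log writes)",
    "SOS_SCHEDULER_YIELD": "CPU pressure",
    "RESOURCE_SEMAPHORE": "memory grants",
    "LCK_M_": "blocking/locking",
}

def classify_wait(wait_type):
    """Return the resource a wait type usually points at."""
    for prefix, category in WAIT_CATEGORIES.items():
        if wait_type.startswith(prefix):
            return category
    return "uncategorized"

print(classify_wait("PAGEIOLATCH_SH"))  # storage (data file reads)
print(classify_wait("LCK_M_X"))         # blocking/locking
```

Knowing which bucket a dominant wait falls into is what tells you whether the conversation with the server team should be about disks, CPUs, memory, or your own application code.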
Learn more about DBArtisan at: http://www.embarcadero.com/products/dbartisan
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
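One common early step in that kind of hardening is a default-deny network policy per namespace, so traffic must be re-allowed explicitly with narrower policies. A minimal sketch (the namespace name is hypothetical):

```yaml
# Deny all ingress and egress for every pod in the (hypothetical)
# "production" namespace; allow traffic back with additional,
# narrower NetworkPolicies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that NetworkPolicy objects only take effect when the cluster's network plugin enforces them.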
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Epistemic Interaction – tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Smart TV Buyer Insights Survey 2024 – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
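To make the "power flows" feature concrete, here is a deliberately tiny DC power-flow calculation in plain Python. This is not the PowSyBl or pypowsybl API, just a toy version of the kind of computation such engines perform; the three-bus network data is hypothetical:

```python
# Toy DC power flow on a 3-bus network: P_i = sum_j b_ij * (theta_i - theta_j),
# with bus 0 as the slack bus held at angle 0. Real engines handle AC models,
# losses, limits, and much larger systems.

# Line susceptances (per unit): (bus_i, bus_j) -> b_ij
b = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}

# Net injections at the non-slack buses (negative = load)
p1, p2 = -0.6, -0.3

# Reduced susceptance matrix for buses 1 and 2:
# [ b01+b12   -b12  ] [theta1]   [p1]
# [  -b12   b02+b12 ] [theta2] = [p2]
a11, a12 = b[(0, 1)] + b[(1, 2)], -b[(1, 2)]
a21, a22 = -b[(1, 2)], b[(0, 2)] + b[(1, 2)]

det = a11 * a22 - a12 * a21
theta1 = (p1 * a22 - a12 * p2) / det   # Cramer's rule for the 2x2 system
theta2 = (a11 * p2 - p1 * a21) / det

# Line flows P_ij = b_ij * (theta_i - theta_j), with theta0 = 0
angles = (0.0, theta1, theta2)
flows = {ij: b[ij] * (angles[ij[0]] - angles[ij[1]]) for ij in b}
print(flows)  # the slack bus supplies 0.5 pu on line 0-1 and 0.4 pu on line 0-2
```

Even this toy version shows the shape of the problem: build a susceptance matrix from the grid model, solve a linear system for bus angles, and read line flows off the angle differences.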
Encryption in Microsoft 365 – ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Transcript: Selling digital books in 2024: Insights from industry leaders – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
15. EMBARCADERO TECHNOLOGIES
Agenda
• What’s happening with data?
• The new lifecycle
• Data landscape complexity
• Discovery & identification through models
  – Specific capabilities
• What’s happening in reality?
• Concluding remarks
19. Value and the New Lifecycle
Lifecycle: Discover → Document (Model) → Integrate
Key Skill Sets
• Data Design & Management
• ETL and Software Development
• Data Analysis / Stats
• Business Analysis & Discovery
Value Delivered
• Validation
• Integration
• Enrichment
• Usability
20. Data Landscape Complexity
• Comprised of:
  – Proliferation of disparate systems
  – Mismatched departmental solutions
  – Many database platforms
  – Big data platforms
  – ERP, SaaS
  – Obsolete legacy systems
• Compounded by:
  – Poor decommissioning strategy
  – Point-to-point interfaces
  – Data warehouses, data marts, ETL …
Data Archaeologist?
21. Discovery and Identification Through Models
• Identify candidate data sources
• Reverse engineer data sources into models
• Identify, name and define
• Classify through metadata
• Map “like” items across models
• Data lineage / chain of custody
• Repository
• Collaboration & publishing
22. ER/Studio: Native Big Data Support
• MongoDB
  – Diagramming
  – Reverse & Forward Engineering (JSON, BSON)
  – MongoDB certification for 2.x and 3.0
• Certified for HDP 2.1
  – Forward and reverse engineering
  – Hive DDL
• Additional MetaWizard capabilities for additional platforms
24. ER/Studio: Apply Naming Standards
• Can invoke with other wizards
  – Generate Physical Model
  – Compare & Merge
  – XML Schema Generation
  – Model Validation
• Can apply to model or sub-model at any time
• Either direction
• Selective review/apply
• Enabled by loose model coupling
• Name lockdown (freeze names)
25. ER/Studio: Universal Mappings
• Ability to link “like” or related objects
  – Within the same model file
  – Across separate model files
• Entity/Table level
• Attribute/Column level
31. Enterprise Data Trends
• Increasing volumes, velocity, and variety of enterprise data: 30%–50% year-over-year growth
• Decreasing percentage of enterprise data which is effectively utilized: only 5% of all enterprise data is fully utilized
• Increased risk from data misunderstanding and non-compliance: $600bn annual cost for data clean-up in the U.S.
32. Business Stakeholders’ Data Usage
Suspect that business stakeholders INTERPRET DATA INCORRECTLY?
• Yes, frequently: 14%
• Yes, occasionally: 67%
• No, never: 9%
• I don’t know: 10%
Suspect that business stakeholders make decisions USING THE WRONG DATA?
• Yes, frequently: 11%
• Yes, occasionally: 64%
• No, never: 13%
• I don’t know: 12%
33. Data Model Usage & Understanding
What is your organization’s approach to data modeling?
• We don’t use data models: 13%
• Other: 3%
• Our data team does most data models but developers also build them as needed: 16%
• Our database administrators own data modeling: 19%
• Developers develop their own data models: 31%
• We have a data modeling team that is responsible for data models: 18%
How well does your organization’s technology leadership team understand the value of using data models?
• Completely understand: 20%
• Understand somewhat: 60%
• Don’t understand: 17%
• I don’t know: 3%
34. Call to Action
• Audit, map and define existing data assets using models, with the capabilities discussed
• Share, collaborate, govern
• Leverage data modeling to enable business agility
• Adapt to the “new” lifecycle
• Instill a data culture based on a philosophy of continuous improvement
35. Thank you!
• Learn more about the ER/Studio product family: http://www.embarcadero.com/data-modeling
• Trial Downloads: http://www.embarcadero.com/downloads
• To arrange a demo, please contact Embarcadero Sales: sales@embarcadero.com, (888) 233-2224