Audience Level
All levels
Synopsis
Our journey toward solving our application and infrastructure problems using immutability, codification, Mesos, Docker, and Ironic.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
Webinar: Don't believe the hype, you don't need dedicated storage for VDI | NetApp
This webinar covers how the combination of SolidFire and Citrix XenDesktop enables customers to confidently support the storage demands of a virtual desktop environment in a multi-tenant or multi-application environment.
Get the most out of OpenStack block storage with SolidFire | NetApp
Learn how SolidFire storage can be used to deliver predictable application performance and improve infrastructure efficiency in your OpenStack environment.
This presentation covers:
- Consolidation of multiple mission-critical applications on the same storage system
- Integrated End-to-End Quality of Service allocation using OpenStack Cinder Volume Types to eliminate inconsistent performance
- Boot from Cinder Volumes
- Cinder snapshots and clone offloading with SolidFire
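As a hedged illustration of the volume-type approach above (the exact QoS keys vary by backend; minIOPS/maxIOPS/burstIOPS are the SolidFire driver's spec names, and the names "gold-qos"/"gold" are made up for this sketch), an operator might define and attach a QoS policy like this:

```shell
# Create a QoS spec with guaranteed, capped, and burst IOPS (SolidFire-style keys).
openstack volume qos create gold-qos \
  --consumer back-end \
  --property minIOPS=1000 --property maxIOPS=5000 --property burstIOPS=8000

# Create a volume type and bind the QoS spec to it.
openstack volume type create gold
openstack volume qos associate gold-qos gold

# Volumes created with this type inherit the end-to-end QoS policy,
# which is what eliminates noisy-neighbor performance variance.
openstack volume create --type gold --size 100 app-db-vol
```

Because the policy travels with the volume type, tenants self-select a performance tier at volume-creation time rather than filing tickets for storage tuning.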
Low-latency real-time data processing at giga-scale with Kafka | John DesJard... | HostedbyConfluent
Data volumes continue to grow, demanding new, more scalable solutions for low-latency data processing. Previously, the default approach to deploying such systems was to throw a ton of hardware at the problem. However, that is no longer necessary, as newer technologies showcase a level of efficiency that enables smaller, more manageable clusters while handling extreme workloads. Processing billions of events per second on Kafka can now be done with a modest investment in compute resources. In this session, you will learn how to architect and build data processing applications that scale linearly, and combine streaming data and reference data (data-in-motion and data-at-rest) with machine learning. We will take you through the end-to-end framework and example application, built on the Hazelcast Platform, an open source software engine designed for ultra-fast performance. We will also show how you can leverage SQL to further explore the operational data in the solution, including querying Kafka topics and key-value data on the in-memory data store. Attendees will also get access to the GitHub sample application shown.
Presto: Fast SQL-on-Anything Across Data Lakes, DBMS, and NoSQL Data Stores | Alluxio, Inc.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Presto: Fast SQL-on-Anything Across Data Lakes, DBMS, and NoSQL Data Stores
Kamil Bajda-Pawlikowski, CTO, Starburst Data
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
In January of this year, Kyligence announced the immediate availability of Kyligence Cloud 4, the first fully cloud-native, distributed OLAP platform. During our announcement, EMA analyst John Santaferraro said:
“As the race for unified analytics heats up, Kyligence offers a solution that overcomes the challenges of querying data in both data lakes and data warehouses located both in the cloud and on premises.”
Join Li Kang, VP of North America at Kyligence, as he provides an overview of the Kyligence Cloud 4 release, covering:
--The new cloud-native architecture that employs Apache Kylin, Apache Spark, and Apache Parquet to ensure optimal performance.
--How KC4 delivers sub-second query responses on very large datasets using precomputed aggregate indexes (hyper-cubes) and table indexes.
--The AI-Augmented engine that intelligently organizes your data and reduces data modeling time from days/weeks to minutes.
In this presentation, we will present the Kyligence Cloud 4 story - high-speed analytics with unprecedented sub-second query response times against petabyte datasets.
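The core idea behind precomputed aggregate indexes can be sketched in a few lines. This is a hypothetical, simplified illustration of the general technique (not Kyligence's implementation): group-by aggregates are computed once at build time, so at query time a result is a dictionary lookup rather than a scan of the raw data.

```python
from collections import defaultdict

def build_aggregate_index(rows, dims, measure):
    """Precompute a group-by sum over `dims` so later queries are O(1) lookups."""
    index = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dims)
        index[key] += row[measure]
    return dict(index)

rows = [
    {"country": "AU", "year": 2020, "sales": 10.0},
    {"country": "AU", "year": 2020, "sales": 5.0},
    {"country": "US", "year": 2020, "sales": 7.0},
]
idx = build_aggregate_index(rows, ("country", "year"), "sales")
# idx[("AU", 2020)] == 15.0 -- answered without rescanning the raw rows
```

Real systems precompute many such indexes (a "cube" of dimension combinations) and pick the smallest one that can answer a given query, which is what makes sub-second responses on very large datasets possible.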
Building the Next-gen Digital Meter Platform for Fluvius | Databricks
Fluvius is the network operator for electricity and gas in Flanders, Belgium. Their goal is to modernize the way people look at energy consumption using a digital meter that captures consumption and injection data from any electrical installation in Flanders, from households to large companies. After full roll-out there will be roughly 7 million digital meters active in Flanders, collecting up to terabytes of data per day. Combined with the regulatory requirement that Fluvius maintain a record of these readings for at least 3 years, this puts the platform at petabyte scale. delaware BeLux was assigned by Fluvius to set up a modern data platform, and did so on Azure using Databricks as the core component to collect, store, process, and serve these volumes of data to every single consumer in Flanders and beyond. This enables the Belgian energy market to innovate and move forward. Maarten took up the role of project manager and solution architect.
Cloudian and Rubrik - Hybrid Cloud-based Disaster Recovery | Cloudian
Cloudian and Rubrik outline the benefits of using a modern hybrid cloud-based approach for your VMware backups. While everyone is promising instant restores and shorter backup windows, Rubrik's solution with Cloudian additionally adds policy-based management, reducing complexity. Tiering of data to Cloudian S3 Object Storage ensures that only the hottest backups consume SSD storage. Both solutions are scale-out, so adding additional capacity is a breeze.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Unified Data Access with Gimel
Deepak Chandramouli, Engineering Lead
Anisha Nainani, Sr. Software Engineer
Dr. Vladimir Bacvanski, Principal Architect (PayPal)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Speeding Up Atlas Deep Learning Platform with Alluxio + Fluid | Alluxio, Inc.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Speeding Up Atlas Deep Learning Platform with Alluxio + Fluid
Yuandong Xie, Platform Researcher (Unisound)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Do you need to move enterprise database information into a Data Lake in real time, and keep it current? Or maybe you need to track real-time customer actions in order to engage them while they are still accessible. Perhaps you have been tasked with ingesting and processing large amounts of IoT data.
How to Build Modern Data Architectures Both On Premises and in the Cloud | VMware Tanzu
Enterprises are beginning to consider the deployment of data science and data warehouse platforms on hybrid (public cloud, private cloud, and on premises) infrastructure. This delivers the flexibility and freedom of choice to deploy your analytics anywhere you need it and to create an adaptable and agile analytics platform.
But the market is conspiring against customer desire for innovation...
Leading public cloud vendors are interested in pushing their new, but proprietary, analytic stacks, locking customers into subpar Analytics as a Service (AaaS) for years to come.
In tandem, Legacy Data Warehouse vendors are trying to extend the lifecycle of their costly and aging appliances with new features of marginal value, simply imitating the same limiting models of public cloud vendors.
New vendors are coming up with interesting ideas, but these ideas often lack critical features, such as support for hybrid solutions, limiting their immediate value to users.
It is 2017—you can, in fact, have your analytics cake and eat it too! Solve your short term costs and capabilities challenges, and establish a long term hybrid data strategy by running the same open source analytics platform on your infrastructure as it exists today.
In this webinar you will learn how Pivotal can help you build a modern analytical architecture able to run on your public, private cloud, or on-premises platform of your choice, while fully leveraging proven open source technologies and supporting the needs of diverse analytical users.
Let’s have a productive discussion about how to deploy a solid cloud analytics strategy.
Presenter: Jacque Istok, Head of Data Technical Field for Pivotal
https://content.pivotal.io/webinars/jul-20-how-to-build-modern-data-architectures-both-on-premises-and-in-the-cloud
Big data ingest frameworks ship with an array of connectors for common data origins and destinations, such as flat files, S3, HDFS, and Kafka, but sometimes you need to send data to, or receive data from, a system that's not on the list. StreamSets includes template code for building your own connectors and processors; we'll walk through the process of building a simple destination that sends data to a REST web service, and show how it can be extended to target more sophisticated systems such as Salesforce Wave Analytics.
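The shape of such a custom REST destination can be sketched generically. This is a hypothetical stand-in, not the StreamSets SDK (whose stage classes and annotations differ): buffer incoming records, then flush each full batch as a single JSON POST, with the transport injected so the sketch runs without a network.

```python
import json

class RestDestination:
    """Toy custom destination: buffer records, flush each batch as one JSON POST."""

    def __init__(self, url, batch_size, transport):
        self.url = url
        self.batch_size = batch_size
        self.transport = transport  # callable(url, body); real code would POST via urllib
        self.buffer = []

    def write(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.transport(self.url, json.dumps(self.buffer))
            self.buffer = []

# Stub transport that records calls instead of touching the network.
posts = []
dest = RestDestination("https://example.invalid/ingest", 2,
                       lambda url, body: posts.append((url, body)))
for rec in [{"id": 1}, {"id": 2}, {"id": 3}]:
    dest.write(rec)
dest.flush()  # push the final partial batch
# posts now holds two bodies: records 1-2 as one batch, record 3 as another
```

Extending this to a richer target mostly means swapping the transport and payload mapping, which is why framework connector templates separate those concerns from the buffering logic.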
Extracting Value from IOT using Azure Cosmos DB, Azure Synapse Analytics and ... | HostedbyConfluent
Due to the explosion of IoT, we have streaming data that needs to be processed in real time. This data needs to be made available for applications as well as for analytics scenarios such as anomaly detection. This workshop presents a solution using Confluent Cloud on Azure, Azure Cosmos DB, and Azure Synapse Analytics, which can be connected in a secure way within an Azure VNET using Azure Private Link configured on Kafka clusters.
This Big Data case study outlines the Hadoop infrastructure deployment for a Fortune 100 media and telecommunications company.
Hadoop adoption in this company had grown organically across multiple different teams, starting with “science projects” and lab initiatives that quickly grew and expanded. Going forward, some of the options they considered for their Big Data deployment included expanding their on-premises infrastructure and using a Hadoop-as-a-Service cloud offering.
Fortunately, they realized that there is a third option: providing the benefits of Hadoop-as-a-Service with on-premises infrastructure. They selected the BlueData EPIC software platform to virtualize their Hadoop infrastructure and provide on-demand access to virtual Hadoop clusters in a secure, multi-tenant model.
Learn more about this case study in the blog post at: http://www.bluedata.com/blog/2015/05/big-data-case-study-hadoop-infrastructure
SolidFire's presentations at CloudStack Collab 2013
This presentation reviews different cloud storage options in the market today, including current storage-related features and functionality available within a CloudStack infrastructure. We will then dive into key enhancements needed to bring CloudStack’s storage integration and functionality to the next level.
Achieving cyber mission assurance with near real-time impact | Elasticsearch
See how Elastic and ECS support the Mission Assurance Decision Support System (MADSS) program for the Navy with observability, data enrichment, and powerful search in a containerized environment. By correlating data from diverse sources using web-based services and secure, automated data transformation services, MADSS improves responsiveness, predictions, and event analysis for critical network and infrastructure outages.
Denodo DataFest 2017: Integrating Big Data and Streaming Data with Enterprise... | Denodo
Watch live presentation here: https://goo.gl/UcZEHU
Big data projects are becoming mature and consistent. However, they remain siloed from the rest of the enterprise data. In addition, new streaming data now needs to be integrated as well.
Watch this Denodo DataFest 2017 session to discover:
• How big data projects can be combined with other enterprise data.
• How to integrate streaming data into the mix.
• Benefits of aggregating the data without having to move them into a centralized repository.
Cost Effectively Run Multiple Oracle Database Copies at Scale | NetApp
Scaling multiple databases with a single legacy storage system works well from a cost perspective, but workload conflicts and hardware contention make these solutions an unattractive choice for anything but low-performance applications.
Complex Analytics with NoSQL Data Store in Real Time | Nati Shalom
NoSQL engines are often limited in the types of queries they can support due to the distributed nature of the data. In this session we will learn patterns for overcoming this limitation and combining multiple query semantics with NoSQL-based engines.
We will specifically demonstrate a combination of key/value, SQL-like, document-model, and graph-based queries, as well as more advanced topics such as handling partial updates and querying through projection. We will also demonstrate how to create a mashup between those APIs, i.e., writing fast through the key/value API and executing complex queries on that same data through SQL.
- See more at: http://nosql2014.dataversity.net/sessionPop.cfm?confid=81&proposalid=6335
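The write-fast/query-rich mashup that session describes can be sketched in miniature. This is a hypothetical illustration of the pattern only, using SQLite as a stand-in for a distributed NoSQL engine: a key/value API handles fast writes and point reads, while SQL runs complex queries over the very same records via a projected field.

```python
import json
import sqlite3

class KVStore:
    """Toy key/value store whose records are also reachable through SQL."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, doc TEXT, city TEXT)")

    def put(self, key, doc):
        # Key/value write path; 'city' is projected out so SQL can use it.
        self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?, ?)",
                        (key, json.dumps(doc), doc.get("city")))

    def get(self, key):
        # Key/value point read by primary key.
        row = self.db.execute("SELECT doc FROM kv WHERE k = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else None

    def query(self, sql, args=()):
        # SQL path: complex queries over the same data written via put().
        return self.db.execute(sql, args).fetchall()

store = KVStore()
store.put("u1", {"name": "Ada", "city": "Melbourne"})
store.put("u2", {"name": "Sam", "city": "Melbourne"})
store.put("u3", {"name": "Kim", "city": "Sydney"})
# store.get("u1") returns the original document;
# store.query("SELECT city, COUNT(*) FROM kv GROUP BY city") aggregates across all of them.
```

Projection (storing selected fields in queryable columns alongside the opaque document) is what lets the two access paths coexist without duplicating the data into a second system.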
Webinar | How Clear Capital Delivers Always-on Appraisals on 122 Million Prop... | DataStax
In online residential and commercial real estate, even fractions of seconds in response times affect customer satisfaction and conversion to revenue. The need for continuous availability is paramount to deliver the levels of service customers demand from modern online applications.
Join David Prinzing, Enterprise Architect at Clear Capital, to discover why Clear Capital, a premium provider of real estate asset valuation and collateral risk assessment, chose DataStax as the database backbone for their ClearCollateral Platform. David will discuss how DataStax Enterprise, the world’s fastest, most scalable distributed database technology built on Apache Cassandra, ensures 100% uptime for over 122 million properties (90% of all the properties in the United States) and supports reporting on over 1 billion total valuations while never going down.
- How the challenges of building real-time applications with relational technologies are forcing financial services firms to migrate to distributed database technologies
- How Clear Capital delivers 100% availability and real-time decision support across multiple data centers in the Amazon Cloud using DataStax Enterprise
- Why Apache Cassandra’s architecture delivers always-on, customer engaging applications that capture new business opportunities
Cortana Analytics Workshop: The "Big Data" of the Cortana Analytics Suite, Pa... | MSAdvAnalytics
Lance Olson. Cortana Analytics is a fully managed big data and advanced analytics suite that helps you transform your data into intelligent action. Come to this two-part session to learn how you can do "big data" processing and storage in Cortana Analytics. In the first part, we will provide an overview of the processing and storage services. We will then talk about the patterns and use cases which make up most big data solutions. In the second part, we will go hands-on, showing you how to get started today with writing batch/interactive queries, real-time stream processing, or NoSQL transactions, all over the same repository of data. Crunch petabytes of data by scaling out your computation power to any sized cluster. Store any amount of unstructured data in its native format with no limits to file or account size. All of this can be done with no hardware to acquire or maintain and minimal time to set up, giving you the value of "big data" within minutes. Go to https://channel9.msdn.com/ to find the recording of this session.
Business Intelligence basics, emphasizing the advantages and requirements for SMEs of adopting the appropriate BI tool(s) and the rationale for doing so.
Hybrid Analytics in Healthcare: Leveraging Power BI and Office 365 to Make Sm... | Perficient, Inc.
As organizations realize the cost savings and scalability benefits of hybrid environments, the focus turns to implementing an analytics platform in these new environments. On-premises vs. cloud is the big choice, but how are some companies leveraging the best of both worlds?
Perficient and UnityPoint Health discussed the benefits of Power BI and Office 365, and how one technology-savvy healthcare provider is leveraging its hybrid environment of Power BI, Excel-enabled dashboards and SharePoint 2013.
Marketers have more data available than ever but struggle to pull it together in a usable format. Customer Data Platforms (CDPs) promise to solve this problem by offering easy-to-deploy systems specializing in data unification and sharing. But can a CDP really deliver on this promise? This workshop will equip you to understand the definition of a CDP, how CDPs differ from other systems, which features are shared by all CDPs and which are found in only some, the most important CDP use cases, how to select the right CDP, how to manage a successful deployment, and where to look next for more information.
The Business Value of Business Intelligence | Senturus
Learn about various BI architectures and approaches, as well as a comparison of different vendors’ BI offerings. See a demonstration of OLAP cube building. View the video recording and download this deck: http://www.senturus.com/resources/the-business-value-of-business-intelligence/
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
Presentation from Smart ERP Solutions conveying the benefits of embedded analytic dashboards in PeopleSoft applications compared to offline static reports. Many organizations are hampered by continued use of inefficient static reports to run their business. Covered is how to easily advance beyond legacy static reports to more visual and interactive methods of accessing information to improve decision making. Covers a solution that embeds rich dynamic dashboards providing key analytics directly into your PeopleSoft applications, not in some separate costly reporting warehouse. Sample dashboards embedded into PeopleSoft applications include: Employee Onboarding; Employee Survey; Grants Distribution; Workflow Approval; Sales Review; and Tax Reconciliation.
Swinburne University of Technology - Shunde Zhang & Kieran Spear, AptiraOpenStack
We recently teamed up with our good friends at SUSE to build a very high-performing and scalable storage landscape for the Swinburne University of Technology at a fraction of the cost of traditional storage systems.
The challenge the IT team at Swinburne faced was how to accommodate the ever-growing need for performance and capacity while sticking to a tight budget. With SUSE Enterprise Storage we have been able to deploy a compelling and affordable solution to support Swinburne’s storage needs. Their new storage platform is fast and flexible, meaning the IT team at Swinburne can support the needs of researchers more effectively.
https://aptira.com/swinburne-university-technology/
Related OSS Projects - Peter Rowe, Flexera SoftwareOpenStack
Audience Level
Intermediate
Synopsis
Today’s fast-paced development environment has changed the compliance landscape. Many software projects consist of more than 50% Open Source Software (OSS) components, but as much as 99% are undocumented, increasing the complexities of managing your company’s software compliance process.
Of particular concern is “Zombie software”, or software that is outdated and contains vulnerable versions of certain components. Zombies can live in your code forever if you’re not aware of them. The acceleration of modern development lifecycles and the breakdown of an undocumented software supply chain have opened up new pathways for zombies to enter your software – leaving you exposed to security threats.
This presentation discusses best practices for implementing an Open Source Software management strategy that covers common pitfalls and commercial licence issues as well as the optimal way to track and eliminate the risks associated with Zombies!
Speaker Bio:
Involved in and around IT development for over 20 years, starting as a web developer using NotePad in 1995 when the most exciting thing online was Sun’s animated Java coffee cup, through Numega Pre-Sales selling BoundsChecker and now into the brave, new World of Open Source and software composition analysis.
Supercomputing by API: Connecting Modern Web Apps to HPCOpenStack
Audience Level
Intermediate
Synopsis
The traditional user experience for High Performance Computing (HPC) centers around the command line, and the intricacies of the underlying hardware. At the same time, scientific software is moving towards the cloud, leveraging modern web-based frameworks, allowing rapid iteration, and a renewed focus on portability and reproducibility. This software still has need for the huge scale and specialist capabilities of HPC, but leveraging these resources is hampered by variation in implementation between facilities. Differences in software stack, scheduling systems and authentication all get in the way of developers who would rather focus on the research problem at hand. This presentation reviews efforts to overcome these barriers. We will cover container technologies, frameworks for programmatic HPC access, and RESTful APIs that can deliver this as a hosted solution.
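As a hedged sketch of what "programmatic HPC access" can look like from the developer side, the snippet below assembles the JSON body for a hypothetical REST job-submission endpoint. The field names, queue name, and endpoint shape are illustrative assumptions for this sketch, not any facility's actual API.

```python
import json

# Hypothetical payload for submitting an HPC job through a RESTful gateway.
# Field names below are illustrative, not a real facility's API.
def build_job_request(script, nodes, walltime_minutes, queue="general"):
    """Assemble a JSON-serialisable job-submission request for a REST-fronted scheduler."""
    return {
        "job": {
            "script": script,  # batch script to run on the cluster
            "resources": {
                "nodes": nodes,
                # convert minutes to an HH:MM:SS walltime string
                "walltime": f"{walltime_minutes // 60:02d}:{walltime_minutes % 60:02d}:00",
            },
            "queue": queue,
        }
    }

payload = build_job_request("#!/bin/bash\nsrun ./simulate", nodes=4, walltime_minutes=90)
print(json.dumps(payload["job"]["resources"]))  # {"nodes": 4, "walltime": "01:30:00"}
```

The point of such a wrapper is that the scheduler and software-stack differences between facilities hide behind one payload shape, which is exactly the portability problem the talk describes.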
Speaker Bio
Dr. David Perry is Compute Integration Specialist at The University of Melbourne, working to increase research productivity using cloud and HPC. David chairs Australia’s first community-owned wind farm, Hepburn Wind, and is co-founder/CTO of BoomPower, delivering simpler solar and battery purchasing decisions for consumers and NGOs.
Federation and Interoperability in the Nectar Research CloudOpenStack
Audience Level
Beginner
Synopsis
The Nectar Research Cloud provides an OpenStack cloud for Australia’s academic researchers. Since its inception in 2012 it has grown steadily to over 30,000 CPUs, with over 10,000 registered users from more than 50 research institutions. It is different to many clouds in being a federation across eight organisations, each of which runs cloud infrastructure in one or more data centres and contributes to a distributed help desk and user support. A Nectar core services team runs centralised cloud services. This presentation will give an overview of the experiences, challenges and benefits of running a federated OpenStack cloud and a short demonstration on using the Nectar cloud. We will also describe some current approaches that are looking to extend this federation to encompass other institutions including some in New Zealand, to extend the infrastructure using commercial cloud providers, and to move towards interoperability with the growing number of international science and research clouds through the new Open Research Cloud initiative.
Speaker Bio
Dr Paul Coddington is a Deputy Director of Nectar, responsible for the Nectar national Research Cloud, and also Deputy Director of eResearch SA. He has over 30 years experience in eResearch including computational science, high performance and distributed computing, cloud computing, software development, and research data management.
Audience Level
Intermediate
Synopsis
In this presentation, Shunde will show you how to simplify the migration process with a workload migration engine, making the move to OpenStack easy. This talk will address the various difficulties operators and administrators face when migrating workloads and resources between various cloud platforms, including removing time consuming, repetitive and complicated steps.
This tool can be applied to many cloud migrations, including between Virtual Machines and OpenStack, between Public and Private clouds, as well as between OpenStack and OpenStack. This tool integrates completely with other OpenStack projects minimising deployment and maintenance efforts. So whether you’re looking to upgrade from your existing traditional virtualisation platform, setup a new OpenStack instance, or upgrade to a newer version of OpenStack, we will show you how to simplify this process using GUTS.
Speaker Bio
Shunde is a senior software developer in Aptira with over 15 years experience in software development, automation and system administration. He has worked with OpenStack since the Diablo cycle and has been involved in projects from OpenStack infrastructure to distributed systems running on top of OpenStack.
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red HatOpenStack
Audience Level
Intermediate
Synopsis
Hyperconverged Compute, Network and Storage is ready for production workloads – where it makes sense.
Whether you’re a telecommunications carrier, service provider or enterprise; implementing Network Function Virtualisation (NFV), focusing on specific known workloads or simply running a dev/test cloud – deploying a hyperconverged OpenStack cloud makes a lot of sense.
Come along and discover which workloads fit a hyperconverged architecture, see examples and look into the very near future and learn how OpenStack is truly ready to serve your every need.
Speaker Bio
Andrew has over 20 years experience in the IT industry across APAC, specialising in Databases, Directory Systems, Groupware, Virtualisation and Storage for Enterprise and Government organisations. When not helping customers slash costs and increase agility by moving to the software-defined future, he’s enjoying the subtle tones of Islay Whisky and shredding pow pow on the world’s best snowboard resorts.
Migrating your infrastructure to OpenStack - Avi Miller, OracleOpenStack
Audience Level
Beginner
Synopsis
Migrating is never simple, but migrating from a traditional infrastructure to a private cloud infrastructure adds a whole new layer of complexity and raises a number of questions for IT decision makers. Come learn first hand how to begin to migrate your traditional infrastructure management tools and processes to OpenStack.
This session will provide details on common questions and answers to help administrators avoid costly mistakes. Learn what to look out for, what to avoid, how to identify risks and how to mitigate them.
Speaker Bio:
Avi is an accomplished technical product manager with extensive experience across the operating system, virtualisation and application stacks.
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...OpenStack
Audience Level
All levels
Synopsis
Oftentimes, prospects and existing OpenStack users wonder if there is indeed a strong business and technical value proposition for cloud platforms serving a specific industry vertical.
In this session, EasyStack would like to share with the participants our experience engaging with an industry leader to build a credible solution platform catering to their current and future business and technology roadmap.
Speaker Bio:
Adrian is Director of Global Business Development at EasyStack and has 20 years of working experience in leading tech companies in the IT industry.
Prior to joining EasyStack, Adrian was with IBM Singapore and IBM China and served in roles such as Offering Manager, Engagement Manager, Solution Architect, Services Consultant, IT Specialist.
Adrian holds a Bachelor of Science degree with honors in Computer Sciences from University of Texas at Austin.
Enabling OpenStack for Enterprise - Tarso Dos Santos, VeritasOpenStack
Audience Level
All levels
Synopsis
OpenStack offers many advantages for organisations building out their cloud environments, including flexibility and community-driven innovation. However, enterprises looking to deploy OpenStack in production typically find its storage management capabilities wanting from the perspective of management complexity and business resiliency. Enterprises are also challenged when it comes to ensuring protection of their data and providing the necessary performance – especially for their tier one applications. Meeting these fundamental needs is critical for enterprises to proceed confidently with their OpenStack deployments.
Veritas HyperScale for OpenStack is a software-defined storage management solution uniquely developed for OpenStack based clouds. It leverages direct attached storage (DAS) and provides enterprise-strength capabilities that enable robust, production-scale deployment while meeting performance and data protection needs. Learn how this innovative solution, coupled with other relevant Veritas offerings, solve the remaining issues around implementing OpenStack within the enterprise.
Speaker Bio:
Tarso dos Santos works as a Technical Account Manager at Veritas, directly engaging with customers to develop strategies, architectures and solutions with focus on Cloud – Openstack, Containers, Data Protection, High Availability and Compliance.
He has over 21 years in the IT industry architecting, delivering and positioning solutions such as private clouds, distributed systems, HPC, storage, and highly available platforms.
Tarso has a great interest in distributed systems performance, and scientific organizations that push the boundaries of existing technologies, but also need to link these into the Enterprise.
Tarso has enjoyed working on some amazing projects, ranging from mission-critical systems protecting Australian lives to IT infrastructure projects that look to the sky and discover new planets.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSEOpenStack
Audience Level
Intermediate
Synopsis
Ceph – the most popular storage solution for OpenStack – stores all data as a collection of objects. This object store was originally implemented on top of a POSIX filesystem, an approach that turned out to have a number of problems, notably with performance and complexity.
BlueStore, a new storage backend for Ceph, was created to solve these issues; the Ceph Jewel release included an early prototype. The code and on-disk format were declared stable (but experimental) for Ceph Kraken, and now in the upcoming Ceph Luminous release, BlueStore will be the recommended default storage backend.
With a 2-3x performance boost, you’ll want to look at migrating your Ceph clusters to BlueStore. This talk goes into detail about what BlueStore does, the problems it solves, and what you need to do to use it.
Speaker Bio:
Tim works for SUSE, hacking on Ceph and related technologies. He has spoken often about distributed storage and high availability at conferences such as linux.conf.au. In his spare time he wrangles pigs, chickens, sheep and ducks, and was declared by one colleague “teammate most likely to survive the zombie apocalypse”.
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus NetworksOpenStack
Audience Level
Beginner
Synopsis
Layer 2 versus Layer 3, MLAG, Spanning-Tree, switch mechanism drivers, overlays and routing-on-the-host — What scales and what does not? The underlying plumbing of an OpenStack network is something you’d rather not have to think about. This presentation examines the network architectures of web-scale and large enterprise OpenStack users and how those same efficiencies can be used in deployments of all sizes.
Speaker Bio:
Scott is a Member of Technical Staff at Cumulus Networks where he designs, supports and deploys web-scale technologies and architectures in enterprise networks globally. Prior to becoming a founding member of the Cumulus office in Australia, Scott started his career as a network administrator before joining Cisco Systems to support their data centre products.
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...OpenStack
Audience Level
Intermediate
Synopsis
Data Analytics is a hot topic today for most organisations as they race to convert vast amounts of data into useful information that can be leveraged to make critical decisions or recommendations in a very limited time window.
Today, there is a widely acknowledged talent gap when it comes to creating and managing Hadoop clusters. Even for experts, it can take hours (or days) to get a fully functional Hadoop farm up and running.
On top of that, it can be difficult to find Java programmers with enough experience to be productive with MapReduce.
OpenStack Sahara aims to address most of these challenges by facilitating the deployment of Hadoop clusters and providing a set of APIs for data processing tasks.
This session will provide insight into OpenStack Sahara’s capabilities and how end users can leverage them.
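To make the "set of APIs" concrete, here is a minimal sketch of the JSON body a Sahara cluster-create request takes. The field names follow Sahara's v1.1 REST API; the plugin/version choice and every ID below are illustrative placeholders, not values from the talk.

```python
# A minimal sketch of the JSON body for Sahara's cluster-create call.
# Field names follow the Sahara v1.1 API; all IDs here are placeholders.
def vanilla_cluster_request(name, template_id, image_id, keypair):
    return {
        "name": name,
        "plugin_name": "vanilla",        # the upstream Apache Hadoop plugin
        "hadoop_version": "2.7.1",
        "cluster_template_id": template_id,
        "default_image_id": image_id,
        "user_keypair_id": keypair,
    }

body = vanilla_cluster_request("analytics-01", "tmpl-uuid", "img-uuid", "mykey")
# This body would be POSTed to /v1.1/{project_id}/clusters with a valid
# auth token, e.g. via requests.post(url, json=body,
# headers={"X-Auth-Token": token}) — omitted here since it needs a live cloud.
```

The appeal is that the same few fields stand in for what would otherwise be hours of manual Hadoop node provisioning and configuration.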
Speaker Bio:
Alex has been working with Open Source enterprise technologies for the better part of his 15 years IT career in companies like Hewlett-Packard Enterprise, Red Hat, IBM and Sun Microsystems.
He started his OpenStack journey with Grizzly, delivering the first HPC cloud in APAC for a Singapore university, making use of SR-IOV technologies combined with big data. He has extensive deployment experience in configuration management and automation of OpenStack-based private clouds.
Alex is currently an APJ Cloud Consultant in the Helion Cloud team at Hewlett-Packard Enterprise, where he evangelizes the OpenSource side of the Helion portfolio (OpenStack / Docker / Ceph).
He enjoys running automation workshops and seminars in the APJ region for cloud adopters.
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...OpenStack
Audience Level
Intermediate
Synopsis
M3 is the latest generation system of the MASSIVE project, an HPC facility specializing in characterization science (imaging and visualization). Using OpenStack as the compute provisioning layer, M3 is a hybrid HPC/cloud system, custom-integrated by Monash’s R@CMon Research Cloud team. Built to support Monash University’s next-gen high-throughput instrument processing requirements, M3 is split roughly half-and-half between GPU-accelerated and CPU-only nodes.
We’ll discuss the design and tech used to build this innovative platform as well as detailing approaches and challenges to building GPU-enabled and HPC clouds. We’ll also discuss some of the software and processing pipelines that this system supports and highlight the importance of tuning for these workloads.
Speaker Bio
Blair Bethwaite: Blair has worked in distributed computing at Monash University for 10 years, with OpenStack for half of that. Having served as team lead, architect, administrator, user, researcher, and occasional hacker, Blair’s unique perspective as a science power-user, developer, and system architect has helped guide the evolution of the research computing engine central to Monash’s 21st Century Microscope.
Lance Wilson: Lance is a mechanical engineer, who has been making tools to break things for the last 20 years. His career has moved through a number of engineering subdisciplines from manufacturing to bioengineering. Now he supports the national characterisation research community in Melbourne, Australia using OpenStack to create HPC systems solving problems too large for your laptop.
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...OpenStack
Audience Level
All levels
Synopsis
Peter has been involved in the OpenStack community since its B-release, and he has been enabling and helping customers across various industries adopt OpenStack in strategic ways. In this session, you will learn from his experience what Red Hat’s perspective is on the current state of affairs in the OpenStack community and the path ahead that Red Hat is putting its efforts into. OpenStack is not a product that tries to solve any one business problem in particular, but a technology that aims to be usable by many – so what are the required steps to make sure that your organisation is ready for OpenStack-based cloudification and transformation?
Speaker Bio:
Peter Jung is a Senior Business Development Manager at Red Hat where he leads the practice in the areas of Cloud, SDN/NFV and IoT across Australia and New Zealand. He is passionate about open innovation and open source software development model as the foundation for next generation society and ICT systems. Prior to Red Hat, he had various roles at Cisco and Dell for 15 years. He holds a BSEE and an MBA.
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...OpenStack
Audience Level
Intermediate
Synopsis
The latest SDN revolution is centered on creating efficient virtualized data center networks using VXLAN & EVPN. We will talk about the scale, performance, and cost advantages of using a modern controller-free virtualized network solution built on 100 Gigabit Ethernet switches with hardware-based VXLAN routing. We will explore the ease of automating such a network in an OpenStack environment and take you through a real-world use case of using an OpenStack Network Node to bridge between a bare metal cloud (EVPN) and a fully virtualized cloud environment (orchestrated by Neutron).
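As a rough illustration of what "controller-free" means on the switch side, a minimal FRR/Cumulus-style BGP EVPN stanza might resemble the following. The ASN, peer-group name, and interface names are placeholders for this sketch, not configuration from the talk:

```
router bgp 65101
 neighbor spine peer-group
 neighbor swp51 interface peer-group spine
 neighbor swp52 interface peer-group spine
 address-family l2vpn evpn
  neighbor spine activate
  advertise-all-vni
```

With EVPN advertising VXLAN reachability over ordinary BGP like this, no central SDN controller is needed to stitch the overlay together.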
Speaker Bio:
David has held leadership roles at 3COM, Cisco Systems, Nortel Networks, and IBM where he promoted advanced network technologies including High Speed Ethernet, Layer 4-7 switching, Virtual Machine-aware networking, and Software Defined Networking.
David’s current focus is on the evolving landscape of data center networking, scale out storage, Open Networking, and cloud computing.
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...OpenStack
Audience Level
Intermediate
Synopsis
High performance computing and cloud computing have traditionally been seen as separate solutions to separate problems, dealing with issues of performance and flexibility respectively. In a diverse research environment however, both sets of compute requirements can occur. In addition to the administrative benefits in combining both requirements into a single unified system, opportunities are provided for incremental expansion.
The deployment of the Spartan cloud-HPC hybrid system at the University of Melbourne last year is an example of such a design. Despite its small size, it has attracted international attention due to its design features. This presentation, in addition to providing a grounding on why one would wish to build an HPC-cloud hybrid system and the results of the deployment, provides a complete technical overview of the design from the ground up, as well as problems encountered and planned future developments.
Speaker Bio
Lev Lafayette is the HPC and Training Officer at the University of Melbourne. Prior to that he worked at the Victorian Partnership for Advanced Computing for several years in a similar role.
Traditional Enterprise to OpenStack Cloud - An Unexpected JourneyOpenStack
Audience Level
Intermediate
Synopsis
Hostworks is part of the Inabox group, and an Australian-based Managed Hosting and Services Provider with a strong background in management of infrastructure, servers, and applications following Traditional Enterprise management practices. Two years ago, we began our transition to adopt and deploy cloud technologies and mindsets with an On-Premises OpenStack platform as part of a larger hybrid cloud offering. This presentation details how we did it, what went wrong and what went right, plus important lessons and recommendations for other businesses who wish to follow the same path.
Speaker Bio:
Daniel is Platform Engineer at Hostworks and specialises in their On-Premises OpenStack cloud offering.
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash UniversityOpenStack
Audience Level
Intermediate
Synopsis
M3 is the latest generation system of the MASSIVE project, an HPC facility specializing in characterization science (imaging and visualization). Using OpenStack as the compute provisioning layer, M3 is a hybrid HPC/cloud system, custom-integrated by Monash’s R@CMon Research Cloud team. Built to support Monash University’s next-gen high-throughput instrument processing requirements, M3 is split roughly half-and-half between GPU-accelerated and CPU-only nodes.
We’ll discuss the design and tech used to build this innovative platform as well as detailing approaches and challenges to building GPU-enabled and HPC clouds. We’ll also discuss some of the software and processing pipelines that this system supports and highlight the importance of tuning for these workloads.
Speaker Bio
Blair Bethwaite: Blair has worked in distributed computing at Monash University for 10 years, with OpenStack for half of that. Having served as team lead, architect, administrator, user, researcher, and occasional hacker, Blair’s unique perspective as a science power-user, developer, and system architect has helped guide the evolution of the research computing engine central to Monash’s 21st Century Microscope.
Lance Wilson: Lance is a mechanical engineer, who has been making tools to break things for the last 20 years. His career has moved through a number of engineering subdisciplines from manufacturing to bioengineering. Now he supports the national characterisation research community in Melbourne, Australia using OpenStack to create HPC systems solving problems too large for your laptop.
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...OpenStack
Audience Level
Intermediate
Synopsis
We will discuss how we do monitoring on the Nectar Research Cloud, utilising tools like OpenStack Tempest and Nagios, and how we translate the results into a user-facing dashboard.
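As a toy illustration of the "translate check results into a dashboard" step, the sketch below rolls individual check results (such as those a Tempest run or Nagios plugin might emit) up into one status per service. The service names and the roll-up rule are invented for this sketch, not Nectar's actual implementation.

```python
# Toy roll-up: per-check pass/fail results -> one status per service,
# as a user-facing dashboard might display them.
def dashboard_status(checks):
    """checks: iterable of (service, passed) tuples -> {service: 'up' | 'degraded'}"""
    summary = {}
    for service, passed in checks:
        summary.setdefault(service, []).append(passed)
    # a service is 'up' only if every one of its checks passed
    return {svc: "up" if all(results) else "degraded" for svc, results in summary.items()}

status = dashboard_status([
    ("nova", True), ("nova", True),
    ("cinder", True), ("cinder", False),  # one failed volume check
])
print(status)  # {'nova': 'up', 'cinder': 'degraded'}
```

A real pipeline would of course carry more detail (timestamps, per-site breakdowns, flapping suppression), but the core translation is this kind of aggregation.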
Speaker Bio:
Andy is a DevOps engineer working at the University of Melbourne in the Core Services team for the Nectar Research Cloud.
Containers and OpenStack: Marc Van Hoof, Kumulus: Containers and OpenStackOpenStack
Containers and OpenStack
Audience: Intermediate
Topic: Infrastructure
Abstract: Containers are the new darling of the development world, and many are calling for an end of the IaaS world. But there are still key reasons that IaaS is important even as Container based development becomes the desired path for the development community. We will review containers in the context of their growth in popularity, and look at how OpenStack both continues to support and enable Container solutions, and the latest developments in OpenStack as a containerized solution directly.
Speaker Bio: Marc Van Hoof, Kumulus
Marc van Hoof has been in the technology industry for over 20 years, focused on developing, deploying, and scaling internet applications. He was part of a team that built the first internet data centre in Australia, has worked on some of the largest online real-time events, and advises companies on how to take advantage of the true benefits of migrating to the cloud.
OpenStack Australia Day Government - Canberra 2016
https://events.aptira.com/openstack-australia-day-canberra-2016/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes hard work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
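To make "power flow" concrete before the workshop, here is a toy DC power flow on a hypothetical 3-bus network in plain Python. The bus/line data and the DC approximation are assumptions for illustration only; this is not PowSyBl code, whose Java and Python APIs provide full AC/DC solvers:

```python
# Toy DC power flow (hypothetical 3-bus network, not PowSyBl's API).
# DC approximation: line flow f_ij = b_ij * (theta_i - theta_j), and at
# each non-slack bus the flows leaving it sum to its net injection P.

# Line susceptances in per unit: (from_bus, to_bus) -> b
lines = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}

# Net injections at non-slack buses (per unit); bus 0 is the slack bus.
P = {1: 1.0, 2: -1.5}

# Build the reduced susceptance matrix B over the non-slack buses:
# B[i][i] = sum of susceptances of lines touching i, B[i][j] = -b_ij.
buses = [1, 2]
B = [[0.0, 0.0], [0.0, 0.0]]
for (i, j), b in lines.items():
    for row, u in enumerate(buses):
        if u in (i, j):
            B[row][row] += b
            other = j if u == i else i
            if other in buses:
                B[row][buses.index(other)] -= b

# Solve the 2x2 system B * theta = P by Cramer's rule.
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
p = [P[1], P[2]]
theta = {
    0: 0.0,  # slack bus is the angle reference
    1: (p[0] * B[1][1] - B[0][1] * p[1]) / det,
    2: (B[0][0] * p[1] - p[0] * B[1][0]) / det,
}

# Resulting line flows in per unit.
flows = {(i, j): b * (theta[i] - theta[j]) for (i, j), b in lines.items()}
```

Solving the same kind of system, at realistic network scale and with full AC physics, is what PowSyBl's power flow tools do for you.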
2. Downstream systems
• Specialised management systems
• Reporting Systems
• Product management

Channel & product systems

Master Data Management

Hadoop
• Leverage all data & reduce integration costs
• Comprehensive dataset – internal & external, realtime & batch, structured & unstructured
• Advanced analytics / machine learning

Group Data Warehouse
• Understand our business
• Accurate, conformed, and reconciled data
• Access layer to support BI & reporting

BI/Reporting
• User facing tools
• Regulatory reporting
• Dashboarding
• Self service BI for the masses

Data flows between systems: customer record & insights; all data; price, conversation, credit decisions, etc.; financial data; subset of data; user access; information for people

Core Financial Systems and functions
• P&L
• Recon
• General Ledger
• Etc…

Decisioning (closed loop, automated ‘decisions’)
• Personalise/optimise decisions, maximise customer value
• E.g. price, credit decision, next conversation, experience

Legend: core information repositories; analytics applications; other systems