In the PowerPoint presentation about Azure Synapse, we begin by introducing Azure Synapse as an integrated analytics service, emphasizing its role in unifying big data and data warehousing. Key features such as limitless data processing, querying of both relational and non-relational data, and integration with AI and BI capabilities are highlighted. The presentation delves into the architecture of Azure Synapse, illustrating how it interconnects with Azure Data Lake, Power BI, and Azure Machine Learning. We explore its robust data integration capabilities, including Azure Synapse Pipelines for efficient ETL processes. The discussion then moves to its strengths in analytics and big data processing, supporting languages such as T-SQL, Python, and Scala. The integration of Azure Synapse with AI and machine learning is underscored, showcasing its application in predictive analytics. Security features form a crucial part of the talk, emphasizing data protection and compliance. Real-world use cases demonstrate Azure Synapse's practical applications in business settings. A comparative analysis with other data platforms highlights Synapse's unique benefits. The presentation concludes with guidance on getting started with Azure Synapse, followed by a summary, inviting audience questions and providing contact information for further engagement.
A fast, distributed NoSQL and relational database at any scale. Key features covered include partitioning and indexing, data movement, the change feed, integration (Azure Functions and Azure Search), consistency models, and replication with multi-region writes.
Presentation given as part of the Global Azure Bootcamp 2017, April 22, 2017. Subject: a one-day hands-on workshop about the Cortana Intelligence Suite.
Going Serverless - an Introduction to AWS Glue (Michael Rainey)
Going "serverless" is the latest technology trend for enterprises moving their processing to the cloud, including data integration and ETL tools. But what does that mean, and when should you use serverless ETL? In this session, we'll dive into the world of Amazon's fully managed data processing service, AWS Glue. With no servers to provision, no resources to allocate, and an easy-to-populate metadata catalog, AWS Glue lets data engineers focus on their craft: building data transformations and pipelines. Understanding the similarities and differences between traditional ETL tools, such as Oracle Data Integrator, and Glue will prepare attendees for the new world of data integration. Presented at Collaborate 18.
You don't need to bring IoT into the picture to imagine how useful it can be to query data while it is flowing towards the database, and not only afterwards. It opens up a world of possibilities for real-time alerting and monitoring, which is clearly the most immediate use, but you can also think of things like real-time dashboarding and solutions that adjust product prices and offers in real time. In this session we will see how to use Azure Stream Analytics and its SQL-like language to analyze streaming data, and start getting familiar with an approach that is increasingly popular and increasingly in demand, both in the IoT world and beyond.
AWS Česko-Slovenský Webinár 03: Vývoj v AWS (Vladimir Simek)
Amazon Web Services provides a highly reliable, scalable, low-cost cloud platform used by hundreds of thousands of companies in 190 countries around the world. Startups, small and medium-sized businesses, large enterprises, and public-sector customers have access to building blocks for rapid application development in response to changing business requirements. Whether you want to build web or mobile applications, on classic servers or in containers, AWS puts many tools into developers' hands that help them build and deploy applications simply, quickly, and at low cost.
Building and deploying an analytics service in the cloud is a challenge. A bigger challenge is maintaining the service. In a world where users are gravitating towards a model in which cluster instances are provisioned on the fly, used for analytics or other purposes, and then shut down when the jobs are done, containers and container orchestration are more relevant than ever. In short, customers are looking for serverless Spark clusters. The intent of this presentation is to explain what serverless Spark is and the benefits of running Spark in a serverless manner.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics in a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
Accelerating Business Intelligence Solutions with Microsoft Azure (Jason Strate)
Business Intelligence (BI) solutions need to move at the speed of business. Unfortunately, roadblocks related to the availability of resources and deployment often present an issue. What if you could accelerate the deployment of an entire BI infrastructure to just a couple of hours and start loading data into it by the end of the day? In this session, we'll demonstrate how to leverage Microsoft tools and the Azure cloud environment to build out a BI solution and begin providing analytics to your team with tools such as Power BI. By the end of the session, you'll understand the capabilities of Azure and how you can start building an end-to-end BI proof of concept today.
Amazon Redshift is a hosted data warehouse product that is part of the larger Amazon Web Services cloud computing platform. It is built on top of technology from the massively parallel processing (MPP) data warehouse company ParAccel.
Hello All,
It is time for the second Tokyo Azure Meetup!
As a natural continuation of our first topic, we will proceed with Big Data.
Until recently you needed to learn a new language or master new concepts in order to get started with Big Data.
Moreover, you needed to spend a lot of time setting up infrastructure that could meet the business demands for Big Data processing.
Not any more!
If you know C# and T-SQL, you are ready to become a Big Data master!
The public cloud, and especially Microsoft Azure, is very well suited to working with Big Data.
Join us for our next event, and I can assure you that after the session you will be ready to start working with Big Data.
And maybe you are asking why this is important.
I believe we have no choice but to build smart applications and extract as many insights as possible from the data we collect from various sources, in order to make the best business decisions and please our customers.
Today we have so much data available publicly or coming from our customers, and it is very challenging to process it and turn it into a valuable business asset.
Not any more!
Join us for our next meetup and you will see how Microsoft creates an amazing opportunity for every .NET developer to become a Big Data expert, and for every company to start using Big Data to accelerate its growth.
I have been working closely with the product team developing the U-SQL language that powers Azure Data Lake Analytics, one of the processing engines for Azure Data Lake, and I will be very happy to share my experience with you!
See you very soon!
Kanio
4Developers 2018: Przetwarzanie Big Data w oparciu o architekturę Lambda na p... (PROIDEA)
According to estimates, by 2020 we will have generated 40 zettabytes of data, and by 2025 as much as 163 zettabytes of various kinds; careful analysis of this data will allow us to discover new phenomena, optimize processes, and support decision-making. To process such large data sets efficiently, we need new data-analysis techniques and innovative technological solutions. The Azure cloud plays an important role here, offering a range of services with which we can build Big Data processing solutions both in batch mode and in near real time. During the session we will build a sample Big Data processing solution based on the Lambda architecture, using Azure services such as Azure Data Factory, Azure Stream Analytics, Azure HDInsight, Azure Event (IoT) Hub, and Azure Data Lake.
In this session, Sergio covered the Lakehouse concept and how companies implement it, from data ingestion to insight. He showed how you can use Azure Data Services to speed up your analytics project, from ingesting and modelling data to delivering insights to end users.
Explore the cutting-edge of AI and search technology with Udaiappa Ramachandran (Udai), CTO/CSO of Akumina Inc. and Microsoft Azure MVP, in his presentation 'RAG Patterns and Vector Search in Generative AI'. This comprehensive overview covers the essentials of Keyword and Vector Search, highlighting their strengths and limitations. Udai brilliantly introduces Hybrid Search, combining the best of both worlds for enhanced accuracy and relevance. Real-world applications in companies like Amazon, Google, and Netflix illustrate the practical implications of these technologies. The presentation also delves into the mechanics of cosine similarity and explores various vector databases, providing a well-rounded understanding of current AI search technologies. Ideal for professionals and enthusiasts in the AI and search technology fields, this presentation offers a glimpse into the future of intelligent search solutions.
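The mechanics of cosine similarity mentioned above fit in a few lines: it is the dot product of two embedding vectors divided by the product of their lengths, so identical directions score 1 and orthogonal directions score 0. A minimal, self-contained sketch (the example vectors are invented for illustration, not taken from the talk):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors:
    dot(a, b) / (|a| * |b|), ranging from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Parallel vectors score ~1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ~1.0 (up to float error)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Real vector databases compute the same quantity, just over high-dimensional embedding vectors and with heavy indexing so the comparison scales.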
In "Level Up Your Security Using Intune," Udaiappa Ramachandran, an expert in cloud technologies, presents a detailed guide on using Microsoft Intune for enhancing mobile application and device security. The presentation covers two main integration strategies: the Intune SDK, which provides fine-grained control, customization, and long-term maintainability, and the Intune App Wrapper, suitable for legacy apps and rapid prototyping with some feature limitations. Udaiappa's talk, aimed at modern developers, emphasizes the importance of robust mobile security and showcases Intune's capabilities in managing both corporate-owned devices and BYOD scenarios, underlining its critical role in contemporary digital security management.
Semantic Kernel, an open-source SDK, streamlines the integration and orchestration of AI models, supporting a diverse range of languages like C#, Python, and Java. It offers a suite of tools for AI application development, including specialized plugins for extending functionalities and planners for automating complex workflows and improving efficiency. A key feature of Semantic Kernel is its focus on memory and context management, enhancing AI agent performance and understanding. The copilot feature stands out for its real-time user interaction capabilities and its seamless integration with existing systems. Aimed at facilitating the development of sophisticated AI-driven applications, Semantic Kernel provides comprehensive support for task automation, model integration, and responsible AI practices, backed by extensive documentation and community support on Microsoft's platforms and GitHub repositories.
The presentation "Semantic Kernel" covers the Semantic Kernel, an open-source Software Development Kit (SDK) for AI model integration and agent development. It discusses key concepts like plugins, planners, personas, and co-pilots in AI applications, emphasizing their roles in task automation and AI orchestration. The presentation highlights features such as prompt engineering, AI memory management, and embedding storage for enhanced AI performance. It also outlines steps for building AI agents using Semantic Kernel, integrating AI models, and managing memory and context. Additionally, the importance of real-time assistance and user feedback in enhancing AI interactions is discussed, along with supported languages for the Semantic Kernel SDK.
.NET 8 is poised to deliver significant advancements with features such as Primary Constructors for cleaner code, enhanced Garbage Collection for better memory management, and optimized JSON Serialization for efficient data handling. Performance is further bolstered by Fast Search, Dynamic Profile Guided Optimization (PGO), and Native AOT for faster runtime and startup. Time Abstraction offers refined time operations, while improved Cryptography and Compression with ZipFile support enhance security and data management. Immutable data structures are introduced with FrozenSet, and RegEx Code Generation promises more efficient pattern matching. Additionally, Redis Output Caching could enhance distributed caching mechanisms, Background Worker enhancements may improve asynchronous task execution, and Semantic Kernel suggests more intelligent code analysis capabilities. Collectively, these features aim to streamline development workflows and boost application performance in the .NET 8 framework.
Discover the power of Vector Search using OpenAI in Azure Cognitive Search through a comprehensive .NET application tutorial. This presentation will delve into the intricacies of integrating Azure OpenAI with your .NET applications, focusing specifically on the creation and utilization of vector embeddings. Learn how to effectively harness the capabilities of Azure OpenAI for generating precise vector embeddings, which are crucial for enhancing search functionalities in your applications. We will explore the concept of Hybrid search, demonstrating how it combines traditional keyword search with the advanced vector search to provide more relevant and context-aware results. This session is designed to equip developers with the knowledge and skills needed to implement state-of-the-art search capabilities in their .NET applications, leveraging the cutting-edge AI and machine learning technologies provided by Azure OpenAI.
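The hybrid search described above has to merge two differently scored result lists (keyword and vector); Azure Cognitive Search documents this merging step as Reciprocal Rank Fusion (RRF). A minimal sketch of RRF under that assumption (the document ids and rankings are invented for illustration):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several best-first ranked lists of document ids.
    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so documents ranked well in multiple lists rise to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc-a", "doc-b", "doc-c"]  # BM25-style keyword ranking
vector_hits = ["doc-c", "doc-a"]            # nearest-neighbour vector ranking
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# doc-a wins: it sits near the top of both lists.
```

Because RRF only uses rank positions, it sidesteps the problem that keyword scores and cosine similarities live on incompatible scales.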
Keyless access to Azure services using Azure AD authentication via Managed Identity, User-Assigned Managed Identity, or Service Principal. Samples include Cosmos DB, Azure Storage, Application Insights, and Key Vault.
Azure OpenAI Service provides REST API access to OpenAI's powerful language models, including the GPT-3, GPT-4, DALL-E, Codex, and Embeddings model series. These models can be easily adapted to any specific task, including but not limited to content generation, summarization, semantic search, translation, transformation, and code generation. Microsoft offers the accessibility of the service through REST APIs, Python or C# SDK, or the Azure OpenAI Studio.
ChatGPT (Chat Generative Pre-trained Transformer) is OpenAI's application that performs human-like interactions. GitHub Copilot uses the OpenAI Codex to suggest code and entire functions in real time, right from your editor. The deck contains more details about ChatGPT, AI, AGI, Copilot, the OpenAI API, and use-case scenarios.
.NET 7 is the latest version of .NET, released in November 2022. The .NET 7 ecosystem offers simpler development, high performance, and strong productivity.
Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications. Azure DevOps supports a collaborative culture and set of processes that bring together developers, project managers, and contributors to develop software. It allows organizations to create and improve products at a faster pace than they can with traditional software development approaches.
Azure Billing features are used to review your invoiced costs and manage access to billing information. In larger organizations, procurement and finance teams usually conduct billing tasks.
Billing is the process of invoicing customers for goods or services and managing the commercial relationship.
Cost Management shows organizational cost and usage patterns with advanced analytics. The Azure Portal lets you manage both billing and cost management for all your accounts.
.NET 6 is the latest version of .NET, released in November 2021. The .NET 6 ecosystem offers simpler development, high performance, and strong productivity.
Azure Automation delivers cloud-based automation, operating system updates, and configuration service that supports consistent management across your Azure and non-Azure environments. It includes process automation, configuration management, update management, shared capabilities, and heterogeneous features.
Azure Static Web Apps lets you develop modern full-stack web apps quickly and easily, with a static front end and a dynamic back end powered by serverless APIs, plus custom routing, security (including authentication/authorization), custom domains, and private endpoints. Azure Static Web Apps offers cost-effective pricing from hobby to production apps.
Azure Private Link provides private connectivity from a virtual network to Azure platform as a service (PaaS), customer-owned, or Microsoft partner services.
Azure Security Center provides security posture management and threat protection for your hybrid cloud workloads. Cloud Security Posture Management includes Policies, initiatives, recommendations, secure scores, and security controls. Cloud Workload Protection protects threats against servers, cloud-native workloads, databases, and storage security alerts and incidents.
Azure SignalR Service simplifies adding real-time web functionality to applications over HTTP. It eliminates the need for polling and provides high availability, resiliency, and disaster recovery.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra Bartaguiz)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPath Community)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
2. About me
• Udaiappa Ramachandran ( Udai )
• CTO/CSO-Akumina, Inc.
• Microsoft Azure MVP
• Cloud Expert
• Microsoft Azure, Amazon Web Services, and Google
• New Hampshire Cloud User Group (http://www.meetup.com/nashuaug)
• https://udai.io
3. Agenda
• Quick review on Azure Data Factory, Azure Databricks
• Azure Synapse Analytics
• Aggregating data from multiple data sources
• Exploring processed data
• Azure Synapse Security
• Demo…Demo…Demo…
4. Azure Data Factory
• Easy to use
• Wide range of connectors and features (90+)
• Powerful data integration capabilities (ingestion and transformation)
• GUI – Pipelines, data flows, power query
5. Azure Databricks
• Powerful data processing capabilities
• Machine learning and real-time analytics capabilities
• Managed service
• Notebooks
• Steeper learning curve
• Can be more expensive
7. Azure Synapse Analytics - Components
• Data Warehouse
• SQL Pool
• Dedicated
• Serverless
• Spark Pool
• Python, SQL and C#
• Big Data Engine
• Serverless Engine
• Data Flows
• Ecosystem – Power BI + Azure Machine Learning
8. What is Azure Synapse Analytics?
Source: https://learn.microsoft.com/en-us/azure/synapse-analytics/overview-what-is
9. Azure Synapse Analytics - Capabilities
• Unified analytics platform
• Serverless and dedicated options
• Enterprise data warehouse
• Data lake exploration
• Code-free hybrid data integration
• Deeply integrated Apache Spark and SQL engines
• Cloud-native HTAP
• Choice of language (T-SQL, Python, Scala, SparkSQL, and .NET)
• Integrated AI and BI
• Data Security
10. Synapse Analytics – SQL Pools
• Serverless SQL
• Query data from ADLS Gen2 directly
• Using T-SQL to query CSV, Parquet, JSON, etc.,
• No infrastructure needed
• Stand-alone polybase service
• Pay-per query model
• No charges for metadata queries (ex., select * from sys.objects)
• When to use?
• Quick ad-hoc queries
• Logical data warehouse
• Transform data in lake
• Dedicated SQL
• Provisioned Resource: Setup infrastructure in advance
• Massively Parallel Processing (MPP) Engine
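The serverless, pay-per-query model above can be sketched as a direct T-SQL query over files in the lake via OPENROWSET — a minimal sketch in which the storage account, container, and path are hypothetical placeholders:

```sql
-- Ad-hoc serverless query over Parquet files in ADLS Gen2,
-- with no infrastructure provisioned in advance.
-- (storage account, container, and folder are hypothetical placeholders)
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS [result];
```

The same pattern works for CSV and JSON files by changing the FORMAT and adding a schema via WITH, which is what makes the serverless pool useful for quick ad-hoc exploration before committing to a dedicated pool.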
14. Data Explorer Pool
• Unified experience
• Real-time insights
• Scalability
• Security
• High performance
• Real-time ingestion
• Time series analysis
• Machine learning
15. Data Explorer Pool
Source: https://learn.microsoft.com/en-us/azure/synapse-analytics/data-explorer/data-explorer-overview
16. When to use Azure Synapse Analytics?
• Large-scale data warehousing
• Advanced analytics
• Data exploration and discovery
• Real time analytics
• Data integration
• Integrated analytics
17. Synapse Analytics Vs. Synapse Private Hub
• Access – Azure Synapse Analytics: public access over the internet; Private Hub: private access over a private connection
• Security – Azure Synapse Analytics: data is encrypted at rest and in transit; Private Hub: data never leaves your network
• Compliance – Azure Synapse Analytics: complies with a variety of data regulations; Private Hub: can be used to comply with stricter data privacy regulations
• Use cases – Azure Synapse Analytics: general-purpose data analytics; Private Hub: secure access to Azure Synapse Analytics from an on-premises network or another virtual network
18. Azure Synapse – Use Case
• Propose a solution for ABC company to build real-time analytics using various data
sources such as Cosmos DB, Log Analytics, and SharePoint List Items. How can we
achieve this?
19. Demo
• Create Azure Synapse
• Walkthrough Azure Synapse properties
• Create Pools
• Run Samples
• Link Cosmos DB
• Create External table
• Data Explorer --Add Table and export data / Data explorer ingest data
• PowerBI
20. Azure Synapse – Use Case
• Aggregation
• Azure Cosmos DB – Synapse Link, then external view
• Azure Log Analytics Workspace – Continuous Export then Parquet transformer using Spark and
then external table
• SharePoint Lists – Continuous export then parquet transformer using spark and then external
table
• Presentation
• PowerBI – Direct Access
• HTML controls – DW Queries
• Cost
• SQL Server – Serverless/Dedicated
• Spark Nodes
• https://azure.com/e/6233ac854ace4eddb06d15b8b056df21
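The Cosmos DB aggregation path above (Synapse Link, then an external view) can be sketched with a serverless SQL query against the analytical store; the account, database, container name, and key below are hypothetical placeholders:

```sql
-- Query the Cosmos DB analytical store through Synapse Link
-- from a serverless SQL pool, without touching the transactional store.
-- (account, database, container, and key are hypothetical placeholders)
SELECT TOP 10 *
FROM OPENROWSET(
    'CosmosDB',
    'Account=myCosmosAccount;Database=mydb;Key=<account-key>',
    Orders
) AS [orders];
```

Wrapping this query in a view gives the "external view" used in the aggregation step, which Power BI can then read directly.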
23. Security on Azure Synapse
• Data-at-rest encryption using TDE (Transparent Data Encryption)
• In-transit (in motion) encryption using TLS
• Key Management
• Customer Managed
• Bring your own key (BYOK)
• Must be enabled when creating Azure Synapse
• TDE Protector (key to encrypt DEK)
• Data Masking – Dynamic and Static
• Row-Level and Column-Level Security
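As a sketch of the dynamic data masking bullet above, the following T-SQL adds masks to two columns; the table and column names are assumptions for illustration, not from the deck:

```sql
-- Add dynamic data masks to existing columns.
-- Non-privileged users (without UNMASK) see masked values on SELECT.
-- (dbo.Customer and its columns are hypothetical)
ALTER TABLE dbo.Customer
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

ALTER TABLE dbo.Customer
ALTER COLUMN CreditCard ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
```

Masking is applied at query time, so it can be layered with encryption, auditing, and row-level security as noted in the Editor's Notes.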
25. Thanks for your time and trust!
New Hampshire CLOUD .NET User Group
Editor's Notes
Azure SQL Data Warehouse – a cloud-based enterprise data warehouse (EDW) that uses massively parallel processing (MPP) to run complex queries across petabytes of data quickly.
Descriptive analytics, which answers the question “What is happening in my business?”. This question is typically answered through the creation of a data warehouse in which historical data is persisted in relational tables for multidimensional modeling and reporting.
Diagnostic analytics, which deals with answering the question “Why is it happening?”. This may involve exploring information that already exists in a data warehouse, but typically involves a wider search of your data estate to find more data to support this type of analysis.
Predictive analytics, which enables you to answer the question “What is likely to happen in the future based on previous trends and patterns?”
Prescriptive analytics, which enables autonomous decision making based on real-time or near real-time analysis of data, using predictive analytics.
Data Warehouse: The already popular Azure Data Warehouse technology for storing and managing data for analysis and decision making, now through SQL pools.
Big Data engine: With Spark pools, engineers can now run scalable analytics with Spark languages to do Big Data processing.
Serverless engine: Query Data Lakes directly using SQL statements in a simple way.
Data flows: To Develop ETL flows that consume or receive data in your Data Warehouse or Data Lake with the same engine used with Azure Data Factory.
Azure Data Lake Storage+Azure SQL Data Warehouse+Azure Analytics=Azure Synapse Analytics
Quick ad-hoc queries – before you decide how to proceed
Logical data warehouse – an abstraction layer on top of raw data
Transform data in lake – consume it directly using Power BI
The number of compute nodes ranges from 1 to 60, and is determined by the service level for Synapse SQL.
Spark notebooks- combine code, text, markdown and data visualization
YARN (Yet Another Resource Negotiator)
https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook
Double encryption on top of Microsoft-managed keys
TDE using Azure Key Vault – Get/Wrap/Unwrap the DEK
Key length: 2048 or 3072
Supported formats for an imported key: .pfx, .byok, .backup
Back up your keys before using them
Create a new backup whenever changes are made to the key
Dynamic data masking
Masks data from non-privileged users
Ability to specify how much is revealed
Configured on specific database fields
Can be used alongside encryption, auditing, row-level security, etc.,
Can be enabled via the portal or T-SQL statements
Types of data masking
Full: xxxx
Partial: uxxx@xxx.com
Random: salary=10000; FUNCTION='random(1,8)'; masked=6
Custom string: ex., name=Udai; FUNCTION='partial(1,"XXXX",1)'; masked=UxxxxI
CREATE USER testuser WITHOUT LOGIN
GRANT SELECT ON sales.customer TO testuser
EXECUTE AS USER = 'testuser'
SELECT ...
REVERT
GO
GRANT UNMASK TO testuser
REVOKE UNMASK FROM testuser
SELECT c.name, tbl.name AS table_name, c.is_masked, c.masking_function
FROM sys.masked_columns AS c
JOIN sys.tables AS tbl
  ON c.[object_id] = tbl.[object_id]
WHERE c.is_masked = 1
How does row-level security work?
Not permission-based but predicate-based
Security policy
The security predicate is an inline table-valued function (iTVF)
Filter predicate
Creating RLS
Create table, insert rows, create users, create a schema (CREATE SCHEMA), create a security predicate (CREATE FUNCTION), create a security policy
RLS best practices
Create a separate schema for the security predicate function
ALTER ANY SECURITY POLICY permission is required
Drop components in the following order: security policy, table, function, schema
Avoid excessive table joins in the predicate function
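The RLS creation steps above can be sketched end to end; the schema, function, table, and column names below are hypothetical:

```sql
-- Row-level security sketch: separate schema, inline table-valued
-- predicate function, then a security policy binding it to a table.
-- (Security schema, dbo.Sales, and SalesRep column are hypothetical)
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_salesPredicate(@SalesRep AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
    -- Returns a row (i.e., grants visibility) only when the row's
    -- SalesRep matches the current database user
    RETURN SELECT 1 AS fn_result
           WHERE @SalesRep = USER_NAME();
GO
CREATE SECURITY POLICY Security.SalesFilter
    ADD FILTER PREDICATE Security.fn_salesPredicate(SalesRep)
    ON dbo.Sales
    WITH (STATE = ON);
```

Note the predicate lives in its own schema, per the best practice above; dropping goes in reverse order (policy, table, function, schema).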
CLS
Controls access to specific columns
Based on the user's context
Grant access to SQL users and Azure AD identities
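As a minimal sketch of column-level security, assuming a hypothetical dbo.Employee table and reusing the testuser principal from the masking demo above:

```sql
-- Column-level security: grant SELECT only on non-sensitive columns.
-- (dbo.Employee and its columns are hypothetical)
GRANT SELECT ON dbo.Employee (EmployeeId, DisplayName, Department) TO testuser;
-- testuser can now SELECT the listed columns; referencing a column
-- outside the grant list (e.g., Salary) fails with a permissions error.
```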