This document provides recommendations for optimizing the performance of .NET Core and C# applications. It covers identifying hot code paths and making them asynchronous, avoiding blocking calls, returning large collections across multiple pages, optimizing data access and I/O, caching frequently used data, pooling HTTP connections, completing long-running tasks outside of requests, minifying client assets, compressing responses, minimizing exceptions and large object allocations, staying on the latest release, and using tools such as PerfTips and the Visual Studio profiler. Diagnosing and improving performance requires understanding the areas that affect scalability, such as hot code paths, asynchronous programming, and efficient data handling.
2. ASP.NET Core Performance Best Practices
Understand hot code paths
A hot code path is a code path that is frequently called and where much of the execution time occurs. Hot code paths typically limit app scale-out and performance.
3. Avoid blocking calls
ASP.NET Core apps should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
A common performance problem in ASP.NET Core apps is blocking calls that could be asynchronous. Many synchronous blocking calls lead to degraded response times.
4. Make hot code paths asynchronous.
Call data access, I/O, and long-running operations APIs asynchronously if an asynchronous API is available. Do not use Task.Run to make a synchronous API asynchronous.
Make controller/Razor Page actions asynchronous. The entire call stack is asynchronous in order to benefit from async/await patterns.
A profiler, such as PerfView, can be used to find threads frequently added to the thread pool.
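As a rough illustration of what an end-to-end asynchronous hot path can look like, here is a minimal sketch assuming an ASP.NET Core controller and a hypothetical IProductRepository with an async API; all type and member names are illustrative, not from the slides:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record Product(int Id, string Name);

// Hypothetical data access abstraction exposing an asynchronous API.
public interface IProductRepository
{
    Task<Product?> GetByIdAsync(int id, CancellationToken cancellationToken);
}

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository) => _repository = repository;

    // Async end to end: the thread is released while the data access call is
    // in flight instead of blocking on .Result or .Wait().
    [HttpGet("{id}")]
    public async Task<ActionResult<Product>> Get(int id, CancellationToken cancellationToken)
    {
        var product = await _repository.GetByIdAsync(id, cancellationToken);
        return product is null ? NotFound() : Ok(product);
    }
}
```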
5. Return large collections across multiple smaller pages
A webpage shouldn't load large amounts of data all at once. When returning a collection of objects, consider whether it could lead to performance issues. Determine if the design could produce the following poor outcomes:
OutOfMemoryException or high memory consumption
Slow response times
Frequent garbage collection
Do add pagination to mitigate the preceding scenarios. Using page size and page index parameters, developers should favor the design of returning a partial result. When an exhaustive result is required, pagination should be used to asynchronously populate batches of results to avoid locking server resources.
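A minimal sketch of such a paged endpoint, assuming an EF Core DbContext named CatalogDbContext with a Products DbSet (the context, entity, and route names are assumptions for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[ApiController]
[Route("api/products")]
public class ProductPagesController : ControllerBase
{
    private readonly CatalogDbContext _dbContext; // assumed EF Core DbContext

    public ProductPagesController(CatalogDbContext dbContext) => _dbContext = dbContext;

    // Returns one page of results instead of materializing the whole table.
    [HttpGet]
    public async Task<ActionResult<IReadOnlyList<Product>>> GetPage(
        [FromQuery] int pageIndex = 0,
        [FromQuery] int pageSize = 50,
        CancellationToken cancellationToken = default)
    {
        pageSize = Math.Clamp(pageSize, 1, 200); // guard against oversized requests

        var page = await _dbContext.Products
            .OrderBy(p => p.Id)                  // stable ordering for paging
            .Skip(pageIndex * pageSize)
            .Take(pageSize)
            .ToListAsync(cancellationToken);     // asynchronous materialization

        return Ok(page);
    }
}
```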
6. Return IEnumerable<T> or IAsyncEnumerable<T>
Returning IEnumerable<T> from an action results in synchronous collection iteration by the serializer. The result is the blocking of calls and a potential for thread pool starvation. To avoid synchronous enumeration, use ToListAsync before returning the enumerable.
Beginning with ASP.NET Core 3.0, IAsyncEnumerable<T> can be used as an alternative to IEnumerable<T> that enumerates asynchronously.
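A minimal sketch of the asynchronous alternative, assuming ASP.NET Core 3.0+ with EF Core and the same assumed CatalogDbContext/Product names as above:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[ApiController]
[Route("api/products")]
public class ProductStreamController : ControllerBase
{
    private readonly CatalogDbContext _dbContext; // assumed EF Core DbContext

    public ProductStreamController(CatalogDbContext dbContext) => _dbContext = dbContext;

    // IAsyncEnumerable<T> lets the framework enumerate the results
    // asynchronously instead of blocking a thread during serialization.
    [HttpGet("stream")]
    public IAsyncEnumerable<Product> GetAll()
    {
        return _dbContext.Products
            .OrderBy(p => p.Id)
            .AsAsyncEnumerable(); // EF Core's asynchronous streaming
    }
}
```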
7. Minimize large object allocations
The .NET Core garbage collector manages allocation and release of memory automatically in ASP.NET Core apps. Automatic garbage collection generally means that developers don't need to worry about how or when memory is freed. However, cleaning up unreferenced objects takes CPU time, so developers should minimize allocating objects in hot code paths.
Garbage collection is especially expensive on large objects (> 85 KB). Large objects are stored on the large object heap and require a full (generation 2) garbage collection to clean up. Frequent allocation and de-allocation of large objects can cause inconsistent performance.
8. Recommendations:
Do consider caching large objects that are frequently used. Caching large objects prevents expensive allocations.
Do pool buffers by using an ArrayPool<T> to store large arrays.
Do not allocate many, short-lived large objects on hot code paths.
Memory issues, such as the preceding, can be diagnosed by reviewing garbage collection (GC) stats in PerfView and examining:
Garbage collection pause time.
What percentage of the processor time is spent in garbage collection.
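A minimal sketch of buffer pooling with ArrayPool<T>: copying between streams with a rented buffer instead of allocating a fresh large array on every call. The helper name and buffer size are illustrative, not from the slides:

```csharp
using System;
using System.Buffers;
using System.IO;
using System.Threading.Tasks;

public static class StreamCopier
{
    public static async Task CopyAsync(Stream source, Stream destination)
    {
        // Rent a large buffer from the shared pool instead of allocating one
        // on the large object heap for every call.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(128 * 1024);
        try
        {
            int bytesRead;
            while ((bytesRead = await source.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await destination.WriteAsync(buffer, 0, bytesRead);
            }
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer); // always return the rented buffer
        }
    }
}
```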
9. Optimize data access and I/O
Interactions with a data store and other remote services are often the slowest parts of an ASP.NET Core app. Reading and writing data efficiently is critical for good performance.
10. Recommendations:
Do call all data access APIs asynchronously.
Do not retrieve more data than is necessary. Write queries to return just the data that's necessary for the current HTTP request.
Do consider caching frequently accessed data retrieved from a database or remote service if slightly out-of-date data is acceptable. Depending on the scenario, use a MemoryCache or a DistributedCache.
Do minimize network round trips. The goal is to retrieve the required data in a single call rather than several calls.
Do use no-tracking queries in Entity Framework Core when accessing data for read-only purposes. EF Core can return the results of no-tracking queries more efficiently.
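A minimal sketch of the caching recommendation using IMemoryCache, assuming a hypothetical ICatalogService whose category list tolerates being a few minutes stale; all names here are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record Category(int Id, string Name);

// Hypothetical service whose results are expensive to fetch.
public interface ICatalogService
{
    Task<IReadOnlyList<Category>> GetCategoriesAsync(CancellationToken cancellationToken);
}

public class CachedCatalogService
{
    private readonly IMemoryCache _cache;
    private readonly ICatalogService _inner;

    public CachedCatalogService(IMemoryCache cache, ICatalogService inner)
    {
        _cache = cache;
        _inner = inner;
    }

    public async Task<IReadOnlyList<Category>> GetCategoriesAsync(CancellationToken cancellationToken)
    {
        // Serve slightly out-of-date data from the cache; refresh every 5 minutes.
        var categories = await _cache.GetOrCreateAsync("catalog:categories", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _inner.GetCategoriesAsync(cancellationToken);
        });

        return categories ?? Array.Empty<Category>();
    }
}
```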
11. Recommendations:
Do filter and aggregate LINQ queries (with .Where, .Select, or .Sum statements, for example) so that the filtering is performed by the database.
Do consider that EF Core resolves some query operators on the client, which may lead to inefficient query execution.
Do not use projection queries on collections, which can result in executing "N + 1" SQL queries.
We recommend measuring the impact of the preceding high-performance approaches before committing the code base. The additional complexity of compiled queries may not justify the performance improvement.
Query issues can be detected by reviewing the time spent accessing data with Application Insights or with profiling tools. Most databases also make statistics available concerning frequently executed queries.
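A minimal sketch of a read-only query that keeps filtering and projection in the database and skips change tracking. It assumes a CatalogDbContext with an Orders DbSet and an Order entity with CreatedUtc and Total columns; all names are assumptions for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public record OrderSummary(int Id, decimal Total);

public class OrderReportService
{
    private readonly CatalogDbContext _dbContext; // assumed EF Core DbContext

    public OrderReportService(CatalogDbContext dbContext) => _dbContext = dbContext;

    public async Task<List<OrderSummary>> GetRecentTotalsAsync(
        DateTime since, CancellationToken cancellationToken)
    {
        return await _dbContext.Orders
            .AsNoTracking()                               // read-only: skip change tracking
            .Where(o => o.CreatedUtc >= since)            // filter runs in the database
            .Select(o => new OrderSummary(o.Id, o.Total)) // project only the needed columns
            .ToListAsync(cancellationToken);
    }
}
```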
12. Minify client assets
ASP.NET Core apps with complex front-ends frequently serve many JavaScript, CSS, or image files. Performance of initial load requests can be improved by:
Bundling, which combines multiple files into one.
Minifying, which reduces the size of files by removing whitespace and comments.
Recommendation: Do consider other third-party tools, such as Webpack, for complex client asset management.
13. Minimize exceptions
Exceptions should be rare. Throwing and catching exceptions is slow relative to other code flow patterns. Because of this, exceptions shouldn't be used to control normal program flow.
Recommendations:
Do not use throwing or catching exceptions as a means of normal program flow, especially in hot code paths.
Do include logic in the app to detect and handle conditions that would cause an exception.
Do throw or catch exceptions for unusual or unexpected conditions.
App diagnostic tools, such as Application Insights, can help to identify common exceptions in an app that may affect performance.
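A small illustration of the "detect the condition instead of throwing" recommendation; the class and method names are invented for the example:

```csharp
using System;

public static class PriceParser
{
    // Avoid: using an exception for the expected "not a number" case.
    public static decimal ParseWithExceptions(string raw)
    {
        try
        {
            return decimal.Parse(raw);
        }
        catch (FormatException)
        {
            return 0m; // expected failure handled via a thrown exception (slow in hot paths)
        }
    }

    // Prefer: detect the condition; TryParse reports failure without throwing.
    public static decimal ParseWithTryParse(string raw)
    {
        return decimal.TryParse(raw, out var price) ? price : 0m;
    }
}
```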
16. The preceding code frequently captures a null or incorrect HttpContext in the constructor.
Do this: The following example:
Stores the IHttpContextAccessor in a field.
Uses the HttpContext field at the correct time and checks for null.
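A minimal sketch of that pattern; the class and method names are illustrative, not from the slides:

```csharp
using Microsoft.AspNetCore.Http;

public class UserLogger
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public UserLogger(IHttpContextAccessor httpContextAccessor)
    {
        // Store the accessor, not httpContextAccessor.HttpContext, which may be
        // null or belong to the wrong request at construction time.
        _httpContextAccessor = httpContextAccessor;
    }

    public void LogCurrentUser()
    {
        var httpContext = _httpContextAccessor.HttpContext; // resolved per call
        if (httpContext is null)
        {
            return; // no active request (e.g., called from a background thread)
        }

        var userName = httpContext.User?.Identity?.Name;
        // ... use userName ...
    }
}
```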
17. Do not use the HttpContext after the request is complete
HttpContext is only valid as long as there is an active HTTP request in the ASP.NET Core pipeline. The entire ASP.NET Core pipeline is an asynchronous chain of delegates that executes every request. When the Task returned from this chain completes, the HttpContext is recycled.
Do not do this: The following example uses async void, which makes the HTTP request complete when the first await is reached. Using async void is ALWAYS a bad practice in ASP.NET Core apps. The example:
Accesses the HttpResponse after the HTTP request is complete.
Crashes the process.
18. Do this: The following example returns a Task to the framework, so the HTTP request doesn't complete until the action completes.
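A hedged sketch of both patterns as small controller actions; the controller name and routes are invented for illustration:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/lifetime")]
public class RequestLifetimeController : ControllerBase
{
    // Do not do this: async void completes the request at the first await,
    // then touches the response afterwards, which can crash the process.
    [HttpGet("bad")]
    public async void BadFireAndForget()
    {
        await Task.Delay(1000);
        Response.ContentType = "text/plain"; // the request has already completed here
    }

    // Do this: returning a Task keeps the request alive until the work is done.
    [HttpGet("good")]
    public async Task GoodAwaitedAction()
    {
        await Task.Delay(1000);
        Response.ContentType = "text/plain"; // still inside the active request
    }
}
```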
19. Do not capture the HttpContext in background threads
Do not do this: The following example shows a closure capturing the HttpContext from the Controller property. This is a bad practice because the work item could:
Run outside of the request scope.
Attempt to read the wrong HttpContext.
20. Do this: The following example:
Copies the data required in the background task during the request.
Doesn't reference anything from the controller.
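A minimal sketch of that "copy first, then queue" approach. Task.Run stands in for whatever background mechanism the app uses, and the controller, route, and log line are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/audit")]
public class AuditController : ControllerBase
{
    [HttpPost]
    public IActionResult QueueAudit()
    {
        // Copy the values needed from the request while it is still active.
        string path = HttpContext.Request.Path;
        string user = HttpContext.User?.Identity?.Name ?? "anonymous";

        _ = Task.Run(async () =>
        {
            await Task.Delay(1000);
            // Uses only the copied values; HttpContext is never touched here.
            Console.WriteLine($"Audit: {user} requested {path}");
        });

        return Accepted();
    }
}
```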
21. Do not capture services injected into the controllers on background threads
Do not do this: The following example shows a closure capturing the DbContext from the Controller action parameter. This is a bad practice. The work item could run outside of the request scope. The ContosoDbContext is scoped to the request, resulting in an ObjectDisposedException.
22. Do this: The following example:
Injects an IServiceScopeFactory in order to create a scope in the background work item. IServiceScopeFactory is a singleton.
Creates a new dependency injection scope in the background thread.
Doesn't reference anything from the controller.
Doesn't capture the ContosoDbContext from the incoming request.
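A minimal sketch of that pattern, reusing the slide's ContosoDbContext name; the Items DbSet, controller, route, and use of Task.Run are assumptions for the example:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

[ApiController]
[Route("api/background")]
public class BackgroundWorkController : ControllerBase
{
    [HttpPost("{id}")]
    public IActionResult Process(int id, [FromServices] IServiceScopeFactory serviceScopeFactory)
    {
        _ = Task.Run(async () =>
        {
            // A fresh DI scope owned by the background work item,
            // independent of the (already completed) request scope.
            using var scope = serviceScopeFactory.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<ContosoDbContext>();

            var item = await db.Items.FindAsync(id); // Items is an assumed DbSet
            // ... process the item ...
        });

        return Accepted();
    }
}
```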
24. PerfTips
The "typical" way of measuring code performance in development is a Stopwatch, or worse, DateTime.Now. Visual Studio (since VS 2015) does it automatically with breakpoints and PerfTips.
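For reference, the manual Stopwatch approach the slide mentions looks roughly like this; DoWorkUnderTest is a placeholder for the code being measured:

```csharp
using System;
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();
DoWorkUnderTest();                 // placeholder for the code path being measured
stopwatch.Stop();
Console.WriteLine($"Elapsed: {stopwatch.ElapsedMilliseconds} ms");

static void DoWorkUnderTest()
{
    // ... the code whose duration you want to measure ...
}
```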
25. Performance Profiler – For .NET Core
Sampling Profiler
Tracking memory allocations
Unfortunately, the Instrumentation Profiler does not support .NET Core in the current VS version (15.3.2).