What is RDFa, what is new in RDFa 1.1, why is it important for Linked Data, and who uses it and how? - Talk given at WebDirectionsSouth 2010 in Sydney, 14/10/2010
The .htaccess file is a configuration file for web servers running the Apache web server software. In this quick tutorial you will see some of the possible uses of the .htaccess file along with examples for each case.
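Two of the classic uses can be sketched as follows (a hypothetical example; the file names are placeholders):

```apache
# Redirect all HTTP traffic to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]

# Deny direct access to a sensitive file
<Files ".env">
    Require all denied
</Files>
```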
This document provides a summary of a presentation about using library resources to get the best grades. It covers evaluating different types of resources based on currency and authority, producing a good varied bibliography, and correctly writing references. The presentation includes tasks where students evaluate sample bibliographies, correct references in a sample bibliography, and a quiz to conclude the session. The overall aim is to help students understand how to effectively use and cite library resources in their assignments.
This document outlines Jessica Lappin's vision as Manhattan borough president to empower communities through community-based planning. Her key initiatives include: 1) Working with community boards to create and regularly update 197-a plans to guide neighborhood development; 2) Creating a Community Board College program to provide training to board members on planning issues; and 3) Developing a CommunityStat software program to track constituent complaints and identify problems to direct city resources towards improving services.
This 3 page document contains initial ideas and proposals for designing a front cover, contents page, and double page spread. The front cover ideas focus on titles and imagery. The contents page ideas explore layout and formatting options. The double page spread concepts consider visual elements and storytelling approaches.
Steps for adding a sitemap to webmaster tools - OM Maurya
Google Search Console (previously Google Webmaster Tools) is a no-charge web service by Google for webmasters. It allows webmasters to check indexing status and optimize visibility of their websites.
R&B originated in the late 1970s when artists like Michael Jackson and Quincy Jones added electronic elements to black music to make it more danceable. Over the decades that followed, R&B evolved as different artists incorporated elements of hip hop and other genres. It grew hugely popular in the 2000s alongside hip hop. Today, R&B is a massively successful global genre with many artists from various backgrounds creating diverse styles of music enjoyed by wide audiences worldwide.
Favicon is a small icon associated with a website, typically displayed in the browser tab or address bar. It is created by taking a logo image and converting it to a 16x16 or 32x32 pixel favicon file format like PNG, GIF, or ICO. To create a favicon, a website owner uploads their logo to an online favicon generator, selects the 16x16 pixel option, downloads the generated favicon.ico file, adds the favicon code to website page headers, and uploads the favicon.ico file and updated pages to their website. Once complete, the favicon logo will display next to the website URL.
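The "favicon code" added to each page's header is typically a link element or two in the <head>; the paths here are hypothetical:

```html
<!-- Hypothetical paths; place favicon.ico at the site root -->
<link rel="icon" href="/favicon.ico" type="image/x-icon">
<link rel="icon" href="/favicon-32x32.png" sizes="32x32" type="image/png">
```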
An RSS feed is a file that contains the latest content from a blog or website in an easily readable format. It includes items that each have a title, description, and link. RSS feeds are made of XML tags to designate these parts and allow automatic updating of content. To create an RSS feed, one can manually write the XML code or use an online generator. The generator asks for website details and feed items, generates the XML code, and saves it as an RSS file to upload and validate on a validation site to check for errors.
Chapple, R. M. 2014 A Game of Murals. Westeros & Changing Times in East Belfa... - Robert M Chapple
This document summarizes and discusses a large mural celebrating Game of Thrones that was painted beside a Twelfth of July bonfire in East Belfast, Northern Ireland in 2014. The mural uses iconic images from the series set against a flowing map of Westeros. The author notes this mural represents a cultural shift away from traditional sectarian political murals toward more inclusive street art influenced by pop culture. While some locals still engage in racism, murals like this one give hope that the area is becoming more open to outside cultural influences.
An RSS feed is a file that contains the latest content from a source in a standardized format. It uses tags like <title>, <description>, and <link> to define items that each represent a piece of content. RSS feeds allow content to be automatically updated and syndicated across different directories and sites. To create an RSS feed, one can either manually write the XML code following the basic syntax structure, or use an online RSS generator tool. The tool allows entering website details and pages to include as feed items. Once generated, the RSS file should be uploaded and its validity can be checked using an online validator.
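A minimal hand-written feed following that structure might look like this (all URLs and titles are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>Latest posts from a hypothetical blog</description>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
      <description>A short summary of the post.</description>
    </item>
  </channel>
</rss>
```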
How to set up the Google Analytics tracking code for a website - OM Maurya
Google Analytics is a free web analytics service provided by Google that tracks and reports website traffic. To use it, you first sign up for a Google Analytics account and get a tracking ID. Then you copy the tracking code into the <head> section of all the web pages you want to track. Finally, you can view real-time reports on your website traffic and activity in your Google Analytics account. In summary, Google Analytics allows you to track website traffic by generating a tracking code, adding it to your site pages, and viewing analytics reports.
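The tracking snippet pasted into the <head> typically looks like Google's standard gtag.js code; the measurement ID below is a placeholder:

```html
<!-- Google Analytics tag; replace G-XXXXXXX with your own tracking ID -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXX');
</script>
```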
A sitemap is an XML file that lists the URLs of a website and includes additional metadata about each page, such as update frequency and importance. This allows search engines to better understand the site's structure and content. The document then provides 8 steps for creating a sitemap using an online generator, downloading the sitemap.xml file, uploading it to the website files via FTP, and verifying it is accessible at the sitemap.xml URL.
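A minimal sitemap.xml following the sitemaps.org protocol looks like this (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>
```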
This document describes the key elements of an academic report. It explains that a report presents the results of an investigation and draws on information from different sources. It also stresses that a report should follow a clear, logical order and back up its information with sound arguments and reliable references. Finally, it summarizes the minimum elements a report should contain, such as an introduction, methods, results, and a bibliography.
The document provides an overview of web development. It discusses how the web was created in 1989 by Tim Berners-Lee and the initial technologies of HTTP, HTML, and URLs. It then explains how a basic web application works with a browser connecting to a web server to request and receive HTML files and other resources. The document also summarizes key concepts in web development including front-end versus back-end code, common programming languages and frameworks, database usage, and standards that allow interoperability across systems.
These are the slides from my "HTML5 Real-Time and Connectivity" presentation at the San Francisco HTML5 User Group (http://sfhtml5.org). The presentation covers:
Web Origin
Cross Document Messaging (PostMessage)
CORS
XHR Level 2
WebSocket
Server-Sent Events (EventSource)
SPDY
Top 10 HTML5 Features for Oracle Cloud Developers - Brian Huff
This document discusses top HTML5 features for Oracle Cloud developers. It begins with an introduction to various Oracle Cloud services that use HTML5 extensively, such as Oracle Sites Cloud Service. It then discusses why HTML5 is important for cloud development due to its wide acceptance, rapid development cycles, and cheaper hosting model. The document outlines the top 10 HTML5 features developers should know, including semantic HTML, local storage, geolocation, OAuth2, CORS, advanced forms, WebSockets, WebWorkers, built-in audio/video support, and custom DOM elements. It provides details and examples for each feature.
Of CORS that's a thing: how CORS in the cloud still kills security - John Varghese
This document discusses how Cross-Origin Resource Sharing (CORS) is intended to allow cross-domain requests but can impact security if misconfigured. CORS uses HTTP headers to enable controlled cross-domain access and is supported by services like Amazon S3, CloudFront, API Gateway, and Lambda. While CORS allows legitimate cross-domain content sharing, misconfigurations can bypass the same-origin policy and allow attackers to steal user sessions, credentials, or other sensitive data across domains. The document provides examples of how CORS has been exploited in the past and cautions that even minor CORS issues can become major security vulnerabilities when user contexts are involved.
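The header decision at the heart of those misconfigurations can be sketched in a few lines; the allowlist and origins below are hypothetical, and reflect_any models the insecure "echo back any Origin together with credentials" pattern the talk warns about:

```python
# Hypothetical allowlist of origins permitted to read our responses.
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers(origin: str, reflect_any: bool = False) -> dict:
    """Return the CORS headers a server should attach to a response.

    reflect_any=True mimics the misconfiguration of echoing back any
    Origin plus Allow-Credentials, which lets a hostile page read
    authenticated responses cross-origin.
    """
    if reflect_any:
        # Insecure: arbitrary origin + credentials = session theft risk.
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
        }
    if origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": origin}
    return {}  # Unknown origin: no CORS headers, so the browser blocks the read.

print(cors_headers("https://evil.example", reflect_any=True))
print(cors_headers("https://app.example.com"))
print(cors_headers("https://evil.example"))
```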
Oftentimes SEO is not a technical priority for a development team, mostly because it is difficult and takes a significant amount of invested time and effort. This session will cover how-to information and SEO advice on how to adjust for server and design issues that may be negatively impacting your search engine optimization efforts. We will discuss the 3 main factors of technical SEO: crawling, indexation, and ranking. Additional topics include redirects & server delivery, robots, site architecture, site performance, sitemap protocols, and more.
Publishing strategies for API documentation - Tom Johnson
Most of the common tools for publishing help material fall short when it comes to API documentation. Much API documentation (such as for Java, C++, or .NET APIs) is generated from comments in the source code. Their outputs don’t usually integrate with other help material, such as programming tutorials or scenario-based code samples.
REST APIs are a breed of their own, with almost no standard tools for generating documentation from the source. The variety of outputs for REST APIs is as diverse as the APIs themselves, as you can see by browsing the 11,000+ web APIs on programmableweb.com.
As a technical writer, what publishing strategies do you use for API documentation? Do you leave the reference material separate from the tutorials and code samples? Do you convert everything to DITA and merge it into a single output? Do you build your own help system from scratch that imports your REST API information?
There’s not a one-size-fits-all approach. In this presentation, you’ll learn a variety of publishing strategies for different kinds of APIs, with examples of what works well for developer audiences. No matter what kind of API you’re working with, you’ll benefit from this survey of the API doc publishing scene.
- See more at: http://idratherbewriting.com
This document provides an introduction to web crawlers. It defines a web crawler as a computer program that browses the World Wide Web in a methodical, automated manner to gather pages and support functions like search engines and data mining. The document outlines the key features of crawlers, including robustness, politeness, distribution, scalability, and quality. It describes the basic architecture of a crawler, including the URL frontier that stores URLs to fetch, DNS resolution, page fetching, parsing, duplicate URL elimination, and filtering based on robots.txt files. Issues like prioritizing URLs, change rates, quality, and politeness policies are also discussed.
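The frontier-plus-deduplication core described above can be sketched in a few lines of Python; a toy in-memory link graph stands in for real fetching, and all URLs are made up:

```python
from collections import deque

# Toy link graph standing in for fetched-and-parsed pages (hypothetical URLs).
LINKS = {
    "http://a.test/": ["http://a.test/1", "http://a.test/2"],
    "http://a.test/1": ["http://a.test/2", "http://a.test/"],
    "http://a.test/2": ["http://a.test/3"],
    "http://a.test/3": [],
}

def crawl(seed):
    """Breadth-first crawl: a URL frontier plus duplicate-URL elimination."""
    frontier = deque([seed])   # the URL frontier: URLs waiting to be fetched
    seen = {seed}              # duplicate elimination: never enqueue a URL twice
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)      # "fetch and parse" the page
        for link in LINKS.get(url, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("http://a.test/"))
```

A real crawler would add the politeness and prioritization policies the document mentions (per-host delays, robots.txt checks) around this same loop.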
Often, web developers keep hearing about "Same Origin Policy (SOP)" of browsers but live with half-knowledge or with several confusions. This session attempts to clear the misconceptions of SOP.
Web scraping involves extracting information from websites using computer software. Common uses of web scraping include price comparison, contact scraping, and weather data monitoring. Libraries like Pismo, Mechanize, and Anemone allow scraping metadata and content from pages. Anemone is an all-encompassing scraping library that can navigate sites, follow redirects and links, and record page response times and depths using a breadth-first search algorithm. The robots.txt file allows websites to specify which pages crawlers and bots should not access.
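Python's standard library can parse and apply such a robots.txt policy; here a small hypothetical policy is parsed from a string rather than fetched from a site:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt policy, supplied as text.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite crawler checks before fetching each URL.
print(rp.can_fetch("MyBot", "http://example.com/public/page"))   # allowed
print(rp.can_fetch("MyBot", "http://example.com/private/page"))  # disallowed
```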
URI refers to Uniform Resource Identifiers, which include URLs and URNs used to identify resources on the web. URLs contain the protocol, host, path, and name to locate a resource using its network location. URIs are encoded to represent unsafe characters like spaces using percent encoding. Web browsers make HTTP requests to web servers, which respond by sending the requested pages back to the browser over the TCP protocol in a stateless manner according to the HTTP specification. HTML forms allow collecting user input on web pages for submission to servers via the GET or POST methods.
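Python's urllib.parse illustrates both ideas: percent-encoding unsafe characters such as spaces, and decomposing a URL into its parts (the URLs are placeholders):

```python
from urllib.parse import quote, unquote, urlsplit

# Percent-encode unsafe characters (the space becomes %20; '/' stays safe).
path = quote("/docs/my report.html")
print(path)            # /docs/my%20report.html
print(unquote(path))   # decodes back to the original

# Split a URL into scheme, host, port, and path.
parts = urlsplit("https://example.com:8080/a/b?q=1")
print(parts.scheme, parts.hostname, parts.port, parts.path)
```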
This document provides an overview of web development using Visual Studio 2012 and ASP.NET MVC. It discusses web fundamentals like HTTP transactions and the role of web servers. It also introduces ASP.NET MVC, covering the model-view-controller pattern, request lifecycle, and creating a sample project. The presenter demonstrates building a simple dynamic website using ASP.NET MVC in Visual Studio 2012.
Code for Startup MVP (Ruby on Rails) Session 1 - Henry S
First session on learning to code for startup MVPs using Ruby on Rails.
This session covers web architecture and Git/GitHub, and builds a real Rails app that is deployed to Heroku at the end.
Thanks,
Henry
Drupal is not intended to directly generate entire web pages. It is better suited as a back-end content management system, with other technologies handling page assembly and delivery. For high-traffic sites, offloading elements like user comments, real-time updates and cached content to external services improves scalability. Edge side includes and client-side technologies can incorporate dynamic fragments into cached pages without involving Drupal. This allows Drupal to focus on content while distributing page load across the technical stack.
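The edge-side-include approach mentioned above can be as small as one tag in the cached page; the cache or CDN layer resolves it on the way out, so Drupal only renders the fragment itself (the fragment path is hypothetical):

```html
<!-- The surrounding page is served from cache; only this fragment
     is fetched from the backend on each request (path is hypothetical). -->
<esi:include src="/fragments/comments/node/123" />
```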
Rendering: Or why your perfectly optimized content doesn't rank - WeLoveSEO
The document discusses how Google renders webpages for indexing. It explains that Google uses the Chromium browser and its components like Blink, V8, and the headless Chrome browser to render pages. The rendering process involves crawling the initial HTML, extracting links and resources, loading necessary scripts and files, and finally rendering the fully assembled page. Issues like undiscoverable links, blocking resources, dependencies, and client-side rendering can cause content to be missing from what Google indexes. The document provides tips to improve rendering such as ensuring visibility, adding diagnostics, and taking iterative steps.
Short presentation given at a local Kotlin meetup on what to look for in a server framework and pros/cons of Kotlin server frameworks that are available
So, you heard "the Web is Programmable, Internet of Things, Digitalization", but have no or little programming skills. Nevertheless, this is 2016, and you want to learn enough about Web Programming to be part of some fun and exciting Web challenge, or maybe participate in a Hackathon…
Well, I am happy we met. I suggest you take the tour "from ZERO to REST in an hour": we'll teach you to forge your own HTTP requests against the GitHub API. After this tour, you'll know enough to interact with any RESTful Web API. Worth mentioning: this presentation is entirely scripted, so pay attention to each slide's comments.
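The kind of request the tour forges against the GitHub API can be sketched with Python's standard library. The request is built but not sent, so the sketch runs offline; octocat is the sample user GitHub's own documentation uses:

```python
from urllib.request import Request

# Build (but don't send) a GET request against the public GitHub API.
req = Request(
    "https://api.github.com/users/octocat",
    headers={"Accept": "application/vnd.github+json"},
    method="GET",
)
print(req.get_method(), req.full_url)
print(req.get_header("Accept"))
```

Passing the built request to urllib.request.urlopen(req) would actually send it and return the JSON response.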
Did you enjoy the tour? Looking forward to learning more?
1. Post your comments below about enhancements, and about any subjects you'd like to see covered.
2. Join the Cisco developers community : https://developer.cisco.com/
3. Take a free online Coding Lab (REST, Python, Parsing JSON, RAML, Git…)
https://learninglabs.cisco.com/labs/tags/Coding
4. Meet DevNet teams at a physical event: conferences, hackathons
https://developer.cisco.com/site/devnet/events-contests/events/
This document provides an overview of web technologies, including:
- Core technologies like web browsers, web servers, URIs, and HTTP.
- Client-side technologies like HTML, CSS, JavaScript, DOM, AJAX, and HTML5 for enhancing user interfaces.
- Server-side technologies like CGI, PHP, Java servlets, and JSPs for building dynamic web applications.
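The server-side idea shared by CGI, PHP, and servlets (compute data per request, then render it into markup) can be sketched with a hypothetical render_page helper:

```python
# Minimal sketch of server-side dynamic HTML generation: compute data,
# then render it into markup, as CGI scripts, PHP pages, and servlets do.
def render_page(user: str) -> str:
    items = ["home", "about"]
    nav = "".join(f"<li>{i}</li>" for i in items)
    return f"<html><body><h1>Hello, {user}!</h1><ul>{nav}</ul></body></html>"

print(render_page("Ada"))
```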
Political Strategist India | Significance of social media in political campaigns - Ishan Mishra
This document discusses the significance of social media in political campaigns. It notes that major social media platforms like Facebook and Twitter now have over one billion and 200 million users, respectively. Social media has changed political communication by allowing politicians and parties to engage directly with voters online. Parties in India like the BJP and AAP are very active on social media and have built large followings. Social media is particularly important for reaching younger voters. Data shows that in some constituencies the number of Facebook users exceeds the margin of victory, demonstrating social media's potential influence. The optimal timing for engagement on platforms like Facebook, Twitter, and LinkedIn is also discussed.
Social Media Agency & Digital Marketing Company in Indore - Ishan Mishra
The document discusses various digital marketing strategies and tactics including leveraging digital platforms, generating custom content, measuring campaign success, and identifying target audiences. It also provides tips on search engine optimization, social media marketing, content creation, and using analytics tools like Google Analytics. Specific tactics mentioned include keyword research, link building, social media posting best practices, creating shareable content, and tracking trends.
ISHANTECH - AN INTERACTIVE MARKETING AGENCY SPECIALIZING IN SEO, PPC, CRO, CV... - Ishan Mishra
ISHANCONSULT is an award winning, professional SEO, web design & development company. We work with everyone, from Fortune 500’s to local companies, creating sustainable relationships between brands and consumers.
AdSense Optimization Tips for increased ad revenue - Ishan Mishra
This document contains tips for optimizing websites for Google AdSense advertising. It includes recommendations around ad placement like using light or dark backgrounds, helping search engines find the site through good anchor text and avoiding back button issues. It also suggests tracking metrics like click-through rate, bounce rate and referral sources. Additional tips include using descriptive titles, avoiding similar content/titles, allowing comments and being aware of pricing and audience.
Online Travel Agency Report on Social Media Habits of Trave... - Ishan Mishra
The document analyzes the social media habits of major Indian travel and tourism brands on Facebook and Twitter over a one month period. On Facebook, MakeMyTrip led in engagement score and number of replies to fans, while Goibibo had the most fans. On Twitter, iXiGo had the most tweets while MakeMyTrip responded most quickly to followers. The document provides metrics on posts, growth rates, and response times for various brands on both platforms.
The Pandavas were better prepared and organized than the Kauravas, having turned weaknesses into strengths, built strong alliances, and developed a cooperative team spirit with distributed leadership. In contrast, the Kauravas lacked cohesion and commitment, as their leaders had personal motives conflicting with the war effort. The Pandavas' experience with diverse peoples and ideologies made them well-rounded and aware of realities, unlike the Kauravas who were isolated in their power.
Atif Aslam is a Pakistani singer, songwriter and actor born in 1983 in Wazirabad, Pakistan. He studied at PAF College Lahore where he won his first singing competition in 1998. After completing his bachelor's degree, he formed the band Jal which recorded their first popular song "Aadat." After leaving Jal due to personal differences, Atif pursued a successful solo career. He has recorded numerous hit songs for Bollywood and Hollywood films. Atif is also set to make his acting debut in the 2011 film "Bol." He has received many awards and is one of Pakistan's most popular and acclaimed singers.
The document discusses reviving Crystal IT Park in Indore by acquiring its existing buildings and vacant land for Rs. 100 crores. This would provide eligible IT and diamond processing companies tax exemptions for the next 10 years. However, the document notes that tax exemptions for Special Economic Zones are scheduled to end in March 2011 and are unlikely to be extended further based on government signals. It proposes forming a consortium to commit to buying space, which would incentivize a developer to finish constructing another building by December 2010 for companies to move in.
Global Management Consulting, Technology and Outsourcing Services from ISHAN...Ishan Mishra
The document contains three short stories that provide management lessons:
1. A turkey eats dung which gives it the strength to climb to the top of a tree, but it is then shot by a farmer, teaching that "Bullshit might get you to the top, but it won't keep you there."
2. A frozen bird is warmed by cow dung but starts singing, attracting a cat who digs it out and eats it, teaching "Not everyone who drops shit on you is your enemy," "Not everyone who gets you out of shit is your friend," and "When you're in deep shit, keep your mouth shut."
3. A manager asks to do nothing
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
2. Outline
Robot applications
How it works
Cycle Avoidance
3. Applications
Behavior of web robots: they wander from web site to web site, recursively,
1. fetching content,
2. following hyperlinks,
3. processing the data they find.
Colorful names
Crawlers,
Spiders,
Worms,
Bots
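The three-step behavior above can be sketched as a recursive crawl over a tiny in-memory "web" (the WEB dict and its URLs are illustrative stand-ins for real HTTP fetching and HTML link extraction):

```python
# A tiny in-memory "web": URL -> outgoing links (illustrative only).
WEB = {
    "/index.html": ["/a.html", "/b.html"],
    "/a.html": ["/index.html"],  # a link cycle back to the start page
    "/b.html": [],
}

def crawl(url, visited):
    if url in visited:         # cycle avoidance: skip already-seen URLs
        return
    visited.add(url)           # 1. fetch the page (simulated by the dict)
    for link in WEB[url]:      # 2. follow its hyperlinks, recursively
        crawl(link, visited)
    # 3. processing of the fetched data would happen here

visited = set()
crawl("/index.html", visited)
# visited == {"/index.html", "/a.html", "/b.html"}
```

Without the `visited` check, the /a.html link back to /index.html would loop forever, which is why the deck spends several slides on cycle avoidance.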
4. Where to Start: The “Root Set”
[Figure: an example "root set" of starting pages {A, G, L, S}, each the root of a tree of further pages reachable by following links.]
5. Cycle Avoidance
[Figure: (a) the robot fetches page A and follows a link to fetch B; (b) it follows a link and fetches page C; (c) it follows another link and is back at A, completing a cycle.]
6. Loops
Cycles are bad for crawlers for three reasons:
They waste the robot's time and space.
They can overwhelm the web site.
They produce duplicate content.
7. Data structures for robots
Trees and hash tables
Lossy presence bit maps
Checkpoints
Save the list of visited URLs to disk, in case the robot crashes.
Partitioning
Robot farms
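A "lossy presence bit map" trades a small chance of false positives for a large memory saving when tracking visited URLs. A minimal sketch (the table size and the way two bit positions are derived from one MD5 digest are illustrative choices, not from the slides):

```python
import hashlib

NUM_BITS = 1 << 20                 # 1M bits; sized for illustration only
bitmap = bytearray(NUM_BITS // 8)  # the shared presence bit map

def _bit_positions(url):
    # Derive two bit positions from a single MD5 digest of the URL.
    digest = hashlib.md5(url.encode()).digest()
    yield int.from_bytes(digest[:8], "big") % NUM_BITS
    yield int.from_bytes(digest[8:], "big") % NUM_BITS

def mark_visited(url):
    for b in _bit_positions(url):
        bitmap[b // 8] |= 1 << (b % 8)

def maybe_visited(url):
    # False means definitely not visited; True means *probably* visited
    # (other URLs may have set the same bits -- that is the "lossy" part).
    return all(bitmap[b // 8] & (1 << (b % 8)) for b in _bit_positions(url))
```

A false positive only makes the robot skip a page it has not actually seen, which is usually an acceptable trade for a crawl of billions of URLs.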
8. Canonicalizing URLs
Most web robots try to eliminate the obvious aliases by "canonicalizing" URLs into a standard form, by:
Adding ":80" to the hostname, if the port isn't specified.
Converting all %xx escaped characters into their character equivalents.
Removing # tags.
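A minimal sketch of these canonicalization steps using Python's standard library (it assumes plain http URLs, and adds hostname lowercasing as a common extra step not listed above):

```python
from urllib.parse import urlsplit, urlunsplit, unquote

def canonicalize(url):
    scheme, netloc, path, query, _fragment = urlsplit(url)
    if ":" not in netloc:
        netloc += ":80"        # add ":80" when no port is specified
    path = unquote(path)       # convert %xx escapes to their characters
    # Lowercase the hostname and drop the #fragment.
    return urlunsplit((scheme, netloc.lower(), path, query, ""))

canonicalize("http://Example.com/%7Efred/hi.html#top")
# -> "http://example.com:80/~fred/hi.html"
```

Note that blindly unquoting also decodes %2F into a slash, which a production robot would need to handle more carefully (see the path-matching slide later).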
9. Symbolic link cycles
[Figure: two directory trees rooted at /, each containing index.html and subdir. In (a), subdir is an ordinary directory holding index.html and logo.gif; in (b), subdir is an upward symbolic link back to /, creating a cycle.]
10. Dynamic Virtual Web Spaces
It is possible to publish a URL that looks like a normal file but really is a gateway application. The application can generate HTML on the fly that contains links to imaginary URLs on the same server; when these imaginary URLs are requested, new imaginary URLs are generated. Such a malicious web server can take the poor robot on an Alice-in-Wonderland journey through an infinite virtual space, even if the server doesn't really contain any files. The trap can be hard for the robot to detect, because the HTML and URLs may look different each time.
For example, a CGI-based calendaring program.
12. Techniques for avoiding loops
Canonicalizing URLs
Breadth-first crawling
Throttling
Limit the number of pages the robot can fetch from a web site in a period of time.
Limit URL size
Avoids the symbolic-link cycle problem.
Problem: many sites use long URLs to maintain user state.
URL/site blacklists
vs. "excluding robots" (site-side exclusion)
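The throttling idea above can be sketched as a per-site sliding window (the 30-requests-per-minute budget and the site name are arbitrary values for illustration):

```python
import time
from collections import defaultdict

MAX_PER_MINUTE = 30            # hypothetical per-site fetch budget
fetch_log = defaultdict(list)  # site -> timestamps of recent fetches

def may_fetch(site, now=None):
    # Allow at most MAX_PER_MINUTE fetches per site in any 60s window.
    now = time.time() if now is None else now
    recent = [t for t in fetch_log[site] if now - t < 60]
    fetch_log[site] = recent
    if len(recent) >= MAX_PER_MINUTE:
        return False           # over budget: the robot must wait
    fetch_log[site].append(now)
    return True
```

Besides protecting the site, a cap like this also bounds the damage a cycle can do: a loop can consume at most the site's budget, not the whole robot.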
13. Techniques for avoiding loops
Pattern detection
e.g., "subdir/subdir/subdir…"
e.g., "subdir/images/subdir/images/subdir/…"
Content fingerprinting
A checksum concept; the odds of two different pages having the same checksum are small.
Message digest functions such as MD5 are popular for this purpose.
Human monitoring
Design your robot with diagnostics and logging, so human beings can easily monitor the robot's progress and be warned quickly if something unusual happens.
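Content fingerprinting can be sketched in a few lines: digest each page's bytes and skip pages whose digest has been seen before (MD5 is used here because the slide names it; the helper and set are illustrative):

```python
import hashlib

seen_checksums = set()

def is_duplicate(page_bytes):
    # Fingerprint the page content; identical content always collides,
    # while two different pages collide only with tiny probability.
    digest = hashlib.md5(page_bytes).hexdigest()
    if digest in seen_checksums:
        return True
    seen_checksums.add(digest)
    return False

is_duplicate(b"<html>same page</html>")   # False: first sighting
is_duplicate(b"<html>same page</html>")   # True: seen before
```

Unlike URL-based cycle detection, this catches the dynamic-virtual-space case where the same content keeps appearing under ever-changing URLs.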
14. Robotic HTTP
A robot is no different from any other HTTP client program. Many robots try to implement only the minimum amount of HTTP needed to request the content they seek. It is recommended that robot implementers send some basic header information to notify the site of the robot's capabilities, the robot's identity, and where it originated.
15. Identifying Request Headers
User-Agent
Tells the server the robot's name.
From
Gives the email address of the robot's user/administrator.
Accept
Tells the server what media types are okay to send (e.g., only fetch text and sound).
Referer
Tells the server how the robot found links to this site's content.
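A polite robot would attach these four headers to every request; a sketch with the standard library ("ExampleBot" and the addresses are hypothetical, the header names are the ones listed above):

```python
import urllib.request

req = urllib.request.Request(
    "http://www.example.com/index.html",
    headers={
        "User-Agent": "ExampleBot/1.0",        # the robot's name
        "From": "botadmin@example.com",        # who to contact about it
        "Accept": "text/html, text/plain",     # media types it handles
        "Referer": "http://www.example.com/",  # where the link was found
    },
)
# urllib.request.urlopen(req) would send the request with these headers.
```

Sending Accept also saves bandwidth: a robot that only indexes text can tell the server not to bother with images or video.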
16. Virtual docroots cause trouble if no Host header is sent
The robot tries to request index.html from www.csie.ncnu.edu.tw but does not include a Host header. The server is configured to serve both sites, but serves www.ncnu.edu.tw by default, so the robot receives the wrong site's content.
Request message:
GET /index.html HTTP/1.0
User-agent: ShopBot 1.0
Response message:
HTTP/1.0 200 OK
[…]
<HTML>
<TITLE>National Chi Nan University</TITLE>
[…]
17. What else a robot should support
Support virtual hosting
Not sending the Host header can lead robots to associate the wrong content with a particular URL.
Conditional requests
Minimize the amount of content retrieved by using conditional HTTP requests (like cache revalidation).
Response handling
Status codes: 200 OK, 404 Not Found, 304 Not Modified.
Entities: <meta http-equiv="refresh" content="1; URL=index.html">
User-Agent targeting
Web masters should keep in mind that many robots will visit their site. Many sites optimize content for various user agents (e.g., IE or Netscape). Problem: "your browser does not support frames."
18. Misbehaving Robots
Runaway robots
Robots that issue HTTP requests as fast as they can.
Stale URLs
Robots that visit old lists of URLs.
Long, wrong URLs
May reduce the web server's performance, clutter the server's access logs, or even crash the server.
Nosy robots
Some robots fetch URLs that point to private data and make that data easily accessible through search engines.
Dynamic gateway access
Robots don't always know what they are accessing.
19. Excluding Robots
The robot fetches robots.txt from www.ncnu.edu.tw, parses it, and determines whether it is allowed to access the acetylene-torches.html file. It is, so it proceeds with the request.
20. robots.txt format
# Allow googlebot and csiebot to crawl the public parts of
# our site, but no other robots are allowed to crawl anything.
User-Agent: googlebot
User-Agent: csiebot
Disallow: /private

User-Agent: *
Disallow: /
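Python's standard library can apply such a file directly; a small sketch using the record above (the example.com URLs are hypothetical):

```python
from urllib import robotparser

RECORD = """\
User-Agent: googlebot
User-Agent: csiebot
Disallow: /private

User-Agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
# In a real robot: rp.set_url("http://www.example.com/robots.txt"); rp.read()
rp.parse(RECORD.splitlines())

rp.can_fetch("googlebot", "http://www.example.com/public.html")          # True
rp.can_fetch("googlebot", "http://www.example.com/private/secret.html")  # False
rp.can_fetch("ShopBot", "http://www.example.com/public.html")            # False
```

Note how the two consecutive User-Agent lines form a single record, and how an unlisted robot like ShopBot falls through to the `*` record and is excluded entirely.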
21. Robots Exclusion Standard versions
Version 0.0: "A Standard for Robot Exclusion" (June 1994). Martijn Koster's original robots.txt mechanism, with the Disallow directive.
Version 1.0: "A Method for Web Robots Control" (Nov. 1996). Martijn Koster's IETF draft, with additional support for Allow.
Version 2.0: "An Extended Standard for Robot Exclusion" (Nov. 1996). Sean Conner's extension, including regex and timing information; not widely supported.
22. Robots.txt path matching examples
Rule path /tmp, URL path /tmp: match (rule path == URL path).
Rule path /tmp, URL path /tmpfile.html: match (rule path is a prefix of the URL path).
Rule path /tmp, URL path /tmp/a.html: match (rule path is a prefix of the URL path).
Rule path /tmp/, URL path /tmp: no match (/tmp/ is not a prefix of /tmp).
Empty rule path, URL path README.TXT: match (an empty rule path matches everything).
Rule path /~fred/hi.html, URL path /%7Efred/hi.html: match (%7E is treated the same as ~).
Rule path /%7Efred/hi.html, URL path /~fred/hi.html: match (%7E is treated the same as ~).
Rule path /%7efred/hi.html, URL path /%7Efred/hi.html: match (case isn't significant in escapes).
Rule path /~fred/hi.html, URL path /~fred%2Fhi.html: no match (%2F is a slash, but slash is a special case that must match exactly).
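The matching rules in this table can be sketched as a small function (a simplified illustration; real parsers handle more edge cases):

```python
from urllib.parse import unquote

def decode_path(path):
    # Decode %xx escapes (case-insensitively), but keep an escaped slash
    # (%2F) distinct from a literal slash, which must match exactly.
    path = path.replace("%2F", "%2f")
    return "/".join(unquote(seg).replace("/", "%2f")
                    for seg in path.split("/"))

def rule_matches(rule_path, url_path):
    # A rule matches when it is a prefix of the decoded URL path;
    # the empty rule path matches everything.
    return decode_path(url_path).startswith(decode_path(rule_path))
```

Decoding per path segment is what keeps %2F from being confused with a real path separator, reproducing the last row of the table.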
23. HTML Robot-control Meta Tags
e.g.
<META NAME=“ROBOTS” CONTENT=directive-list>
Directive-list:
NOINDEX
Tells the robot not to process this document's content.
NOFOLLOW
Tells the robot not to crawl any outgoing links from this page.
INDEX
Tells the robot it may index this page's content.
FOLLOW
Tells the robot it may crawl the outgoing links from this page.
NOARCHIVE
Tells the robot it should not cache a local copy of the page.
ALL (equivalent to INDEX, FOLLOW)
NONE (equivalent to NOINDEX, NOFOLLOW)
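A robot can honor these tags by scanning each fetched page for them before indexing or following links; a sketch with the standard-library HTML parser (the class name and sample document are illustrative):

```python
from html.parser import HTMLParser

class RobotMetaParser(HTMLParser):
    """Collects directives from <META NAME="ROBOTS" CONTENT=...> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # HTMLParser lowercases tag and attribute names
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives |= {d.strip().upper()
                                for d in a.get("content", "").split(",")}

parser = RobotMetaParser()
parser.feed('<html><head><meta name="ROBOTS" content="noindex, nofollow">'
            '</head></html>')
parser.directives  # {"NOINDEX", "NOFOLLOW"}
```

The robot would then skip indexing when NOINDEX is present and skip the page's outgoing links when NOFOLLOW is present.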
24. Additional META tag directives
DESCRIPTION <text>: Allows an author to define a short text summary of the web page. Many search engines look at META DESCRIPTION tags, allowing page authors to specify appropriate short abstracts to describe their web pages.
<meta name="description" content="Welcome to Mary's Antiques web site">
KEYWORDS <comma list>: Associates a comma-separated list of words that describe the web page, to assist in keyword searches.
<meta name="keywords" content="antiques,mary,furniture,restoration">
REVISIT-AFTER* <no. days>: Instructs the robot or search engine that the page should be revisited, presumably because it is subject to change, after the specified number of days.
<meta name="revisit-after" content="10 days">
* This directive is not likely to have wide support.
30. Modern Search Engine Architecture
[Figure: web search users issue queries through a web search gateway to the query engine, which looks up results in a full-text index database; the search engine's crawler/indexer builds that database by crawling many web servers. One side of the diagram handles user queries, the other crawling and indexing.]
32. Posting the Query
The user fills out an HTML search form (with a GET action HTTP method) on the site in a browser and hits Submit. The browser sends the query (here, "drills") to the search gateway at www.csie.ncnu.edu.tw:
Request message:
GET /search.html?query=drills HTTP/1.1
Host: www.csie.ncnu.edu.tw
Accept: *
User-agent: ShopBot
Response message:
HTTP/1.1 200 OK
Content-type: text/html
Content-length: 1037

<HTML>
<HEAD><TITLE>Search Results</TITLE>
[…]
33. References (HW#4)
Paper reading: "Searching the Web."
Paper reading: "Hyperlink analysis for the Web," IEEE Internet Computing, 2001.
http://www.searchtools.com
Search Tools for Web Sites and Intranets: resources for search tools and robots.
http://www.robotstxt.org/wc/robots.html
The Web Robots Pages: resources for robot developers, including the registry of Internet Robots.
http://www.searchengineworld.com
Search Engine World: resources for search engines and robots.
http://search.cpan.org/dist/libwww-perl/lib/WWW/RobotRules.pm
RobotRules Perl source.
http://www.conman.org/people/spc/robots2.html
An Extended Standard for Robot Exclusion.
Witten, I., Moffat, A., and Bell, T., Managing Gigabytes: Compressing and Indexing Documents and Images, Morgan Kaufmann.