High-speed dial-up works by compressing web pages, images, and text to accelerate the transfer of data over traditional dial-up connections. It uses techniques like on-the-fly compression, filtering of unnecessary content like pop-ups, and caching of frequently accessed content on the server and client-side to reduce load times. Testing showed that popular websites loaded 2 to 5 times faster with high-speed dial-up compared to standard dial-up.
High-speed dial-up providers speed up connections by using acceleration servers with broadband connections to quickly fetch requested web pages and compress files before transferring them. Compression works best for text files but cannot shrink already-encrypted files. Acceleration servers filter requests and cache frequently accessed content, so web pages load and users connect faster than with traditional dial-up. Understanding how high-speed dial-up works helps extend the life of dial-up Internet and provides an alternative for users with slow connection speeds.
Deploying the SPDY Protocol at the LinkedIn Social Network, Omer Shapira (LinkedIn) - Ontico
This document discusses how LinkedIn is adopting SPDY to improve website speed. It begins with an overview of LinkedIn and how speed matters for their business. It then discusses challenges with the existing HTTP protocol, such as latency and limited requests. The document introduces SPDY as a replacement for HTTP that allows multiplexing of requests over a single connection, server push of resources, and header/compression optimizations. The rest of the document discusses how LinkedIn is deploying SPDY, finding performance improvements over CDNs for certain pages/regions, and their plans to combine SPDY with CDNs for faster content delivery globally.
AJAX allows web pages to be updated asynchronously by exchanging data with a web server behind the scenes, allowing parts of a page to change without reloading the entire page. Tuenti uses AJAX extensively to update parts of their single-page application, caching content on both client and server sides for scalability. They route requests to different server farms based on client location and cache content to improve performance. Tuenti serves billions of images per day using multiple CDNs and pre-fetches content to minimize load times.
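As a minimal sketch of the AJAX pattern described above (the endpoint and element names are illustrative assumptions, not Tuenti's), a request fetches JSON in the background and only the affected fragment of the page is re-rendered:

```typescript
// Minimal AJAX sketch: fetch data in the background and update one page fragment
// without reloading. The URL and element id are illustrative assumptions.
async function refreshMessages(): Promise<void> {
  const response = await fetch("/api/messages?since=latest"); // asynchronous request
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const messages: { id: number; text: string }[] = await response.json();

  const list = document.getElementById("message-list");
  if (list === null) return;
  // Replace only this fragment's content; the rest of the page stays untouched.
  list.innerHTML = messages.map((m) => `<li>${m.text}</li>`).join("");
}

refreshMessages().catch(console.error);
```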
Performance is important for user experience. While some myths exist around performance, such as XML being much slower than JSON, tests show they are essentially identical. Easy techniques can improve performance, such as using content delivery networks and image compression. Emerging standards like HTTP 2.0, server-side push, and WebSockets allow pushing data to clients. Frameworks like MessagePack provide smaller binary serialization. Proper use of threading, reusing elements, preloading, and prioritizing content can also boost performance. The perception of speed matters - even 100ms delays impact user behavior.
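As one illustration of the binary serialization mentioned above, here is a minimal sketch using the @msgpack/msgpack npm package (the package choice is an assumption; the summary names only MessagePack itself):

```typescript
// Minimal sketch: MessagePack produces a smaller binary encoding than JSON text.
// Assumes the @msgpack/msgpack npm package.
import { encode, decode } from "@msgpack/msgpack";

const payload = { user: 42, name: "Ada", tags: ["fast", "compact"] };

const packed = encode(payload);         // Uint8Array, compact binary form
const asJson = JSON.stringify(payload); // plain text form for comparison

console.log(packed.byteLength, "bytes as MessagePack");
console.log(new TextEncoder().encode(asJson).byteLength, "bytes as JSON");
console.log(decode(packed)); // round-trips to the original object
```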
Building and Scaling a WebSockets Pubsub System - Kapil Reddy
A talk about how we built and maintained a WebSockets platform on AWS infrastructure.
You will learn insights about:
* How to build and evolve a WebSockets platform on AWS
* How we made the platform more resilient to failures, known and unknown
* How we saved costs by using the right strategy for auto-scaling and load balancing
* How to monitor a WebSockets platform
The document provides an overview of techniques for scaling a system from serving a single user to millions of users. It begins with a single server setup and gradually introduces additional components like load balancers, database replication, caching, and CDNs. It explains how each component helps improve performance, availability and scalability. The key techniques covered are load balancing, database replication, caching frequently accessed data, and using a CDN to distribute static content.
User-centric Networks for Immersive Communication - lauratoni4
The document discusses research being conducted at the Learning And Signal Processing (LASP) Lab at University College London. The LASP team is developing novel adaptive strategies for large-scale networks that exploit graph structures. Their research focuses on topics like multimedia processing, signal processing, networking, and machine learning. Some key applications of their research include virtual reality systems, self-learning networks for IoT devices, and influence maximization problems. Their overall goal is to rethink network control strategies and develop distributed, adapting strategies to support intelligent systems over networks.
Delivering Optimal Images for Phones and Tablets on the Modern Web - Joshua Marantz
Evolving mobile hardware and networks have made it challenging for web sites to deliver an optimal experience to each client. If you send the same image to both a WiFi Retina tablet and a 3G phone, you compromise speed and bandwidth cost against image quality. We'll look at using HTML and CSS image markup, CDNs, HTTP caching directives and how WPO can deliver a great UX with minimal effort.
Images blast off at the speed of Jamstack! - Alba Silvente Fuentes - Wey Wey Web
This document discusses optimizing images for web performance. It begins by introducing common issues like large image file sizes and outdated formats that hurt performance. It then provides solutions for each issue, such as compressing images, using next-generation formats like WebP, adding width and height attributes, responsive images for different resolutions and sizes, art direction for different devices, lazy loading images, and caching images. The document ends by describing a case study of an image component built in Storyblok to help implement these optimizations.
Streaming in Mulesoft allows for efficient processing of large data by streaming it through applications rather than loading entire documents into memory. It provides advantages like consuming very large messages efficiently and not reading payloads into memory. To enable streaming, properties like streaming and deferred writer need to be configured. Streaming supports formats like CSV, JSON, and XML by accessing each record/element sequentially. DataWeave can validate if a script is stream-capable by checking criteria like single variable reference. The demo shows streaming reduces processing time for large payloads compared to non-streaming.
Practical Thin Server Architecture With Dojo - Peter Svensson - rajivmordani
The document discusses thin server architecture, which moves user interface code from servers to clients. This improves scalability by distributing processing across clients. It also enhances responsiveness by allowing immediate client-side reactions to user input. Key benefits include improved scalability, responsiveness, programming model, and support for offline/interoperable applications. The document provides examples using Dojo to demonstrate how client-side widgets and data stores can be implemented following thin server principles.
This is my presentation from code|works in NYC 2009 on Thin Server Architecture. The funny animal slides were "sleeper checks" as this was the morning session.
This document summarizes Jan Jongboom's presentation on building web applications for offline use. Some key points:
1. Only 2.5 billion people out of 7 billion have internet access, so mobile users often don't have a connection. Applications need to work offline.
2. Applications have two parts - the shell (code, UI, assets) and app content (dynamic data). The shell can be cached using the AppCache API to work offline.
3. App content is fetched via AJAX but can be stored in localStorage to serve offline. Path caching pre-fetches related data to improve performance (see the sketch after this list).
4. While AppCache works today, the ServiceWorker API proposed by Google promises a more flexible, script-driven replacement for offline caching.
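The following is a minimal sketch of the caching pattern in point 3 (the function and URL are illustrative, not from the talk): fetch app content via AJAX, keep a copy in localStorage, and fall back to that copy when offline.

```typescript
// Fetch app content; cache it in localStorage; serve the cached copy offline.
// Names and URL are illustrative assumptions.
async function loadContent(url: string): Promise<unknown> {
  const cacheKey = "content:" + url;
  try {
    const response = await fetch(url);
    const data = await response.json();
    localStorage.setItem(cacheKey, JSON.stringify(data)); // refresh the offline copy
    return data;
  } catch {
    const cached = localStorage.getItem(cacheKey); // offline: serve the stored copy
    if (cached !== null) return JSON.parse(cached);
    throw new Error("No network and no cached copy for " + url);
  }
}
```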
Data-Driven Transformation: Leveraging Big Data at Showtime with Apache Spark - Databricks
Interested in learning how Showtime is leveraging the power of Spark to transform a traditional premium cable network into a data-savvy analytical competitor? The growth in our over-the-top (OTT) streaming subscription business has led to an abundance of user-level data not previously available. To capitalize on this opportunity, we have been building and evolving our unified platform which allows data scientists and business analysts to tap into this rich behavioral data to support our business goals. We will share how our small team of data scientists is creating meaningful features which capture the nuanced relationships between users and content; productionizing machine learning models; and leveraging MLflow to optimize the runtime of our pipelines, track the accuracy of our models, and log the quality of our data over time. From data wrangling and exploration to machine learning and automation, we are augmenting our data supply chain by constantly rolling out new capabilities and analytical products to help the organization better understand our subscribers, our content, and our path forward to a data-driven future.
Authors: Josh McNutt, Keria Bermudez-Hernandez
How Netflix Monitors Applications in Near Real-time w Amazon Kinesis - ABD401... - Amazon Web Services
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights.
The document discusses various tools and techniques for optimizing mobile and web performance, including testing sites using tools like WebPageTest and Video Optimizer, optimizing delivery of content like images, videos and text through techniques like compression and CDNs, and best practices for mobile video streaming to reduce startup delays and prevent stalls. Common issues covered include large file sizes, unnecessary connections, and choosing video streams appropriate for available bandwidth.
Puppet – Make stateful apps easier than stateless - Starcounter
Stateful apps are considered hard and impractical. The truth is the opposite! With the right technology, you can develop a thick-client SPA with state entirely controlled on the server. Forget writing countless lines of glue code and callback hell. Welcome to the DRY world of JSON-Patch and PuppetJS!
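As a minimal sketch of the JSON-Patch (RFC 6902) mechanism that PuppetJS builds on, the server sends small patch operations instead of whole documents; the tiny applier below handles only the top-level cases in this example and is not PuppetJS itself:

```typescript
// JSON-Patch sketch: apply "add"/"replace" operations to a document.
// Handles only top-level paths, enough to show the idea.
type PatchOp =
  | { op: "replace"; path: string; value: unknown }
  | { op: "add"; path: string; value: unknown };

function applyPatch(doc: Record<string, unknown>, patch: PatchOp[]): Record<string, unknown> {
  const result = { ...doc };
  for (const op of patch) {
    const key = op.path.slice(1); // top-level paths only, e.g. "/title"
    result[key] = op.value;       // "add" and "replace" coincide at the top level
  }
  return result;
}

const state = { title: "Untitled", saved: false };
const patch: PatchOp[] = [
  { op: "replace", path: "/title", value: "Push it through the wire" },
  { op: "replace", path: "/saved", value: true },
];
console.log(applyPatch(state, patch)); // { title: "Push it through the wire", saved: true }
```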
Practical Thin Server Architecture With Dojo, Sapo Codebits 2008 - codebits
This document discusses the benefits of thin server architecture, where user interface code is moved from the server to the client. Some key benefits include improved scalability, immediate user response times, an organized programming model with clear separation of client and server code, client-side state management, support for offline applications, and improved interoperability. The document provides examples to illustrate how scenarios like styling changes, adding new features, and replacing backend code are simpler with a thin server architecture approach. It argues that separating the user interface from the server using structured data and services allows each layer to focus on its own concerns without unnecessary complexity.
FINRA's Managed Data Lake: Next-Gen Analytics in the Cloud - ENT328 - re:Inve... - Amazon Web Services
FINRA faced challenges with their on-premises data infrastructure, including difficulty tracking data, limited scalability, and high costs. They migrated to a managed data lake on AWS to address these issues. This provided centralized data management with a catalog, separation of storage and compute, encryption, and cost optimization. It enabled faster analytics through Presto querying, machine learning model development, and reduced TCO by 30% compared to their on-premises environment. Lessons learned included embracing disruption, automating infrastructure, and treating infrastructure as code. FINRA is exploring additional AWS services like Athena, Lambda, and Step Functions to continue improving their analytics capabilities.
Tableau & MongoDB: Visual Analytics at the Speed of Thought - MongoDB
This document discusses how Tableau and MongoDB can work together for visual analytics of big data. It describes how MongoDB is a NoSQL database that can handle unstructured and semi-structured data like JSON, and how Tableau allows users to connect to MongoDB through an ODBC driver and visualize the data without needing to write code. The document outlines scenarios where big data comes from human, machine, and process sources and how the combination of Tableau and MongoDB's schema-on-read approach reduces the need for ETL. It also previews demos of connecting Tableau to MongoDB using both the ODBC driver and a PostgreSQL interface.
VPN stands for Virtual Private Network. It uses public networks like the internet to provide secure connections for remote users to access private resources as if they were locally connected. VPNs use protocols like IPSec and SSL to encrypt data in transit and authenticate users. They allow companies to have virtual private networks without having to use expensive private lines. VPNs provide security, flexibility and cost savings compared to traditional private WAN solutions.
The document discusses how high-speed dial-up internet works. It explains that there are two handshakes involved - the modem handshake which initializes the internet connection, and the software handshake which authenticates the user's access to the ISP. It then describes how acceleration servers, file compression, filtering, and caching technologies are used to speed up the process and overcome some limitations of standard dial-up internet. These advances allow dial-up to remain a viable alternative for those not ready to switch to broadband.
The document provides a checklist for front-end performance optimization. It includes recommendations to establish performance metrics and goals, optimize assets like images, videos, fonts and JavaScript, choose frameworks and CDNs wisely, and set priorities to optimize the core experience for all users. Key metrics to target include a Time to Interactive under 5 seconds on 3G and First Input Delay below 100ms.
This document discusses the limitations of the shapefile format and promotes the use of the OGC GeoPackage format as an alternative. It outlines 11 issues with shapefiles, including their multi-file structure, limited attribute name lengths, maximum file size and number of attributes. It then introduces GeoPackages as a single-file SQLite-based format that supports both vector and raster data, has a defined schema and is an open OGC standard implemented in many software programs like QGIS and ArcGIS. The document argues that GeoPackages provide a more sustainable and full-featured replacement for shapefiles.
Just some thoughts about the costs and price of using and developing free and open source software, from the point of view of business, developers and society.
PyWPS is an open source Python implementation of the OGC Web Processing Service standard. It allows users to publish and discover geospatial processes that can be invoked remotely through a RESTful API. Some key points about PyWPS include that it supports all geospatial tools available in Python, uses standards like WFS and WCS, and allows processes to be run asynchronously and in isolated containers. The current version, PyWPS 4.0.0, features improvements like enhanced data validation, multiprocessing support, and an updated codebase to work with newer Python and geospatial technologies.
Testing web mapping applications and services using Python
The document discusses using Python for testing web mapping applications and services, including unit, integration, and system testing. It provides an example of using Selenium to test a web map application, writing tests to interact with elements and assert expected behavior. Implementing tests in Python makes the process easy and allows new programmers to get involved, helping to catch bugs and improve software quality.
Danube hack 2015 - Open (-data, -communities) - Jachym Cepicky
This document discusses open communities and how they work. It provides tips for helping open communities grow, such as getting people together, organizing inclusive events like hackathons and code sprints, connecting people through online forums and mailing lists, and having a common goal for the community to work towards. The document also discusses open source software, open data, open standards, and how these elements can work together in open communities.
The document discusses how the city of Prague opened its spatial data. It describes how Prague worked with non-governmental organizations to hold hackathons to evaluate the data and technologies. Based on these events and user surveys, Prague established a technical and legal framework for opening its data using open standards and services. The city's Institute for Planning and Development then began publishing spatial data on a new open data portal, making datasets available through Atom feeds with Creative Commons licensing. This has increased access to and reuse of Prague's spatial data.
What can open source do for your business?
Or maybe better: what can your business do for open source?
Slides inspired by @Arnulf Christl http://www.slideshare.net/arnulfchristl/open-standards-open-source-open-data
Jachym Cepicky gave a status report on PyWPS. PyWPS is an implementation of the OGC WPS standard written in Python. Version 4 is being rewritten to take advantage of improvements in Python and geospatial libraries since version 1 was created in 2006. Version 4.0 includes validators, a server based on Werkzeug, an IOHandler, and file storage. Version 4.1 is planned to include output via GeoServer, MapServer and QGIS, a REST API, and database/external storage. Progress has been limited by lack of resources for the open source project.
The document discusses the transition of Geosense's mapping portal from OpenLayers 2 to OpenLayers 3. Some key points made:
- Geosense wanted to replace their old OpenLayers 2-based portal which had performance issues with large datasets.
- After attending FOSS4G in 2014 and seeing OpenLayers 3 presentations, Geosense decided to rewrite the portal from scratch using OpenLayers 3.
- The new portal using OpenLayers 3 is faster, handles 10,000 features with 300KB of code, and allows both map-centric and data-centric views of data.
This document provides an overview of PyWPS, an open source Python library for implementing OGC Web Processing Services (WPS). It discusses what PyWPS is and is not, provides code examples of defining and executing a buffer process, and outlines the project's history and future directions. Key points are that PyWPS allows connecting Python and other tools to perform geospatial analyses as WPS processes, is lightweight and modular, and a new version (PyWPS 4) is being developed to improve performance and compatibility.
The document compares the performance of several open source web mapping frameworks - OpenLayers 2, OpenLayers 3, and Leaflet. It conducted tests rendering and panning points, lines, and polygons using these frameworks. OpenLayers 3 API Branch had the best performance, followed by Leaflet 0.8-dev, with OpenLayers 2 Canvas also performing well. The document discusses optimizations like using Canvas instead of DOM for rendering and the potential of WebGL.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Infrastructure Challenges in Scaling RAG with Custom AI models - Zilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Programming Foundation Models with DSPy - Meetup Slides - Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to immerse yourself in a story of interoperability, standards and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations and training. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (hence her nickname deneb_alpha).
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Driving Business Innovation: Latest Generative AI Advancements & Success Story - Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Push it through the wire
1. Push it through the wire
If it does not work, push more
Jáchym Čepický¹
¹Geosense s.r.o. http://geosense.cz
FOSS4G-Europe 2015
2. Hi everybody, my name is Jachym Cepicky, and I have been involved for more than 10 years in the development of open source software for geospatial. I started on the desktop and server side; in recent years I have been more involved in the development of JavaScript-based client applications.
3. I would like to give you a brief overview of one of the problems we struggle with in today's development of web mapping applications: the data. Especially when it is somewhat bigger than usual.
4. You probably know that the development of web-based applications has changed completely over the last couple of years. We no longer write JavaScript code and send it directly to the client as we did at the beginning.
5. OpenLayers 3.7.0
ol.js: 459K
ol-debug.js: 3.6M
ol-optimized.js: < 100K
We compile the code so that it is optimized, runs faster and is transferred faster from the server to the client. In this way, you can compress the OpenLayers JavaScript code from more than a megabyte to several kilobytes.
6. Raster × Vectors
Size × Rendering speed × Transport time × Flexibility × Amount of information
Nowadays, JavaScript interpreters and renderers have no problem rendering really large amounts of data. Raster images usually arrive in the compressed form of JPEG or PNG files, and no rasterization is needed any more to render them.
Also, the amount of information in a raster file can be significantly lower than in vector data. We can say that the amount of information in a raster file is limited by its resolution and the number of colors the user can distinguish, while the amount of information in a vector file is given by the number of vertices describing the phenomenon and the number of attributes per feature. You can also join more tables together and add another level of complexity to the data.
7. From a certain point, from a file-size perspective, raster files are smaller than vector files. It would make a lot of sense to transform your vector data on the server and send it to the client as rasters. Why would you need vector data on the client side anyway?
8. Why vector in the browser anyway?
better user interaction
dynamic aspect of the data
editing in the browser
...
You want to enable user interaction with the data and add a dynamic aspect to it; you want to enable editing in the browser; and so on. But the main reason is:
12. Vectors are cool!
Vector data are just cool; having them on the client side gives you a feeling of power.
13. Compress the communication between server and client
Proper data format
Consider lossy compression of the data on the server
Transfer only the data you need to transfer
These are the suggested steps when dealing with a large amount of vector data: compress the communication between server and client; choose the file format carefully; consider lossy compression of the vector data; and transfer only the data the user needs to see.
17. Compress the server-client communication
mod_gzip
80MB GeoJSON → 7MB
Setting up gzip compression on the server seems like the most basic thing, but you would be surprised how often we forget to switch this option on in the Apache server (or any other). In this way, an 80MB GeoJSON file compresses to nearly 7MB, which is about 10× smaller.
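The same switch is not Apache-specific; as a minimal sketch (an assumption, not from the talk), a Node/Express server can gzip its GeoJSON responses with the `compression` middleware:

```typescript
// Gzip responses on the fly, the same idea as Apache's mod_gzip/mod_deflate.
// Assumes a hypothetical Node/Express setup with the `compression` npm package.
import express from "express";
import compression from "compression";

const app = express();

// Compress every compressible response, including large GeoJSON payloads.
app.use(compression());

// Serve static vector data (the path is illustrative).
app.use("/data", express.static("public/data"));

app.listen(3000, () => console.log("Serving gzipped GeoJSON on :3000"));
```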
18. File format¹
GML vs. GeoJSON
GML ≈ 40MB
GeoJSON ≈ 80MB
File size vs. processing time
¹50 000 polygon features, OSM dataset
Choosing the proper file format might seem an easy choice. On our test data, a file of about 50 000 polygon features, which format do you think will be bigger: GML or GeoJSON? Yes, it is GeoJSON: while GeoJSON needs about 80MB of space, GML needed only 40MB. I do not want to say that, generally speaking, GML is less verbose than GeoJSON; I want to say that choosing the right format can be tricky. Where GeoJSON beats GML in any case is the speed of data processing on the client.
19. If we display compressed JPEG images to users, saying "it's OK, you see the data nearly as they are in reality", why are we not doing the same with vectors?
20. https://www.jasondavies.com/simplify/

Who needs 1 000 vertices per line when, at a certain zoom level, only 3 would be enough?
Vector mapping libraries like OpenLayers or Leaflet do heavy pre-processing of the data before it is sent to the canvas, so that only a reasonable number of vertices is displayed. Is there a way to send pre-processed data from the server?
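Server-side simplification is easy to prototype. A minimal sketch, assuming the shapely package (not mentioned in the talk) is installed:

```python
# Sketch: Douglas-Peucker-style line simplification with shapely.
# The tolerance is in coordinate units; larger values drop more vertices.
from shapely.geometry import LineString

line = LineString([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)])
simplified = line.simplify(tolerance=0.5, preserve_topology=False)

# Fewer vertices survive, but the overall shape is preserved.
print(len(line.coords), "->", len(simplified.coords))
```

The same call, run once on the server per zoom level, produces exactly the kind of pre-processed data the question above asks about.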
21. TopoJSON²
Data size progress
GeoJSON 80MB → TopoJSON 3.3MB → zipped 500KB
² http://github.com/mbostock/topojson (50 000 polygon features, OSM dataset)

The answer is the TopoJSON format. While the original GeoJSON file had 80MB, the newly created TopoJSON file has about 3.3MB. If you zip it, it shrinks to about 500KB. How is this possible?
22. TopoJSON
http://github.com/mbostock/topojson
Extension of GeoJSON
Introduces a new type, "Topology"
Coordinates stored in an arcs array
Arcs are similar to line strings; several arcs together form a geometry
Lat & Long → relative coordinates, ∆ values

The TopoJSON format was introduced by Mike Bostock, the author of many useful tools addressing issues web developers face when dealing with bigger amounts of data. You have probably heard about the D3.js library for drawing graphs, but there is more.
TopoJSON is an extension of GeoJSON. It introduces a new type, "Topology", that contains GeoJSON objects. A topology has an objects map which indexes geometry objects by name. These are standard GeoJSON objects, such as polygons, multi-polygons and geometry collections. However, the coordinates for these geometries are stored in the topology's arcs array, rather than on each object separately.
An arc is a sequence of points, similar to a line string; the arcs are stitched together to form the geometry.
Lastly, the topology has a transform which specifies how to convert the delta-encoded integer coordinates to their native values (such as latitude and longitude).
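To make the "relative coordinates, ∆ values" idea concrete, here is an illustrative Python sketch of the quantize-then-delta-encode step. It is a simplification of what the format specifies, not TopoJSON's actual implementation:

```python
# Illustrative sketch of TopoJSON-style quantization + delta encoding.
# The real format is specified at http://github.com/mbostock/topojson.

def quantize(coords, n=10000):
    """Snap lon/lat floats onto an n x n integer grid (lossy)."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    x0, y0 = min(xs), min(ys)
    kx = (max(xs) - x0) / (n - 1) or 1.0   # grid cell size in x
    ky = (max(ys) - y0) / (n - 1) or 1.0   # grid cell size in y
    return [(round((x - x0) / kx), round((y - y0) / ky)) for x, y in coords]

def delta_encode(points):
    """Store the first point absolutely, every other point as a delta."""
    deltas = [points[0]]
    for (xa, ya), (xb, yb) in zip(points, points[1:]):
        deltas.append((xb - xa, yb - ya))
    return deltas

arc = [(14.42076, 50.08804), (14.42110, 50.08810), (14.42155, 50.08790)]
print(delta_encode(quantize(arc)))
# Small repeated integers like these compress far better than
# long, mostly unique floating-point digit strings.
```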
23. Does not support big integers – a problem with big IDs³
³ https://github.com/mbostock/topojson/wiki

TopoJSON uses, among other techniques, lossy compression, simplifying the arcs to a certain scale. If you want to go into more detail, there is a nice presentation by Mike Bostock explaining the algorithm at https://github.com/mbostock/topojson/wiki.
One issue I see with TopoJSON is that it does not support big integer attributes. I wanted to apply it to Czech cadastre data, which use big integers as feature IDs; unluckily, this was not possible.
24. Transfer only the data you need to transfer

Always keep in mind: you do not want to transfer all the data. You just want to make the user believe they see all the data.
25.
We are all used to transferring only the amount of raster data needed for the current view: either big images using WMS, or smaller tiles using one of the proprietary or open standards like WMTS. Why are we not doing the same with vectors?
26. Tiled vectors
http://openlayers.org/en/v3.7.0/examples/tile-vector.html →
http://jsfiddle.net/og7m21t7/

There are probably two reasons for that. First, vector data were always somehow smaller than rasters, but this is not necessarily the case any more.
Second, it is much more complicated, especially if you consider all the edge cases, like polygon data with holes in them, spread over 4 tiles.
28. TileStache
http://tilestache.org

Another option is to create a vector cache using the TileStache server. The tiles can either be cut at the edges, so one polygon is split into 2 or 4 if it does not fit into one tile, or you can use e.g. centroids which are "within" the pre-defined tile and assign only those features to the tile.
TileStache can produce GeoJSON tiles as well as TopoJSON tiles.
29. http://jsfiddle.net/og7m21t7/

With tiled polygon data you face another problem: polygons are usually spread across several tiles. You can either give up on displaying polygon boundaries, which is possible e.g. for big water bodies; but if you are dealing with e.g. parcel data, a different approach has to be taken.
You can read only the polygons whose centroid fits into the given tile, but then long streets are not shown.
The link under the images leads to the OpenLayers tiled vector example loaded into the jsfiddle environment, where you can play with the style settings and see the problem. I hope to show it to you at the end of the session.
30. https://goo.gl/oSutyN

Here is a screenshot from a video, which you can reach at the link below, showing the possibility of vector tiles that do not contain cropped vector features. In this approach, only the features whose centroids fit into the tile are downloaded.
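A hedged sketch of that centroid rule, assuming shapely and GeoJSON-like feature dicts; the function name and bounding box are illustrative, not from the talk or TileStache:

```python
# Sketch: assign each feature to exactly one tile by testing whether
# its centroid falls inside the tile's bounding box, so polygons are
# never cropped at tile edges.
from shapely.geometry import box, shape

def features_for_tile(features, minx, miny, maxx, maxy):
    """Return only the features whose centroid lies within the tile."""
    tile = box(minx, miny, maxx, maxy)
    return [
        f for f in features
        if shape(f["geometry"]).centroid.within(tile)
    ]

# Example: one tile covering part of Prague (illustrative bbox).
# tile_features = features_for_tile(all_features, 14.0, 50.0, 14.5, 50.5)
```

The trade-off described above is visible directly in the predicate: a long, thin feature is served by whichever single tile contains its centroid, so it disappears from neighbouring tiles.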
31.
As a result of all this effort, the user will see a similar loading progression as they already know from raster data: the vectors will be loaded in tiles.
To conclude:
32. Conclusion
Do I need to use all this compression, file format and tiling machinery every time I want to display a single vector point or a GPX log file?
34. Conclusion
Use it when you need it, when things seem slow.

These approaches are for the situations when your application's loading seems too slow and you are trying to find the bottlenecks: either it is just the download speed, or it seems to be the rendering time as well.
35. Conclusion
With tiling I get faster vectors; what do I lose?

Caching and tiling of vector data give you more speed.
36. Conclusion
Topicality and simplicity – you cannot have up-to-date pre-cached data.
data speed × data size × data topicality

But you lose the simplicity and topicality of the data. You cannot have fast, up-to-date and big data at the same time.