As a Node.js developer, chances are you’ve had to tackle the problem of accepting filter criteria from user input in your web APIs. This seemingly mundane problem can quickly become quite complicated. How do you ingest this input safely? How do you ensure it does not become a leaky abstraction? How do you handle complex filter graphs while minimizing the amount of boilerplate you have to write and maintain?
In this talk, we’ll explore several options for accepting filter criteria in web APIs, discuss their pros and cons, and present a new tool for solving this issue.
6. Agenda
• Web API Filtering
• Common Approaches
• Challenges
• A New Tool
7. Introducing spleen
A dynamic filter expression dialect, library,
and toolset.
(...because finding available names on NPM is an exercise in futility)
8. Agenda
• Web API Filtering
• Common Approaches
• Challenges
• A New Tool
15. Common Approaches
Query String Parameters with Custom Operators
GET api.somehrms.com/v1/employees?managerId=eq:2
Equal To
GET api.somehrms.com/v1/employees?title=neq:Physicist
Not Equal To
GET api.somehrms.com/v1/employees?salary=gt:30000
Greater Than
GET api.somehrms.com/v1/employees?age=lte:40
Less Than Equal To
GET api.somehrms.com/v1/employees?name=like:E*
Like Pattern
16. Common Approaches
Query String Parameters with Custom Operators
What about conjunctions?
managerId == 2 AND salary >= 30000 OR name like “E*”
17. Common Approaches
Query String Parameters with Custom Operators
GET api.somehrms.com/v1/employees
?managerId=eq:2
&salary=and:gte:30000
&name=or:like:E*
18. Common Approaches
Query String Parameters with Custom Operators
GET api.somehrms.com/v1/employees
?managerId=eq:2
&salary=and:gte:30000
&name=or:like:E*
managerId == 2 AND salary >= 30000 OR name like “E*”
salary >= 30000 AND managerId == 2 OR name like “E*”
name like “E*” OR managerId == 2 AND salary >= 30000
19. Common Approaches
Query String Parameters with Custom Operators
GET api.somehrms.com/v1/employees
?managerId=eq:2
&salary=and:gte:30000
&name=or:like:E*
managerId == 2 AND salary >= 30000 OR name like “E*”
salary >= 30000 AND managerId == 2 OR name like “E*”
name like “E*” OR managerId == 2 AND salary >= 30000
managerId == 2 OR name like “E*” AND salary >= 30000
20. Common Approaches
Query String Parameter with SQL Query
GET api.somehrms.com/v1/employees
?filter=managerId=2+AND+salary>=30000+OR+name+like+"E%25"
21. Common Approaches
Query String Parameter with SQL Query
GET api.somehrms.com/v1/employees
?filter=managerId=2+AND+salary>=30000+OR+name+like+"E*"
• Leaks implementation details
• Unsafe
23. Common Approaches
Off-the-Shelf Architectures
• GraphQL
• Falcor
• OData
----------------------------------------------------------------------------------------
• A LOT more than just filtering collections!
24. Common Approaches
Off-the-Shelf Architectures
• GraphQL
• Falcor
• OData
----------------------------------------------------------------------------------------
• A LOT more than just filtering collections!
• Legacy systems?
25. Common Approaches
Off-the-Shelf Architectures
• GraphQL
• Falcor
• OData
----------------------------------------------------------------------------------------
• A LOT more than just filtering collections!
• Legacy systems?
• Opinionated
26. Common Approaches
Off-the-Shelf Architectures
• GraphQL
• Falcor
• OData
----------------------------------------------------------------------------------------
• A LOT more than just filtering collections!
• Legacy systems?
• Opinionated
• Non-trivial to implement
32. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
33. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
• Idiomatic
34. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
• Idiomatic
• Opinions
35. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
• Idiomatic
• Opinions
• Validation
36. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
• Idiomatic
• Opinions
• Validation
• Vector for SQL injection attack?
37. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
• Idiomatic
• Opinions
• Validation
• Vector for SQL injection attack?
• Vector for DoS’ing the database?
Lots of expensive comparisons against non-indexed fields
Inefficient ordering of clauses
38. Challenges
• Robustness
Different comparison operators
Conjunctive (AND) and disjunctive (OR) logical operators
Logical groups
• Proper abstraction
• Idiomatic
• Opinions
• Validation
• Vector for SQL injection attack?
• Vector for DoS’ing the database?
Lots of expensive comparisons against non-indexed fields
Inefficient ordering of clauses
• Complexity
39. Agenda
• Web API Filtering
• Common Approaches
• Challenges
• A New Tool
40. Introducing spleen
A dynamic filter expression dialect, library,
and toolset.
(...because finding available names on NPM is an exercise in futility)
41. Introducing spleen
A dynamic filter expression dialect, library,
and toolset.
(...because finding available names on NPM is an exercise in futility)
43. Goals for the spleen Dialect
• Human readable
• Terse
44. Goals for the spleen Dialect
• Human readable
• Terse
• Reference complex structures (nested JSON objects)
45. Goals for the spleen Dialect
• Human readable
• Terse
• Reference complex structures (nested JSON objects)
• Support for a variety of common comparisons
46. Goals for the spleen Dialect
• Human readable
• Terse
• Reference complex structures (nested JSON objects)
• Support for a variety of common comparisons
• Conjunctive and disjunctive logical operators
47. Goals for the spleen Dialect
• Human readable
• Terse
• Reference complex structures (nested JSON objects)
• Support for a variety of common comparisons
• Conjunctive and disjunctive logical operators
• Logical grouping
48. Goals for the spleen Dialect
• Human readable
• Terse
• Reference complex structures (nested JSON objects)
• Support for a variety of common comparisons
• Conjunctive and disjunctive logical operators
• Logical grouping
• Works in a query string parameter
49. The spleen Dialect
Field references are JSON pointers (RFC 6901)
/foo/bar/0
{
  "foo": {
    "bar": ["a", "b", "c"]
  }
}
Result: "a"
50. The spleen Dialect
Comparison operators:
eq: equal to
neq: not equal to
gt: greater than
gte: greater than or equal to
lt: less than
lte: less than or equal to
between: value is greater than or equal to x and less than or equal to y
nbetween: value is less than x or greater than y
in: value is in an array of values
nin: value is not in an array of values
like: string value is like the given pattern
nlike: string value is not like the given pattern
51. The spleen Dialect
Logical operators:
and: conjunctive logical operator
or: disjunctive logical operator
(: open logical group
): close logical group
52. The spleen Dialect Examples
/foo eq 42
/foo/bar gt 42
/foo eq 42 and /bar/baz between 0,500
/foo eq 42
and (/bar/baz nbetween 0,500 or /qux like "_abc*")
and (/quux in [1,2.3] or /corge gte 312)
53. Introducing spleen
A dynamic filter expression dialect, library,
and toolset.
(...because finding available names on NPM is an exercise in futility)
55. The spleen Library
• Not a framework.
• Available on NPM (npm install spleen -S)
56. The spleen Library
• Not a framework.
• Available on NPM (npm install spleen -S)
• Parses spleen expressions
57. The spleen Library
• Not a framework.
• Available on NPM (npm install spleen -S)
• Parses spleen expressions
• Builds spleen expressions
58. The spleen Library
• Not a framework.
• Available on NPM (npm install spleen -S)
• Parses spleen expressions
• Builds spleen expressions
• Instances of spleen.Filter serve as an abstraction
59. The spleen Library
• Not a framework.
• Available on NPM (npm install spleen -S)
• Parses spleen expressions
• Builds spleen expressions
• Instances of spleen.Filter serve as an abstraction
• Match objects
60. The spleen Library
• Not a framework.
• Available on NPM (npm install spleen -S)
• Parses spleen expressions
• Builds spleen expressions
• Instances of spleen.Filter serve as an abstraction
• Match objects
• Prioritize filter clauses
64. Introducing spleen
A dynamic filter expression dialect, library,
and toolset.
(...because finding available names on NPM is an exercise in futility)
70. Database Query Conversion Plugins
• Whitelist or blacklist queryable fields
• Require fields to be present in the filter
71. Database Query Conversion Plugins
• Whitelist or blacklist queryable fields
• Require fields to be present in the filter
• Specify an identifier
72. Database Query Conversion Plugins
• Whitelist or blacklist queryable fields
• Require fields to be present in the filter
• Specify an identifier
• Parameterize (prevent SQL injection)
73. Database Query Conversion Plugins
• Whitelist or blacklist queryable fields
• Require fields to be present in the filter
• Specify an identifier
• Parameterize (prevent SQL injection)
• Map fields in a JSON object to columns in a database table
I’m here to talk to you about a fairly common problem that we all, as Node.js engineers, have likely had to tackle at some point. And that is: how do we accept filter criteria in web API endpoints?
We’ll examine some common approaches to these challenges, and analyze their pros and cons.
While this sounds like a fairly mundane problem, there are some potential technical and security-related challenges involved.
And this will segue into a discussion on a tool I built that will hopefully help you tackle this problem. It’s a tool I call...
So, let’s talk briefly about what I mean by web API filtering, just so we’re on the same page.
Say you have a REST API with a resource called “employees.” In REST, the endpoint shown here functions as a collection of employees. As you see here, we have a paged result of 10 employees from a total of 130,042.
Now let’s say we need to filter that result, to work with a particular subset.
Let’s say we want to get all of the people who directly report to General Leslie Groves. So, we need to filter on managerId=1.
A typical approach to this use case would be to add support for a query string parameter that allows us to filter on managerId.
Okay, so let’s walk through a couple of approaches.
We’ve already seen one approach, and I would conjecture it is the most common. That is to simply add support for filtering on various data points via query string parameters.
We can just continue adding support quite easily this way.
Now let’s expand upon this a bit, and say we want to perform a comparison that is not an “equals” operation. Query strings don’t have built-in support for different operators. So, we’ll have to come up with something ourselves.
One way of solving this is to require that all filter parameters specify a comparison operator. As seen here, we’re prefixing our filter value with “neq,” and then delimiting the operator and value with a colon.
Internally, we’d have to write some code to parse out the operator from the filter value, and use this information to construct our database queries in the persistence layer of our application.
And we could easily use this pattern to support a variety of operators.
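To make this concrete, here is a minimal sketch of that parsing in an Express handler. The route, operator set, and helper names are assumptions for illustration, not code from the talk:

const express = require('express');

const app = express();

// Operators our hypothetical API accepts in "op:value" filter parameters.
const OPERATORS = new Set(['eq', 'neq', 'gt', 'gte', 'lt', 'lte', 'like']);

// Split a raw value like "neq:Physicist" into its operator and value.
function parseFilterParam(raw) {
  const str = String(raw);
  const idx = str.indexOf(':');
  if (idx === -1) return { operator: 'eq', value: str }; // default to equality
  const operator = str.slice(0, idx);
  if (!OPERATORS.has(operator)) throw new Error('Unknown operator: ' + operator);
  return { operator, value: str.slice(idx + 1) };
}

app.get('/v1/employees', (req, res) => {
  // Each query string parameter becomes one filter clause.
  const clauses = Object.entries(req.query).map(([field, raw]) =>
    Object.assign({ field }, parseFilterParam(raw))
  );
  // ...hand `clauses` to the persistence layer to build a parameterized query.
  res.json({ clauses });
});

app.listen(3000);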
Say the complexity of our requirements is expanding, and we need to support both conjunctive and disjunctive Boolean logic. In other words, a mix of AND and OR operators.
This is where our approach up to this point begins to fall over. Eventually, our code has to reassemble these clauses into something usable by a database.
And we can’t guarantee order. The examples here should work.
But since we cannot guarantee order, we will inevitably run into a situation where reassembling clauses results in a statement that is logically different from what was intended.
One method I’ve seen developers try is to simply take something that looks like a SQL WHERE clause, or even a MongoDB find statement, in a “filter” query parameter, and just pass it on through to the persistence layer of their application.
PLEASE PLEASE PLEASE DO NOT DO THIS!
It leaks the underlying database technology you’re using. So, now you’ve coupled API clients to your database technology.
And, obviously, it’s extremely difficult to secure.
What seems to be in vogue these days is to utilize an off-the-shelf architecture like GraphQL, Falcor, or, if you’re feeling especially masochistic, OData.
Personally, I’ve really enjoyed working with GraphQL and Falcor, and I encourage you to explore these concepts.
That said, there are some things to consider before you jump on the GraphQL bandwagon...
These are, on their own, API design concepts. They include tools for:
Defining your model
Allowing clients to create views in an ad hoc manner
Batch mutations
Etc
If you have an existing system that you’re maintaining and expanding upon, then introducing something like Falcor or GraphQL would probably require a paradigm shift in your architecture.
And that’s because these concepts are opinionated. And those opinions can have deeper ramifications on the underlying system design and technology choices.
And depending on your technology choices, these things can be fairly non-trivial to implement.
Just to be clear, my intent is not to discourage you from using these technologies. These are merely points of consideration. If you find Falcor or GraphQL or, even, OData solves your problems then awesome.
For those of us for whom these off-the-shelf tools are not an option, we continue our journey.
So, to solve this problem, we need to develop a somewhat more sophisticated structure with which to serialize our filters. One way to do this is to represent our filters as JSON.
In this example, we’re creating an array of objects, each of which represents a clause in the filter. Every clause can then specify a conjunction operator.
This gives us a structure that allows us to guarantee order such that we can assemble a database query that logically matches the intention of our API user.
It is also worth noting that at this point our code is probably becoming complex enough to justify breaking this logic off into a different code path. Here, we are creating a sub-resource of “employees” called “searches.” So, the REST semantic would be to POST to this resource.
And we can begin to expand on this structure, and do things like logical grouping.
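To make that concrete, here is one possible shape for such a JSON filter, including a logical group. The property names are assumptions for illustration; a client would POST a body like this to the “searches” sub-resource:

{
  "filter": [
    { "field": "managerId", "op": "eq", "value": 2 },
    {
      "conjunction": "and",
      "group": [
        { "field": "salary", "op": "gte", "value": 30000 },
        { "conjunction": "or", "field": "name", "op": "like", "value": "E*" }
      ]
    }
  ]
}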
This is starting to get complicated.
We’ve covered a number of different options, and they all require varying levels of effort to implement. We’ve talked about a few issues that may come up, so let’s review them, and expand a bit on our list.
Your solution, obviously, has to be robust enough to suit the functional requirements of your system.
What kind of comparisons do you need in your filter?
Do you need support for conjunctive and disjunctive Boolean logic, or a mix of the two?
Do you need to be able to logically group clauses together?
You don’t want to leak the technologies, such as the database you’re using, to the client.
This is something that can be said about virtually any system you design, but consistency is a good thing. It makes it easier for users to learn your system, and conjecture how something works.
For example, if you’re going to implement things like sub-resources for “searches,” then do so across the board. You do not want to leave your users guessing whether or not they should be POST’ing searches, or GET’ing from a collection with a bunch of query parameters.
What is the impact of your solution on the underlying architecture?
If, for example, your system is based on event sourcing with CQRS, and is composed of dozens of microservices pulling from disparate databases using a multitude of technologies, then GraphQL may not be a practical solution.
Any solution you implement will require input sanitizing. In the event you have a complex dialect or JSON graph, this can become non-trivial.
This is an obvious one, but, amazingly, is still a problem for a lot of companies.
Personally, I like the idea of having a library that handles filtering like this for me, as it reduces the chance of developer mistakes resulting in security holes.
This one is less obvious, and is even a potential issue with GraphQL, Falcor, and OData.
Let’s say a client application supplies your API with a filter that is doing something computationally expensive, such as a LIKE comparison on a field on a table with a million rows. Then let’s say that field is not indexed. All of a sudden, you’re receiving several hundred of these queries per second, your database’s CPU spikes, and everything grinds to a halt.
You have some options to fix this. You could...
Index that field.
Not allow non-indexed fields to be queried.
Or you could require certain indexed fields to appear in any filter, to minimize the resources that filters on non-indexed fields consume.
Option “c” may only get you so far. Some database engines rely on the order of clauses in a WHERE statement to understand what indexes to use and when. So, if you have that expensive LIKE comparison on a non-indexed field appearing before the simple equality comparison on an indexed field, then you haven’t solved the problem.
As you can see, depending on your needs, complexity can start to explode.
For example, if you’re reordering clauses in a user-provided filter statement based on a priority, this can become quite complicated when you also have to support conjunctive and disjunctive logical operators.
That’s a lot of complicated code to write. There are a lot of edge cases, and that means lots of unit tests.
So, where does that leave us? We’ve discussed some options, but we may be stuck having to write and maintain a great deal of highly-complicated code.
And that was the motivation for writing...
Perhaps first and foremost, spleen is a dialect for creating filter expressions.
And...
Big JSON graphs are neither human readable nor terse.
If you have a field that is an object with its own set of fields, or a field that is an array, we want to be sure that the way we reference fields is flexible.
AND and OR
The AND operator is typically evaluated before OR, so if we need to evaluate OR before AND, then we can group statements together.
Little to no escaping is required.
Uses JSON pointers. Here we’re referencing the first element of the array on the field “bar,” which is nested in an object that is the value of the field “foo.”
JSON has become the preferred data serialization format for the web. So, the use of JSON pointers not only gives us flexibility, it provides another layer of abstraction in our filter expressions.
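For intuition, resolving a pointer like /foo/bar/0 takes only a few lines of JavaScript. This hand-rolled resolver is a sketch for illustration, and is not part of spleen:

// Resolve an RFC 6901 JSON pointer against a target object.
function resolvePointer(target, pointer) {
  return pointer
    .split('/')
    .slice(1) // drop the empty segment before the leading "/"
    .map((seg) => seg.replace(/~1/g, '/').replace(/~0/g, '~')) // RFC 6901 escapes
    .reduce((obj, key) => (obj == null ? undefined : obj[key]), target);
}

resolvePointer({ foo: { bar: ['a', 'b', 'c'] } }, '/foo/bar/0'); // => 'a'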
Supports the common operators, and some of the more robust operators like range comparisons, array searching, and pattern matching.
Pretty straightforward.
The project provides a library for working with spleen filter expressions.
Un-opinionated.
Method for parsing a spleen expression into an instance of spleen’s Filter class.
Or build Filter instances directly with no parsing.
Intended to be the transport between the various layers in your application.
Match method.
Provides a method to reorganize clauses in an expression based on a given ordered list of fields. This method is pretty intelligent, and will preserve the logical structure of the filter expression.
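A rough sketch of how that might look; the method name and return shape here are assumptions inferred from the talk’s feature list, so check the spleen documentation for the exact API:

const spleen = require('spleen');

// Assumption: parse() returns the Filter directly; the real API may instead
// wrap it in a result object.
const filter = spleen.parse('/name like "E*" and /managerId eq 2');

// Ask spleen to move clauses on indexed fields toward the front of the
// expression while preserving its logical meaning.
const prioritized = filter.prioritize(['/managerId', '/salary']);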
Let’s dive into some example code. Here we’re taking a spleen expression as a string, and parsing it into an instance of the Filter class.
We can now take advantage of the Filter class’s features. Here we’re using the filter to match against an object.
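The code shown on these slides isn’t preserved in this text, so the following is a reconstruction of the parse-and-match flow; treat the exact signatures and return shapes as assumptions and verify them against the spleen README:

const spleen = require('spleen');

// Parse a spleen expression string into a Filter instance.
// Assumption: parse() returns the Filter directly.
const filter = spleen.parse('/foo eq 42 and /bar/baz lte 100');

// Match a plain object against the parsed filter.
const target = { foo: 42, bar: { baz: 7 } };
console.log(filter.match(target)); // truthy when the object satisfies the filter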
We could also pass the Filter instance to different layers in our app for conversion into something else. More on that in a bit.
This is preferable over parsing in many use cases. It’s more performant, and provides a method for application code to easily and dynamically construct Filter instances.
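A builder-style construction might look like the following; the Clause and Filter names mirror spleen’s documented builders, but verify them against the README before relying on them:

const { Clause, Filter } = require('spleen');

// Build the equivalent of "/managerId eq 2 and /salary gte 30000"
// without parsing a string.
const filter = Filter
  .where(Clause.target('/managerId').eq.literal(2))
  .and(Clause.target('/salary').gte.literal(30000));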
Spleen is also a set of tools.
And that means plugins.
We have our filter instance, so what can we do with it? We’ve already seen we can use it to programmatically match JSON objects.
And we know this is an abstraction that can neatly be passed between layers.
The typical use case is to pass this down into your persistence layer, and convert it into something the database understands.
The spleen ecosystem currently only fully supports N1QL (Couchbase queries), but a number of other database plugins are in the works. First up is PostgreSQL, which will be published towards the end of next week. MySQL and MongoDB will immediately follow.
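For context, a conversion call might look roughly like this. The spleen-n1ql module is the plugin mentioned here, but the function and option names below are illustrative assumptions, not its confirmed API:

const spleen = require('spleen');
const n1ql = require('spleen-n1ql'); // hypothetical usage; check the plugin's docs

const filter = spleen.parse('/managerId eq 2 and /salary gte 30000');

const query = n1ql.convert(filter, {
  allow: ['/managerId', '/salary'], // whitelist of queryable fields
  require: ['/managerId'],          // fields that must appear in the filter
  identifier: 'employees',          // identifier used to qualify fields
});

// query.value  -> the generated N1QL snippet
// query.params -> bound parameters (guards against injection)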
Also in the works is support for the Joi validation library. The idea here is to validate that filter expressions match the intended resource’s schema. For example, if someone provides a clause referencing “foo,” and “foo” is a string, but the user provided a Boolean, you can validate that and respond back to the client with a 400.
Some notes on the functionality you’ll find with all database plugins.
For example, if different fields are coming from different tables via a JOIN, you can specify which identifier to use for what field in the resulting SQL.
Some very lightweight, non-obtrusive ORM functionality.
Very robust, with support for conjunctive and disjunctive logical operators, a wide variety of comparison operators, complex data structures, and so on.
Very easy to implement. Less code you have to write, debug, and maintain.
Prevents SQL injection attacks, and DoS’ing via poorly composed filter expressions.
This is an active, open source project. If you’d like to contribute, please reach out to me. There is a lot of work to be done, and I’m always looking for volunteers to help expand functionality and port spleen to other languages.