This document discusses using AllegroCache with a multi-threaded web server application. It describes the challenges of using AllegroCache in a multi-threaded environment where multiple threads need to access the database simultaneously. It then provides code for a simple password database application using AllegroCache that can be accessed by multiple threads of a web server safely. The code defines functions for creating and opening the password database, getting and setting password values for a given user, and closing the database.
This document provides instructions for configuring a mail server on Red Hat/CentOS 6 that includes Postfix, Dovecot, SquirrelMail, and virus scanning/spam tagging. The 16-step process covers installing and configuring the software components, creating SSL certificates, enabling user authentication, and testing mail delivery to POP3/IMAP clients and via a web interface. Configuration files such as main.cf and dovecot.conf are edited to enable the required options.
This document summarizes a talk given at ApacheCon 2015 about replacing Squid with ATS (Apache Traffic Server) as the proxy server at Yahoo. It discusses the history of using Squid at Yahoo, limitations with Squid that prompted the switch to ATS, key differences in configuration between the two systems, examples of forwarding and reverse proxy use cases, and learnings around managing open source projects and migration testing.
A technical overview of PowerShell. See http://blogs.msdn.com/allandcp/archive/2009/03/11/powershell-to-the-people-the-aftermath.aspx for more background and resources.
The document describes Yahoo's failsafe mechanism for its homepage using Apache Storm and Apache Traffic Server. The key points are:
1. The failsafe architecture uses AWS components like EC2, ELB, S3 and autoscaling to serve traffic from failsafe servers if the primary servers fail.
2. Apache Traffic Server is used as a caching proxy between the user and origin servers. The "Escalate" plugin in ATS fetches content from failsafe servers if the origin server response is not good.
3. Apache Storm Crawler crawls content for different devices and maps URLs to the failsafe domain for storage in S3 with query parameters in the path. This provides more relevant failsafe content.
The document discusses Ajax, which uses a combination of technologies like HTML, JavaScript, XML and CSS to retrieve data from a server asynchronously in the background without interfering with the display and behavior of the existing page. It explains what Ajax is, the technologies used, how it works using XMLHttpRequest object, and provides an example of creating an Ajax request and handling responses from the server. It also touches upon drawbacks and browser compatibility issues with Ajax.
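The asynchronous flow the summary describes can be sketched in a few lines. The `/api/user` endpoint and the shape of the JSON it returns are hypothetical examples, and `parseUser` is split out from the browser wiring so the response handling can be exercised on its own.

```javascript
// Pure response handler: turns a completed XHR status/body into data or an error.
function parseUser(status, responseText) {
  if (status !== 200) {
    return { error: "request failed with status " + status };
  }
  return JSON.parse(responseText);
}

// Browser-side wiring: create the request, send it asynchronously, and hand
// the completed response to a callback without blocking the page.
function fetchUser(onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/user", true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) { // 4 = DONE: response fully received
      onDone(parseUser(xhr.status, xhr.responseText));
    }
  };
  xhr.send();
}
```

Because the request is asynchronous, the page remains interactive while the server responds; only the callback runs when data arrives.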
We browse the Internet. We host our applications on a server or a cloud that is hooked up with a nice domain name. That’s all there is to know about DNS, right? This talk is a refresher about how DNS works. How we can use it and how it can affect availability of our applications. How we can use it as a means of configuring our application components. How this old geezer protocol is a resilient, distributed system that is used by every Internet user in the world. How we can use it for things that it wasn’t built for. Come join me on this journey through the innards of the web!
The document provides configuration details for setting up a Capistrano deployment with multistage environments and recipes for common tasks like installing gems, configuring databases, and integrating with Thinking Sphinx. It includes base configuration definitions, recipes for setting up Thinking Sphinx indexes and configuration files, and instructions for packaging the Capistrano configurations as a gem.
This document describes how to set up monitoring for MySQL databases using Prometheus and Grafana. It includes instructions for installing and configuring Prometheus and Alertmanager on a monitoring server to scrape metrics from node_exporter and mysql_exporter. Ansible playbooks are provided to automatically install the exporters and configure Prometheus. Finally, steps are outlined for creating Grafana dashboards to visualize the metrics and monitor MySQL performance.
This document discusses the steps to install and configure the Apache web server on a Linux system. It includes downloading and extracting the Apache source files, configuring the files with the ./configure command, building and installing Apache with make and make install, customizing the httpd.conf configuration file, and testing the Apache installation by accessing http://localhost in a web browser. Key configuration directives like AccessConfig, AddDefaultCharset, AllowOverride, and DefaultType are also briefly described.
The document discusses configuring the Apache web server. It covers topics like:
- The Apache configuration file httpd.conf and options within it like DocumentRoot
- Using .htaccess files to override httpd.conf settings for specific directories
- Configuring password authentication for directories using htpasswd
- Setting up virtual hosts to serve different websites from the same server using different IP addresses
How to Schedule Jobs in a Cloudera Cluster Without Oozie - Tiago Simões
This presentation is for everyone looking for an Oozie alternative for scheduling jobs in a secured Cloudera cluster. With it, you will be able to add and configure the Airflow service and manage it within Cloudera Manager.
Hammr Project Update: Machine Images and Docker Containers for your Cloud, OW... - OW2
Hammr is an OW2 open source, command-line tool for creating consistent and repeatable machine images for different cloud or virtual environments, or migrating live systems from one environment to another. Designed for cloud era environments, where agility and automation are key, hammr helps organizations automate the creation of machine images for hybrid environments. This presentation will focus on the hybrid capabilities of hammr for any virtual or cloud target environment. It will also include a focus on DevOps and Docker integration, and show how hammr can be used to quickly build and run Docker images, helping accelerate development and test processes among other benefits. Finally, we will present the latest hammr features, including the ability for cloud providers to customize target platforms and expose their own IaaS infrastructure as top-level branded objects accessible via hammr, thus easing the path from user images to their cloud infrastructure.
This document proposes using RPM packages to deploy Java applications to Red Hat Linux systems in a more automated and standardized way. Currently, deployment is a manual multi-step process that is slow, error-prone, and requires detailed application knowledge. The proposal suggests using Maven and Jenkins to build Java applications into RPM packages. These packages can then be installed, upgraded, and rolled back easily using common Linux tools like YUM. This approach simplifies deployment, improves speed, enables easy auditing of versions, and allows for faster rollbacks compared to the current process.
How to Create a Secured Cloudera Cluster - Tiago Simões
This presentation is for everyone who is curious about Big Data and wants the know-how to start learning. With it, you will be able to quickly create a Kerberos-secured Cloudera cluster.
IT Infrastructure Through the Public Network: Challenges and Solutions - Martin Jackson
Identifying the challenges that companies face when they wish to adopt Infrastructure as a Service offerings like those from Amazon and Rackspace, and possible solutions to those problems. This presentation seeks to provide insight and possible solutions, covering the areas of security, availability, cloud standards, interoperability, vendor lock-in, and performance management.
Why and How PowerShell Will Rule the Command Line - Barcamp LA 4 - Ilya Haykinson
PowerShell is a command shell for Windows that treats commands as objects that interact through pipelines. It provides a fully fledged programming language where commands manipulate objects and share a common naming convention. PowerShell holds that commands should do one thing well and interact through a consistent environment, addressing the fragile text parsing between traditional command-line programs.
The document provides instructions for setting up a Bacula backup system. It discusses installing Bacula and its components, configuring the director, storage daemon, and file daemon. It describes setting passwords, creating a backup pool and schedule, and configuring a client. Specific configuration files are edited to configure the director, storage daemon, file daemon, and set addresses, passwords and other settings. Commands are provided to start services, run backups and restores, and check configurations for errors. The goal is to have a working Bacula system that can back up and restore a client on a scheduled basis.
Given at the ZendCon 2007 unconference. My first "talk" ever given. Had a live demo. Note that although some of the information here is still pertinent, a lot is now outdated and most of the links no longer work. This is here for archival reasons.
The document discusses Rack, a Ruby web server interface. Rack provides a standard interface between web servers and Ruby frameworks/applications. If a Ruby object has a "call" method that takes an environment hash and returns a status, headers, and response body array, it can be run by any Rack-compatible web server. This allows Ruby web applications to be run using many different servers without code changes. The document provides examples of Rack applications and integrating them with servers like Thin and Mongrel.
This document discusses creating REST APIs with Express, Node.js, and MySQL. It provides an overview of Express and its advantages. It then demonstrates how to set up a Node.js and MySQL environment, create an Express server, and implement API routes to GET, POST, DELETE, and PUT data from a MySQL database table. Code examples are provided to retrieve all todos, a single todo by ID, search todos by keyword, add a new todo, delete a todo, and update an existing todo.
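The CRUD operations described above can be sketched without any dependencies. In Express, each function below would be the body of an `app.get`/`app.post`/`app.put`/`app.delete` route handler; here an in-memory array stands in for the MySQL `todos` table, and the field names are hypothetical.

```javascript
// In-memory stand-in for the MySQL todos table.
const todos = [];
let nextId = 1;

function addTodo(title) {                       // POST /todos
  const todo = { id: nextId++, title: title };
  todos.push(todo);
  return todo;
}

function getTodo(id) {                          // GET /todos/:id
  return todos.find(t => t.id === id) || null;
}

function searchTodos(keyword) {                 // GET /todos?q=keyword
  return todos.filter(t => t.title.includes(keyword));
}

function updateTodo(id, title) {                // PUT /todos/:id
  const todo = getTodo(id);
  if (todo) todo.title = title;
  return todo;
}

function deleteTodo(id) {                       // DELETE /todos/:id
  const i = todos.findIndex(t => t.id === id);
  return i >= 0 ? todos.splice(i, 1)[0] : null;
}
```

In a real setup each function would issue the corresponding SQL (`INSERT`, `SELECT`, `UPDATE`, `DELETE`) against the MySQL table instead of mutating an array.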
In this tutorial we will build a "nameservice", a mapping of strings to other strings (similar to Namecoin, ENS, or Handshake), in which, to buy a name, the buyer has to pay the current owner more than the current owner paid for it!
This document provides an overview of Apache Cassandra including:
- What Cassandra is and how it differs from an RDBMS by not supporting joins, having an optional schema, and being transactionless.
- Cassandra's data model using keyspaces, column families, and static vs dynamic column families.
- How to integrate Cassandra with Java applications using the Hector client and ColumnFamilyTemplate for querying, updating, and deleting data.
- Additional topics covered include the CAP theorem, data storage and compaction, and using CQL via JDBC.
This document provides tips and examples for deploying applications using Capistrano and related tools. It demonstrates how to provision virtual machines with Vagrant, configure multi-machine deployments, use Git for faster deployments, set up RVM and Bundler integration, add exception tracking and logging, and schedule tasks with Whenever. It also discusses monitoring tools like New Relic RPM and best practices like log rotation, coming soon pages, and development database dumps.
The document summarizes new features in Rails 3.1 beta, including asset handling changes where JavaScript and CSS files are now placed in app/assets, identity maps to improve performance of object loading, simpler database migrations that use a single change method, and improved test output formatting. It also discusses installing Rails 3.1 in an isolated gemset and using Sass and CoffeeScript as default asset compilers.
So, you know how to deploy your code, but what about your database? This talk will go through deploying your database with LiquiBase and DBDeploy, a non-framework-based approach to handling migrations of DDL and DML.
The document describes deploying Cosmos DB resources using Terraform in Azure. It outlines prerequisites, environment details, and the configuration files and process used to create a resource group, Cosmos DB account, database, and collection. The main.tf file defines these resources, variables.tf contains configurable values, and output.tf displays output after deployment. Running terraform init and terraform plan commands prepares for deploying the resources.
The document provides an overview of middleware in Node.js and Express. It defines middleware as functions that have access to the request and response objects and can run code and make changes to these objects before the next middleware in the chain. It discusses common uses of middleware like logging, authentication, parsing request bodies. It also covers Connect middleware and how Express builds on Connect by adding features like routing and views. Key aspects covered include the middleware pipeline concept, error handling with middleware, and common middleware modules.
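The pipeline concept can be shown in a few lines: each middleware receives `(req, res, next)` and either passes control on with `next()` or ends the chain. This is a simplified sketch of the mechanism Connect and Express implement, not their actual code, and the example middlewares are hypothetical.

```javascript
// Run middlewares in order; calling next(err) with an error short-circuits the chain.
function runPipeline(middlewares, req, res) {
  let i = 0;
  function next(err) {
    if (err) { res.error = String(err); return; }
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

// Example middlewares: a logger, a fake body parser, and a final handler.
const log = [];
const logger = (req, res, next) => { log.push(req.url); next(); };
const bodyParser = (req, res, next) => { req.body = JSON.parse(req.rawBody || "{}"); next(); };
const handler = (req, res, next) => { res.body = "hello " + (req.body.name || "world"); };

const req = { url: "/greet", rawBody: '{"name":"ada"}' };
const res = {};
runPipeline([logger, bodyParser, handler], req, res);
// res.body is now "hello ada"
```

Note how `bodyParser` mutates `req` before the handler runs, exactly the "make changes to these objects before the next middleware" behavior the summary describes, and how the handler ends the chain simply by not calling `next()`.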
Listen up, developers. You are not special. Your infrastructure is not a beautiful and unique snowflake. You have the same tech debt as everyone else. This is a talk about a better way to build and manage infrastructure: Terraform modules. It goes over how to build infrastructure as code, package that code into reusable modules, design clean and flexible APIs for those modules, write automated tests for the modules, and combine multiple modules into an end-to-end tech stack in minutes.
You can find the video here: https://www.youtube.com/watch?v=LVgP63BkhKQ
Docker Java App with MariaDB – Deployment in Less than a Minute - dchq
The document summarizes how DCHQ can automate the deployment of a Java-based Pizza Shop application on Docker containers across different application servers and databases. It describes how DCHQ templates can be used to build and deploy the application using Nginx, Tomcat, and MariaDB as well as Nginx, Jetty, and MariaDB. DCHQ then provisions the underlying infrastructure on Rackspace, deploys the applications, enables monitoring and continuous delivery from Jenkins, and allows auto-scaling of the application server clusters.
A gateway server is a server through which the computers in a LAN access the Internet, usually via NAT. It should also provide firewall protection for the LAN, and it can serve as a DNS and DHCP server for the LAN as well. Some years ago I was involved in a project to build gateway servers like this, using Slackware on old PCs. In this article I will try to explain what I did on this project and how I did it.
SaltStack can be used to automate and orchestrate the provisioning of virtual machines on VMware ESXi 6.0. It implements the VMware APIs to allow defining VM profiles and templates that specify VM configurations, and then uses Salt commands to rapidly deploy new VMs from templates with customized configurations. open-vm-tools must be installed on templates to enable customizing VMs, such as setting the network configuration. Salt files define VM profiles and provider credentials, separating configuration from deployment logic for flexibility and reusability.
This document provides release notes and supplementary information for Delphi 7. It notes that some components have been deprecated and recommends newer alternatives. It also describes changes made to string handling functions, warnings added by the compiler, and issues fixed in streaming of subcomponents. Finally, it provides notes on various other topics like Apache, UDDI, Windows XP input, and databases.
The document discusses various stages in architecting an application for the cloud as it grows in scale and complexity.
Stage 1 involves a simple architecture suitable for startups with low overhead. Stage 2 adds redundancy as the business grows. Stage 3 requires the addition of load balancers and more servers as publicity increases load. Stage 4 requires database replication and partitioning as single databases can no longer handle the load. Later stages involve rearchitecting the application and databases to support further scaling through techniques like data partitioning, database clustering, and optimizing code and resources.
This document provides instructions for running a JavaServer Page (JSP) program using the Apache Tomcat web server. It describes how to install Java and Apache Tomcat, create a simple "Hello World" JSP file, and access it by entering a URL in a web browser. Key steps include creating a JSP file called "first.jsp", adding it to a project folder, and accessing it at a URL like http://localhost:8080/projectfolder/first.jsp to run the program on the Tomcat server.
This document discusses using WebSockets for bidirectional communication between a GWT client and server. It provides an overview of setting up WebSocket connections on both the client and server sides in GWT, including writing a JSNI wrapper to initialize the WebSocket on the client. It also discusses using GWT's existing RPC serialization mechanism to serialize and deserialize Java objects sent over the WebSocket connection, avoiding the need for additional serialization libraries. Code examples are provided for initializing the WebSocket and handling messages on both the client and server sides, as well as using GWT's serialization streams to serialize and deserialize objects between the client and server.
An introduction to the Docker concept. Experiences with ASP.NET Core and Docker, How Docker can help produce modular deployments for ASP.NET web applications. Presented at Vermont Code Camp #8, UVM, Burlington VT, September 17, 2016
This document discusses JSP and JSTL. It begins with an introduction to JSP, explaining that JSP is a server-side technology used to create dynamic web content by inserting Java code into HTML pages. It then covers some advantages of JSP over servlets, features of JSP like ease of coding and database connectivity, and how to run a JSP project in Eclipse. The document next discusses JSTL, the JavaServer Pages Standard Tag Library, which provides commonly used JSP tags. It classifies JSTL tags and provides examples. Finally, it discusses interfacing a Java servlet program with JDBC and MySQL to insert data into a database table.
Catalyst is a web framework for Perl that allows developers to build dynamic web applications in a modular, reusable way. It utilizes common Perl techniques like Moose, DBIx::Class and Template Toolkit to handle tasks like object modeling, database access and view rendering. Catalyst applications can be built in a model-view-controller style to separate application logic, data access and presentation layers. This framework provides a standard way to write reusable code and build web UIs for tasks like system administration and automation.
The document discusses server-side JavaScript and introduces JASSA, a framework that allows JavaScript to be used on the server-side. JASSA aims to provide access to server environments, libraries like MySQL and CURL, and expose system functionality through namespaces. It uses the JavaScript engines SpiderMonkey and V8 to enable server-side scripting with JavaScript. The document provides examples of using JASSA to connect to a MySQL database and make an HTTP request using CURL.
This document discusses profiling PHP applications to improve performance. It recommends profiling during development to identify inefficiencies. The document introduces Xdebug for profiling PHP code and Webgrind, a PHP frontend for visualizing Xdebug profiles. It provides an example of profiling a sample PHP application, identifying issues, making code changes, and verifying performance improvements through re-profiling.
Play is a web framework that supports Scala and Java. It provides features like easy error reporting, hot reloading of code and configuration changes, RESTful architecture, powerful routing, and horizontal scalability. Play uses Akka and Netty for asynchronous and non-blocking I/O. It has a MVC structure with template rendering and supports features like database evolutions, dependency injection, and unit testing.
Caching and tuning fun for high scalability @ FrOSCon 2011 - Wim Godden
The document discusses using caching and tuning techniques to improve scalability for websites. It covers caching full pages, parts of pages, SQL queries, and complex processing results. Memcache is presented as a fast and distributed caching solution. The document also discusses installing and using Memcache, as well as replacing Apache with Nginx as a lighter-weight web server that can serve static files and forward dynamic requests.
1) File uploads in PHP require configuring php.ini settings like enabling file uploads and setting temporary storage directories with correct permissions.
2) Forms for file uploads need to use POST with multipart/form-data encoding and include file input fields and hidden fields.
3) PHP stores uploaded files in the $_FILES array, including the temporary filename, size, type, and original name, which can then be processed and moved to a permanent location.
Running and Developing Tests with the Apache::Test Frameworkwebhostingguy
The Apache::Test framework allows running and developing tests for Apache modules and products. Key features include:
- Running existing tests through the t/TEST program
- Setting up a new testing environment by installing Apache::Test and generating a Makefile
- Developing new tests by writing Perl scripts that use Apache::Test functions and assert results
- Options for running tests individually, repeatedly without restarts, or in parallel on different ports
This document provides instructions on installing and configuring memcached to improve the performance and scalability of MySQL. Memcached is installed using package managers or by compiling from source. It is configured to listen on ports and interfaces, allocate memory, and set thread counts. The typical usage involves applications loading data from MySQL into memcached for faster retrieval, with MySQL as the backing store if data is not found in the cache.
The document discusses Novell iChain, a solution for securing web applications and servers. It provides single sign-on, encrypts data as it passes through proxies, and removes direct access to web servers. It authenticates users through LDAP or certificates and authorizes access through rules stored in eDirectory. This simplifies management and security across multiple web server platforms and applications.
Load-balancing web servers Load-balancing web serverswebhostingguy
The document discusses different approaches to load balancing web servers to address issues like scaling performance, tolerating failures, and rolling upgrades. It describes three common solutions: redirecting requests through a front-end server; using round-robin DNS to distribute requests; and employing an intelligent load balancer switch that can distribute requests based on server load and detect failures. Each approach has advantages and disadvantages related to ease of implementation, visibility to users, and ability to handle session state. The document also discusses network designs and protocols involved in load balancing, including TCP connection setup and teardown.
The document compares three methods for consolidating SQL Server databases: 1) multiple databases on a single SQL Server instance, 2) a single database on multiple SQL Server instances, and 3) hypervisor-based virtualization. It finds that consolidating multiple databases onto a single instance has the lowest direct costs but reduces security and manageability. Using multiple instances improves security but has higher resource needs. Hypervisor-based virtualization maintains security while enabling features like high availability, but has higher licensing costs. The document aims to help decide which approach best balances these technical and business factors for a given environment.
Mod_perl brings together the Apache web server and Perl programming language. It allows Apache to be configured and extended using Perl, and significantly accelerates dynamic Perl content. Mod_perl supports Apache versions 1.3 and 2.x and integrates Perl at every stage of the request process to provide great flexibility and control over Apache functionality. The mod_perl community provides extensive documentation and quick support responses.
Mod_perl brings together the Apache web server and Perl programming language. It allows Apache to be configured and extended using Perl, and significantly accelerates dynamic Perl content. Mod_perl supports Apache versions 1.3 and 2.x and integrates Perl at every stage of the request process to provide great flexibility and control over Apache functionality. The mod_perl community provides extensive documentation and quick support responses.
The document discusses various aspects of designing an effective website, including analyzing content and target audiences, organizing site structure and navigation, and implementing design elements. The key steps outlined are to analyze content and audience needs, organize the site structure into main sections and subsections, and implement an intuitive navigation system to help users easily find relevant information. Maintaining and optimizing the site over time are also emphasized.
This white paper provides an architectural overview and configuration guidelines for deploying Microsoft SQL Server 2005 with Microsoft Windows Server 2008 on Dell PowerEdge servers and Dell storage systems. It documents best practices for implementing SQL Server 2005 solutions using Dell hardware and software components that have been tested and validated to help ensure successful deployment and optimal performance. The white paper covers storage configuration, network configuration, operating system configuration, and SQL Server configuration recommendations.
1. The document discusses the evolution of business models for IT infrastructure from proprietary systems within individual companies to more open standards and shared infrastructure leveraging the internet.
2. It describes new service models like client-server computing, web services, and on-demand/utility computing which allow flexible provisioning of computing resources on a needs basis.
3. Managing diverse IT infrastructures requires considerations around outsourcing non-core functions, developing service level agreements, managing legacy systems, and aligning infrastructure capabilities to business strategy through appropriate investment.
The document discusses different types of websites that can be created for business purposes including traditional, blog-based, and group/network sites. It provides information on setting up each type of site for free or at low cost using online tools or designers, and how to add features like domains, payment systems, and linking domains to sites. Options for free and cheap site creation using tools like Google Sites are demonstrated.
This document outlines Saint Louis University's strategy for improving power management of IT equipment to reduce costs and environmental impact. Key points include:
1) SLU aims to standardize power-optimized default settings on all managed PCs and laptops through automated software and establish policies around exceptions and existing devices.
2) Potential savings are estimated from generational improvements in computer hardware and adopting lower-power modes like sleep versus screensavers.
3) The strategy also involves consolidating servers, enabling energy-efficient settings on printers and other electronics, and establishing institutional goals around student computer energy use.
Excel and SQL Quick Tricks for Merchandiserswebhostingguy
This document provides instructions for using Microsoft SQL and Excel to extract data from a SmartSite SQL database, manipulate it in Excel using functions, and update the SQL tables with the modified data to streamline content work. It covers connecting to and querying SQL databases, common Excel functions for editing data, and steps for importing an Excel file into a SQL table to update information. Examples of specific SQL queries and Excel functions are provided along with a scenario walking through the full process.
The document discusses various topics related to virtualization including drivers for virtualization, practical applications, definitions of terms like virtualization and paravirtualization, and tools like Xen, VMWare, and Microsoft virtualization products. It provides information on installing and configuring Xen on SuSE Linux, discusses security and auditing considerations for Xen, and demonstrates Xen functionality.
The document discusses strategies for converting low-value hosting clients into high-value customers by marketing additional services. It recommends continuously marketing to clients through email, forums, blogs and surveys to promote add-on services like collaboration tools, applications, and infrastructure-as-a-service offerings which can significantly increase revenue per client. Measuring marketing campaigns and conversions is key to optimizing efforts to up-sell existing clients.
Microsoft PowerPoint presentation 2.175 Mbwebhostingguy
The document discusses WebMapping Solutions and their products and services. It summarizes their middleware and mapping tools like MapBroker, Generic GUI Builder, and MapOrganiser. MapBroker powers many of their applications. Their products allow users to build custom web mapping applications and manage both geographic and non-geographic data in a single view. Their services include publishing data online, application development, and strategic consulting. Some examples of government and organization clients are listed.
This document provides an overview and guide for using HSPcomplete, a hosting automation solution that allows hosting service providers to manage infrastructure, billing, sales channels, and e-commerce through a single system. It describes HSPcomplete's advantages like integrated billing and credit card processing, virtual private server management, and domain registration. Hardware, software, and user requirements for HSPcomplete deployment are also outlined.
2. 03/17/08 12:33:21 PM AllegroCache with Web Servers 1
Introduction
Using AllegroCache with a web server poses certain challenges that we'll describe in this document.
The primary challenge is using AllegroCache in a multi-threaded environment. In a multi-threaded
environment you need to worry about serializing operations on the database and handling conflicts
should they arise. While we discuss these problems in the context of a web server, what is said also
applies to other multi-threaded applications.
The web server we'll be using for sample code is AllegroServe.
Multi-threading
In Allegro Common Lisp a program can create multiple threads of execution using Lisp functions
such as mp:process-run-function. In ACL threads are referred to as processes for historical reasons,
but we'll use the term thread since it better describes what they are.
In AllegroCache a program opens a connection to a database and that connection can only be used by
one thread at a time. In a multi-threaded application such as a web server it's likely that many threads
will need to access the AllegroCache database. This document will show some common ways to share
a database connection or connections.
Password Database
Consider the following scenario. Your web site has pages that only authorized clients can view.
You'll require users to supply a name and password to access the protected pages. Therefore you
would like to keep a simple name/password database in AllegroCache. Furthermore this database can
be updated by users wishing to change their passwords.
This example only shows a fraction of the power of AllegroCache. That is intentional. We aren't
trying to demonstrate AllegroCache as much as we are demonstrating how to construct an application
that uses AllegroCache. Once you know how the framework is built you can add in the details of your
own application as needed.
What follows is one of the simplest solutions to this password database design. For more serious web
sites where authorization is required we would recommend using AllegroServe's WebActions
framework as this provides a better way to do authorization using sessions.
For our simple password database we don't need to define any classes. We can use the built-in ac-map
class from AllegroCache. An object of this class will map a user name to that user's password.
Database code
We'll begin the code with a package definition for our sample code:
(require :aserve)
(require :acache)
(defpackage :sample (:use :common-lisp :excl
:db.allegrocache
:net.aserve
:net.html.generator
))
(in-package :sample)
Next we'll define some global variables.
(defparameter *password-db-file* "password.db")
(defvar *password-db*)
(defvar *password-map*)
(defvar *db-lock* (mp:make-process-lock))
The following function will create the database we'll be using. We only call this function once when
we're setting up our web site. It allows us to set the main site password (which we'll need to use to add
new users and do other administrative actions).
(defun create-password-database (db-file admin-name admin-password)
(let (*allegrocache*)
(create-file-database db-file)
(unwind-protect
(let ((map (make-instance 'ac-map :ac-map-name "password")))
(setf (map-value map admin-name) admin-password)
(commit))
(close-database))))
You'll note that we create a new binding for the special variable *allegrocache*. This isn't strictly
necessary but is good style. We know that create-file-database will set the value of *allegrocache* and
by binding it here we prevent that setting of *allegrocache* from affecting other threads running in the
same Lisp which may be making use of the global value of *allegrocache*.
After creating the file database we make the map instance we'll be using for our database and we give
that map instance the name “password.” Then we commit the creation of the map and close the
database. Now the database is ready to be used by our application.
We are using AllegroCache in standalone mode. For our simple web site we will have only one web
server, thus we won't need to create a shared name/password database.
Each time the web server starts it will have to open up this database and prepare to use it. This function
does that work:
(defun open-password-database (db-file)
(let (*allegrocache*)
(open-file-database db-file)
(setf *password-db* *allegrocache*)
(setf *password-map*
(retrieve-from-index 'ac-map 'ac-map-name "password"))))
Again we create a private binding of *allegrocache* so that the call to open-file-database doesn't
change the global value of *allegrocache*. Next we store the database connection object in the global
variable *password-db* and we store a pointer to the ac-map object we'll need in the variable
*password-map*. We do not close the database.
At some point we'll need to look up a password given a user name. This function will do that:
(defun get-password-for (user)
(mp:with-process-lock (*db-lock*)
(let ((*allegrocache* *password-db*))
(map-value *password-map* user))))
The call to map-value looks up the password for the given name. Before we can call map-value we
have to make sure it's safe to do so. In the standalone AllegroCache we aren't concerned about
multiple threads calling map-value at the same time. (That would however be a problem in the
client/server AllegroCache). What we're worried about is one thread doing a commit and another
calling map-value on the same database connection. That can cause problems. Thus to be safe we
have a global lock *db-lock* which we must grab before doing any database operations. Given that
the map-value call is very fast it's not a burden to keep the database locked away from other threads
during the execution of this call. We locally bind *allegrocache* to the database connection we'll be
using. Many AllegroCache functions implicitly use the value of *allegrocache* so it's critical that it be
bound to the current connection when AllegroCache functions such as map-value are called.
*password-map* has an ac-map object as its value. This value can only be examined when
*allegrocache* is bound to the database connection from which the ac-map object was obtained.
We've taken some pains to keep from modifying the global value of *allegrocache* in order to
demonstrate a good style of programming. When you're debugging your application and need to view
objects interactively you may wish to set the global value *allegrocache* to the active database
connection, thus allowing object inspectors to work.
Setting a password value is very similar to getting the value:
(defun set-password-for (user new-password)
(mp:with-process-lock (*db-lock*)
(let ((*allegrocache* *password-db*))
(setf (map-value *password-map* user) new-password)
(commit))))
One major difference is the call to commit. Because we're using standalone AllegroCache the commit
cannot fail like it can in client/server mode. Thus we don't need any code to retry the operation if the
commit fails.
And finally it would be nice to have a function to close the database.
(defun close-password-database ()
(close-database :db *password-db*))
It's a good idea to close the database explicitly before exiting the application. This ensures that all
partially filled buffers are flushed to the disk. In this simple case the close-database isn't necessary
since the only changes that are made are immediately followed by a commit which flushes all written
buffers to the disk. Thus there should be no work for close-database to do. However, calling
close-database is always good style.
Testing the database code
We can test out our database code before connecting it to the web server.
First we need to load in AllegroServe and AllegroCache.
cl-user(1): (require :aserve)
; Fast loading /usr/acl/aserve.fasl
t
cl-user(2): (require :acache)
; Fast loading /usr/acl/acache.fasl
AllegroCache version 1.0.1
t
Next we load in the code shown above and switch to the package in which we've placed this code:
cl-user(4): :cl password
; Fast loading /usr/tmp/password.fasl
cl-user(5): :package sample
sample(6):
We build the password database, put in the name/password pair admin/foo and close the database:
sample(6): (create-password-database *password-db-file* "admin" "foo")
t
sample(7):
With the database built we can now open it up and perform some operations:
sample(7): (open-password-database *password-db-file*)
#<ac-map oid: 12, ver 5, trans: 7, not modified @ #x71dee9d2>
sample(8): (get-password-for "admin")
"foo"
t
sample(11): (set-password-for "smith" "john")
t
sample(12): (get-password-for "smith")
"john"
t
sample(13): (set-password-for "smith" "fred")
t
sample(14): (get-password-for "smith")
"fred"
t
sample(15):
The AllegroServe code
Now that we've written and tested the database code for our password database let's look at the
AllegroServe code to make use of it.
We start the server with
(defun start-test (&key (port 9999))
(open-password-database *password-db-file*)
(start :port port)
)
The open-password-database function is given above.
Let's first show how to use the password database in the most transparent way possible. Here is how
we would publish a page which needed the user to supply the correct name/password values before it
could be viewed:
(publish
:path "/secret-page"
:content-type "text/html"
:function
#'(lambda (req ent)
(multiple-value-bind (name password)
(get-basic-authorization req)
(if* (and (> (length name) 0)
(equal password
(get-password-for name)))
then ; success!!
(with-http-response (req ent)
(with-http-body (req ent)
(html (:head (:title "Secret Page"))
(:body "Here is the secret word: fish"))))
else ; need correct name/password
(with-http-response (req ent
:response *response-unauthorized*)
(set-basic-authorization req "secretserver")
(with-http-body (req ent)))))))
We make use of the get-password-for function we defined above to see if the user has supplied the
correct name/password values.
If you have only one page to protect then the above solution is acceptable, but if you wish to protect a
whole group of pages then it's better to use an authorizer object. You would begin by building such an
object:
(defvar *ac-authorizer*
(make-instance 'function-authorizer
:function
#'(lambda (req ent auth)
(declare (ignore auth))
(multiple-value-bind (name password)
(get-basic-authorization req)
(if* (and (> (length name) 0)
(equal password
(get-password-for name)))
then t ; authorized
else ; send back request for name/password
(with-http-response (req ent
:response *response-unauthorized*)
(set-basic-authorization req "secretserver")
(with-http-body (req ent)))
:done)))))
and then you can use this object with an entity you wish to protect:
(publish
:path "/other-secret-page"
:content-type "text/html"
:authorizer *ac-authorizer*
:function #'(lambda (req ent)
(with-http-response (req ent)
(with-http-body (req ent)
(html (:head (:title "Secret Page"))
(:body "Here is the secret word: cake"))))))
The following page allows the user to change their password:
(publish
:path "/change-password"
:content-type "text/html"
:function
#'(lambda (req ent)
(if* (eq (request-method req) :post)
then ; do password change if authorized
(let ((name (request-query-value "name" req))
(old (request-query-value "old" req))
(new (request-query-value "new" req)))
(if* (and (> (length name) 0)
(> (length new) 0)
(equal old (get-password-for name)))
then (set-password-for name new)
(with-http-response (req ent)
(with-http-body (req ent)
(html (:body "Password change successful"))))
else (with-http-response (req ent)
(with-http-body (req ent)
(html (:body "Password change unsuccessful"
:br
((:a :href "/change-password")
"Retry")))))))
else ; put up form for password change
(with-http-response (req ent)
(with-http-body (req ent)
(html (:h1 "Password Change")
((:form :method "POST")
"Name: " ((:input :type "text"
:name "name"))
:br
"Current Password: "
((:input :type "password"
:name "old"))
:br
"New Password: "
((:input :type "password"
:name "new"))
:br
((:input :type "submit")))))))))
Database in client/server mode
For this application we chose to use AllegroCache in standalone mode. This is the appropriate mode
for a simple single-machine web server. It is instructive however to consider what it would take to
convert this example to use AllegroCache in client/server mode. A password database built this way
will allow multiple web servers to access the database and would also allow one to write administrative
programs to add accounts to the database and generate reports based on the contents of the database.
The database creation function will start a server and then create a connection to the server. A server
needs to run on a particular port so we must specify that.
(defparameter *server-port* 8888)
(defun create-password-database-cs (db-file admin-name admin-password)
(let (*allegrocache*)
(start-server db-file *server-port*
:if-exists :supersede
:if-does-not-exist :create)
(open-network-database "localhost" *server-port*)
(unwind-protect
(let ((map (make-instance 'ac-map :ac-map-name "password")))
(setf (map-value map admin-name) admin-password)
(commit))
(close-database :stop-server t))))
When we call close-database we specify that the server should shut down as well.
In order to connect to the database we start up the server on the database and then create a connection
to the database. If we wanted to have multiple web servers connect to this database, we would start
the server separately, and the open-password-database-cs function would only call open-network-database.
(defun open-password-database-cs (db-file)
(let (*allegrocache*)
(start-server db-file *server-port*)
(open-network-database "localhost" *server-port*)
(setf *password-db* *allegrocache*)
(setf *password-map*
(retrieve-from-index 'ac-map 'ac-map-name "password"))))
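As a sketch of the multi-server variant just described, the connect-only function might look like this (the name open-password-database-client is hypothetical; it assumes the AllegroCache server has already been started elsewhere, e.g. via start-server in a separate administrative process):
```lisp
;; Hypothetical connect-only variant for the multi-server case.
;; The AllegroCache server is assumed to be running already, so we
;; skip start-server and simply open a network connection to it.
(defun open-password-database-client (host port)
  (let (*allegrocache*)
    (open-network-database host port)
    (setf *password-db* *allegrocache*)
    (setf *password-map*
          (retrieve-from-index 'ac-map 'ac-map-name "password"))))
```
Each web server would call this at startup with the host and port of the shared database server.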
There is a subtle but important change in the password retrieval function, namely the addition of a call
to rollback.
(defun get-password-for-cs (user)
(mp:with-process-lock (*db-lock*)
(let ((*allegrocache* *password-db*))
(rollback)
(map-value *password-map* user))))
We first do a rollback to advance this connection's view of the database to the most recent time. A
rollback operation does two things: it undoes any changes we've made (and we've made none so this
has no effect) and it advances the view of the database so we're seeing the result of the most recent
commit. Since at this point we only have a single client connection to the database this call to rollback
is unnecessary, but it's good style and it means we're ready should we add more connections to the
database.
The only other change from our standalone version is to the function to set the password:
(defun set-password-for-cs (user new-password)
(mp:with-process-lock (*db-lock*)
(let ((*allegrocache* *password-db*))
(with-transaction-restart ()
(setf (map-value *password-map* user) new-password)
(commit)))))
We've added the with-transaction-restart macro so that the database modification and commit is retried
if the commit fails. If there's only one client connection then the commit can never fail so until we
actually start sharing the database with multiple web servers the use of this macro is unnecessary (but it
is good style).
Connection pools
So far we've had one connection to the database that all threads shared. For the password database
that's appropriate. But let's imagine that each thread is doing more than just looking up a password.
The thread may be navigating through the persistent object space or creating new objects and
committing them. In that case we don't want the bottleneck of a single database connection. We
would like each thread to have its own connection to the database. In practice though we know that
each database connection takes up system resources so we don't want to create database connections
that we don't really need. Thus we introduce the idea of a connection pool, which is a set of database
connections that all threads share. If a thread needs to do a database operation it takes a connection
out of the pool, uses it, and then puts it back when it's done. If there are no connections in the pool a
thread can either create a new connection or it can wait for an existing connection to be freed by
another thread.
We'll show how a connection pool can be created for our simple password database. We acknowledge
that for this simple example there is little need for a connection pool; however, you should imagine that
each thread is doing a lot more than just looking up passwords when it gets a database connection.
The connection pool is managed by the with-client-connection macro we'll show below. This macro
retrieves a connection from the pool. If no connection is available it will create a new connection but it
will create no more than *connection-count* connections. If all connections are created and a new
connection is needed the requesting thread will be put to sleep until a connection is returned to the
pool.
In our password example we'll need to access the map object that contains the password data. In the
preceding examples we've stored the map object in the global variable *password-map*. This strategy
will not work in our connection pool case since each database connection will have its own unique
object that represents the single map object in the database. Thus we need to store a map object for
each connection. Our connection pool will hold the database object and either the map object or nil.
The code will look up and cache the map object as needed. That makes this connection pool code
application specific. You'll likely want to tune your connection pool code for your application as well.
The global variables we'll need are the following. *client-connections* will be a list of poolobj objects.
The lock *pool-lock* must be held before a thread can look at *client-connections*. The gate
*pool-gate* will be used to block threads waiting for a connection to be freed. *connection-host* and
*connection-port* describe the location of the database server. *connection-count* is the number of
database connections yet to be allocated.
(defvar *client-connections* nil)
(defvar *pool-lock* (mp:make-process-lock))
(defvar *pool-gate* (mp:make-gate t))
(defvar *connection-host*)
(defvar *connection-port*)
(defvar *connection-count*)
The connection pool consists of a list of poolobj objects. The database slot contains the database
connection and the map slot is used to cache a pointer to the ac-map object named “password”.
(defstruct poolobj
database
map)
(defmacro with-client-connection (var &body body)
(let ((alloc-connection (gensym)))
`(let (,var ,alloc-connection)
(loop
(mp:with-process-lock (*pool-lock*)
(if* (null (setq ,var (pop *client-connections*)))
then (if* (> *connection-count* 0)
then (decf *connection-count*)
(setq ,alloc-connection t)
else (mp:close-gate *pool-gate*))))
(if* ,var
then (return)
elseif ,alloc-connection
then (let (*allegrocache*)
(setq ,var (make-poolobj
:database (open-network-database
*connection-host*
*connection-port*)))
(return))
else (mp:process-wait "gate" #'mp:gate-open-p *pool-gate*)))
(unwind-protect
(let ((*allegrocache* (poolobj-database ,var)))
,@body)
(mp:with-process-lock (*pool-lock*)
(push ,var *client-connections*)
(mp:open-gate *pool-gate*))))))
In with-client-connection we first see if a connection is available and if so it is returned. If not then we
see if there are any connections yet to be allocated. If so we allocate the connection. If not we block
on the gate *pool-gate*. If a connection is available we bind *allegrocache* to the database object
and run the body of the macro. Once the body finishes we put the connection back in the pool and
open the gate so that any threads waiting for a connection are awoken.
To use this code we must start the server just once:
(defun start-password-database-server (db-file)
(start-server db-file *server-port*))
Then when the web server starts up we call this function to initialize all the global variables:
(defun open-password-database-pool ()
(setq *connection-host* "localhost"
*connection-port* *server-port*
*connection-count* 5
*client-connections* nil))
The code to retrieve a password may have to first find the map object and cache it before it can look up
the user in the map.
(defun get-password-for-pool (user)
(with-client-connection poolobj
(rollback)
(map-value (or (poolobj-map poolobj)
(setf (poolobj-map poolobj)
(retrieve-from-index 'ac-map
'ac-map-name "password")))
user)))
Note that we're no longer grabbing the *db-lock* as we did in preceding examples. That is because
there is no longer a global lock to access the database. Instead each thread runs independently and
AllegroCache is dealing with all concurrency issues.
Setting the password is written in a similar way. Since we're doing a commit we have to use
with-transaction-restart.
(defun set-password-for-pool (user new-password)
(with-client-connection poolobj
(with-transaction-restart nil
(setf (map-value (or (poolobj-map poolobj)
(setf (poolobj-map poolobj)
(retrieve-from-index 'ac-map
'ac-map-name
"password")))
user)
new-password)
(commit))))
Webactions with AllegroCache
Sophisticated web sites should be written using a higher level web programming framework such as
Webactions. In this section we'll discuss how to integrate AllegroCache into a Webactions project.
One of the key features of Webactions is its session support. A session is information associated with
a particular web browser that lasts through many http requests. Webactions stores the session
information in the web server's memory.
In our first scenario we'll take the case of a web site that has multiple web servers serving the same
pages. The user just asks for http://www.this-popular-site.com and that request is sent to one of the N
web servers serving this site due to some IP redirection software at the server end. Webactions
sessions are inadequate in this scenario since session data stored in one web server will not be available
to the other N-1 web servers. What we need is a common database that all web servers can use to
store the session data. This is where AllegroCache is very useful.
First let's review how Webactions denotes sessions. Each session has a unique session key
which is a long string. When a web browser contacts a server it passes its session key (if it has one) to
that server. If no session key is passed then the server creates a new session and sends that session's
key back to the web browser along with the response. Thus after one request/response the server and
web browser have established a session and agreed on its key.
If a web browser passes a session key that the server does not recognize, the server creates a session
with that key. This means that if you have multiple servers for a site, then after the first server creates
a session and passes back the key, a web browser that happens to send its next request to a different
web server will cause that server to create a session with the same key. This is important because it's
the session key that will be used to find the persistent object associated with the session in the
AllegroCache database.
For example, suppose a user starts his web browser and visits http://www.mysite.com which is
implemented using Webactions. The web server will find that no session key was sent with the
request so it will create a new session and with its reply send back the session key, which we'll say is
“abcd1234”. On the next request from the same browser to a page on http://www.mysite.com the
browser will send the session key “abcd1234”. If this request goes to the same web server that
handled the original request, that server will locate the session named “abcd1234” in its table of
sessions and pass that session object to the code handling the request. If the request goes to another
web server, however, that server will not find a session named “abcd1234”; it will create a session
named “abcd1234” and pass that new session object to the code handling the request. These session
objects are transient values stored in the Lisp heap of each process, and they have no relation to each
other. What we show in this example is how to use AllegroCache to let those session objects share
data, so that values set in one session can be read in another.
Persistent Session Example
We'll show how two distinct web servers can share session data. The web servers can run on
different machines or on the same machine (although, to keep our example as simple as possible, we
use the machine name “localhost”, so both Lisp servers and the database server must run on the same
machine).
In our example we have two pieces of data we store in our session, one called 'a' and the other called
'b'. We create a Webactions project which has one page that shows the current values of 'a' and 'b' and
allows the user to change either value. That value, once changed, will be visible to all web servers
running this sample code.
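The values 'a' and 'b' live in a persistent object per session. This is a sketch of what that class might look like; the slot and accessor names are inferred from the code that uses them (psession-a, psession-b), we assume the db.allegrocache package is in use, and the unique index on key is what lets the database reject duplicate session objects, as discussed later.

```lisp
;; Sketch of the persistent session class (assumed definition;
;; slot and accessor names inferred from the code that uses them).
(defclass psession ()
  ((key :initarg :key :accessor psession-key
        :index :any-unique)   ; unique index: duplicate keys fail at commit
   (a :initform nil :accessor psession-a)
   (b :initform nil :accessor psession-b))
  (:metaclass persistent-class))  ; from the db.allegrocache package
```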
The complete Lisp code for this example is in the following file:
;; persistent sessions
(eval-when (compile load eval)
  (require :aserve)
  (require :webactions)
  (require :acache))

(defpackage :wasample (:use :common-lisp :excl
    (with-transaction-restart ()
      (let ((psession (get-psession-object key nil)))
        (setf (psession-a psession)
              (websession-variable session "a"))
        (setf (psession-b psession)
              (websession-variable session "b")))
      (commit))))
  :continue)
(defun start-db-server (port &optional create)
  (start-server "wasample.db" port
                :if-exists (if* create then :supersede else :open)
                :if-does-not-exist (if* create then :create else :error)))
(defun start-webserver (web-port db-port)
  (setq *connection-host* "localhost"
        *connection-port* db-port
        *connection-count* 5)
  (net.aserve:start :port web-port))
There is one additional file, main.clp, that's needed as well:
<html>
<head><title>Webactions/AllegroCache Session test</title></head>
<body>
The value of A is <clp_value name="a" session/>.
<br>
<form action="setvalues">
Set A to <input type="text" name="set-a">
<input type="submit" value="Set A">
</form>
<hr>
The value of B is <clp_value name="b" session/>.
<br>
<form action="setvalues">
Set B to <input type="text" name="set-b">
<input type="submit" value="Set B">
</form>
</body>
</html>
In order to run this example you need to choose a port for the AllegroCache server to listen on and a
port for each web server. If you want AllegroCache to listen on port 8888 and web servers to start on
ports 8000, 8001, etc., you would run the example by executing the following commands:
(wasample::start-db-server 8888 t) ; t means create fresh database
Then on each process you want to be a web server do
(wasample::start-webserver 8000 8888)
and then on the next process
(wasample::start-webserver 8001 8888)
and so on. Start a browser on the machine and visit http://localhost:8000 and then start a different
browser and visit http://localhost:8001. Set and read the results on different browsers and convince
yourself that the session data is shared between the two web servers.
Now let's go back and look at the code. We're using the connection-pooling idea introduced before.
This time, however, our pool is simply a list of connections rather than a structure holding a
connection, which is all this example needs. The get-psession-object function is responsible for
returning the persistent object associated with a session. Note that it tries to retrieve the object before
creating one. There is a possibility that two Lisp threads in the same web server will try to create a
psession object with the same key. This would be a disaster if it occurred, but it can't occur because
we've declared the key slot of the psession object to have a unique index. Thus a database connection
that tries to commit a psession object with a duplicate key will get a commit error. The
with-transaction-restart macro will then cause the transaction to be rolled back and
get-psession-object will be called again. This time the retrieve-from-index function will find the
existing psession object rather than creating a new one, so the next commit won't fail.
In your code where you use unique indexes you'll want to use the strategy shown in get-psession-object
to retrieve or create an object.
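A sketch of how get-psession-object might implement that retrieve-or-create strategy; the psession class definition and the exact argument list are assumptions, while retrieve-from-index is used as in the earlier password examples.

```lisp
;; Sketch of the retrieve-or-create strategy described above.
(defun get-psession-object (key &optional (create t))
  (or ;; first try to find an existing persistent session by key
      (retrieve-from-index 'psession 'key key)
      ;; none found: make one.  If another thread commits a psession
      ;; with the same key first, our commit fails on the unique index;
      ;; with-transaction-restart rolls back and re-runs the body, and
      ;; retrieve-from-index then finds the other thread's object.
      (and create
           (make-instance 'psession :key key))))
```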
You'll note that our strategy for using AllegroCache with Webactions is to do quick database
operations rather than holding a database connection across a whole sequence of actions. This is
certainly the simplest solution and a safe one. If you allocate a database connection from the pool
and then perform a sequence of actions, you may be in trouble if the commit fails, since you then
have to do the rollback and repeat the actions yourself. In our code, action-store-session-vals
modifies the database, and the commit there can fail. However, if it does, the connection is simply
rolled back and the store operations repeated.