MERGE 2013: The Perforce Conference, San Francisco, April 24-26

Using a Perforce Proxy with Alternate Transports
Overcoming High-Latency or Low-Bandwidth Networks

Matthew Janulewicz, Lucasfilm Ltd.

Abstract: To facilitate transporting large amounts of data between global sites, a Perforce proxy with an alternate transport can be used to work around latency or bandwidth limitations inherent in TCP/IP networks.
What is a Perforce Proxy?

A Perforce proxy is a server that collects and maintains a copy of archive files to be used by remote teams that are geographically separated from the Perforce server. The purpose of a Perforce proxy is to "cache" frequently used data on a server local to the remote users to prevent repeated transfer of commonly used data over long distances.

A Perforce proxy is stateless in that it does not track the state or presence of archive files through a database. Sync requests are passed through a proxy to the associated Perforce server. Once the server calculates the file revisions required for the sync request, they are delivered from the Perforce proxy if they are available. If they are not available, files are delivered from the Perforce server and a copy is deposited on the Perforce proxy to be used in subsequent requests for that file revision. A Perforce proxy starts empty, and the file revisions available from it are built up over time based on sync requests.

When new file revisions are submitted through a Perforce proxy, a copy is deposited in the proxy and does not have to be resynced to the proxy.

Some sites use an automated, scheduled "pre-cache" script to sync updated files to the Perforce proxy at set intervals. This way they can update the proxy with new revisions before local users need them. Using 'p4 -Zproxyload' will load files into the Perforce proxy without having to physically sync them to a workspace.

Older file revisions in the Perforce proxy can be removed without penalty because the proxy merely checks for the presence of the file when it is requested. If someone specifically requests a removed file, the proxy will simply download it from the Perforce server again.
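The pre-cache and eviction tactics described above could be wired together with cron. A minimal sketch follows; the host name, port, user, client spec, depot path, cache path, and 30-day retention window are all illustrative assumptions, not details from this paper:

```shell
# Hypothetical crontab on the Perforce proxy host.

# Hourly: warm the proxy cache with new revisions. 'p4 -Zproxyload sync'
# loads files into the proxy's cache without writing them to a workspace.
0 * * * *  p4 -p perforce-proxy:1666 -u precache -c precache-client -Zproxyload sync //depot/shared/...

# Nightly: evict archive files not accessed in 30 days; the proxy will
# transparently re-fetch any evicted revision from the server on demand.
30 2 * * *  find /p4/proxy/cache -type f -atime +30 -delete
```

Because the proxy is stateless, the eviction pass needs no coordination with the server; an overly aggressive retention window only costs a re-download.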
The archive file layout on the Perforce proxy mirrors the layout on the Perforce server.

Why Not Use a Replica?

Perforce replicas do have some advantages over proxies. In the case of a forwarding replica, the replica holds a copy of the database, so certain queries (sync calculations, fstat calls, etc.) return results faster because the request does not have to go all the way to the originating server. The burden on the master server is greatly reduced by using a forwarding replica instead of a Perforce proxy.

The drawback to any replica is that it replicates an entire server, including all archive file revisions (up to p4d 2012.2). In Lucasfilm's case, the server we wanted to share data from has more than 6 TB of archive data in it (and is growing). The files we need to collaborate on in remote offices take up about 1.5 TB of that. The cost of building out a system to handle that much storage, storage that would mostly not be used, was not something we were willing to commit to.

A replica basically serves the same purpose as an "actual" server, in that it has to handle all the same requests to the database that a regular server might. You essentially have to build out a new server with specs to match. A proxy, on the other hand, is primarily a file cache and will use fewer resources than a full Perforce server or replica.

The size of the stored data on a Perforce proxy is also easier to manage. As mentioned earlier, you do not manage archive storage on a replica so much as you simply provide it with an amount of storage equal to that of the master server it replicates. You can, however, remove old revisions of files from a proxy with no penalty. A basic tactic in managing the required size of storage in a proxy might be to remove any files that have not been accessed within a certain
number of days. You could also be more aggressive, depending on need, by writing a short script to find files that have more than one revision and removing any that are not the current #head or, perhaps, additionally #head-1. Most administrators will likely want to strike a balance between the two approaches.

Lastly, as this white paper will outline, you are not limited to a TCP/IP transport when working with a Perforce proxy. A Perforce proxy has no knowledge about how or why an archive file is present; it only checks that it is there.

The Enemy: Network Latency

Communication between Perforce clients and servers is handled through TCP/IP. TCP/IP is a fine protocol, but it is "chatty" and is adversely affected by high-latency networks. Our ping latency between California and the Pacific Rim is typically 200-300 ms. Ping results from Perforce itself (p4 ping) were even slower (see Figure 1).

Figure 1: Response time from 'p4 ping': local average = 0.000s, remote average = 1.953s

Although we have high bandwidth (1 Gbit/s), the high latency can wreak havoc on large data transfers over the Pacific Ocean. More latency means slower response times and more delays for each packet going through. For short bursts it is unnoticeable, but with file transfers of even a few GB, you can see a significant drop in expected transfer speeds, especially during times
of high network usage. On our network there was a practical cap of well under 0.5 MB/s when transporting large datasets between remote offices. Our initial test and production servers ran on CentOS with a Linux 2.6.24 kernel (see Figure 2).

Figure 2: Initial transfer throughput (in MB/s) for CentOS 5.4 and default TCP/IP settings using a P4 proxy

Significant improvements in throughput were achieved by upgrading the OS (CentOS) and tweaking the TCP/IP stack on the server and associated Perforce proxy. These results are from CentOS 5.5 with a Linux 2.6.32 kernel (see Figure 3). Additional TCP/IP tweaks were applied as outlined in the Appendix.
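The buffer sizes in the Appendix follow from the bandwidth-delay product: a link can only run at full speed if roughly one round-trip's worth of data can be in flight at once. A quick back-of-the-envelope sketch using the figures above, assuming a 1 Gbit/s link and taking 250 ms as the midpoint of the 200-300 ms ping range:

```shell
# Bandwidth-delay product: bytes that must be "in flight" to fill the link.
BANDWIDTH_BITS_PER_SEC=1000000000   # ~1 Gbit/s link
RTT_MS=250                          # midpoint of the 200-300 ms ping range
BDP_BYTES=$(( BANDWIDTH_BITS_PER_SEC / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP_BYTES} bytes (~$(( BDP_BYTES / 1048576 )) MB)"
# Default TCP windows on older kernels are a small fraction of this,
# which is why the Appendix raises the TCP buffer limits to 16 MB
# (16777216 bytes).
```

At ~30 MB of required in-flight data, even the tuned 16 MB buffers cannot completely fill the pipe for a single stream, which is consistent with the partial improvement described below.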
Figure 3: Transfer throughput (in MB/s) for CentOS 5.5 and optimized TCP/IP settings using a P4 proxy

This improvement was still not enough. Considering the size of datasets we might need to share during game development, especially in the time leading up to a milestone, we were seeing too much delay before all our data arrived in the remote office. The end of our day, when most work is submitted, coincided with the beginning of their day. It would not be unusual for the remote office to have to wait until the middle of their day, or even the next day, to begin work on the items we had updated the day before.

As an example, sending a 1 GB file with our default OS and settings could take an entire day or more to transfer. With the updated OS/kernel and optimized TCP/IP settings, this transfer was cut to 4 hours. This was a great improvement but still nowhere near approaching real time. For collaboration on large datasets, we needed to improve transfer times even more.

Solution: Dispense with TCP/IP

As mentioned earlier in this paper, the layout of the archive files in a Perforce proxy mirrors that of the associated Perforce server. Because the proxy is not "smart" (it is stateless) and does not know what files it houses at any given time, it also has no knowledge of how files were delivered to it in the first place. It checks for the presence of files during each request. A common way to seed a large Perforce proxy is to make a backup (tar, CD, tape) of significant files from the server and restore it to the associated proxy server.

You can take advantage of this statelessness by using a different transport protocol, in our case the User Datagram Protocol (UDP). UDP is a lightweight transfer protocol that, when
compared to TCP/IP, has less (in fact, no) error checking, no concept of retransmitting dropped packets, no ordering of packets, and no congestion control.

Open source and commercial UDP applications have added support for error checking, security, and bandwidth control. Lucasfilm already had experience with the fasp technology from Asperasoft and decided to use it to help overcome the increasing transfer speed limitations we encountered over time as our Perforce proxy grew.

We already owned licenses for its point-to-point server product, which essentially provides scp-like UDP transfers for one file at a time. We had been using this for several internal tools for some years. However, the nature of those tools is that the files/packages being transferred are known. We were not doing any type of "mirroring" of file systems but were simply transferring files on a per-user basis.

To effectively mirror a file system, or parts thereof, we used Asperasoft's Aspera Sync (async) product, which provides a drop-in replacement for rsync. We were then able to set up a simple one-way mirror of relevant file paths between our Perforce server and associated proxy. Because async allows you to throttle the utilized throughput, we could realize just about any jump in throughput we chose. We found that 20-25 MB/s was ideal because it gave us a significant enough improvement without affecting the other transfers that might be happening at any given time.

At these rates, the time to send a 1 GB file improved from 4 hours to a few minutes. When needed, we can coordinate with other UDP users to utilize more bandwidth to send content to remote offices even faster.

Conclusion

Under certain circumstances a Perforce proxy is preferable to a Perforce replica.
The inherent statelessness of a proxy lends itself to clever solutions for overcoming suboptimal bandwidth or latency in your global network. You can effectively take TCP/IP out of the equation to realize orders-of-magnitude improvements in transfer rates for Perforce proxy assets.

Appendix

The following settings were tweaked to realize modest improvements in TCP/IP transfer rates. In CentOS 5, the settings are in /etc/sysctl.conf:

### New 2.6.x TCP tuning:
net.ipv4.tcp_adv_win_scale = 7
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_congestion_control = bic
net.ipv4.tcp_moderate_rcvbuf = 1

### IPV4-TCP specific settings
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_rmem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 16777216 16777216 16777216
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.ipfrag_high_thresh = 83886080
net.ipv4.ipfrag_low_thresh = 41943040
net.ipv4.ipfrag_time = 10
net.ipv4.tcp_low_latency = 0

### CORE settings (mostly for socket and UDP effect)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.core.netdev_max_backlog = 3000000

# disable swapping if at all possible:
vm.swappiness = 0

# Disable response to broadcasts.
# You don't want yourself becoming a Smurf amplifier.
net.ipv4.icmp_echo_ignore_broadcasts = 1

# enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1

# Initial 9.3 rev. Increase max shared memory segment size.
kernel.shmmax = 536870912

# modify overcommit settings to allow large zeno processes to fork.
vm.overcommit_memory = 2
vm.overcommit_ratio = 200

