Packet Reader Compression: A simple way to compress on-board stored data SpaceOps 2012


It is commonly believed that HKTM packets do not compress well. However, we have developed a lossless compression technique that provides massive data reduction. It is so simple that it can be implemented on-board. The packet reader compression technique groups packets of the same type and reads them using a binary transposed feed, which experiences far fewer transitions than the traditional feed. This allows very simple and efficient compression algorithms (e.g. Run Length Encoding) to be used to achieve good compression rates.
The packet reader compression technique has been validated on ground with satisfactory results (e.g. only 14% of the data needed for all Rosetta HKTM packets). It has been prototyped and validated on on-board hardware (LEON2 processor). Packet Reader Compression has been identified as an enabling technology for the Mars atmospheric sample return mission.
The ESA Patent Group has decided to protect this Packet Reader Compression technique by filing a patent application in the United States Patent and Trademark Office.


Transcript of "Packet Reader Compression: A simple way to compress on-board stored data SpaceOps 2012"

Housekeeping Data: Can You Afford Not to Compress It?

David Evans (1) and Rainer Timm (2), ESA/ESOC, Darmstadt, Germany
José-Antonio Martínez-Heras (3), Black Hat S.L., Córdoba, Spain
Maxime Perrotin (4), ESA/ESTEC, Noordwijk, Netherlands

(1) PROBA 3 Ground Segment Manager
(2) Mission Operations Concept Engineer
(3) Advanced Mission Concept Software Engineer
(4) PROBA 3/PROBA V Software Engineer

Previous work at ESA/ESOC showed that spacecraft housekeeping telemetry contains significant amounts of information redundancy. A simple algorithm was proposed that removed this redundancy almost as effectively as well-known commercial products like zip. However, conventional wisdom says that even if stored housekeeping telemetry can be compressed, it should not be. At first glance the arguments appear reasonable: it would reduce the robustness of the ground-space link, it would make using existing infrastructure more difficult, and it would be complicated to implement as on-board software or too resource hungry. Finally, there is the old argument that as housekeeping telemetry is typically only a fraction of the total mission data, it is not worth compressing. In this paper we intend to turn that conventional wisdom on its head as we make the move from theory to reality. We test our algorithm across a variety of mission types, solve the risk problem, propose an architecture that reuses existing infrastructure and implement the algorithm on real hardware using real data. Finally, we show that there are additional advantages to using this compression technique besides freeing bandwidth, and these can be even more significant than the obvious ones. Can you afford not to compress housekeeping data? The results in this paper might change your answer.

Nomenclature
AOCS  = Attitude and Orbit Control System
CCSDS = Consultative Committee for Space Data Systems
ESA   = European Space Agency
FER   = Frame Error Rate
HKTM  = Housekeeping Telemetry
OBSW  = On-board Software
PUS   = Packet Utilization Standard
RLE   = Run Length Encoding
SLE   = Space Link Extension

I. Introduction
This paper expands on previous work [1] done at ESA/ESOC in 2009. Up to now we have proved that compression of housekeeping telemetry is possible using a simple algorithm on the ground. Now we want to solve all the practical issues to clear the path for its implementation on a spacecraft.
In sections II-IV the algorithm's performance is tested on different missions and the differences are investigated. In section V we demonstrate a simple improvement which further increases compression performance. In sections VI-VIII many practical issues are dealt with, including how much data we need to store before compressing, risk mitigation and CCSDS compatibility. Sections IX and X describe how the algorithm was tested on representative on-board hardware and how it compares to the CCSDS recommended algorithm for on-board data compression. Finally, we wrap up by describing the many operational advantages that this technique brings in addition to the obvious bandwidth reduction.

II. Previous work
In previous work [1] we investigated how to compress housekeeping telemetry stored on-board in CCSDS packets. The main problem identified was the data mixing occurring in the generation and storage processes. Packets contain parameters, each of which has a fixed length and data type. However, each packet contains a mix of different parameters, which means the lengths and types of data vary as the packet is read. To make matters worse, different packet types are stored into memory areas, called packet stores, in time generation order. This causes even more mixing of lengths and data types as each packet store is read. This data mixing is so disruptive for simple compression algorithms that some expand our test packet store rather than compress it.

In our paper [1] we described a method of preprocessing packet stores so that this data mixing is reversed. The aim was to make the resulting data more compressible using simple algorithms. The following experiment was set up. For a selected test mission, a week's worth of housekeeping data was retrieved from the ESA mission archives. This was then processed to extract the CCSDS source packets, thereby effectively reconstructing the original on-board packet store. This packet store was split into files containing packets with identical PUS structures [2] (i.e. the same CCSDS apid, type, subtype, pi1val and pi2val). A simple process called bit transposition was then carried out on each file. The bit transposition process is represented in Fig. 1. The diagram shows a file where "#" represents one and "." represents zero, arranged so that each row consists of one packet. Therefore each column represents the same bit in each packet, read in time generation order. The traditional feed to the compression algorithm is through the packet (i.e. by row), which experiences many transitions. By changing the feed to read the same bit of each packet before moving to the next bit (i.e. by column), we see that the number of bit transitions is dramatically reduced.

Figure 1. Bit transposition process (each row is one packet, "#" = one, "." = zero; the traditional feed reads row by row, the transposed feed reads column by column)

The bit transposed files were then compressed using the simple Run Length Encoding (RLE) algorithm. RLE works by coding data as a single byte followed by a counter of how many times that byte is repeated. It is therefore very efficient at compressing repetitive information. By comparing the sum of the compressed file sizes to the original packet store size, a compression performance for the week's data was obtained.

We tested the technique on real data from a flying spacecraft, Rosetta. On average, our algorithm compressed Rosetta data down to 14% of the size of the original packet stores. This is a remarkable result and indicates that there is a lot of information redundancy in spacecraft housekeeping data, even after it has been optimised at engineering level. We then compared our results to what could be obtained using the best algorithms on the market. The best algorithm compressed the same Rosetta data down to 9% of the size of the original packet store. To put this in perspective, the difference between the best algorithm on the market and our very simple algorithm was only 5%. We felt that given this performance and simplicity of code, the technique would be very interesting for all missions considering spacecraft housekeeping compression.
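To make the preprocessing concrete, the sketch below shows one possible C implementation of the two steps: bit transposition of a chunk of fixed-length packets followed by simple byte-oriented RLE (value byte plus repeat count). The function names, buffer layout and toy data are illustrative assumptions for this paper, not the code used in our experiments.

    #include <stdio.h>
    #include <string.h>

    /* Transpose a chunk of fixed-length packets bit-wise: the output stream
     * contains bit 0 of every packet, then bit 1 of every packet, and so on.
     * Returns the number of bytes written to out (same total size as input). */
    static size_t bit_transpose(const unsigned char *packets, size_t num_packets,
                                size_t packet_len, unsigned char *out)
    {
        size_t out_bits = 0;
        for (size_t bit = 0; bit < packet_len * 8; bit++) {      /* column */
            for (size_t p = 0; p < num_packets; p++) {           /* row    */
                unsigned char byte = packets[p * packet_len + bit / 8];
                unsigned char val  = (byte >> (7 - bit % 8)) & 1u;
                if (out_bits % 8 == 0)
                    out[out_bits / 8] = 0;
                out[out_bits / 8] |= (unsigned char)(val << (7 - out_bits % 8));
                out_bits++;
            }
        }
        return (out_bits + 7) / 8;
    }

    /* Classic byte-oriented RLE: each run is stored as (value, count). */
    static size_t rle_encode(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t w = 0;
        for (size_t i = 0; i < len; ) {
            unsigned char v = in[i];
            size_t run = 1;
            while (i + run < len && in[i + run] == v && run < 255)
                run++;
            out[w++] = v;
            out[w++] = (unsigned char)run;
            i += run;
        }
        return w;
    }

    int main(void)
    {
        /* Toy chunk: 32 packets of 4 bytes with mostly constant content. */
        unsigned char chunk[32][4], transposed[32 * 4], coded[2 * 32 * 4];
        for (int p = 0; p < 32; p++) {
            memset(chunk[p], 0xF0, 4);
            chunk[p][3] = (unsigned char)p;   /* only the last byte changes */
        }
        size_t t = bit_transpose(&chunk[0][0], 32, 4, transposed);
        size_t c = rle_encode(transposed, t, coded);
        printf("original %zu bytes, compressed %zu bytes\n", sizeof chunk, c);
        return 0;
    }

Feeding the transposed stream rather than the raw packets into rle_encode is the whole trick: the long constant stretches in each bit column become long runs of identical bytes that RLE can collapse.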
III. Results from different missions
A potential gap in our previous experiments was that we only used data from a single mission. Rosetta was chosen because its low downlink data rate meant a significant optimization effort had already been performed during the packet design. For instance, asynchronous packets are used to signal status parameter changes. We therefore assumed that a lot of information redundancy had been removed at engineering level. However, we decided to test this hypothesis by running the same experiment for a much wider range of missions. The results are given in Table 1.

MISSION NAME     MISSION TYPE         COMPRESSION (% of original)
Columbus         Human Spaceflight     5.75%
Rosetta          Interplanetary       14.06%
Venus Express    Interplanetary       18.23%
Proba-1          Technology Demo      25.93%
Herschel         Astronomy            28.00%
Goce             Earth Observation    38.20%
Table 1. Compression performance comparison by mission

We were surprised at the variation in compression performance between missions. Columbus's housekeeping was compressing twice as well as Rosetta's, but GOCE's two and a half times worse. Even the two interplanetary missions that have very similar platforms (Venus Express and Rosetta) showed a variation of 4%. This result was totally unexpected and further investigation was required to determine the cause.

IV. Packet level compressibility
In order to understand the variation between missions, we decided to take a bottom-up approach and look at the variation in the compressibility of the individual packets themselves. As discussed in our previous paper [1], the compressibility of the packet types varies enormously, but the results give no indication as to the reason. To answer this question we needed a way to look inside the packets themselves and investigate the factors influencing compression at parameter level.

To achieve this goal, we followed this procedure. For each mission, between a day's and a week's worth of housekeeping data was retrieved from the ESA mission archives. This was then processed to extract the CCSDS source packets, thereby effectively reconstructing the original on-board packet store. This packet store was split into files containing packets with identical PUS structures [2] (i.e. the same CCSDS apid, type, subtype, pi1val and pi2val) and each file was bit transposed. The transposed files were read column by column and the number of bit transitions in each column counted. Then each column was compressed separately using the RLE algorithm and the amount of compression recorded. For the first time we could really see interesting variations within the packets themselves. The number of transitions in each column and the compression performance for each column are graphed in Figures 2-5 for two packets. The two packets were selected because one represents packets that compress well and the other represents packets that compress badly. They are both from the Herschel mission.
Figure 2. Number of bit transitions in each bit column for a packet that compresses well [CDMU Herschel Periodic P1 HK Parameter Report]

Figure 3. Compressibility (% of original size) of each bit column for a packet that compresses well [CDMU Herschel Periodic P1 HK Parameter Report]
Figure 4. Number of bit transitions in each bit column for a packet that compresses badly [Herschel SCM Mode TM]

Figure 5. Compressibility (% of original size) of each bit column for a packet that compresses badly [Herschel SCM Mode TM]

An obvious advantage of this view is that it becomes easy to identify which parameters are not compressing well. This is an important aid when looking for ways to improve bandwidth usage at all levels. Fig. 4 shows dense regions that all peak at around half the column length, e.g. bit columns 350 to 1000. This indicates that the data is noisy (i.e. the chance of the next bit being the same as the one before is 50%). An example of the parameter behavior from these regions is given in Fig. 6. Other regions show the same plateau but are interspersed with columns that compress well. This makes them look much less dense, e.g. bit columns 1050 to 1650, and indicates that there is more structure to the data. Investigations showed that these areas are associated with parameters that we termed "high precision change", i.e. parameters that change slowly but are sampled with high precision, see Fig. 7. Other areas are even less densely packed, e.g. bit columns 3000 to 3150. These often contained "quantization victims", see Fig. 8. Wide troughs indicate data that is not measured with high precision and that changes slowly.
Figure 6. Noisy parameter (Velocity Error X)

Figure 7. High precision change parameter (Estimated Attitude Q1)
Figure 8. Quantization victim parameter (reaction wheels)

Visual inspections of the packets from the different spacecraft led us to believe that the design of the attitude control system is the governing factor for housekeeping telemetry compressibility. This is because the majority of noisy data is generated by the attitude control system, so it tends to dominate the bandwidth usage after compression. AOCS design (for example whether accelerometers and gyros are on) and system decisions (the frequency and precision of required measurements) are critical. This explains why Columbus data is so compressible (no AOCS data) and why GOCE data does not compress well (the payload accelerometers are used as part of the AOCS control loop). There are also secondary effects related to the mission profile. For example, Rosetta and Venus Express use an almost identical AOCS, but there is a 4% difference in compression performance between them. Inspections showed this difference was mainly due to a special Venus Express AOCS packet used during wheel offloading with a high sampling rate, and due to power subsystem measurements. Venus Express is orbiting Venus in a 24 hour, highly elliptical orbit, with various different pointing attitudes during one orbit and possibly an eclipse. It has a very dynamic thermal and power environment. Rosetta's thermal and power environment is very stable as it slowly orbits the Sun with constant Sun pointing on its way to a comet.

Armed with this understanding we could see improvements at parameter level that would result in fewer transitions and therefore aid RLE compression. For example, a simple rule for the quantization victims might be "in the case of repetitive jumping, always take the highest value". We also identified techniques which result in massive improvements in compressibility if the end user is willing to accept a small amount of error in the measurements. These will be addressed in a separate paper. The aim of this work is to find algorithms that work at the data level, i.e. without requiring knowledge of the parameters being compressed. For our purposes the most important conclusion from these graphs is that the number of bit transitions and the compressibility of a column are well correlated. Therefore any change that results in fewer transitions will help the compressibility.
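A minimal sketch of such a per-column transition count (illustrative C only, not the analysis code used to produce Figures 2-5; the names and the toy chunk are assumptions) could look like this:

    #include <stdio.h>

    /* For every bit column of a chunk of fixed-length packets, count how often
     * the bit flips from one packet to the next.  Columns with few transitions
     * are the ones expected to run-length encode well. */
    static void count_column_transitions(const unsigned char *packets,
                                         size_t num_packets, size_t packet_len,
                                         unsigned long *transitions)
    {
        for (size_t bit = 0; bit < packet_len * 8; bit++) {
            unsigned long flips = 0;
            int prev = -1;
            for (size_t p = 0; p < num_packets; p++) {
                unsigned char byte = packets[p * packet_len + bit / 8];
                int cur = (byte >> (7 - bit % 8)) & 1;
                if (prev >= 0 && cur != prev)
                    flips++;
                prev = cur;
            }
            transitions[bit] = flips;
        }
    }

    int main(void)
    {
        /* Toy chunk: 16 packets of 2 bytes; the second byte changes quickly. */
        unsigned char chunk[16][2];
        unsigned long transitions[16];
        for (int p = 0; p < 16; p++) {
            chunk[p][0] = 0x3C;                    /* constant parameter     */
            chunk[p][1] = (unsigned char)(p * 37); /* fast-changing value    */
        }
        count_column_transitions(&chunk[0][0], 16, 2, transitions);
        for (int bit = 0; bit < 16; bit++)
            printf("column %2d: %lu transitions\n", bit, transitions[bit]);
        return 0;
    }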
V. Influence of numbering format
We began to consider how to reduce the number of bit transitions without going up to the parameter level. A possible variable was the type of computer numbering format used to code the parameter values. For the missions in our tests, we determined that 73% of the parameter definitions in the databases were defined as unsigned integers and 19% as reals. This indicated where one might look for improvements. An example of the influence that numbering format can have on the number of bit transitions is given in Table 2. The transition from 7 decimal to 8 decimal in binary involves four bit column transitions. This effect can be especially severe when reals are used in combination with high precision measurements. For a 32 bit real parameter a relatively small jump in value will cause many bit column transitions.

Decimal   Binary   Gray
   0      0000     0000
   1      0001     0001
   2      0010     0011
   3      0011     0010
   4      0100     0110
   5      0101     0111
   6      0110     0101
   7      0111     0100
   8      1000     1100
Table 2. Comparison between binary and Gray codes

We decided to test the potential benefit of more efficient formatting by running a simple test. We modified the experimental setup to add a step before the files were bit transposed: the whole file was converted to Gray coding [3] at the byte level. Gray coding is a binary numeral system where any two successive values differ by a single bit change. The implementation is extremely simple, as can be seen from Fig. 9. The coding itself is given in Table 2.

    static int binaryToGray(int num) {
        return ((num >> 1) ^ num) & 0xff;
    }
Figure 9. Gray code conversion simplicity

This simple step produced a marked improvement in compression performance for the packet under test. We ran the modification on the full week's data for three missions and each time saw significant improvements, see Table 3.

Mission    Without Gray coding   With Gray coding   Difference
Rosetta    13.86%                11.77%             +2.09%
Herschel   28.00%                23.06%             +4.94%
Goce       38.20%                32.90%             +5.30%
Table 3. Compression performance with and without Gray coding

It must be highlighted that Gray coding was the first and only technique tested in this area. We also applied it without considering parameter boundaries or data types. The good result indicates that there are further gains to be made. Other potential techniques under consideration but not yet tested include fixed point numbers. These are used in the banking world for storing monetary values, where the inexact values of binary floating-point numbers are often a liability. We intend to investigate this field in future work.
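On the ground the byte-level conversion has to be undone before the recovered packets are processed. The paper only shows the encoding direction (Fig. 9); a matching decoder, given here purely as an illustrative sketch in the same style, could be:

    /* Inverse of the byte-level Gray coding in Fig. 9 (illustrative sketch):
     * recover the original binary byte by accumulating the running XOR. */
    static int grayToBinary(int num) {
        num ^= num >> 4;
        num ^= num >> 2;
        num ^= num >> 1;
        return num & 0xff;
    }

For example, binary 7 (0111) is coded by Fig. 9 as Gray 0100, and grayToBinary(0x04) returns 7 again, in line with Table 2.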
VI. How much data is needed for efficient compression?
Up to this point we had always used a large amount of data to perform the compression. Any practical system would have to work on smaller datasets due to on-board memory buffer size constraints. To investigate the impact of this, we modified the experimental set-up. The packet store was split into files containing packets with identical PUS structures [2] (i.e. the same CCSDS apid, type, subtype, pi1val and pi2val). Then every file was further split into chunks containing a certain number of packets, and each chunk was bit transposed and compressed with RLE. By comparing the sum of the compressed chunk sizes to the original packet store size, a compression performance was obtained. By varying the number of packets allowed in a chunk, the relationship between compression performance and the number of packets in the chunks could be determined for different packets. The results are shown in Fig. 10.

Figure 10. Compression performance (% of original size) against the number of packets in a chunk, for different packet types

The results show that the majority of the compression can be attained using chunks of just 48 packets. The increase in compression performance between this small chunk size and very large ones is only 10%. Furthermore, there appears to be a point at about 512 packets beyond which the performance gain levels off completely. Therefore there is little point in compressing chunks that are bigger than 512 packets. We found this characteristic in all the packets tested. Initially we were surprised, because it means the absolute sizes of the input chunks for various packets will vary in relation to the different packet lengths. However, we came to realize that this is a natural consequence of the bit transposition process. For the RLE coding to work efficiently it requires a certain column size (determined by the number of packets in the chunk) and is therefore independent of row size (packet length).

The consequences of this are quite profound. The simplicity of the RLE algorithm means that we can already state the required minimum size of the input chunks (48 x packet size) and the amount of time we have to wait before this data can be compressed (48 x packet generation period). These are generic rules and will simplify the on-board software and operations system design.

Another important point is that 48 packets is a surprisingly low number. In fact it is so low that we could consider using this compression technique to satisfy real-time control and analysis requirements at the same time. For example, imagine we had a requirement to check a parameter value on the ground every minute and a data analysis requirement to sample it every second. We would usually have to sample the parameter every second and send the packets directly to the ground. Now we have an alternative: sample once a second and store the results in a buffer containing 48 packets. As soon as the buffer is full, compress these packets and send them to the ground. This would meet both the real-time update and the high frequency sampling requirements while only using a fraction of the bandwidth (if the packet compresses well). Furthermore, if the packet compresses well we could increase the sampling frequency to get real-time updates faster. High frequency sampling like this could be useful for spike capture, for instance. Another practical application might be telecommunication satellites. These missions usually have relatively low telemetry downlink rates, making fast access to high frequency sampling impossible. This technique would allow the operations engineers to sample fast and still check the data frequently.
VII. Does this compression technique increase mission risk?
One of the most frequent arguments against housekeeping data compression has been that it will increase mission risk; in this context, the risk of loss of information. Before discussing this, it is important to understand that the critical link on which information can be lost is the space-ground link. The error correction applied for the space data link works at frame level, i.e. either all the data in a frame is received or the full frame is lost. Therefore it makes little sense to talk of individual packet loss; when we lose data we lose the whole frame. The risk of losing a frame is measured by the Frame Error Rate (FER). The link is usually designed to provide a FER ≤ 10^-5 (some missions use FER ≤ 10^-6, and for missions with science data compression a FER ≤ 10^-7 may be used). As this is defined for a worst case situation, and the FER curve with respect to the signal to noise ratio is usually very steep, the actual FER during nominal operations is much lower, i.e. loss is even less probable. Possible exceptions are very low elevations and/or very bad weather.

To investigate the impact of compression on information loss we built a simulator. This consisted of a system whereby a packet store could be preprocessed, compressed and then sent through a simulated ground-space link with a chance of frame loss equal to the FER. Then the total amount of retrieved information was compared to that transmitted as a figure of merit.

Using a FER of 10^-5 it was impossible to detect any information loss at all. This is obviously not the case in the real world, so we investigated further. Frames are usually lost due to problems outside the link itself, e.g. operator errors (wrong settings) or glitches during switching. Also, for extreme contingency situations (e.g. safe mode) the losses are typically not due to the signal to noise ratio of the link but due to intermittent contact because of a spinning spacecraft. The operationally relevant loss statistics during nominal operations are thus typically not thermal/Gaussian. It became clear that to investigate the relationship between compression and information loss it was necessary to do something drastic. In the end we had to increase the FER to an incredible 10^-2 before we saw that information loss was equal to the FER multiplied by the average number of frames needed to carry each chunk.

In theory, the chances of losing information across a link should be exactly equal to the FER whether the data is compressed or not. This is because with compression a frame loss means more data is lost, but you also need to send fewer frames. These two effects exactly cancel each other out. However, this is not true if a frame loss implies that data in other frames becomes unusable. In the extreme, a single frame loss of compressed data could cause all the information transmitted in a pass to be unusable, like losing part of a zip file.

How can we avoid this? We can ensure that each compressed chunk's data fits in a single frame. This solution is possible but would impose constraints on packet design. So we looked for a solution that would allow us to uncompress the information in a frame even if it was carrying just part of a chunk's data. In this case using RLE becomes a major advantage. As it works by storing consecutive identical bytes as a single data value and a count, it is already self contained and can be uncompressed without needing any other data.

Since we have no problem expanding the data with RLE, we just need enough information to reverse the bit transposition process. Primarily we would need to know the number of packets in the chunk and the position in the chunk (column and row) of the first byte compressed in this frame. For timing and identification information we would also need the compressed chunk packet headers. We could either ensure these are transmitted twice (dual diversity) or add the information to each frame. Either way the overhead would be very small, as packet headers compress exceptionally well.

With this system, there is no longer any increase in information loss risk due to compression. The only difference is that we would lose some parameter values over a longer period as opposed to all parameter values for a shorter period. Again, the ability of RLE to work efficiently with small chunks of data means that even here the risk is minimal and deterministic. If we are risk averse, we already know that the maximum time period over which we might lose some parameter values is the time it takes us to generate 48 packets.

If we really cannot accept any loss (a stricter requirement than presently applied) then we could establish a transport layer protocol over the space-ground link. Requesting re-dumps requires storing the data after it has been transmitted until confirmation of reception. This would be no problem, as this data could be stored as small compressed chunks and identified as such by the protocol. This looks like it calls for a file based solution; however, in section VIII we propose a CCSDS compatible solution. Given the low FERs discussed earlier, resend requests would be so rare there would be almost no overhead on the link.

If this is impractical due to light time issues then selective dual diversity could be employed. Here some of the bandwidth released by compression is used to send selected compressed packets twice.
This is done with a reasonable separation in time between the transmissions to avoid the clustered losses we see in real life. The selection of which packets to send twice is based on which packets the mission decides it cannot afford to lose under any circumstances. A combination of techniques could be used for deep space missions. The transport layer protocol could be used during a pass until the point at which the approaching end of pass makes it impractical to request resends. At this point (equivalent to the present two-way light time) the system could be switched to dual diversity.

In summary, for a standard link the risk of losing packets is already very low. If the compressed frames are self contained then the chances of losing information are identical whether compression is used or not. The big advantage of the compression is that it releases time and bandwidth for risk mitigation techniques like delivery protocols and dual diversity. This can be clearly exploited in the situation of an intermittent contingency link where bandwidth is limited and the loss of a frame could be a major problem. Compression in combination with self contained frames offers further bonuses: it increases the amount of data received, increases the speed with which data are received, increases the probability of reception because data can be sent more often, and increases the probability of receiving a full data set. So the answer to the title of this section is clear: "No, this compression technique does not increase mission risk, and the resources it releases can be used to reduce risk far below that normally accepted". Those missions that are concerned about the risk of information loss must compress their housekeeping telemetry.

VIII. CCSDS compatibility
Two common objections to our proposal have been that compression will result in a system that is incompatible with current CCSDS standards and that it requires a file system on-board. Before addressing these concerns it is important to recognize that we are proposing an algorithm that will work with any dataset as long as it exhibits "packet like" qualities. This means the data must consist of fixed length messages containing fields which correspond to a particular measurement. Each field must be fixed in position and bit length. This definition is very broad and could apply as easily to lines in a file as to CCSDS packet stores. However, we understand the underlying concern is the reuse of existing infrastructure. In Fig. 11 we propose an architecture that answers this concern.

A CCSDS packet store is split by packet structure, bit transposed and compressed by RLE. The resulting data is then written into special packets that are each sized to fill one transfer frame. The special packets are written into another, smaller packet store. Once completed, the data in the original packet store can be overwritten, thus increasing the effective on-board storage.

During ground coverage the packets in the compressed packet store are transported to the mission control centre using the usual processes (PUS services, CCSDS frames, SLE etc.). Just before the mission control system, the special packets are routed to a preprocessor that decompresses them and reverses the bit transposition process. The recovered original packets are then released to the router and passed to the mission control centre, where they are processed as normal. This architecture reuses every aspect of the classic existing infrastructure and can be controlled using the standard operational practices.

Figure 11. Proposed CCSDS compatible architecture: Packet Store -> Packet extraction -> Bit transposition -> RLE encoding -> Store in special "frame length" packets -> Compressed Packet Store -> Transfer to mission control centre using normal PUS services, CCSDS frames, coding, modulation etc. -> Router (compressed packets routed to the pre-processor for decompression) -> Pre-processor -> Recovered packets passed to the Mission Control System for normal processing
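To make the "self contained frame" idea of section VII concrete within this architecture, each special packet only needs a small amount of side information ahead of its RLE payload. The C sketch below shows one possible header layout; the field names and widths are purely illustrative assumptions, as no such format is defined in this paper.

    #include <stdint.h>

    /* Hypothetical header for a "frame length" special packet, sketching the
     * side information discussed in sections VII and VIII.  Illustrative only. */
    struct compressed_chunk_segment {
        uint16_t chunk_id;         /* identifies the chunk (packet structure + epoch)    */
        uint16_t packets_in_chunk; /* rows: needed to reverse the bit transposition      */
        uint16_t packet_length;    /* bytes per packet, i.e. number of bit columns / 8   */
        uint16_t start_column;     /* bit column of the first byte carried in this frame */
        uint16_t start_row;        /* packet index within that column                    */
        uint16_t payload_length;   /* number of RLE-coded bytes that follow              */
        /* followed by payload_length bytes of RLE-coded, bit-transposed data */
    };

With a header along these lines, a frame that arrives in isolation can still be expanded and mapped back onto the right bit columns and packets of its chunk, which is exactly the property the risk argument of section VII relies on.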
IX. Is the technique feasible in terms of on-board software?
Although the algorithm appeared simple to implement on-board, we realized the only way to answer this question was to perform a test using real target hardware. In February 2010, a prototype was developed by the Software Systems Engineering section at ESA/ESTEC with the aim of checking the feasibility of a flight implementation. Our bit transposition and RLE algorithms were recoded to respect space coding standards (no dynamic memory allocation, no system calls). The RLE algorithm was completely re-written and simplified further to work with buffers rather than files.

The ASSERT/TASTE tool chain (http://www.assert-project.net/) was then used to automatically generate a cyclic task for the LEON2 processor and produce a binary that was directly loaded on the target. When running under TSIM the code worked immediately, thus proving the feasibility of the approach.

The code was then run on a real LEON2 board in order to establish timing performance. The 212 byte long "PDU normal acquisitions packet" from Venus Express was used for the test. A chunk size of 200 packets was selected, meaning the input buffer was 42.4 kilobytes. The algorithm compressed this chunk to a size of 6.55 kilobytes (i.e. 15% of the original) in 523 milliseconds. Although this single test is not enough to establish complete timing requirements, a quick calculation indicates that this performance is completely adequate. Housekeeping telemetry generation rates range from 1 to 10 kbps, so this algorithm compresses the data between 65 and 650 times faster than it is generated.

The only uncertainty identified in the process is the sizing of the resulting compressed data. The caller has to provide a buffer that is big enough for the worst situation. Although we had never seen the RLE coding expand a chunk, we used our column view of packets to check what was happening for each column. Indeed we saw that some columns were expanding, see Fig. 5. Therefore it is theoretically possible this could happen for the whole packet. To solve the problem we changed the compression algorithm to use an alternative RLE scheme. This works by adding a one byte counter only if two identical bytes are followed by a third. Therefore it is impossible to produce an expansion; the worst that can happen is that the compressed chunk is the same size as the input chunk. The advantage is that we only ever need to allocate an output memory buffer which is the same size as the chunk being compressed. We compared this alternative coding to classic RLE and confirmed that the compression performance was similar and sometimes slightly better.
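The exact coding of this variant is not spelled out here. As a rough illustration of the family of schemes it belongs to, the following sketch copies literal bytes unchanged and inserts a one-byte counter of extra repetitions after a pair of identical bytes, so data without repeated bytes passes through at its original size. Names and details are illustrative assumptions, not the flight implementation.

    #include <stdio.h>
    #include <string.h>

    /* RLE variant in the spirit of section IX: literals are copied unchanged;
     * a repeated byte pair acts as the marker that a counter byte follows. */
    static size_t rle2_encode(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t w = 0, i = 0;
        while (i < len) {
            unsigned char v = in[i];
            size_t run = 1;
            while (i + run < len && in[i + run] == v && run < 257)
                run++;
            out[w++] = v;
            if (run >= 2) {
                out[w++] = v;                         /* the pair is the marker     */
                out[w++] = (unsigned char)(run - 2);  /* extra repetitions (0..255) */
            }
            i += run;
        }
        return w;
    }

    /* Decoder for the scheme above; assumes a well-formed encoded stream. */
    static size_t rle2_decode(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t w = 0, i = 0;
        while (i < len) {
            unsigned char v = in[i++];
            out[w++] = v;
            if (i < len && in[i] == v) {              /* pair seen: counter follows */
                i++;
                unsigned char extra = in[i++];
                memset(out + w, v, (size_t)extra + 1);
                w += (size_t)extra + 1;
            }
        }
        return w;
    }

    int main(void)
    {
        unsigned char in[64], enc[2 * 64], dec[64];
        memset(in, 0, 32);             /* long run: compresses           */
        for (int i = 32; i < 64; i++)  /* no repeats: passes through 1:1 */
            in[i] = (unsigned char)i;
        size_t e = rle2_encode(in, sizeof in, enc);
        size_t d = rle2_decode(enc, e, dec);
        printf("in %zu bytes, encoded %zu bytes, round-trip ok: %d\n",
               sizeof in, e, d == sizeof in && memcmp(in, dec, d) == 0);
        return 0;
    }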
X. Comparison with RICE [4]
The CCSDS recommendation for data compression is RICE. The straight application of RICE to housekeeping packet stores results in bad compression and sometimes expansion. However, this algorithm should benefit from the same type of pre-processing techniques we have employed for RLE. In collaboration with the Onboard Payload Data Processing section at ESA/ESTEC we performed some comparison tests between the two techniques. The results are shown in Table 4. As expected, both RICE and RLE benefit from transposition at the bit and the byte level. The bit transposition-RLE technique performed better for packets that compressed well. On the other hand, byte transposition-RICE performed slightly better for packets that did not compress well.

Herschel Packet Description                         RICE alone   Byte Trans. RICE   Bit Trans. RICE   Byte Trans. RLE   Bit Trans. RLE
Herschel SCM Mode TM                                  102.70          60.13              67.51             74.51            61.04
CDMU Herschel Periodic P1 HK Parameter Report          80.90          12.61              14.31              9.72             7.13
PACS_PHOT_HK                                           98.06          44.67              32.41             61.95            26.46
H TM Data for 9 Stars Com1                             67.95          25.30              30.46             28.98            23.21
Nominal HK Parameter Report                            89.08          12.67              14.94             15.11             8.33
CDMU Herschel Diagnostic ASW1 HK Parameter Report     104.17          48.72              59.29             63.46            52.24
Table 4. Compression performance (% of original) for RICE/RLE with byte/bit transposition

An interesting packet in Table 4 is PACS_PHOT_HK. This compresses significantly better using bit transposition than byte transposition (the difference between bit transposition-RLE and byte transposition-RICE is 18.21%). Investigations revealed that the PACS_PHOT_HK packet contains many parameters packed within single bytes, whereas the other packets contain parameters that are at least one full byte long.
We believe this result highlights an advantage of the bit transposition process: fewer constraints on packet design. If a packet contains many parameters with small bit lengths then byte transposition is not effective. On the other hand, the bit transposition process can prepare the data effectively no matter how the parameters are mixed. For the packet designer this is an advantage, as the order and mix of parameters in a packet no longer need to be considered for compression.

In summary, we can say that if bit transposition is used, the RLE algorithm has obvious advantages in terms of both performance and simplicity. However, if byte level data transposition is chosen then RICE has the performance advantage while RLE has the simplicity advantage. This is an interesting trade-off which we intend to study in the future.

XI. Housekeeping compression advantages
In our last paper, we discussed housekeeping compression primarily from a bandwidth saving point of view. This provides a direct cost saving by enabling shorter or fewer passes, or by freeing bandwidth for application data. We now take the opportunity to discuss less obvious advantages. The authors strongly believe that they are often far more significant than the direct bandwidth savings.

Firstly, time: spacecraft control is a constant race against the clock, and housekeeping compression will give an edge to those operators that employ it. Control centre operations costs are dominated by manpower, and this is effectively charged by time. By cutting the time it takes to perform an operation we cut waiting time and so make that manpower more efficient. Manpower costs do not just vary with "how long" but also "when". If we can reduce the time an operation takes, we also get more choice on when to do it. This can reduce extra costs like overtime or shift work premiums.

Another time element to consider is that housekeeping compression will drastically cut reaction times. This is useful even if there are no strict requirements on ground response time, timeliness etc. If the full housekeeping data set is available faster on the ground then it will enable operational feedback loops otherwise not feasible. We have already given the example of transport protocols in the risk section, but the same principle applies at the engineering level. For example, the required spacecraft autonomy period during a solar conjunction communication blackout for an interplanetary mission could be shortened, simply because all data can be made available within the first pass after the blackout period ends.

Secondly, packet and system design: operations preparation costs make up a significant proportion of the overall mission operations cost. A part of that work is to carefully select the parameters, and their respective sampling rates, in the housekeeping packets. The aim is to get a balanced compromise between bandwidth use and information content. This requires an in-depth understanding of the spacecraft and mission, and involves a lot of packet definition work that may even have to be adapted to different mission phases. To reduce the required bandwidth for certain parameters, events are sometimes defined that trigger when parameter values go out of pre-defined limits. This requires not only a selection of parameters but also an anticipation of their variation and of the respective operational relevance. Taking into account the large number of parameters and the difficulty of precisely predicting satellite behaviour, it is quite obvious that this drives operational engineering design effort. It also requires a high test effort, as the packet and event definitions are custom made.

How can housekeeping compression help? It allows us to include many more parameters that compress well in our stored housekeeping packets without impacting the bandwidth usage. There is no need for a careful selection of sampling rates for these parameters, as a higher sampling rate no longer has a negative impact. The work required to define asynchronous events for recording changes in these parameters can be eliminated. The work on the ground required to process these asynchronous events can be eliminated. All the associated work of testing these different processes and configurations, both on the satellite and on the ground, can be eliminated. The need for a detailed anticipation of the possible satellite behavior for these parameters can be eliminated. For some missions this will amount to a significant saving in engineering effort. At the same time we reduce operational risk, because all parameters of operational relevance can be included in the stored housekeeping packet(s) rather than having to make trade-offs at the engineering level.

XII. Future work
Since starting this work in February 2009, the authors have been repeatedly surprised at how simply and effectively housekeeping compression can be applied to solve various mission operations concept problems. It is fair to say that it is becoming a standard solution during Phase 0/A work at ESA/ESOC. It is also the baseline for a mission in Phase B: PROBA-3 plans to demonstrate how a formation can be controlled via ground communication with only one spacecraft. In this case the driving force is the bandwidth restriction caused by the inter-satellite link.
XIII. Conclusion
We have seen that the bit transposition-RLE algorithm is extremely effective at compressing housekeeping telemetry for a wide variety of missions. Variations in compression performance for different missions and packets were investigated using a powerful new method to analyze packets at parameter level. This has allowed us to determine that the main driver for housekeeping compressibility is the AOCS design of the mission. We have started work on trying to further improve compression performance at the data level, i.e. without requiring knowledge of the parameters being compressed. The potential of this route was demonstrated by the simple application of Gray coding, which improved relative compression performance by 10-20%. The algorithm now compresses the Rosetta housekeeping packet store three times more effectively than running zip on the original data.

Experiments showed that the algorithm already compresses effectively with just 48 stored packets as input. This quality is generic and will simplify the on-board software and operations system design. This surprisingly small amount opens up the possibility of combining high frequency sampling and near real time monitoring.

The technical feasibility of the proposed algorithm was demonstrated by implementing it on real spacecraft hardware. Preliminary performance results on a LEON2 processor showed that it can compress housekeeping data at 650 kbps, which is between 65 and 650 times faster than the housekeeping generation rate. (The only problem identified was how to deal with the risk of data expansion rather than compression. A slight modification of the algorithm has eliminated that risk.)

The compression performance of the algorithm was compared to the present CCSDS recommendation (RICE). We saw that it often outperforms the more complicated RICE algorithm even when the same preprocessing is applied. The results also demonstrated that bit transposition offers advantages over byte transposition in that it can prepare the data effectively no matter how parameters are mixed together within the packets. This will help to simplify packet design.

We have shown how the simplicity of the RLE algorithm can be used to make the information in each frame self contained. This means that this compression technique never increases the risk of information loss. In fact, the opposite is true, as the resources released (time, bandwidth) can be used to dramatically reduce mission risk. This leads to the counter-intuitive result that missions that are concerned about the risk of information loss must compress their housekeeping telemetry, in particular during contingency situations.

Finally, we have argued that there are additional advantages to housekeeping compression besides freeing bandwidth or reducing ground station usage. These include increasing the cost efficiency of control centre manpower and exploiting shorter reaction times by implementing new operational feedback loops. There are also significant savings to be made in the operations preparation phase, as the need for careful selection of parameters and sampling rates can be eliminated for those parameters that compress well. The authors believe that these advantages may well be even more significant than the direct savings.

Bit transposition-RLE is an incredibly simple but powerful technique for compressing housekeeping telemetry. Its properties are so deterministic and generic that the on-board software can be written independently from the specific mission design. It has been proved technically feasible from an on-board software point of view, has demonstrated good performance on the target hardware and can be implemented without changing the present CCSDS/PUS based infrastructure. Implementing it will save ground station usage, increase mission return, increase on-board storage, enable more efficient use of operations manpower, reduce engineering costs in the preparation phase, simplify packet design and decrease mission risk. The message is clear: there is very little downside and everything to gain. Given these results there is no longer any mission which can afford to leave housekeeping data uncompressed.

Acknowledgments
The authors wish to thank Dr. Roland Trautner of the Onboard Payload Data Processing/Payload Support section in ESA/ESTEC for supporting the RICE comparison.

References
[1] Martinez-Heras, J.A., Evans, D., Timm, R., "Housekeeping Telemetry Compression: When, how and why bother?", International Conference on Advances in Satellite and Space Communications (SPACOMM 2009), Colmar, France, July 20-25, 2009.
[2] ECSS-E-70-41A, "Ground systems and operations - Telemetry and telecommand packet utilization", 30 January 2003.
[3] Gray, F., US patent application for "Pulse code communication", number 2632058, filed November 13, 1947.
[4] CCSDS 120.0-G-2, "Lossless Data Compression, Green Book", The Consultative Committee for Space Data Systems, December 2006.
