#include <ASC_DB_macros.h>
Defines
    #define CDM_EMP_DB_ASC_DB_DATA_INSTANCE   ASC_DB_INSTANCE
    #define CDM_EMP_DB_ASC_DB_DATA_HANDLE     data
    #define INPUT_BUFFER_COUNT   4
        The number of available input buffers.
    #define INPUT_EVENT_PRESCALE   10
        The input event prescale factor.
    #define OUTPUT_BUFFER_COUNT   5
        The number of maximally sized output buffers.
    #define OUTPUT_EVENT_COUNT   100000
        The number of events to process before posting an output buffer. This value is after any prescaling.
    #define OUTPUT_TIMEOUT_MSEC   10000
        The minimum time before posting an output buffer.

Variables
    static const ASC_DB_Schema data = { ASC_DB_DATA }
        Configurable settings for the ASC system.
CVS $Id: ASC_DB_data.h,v 1.2 2011/03/25 21:56:37 apw Exp $
#define COMPRESSION_LEVEL 5
The GZIP compression level.
The value of this parameter can range from 0 to 9, where 0 represents no compression and 9 the maximal compression. The setting of this value is a compromise between the compression factor and the CPU time it takes to do the compression. Typical compression level vs. speed:

    Level | Kbytes | Time (msecs)
        0 |  16544 |   0.25
        1 |   1560 |   1.25
        2 |   1479 |   1.33
        3 |   1399 |   1.38
        4 |   1368 |   2.07
        5 |   1328 |   2.27
        6 |   1302 |   2.98
        7 |   1284 |   4.03
        8 |   1203 |  10.90
        9 |   1140 |  28.20
All times are on a background event sample run on FLORA02. Typical RAD750 times should be about a factor of 3 higher.
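The compromise can be made concrete from the table above. A minimal sketch (taking the level-0 row as the uncompressed size, and 1 Kbyte = 1024 bytes):

```c
#include <assert.h>

/* Measurements from the table above (background event sample on FLORA02).
   Level 0 is no compression, so kbytes[0] is the uncompressed size. */
static const double kbytes[10] = { 16544, 1560, 1479, 1399, 1368,
                                    1328, 1302, 1284, 1203, 1140 };
static const double msecs[10]  = { 0.25, 1.25, 1.33, 1.38, 2.07,
                                   2.27, 2.98, 4.03, 10.90, 28.20 };

/* Compression factor of a given level relative to no compression. */
static double compression_factor(int level)
{
    return kbytes[0] / kbytes[level];
}
```

With these numbers, level 5 compresses by a factor of about 12.5 in 2.27 msecs, while level 9 reaches only about 14.5 at more than 12 times the CPU cost (28.20 vs. 2.27 msecs), which is why 5 is a reasonable middle ground.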
#define INPUT_BUFFER_COUNT 4
The number of available input buffers.
This is not a critical parameter, and the buffering requirements of the ASC package are not high. A buffer is out of action from the time it is posted for output until it is returned to the free pool. The processing time is the time it takes to compress the data to an output buffer and clear the buffer, on the order of 50 milliseconds. Given that the inter-post times are on the order of multiple seconds, a couple of buffers should be sufficient.
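The pool described above can be sketched as a fixed free list; the names here are hypothetical, not the actual ASC API:

```c
#include <stddef.h>

#define INPUT_BUFFER_COUNT 4

/* Hypothetical sketch of the input buffer pool: a buffer is out of
   action from acquire (posted for output) until release (returned). */
typedef struct {
    int in_use;
    /* payload omitted */
} Buffer;

static Buffer pool[INPUT_BUFFER_COUNT];

/* Take a free buffer from the pool, or NULL if all are in flight. */
static Buffer *buffer_acquire(void)
{
    for (size_t i = 0; i < INPUT_BUFFER_COUNT; ++i) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            return &pool[i];
        }
    }
    return NULL;
}

/* Return a buffer to the free pool. */
static void buffer_release(Buffer *b)
{
    b->in_use = 0;
}
```

With ~50 milliseconds of busy time per buffer against multi-second posting intervals, the pool of 4 is effectively never exhausted.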
#define INPUT_EVENT_PRESCALE 10
The input event prescale factor.
This determines how many events are actually processed. For example, if this value is 10, then only every tenth event is processed.
A value of 10 is typical. It is based on the processing of an event taking about 20 usecs and the wish to limit the average cost to less than 5 usecs per input event.
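The prescale gate is a simple modulo counter; a minimal sketch (the function name is hypothetical):

```c
#define INPUT_EVENT_PRESCALE 10

/* Accept one event out of every INPUT_EVENT_PRESCALE; returns nonzero
   when the incoming event should actually be processed. */
static int prescale_accept(void)
{
    static unsigned count;
    return (++count % INPUT_EVENT_PRESCALE) == 0;
}
```

At about 20 usecs per processed event and a prescale of 10, the average cost per input event is 20 / 10 = 2 usecs, comfortably inside the < 5 usecs budget.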
#define OUTPUT_BUFFER_COUNT 5
The number of maximally sized output buffers.
This is not a critical parameter. An output buffer is unavailable from the time it is posted to the output SSR stream until it is returned. This time is roughly the transfer time at the SSR output bandwidth of 20 Mbits/sec plus an LCB posting overhead of 75 usecs. Given output buffer sizes ranging from about 1-16 Kbytes, or 8-128 Kbits, the output buffer time is dominated by the LCB posting overhead, roughly 0.1 milliseconds. With postings occurring on the order of multiple seconds, this gives plenty of time to return the buffer to the free list.
Also note that this is a worst-case number based on maximally sized output buffers. Typical compression is on the order of a factor of 16, so even a value of 2 would provide on the order of 32 output buffers.
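A quick back-of-the-envelope check of the timing argument above, assuming 1 Kbyte = 1024 bytes (the function name is hypothetical):

```c
/* Sanity check of the buffer-busy estimate: transfer time at the SSR
   output bandwidth plus the fixed LCB posting overhead. */
#define SSR_BANDWIDTH_BPS  20.0e6   /* 20 Mbits/sec */
#define LCB_OVERHEAD_SEC   75.0e-6  /* 75 usecs */

/* Seconds an output buffer of the given size is unavailable. */
static double buffer_busy_sec(double kbytes)
{
    double bits = kbytes * 1024.0 * 8.0;
    return bits / SSR_BANDWIDTH_BPS + LCB_OVERHEAD_SEC;
}
```

Even a worst-case 16 Kbyte buffer is busy for well under 10 milliseconds, tiny against multi-second posting intervals, so 5 buffers leave a large margin.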
#define OUTPUT_EVENT_COUNT 100000
The number of events to process before posting an output buffer. This value is after any prescaling.
This number is based on limiting the SSR load to 1500 bytes/sec, i.e. about 1% of the available bandwidth. Compressed packets seem to be around 1 Kbyte (worst case is 16 Kbytes). An event count of 10,000 at a 10 KHz input rate and a prescale of 10 results in about 100 bytes/second, worst case 1.6 Kbytes/second.
A value of 0 disables this event count. When this is done, the output posting is usually controlled by specifying a non-zero value for OUTPUT_TIMEOUT_MSEC.
When used with OUTPUT_TIMEOUT_MSEC, the first to expire, either the event count or the timer, forces the buffer to be output.
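The posting rule, with either trigger disabled by a value of 0, can be sketched as follows (the function name is hypothetical):

```c
#define OUTPUT_EVENT_COUNT   100000
#define OUTPUT_TIMEOUT_MSEC  10000

/* Post the output buffer when either the event count or the timeout
   expires, whichever comes first; a value of 0 disables that trigger. */
static int should_post(unsigned events, unsigned elapsed_msec)
{
    if (OUTPUT_EVENT_COUNT  && events       >= OUTPUT_EVENT_COUNT)  return 1;
    if (OUTPUT_TIMEOUT_MSEC && elapsed_msec >= OUTPUT_TIMEOUT_MSEC) return 1;
    return 0;
}
```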
#define OUTPUT_TIMEOUT_MSEC 10000
The minimum time before posting an output buffer.
This number is based on limiting the SSR load to 1500 bytes/sec, i.e. about 1% of the available bandwidth. Compressed packets seem to be around 1 Kbyte (worst case is 16 Kbytes), so a value equivalent to 10 seconds would be around 100 bytes/second, worst case 1.6 Kbytes/second.
A value of 0 disables the timeout. When this is done, the output posting is usually controlled by specifying a non-zero value for OUTPUT_EVENT_COUNT.
When used with OUTPUT_EVENT_COUNT, the first to expire, either the event count or the timer, forces the buffer to be output.