SAP Sybase Event Stream Processor Performance
The Uncontested Performance Leader in Complex Event Processing (CEP)
Sybase was the first CEP vendor to publish independently verified performance benchmarks and, to date, remains the only one to have done so. The Securities Technology Analysis Center (STAC) published performance benchmarks on the Sybase CEP engine in September 2008. STAC is an independent group that performs lab tests on financial technology to help capital markets firms make informed decisions about which technology to use and how to use it, and it has established a reputation for thoroughness and objectivity. The STAC-certified benchmarks are publicly available, and to date no other CEP vendor has published comparable performance data, leaving SAP Sybase Event Stream Processor the uncontested leader.
When Performance Matters
Many firms are using CEP for applications that have very high performance demands. Some need to be scalable so that they can handle very high message rates - hundreds of thousands of messages per second. Others need very low latency - producing results within a few milliseconds or less. Some applications need both - predictably low latency, even under heavy loads. Not all CEP applications have high performance demands, of course, but for those that do, it's important to know that your CEP engine can deliver.
How Performance is Measured
Measuring performance is a tricky business, and understanding how it's measured is critical to interpreting the results. Is throughput measured as the total number of incoming events per second? As the total of incoming and output events? Or, as some rules engines measure it, as the number of "event rules" fired per second? What are the testing conditions, and how closely do they replicate real-world conditions? Are measurements taken at "steady state" message rates, or under a load that includes peaks? The phenomenon known as "micro bursts" - very high peak rates of very short duration - can dramatically affect performance.
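To make the micro-burst point concrete, here is a minimal, purely illustrative Python sketch (not part of the STAC methodology) that buckets event timestamps into small windows. Two feeds with an identical per-second average rate can show peak windowed rates that differ by a factor of 100.

```python
from collections import Counter

def windowed_rates(event_times_ms, window_ms=10):
    """Bucket event timestamps (in ms) into fixed windows, scaled to events/sec."""
    counts = Counter(int(t // window_ms) for t in event_times_ms)
    return {w: c * (1000 / window_ms) for w, c in counts.items()}

# A synthetic feed: 1,000 events spread evenly over one second (steady state)...
steady = [float(i) for i in range(1000)]
# ...versus the same 1,000 events packed into a single 10 ms micro burst.
burst = [i * 0.01 for i in range(1000)]

print(max(windowed_rates(steady).values()))  # ~1,000 events/sec in every window
print(max(windowed_rates(burst).values()))   # ~100,000 events/sec at the peak
```

Both feeds average 1,000 events per second, so a benchmark that reports only steady-state throughput would treat them as equivalent, even though the burst briefly demands 100 times the processing rate.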
Measuring latency in a meaningful way is even harder. First, there's the challenge of measuring latency in microseconds when normal timestamps have millisecond granularity, and the fact that at this level the mere act of adding timestamps can distort the results. Machine clocks cannot be synchronized to this level of precision, so beginning and ending timestamps need to be taken on the same machine. Then there's the question of what you are measuring: do you measure end-to-end, from the perspective of a client application, or across individual components or sub-components? And what data is reported? Applications that are highly sensitive to latency need to pay attention not just to average latency but to maximum latency. If there is a lot of "jitter" in the system, average latency might look reasonable while the system displays occasional latency "spikes" that are 100 times the average. That could be a problem.
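The average-versus-spike distinction can be illustrated with a small Python sketch (illustrative only; the 50 ms pause, the spike probability, and the sample counts are invented for the example): the mean stays near the baseline while the maximum exposes rare spikes roughly 100 times larger.

```python
import random

def latency_stats(samples_us):
    """Summarize latency samples (microseconds): mean alone can hide spikes."""
    s = sorted(samples_us)
    p99 = s[int(0.99 * (len(s) - 1))]
    return {"mean": sum(s) / len(s), "p99": p99, "max": s[-1]}

random.seed(7)
# Mostly ~500 us responses, but roughly 1 in 1,000 hits a hypothetical 50 ms stall.
samples = [random.gauss(500, 50) if random.random() > 0.001 else 50_000
           for _ in range(100_000)]
stats = latency_stats(samples)
# The mean stays near 0.5 ms and even the 99th percentile looks tame,
# while the maximum is ~100x the mean -- the "spike" problem in miniature.
```

This is why the STAC results below report 99th-percentile figures alongside the mean: a percentile (or maximum) is needed to reveal jitter that averages conceal.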
STAC-Certified Performance Data for Sybase Aleri Streaming Platform, the predecessor to the SAP Sybase Event Stream Processor
- Mean latency not exceeding approximately 1.5 ms at ingress rates of up to 180,000 order book updates per second, and 1.6 ms at 300,000 updates per second.
- 99th-percentile latency not exceeding approximately 3.0 ms at ingress rates of up to 180,000 order book updates per second, and 3.2 ms at 300,000 updates per second.
- All Sybase Aleri Streaming Platform components ran on a single Solaris-based Intel server, including the Aleri adapter for RMDS/OMM, the Aleri event processing platform, and the STAC test client consuming data from the Aleri platform through the Aleri API.
Details on the Test Used to Produce the Results Above
To simulate real-world conditions, the STAC test of Sybase Aleri Streaming Platform used an order book aggregation model that consolidated equity order book data across multiple exchanges. The specific model chosen for testing required the CEP platform to maintain the state of all order books and apply new messages from each exchange as inserts (new orders), updates (changes to an existing order), and deletes (order cancellations). Note that this model involves more intensive processing than models that operate against simple time series data where state maintenance is not required. Also, the test model did not filter data, meaning that every incoming message triggered updates to the output stream.
The test model was fed by the Sybase Aleri Reuters OMM adapter, which subscribed to order book data in OMM format from a Reuters RMDS test system. Throughput was measured at the event source, with message rates representing the total number of messages per second input into the Sybase engine. Latency was measured end-to-end, that is, from the perspective of a client application: the reported measurements show the elapsed time from when the Reuters RMDS test system first sent a message to the Aleri OMM adapter until the client application subscribing to the output of the Sybase server received the resultant update.
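The same-machine timestamping discipline discussed earlier (required because clocks cannot be synchronized at microsecond precision) can be sketched as follows. This is an illustrative harness, not the STAC test client; `send` and `receive` are hypothetical stand-ins for injecting a message into the input adapter and blocking on the resultant output update.

```python
import time

def measure_end_to_end(send, receive, n=1000):
    """Take both timestamps from one monotonic clock on one host, so that
    microsecond-scale deltas are meaningful despite unsynchronized wall clocks."""
    latencies_us = []
    for i in range(n):
        t0 = time.perf_counter_ns()   # begin: message handed to the input side
        send(i)
        receive()                     # blocks until the resultant update arrives
        t1 = time.perf_counter_ns()   # end: same clock, same machine
        latencies_us.append((t1 - t0) / 1_000)
    return latencies_us
```

Using a monotonic clock (`time.perf_counter_ns` here) rather than wall-clock time also avoids distortion from clock adjustments during the run.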