Business value for Detection Oriented Event Processing
Following up on my last post about the two broad categories of Event Processing, this post analyzes one of those categories from the perspective of business value.
Today I’m writing about Detection Oriented Event Processing. Per my previous post: “Detection Oriented Event Processing applications want to locate interesting information in a flow of data.” Here we use “detection” very broadly to mean data analysis that produces results of interest to the enterprise. That could be a simple report destined for a human, a set of values fed into an optimization problem, or the output of a complex statistical method. I might as well say Data Analysis Oriented Event Processing.
The term Detection Oriented is in contrast to applications that make some decision or take some action based on incoming events. I’ll call those Operations Oriented applications, and cover them in a later post (and also cover how they integrate with Detection Oriented applications). Of course there are many intersections of Detection and Operations Oriented Event Processing, but they each deserve some separate analysis first.
The motivation for using Detection Oriented Event Processing is to increase the “speed” of data analysis. This runs contrary to views of Event Processing that focus on improving detection or prediction capabilities. Unfortunately, Event Processing can’t directly improve the accuracy of detection or prediction (only better detection or prediction methods can do that), but it can help indirectly by easing the burden of implementation.
In general, there are two types of possible speed increase: lower latency and increased throughput.
Lower latency means that some data analysis once took X amount of time, and now it takes less time. For example, if a report summarizing daily sales figures used to be available the next morning, and now it is available at the close of the same business day – the latency of this report has been reduced. In other words, the amount of time taken to produce the analysis (measured from when all data is available to when the analysis is complete) has been reduced.
Increased throughput means that more analysis is done in the same amount of time. For example, if I were once able to simultaneously analyze 30 stocks for correlation and now I can run the same analysis on 300 stocks simultaneously – the throughput of my analysis has been increased. Each individual analysis (the analysis of one stock) may take the same amount of time, so increased throughput may allow a large task to complete faster, but it does not necessarily correspond to lower latency for each individual component task.
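The distinction between the two metrics can be made concrete with a small sketch. This is purely illustrative – the `AnalysisRun` type and the two helper functions are my own invented names, not part of any Event Processing product:

```python
from dataclasses import dataclass

@dataclass
class AnalysisRun:
    """One data-analysis task, with timestamps in seconds."""
    data_ready: float  # when all input data became available
    completed: float   # when the analysis result was ready

def latency(run: AnalysisRun) -> float:
    """Latency: time from data availability to completed analysis."""
    return run.completed - run.data_ready

def throughput(runs: list[AnalysisRun], window: float) -> float:
    """Throughput: analyses completed per unit time over a window."""
    return len(runs) / window

# Daily sales report: data complete at close of business (t=0),
# report ready 900 seconds later. Lower this number -> lower latency.
report = AnalysisRun(data_ready=0.0, completed=900.0)
assert latency(report) == 900.0

# 300 stock correlations all finish within a 60-second window:
# throughput is 5 analyses per second, even though each individual
# analysis may take just as long as before.
stocks = [AnalysisRun(data_ready=0.0, completed=60.0) for _ in range(300)]
assert throughput(stocks, window=60.0) == 5.0
```

Note that each stock analysis above still has a 60-second latency; only the aggregate rate improved, which is exactly the distinction drawn in the two examples.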
Detection Oriented Event Processing, then, is a potential solution for problems where we need to either decrease the latency or increase the throughput of data analysis. Again, Event Processing is not the only method of achieving these goals; it is one method in the toolbox.
Before getting started on generating business value, we need to have one thing clear:
Since we are increasing the speed of data analysis, value only comes when the enterprise can react at the new speed. It doesn’t matter how fast we can produce results if the enterprise can’t react fast enough to use them.
So we have three scenarios:

1. The enterprise can already make use of faster data analysis. In this case, there is work that could produce more value if only some kind of data analysis were faster. The location and nature of the bottleneck may even be common knowledge. A speed increase will bring immediate business value. For example:
   - An internal system (human or computer) requires data to optimize its decision-making, and it would be better to get that data sooner rather than later.
   - Certain data is distributed to customers, who would rather have it faster. While the enterprise may or may not have its own use for faster data analysis, the customers certainly do, and there is value in happier customers. Logistics systems come to mind here.
2. Build faster data analysis, and allow the rest of the enterprise to catch up. It is possible that, were faster data analysis available, others in the enterprise would quickly adapt (and, in adapting, increase the value of their activities). In this case, the speed of existing data analysis has set the pace for downstream activities (which depend on the data). Those activities would adapt to a faster pace or larger volume of data, were it available. For example: electronic trading is often a game of speed. When data is available faster, downstream systems quickly adapt to make use of it.
3. Build faster data analysis and faster reaction time, at once. In this case, there is some kind of reaction that should happen quickly. It may be something the enterprise already reacts to, but would rather react to faster. Or it may be some new stimulus, where there is value only in reacting quickly and little or none in reacting slowly. In either case, we must increase both the speed of data analysis and the speed of reaction to see any benefit. This scenario may be a candidate for a mixture of Detection Oriented and Operations Oriented Event Processing.