Although it is unrelated to my daily work, I recently helped some former colleagues who were called in to assist a troubled real-time analytics project. That reminded me of the most important advice I can offer a new real-time analytics project:
The first step is to gather a large sample of the real-time data and demonstrate the analytics over this sample. The next step is to build the real-time analytics system.
The first step is sometimes called “back testing,” and it applies just as much to retail sales as it does to financial markets. This step does not involve any kind of real-time analysis – all the analysis is done using regular data processing tools like SQL, R, MATLAB, or a RETE rules engine.
Do not buy Complex Event Processing software or start building a high-speed data processing system without first trying your ideas on a good sample of historical data. This may seem obvious, but I am surprised by how often projects start by diving right into the real-time components.
This advice applies to both statistical and non-statistical analytics. Even simple rules for monitoring a distributed system should be tested with historical data.
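For example, here is a minimal sketch of back-testing a single monitoring rule, assuming historical events sit in a CSV file (the file name, the latency_ms column, and the 500 ms threshold are all hypothetical):

```python
# Minimal back-test of a simple alert rule over historical data.
# The file name, column layout, and threshold are hypothetical.
import csv

ALERT_THRESHOLD_MS = 500  # rule under test: alert when latency exceeds 500 ms

alerts = 0
total = 0
with open("historical_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        if float(row["latency_ms"]) > ALERT_THRESHOLD_MS:
            alerts += 1

# If the rule fires on a third of all historical events, it is too noisy
# to deploy in a real-time system; tune it here, not in production.
print(f"rule fired on {alerts} of {total} events ({alerts / max(total, 1):.1%})")
```

Running this kind of script over a few months of history tells you, cheaply, whether a rule is worth wiring into a real-time pipeline.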
Unfortunately, it is sometimes expensive just to gather and process the sample historical data. Maybe this represents a third or more of all the work in the project. Maybe it involves costly hardware sensors. In these cases, it’s even more important not to jump right into the real-time processing part. Skipping the step of first collecting and analyzing a sample of historical data almost never saves money – it just introduces massive project risk.
- Step 1: prove the analytics using historical data
- Step 2: build or buy a real-time analytics system
Do not do these steps in parallel unless you really know what you’re doing (in which case you don’t need my advice).
It’s possible that very standard analytics like web log analysis or simple network monitoring could skip the step of testing with historical data. I don’t work with these kinds of projects, so I can’t offer good advice there.
There’s a great use case for Event Processing in Intelligent Transportation Systems at Julian Hyde’s blog. I’ve been waiting for use cases like this to emerge.
Also, how do OSS vendors compete in the public sector markets these days? At one time, IBM could muscle out OSS with promises of support and integration. Is that still true?
Check out Dice.com, search term “complex event processing” in “New York, NY”. There are a couple of jobs specifically listing CEP experience as desirable (there are several versions of each job, listed by different recruiters).
This is the best kind of testimony to the success of this acronym and maybe the technology. I can believe that a technology is growing when IT shops start looking for people who use it.
I just read a post from Joe McKendrick at ZDNet, titled Why business process management and complex event processing are converging, which reports on a webinar about Business Process Management (BPM) and its relationship with Complex Event Processing (CEP). The lead quote is: “The BPM and CEP combination means ‘process monitoring on steroids’.”
This is exactly the kind of thing that led me to describe two types of Event Processing: Detection Oriented vs. Operations Oriented. I see this pattern over and over in pretty much every use of Event Processing (EP) that I have worked with.
Operations Oriented EP handles events that follow a pre-defined business contract. The goal is to trigger business-level action based on the known business contracts of the incoming events.
Event driven BPM is a major type of Operations Oriented EP. BPM handles events with defined business meaning and triggers business-level action based on them. That’s not an attempt to subsume BPM into EP. Rather, I point out that BPM can use EP in the same way that it can use a database, while remaining an independent class of software.
Detection Oriented EP handles events that follow patterns outside of the main business use; the business meaning must be distilled from the events, not captured by the business definition of the events themselves. The goal of this type of application is to create new types of business-level events based on those detected patterns.
But remember that an event can have a business meaning to a BPM system while also being part of patterns that lie outside of those clear business contracts. So Detection Oriented EP often works over the same events as Operations Oriented EP, but it uses them in a different way. The upcoming book Event Processing in Action describes this powerful property of events: they can be used simultaneously by different applications in very different ways.
The blog post mentioned above describes a system that monitors the BPM system, looking for various patterns that fall outside of the normal business contracts. I would call that Detection Oriented Event Processing. This kind of application watches the same events that drive BPM. But it’s looking for patterns that lie outside of the business meaning used by the BPM aspect. These patterns then produce new events with defined business meaning, which can feed back into the BPM.
What is the point of breaking Event Processing into the categories of Detection vs. Operations Oriented? It highlights the fact that event driven BPM shares many aspects with other Event Processing applications – because they all process events. But also, BPM belongs to a class of EP that differs significantly from the Detection Oriented class.
I prefer the Detection vs. Operations Oriented classification to terms like Complex Event Processing. Why? Take a look at the Wikipedia article on CEP. I read this article and I can’t tell the difference between CEP and event-driven BPM. CEP seems to encompass every business application that processes events (basically everything outside of a message bus).
Yet there are clearly differences between BPM and the type of CEP that Mr. McKendrick reports on in his post. So I prefer a hierarchical definition, where Event Processing applies to all of these applications. But then we drill down into specific categories of EP.
Responding to a few blog posts about Sybase buying the assets of Aleri:
Bundling/portfolio vs. stand-alone product
John Bates thinks, among other things, that a stand-alone EP engine is not feasible. Paul Vincent thinks that Sybase will bundle an Aleri EP engine with database purchases.
Beyond the “larger company means more resources” argument, vendors with a large product portfolio can bundle many products under one purchase and support contract. Intuition says that this is particularly important for Event Processing because it reduces the cost of an initial implementation.
Now which is a better fit for an Event Processing engine: a database-centric portfolio or an event-driven-architecture-centric portfolio? Maybe it depends on the features of the EP engine.
Mark Palmer, CEO of StreamBase, naturally thinks that the Aleri buyout means great things for his company. He has argued that some vendors will get Event Processing wrong because (to paraphrase) they are not as good at innovating. He also dismisses the failures of various EP vendors as natural market evolution.
TIBCO has built a pretty popular product in BusinessEvents. But if history is any guide, Mark is right that most big vendors will get Event Processing wrong, at least in the early stages (where we are now). But so will most small vendors…
Colin Clark thinks that were StreamBase to merge with its step-sister company Vertica, it would imply that either StreamBase or both companies were in trouble. His comment is from a venture capital perspective, but I wonder about the product bundling aspects as well.
And finally, Opher Etzion takes his usual “not black and white” stance, providing arguments both for and against stand-alone Event Processing. Worth reading, even without a decisive opinion.
Maybe EP will develop differently in different vertical markets
Philip Howard categorizes Event Processing products by vertical market, and does not relate the results of one vertical market to the results of the others.
It’s true that some of the most successful vendors stress particular vertical markets. But the lack of consolidation in the non-finance vertical markets of Event Processing is more likely a result of when the firms received venture capital.
My first thought was “why didn’t StreamBase buy them?” Were they not approached? Did Aleri take the deal that preserved the most jobs? Were they outbid?
My gut gives a different reason…
(just being honest here)
IBM has a web ad spot called The Smarter City, showing how IT can make a difference in the urban public sector. The web campaign associates ideas like e-Government and Intelligent Transportation Systems (ITS) with improvements in IT for healthcare, energy and education.
The short clips that introduce these topics make many references to gathering and using data in a more timely manner. I noticed this particularly in the Transportation, Public Safety and Energy & Utilities sections. This seems like a good fit for Event Processing.
Many years ago, I worked in intelligent transportation (ITS) and had the pleasure of attending a few ITS conferences. The big technical challenge at that time (the biggest overall challenge being political) was infrastructure: aside from the cost of all the sensors and cameras, the amount of data that streams in from truly comprehensive monitoring of roads and public transportation seemed daunting, to say the least.
The infrastructure problem is much less daunting these days, although still expensive. To justify the expensive infrastructure, we need to get started on the next technical challenge: the software. Google already has some traffic speeds on its maps. But that’s just the beginning, right? Looks like Event Processing has a bright future in the public sector.
Julian Hyde has interesting things to say about streaming SQL and Complex Event Processing. It’s also a good introduction to SSQL.
Elevator makers are calling it Destination Dispatch – the process of scheduling elevators when the destination of each rider is known in advance, rather than after they enter and press the button. This is an example of the most common Event Processing value proposition – to make adjustments to business processes on the fly.
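To make the idea concrete, here is a toy sketch of a destination-dispatch assignment. The greedy cost model is purely illustrative – real dispatch algorithms account for car capacity, travel time and much more:

```python
# Toy destination dispatch: send each new rider to the car whose planned
# stops already come closest to covering the rider's origin and destination.

def added_cost(stops, origin, dest):
    """Extra stops this car would take on for the new rider."""
    return (origin not in stops) + (dest not in stops)

def assign(cars, origin, dest):
    """cars: mapping of car id -> set of already-planned stops."""
    best = min(cars, key=lambda c: added_cost(cars[c], origin, dest))
    cars[best].update({origin, dest})
    return best

# Car "A" already stops at floors 1 and 7, so a rider going 1 -> 7
# is grouped into it at no extra cost.
cars = {"A": {1, 7}, "B": {1, 3}}
print(assign(cars, origin=1, dest=7))  # prints "A"
```

Because the destination is known before boarding, riders heading to the same floors can be grouped into the same car – the on-the-fly adjustment in action.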
This post continues the idea that Event Processing (EP) is more easily understood after dividing the field into two distinct areas. I started with a couple of posts using this perspective to analyze the business value of EP. Those posts are available from my EP reference page.
The most common request since then is to better explain the Two Types of EP concept. To that end, I first posted an analysis of two blog posts using the Two Types of EP.
Now I will introduce an Event Processing use case and then analyze it from the perspectives of Detection Oriented vs. Operations Oriented Event Processing. This is not a use case from an implemented project, but it’s plausible and a good example. Real use cases are currently hard to analyze because no one is willing to publish a detailed description of their use of Event Processing.
My background is in statistics, not marketing, and there are flaws in the marketing side of this use case. Anyone who has really looked at this particular marketing scenario will find many problems and questions to be addressed before the idea could be considered for implementation.
The use case: Let’s say that a chain of coffee shops wants to boost its sales to the urban business crowd during normal work hours. They have many stores in urban business centers.
The chain has two large data sets available in-house to drive this effort:
- the (electronic) register tape from every store
- data captured by a retail traffic counter (Google the term if it’s unfamiliar), which counts foot traffic entering and leaving each store
Let’s say that analyzing this data with basic queuing theory produces the following results (this is not real data; I’m making it up to suit the example – a rough sketch of the underlying analysis follows the list):
- Even during “peak times”, the length of the lines at most stores fluctuates greatly during any 30-minute period. There are 10-minute periods with very long lines followed by 10-minute periods with short lines.
- During peak periods, customers seem put off by long lines
- They may walk out before making it to the register
- Foot traffic may tail off when the line gets too long, and it seems likely that interested customers see the long line and never enter the store.
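Here is that sketch: a rough line-length estimate derived from the two data sets. The assumption that every entering customer immediately joins the line, and the shape of the inputs, are hypothetical simplifications:

```python
# Rough estimate of line length over time from the two in-house data sets.
# entry_times come from the foot traffic counter, sale_times from the
# register tape; both are lists of epoch seconds, one per event.
from collections import defaultdict

def line_lengths(entry_times, sale_times, bucket_secs=600):
    """Queue length per 10-minute bucket: running entries minus sales."""
    buckets = defaultdict(lambda: [0, 0])  # bucket index -> [entries, sales]
    for t in entry_times:
        buckets[int(t // bucket_secs)][0] += 1
    for t in sale_times:
        buckets[int(t // bucket_secs)][1] += 1
    queue, result = 0, {}
    for b in sorted(buckets):
        entries, sales = buckets[b]
        queue = max(0, queue + entries - sales)
        result[b] = queue  # snapshot at the end of each bucket
    return result
```

Plotting these estimates for peak hours is the kind of step that would reveal the 10-minute swings described above.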
The coffee chain sees a group of customers that would like to make a purchase, but are put off by long lines. Some of these customers may go to another of the chain’s stores, but many won’t because there are now so many choices for the urban working crowd.
To sell to these impatient customers, the chain can come up with some traditional marketing ideas:
- They can open more shops in the area. Now the customers can spill over from a crowded chain shop into a less crowded one. But it may not be practical to open a new store just to capture peak traffic spillover.
- They can offer price incentives (discounts) during off-peak times. But this has a very negative effect: if the incentives are low, customers pay no attention; if the incentives are high, the result is to move the peak time but lower profit.
Since neither of these ideas works, let’s find a different and creative way to sell to these impatient customers.
We want to help customers avoid lines. This is a valuable service, but it doesn’t have the negative profit impact of price incentives. Maybe we can do this in a way that will “smooth out” the foot traffic to each store. In other words, try to get customers to pick just the right moment, when the store has a shorter line rather than a longer one. We know that even during peak selling times, a customer arriving at any random time has a chance to find either a long or a short line – can we direct some customers to the short lines?
To summarize, we would like to:
- Help interested customers find a store with a shorter line
- Help interested customers who would like some coffee, and are willing to come at any point within a half-hour window, find the best time to arrive at a store
- Attract undecided customers by giving them the confidence and convenience of quickly finding a store with a short line
- As much as possible, prevent the very longest lines and wait times that discourage some customers
We will be creating some kind of information service to direct customers, as much as possible, to shorter lines. That information might be delivered on a query basis or through push notifications.
Since this concept relies on customers believing in the system, we need to be reasonably accurate with any help we give. We don’t need to be perfect, but if customers don’t believe the information, they will quickly abandon the service. And this is where Event Processing comes in. Without some element of on-the-fly data analysis, there is no way to give good advice on line length. Because line length at any particular moment depends on the highly random arrival of foot traffic, it’s not enough to consider seasonality (even combining annual, monthly, weekly and intra-day components of seasonality) combined with a day-to-day time series. The extra step of real-time data analysis is required.
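As a sketch of that extra step, assume a hypothetical seasonal model that expects a certain wait at each time of day; the real-time component then corrects that expectation with the deviation observed in the last few minutes of register data (the smoothing factor is made up):

```python
# Correct a seasonal baseline with live observations.
ALPHA = 0.5  # hypothetical weight on the most recent observed deviation

def predict_wait(seasonal_baseline_min, recent_observed_min, prev_residual=0.0):
    """Prediction = seasonal expectation plus a smoothed correction for
    how far today is currently running from that expectation."""
    residual = ALPHA * (recent_observed_min - seasonal_baseline_min) \
               + (1 - ALPHA) * prev_residual
    return max(0.0, seasonal_baseline_min + residual), residual

# Example: the seasonal model expects a 4-minute wait, but the last few
# minutes of register data show 9-minute waits. The corrected prediction
# lands near 6.5 minutes instead of the misleading 4.
prediction, _ = predict_wait(4.0, 9.0)
print(f"corrected prediction: {prediction:.1f} minutes")
```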
The meat of this use case will have to wait for a future post. But here are the roles of each type of EP.
Detection Oriented Event Processing
- Inputs are real-time information from registers and foot traffic counters
- Outputs an estimate of the current line length and wait time
- Outputs a short-term prediction of line length and wait time by combining real-time data with stored results from analyzing historical data: multiple components of seasonality and a day-to-day time series
- Considers the effects of this marketing campaign on predictions
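A minimal sketch of this detection component, with hypothetical event fields and a deliberately crude estimate and blend, might look like this:

```python
# Detection Oriented sketch: distill raw register and door-counter data
# into a new business-level event. The field names, the entries-minus-sales
# line estimate, and the 50/50 blend with seasonality are all hypothetical.
from dataclasses import dataclass

@dataclass
class LineStatusEvent:
    store_id: str
    line_length: int           # current estimate from raw data
    wait_minutes: float        # current estimated wait
    predicted_wait_10m: float  # short-term prediction

def detect(store_id, entries_last_10m, sales_last_10m,
           secs_per_sale, seasonal_wait_min):
    length = max(0, entries_last_10m - sales_last_10m)
    wait = length * secs_per_sale / 60.0
    predicted = (wait + seasonal_wait_min) / 2.0  # naive blend
    return LineStatusEvent(store_id, length, wait, predicted)

event = detect("downtown-3", entries_last_10m=42, sales_last_10m=30,
               secs_per_sale=45, seasonal_wait_min=4.0)
```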
Operations Oriented Event Processing
- Inputs are events containing line length and wait time info (current and predicted) from Detection Oriented EP components
- Integrates this information with the marketing strategy
- Stores data for access by query-based systems like web or smart phone apps
- Distributes the proper alerts and information to interested parties (customers, managers, etc.)
- Interacts with customers, so is part of the customer relationship as well
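And a matching sketch of the operations side: no raw-data analysis here, only business rules applied to the events the detection component emits. The thresholds and notification plumbing are hypothetical marketing decisions:

```python
# Operations Oriented sketch: business rules over LineStatusEvent objects
# (shaped like the output of the detection sketch above).
SHORT_WAIT_MIN = 2.0   # hypothetical: promote stores this quiet
LONG_WAIT_MIN = 10.0   # hypothetical: warn managers, redirect app users

latest_status = {}     # queried by web and smart phone front ends

def handle(event, notifications):
    latest_status[event.store_id] = event  # serve query-based access
    if event.predicted_wait_10m <= SHORT_WAIT_MIN:
        notifications.append((event.store_id, "short line right now"))
    elif event.predicted_wait_10m >= LONG_WAIT_MIN:
        notifications.append((event.store_id, "long line: alert the manager"))
```

Notice that the detection sketch could swap in a much better statistical model without touching these rules – exactly the kind of separation described below.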
We’ll see that it is interesting to look at this use case through the lens of Detection vs. Operations Oriented components for a few reasons:
- The accuracy or effectiveness of the components is measured differently. The detection might work perfectly, while the operational rules make poor use of the data.
- The components are tightly integrated into the value of the overall concept, but the business value of the two types can still be viewed independently. The detection logic provides a data source that can be valuable to customers. The operations logic drives foot traffic and even becomes a part of the customer’s relationship with the brand.
- The development process is different. Developing the detection portion involves tweaking an algorithm by back-testing using large amounts of historical data. Development of the operational portion involves planning out a business strategy and analyzing the impact of various decisions, but no historical data is required to test the logic.
- The logic itself is different. One is focused on analyzing raw data, the other on making business decisions.
- The maintenance and evolution over time will be different. The operational decision making will evolve over time; it might grow to consider (or at least not interfere with) many components of the marketing strategy. The detection logic will evolve if the model for foot traffic and line prediction changes; the chain may also find additional information that can be mined from the same real-time data sources.