REAL TIME IMAGE PROCESSING APPLIED TO TRAFFIC QUEUE DETECTION ALGORITHM

INTRODUCTION
A major problem associated with a real-time vision-based application such as traffic data collection is the requirement for high computational power. In traffic applications, variations in ambient lighting, shadows, occlusion, and the lane-changing movement of vehicles further complicate the task. Despite these difficulties, a vision-based traffic data collection system has several advantages over conventional point-based alternatives such as magnetic loops. Vision can give time-variant and spatial information about a scene and can be recorded easily for further analysis. Ease of installation on an existing road, flexibility, and maintenance issues are among the other advantages that encourage researchers to concentrate on vision-based systems.
To achieve real-time processing, vision-based traffic parameter extraction techniques process only key regions of the scene. In earlier systems, vehicles were detected in a user-defined window by applying a background differencing technique. Since background-based algorithms are very sensitive to ambient lighting conditions, they have not yielded the expected results. The method proposed in this paper instead applies edge-detection techniques to windows placed by the user at key regions across the lanes.
This paper aims to measure queue parameters accurately. The traffic queue detection algorithm consists of two operations:
Vehicle detection
Motion detection
Previous work on queue detection using image processing has mainly focused on measuring queue length at intersections, where queues are caused by traffic lights. The approach of Fathy and Siyal is based on computing motion and the presence of vehicles on profiles placed in strategic zones of the image. The profiles have a shape determined by the camera and geometry parameters, and their output gives information about the queue length. The output of the profiles is interpreted by a neural network in order to obtain a better queue-length estimate. The aim of this system is to give an instantaneous, precise estimate of queue length. Vehicle detection is based on applying edge detection on these profiles.

SIGNAL & IMAGE PROCESSING APPLIED TO TRAFFIC
Digital signal   
Digital systems function more reliably because only two states are used (zeros and ones, the binary system). A signal is a function of independent variables; the signal itself carries some kind of information available for observation.
             By processing we mean operating on a signal in some fashion to extract useful information. In many cases the processing is non-destructive ("transfer mode") with respect to the given data signals.
Signal processing can be carried out in several ways:
Analog signal processing
Digital signal processing
Optical (light-wave) signal processing
Radio-frequency signal transmission
Need for processing of traffic data
Traffic data are needed for traffic surveillance and control, traffic management, road safety, and the development of transport policy.

Traffic parameters measurable
The measurable traffic parameters are the number of active tracks (vehicles), the instantaneous velocity of a vehicle, the class of a vehicle, and the average velocity of a vehicle over the entire time it was in the camera's field of view. Other measurable quantities include traffic volumes, speed, headways, inter-vehicle gaps, vehicle classification, origin and destination of traffic, and junction turning movements.

STAGES OF IMAGE ANALYSIS
Image sensors used 
     Improved video cameras: automatic gain control, low noise (high SNR)
ADC Conversion
 The analog video signal received from the video camera is converted to digital (binary) form for processing.
Pre-processing
     The camera output represents an enormous data flow, which must be reduced before it can be processed.
To cope with this, two approaches are possible:
1. Analyse the data in real time, which is uneconomical.
2. Store all the data and analyse it off-line at low speed.
   Pipeline preprocessing performs this data reduction.

 Stages in pipeline preprocessing:
      (1) Spatial averaging – contiguous pixels are averaged (convolution).
      (2) Subtraction of the background scene from the incoming picture.
      (3) Thresholding – large differences become true ('1'), small differences become false ('0').
      (4) Data compression – reduces the resulting data.
      (5) Block buffering – collects data into blocks.
      (6) Tape interface – blocks are loaded onto a digital cassette recorder.
The preprocessed picture is submitted to the processor as a 2-D array of numbers.
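The first three pipeline stages can be sketched in a few lines of code. The sketch below is a hypothetical illustration in Python using NumPy and SciPy; the kernel size and difference threshold are assumptions, not values from the original system, and the storage stages (4)-(6) are omitted.

import numpy as np
from scipy.ndimage import uniform_filter

def preprocess(frame, background, kernel=3, diff_threshold=25):
    """Sketch of pipeline stages (1)-(3) on one grey-level frame."""
    # (1) Spatial averaging: contiguous pixels are averaged (box convolution).
    smoothed = uniform_filter(frame.astype(np.float32), size=kernel)
    # (2) Subtraction of the stored background scene from the incoming picture.
    difference = np.abs(smoothed - background.astype(np.float32))
    # (3) Threshold: large differences become '1', small differences '0'.
    return (difference > diff_threshold).astype(np.uint8)

The resulting binary array is what the later stages compress, buffer into blocks, and record.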
Two jobs to be done:
Green light on – determine the number of vehicles moving along particular lanes and classify them by shape and size.
Red light on – determine the backup (queue) length, track its dynamics, and classify the vehicles in the backup.

METHODS OF VEHICLE DETECTION
Detection of each single vehicle in the traffic scene is a very complex task due to problems such as occlusions and shadows. In order to measure the global presence of vehicles on the monitored road, this task can be avoided and the presence parameter can be measured at pixel level. There are two main approaches to estimating the presence of new objects in an image sequence taken from a static camera: background-based methods and gradient-based methods. Background methods compute the difference between the current frame and an updated background image, while the second approach is based on the detection of vehicle edges.
Of the two, the gradient-based strategy is preferred, as it guarantees very fast computation and good results. In fact, considering that the road surface presents an almost uniform grey level while vehicles generate strong discontinuities in intensity, edges are a good indicator of the presence of vehicles.
Many edge-detection algorithms based on gradient operators or statistical approaches have been developed. Gradient operators are usually sensitive to noise and are used together with filtering operators to reduce its effect. A statistical approach is unable to detect thin edges, which are very likely to occur in a window of an image. Morphological edge detectors have proved to be more effective than conventional gradient-based techniques. Some commonly used morphological edge-detection algorithms are blur-min, top-hat, and open–close operators. These operators have better performance than gradient and facet edge operators. The authors have developed a new morphological edge-detection operator, the separable morphological edge detector (SMED), which has a lower computational requirement while having comparable performance to other morphological operators. The reasons for adopting the SMED operator in this application are as follows.
1) SMED can detect edges at different angles, while other morphological operators are unable to detect all kinds of edges.
2) The strength of the edges detected by SMED is about twice that of other edge detectors. Blur-min is unable to detect thin lines and corners, and the strength of the edges it detects is weak.
3) SMED uses separable median filtering to remove noise. Median filtering has been proven to preserve edges while removing noise, and separable median filtering has been shown to have comparable performance to true median filtering while requiring less computational power.
In brief, SMED, which uses simple and easily implementable operators, has a lower computational requirement than the other morphological edge-detection operators. The open–close operator has better performance than SMED, but it requires about eight times more computation.
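The SMED idea can be illustrated with a simplified sketch: separable median filtering to suppress noise, followed by a morphological gradient as the edge-strength measure. This is an illustrative approximation, not the exact SMED operator; the structuring-element and filter sizes are assumptions.

import numpy as np
from scipy.ndimage import median_filter, grey_dilation, grey_erosion

def smed_like_edges(window, median_size=3, struct_size=3):
    """Simplified edge detector in the spirit of SMED."""
    img = window.astype(np.float32)
    # Separable median filtering: a 1-D median along rows, then along columns.
    denoised = median_filter(img, size=(1, median_size))
    denoised = median_filter(denoised, size=(median_size, 1))
    # Morphological gradient (dilation minus erosion) as edge strength.
    dilated = grey_dilation(denoised, size=(struct_size, struct_size))
    eroded = grey_erosion(denoised, size=(struct_size, struct_size))
    return dilated - eroded

The separable median pass keeps the computational cost low while preserving edges, which is the property the authors exploit.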

QUEUE DETECTION ALGORITHM
The algorithm used to detect and measure queue parameters consists of two operations: a motion-detection operation and a vehicle-detection operation. These operations are applied to a profile consisting of small sub-profiles of variable size in order to detect the size of the queue. The size of the sub-profiles varies according to the distance from the camera. As microcomputer systems operate sequentially, the motion-detection operation is applied first, and only if no motion is detected is the vehicle-detection operation used to decide whether there is a queue. The reason for applying the motion-detection operation first is that the traffic scenes analysed for queue detection are expected to contain vehicles, in which case vehicle detection almost always gives a positive result even when there is no queue at all. By applying this scheme, the computation time is further reduced.
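The control flow just described can be summarised in a short sketch. The helper functions detect_motion and detect_vehicle are hypothetical stand-ins for the operations described in the next two subsections, and representing sub-profiles as index tuples is an assumption; for simplicity the sketch checks every sub-profile in order, whereas the paper restricts vehicle detection to the expected tail of the queue.

def queue_length(frame, prev_frame, sub_profiles, detect_motion, detect_vehicle):
    """Schematic control flow of the queue detection algorithm.

    sub_profiles: regions along the road, nearest to the stop line first,
    each given as a (row_slice, col_slice) index into the frame.
    detect_motion(region_now, region_prev) -> True if motion is present.
    detect_vehicle(region_now) -> True if a vehicle is present.
    Returns the number of consecutive sub-profiles occupied by stationary
    vehicles, i.e. an estimate of the queue length in sub-profile units.
    """
    length = 0
    for region in sub_profiles:
        if detect_motion(frame[region], prev_frame[region]):
            break  # motion found: no stationary queue beyond this point
        # No motion: only now run the more expensive vehicle detection.
        if detect_vehicle(frame[region]):
            length += 1
        else:
            break  # no stationary vehicle: the queue ends here
    return length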
 Motion detection operation
The method used for motion detection is based on differencing two consecutive frames and applying noise-removal operators. To reduce the amount of data and to eliminate the effects of minor motions of the camera, key regions of the two frames, each having at least a three-pixel width, are selected along the image of the road. A median filtering operation is first applied to the key region (profile) of each frame, and a one-pixel-wide profile is extracted. Then the difference of the two profiles is compared with a threshold value to detect motion: when there is motion, the difference of the profiles is larger than when there is none. The process is illustrated in the motion detection algorithm section below.
The size of the profile for queue detection is an important factor, as there might be motion in some parts of it while there is none in other parts. Therefore, the profile along the road is divided into a number of smaller profiles (sub-profiles) of variable size. Our experiments show that the length of a sub-profile should be about the length of a vehicle in order to ensure that both the vehicle-detection and motion-detection operations work accurately.
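A minimal sketch of this operation is given below. Collapsing the filtered strip to a one-pixel-wide profile with a median across its width, and the numerical threshold, are assumptions used for illustration.

import numpy as np
from scipy.ndimage import median_filter

def motion_in_profile(profile_now, profile_prev, threshold=10.0):
    """Sketch of the motion-detection operation on one sub-profile.

    profile_now, profile_prev: grey-level strips (rows along the road,
    at least three pixels wide) taken from two consecutive frames at the
    same key region.
    """
    def to_line(strip):
        # Median-filter the strip, then collapse it to a one-pixel-wide
        # profile by taking the median across its width.
        filtered = median_filter(strip.astype(np.float32), size=3)
        return np.median(filtered, axis=1)

    # A large difference between the two one-pixel profiles indicates motion.
    difference = np.abs(to_line(profile_now) - to_line(profile_prev))
    return difference.mean() > threshold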
 Vehicle-detection operation
The vehicle-detection operation is applied only when the motion-detection operation described above detects no motion. The vehicle-detection operation, discussed in Section 7, is applied to the profile of the unprocessed image. To implement the algorithm in real time, the vehicle-detection operation is used only in the sub-profile where the queue is expected to extend (the tail of the queue). This procedure is shown in the vehicle detection algorithm section.

 MOTION DETECTION ALGORITHM
Theory behind
 The profile along the road is divided into a number of smaller profiles (sub-profiles)
 The sizes of the sub-profiles decrease with distance from the camera.
The perspective transformation maps equal physical distances on the road to unequal distances in the image, according to the camera parameters.
 Knowing the coordinates of any six reference points in the real world and the coordinates of their corresponding image points, the camera parameters (a11, a12, …, a33) can be calculated.
The operations are simplified for a flat traffic scene (Z0 = 0).
 This transformation is used to set the sizes of the sub-profiles so that each sub-profile represents an equal physical distance.
The number of sub-profiles depends on the resolution and accuracy required. The length of a sub-profile should be about the length of a vehicle so that both detection algorithms work accurately.
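For the flat-road case (Z0 = 0), the mapping from road-plane coordinates to image coordinates reduces to a plane-to-image projective transform, which can be estimated from point correspondences. The sketch below is an illustrative least-squares estimate of the parameters a11 … a33 and of equally spaced sub-profile boundaries; the function names and the use of four or more coplanar reference points (rather than the six general-case points mentioned above) are assumptions.

import numpy as np

def fit_plane_to_image(road_pts, image_pts):
    """Estimate the 3x3 parameter matrix [a11 .. a33] from point pairs.

    road_pts, image_pts: arrays of shape (N, 2), N >= 4, holding matching
    road-plane (X, Y) and image (u, v) coordinates (flat road, Z0 = 0).
    """
    rows = []
    for (X, Y), (u, v) in zip(road_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The parameter vector is the null-space (least-squares) solution.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    return vt[-1].reshape(3, 3)

def sub_profile_boundaries(A, lane_x, start_y, length_m, n_sub):
    """Project equally spaced road distances into the image, so that each
    sub-profile spans the same physical distance along the lane."""
    ys = np.linspace(start_y, start_y + length_m, n_sub + 1)
    pts = np.stack([np.full_like(ys, float(lane_x)), ys, np.ones_like(ys)])
    img = A @ pts
    return (img[:2] / img[2]).T  # (n_sub + 1) image points, nearest first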


VEHICLE DETECTION ALGORITHM
As shown in Fig.7.1, to detect vehicles, windows with suitable sizes are placed across each lane of the road. In this vehicle-detection approach, the SMED edge detector is applied to each window, and the histogram of each window is processed by selecting the dynamic left-limit value and a threshold value to detect vehicles.
Left-limit selection program
The left-limit selection program selects a grey value from the histogram of the window. When the window contains an object, the left limit of the histogram shifts toward the maximum grey value; otherwise it shifts toward the origin. For each 100-frame sequence, the left limit for each window is taken as the minimum of the left limits of the previous sequence; in this way, the minimum grey value of the frames in which the window contains no object is selected as the left limit. In the case of queue detection, when all 100 frames detect a vehicle, the left limit is not changed.
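A minimal sketch of this update rule is shown below. Treating the left limit of a frame as the lowest grey value with a non-negligible count, and the min_count guard against isolated noisy pixels, are assumptions for illustration.

import numpy as np

def frame_left_limit(window, min_count=5):
    """Left limit of one window in one frame: the lowest grey value of the
    8-bit histogram that occurs at least min_count times."""
    hist = np.bincount(window.ravel(), minlength=256)
    significant = np.nonzero(hist >= min_count)[0]
    return int(significant[0]) if significant.size else 0

def update_left_limit(previous_limit, sequence_limits, vehicle_flags):
    """Update the per-window left limit once per 100-frame sequence.

    sequence_limits: left limits measured in each frame of the sequence.
    vehicle_flags: True for frames in which a vehicle was detected.
    If every frame contained a vehicle (a standing queue), keep the old
    limit; otherwise take the minimum, i.e. the value contributed by the
    frames with an empty window.
    """
    if all(vehicle_flags):
        return previous_limit
    return min(sequence_limits)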
Threshold selection program
For threshold selection, the number of edge points greater than the left-limit grey value of each window is extracted for a large number of frames (200 frames). These numbers are used to create a histogram, which is smoothed using a median filter. We expect two peaks in the resulting diagram: one related to frames with vehicles and the other to frames without vehicles in that window. However, as can be seen in Fig. (7.2), there are also edge-point counts (30–40) between the peaks at 20 and 60, which correspond to vehicles that only partially cover the window. The threshold is chosen at the point where the sum of the entropies of the points above and below it is maximum, and this point is selected as the threshold value for the next period.
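The entropy-based choice of threshold can be sketched as follows. The bin count and the smoothing window are assumptions; the criterion itself (maximising the sum of the entropies of the two classes) is the one described above.

import numpy as np
from scipy.ndimage import median_filter

def entropy_threshold(edge_point_counts, bins=100):
    """Pick a threshold on the 'number of edge points' histogram by
    maximising the sum of the entropies of the parts below and above it."""
    hist, edges = np.histogram(edge_point_counts, bins=bins)
    hist = median_filter(hist.astype(np.float64), size=3)  # smooth the histogram
    p = hist / hist.sum()

    best_t, best_score = 1, -np.inf
    for t in range(1, len(p) - 1):
        lo, hi = p[:t], p[t:]
        w_lo, w_hi = lo.sum(), hi.sum()
        if w_lo == 0 or w_hi == 0:
            continue
        lo, hi = lo[lo > 0] / w_lo, hi[hi > 0] / w_hi
        score = -np.sum(lo * np.log(lo)) - np.sum(hi * np.log(hi))
        if score > best_score:
            best_score, best_t = score, t
    return edges[best_t]  # threshold in 'number of edge points' units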

TRAFFIC MOVEMENT AT JUNCTION
Measuring the traffic movements of vehicles at junctions, such as the number of vehicles turning in different directions (left, right, and straight), is very important for the analysis of cross-section traffic conditions and for adjusting traffic lights. Previous research on the traffic-movement-at-junction (TMJ) parameter is based on a full-frame approach, which requires more computing power and is therefore not suitable for real-time applications. Here, a different method is used, based on counting vehicles at the key regions of the junction by means of the vehicle-detection method.
   The first step in measuring the TMJ parameters using the key-region method is to enclose the boundary of the junction with a polygon in such a way that all the entry and exit paths of the junction cross the polygon. However, the polygon should not cover the marked pedestrian crossings. This step is shown in Fig. (8.2).
The second step of the algorithm is to define a minimum number of key regions inside the boundary of the polygon covering the junction. These key regions are used for detecting vehicles entering and exiting the junction, based on first-vehicle-in, first-vehicle-out logic.
Following the application of vehicle detection (described in Section III) to each window, a status vector is created for each window in each frame. If a vehicle is detected in a window, a one is inserted into its corresponding status vector; otherwise a zero is inserted. By analysing the status vector of each window, the TMJ parameters are calculated for each path of the junction. Since this process is also used for counting the number of vehicles, it does not increase the computational requirement.
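A minimal sketch of how the status vectors can be analysed is shown below. Counting a vehicle at each zero-to-one transition, and the dictionary-based grouping of entry and exit windows per path, are assumptions for illustration; matching individual entries to exits with the first-vehicle-in, first-vehicle-out logic is left out of the sketch.

import numpy as np

def count_vehicles(status_vector):
    """Count vehicles passing one key-region window from its status vector
    (one 0/1 value per frame): each 0 -> 1 transition is one vehicle."""
    s = np.asarray(status_vector)
    return int(np.sum((s[1:] == 1) & (s[:-1] == 0)))

def junction_turning_counts(entry_vectors, exit_vectors):
    """entry_vectors, exit_vectors: dicts mapping a path name (e.g. 'north')
    to the status vector of the key region on that entry or exit arm.
    Returns per-arm vehicle counts for entries and exits."""
    entries = {path: count_vehicles(v) for path, v in entry_vectors.items()}
    exits = {path: count_vehicles(v) for path, v in exit_vectors.items()}
    return entries, exits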

CONCLUSION
• An algorithm measuring basic queue parameters, such as the period of occurrence between queues and the length and slope of occurrence, has been discussed.
• The algorithm applies simple but effective operations.
• To reduce computation time, the motion-detection operation is applied on all sub-profiles, while the vehicle-detection operation is used only when necessary.
• The vehicle-detection operation is an edge-based technique that is less sensitive to lighting variations, and the threshold selection is done dynamically to further reduce the effects of variations in lighting.
• The measurement algorithm has been applied to traffic scenes with different lighting conditions.
• Queue-length measurement showed up to 95% accuracy. The error is due to objects located far from the camera and can be reduced to some extent by reducing the size of the sub-profiles.
