An efficient approach towards ANPR system

Tirth Patel
13 min read · Nov 11, 2021

Automatic number plate recognition (ANPR) technology is a tool applied in smart cities for investigation and crime prevention. It has been widely used in parking management systems and at toll booths on highways, where the shooting angle and lighting conditions are fixed. Traffic control and vehicle owner identification have become significant problems in every country. It is often challenging to identify a vehicle owner who violates traffic rules and drives too fast, because the traffic police may not be able to retrieve the vehicle number from a fast-moving vehicle, which makes catching and punishing such offenders difficult. Another major problem in today’s fast-paced world is the inconvenience of finding a free parking slot. The parking issue wastes time and energy, especially for commuters looking for a spot to park their vehicles. The main objective is to build a productive model for managing obstacles such as inappropriate parking, car insecurity, and congestion in parking areas.

Hence, there arose a need to develop an Automatic Number Plate Recognition system that acts as a solution to many of these problems. Various kinds of ANPR systems are available today, and they use different methodologies. Still, it is a challenging task, since factors such as high vehicle speed, non-uniform number plates, the language on the plate, and varying lighting conditions can all affect the overall recognition rate. In this article, we will learn about and implement an ANPR system by applying various computer vision techniques and machine learning algorithms. The ANPR application acts as the entryway for booking parking spots. The proposed architecture uses OCR and facial recognition to detect number plates and faces for controlling access at the entrance of the parking spot.

Overview

Smart car, smart city, and intelligent transportation system technologies continue to alter many aspects of human existence. As a result, technologies like automatic number plate recognition (ANPR) have become commonplace in our daily lives.
Furthermore, the concept of ANPR has the potential to contribute to a variety of use cases while obviating the need for human participation. For example, in smart parking, a wide-angle camera acts as a sensor that detects free parking spaces and records them. These records help to assign a parking space to the incoming user. Intelligent Transport Systems (ITS) and Electronic Toll Collection (ETC) use optical character recognition (OCR) to create a record for every entering vehicle. Such a system provides tag-less entry for all vehicles in the parking lot, but it does not assign a slot to the user. There is no universal OCR algorithm, which makes this part challenging to build.

To achieve this task, let’s understand the architecture. We need to recognize the license plate; here, specifically Indian plates, since every country has its own plate characters and plate shapes. So, overall, we need to detect the number plate first and then recognize its characters. There is one more bifurcation: different vehicle types have different shapes and sizes of number plates. In this task, we focus only on cars and the standard shape and size of car number plates in India. Once we have detected the plate, we have to recognize the characters on it. Finally, we can feed the recognized number to an API that returns the vehicle information. The API returns a JSON output, which we can process and store in a database for future use.

Workflow

Architecture of ANPR System

Image preprocessing techniques for Character Segmentation

As discussed earlier, we must train a model to recognize the plate’s characters, i.e. alphabets and numbers. Creating such a model requires pre-processing of images. The standard procedure to extract any object from an image is by finding its contours.

What are contours?

A contour can be explained as a curve joining all the continuous points along a boundary that have the same colour or intensity. Contours are a valuable tool for shape analysis and for object detection and recognition.

  • For better accuracy, use binary images. So before finding contours, apply thresholding or Canny edge detection. Since OpenCV 3.2, findContours() no longer modifies the source image.
  • In OpenCV, finding contours is like finding a white object on a black background. So remember, the object to detect should be white, and the background should be black.

Read more at: https://docs.opencv.org/4.5.2/d4/d73/tutorial_py_contours_begin.html.
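
To make this concrete, here is a minimal sketch of finding and drawing contours with OpenCV; the file name plate.jpg and the threshold value of 127 are assumptions for illustration.

import cv2

#read the image, convert it to grayscale and binarize it
img = cv2.imread('plate.jpg')  #assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

#find the external contours on the binary image
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

#draw all detected contours in green on a copy of the original image
out = img.copy()
cv2.drawContours(out, contours, -1, (0, 255, 0), 2)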

Now we know that we first have to convert the image to grayscale and then apply a threshold to obtain a binary image. Once we do that, we also need the characters to stand out clearly against the black background. To do so, we use the threshold function in the cv2 module.

What is threshold?

For every pixel, the same threshold value is applied. If the pixel value is smaller than the threshold, it is set to 0; otherwise, it is set to a maximum value. The cv2.threshold function applies the thresholding and takes four arguments:

  • The first argument is the source image, which should be grayscale.
  • The second argument is the threshold value used to classify the pixel values.
  • The third argument is the maximum value assigned to pixel values exceeding the threshold.

OpenCV provides different types of thresholding, and the fourth argument specifies the kind of threshold to apply. In our case, we use BINARY and OTSU together: BINARY creates a sharp contrast of black and white in the image, and OTSU automatically picks the optimal threshold value, which suppresses much of the noise.

Read more at: https://docs.opencv.org/4.5.2/d7/d4d/tutorial_py_thresholding.html.
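
A minimal sketch of the combined BINARY + OTSU thresholding; the file name plate.jpg is an assumption, and the value 200 mirrors the segmentation code later in this article.

import cv2

#load the cropped plate and convert it to grayscale
plate = cv2.imread('plate.jpg')  #assumed input image
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)

#BINARY gives a pure black/white output; OTSU picks the threshold value itself,
#so the 200 passed here is effectively replaced by the computed optimum
ret, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Otsu threshold chosen:', ret)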

What is erode and dilate?

We have already thresholded the image. Now we need morphological operations so that all the characters appear bright and clean in the picture, with well-defined edges and curves. These two morphological operations help make the characters clearer, so that, for example, a D does not appear as an O.

Erode and dilate are the most basic morphological operations. Morphological operations are a set of operations that process images based on shapes: they apply a structuring element to an input image and generate an output image. Erode and dilate have a wide range of uses:

  • Removing noise
  • Isolating individual elements and joining disparate elements in an image
  • Finding intensity bumps or holes in an image

Dilate

This operation consists of convolving an image A with some kernel B, which can have any shape or size but is usually a square or a circle. As this operation is applied, the brighter regions of the image grow.

Erode

Erosion is precisely the opposite of dilation: it computes a local minimum over the area of the given kernel, so the brighter regions shrink.

Read more at: https://docs.opencv.org/3.4/db/df6/tutorial_erosion_dilatation.html.
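
A minimal sketch of both operations on a binary plate image, assuming white characters on a black background and a small 3x3 rectangular kernel:

import cv2
import numpy as np

#binary_plate.jpg is an assumed thresholded plate image (white characters on black)
binary = cv2.imread('binary_plate.jpg', cv2.IMREAD_GRAYSCALE)

kernel = np.ones((3, 3), np.uint8)

#erode shrinks the white regions (removes small white noise)
eroded = cv2.erode(binary, kernel, iterations=1)

#dilate grows the white regions back (thickens the character strokes)
cleaned = cv2.dilate(eroded, kernel, iterations=1)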

What is Dropout Layer?

The Dropout layer randomly sets input units to 0 with a given frequency (the rate) at each step during training, which helps prevent over-fitting. Inputs not set to 0 are scaled up by 1/(1 - rate) so that the sum over all inputs is unchanged. Note that the Dropout layer only applies when training is set to True, so no values are dropped during inference. When using model.fit, training is set to True automatically; in other contexts, you can set the training keyword argument explicitly to True when calling the layer. In simple words, the Dropout layer is a mask that nullifies the contribution of some neurons towards the next layer and leaves all others unmodified. We can apply a Dropout layer to the input vector, in which case it nullifies some of its features, but we can also apply it after a hidden layer, in which case it nullifies some hidden neurons. Dropout layers are essential when training CNNs because they prevent overfitting on the training data. Without them, the first batches of training samples would influence the learning disproportionately, preventing the learning of features that appear only in later samples or batches.
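
A minimal Keras sketch of the behaviour described above; the rate of 0.4 and the input shape are arbitrary choices for illustration.

import numpy as np
import tensorflow as tf

#a Dropout layer that zeroes 40% of its inputs during training
dropout = tf.keras.layers.Dropout(0.4)
x = np.ones((1, 10), dtype=np.float32)

#training=True: some units are zeroed, the rest are scaled by 1/(1 - 0.4)
print(dropout(x, training=True).numpy())

#training=False (inference): the input passes through unchanged
print(dropout(x, training=False).numpy())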

What is F1 score?

It’s just a metric using which we can measure our model’s accuracy. So, what’s the problem with accuracy metrics? Well, the answer is straightforward. In our data set, values/records are such that they will favour the positive side more than the negative. In layman’s terms, we might have 80% data in favour of our prediction and only 20% against, so obviously, our model will be biased to one. In these cases, we might get more false positives as well as a false negatives. Overall we need to avoid Type 1 errors more than Type 2. To do so, we have an F1 Score. F1 Score is the harmonic mean of precision and recall.

Read more at: https://towardsdatascience.com/f-beta-score-in-keras-part-i-86ad190a252f.
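
As a quick illustration of the formula F1 = 2 * precision * recall / (precision + recall), here is a small sketch on made-up binary labels using scikit-learn:

from sklearn.metrics import f1_score, precision_score, recall_score

#made-up ground-truth labels and predictions for illustration
y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
print(2 * p * r / (p + r))          #harmonic mean of precision and recall
print(f1_score(y_true, y_pred))     #same value, computed directly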

Code for creating Model

The code below detects the number plate using a Cascade Classifier.

import cv2

#detecting license plate on the vehicle using a pre-trained cascade
plateCascade = cv2.CascadeClassifier('indian_license_plate.xml')

#detect the plate and return the annotated car image + the cropped plate image
def plate_detect(img):
    plateImg = img.copy()
    roi = img.copy()
    plateRect = plateCascade.detectMultiScale(plateImg, scaleFactor=1.2, minNeighbors=7)
    for (x, y, w, h) in plateRect:
        roi_ = roi[y:y+h, x:x+w, :]
        plate_part = roi[y:y+h, x:x+w, :]
        cv2.rectangle(plateImg, (x+2, y), (x+w-3, y+h-5), (0, 255, 0), 3)
    return plateImg, plate_part
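
A short usage sketch of the detector defined above; the file name car.jpg is an assumed test image.

import matplotlib.pyplot as plt

#read a test image and run the cascade-based detector
car = cv2.imread('car.jpg')  #assumed test image
annotated, plate_part = plate_detect(car)

#display the annotated car and the cropped plate (convert BGR to RGB for matplotlib)
plt.imshow(cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB))
plt.show()
plt.imshow(cv2.cvtColor(plate_part, cv2.COLOR_BGR2RGB))
plt.show()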

Now we need to find contours and segment the characters; follow the code given below. The find_contours() function processes the image and finds the character outlines, and the segment_characters() function segments the characters. Later, we pass these segmented characters to a CNN model, which classifies them and gives us the number plate in text format.

import numpy as np
import matplotlib.pyplot as plt

#find all character contours in the binary plate image
#retrieval mode: RETR_TREE, contour approximation method: CHAIN_APPROX_SIMPLE
def find_contours(dimensions, img):
    cntrs, _ = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    #approximate dimensions of the character contours
    lower_width = dimensions[0]
    upper_width = dimensions[1]
    lower_height = dimensions[2]
    upper_height = dimensions[3]

    #check the 15 largest contours for license plate characters
    cntrs = sorted(cntrs, key=cv2.contourArea, reverse=True)[:15]

    ci = cv2.imread('contour.jpg')

    x_cntr_list = []
    img_res = []
    for cntr in cntrs:
        #bounding rectangle of the contour in the binary image
        intX, intY, intWidth, intHeight = cv2.boundingRect(cntr)

        #filter out characters by the contour's size
        if intWidth > lower_width and intWidth < upper_width and intHeight > lower_height and intHeight < upper_height:
            x_cntr_list.append(intX)  #x-coordinate, used later to sort characters left to right
            char_copy = np.zeros((44, 24))
            #extract each character using the enclosing rectangle's coordinates
            char = img[intY:intY+intHeight, intX:intX+intWidth]
            char = cv2.resize(char, (20, 40))
            cv2.rectangle(ci, (intX, intY), (intWidth+intX, intY+intHeight), (50, 21, 200), 2)
            plt.imshow(ci, cmap='gray')
            #invert the character so the strokes become dark on a light background
            char = cv2.subtract(255, char)
            #place the resized character on a 44x24 canvas with a black border
            char_copy[2:42, 2:22] = char
            char_copy[0:2, :] = 0
            char_copy[:, 0:2] = 0
            char_copy[42:44, :] = 0
            char_copy[:, 22:24] = 0
            img_res.append(char_copy)  #list that stores the character's binary image (unsorted)

    plt.show()

    #return characters in ascending order with respect to the x-coordinate
    indices = sorted(range(len(x_cntr_list)), key=lambda k: x_cntr_list[k])
    img_res_copy = []
    for idx in indices:
        img_res_copy.append(img_res[idx])  #store character images according to their index
    img_res = np.array(img_res_copy)

    return img_res


def segment_characters(image):
    #pre-process the cropped plate image
    #threshold: convert to pure black & white with sharp edges
    #erode: grow the black background, dilate: grow the white characters
    img_lp = cv2.resize(image, (333, 75))
    img_gray_lp = cv2.cvtColor(img_lp, cv2.COLOR_BGR2GRAY)
    _, img_binary_lp = cv2.threshold(img_gray_lp, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img_binary_lp = cv2.erode(img_binary_lp, (3, 3))
    img_binary_lp = cv2.dilate(img_binary_lp, (3, 3))

    LP_WIDTH = img_binary_lp.shape[0]
    LP_HEIGHT = img_binary_lp.shape[1]

    #whiten the plate borders so they are not picked up as characters
    img_binary_lp[0:3, :] = 255
    img_binary_lp[:, 0:3] = 255
    img_binary_lp[72:75, :] = 255
    img_binary_lp[:, 330:333] = 255

    #estimated size range of character contours in the cropped license plate
    dimensions = [LP_WIDTH/6, LP_WIDTH/2, LP_HEIGHT/10, 2*LP_HEIGHT/3]
    plt.imshow(img_binary_lp, cmap='gray')
    plt.show()
    cv2.imwrite('contour.jpg', img_binary_lp)

    #get the character contours
    char_list = find_contours(dimensions, img_binary_lp)

    return char_list
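
With both functions in place, a quick way to check the segmentation on a detected plate (car.jpg again being an assumed test image) is:

annotated, plate_part = plate_detect(cv2.imread('car.jpg'))  #assumed test image
chars = segment_characters(plate_part)
print(len(chars), 'characters segmented')  #Indian plates typically yield 9 or 10 characters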

Now we need to create and train a CNN model to classify the characters. In this code, I have used Keras to develop and train the model.

import keras.backend as K
import tensorflow as tf
from sklearn.metrics import f1_score
from keras import optimizers
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Flatten, MaxPooling2D, Dropout, Conv2D

train_datagen = ImageDataGenerator(rescale=1./255, width_shift_range=0.1, height_shift_range=0.1)
path = 'data/data/'
train_generator = train_datagen.flow_from_directory(
    path + '/train',
    target_size=(28, 28),
    batch_size=1,
    class_mode='sparse')

validation_generator = train_datagen.flow_from_directory(
    path + '/val',
    target_size=(28, 28),
    class_mode='sparse')

The below code is for defining the custom f1 score for multi-class classification.

#It is the harmonic mean of precision and recall
#Output range is [0, 1]
#Works for both multi-class and multi-label classification
def f1score(y, y_pred):
    return f1_score(y, tf.math.argmax(y_pred, axis=1), average='micro')

def custom_f1score(y, y_pred):
    return tf.py_function(f1score, (y, y_pred), tf.double)

Model creation

K.clear_session()
model = Sequential()
model.add(Conv2D(16, (22,22), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (16,16), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (8,8), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (4,4), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(36, activation='softmax'))

model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.0001), metrics=[custom_f1score])

The below code trains the model and configures a custom callback that stops training early once the validation F1 score crosses 0.99.

class stop_training_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('val_custom_f1score') > 0.99:
            self.model.stop_training = True

batch_size = 1
callbacks = [stop_training_callback()]
model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    validation_data=validation_generator,
    epochs=80, verbose=1, callbacks=callbacks)

Finally, we can load the trained model and get the predictions, i.e. the number plate of the vehicle. Moreover, we can call an RTO API to get the vehicle’s details and store them in the database for future use.
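
As a minimal sketch of that prediction step, assuming the 36 output classes are ordered as the digits 0-9 followed by the letters A-Z (the folder order used for training) and reusing plate_part from the detection step above; the helper name show_results is an illustrative choice, not part of any library:

import numpy as np

#assumed class order: digits first, then uppercase letters (36 classes in total)
characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def show_results(char_list, model):
    plate_number = ''
    for ch in char_list:
        #resize each 44x24 segmented character to the 28x28x3 input the CNN expects
        img = cv2.resize(ch.astype(np.uint8), (28, 28))
        img = np.stack((img,) * 3, axis=-1) / 255.0
        probs = model.predict(img.reshape(1, 28, 28, 3))
        plate_number += characters[int(np.argmax(probs))]
    return plate_number

print(show_results(segment_characters(plate_part), model))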

To get complete working code refer to https://github.com/tiru-patel/Automatic-Car-Parking-Slot-Booking-System

Benefits of Automatic Number Plate Recognition

ANPR offers numerous advantages that form the basis for real-world applications. Most benefits of ANPR come from automating manual tasks, highly efficient space management, better governance, and an improved customer experience.

  • Automation: The automated recognition of number plates allows automated alerts and controls for facilities. Hence, ANPR is a crucial technology for smart cities.
  • Analytics: The generated data helps traffic flow analytics and is particularly important for operating intelligent transportation systems (ITS), where data processing technologies help improve the mobility of people and goods, demand management, increase safety, reduce traffic congestion and manage incidents effectively.
  • Identification: Fast recognition of a vehicle number plate is the basis for fast and seamless vehicle identification. The recognized plate helps to grant vehicles access or to find and track specific vehicles.
  • Efficiency: The precise and fast number plate recognition doesn’t rely on human input. Hence it drives cost-efficient governance and reduces waiting times.
  • Convenience: ANPR is usually integrated with other IT systems and operates in an ecosystem to provide end-users with a seamless and hassle-free experience. Hence, the technology enhances the customer experience and offers new services and products, such as automated parking payments.

Use Cases of ANPR

Automatic number plate recognition is essential for a wide range of applications where the detection, identification, or localization of vehicles is needed. Some of the main use cases are outlined below.

Law enforcement. Police forces use ANPR for law enforcement purposes, including checking if a vehicle is registered or identifying vehicles related to traffic violations. Detecting and recognizing number plates in real-time allows authorities to identify vehicles and track their location.

Car parking management. Car parking management requires an integrated solution to detect individual vehicles, so automated number plate recognition is the key to efficient car parking management. ANPR allows parking garages to automate parking management because every car is accounted for by its license plate number. Therefore, parking garage users avoid the stress of managing their tickets and tracking the time spent, risking penalties for inaccurate ticket payments or losing their tickets. In addition, such automated records can be referred back to in case of a disagreement. Such parking systems keep track of every vehicle in the facility and ensure complete governance.

Journey time analysis. Journey time analysis (JTA) is crucial for authorities to identify vehicles passing through and measure their travel time from one node to another. Such analytics also allow better route planning for traffic administrators.

Traffic management. Traffic management is the umbrella term for a plethora of advantages that ANPR offers. ANPR can be used throughout cities to detect overspeeding vehicles, vehicles driven rashly, or any accidental occurrence. ANPR provides solutions for measuring and analyzing traffic data of a particular area or an entire city. On a larger scale, traffic management allows insights into traffic congestion for better traffic planning.

Retail park security. Retail parks often deal with unauthorized parking, leading to hassles for rightful users of parking spaces and sometimes to suspicious activities. This kind of security risk can be addressed with ANPR technology by ensuring that only authorized vehicles use the parking spaces.

Tollbooth records. Manual tollbooth management on highways is still a significant practice in some parts of the world. Often, tollbooths leverage different technologies for autonomous tollbooth management. For example, on more extensive freeways, ANPR allows authorities to receive license plate numbers for paying tolls by mail or automatically rather than stopping and paying at a manually-run tollbooth. Hence, ANPR enables efficient toll booth management, decreasing the operational time needed and thus increasing productivity.

Summary

ANPR is a complicated system because of its multiple phases, and it is difficult to achieve 100% overall accuracy, since each stage depends on the previous one. Factors like varying illumination conditions, vehicle shadows, non-uniform license plate character sizes, and different fonts and background colours affect the performance of ANPR. Some systems work only under these restricted conditions and might not produce good accuracy in adverse conditions. Some designs are developed and used only for specific countries, and very few ANPR systems have been developed for India. Hence, there is broad scope to improve this architecture and create more robust models to detect number plates, helping parking management systems and much more.
