Journal of Military Science and Technology, Special Issue, No.57A, 11 - 2018 1
RESEARCH AND DEVELOPMENT OF ARTIFICIAL INTELLIGENCE
FOR TRAJECTORY TRACKING AND AUTOMATIC PATH
PLANNING FOR AN AUTONOMOUS CAR
Ngo Manh Tien1, Nguyen Nhu Chien2, Do Hoang Viet2, Ha Thi Kim Duyen3*
Abstract: This paper presents the design and development of a self-propelled
autonomous car with an onboard camera whose missions are lane detection, to
determine the trajectory of the car, and traffic sign recognition. The paper also
presents the underlying algorithms: a CNN model for lane detection and Adaboost
for traffic sign recognition. The results show that the autonomous car satisfies
the required control qualities. The simulation and test results confirm the
validity of this research and open the possibility of practical use.
Keywords: Autonomous Car, CNN, Adaboost, PID, Robot Mobile Tracking, Computer Vision.
I. INTRODUCTION
The interest in developing autonomous vehicles increases day by day, with the
purpose of achieving high levels of safety, performance, sustainability and
convenience. In the Industry 4.0 era, driverless cars are ideal for use in crowded
areas and on highways. The problems in autonomous-car research include lane
detection, traffic sign recognition, traffic light identification, velocity and
position control, and obstacle avoidance, addressed with many highly applicable
modern control algorithms. In particular, with the growth of open libraries,
applications of artificial intelligence (AI), machine learning and deep learning
have become much more common.
Many approaches using artificial neural networks (ANNs) can be found in the
literature for lane tracking, with the goals of accuracy and optimized
performance. However, an ANN requires time to train on every pixel of the input
stream; this is unnecessary and costly on a large database. The convolutional
neural network (CNN) was therefore introduced to minimize preprocessing, using
convolutions as feature filters whose outputs are passed on to the next layer.
This makes the process faster while still guaranteeing accuracy and stability.
The paper presents an application of a CNN model for lane recognition and
trajectory-signal generation for the autonomous car, and of the Adaboost
machine-learning algorithm for traffic sign identification; in the experiments,
the embedded program runs on a Raspberry Pi 3 computer and an Arduino.
The system consists of four components (figure 1):
1. The CNN and Adaboost models are trained on a PC and then embedded on the
Raspberry Pi.
2. The embedded Raspberry Pi computer: the image input is taken by the camera;
velocity and position feedback signals are processed and the coordinates
computed. Back-propagation and coordinate conversion are then used to define
the desired set point for the robot.
3. The Arduino kit computes velocity, position and torque control signals for
the two DC motors. Its inputs are the reference point sent by the Raspberry Pi
and the feedback signals from the two DC motors.
4. The autonomous car is a three-wheeled robot of non-holonomic type (two
powered wheels plus an additional free rotating front wheel).
The hardware of the autonomous car was designed and tested as in figure 2.
Figure 1. Block diagram of overall system.
Figure 2. Structure of Robot’s hardware.
II. IMPLEMENTATION OF A CNN MODEL TO DETECT LANES
FOR A SELF-DRIVING CAR
2.1. CNN structure
There are many ways to approach the lane detection problem. The traditional
way is to use passive infrared sensors to detect the line; however, this
solution depends on lighting conditions, which makes it difficult to apply in
real life. The modern and more applicable way is to use an ANN whose input is
the image captured by the camera. The model is trained on a feature database in
order to decide which way to steer. The disadvantage of an ANN is that training
an accurate model takes a long time, because it processes every pixel of the
input image. According to recognition theory this is unnecessary: feature
extraction is all that is required for recognition. Nowadays, the lane detection
approach using a CNN model is becoming popular because of its high accuracy and
performance with a short training time. The convolutional layers of a CNN work
as a feature filter placed in front of the fully-connected layers. In this
paper, we propose a CNN architecture (figure 3):
Figure 3. CNN model for autonomous car.
The CNN model consists of 9 layers (5 convolutional and 4 fully-connected) and
about 250,000 parameters. The input is a 200x66-pixel RGB image. We use 24
filters of size 5x5, which after convolution yield the 24 feature maps of
convolutional layer 1 (conv1); each conv1 output map is 31x98. Using a 1x1
window in the pooling layer, we keep 24 matrices of size 31x98. Applying 36
filters of size 5x5 to the 24 matrices output by conv1, we obtain 36 matrices
in conv2, and after the subsampling layer, 36 matrices of size 14x47.
Using 48 filters of size 5x5 to convolve the outputs of conv2, we obtain 48
feature maps of size 5x22 in conv3. In the next two layers we use 3x3 filters,
so that conv5 outputs 64 matrices of size 1x18. The 1152 neurons of
fully-connected layer 1 (fc1) are created by flattening the 64 matrices of
conv5. After the 4 fully-connected layers, the output is 3 nodes, corresponding
to 3 steering-angle signals sent to the controller (-30 degrees: turn left,
0 degrees: straight, 30 degrees: turn right).
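The layer sizes above can be checked with a short sketch. The code below is an
illustrative PyTorch reconstruction of the described architecture, not the
authors' implementation: the strides (2 for the 5x5 layers, 1 for the 3x3
layers) and the hidden fully-connected sizes (100, 50, 10) are assumptions;
the paper fixes only the filter counts and sizes, the 1152-neuron flatten and
the 3-node output.

```python
import torch
import torch.nn as nn

class LaneCNN(nn.Module):
    """Sketch of the 5-conv / 4-FC lane-detection network described above."""
    def __init__(self):
        super().__init__()
        act = nn.Sigmoid()  # the paper applies the sigmoid of Eq. (1) per block
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), act,   # -> 24 x 31 x 98 (conv1)
            nn.Conv2d(24, 36, 5, stride=2), act,  # -> 36 x 14 x 47 (conv2)
            nn.Conv2d(36, 48, 5, stride=2), act,  # -> 48 x 5 x 22  (conv3)
            nn.Conv2d(48, 64, 3), act,            # -> 64 x 3 x 20  (conv4)
            nn.Conv2d(64, 64, 3), act,            # -> 64 x 1 x 18  (conv5)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                # 64 * 1 * 18 = 1152 neurons (fc1 input)
            nn.Linear(1152, 100), act,   # hidden sizes are assumptions
            nn.Linear(100, 50), act,
            nn.Linear(50, 10), act,
            nn.Linear(10, 3),            # left / straight / right
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LaneCNN()
out = model(torch.zeros(1, 3, 66, 200))          # one 200x66 RGB frame
total = sum(p.numel() for p in model.parameters())
print(out.shape, total)  # 3 outputs; roughly 250,000 parameters
```

With these assumed strides the parameter count comes out near the paper's
250,000, which suggests the reconstruction is close to the intended shape.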
In fact, the output of the CNN-based lane detector is a set of three values
(x, y, θ) that defines a trajectory; x and y are calculated from the reference
velocity and the processing time of each frame the camera captures.
Note that in every block, from the first stage to the last, we use the sigmoid
activation function:

f(x) = \frac{1}{1 + e^{-x}}   (1)

The value of each node is computed as y = f(A * I), where A is the matrix to be
convolved, I is the convolution kernel, and y is the value of one node of the
feature map.
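The node computation above can be sketched directly; the 5x5 patch and kernel
below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    # Eq. (1): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def conv_node(A, I):
    # y = f(A * I): one node of a feature map, where A is the image patch
    # under the filter and I is the convolution kernel.
    return sigmoid(np.sum(A * I))

patch = np.ones((5, 5))
kernel = np.zeros((5, 5))
print(conv_node(patch, kernel))  # sigmoid(0) = 0.5
```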
2.2. Training process and recognition result
Electronics and Automation
N. M. Tien, , H. T. K. Duyen, “Research and development artificial for auto car.” 4
In this paper, we train with the back-propagation algorithm, using a dataset of
1000 training images and 400 test images (for tuning hyper-parameters). The
target is to drive the loss function

E = \frac{1}{N} \sum_{n=1}^{N} (y - y_0)^2

to 0, where y is the prediction of the model and y_0 is the set (target) value
of the input.
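The loss above can be computed directly. As a minimal sketch (the
back-propagation step and image pipeline are omitted, and the three-element
vectors below are illustrative, not from the paper's dataset):

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # E = (1/N) * sum_n (y - y0)^2, the loss minimized during training
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Toy prediction vs. target for the 3 output nodes
loss = mse_loss([0.9, 0.1, 0.0], [1.0, 0.0, 0.0])
print(loss)  # (0.01 + 0.01 + 0) / 3
```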
After 2000 loops, we obtain the training result, shown as the graph of the loss function E:
Figure 4. Training result.
Figure 4 shows that after 2000 loops, the error dropped to 0.0153, i.e. an
average error of 1.53% and an accuracy of 98.47%. This is an acceptable result.
Recognition was then performed with the trained model:
Figure 5. Recognition result.
Figure 5 shows that the accuracies for the three labels straight, turn left and
turn right are 99%, 98.5% and 97.91% respectively. The difference between the
accuracy of the straight label and the others is caused by the larger number of
straight images in the dataset. With these results, the model guarantees the
accuracy of image recognition.
After training the lane-detection CNN on the computer, the trained model was
embedded on the Raspberry Pi and test runs were performed.
III. A CASCADE CLASSIFIER USING ADABOOST ALGORITHM FOR
TRAFFIC SIGN DETECTION AND IDENTIFICATION
The problem of traffic sign detection is studied for several purposes, and most
work on it is applied to autonomous cars. Sign detection allows warning the
driver of inappropriate actions and potentially dangerous situations. This
paper explains the steps followed to detect and identify traffic signs using
cascaded Adaboost based on Haar-like features [9].
3.1. Adaboost algorithm
The Adaboost algorithm is based on boosting, a general ensemble method that
creates a strong classifier from a number of weak classifiers.

Step 1: Given example images (x_1, t_1), ..., (x_N, t_N), where t_i \in \{-1, +1\}
for negative and positive examples respectively, initialize the weights
w_n^{(1)} = 1/N for n = 1, ..., N.

Step 2 (boosting): For m = 1, ..., M:
For each feature j, build a weak classifier y_j with weighted error

E_j = \sum_{n=1}^{N} w_n^{(m)} I(y_j(x_n) \neq t_n)   (2)

where

I(y_m(x_n) \neq t_n) = \begin{cases} 1 & y_m(x_n) \neq t_n \\ 0 & y_m(x_n) = t_n \end{cases}

Choose the classifier with the lowest error as y_m, then update the weights
according to (3):

w_n^{(m+1)} = w_n^{(m)} e^{\alpha_m I(y_m(x_n) \neq t_n)}   (3)

choosing

\alpha_m = \ln\frac{1 - \epsilon_m}{\epsilon_m}, \qquad
\epsilon_m = \frac{\sum_{n=1}^{N} w_n^{(m)} I(y_m(x_n) \neq t_n)}{\sum_{n=1}^{N} w_n^{(m)}}

Step 3: Output the final hypothesis:

H(x) = \mathrm{sign}\left(\sum_{m=1}^{M} \alpha_m y_m(x)\right)
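The three steps above can be sketched as a small NumPy implementation. The
paper's weak classifiers are Haar-like feature detectors; for brevity this
sketch substitutes one-feature threshold stumps, so it illustrates the
boosting loop rather than the authors' exact detector, and the toy data is
invented:

```python
import numpy as np

def adaboost_train(X, t, M=5):
    """Boosting loop of Steps 1-2 with threshold-stump weak classifiers."""
    N = len(t)
    w = np.full(N, 1.0 / N)                      # Step 1: uniform weights
    ensemble = []
    for _ in range(M):                           # Step 2: boosting rounds
        best = None
        for j in range(X.shape[1]):              # each feature j
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                    err = np.sum(w * (pred != t))        # Eq. (2)
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best           # lowest-error classifier y_m
        eps = np.clip(err / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = np.log((1 - eps) / eps)          # alpha_m
        w = w * np.exp(alpha * (pred != t))      # Eq. (3): reweight mistakes
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    # Step 3: H(x) = sign(sum_m alpha_m * y_m(x))
    score = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        score += alpha * np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)

# Toy usage: two separable clusters on one feature
X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
t = np.array([-1, -1, -1, 1, 1, 1])
ens = adaboost_train(X, t)
preds = adaboost_predict(ens, X)
print(preds)
```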
3.2. Cascaded structure
Figure 6. Cascade structure block.
To further reduce the false alarm rate, Viola and Jones [11] proposed a
cascade-Adaboost classifier, whose architecture is shown in figure 6; each
stage is an Adaboost classifier. "Nega" marks input vectors determined to be
negative examples, "Pos" marks those determined to be positive examples, and X
stands for the input vector.
If a stage determines the input to be negative, the sample is removed from the
training set and does not enter the next layer; in this way, negative samples
are rejected quickly. If it is determined to be positive, the sample enters the
next layer to continue the classification until the last layer.
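The early-rejection behavior can be sketched in a few lines; the stage
functions and thresholds below are illustrative placeholders, not trained
Adaboost stages:

```python
def cascade_classify(x, stages):
    """Cascade evaluation: each stage is a (score_fn, threshold) pair.
    A window is rejected as negative as soon as any stage says so; only
    windows passing every stage are reported positive."""
    for score_fn, thr in stages:
        if score_fn(x) < thr:
            return -1   # rejected early: negative example
    return 1            # survived all stages: positive example

# Toy stages: require the score to exceed 0, then 5
stages = [(lambda x: x, 0), (lambda x: x, 5)]
results = [cascade_classify(v, stages) for v in (-3, 2, 7)]
print(results)  # [-1, -1, 1]
```

Because most windows in a frame contain no sign, rejecting them at the first
cheap stage is what makes the cascade fast enough for embedded use.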
3.3. Training process and recognition result
In this paper, we detect and identify 4 traffic signs, corresponding to 4
hypotheses: the speed limit sign, the turn left sign, the turn right sign and
the stop sign.
Figure 7. Traffic sign.
The dataset consists of:
3020 background images without traffic signs
1200 speed limit sign images
1200 turn right sign images
1200 turn left sign images
1200 stop sign images
The parameters chosen for training the cascade Adaboost are:
Scale increase factor: 1.1
Minimum window size: 15x15
Maximum window size: 150x150
Number of stages: 18
Per-stage false positive rate: 0.5
Per-stage detection rate: 0.5
Overall cascade false positive rate: 0.005
The model was tested with 200 standard images from the competition [11], with
the results in table 1:
Table 1. Cascaded Adaboost detection and recognition results.

                          Speed limit    Turn left   Turn right   Stop
                          sign 40km/h    sign        sign         sign
Number of signs           50             50          50           50
Number of detections      60             55          54           61
Number of recognitions    45             47          46           43
Accuracy                  0.90           0.94        0.92         0.86
After the cascaded Adaboost traffic-sign detector based on Haar-like features
was trained on the computer, the model was embedded on the Raspberry Pi and
test runs were performed. In fact, the signals from the turn left and turn
right signs are sets of values (x, y, θ) that define a trajectory for the
autonomous car, while the signals from the speed limit sign and the stop sign
set the reference velocity.
IV. MODELING AND CONTROL METHOD OF AUTONOMOUS CAR
4.1. Modeling the system
The self-propelled vehicle used in this paper is a simple non-holonomic robot.
The robot's coordinate frame is attached at its center. Combined with the image
processing of sections II-III (these algorithms are embedded on the Raspberry
Pi 3), we obtain the desired trajectory for the trajectory controller
(implemented on the microcontroller).
Figure 8. Kinematic and dynamic model of robot.
The equations of motion of a non-holonomic system are given by the
Euler-Lagrange formulation [1]:

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = B(q)\tau + J^T(q)\lambda   (4)

with the non-holonomic constraint:

J(q)\dot{q} = 0

where q is the n-dimensional vector of the robot's n generalized coordinates,
M(q) is a symmetric positive-definite n x n matrix, C is the Coriolis torque
component, G is the n-vector of gravity torques, B is the n x r (r < n) input
transformation matrix, \tau is the r-dimensional torque vector, and \lambda is
the Lagrange-constraint multiplier.
The robot's position in the Cartesian frame {O, X, Y} is specified by the
vector q = [x_c  y_c  \theta]^T, where (x_c, y_c) are the coordinates of the
robot's center and \theta is the orientation of the robot frame {C, X_c, Y_c}.
Figure 9. Coordination control of autonomous car.
Similarly, following [1]:

M(q) = \begin{bmatrix} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & I \end{bmatrix}, \quad
C(q,\dot{q}) = 0, \quad G(q) = 0, \quad
B(q) = \frac{1}{r}\begin{bmatrix} \cos\theta & \cos\theta \\ \sin\theta & \sin\theta \\ R & -R \end{bmatrix}, \quad
\tau = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix}, \quad
J(q) = \begin{bmatrix} -\sin\theta & \cos\theta & 0 \end{bmatrix}   (5)
The Euler-Lagrange equation of motion of the robot is then:

\begin{bmatrix} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & I \end{bmatrix}
\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{\theta} \end{bmatrix}
= \frac{1}{r}\begin{bmatrix} \cos\theta & \cos\theta \\ \sin\theta & \sin\theta \\ R & -R \end{bmatrix}
\begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} + J^T(q)\lambda   (6)

where \tau_1 and \tau_2 are the torques of the left and right motors
respectively, m and I are the mass and moment of inertia of the robot, r is the
radius of the driving wheels, and R is half the distance between the two rear
driving wheels.
Let q = [x  y  \theta]^T be the robot's actual coordinates,
q_r = [x_r  y_r  \theta_r]^T the reference coordinates, and
q_e = [x_e  y_e  \theta_e]^T the tracking error expressed in the moving frame.
The trajectory-tracking control signal is

u = \begin{bmatrix} v(q_e, q_r) \\ \omega(q_e, q_r) \end{bmatrix}
We obtain the error-state and orientation dynamics:

\begin{bmatrix} \dot{x}_e \\ \dot{y}_e \\ \dot{\theta}_e \end{bmatrix}
= \begin{bmatrix} v_r\cos\theta_e \\ v_r\sin\theta_e \\ \omega_r \end{bmatrix}
+ \begin{bmatrix} -1 & y_e \\ 0 & -x_e \\ 0 & -1 \end{bmatrix}
\begin{bmatrix} v \\ \omega \end{bmatrix}   (7)
4.2. Design of the proposed controller
To clarify, the variables v and \omega will hereafter be written v_d and
\omega_d when they denote the desired velocity and angular velocity that
stabilize the kinematics. The dynamic controller is designed based on formula
(6), whose velocities are the state variables and whose motor torques are the
control signals of the dynamic loop.
Figure 10. Overview architecture of the robot.
According to [4], the kinematic controller is:

v = v_r\cos\theta_e + K_x x_e
\omega = \omega_r + v_r(K_y y_e + K_\theta\sin\theta_e)   (8)

where K_x, K_y, K_\theta are positive constants.
We propose a PD algorithm for the dynamic controller.
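Convergence of the kinematic loop can be checked numerically by driving the
error dynamics (7) with the control law (8). The gains below match the
simulation parameters of section 4.3; the reference velocities, the initial
error and the Euler step are illustrative assumptions:

```python
import numpy as np

def simulate_tracking(Kx=10.0, Ky=10.0, Kth=10.0,
                      vr=1.0, wr=0.0, dt=1e-3, T=5.0):
    """Euler simulation of error dynamics (7) under kinematic law (8).
    vr, wr, dt, T and the initial error are assumptions for illustration."""
    xe, ye, the = 0.5, 0.5, 0.3               # initial tracking error
    for _ in range(int(T / dt)):
        # kinematic controller (8)
        v = vr * np.cos(the) + Kx * xe
        w = wr + vr * (Ky * ye + Kth * np.sin(the))
        # error dynamics (7)
        dxe = vr * np.cos(the) - v + w * ye
        dye = vr * np.sin(the) - w * xe
        dthe = wr - w
        xe += dt * dxe
        ye += dt * dye
        the += dt * dthe
    return xe, ye, the

err = simulate_tracking()
print(err)  # all three errors decay toward 0
```

With the Lyapunov function V = (x_e^2 + y_e^2)/2 + (1 - cos θ_e)/K_y one finds
V̇ = -K_x x_e^2 - (v_r K_θ/K_y) sin^2 θ_e ≤ 0, which is why the errors decay.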
4.3. Simulation results
The reference-trajectory block consists of the CNN-based lane detector, the
Adaboost-based traffic sign recognition and the velocity reference setting.
The outputs of the CNN-based lane detector and the Adaboost-based traffic sign
recognition are sets of three values (x, y, θ) that define a trajectory; the
output of the velocity reference setting is the pair of reference velocities
(v_r, \omega_r).
The reference values x_r, y_r are calculated as follows:

x_r = v_{rx}\,\Delta t, \qquad y_r = v_{ry}\,\Delta t

where \Delta t is the processing time of each frame of the CNN-based lane
detector, and v_{rx}, v_{ry} are the horizontal and vertical components of the
reference velocity v_r.
Figure 11. Structure of control system.
The simulation tests several cases, with robot parameters m = 0.8 kg,
r = 3 cm, d = 14 cm; kinematic controller gains K_x = 10, K_y = 10,
K_\theta = 10; PD controller K_p = 100, T_d = 1.
In the case where the trajectory is a line: x(t) = t, y(t) = t, \theta(t) = 0.
Figure 12. Difference between actual and reference trajectory.
In the case where the trajectory is a circle: x = 5\cos(t), y = 5\sin(t).
Figure 13. Circle trajectory tracking.
The simulation results (figures 12 and 13) show that the kinematic and dynamic
controllers guarantee stability and good real-time performance with
high-accuracy trajectory tracking. The error reaches 0 after a short time
(t = 1.5 s), with a fast response.
V. CONCLUSION
This paper proposed a control architecture for a self-propelled autonomous car
with the missions of path planning, compliance with traffic signs and
trajectory tracking. From the experiments presented, we can conclude that the
proposed CNN model guarantees high precision (at least 97.91% per class) and
real-time performance for the autonomous car; its simple architecture, using
convolutional layers to extract features, shortens the training time, which is
why CNN models have become popular in image processing. The Adaboost algorithm
implemented for traffic sign recognition guarantees high accuracy and is easy
to program on an embedded computer. Moreover, using two controllers, kinematic
and dynamic (PD), to track the trajectory ensures high precision and a fast
response (t = 1.5 s). The proposed model was designed and tested with good
results, showing that it is efficient for an autonomous car.
REFERENCES
[1] Ngô Mạnh Tiến, Phan Xuân Minh, Trần Đức Hiếu, Nguyễn Doãn Phước, Bùi
Thu Hà, Hà Thị Kim Duyên, "Tracking Control for Mobile Robots with
Uncertain Parameters and Unstructured Dynamics Based on Model Reference
Adaptive Control", The 3rd Vietnam Conference on Control and Automation
VCCA2013, ISBN 978-604-911-517-2, pp. 548-555, 11/2013.
[2] Tien-Ngo Manh, Minh-Phan Xuan, Phuoc-Nguyen Doan, Thang-Phan Quoc,
"Tracking Control for Mobile Robot with Uncertain Parameters Based on
Model Reference Adaptive Control", International Conference on Control,
Automation and Information Sciences ICCAIS2013, IEEE catalog number
CFP1226S-CDR, ISBN 978-1-4673-0811-3, pp. 18-23, 11/2013.
[3] Y. Kanayama, Y. Kimura, F. Miyazaki, T. Noguchi, “A stable tracking
control scheme for an autonomous mobile robot”, proc. of IEEE Int. Conf. on
Robotics and Automation, pp. 384-389, (1990).
[4] R. Fierro, L. Lewis, “Control of a nonholonomic mobile robot: backstepping
kinematics into dynamics”, proc. of the 34th IEEE Conf. on Decision &
Control, New Orleans, pp. 3805-3810, (1995).
[5] L. Fridman. End-to-End Learning from Tesla Autopilot Driving Data.
https://github.com/lexfridman/deeptesla.
[6] Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and
time-series”, In M. A. Arbib, editor, The Handbook of Brain Theory and
Neural Networks. MIT Press, 1995.
[7] R. Lienhart J. Maydt, “An extended set of Haar features for rapid object
detection”, IEEE Image Processing Proceedings 2002 International
Conference on. I-900 - I-903 vol.1. 2002.
[8] Paul Viola, Michael Jones, “Fast and Robust Classification using Asymmetric
AdaBoost and a Detector Cascade”, Neural Information Processing Systems
14, December 2001.
[9] Yoav Freund, Robert E. Schapire, “A Short Introduction to Boosting”, Journal
of Japanese Society for Artificial Intelligence, 14(5):771-780, September,
1999.
[10] German Traffic Sign Recognition Benchmark.
[11] P. Viola and M. Jones, “Robust Real-Time Face Detection”, International
Journal of Computer Vision, vol. 52, no. 2, pp. 137-154, 2004.
ABSTRACT
NGHIÊN CỨU, PHÁT TRIỂN ỨNG DỤNG TRÍ TUỆ NHÂN TẠO VÀO
NHẬN DẠNG BIỂN BÁO, ĐƯỜNG ĐI CHO ĐIỀU KHIỂN BÁM QUỸ ĐẠO
VÀ DẪN ĐƯỜNG CHO XE TỰ HÀNH
This paper presents the construction and development of a camera-equipped
autonomous vehicle system with the missions of lane recognition to determine
the trajectory, traffic sign recognition, and automatic tracking of the set
trajectory. The paper studies a CNN for the lane-recognition task and Adaboost
for the traffic-sign-recognition task, and integrates these algorithms on an
embedded platform to test the autonomous vehicle system. The results show that
the system operates while meeting the preset quality criteria and point toward
applications in the control of autonomous vehicles.
Keywords: Autonomous Car, CNN, Adaboost, PID, Robot Mobile Tracking, Computer Vision.
Received 23rd July 2018
Revised 18th October 2018
Accepted 27th October 2018
Author affiliations
1Institute of Physics, VAST;
2Hanoi University of Science and Technology;
3Hanoi University of Industry.
*Email: ha.duyen@haui.edu.vn.