Project Report
MONOSCOPIC VISION BASED AUTONOMOUS GROUND VEHICLE Submitted In Partial Fulfilment Of The Degree Of
Bachelor Of Technology by
Abid E. H, Y3.026; Bibin Mathews, Y3.090; Eohan George, Y3.146; Feroz M Basheer, Y3.176; Sri Raj Paul, Y3.181
Under the guidance of
Mr. Saidalavi Kaladi
2007 Department of Computer Engineering
National Institute of Technology, Calicut
National Institute of Technology, Calicut Department of Computer Engineering Certified that this Project entitled
MONOSCOPIC VISION BASED AUTONOMOUS GROUND VEHICLE is a bonafide work carried out by
Abid E. H, Y3.026; Bibin Mathews, Y3.090; Eohan George, Y3.146; Feroz M Basheer, Y3.176; Sri Raj Paul, Y3.181
In Partial Fulfilment Of The Degree Of
Bachelor Of Technology under our guidance
Mr. Saidalavi Kaladi
Faculty Dept. of Computer Engineering
Dr. M P Sebastian
Professor and Head Dept. of Computer Engineering
Acknowledgement Learning is always a great experience. But it becomes enjoyable only if help is at an arm's length. We use this space to express our heartfelt gratitude to all those who have helped us through this endeavour. We are immensely grateful to Mr. Saidalavi Kaladi, our guide for the project, who was with us at every point with his valuable advice and support. We are much indebted to Dr. M.P. Sebastian, Head, Department of Computer Science and Engineering, for all the help given to us all through the project work. We are also thankful to Mrs. Subasree M of the Department of Computer Science and Engineering for providing us with all the necessary equipment. We are thankful to Mrs. Sathi Devi and Mr. Venu of the Department of Electronics Engineering for all the help provided. We also thank Mr. Ramachandran and Mr. George Varghese of the Department of Mechanical Engineering. This project would not have been possible without the help from a few people from outside the college, especially Mr. Madhavan from Kunnamangalam and Mr. Aandi from M/s Swagat Engineering Works. We also pen down our special word of thanks to our batch mates Mr. M. Prithvi, Mr. Abhilash M., and Mr. Jithin Benjamin of Mechanical Engineering for helping us with the fabrication of the vehicle, and our classmate Mr. Sreedal Menon for helping us in developing the image processing module.
Abid E. H, Y3.026; Bibin Mathews, Y3.090; Eohan George, Y3.146; Feroz M Basheer, Y3.176; Sri Raj Paul, Y3.181
Abstract The objective of this project is to design and implement a prototype of an autonomously guided vehicle with the ability to detect the road and negotiate simple curves, based on monoscopic vision. To achieve this goal, concepts from different disciplines of engineering are integrated. An efficient system was created to capture the images of the path to traverse, process the captured images, detect the path, control the 'drive-by-wire' modules and interface the software and hardware modules with the mechanical parts of the vehicle. Machine learning capabilities based on artificial neural networks enable smooth cruise control.
Contents

1 Introduction
  1.1 Motivation
  1.2 Problem Definition
2 Design
  2.1 Machine Vision
  2.2 Image Processing
  2.3 Path Planning
  2.4 Machine Learning
  2.5 Drive-by-wire
  2.6 Vehicle
  2.7 Other Design Issues
    2.7.1 Colour based path identification
    2.7.2 Edge detection based path identification
    2.7.3 Machine learning based path identification
3 Implementation
  3.1 OpenCV
  3.2 Camera
  3.3 The computer
  3.4 Miranda
    3.4.1 Machine Vision
    3.4.2 Image Processing
    3.4.3 Path Planning
    3.4.4 Generating the drive-by-wire signal
    3.4.5 The Drive-by-wire interface
    3.4.6 Execution Sequence
    3.4.7 Code optimization
  3.5 Driving by Wire
    3.5.1 The AVR ATmega8(L) Microcontroller
    3.5.2 AVR Programmer
    3.5.3 H Bridge
    3.5.4 RS232 Serial Interface
    3.5.5 ULN2003A
    3.5.6 Servo
    3.5.7 ADC
    3.5.8 Hub Motor
    3.5.9 Steering
    3.5.10 Braking Mechanism
    3.5.11 PCB
    3.5.12 Microcontroller Programming
4 Other Work
  4.1 The Project blog
5 Future work
6 Conclusion
7 Appendix
References
Chapter 1 Introduction
Automation has always been the watchword of the 20th century. Beginning with the production of Ford's Model T, automation systems have found their way into almost all fields of industrial importance. More importantly, automation has reduced many everyday human chores to simple tasks. Automated transport systems will be the next important step of this ever improving technology. Commercially viable prototypes of automated vehicles are the need of the hour. Guidance systems in the research phase today shall soon be converted into commercial products. The fact that every automated vehicle guidance system shall be a combination of theory and practices from different engineering disciplines makes it a realm of uniqueness.
1.1 Motivation Driverless vehicles would effectively eliminate nearly all hazards associated with driving as well as driver fatalities and injuries. Having an automated vehicle would be a great convenience:
• Time spent commuting could be used for work, leisure, or rest.
• Parking in difficult areas becomes less of a concern, as the car can park itself away from a busy airport, for example, and come back when called on a cell phone.
• Taxiing children to school, activities and friends would become solely a matter of granting permission for the car to handle the child's request.
• Allowing the visually (and otherwise) impaired to travel independently.

A driverless vehicle would also be a boon to economic efficiency, as cars can be made lighter and more space efficient in the absence of safety technologies rendered redundant by computerized driving. The technology would also make transportation more efficient and reliable: autonomous or remote-controlled delivery trucks could be dispatched around the clock to pick up and deliver goods. Moreover, driverless cars would reduce traffic congestion by allowing cars to travel faster and closer together.
1.2 Problem Definition The project aims to design and implement a prototype of an autonomously guided vehicle with the ability to detect the road, avoid obstacles and negotiate simple curves. The project is of an interdisciplinary nature and draws concepts from the computer science, electronics and mechanical fields so as to provide an elegant autonomous vehicle control system. The phases involve:
• Capturing the images of the path to traverse
• Processing the captured images
• Path and obstacle detection
• Creating algorithms based on neural networks for control and machine learning
• Controlling the 'drive-by-wire' modules
• Interfacing the software and hardware modules with the mechanical parts of the vehicle
Chapter 2 Design
The literature survey on autonomous vehicles, road detection techniques, path planning algorithms and machine learning methods provided a good insight into the theoretical issues in the topic. Even though practical implementations of autonomous road vehicles were rare, a lot of work has gone into the individual techniques. Based on the literature, the AGV was designed around the following phases:

• Machine Vision
• Image Processing
• Path Planning
• Machine Learning
• Drive-by-wire
2.1 Machine Vision
The machine vision module interfaces the software with the camera, the 'eye' of the AGV. A camera with USB streaming facility streams the images of the path before the vehicle. The module converts the stream into a sequence of image frames and passes them to the image processing module.
2.2 Image Processing
The aim of the image processing module is to identify the traversable path in front of the vehicle. The module marks the possible path in some way and supplies the data to the path planning module. In some implementations the aim of the image processing module is just to do some pre-processing of the image frames for further path planning, rather than path identification.
2.3 Path Planning
This module finds out the required steering signals based on the output of the image processing module. Path planning uses certain characteristics of the image frame set or some parameters calculated by the previous phase. The signals are then passed on to the drive-by-wire interface through the machine learning phase.
2.4 Machine Learning
The difficulty in theoretically chalking down the mathematical formulations in path planning and the corresponding signal generation called for developing a module which provides machine learning capabilities. Neural networks were the main focus. The module intervenes in the path planning phase and helps in generating intelligent decisions based on previous training.
2.5 Drive-by-wire
This module forms the interface between the computer and the vehicle. It consists of a piece of code which communicates with the serial port of the computer and an array of electronic circuits which interpret the signals from the port and make the vehicle react to the instructions.
2.6 Vehicle
The following points were kept in mind while designing the vehicle:
• The size of the prototype has to be big enough to simulate real life vehicles.
• The speed has to be high enough to make sure that it is a viable transportation medium in the real world, and low enough to avoid accidents.
• Manual override options were put in place in case of emergencies.
2.7 Other Design Issues
The image processing, path planning and machine learning phases were designed to be flexible enough to test various algorithms and strategies. An important design aspect is the availability of three separate algorithms for road identification and path planning. The algorithms can be loosely classified as those based on colour, edge detection and complete machine learning.
2.7.1 Colour based path identification This strategy uses the ideas of colour based road detection and thresholding. It tries to find the horizon and then the traversable path below it.
2.7.2 Edge detection based path identification This strategy tries to find the path by first smoothening the image and then finding the edges. The processing is restricted to a region which forms the most crucial part of the image frame.
2.7.3 Machine learning based path identification This approach was first used successfully in [16]. It relies fully on the power of machine learning, especially neural networks. Several improvements and strategies are suggested in [13],[14],[15],[17],[18] and [19]. Any of the three strategies can be used in the processing stage. In fact, the appropriate strategy depends on the road conditions. The software is designed to be flexible enough to switch between the various algorithms. This also allows comparative studies to be conducted on the efficiency of the algorithms in various road conditions. As a whole, the design and implementation of the AGV was influenced by a number of both theoretical and practical attempts in this regard, especially [1],[3],[4],[8],[10] and [16].
Chapter 3 Implementation
3.1 OpenCV The machine vision, path planning and machine learning modules of the AGV software (Miranda) are mostly based on Intel Corporation's Open Source Computer Vision Library (OpenCV) [20]. It is aimed at real time computer vision, Human-Computer Interaction (HCI), object identification, segmentation and recognition, motion tracking and mobile robotics. The OpenCV modules used in the implementation are:
• HighGUI: It provides the necessary graphical user interface for OpenCV.
• Cvcam: CvCam is a module for processing video streams from digital video cameras. It provides an Application Programming Interface (API) for reading and controlling the video stream, processing its frames and rendering the results.
• CXCORE: This provides the basic functionalities of the OpenCV library. CXCORE deals with special data structures like those for storing images in different formats, static and dynamic memory allocation, dynamic structures, drawing functions, data storage, error handling etc.
• CV: CV is the Computer Vision module of OpenCV. It provides classes and functions for image processing, structural analysis, motion analysis, object tracking, pattern recognition etc.
• Machine Learning: The Machine Learning Library (MLL) is a set of classes and functions for statistical classification, regression and clustering of data. Project AGV used this module for Artificial Neural Networks (ANN) in path planning and cruising.
3.2 Camera The camera that forms the eye of the AGV is a Sony DCR-PC109E camera fixed on a special stand in the vehicle. The camera communicates with the computer through a USB port. It provides 360x240 streaming video. The streamed video is captured by the AGV software running on the computer.
Main Specifications
• Product Description: Sony Handycam DCR PC109E - camcorder - Mini DV
• Product Type: Camcorder
• Dimensions (WxDxH): 5 cm x 9.7 cm x 9.8 cm
• Weight: 0.5 kg
• Webcam Capability: Yes
• Media Type: Mini DV
• Analogue Video Format: PAL
• Sensor Resolution: 1.0 Mpix
• Effective Sensor Resolution: Video: 690 Kpix
• Shooting Modes: Digital photo mode, frame movie mode
• Lens Aperture: F/1.8-2.3
• Focus Adjustment: Automatic, manual
• Focal Length: 3.2 mm - 32 mm
• Optical Zoom: 10 x
• Digital Zoom: 120 x
• Image Stabiliser: Electronic (Super Steady Shot)
3.3 The computer The Miranda software runs on a Toshiba Portege A100 laptop, the specification of which is as follows:
System: Intel Pentium M Processor 1400 MHz, 512 MB of RAM
OS: Microsoft Windows XP Professional Version 2002, Service Pack 2
3.4 Miranda
The software called Miranda running on the laptop forms the brain of the AGV. Miranda consists of the modules for machine vision, image processing, path planning, machine learning and the interface to the drive-by-wire hardware. Miranda is a multithreaded application. It consists of three threads: the first runs the main algorithms, the second runs the GUI, and the third is the serial communicator thread which ensures proper communication through the serial interface.
3.4.1 Machine Vision OpenCV provides a powerful API to the camera device. The cvcam module creates a capture object which can be queried for image frames. Optionally, the software can take input from a video stored on the computer. The module then extracts data from the stream frame by frame and passes it to the image processing module.
3.4.2 Image Processing The image processing module of Miranda has three separate algorithms as of now. The first two treat image processing and path planning as two distinct phases, while in the third, processing has less importance and the neural network does most of the job.
Algorithm 1: Based on colour detection
The algorithm is as follows:
1. Colour detection and thresholding
2. Smoothening
3. Horizon detection
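As an illustration of the colour-threshold and horizon steps, here is a minimal standalone sketch (plain C++, no OpenCV). The grey-pixel thresholds and the majority-row horizon rule are illustrative assumptions, not the values used in Miranda.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Hypothetical road test: a pixel counts as "road" when its channels are
// close to each other (grey asphalt) and of moderate brightness.
bool isRoadPixel(const RGB& p) {
    int maxc = std::max({p.r, p.g, p.b});
    int minc = std::min({p.r, p.g, p.b});
    return (maxc - minc) < 30 && maxc > 40 && maxc < 200;
}

// Horizon row: the first row from the top whose road-pixel count exceeds
// half the width; everything above it would be ignored by later stages.
int findHorizonRow(const std::vector<std::vector<RGB>>& img) {
    for (size_t y = 0; y < img.size(); ++y) {
        size_t count = 0;
        for (const RGB& p : img[y])
            if (isRoadPixel(p)) ++count;
        if (count * 2 > img[y].size()) return (int)y;
    }
    return (int)img.size();  // no road-dominated row found
}
```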
Algorithm 2: Based on edge detection The algorithm is as follows:
1. Optionally converting the image to grayscale
2. Smoothening the image using Gaussian blur / bilateral filter
3. Applying the Sobel transformation for edge detection
4. Computing the concentration of edges by using a sliding block of pixels
5. Identifying the traversable path using the concentration of edges inside the crucial frame set
The OpenCV CV module provides APIs for the required operators: Gaussian blur, bilateral filter and the Sobel operator.
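The Sobel and edge-concentration steps can be sketched without OpenCV as follows. The 3x3 kernels are the standard Sobel operator; the block size and threshold are left to the caller and are illustrative assumptions.

```cpp
#include <cstdlib>
#include <vector>

using Gray = std::vector<std::vector<int>>;  // row-major grayscale image

// 3x3 Sobel kernels applied at (x, y); caller keeps (x, y) off the border.
// |G| is approximated by |Gx| + |Gy|, a common shortcut.
int sobelMagnitude(const Gray& img, size_t x, size_t y) {
    int gx = -img[y-1][x-1] + img[y-1][x+1]
             - 2*img[y][x-1] + 2*img[y][x+1]
             - img[y+1][x-1] + img[y+1][x+1];
    int gy = -img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1]
             + img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1];
    return std::abs(gx) + std::abs(gy);
}

// Edge "concentration": how many pixels inside a block exceed an edge
// threshold. A low concentration suggests smooth, traversable road.
int edgeConcentration(const Gray& img, size_t x0, size_t y0,
                      size_t w, size_t h, int threshold) {
    int edges = 0;
    for (size_t y = y0; y < y0 + h; ++y)
        for (size_t x = x0; x < x0 + w; ++x)
            if (sobelMagnitude(img, x, y) > threshold) ++edges;
    return edges;
}
```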
Algorithm 3: Based on a neural network retina. This method uses a neural network retina which gets its input from the image. The image is divided into blocks, averaged and passed on to the first layer of the neural network. This layer is connected to a hidden layer of nodes, the number of which is usually less than ten. This layer communicates with the output layer, which gives the result. The algorithm needs to be trained beforehand.
Training:
1. Divide the image into blocks.
2. Average out each block.
3. Give the block values to the neural retina.
4. Calculate the required output at the output layer.
5. Train the network with the required output.
It is important that the algorithm is trained using a balanced sample set to avoid errors.
Driving:
1. Divide the image into blocks.
2. Average out each block.
3. Input the block values to the neural retina.
4. Get the output from the neural output layer.
The implemented neural net has 864 nodes in the first layer (a 360x240 stream with 10x10 pixel blocks), 5 nodes in the hidden layer and 30 nodes in the output layer. The neural net uses a backpropagation based learning strategy. The OpenCV MLL module provides APIs for the required neural networking features. Another strategy, which uses the K-means clustering algorithm, is in the planning phase and will be implemented if time permits.
3.4.3 Path Planning Various strategies were tested for the path planning mechanism of the AGV. Most of the literature uses theoretical formulations based on advanced mathematics; [2] gives a detailed mathematical treatment of the subject. These had to be disregarded because of the mathematical complexity involved and the lack of testing facilities until the vehicle was fabricated. The design aim was to create a simple but elegant algorithm which makes use of all the data available from the processed frame.

Midpoint algorithms were considered and tested using road videos. These worked fine in road segments having edges of high contrast, but were found to be infeasible on most real roads, where sharp edges are rare. Another point of concern was the sudden jumps of the midpoint line in certain frames, either due to the road or due to camera movement. This led to the conclusion that any algorithm which focuses on a limited part of the image for path planning would not be elegant.

To generate an elegant path without much mathematical complexity, we came up with a simple strategy. The processed image frame is divided into two equal parts in the vertical direction. The probable road pixels in each half are counted, and the difference is taken as a measure of the amount of turn required to keep the vehicle on its course. The vehicle is assumed to start its run at the middle of the road. On testing, this gave good results except for certain cases where the number of road pixels in the image was very large or very small. To counter this, the difference was then expressed as a percentage of the road pixels identified. Further, the algorithm was modified to disregard frames with too few road pixels identified as inconsistent, and to continue to use the last consistent result for some time.

Further changes in the path planning code included averaging the road pixels over a moving window of frames. The camera can provide 25 frames per second, and hence averaging over a few frames provides a more accurate result without sudden jumps in the trajectory. Thus the path traversal becomes smooth and accurate. Other possible path planning mechanisms include generating a trapezium to represent the traversable road and then steering towards the middle of its top edge.
Averaging out could make this smooth and elegant. In general, this phase is one of the most computationally expensive phases in steering the automobile and hence must be implemented using the most optimized code.
3.4.4 Generating the drive-by-wire signal This problem is inherently complex due to the number of changes the signal undergoes in the electronic circuitry involved. The complexity was eliminated to a large extent by designing an algorithm which included a machine learning mechanism. Artificial neural networks were used again here to tackle the difficulty. The design of the network is similar to the one used in the neural network based path planning module mentioned in section 3.4.2. This version has two input nodes, for the current state of the vehicle and the percentage difference calculated as above. There are two hidden layers of five and three nodes. The output layer is a single node which outputs the required drive-by-wire signal to the serial port interface. The training of this network is also similar to the training of the net mentioned in section 3.4.2.
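A forward pass through a net of this shape (2 inputs, hidden layers of 5 and 3 nodes, one output) might look like the sketch below. The weight layout and the tanh activation are assumptions for illustration; the real network is built and trained with the OpenCV MLL backpropagation routines.

```cpp
#include <cmath>
#include <vector>

// One layer: [node][input weight], with the last entry of each node's
// weight vector acting as the bias.
using Layer = std::vector<std::vector<double>>;

std::vector<double> forward(const Layer& layer, const std::vector<double>& in) {
    std::vector<double> out;
    for (const auto& w : layer) {
        double sum = w.back();                      // bias
        for (size_t i = 0; i < in.size(); ++i) sum += w[i] * in[i];
        out.push_back(std::tanh(sum));              // sigmoid-like activation
    }
    return out;
}

// Inputs: the current vehicle state and the percentage difference from
// path planning; output: the single drive-by-wire signal value.
double driveSignal(const std::vector<Layer>& net, double state, double percentDiff) {
    std::vector<double> v = {state, percentDiff};
    for (const auto& layer : net) v = forward(layer, v);
    return v[0];
}
```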
3.4.5 The Drive-by-wire interface The drive-by-wire interface of Miranda consists of a piece of code which communicates with a serial port of the computer. The design and implementation of this module is influenced by a number of serial port communication applications, especially [21]. The current version uses the following settings:
Baud rate: 9600 bits per second
Data bits: 7
Stop bits: 1
Parity: None
A buffer of the necessary size is used in the communication. The settings can be changed dynamically through the GUI of Miranda.
3.4.6 Execution Sequence As mentioned earlier, Miranda consists of three threads. The GUI thread starts up first, creates some of the common data structures and creates the algorithm thread. The algorithm thread prints out the welcome message, checks for the training databases for the neural networks and trains them, and initiates the capture from the camera or the video file as specified by a flag. It then tries to set up communication with the serial port. On success, it starts running the main loop which captures the frames, processes them, finds road pixels, plans the path and controls the drive-by-wire mechanism. When the main thread shuts down, it disconnects the software from the camera and the serial port and deallocates the resources used.
3.4.7 Code optimization The most crucial implementation issue was catering to the near real time video processing requirements of the application. Several rounds of optimization were done on Miranda, particularly in parts which loop through the entire frame. OpenCV provided highly optimized code for most of the image processing requirements. Skipping pixels or even blocks of pixels was done in some places where the result was sure not to get affected. Reuse of allocated memory was done wherever possible.
3.5 Driving by Wire The drive-by-wire module is the most vital part of the AGV. It integrates a lot of electronic and mechanical components which act in synergy. In this section we will have a close examination of the individual components which constitute the drive-by-wire module.
3.5.1 The AVR ATmega8(L) Microcontroller For controlling the drive by wire, the AVR ATmega8(L) was used. The ATmega8 is a low-power CMOS 8-bit microcontroller based on the AVR RISC architecture. By executing powerful instructions in a single clock cycle, the ATmega8 achieves throughputs approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption versus processing speed.
3.5.2 AVR Programmer
The above circuit consists of a parallel port interface with the computer, a high speed CMOS quad bus buffer and an interface with the microcontroller. MOSI, MISO and SCK are the three pins of the microcontroller to which the AVR programmer is connected. The programmer uses the Serial Peripheral Interface (SPI) to communicate with the microcontroller. The SPI is used primarily for synchronous serial communication between a host processor and peripherals. The various pins used (with respect to the microcontroller) are:
MOSI: Master Output Slave Input
MISO: Master Input Slave Output
SCK: Serial Clock
Ground
The master device, which is the host computer, drives the serial clock.
3.5.3 H Bridge
H Bridge is a circuit which is used to make the motor rotate in either direction. An H Bridge has two sets of inputs: one powers the motor and the other controls the direction of rotation. The control signal to the H bridge is provided by the microcontroller through the ULN2003A. The control input can be 00, 01, 10 or 11. When the motor is not in use the control signal is 00. When a control input of 01 is given, Q1 and Q4 will be in the on state and a positive voltage will be applied to the motor. When 10 is given as the control input, Q2 and Q3 will be in the on state and a voltage in the direction opposite to that in the 01 case will be applied. A solid state H Bridge is typically constructed using reverse polarity devices (a PNP BJT or P-channel MOSFET connected to the high voltage bus and an NPN BJT or N-channel MOSFET connected to the low voltage bus).
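The control logic reduces to a small truth table. The sketch below treats the 11 input as "off"; that is an assumption, since the report does not describe 11, and in many bridge designs applying it would switch on both transistors of one leg (a short), so drivers avoid it.

```cpp
// Inputs: the two control bits from the microcontroller (via ULN2003A).
// Return value: the sign of the voltage the motor sees.
//   00 -> 0 (idle), 01 -> +1 (Q1/Q4 on), 10 -> -1 (Q2/Q3 on),
//   11 -> 0 (treated as forbidden / off here).
int motorDirection(bool a, bool b) {
    if (a == b) return 0;   // 00 idle, 11 never applied
    return b ? +1 : -1;     // 01 forward, 10 reverse
}
```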
3.5.4 RS232 Serial Interface
The AVR RS232 interface is used for bidirectional data transfer between the computer and the microcontroller during operation of the AGV. The IC MAX232 housed in the circuit is used for protocol conversion between the RS-232 of the computer and the UART (Universal Asynchronous Receiver Transmitter) of the microcontroller. The MAX232 contains a charge pump that does level shifting from 0-5 volts to -15 to +15 volts.
3.5.5 ULN2003A
The motor for steering control is driven at 12 volts. Electric motors operating at such voltages produce large voltage spikes as back emf. The ULN2003A is used to protect the microcontroller from the back emf induced by the motor. It protects the microcontroller by making sure that the voltage in the microcontroller circuit does not exceed 5 volts despite this high back emf. This is accomplished by isolating the motor power and the microcontroller ports.
3.5.6 Servo
Servo stands for servomechanism. Its position is controlled using PWM (Pulse Width Modulation) signals. In the AGV project the servo was used to rotate and control the accelerator. The servo has a closed feedback mechanism which helps turn the accelerator to a particular position, thus powering the vehicle with a certain amount of power. The position of the servo depends on the width of the pulse, which is produced by the microcontroller. The frequency of this signal is about 50 Hertz. If the duration of the pulse is 1 millisecond the servo goes fully to the left, and if the duration is 2 milliseconds the servo goes fully to the right. A pulse of 1.5 milliseconds causes the servo to go to the middle. For intermediate durations between 1 and 2 milliseconds, the servo goes to the corresponding intermediate positions. Different servos have different ranges of rotation, and left, right and middle correspond to different angles. A full left need not be ninety degrees to the left and a full right need not be ninety degrees to the right. In order to achieve the full range of motion it may be necessary to send the servo pulses that last longer than 2 milliseconds or shorter than 1 millisecond.
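The pulse-width mapping described above can be captured in one function. The [-1, +1] position convention and the hard clamp are assumptions for illustration; as noted, real servos may need endpoints outside the nominal 1000-2000 microsecond range.

```cpp
// Map a position in [-1.0, +1.0] (full left .. full right) to the servo
// pulse width in microseconds: -1 -> 1000 us, 0 -> 1500 us, +1 -> 2000 us,
// sent at a ~50 Hz frame rate by the microcontroller.
int servoPulseMicros(double position) {
    if (position < -1.0) position = -1.0;   // clamp to nominal range
    if (position >  1.0) position =  1.0;
    return (int)(1500.0 + 500.0 * position);
}
```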
3.5.7 ADC The ATmega8 features a 10-bit ADC. The ADC is connected to an 8-channel analog multiplexer which allows eight single-ended voltage inputs constructed from the pins of Port C. The single-ended voltage inputs refer to 0V (GND). The ADC converts an analog input voltage to a 10-bit digital value through successive approximation. The minimum value represents GND and the maximum value represents the voltage on the AREF pin minus 1 LSB.
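The conversion implied by this description can be sketched as follows; the saturation behaviour is an assumption consistent with the "AREF minus 1 LSB" maximum stated above.

```cpp
// 10-bit single-ended conversion: code = Vin * 1024 / Vref, with GND
// mapping to 0 and the result saturating at 0x3FF (Vref minus one LSB).
int adcCode(double vin, double vref) {
    if (vin <= 0.0) return 0;
    int code = (int)(vin * 1024.0 / vref);
    return code > 1023 ? 1023 : code;
}
```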
3.5.8 Hub Motor
Two brushless hub motors are used to power the vehicle. They are powered by a 24V source and consume 300W each. The motor controller controls the motor through electronic switching. The inputs to the controller come from the accelerator and the brake. The motor is connected to a bicycle tyre.
3.5.9 Steering
The steering is connected to a wiper motor through a gear of ratio 1:2.35. The motor is controlled using the H-Bridge.
3.5.10 Braking Mechanism
The braking system used is the common bicycle brake. The force required by the brakes is supplied by a DC motor connected to a 12V source through a Double Pole Double Throw relay. The relay in turn is controlled by the microcontroller. The microcontroller is connected to the relay through the ULN2003A to protect it from back emf and other disturbances.
3.5.11 PCB To implement the PCB we made use of the layout editor Eagle. The circuit is designed in Eagle in the .sch and .brd formats. It is then exported to Gerber format (.otl, .drl, .drd, .sol, .bot) and imported into the CircuitCAM software, which creates the necessary insulating layers to be processed by the PCB making machine. CircuitCAM creates a .lmd file which is imported into the BoardMaster software, which controls the machine movements.
3.5.12 Microcontroller Programming Programs were written for the following:
• a program for serial port and microcontroller communication
• a program to control the servo
• a program to get input from the ADC
Chapter 4 Other Work 4.1 The Project blog A detailed web log ( http://projectagv.blogspot.com ) of all the activities undertaken as part of the project was maintained. The blog contains links to various web resources related to the project, including links to the websites containing the user manuals and documentation of the software which was used. Links to the pages maintained by the manufacturers of the software and hardware components are also provided.
Chapter 5 Future work There is scope for the following improvements in the future. The AGV uses a laptop computer housed inside it to run the navigation software. The laptop can be done away with by porting the software to an embedded platform, which would also make the AGV cheaper and more commercially affordable. Another capability which can be added is a more elegant obstacle detection mechanism; as of now, obstacle detection is limited. Another area of improvement is the vehicle itself. At present, a prototype vehicle is being used as the AGV. In future, the software and hardware components used in the present vehicle can be made in such a way that they can be installed on any vehicle to make it an AGV.
Chapter 6 Conclusion
Autonomous ground vehicles may be the key to the transportation systems of the future. Work is going on all over the world in the fields of path identification, planning, control techniques and machine learning. Even though ours was a small attempt, within a restricted time frame, towards the development of an autonomous vehicle, it gives us pleasure to think that we addressed and successfully tackled many of the design and implementation issues involved. This endeavour gave us an insight into the field and familiarized us with a number of related topics. We worked on an array of technologies and got acquainted with many interesting algorithms and tools. There were many instances of difficulty, especially in the initial design stages, where we had to chalk out the entire working strategy and decide upon the tools. These issues were successfully tackled as time progressed. The whole project was a great experience for us and took us into fields where we had never been before. The prototype we have made can be used as a platform for further development in this direction, towards a more intelligent ground vehicle.
Chapter 7 Appendix
Schematic diagram and board layout of the AVR programmer, designed in the Eagle Layout Editor. Similar PCB designs were done for the RS232 serial interface and the H-bridge.
Bibliography
[1] Ola Ramstrom and Henrik Christensen, "A Method for Following Unmarked Roads".
[2] Ankur Naik, "Arc Path Collision Avoidance Algorithm for Autonomous Ground Vehicles", Thesis, Virginia Polytechnic Institute and State University, 2005.
[3] Yao-Yi Chiang, Craig A. Knoblock, and Ching-Chien Chen, "Automatic Extraction of Road Intersections from Raster Maps".
[4] D. Coombs, K. Murphy, A. Lacaze, S. Legowik, "Driving Autonomously Off-road up to 35 km/h".
[5] Shivang Patel, "Monocular Color Based Road Following".
[6] Marin B. Kobilarov and Gaurav S. Sukhatme, "Near Time-optimal Constrained Trajectory Planning on Outdoor Terrain".
[7] Dong Hun Shin and Sanjiv Singh, "Path generation for robot vehicles using composite clothoid segments".
[8] Hendrik Dahlkamp, Adrian Kaehler, David Stavens, Sebastian Thrun, and Gary Bradski, "Self-supervised Monocular Road Detection in Desert Terrain".
[9] Larissa Labakhua, "Smooth trajectory planning for fully automated passenger vehicles: Spline and Clothoid based Methods and its Simulation".
[10] Massimo Bertozzi, Alberto Broggi, Alessandra Fascioli, Corrado Guarino Lo Bianco and Aurelio Piazzi, "The ARGO Autonomous Vehicle's Vision and Control Systems", International Journal of Intelligent Control and Systems, vol. 3, no. 4 (1999), pp. 409-441.
[11] Narayan Srinivasa, "Vision-based Vehicle Detection and Tracking Method for Forward Collision Warning in Automobiles".
[12] Massimo Bertozzi and Alberto Broggi, "Vision based vehicle guidance".
[13] Parag H. Batavia, Dean A. Pomerleau, Charles E. Thorpe, "Applying Advanced Learning Algorithms to ALVINN".
[14] Shumeet Baluja, "Evolution of an artificial neural network based autonomous land vehicle controller".
[15] Todd M. Jochem, Dean A. Pomerleau, Charles E. Thorpe, "MANIAC: A Next Generation Neurally Based Autonomous Road Follower".
[16] Dean A. Pomerleau, "Neural Network Vision for Robot Driving".
[17] Rahul Sukthankar, Dean Pomerleau and Charles Thorpe, "Panacea: An Active Sensor Controller for the ALVINN Autonomous Driving System".
[18] Todd M. Jochem, Dean A. Pomerleau, Charles E. Thorpe, "Vision based neural network road and intersection detection and traversal".
[19] Todd M. Jochem, Dean A. Pomerleau, Charles E. Thorpe, "Vision guided lane transition".
[20] "OpenCV page on Intel website"
http://www.intel.com/technology/computing/opencv
[21] "SerialComm Project page on Sourceforge"
http://sourceforge.net/projects/serialcomm/
[22] "The home page of the DARPA Grand Challenge", http://www.darpa.mil/grandchallenge
[23] Thomas Hessburg and Masayoshi Tomizuka, "Fuzzy Logic Control for Lateral Vehicle Guidance", Second IEEE Conference on Control Applications, September 13-16, 1993, Vancouver, B.C.
[24] "Control AVR 8 bit Timer-Counter2 using WINAVR"
http://winavr.scienceprog.com/avr-gcc-tutorial/control-avr-8-bit-timer-counter2-usi html
[25] Programming the AVR microcontroller with GCC, libc 1.0.4
http://www.linuxfocus.org/English/November2004/article352.shtml
[26] AVR Libc Home Page
http://www.nongnu.org/avr-libc/
[27] Diodes Incorporated Technical Staff, 1N4001/L - 1N4007/L 1.0A Rectifier Data Sheet, Diodes Incorporated.
[28] Philips Semiconductors Technical Staff, 74HC/HCT241 Octal Buffer/Line Driver; 3-State, Product Specification, Philips Semiconductors, September 1993.
[29] Atmel Corporation Technical Staff, ATmega8/ATmega8L Data Sheet, Atmel Corporation, 2006.
[30] International Rectifier Technical Staff, IRF530N HEXFET Power MOSFET Data Sheet, International Rectifier, 2001.
[31] International Rectifier Technical Staff, IRF9540N HEXFET Power MOSFET Data Sheet, International Rectifier, 1998.
[32] STMicroelectronics Technical Staff, L7800 Series Positive Voltage Regulators Data Sheet, STMicroelectronics, 2004.
[33] Texas Instruments Technical Staff, MAX232, MAX232I Dual EIA-232 Drivers/Receivers Data Sheet, Texas Instruments, 1989.
[34] Texas Instruments Technical Staff, ULN2001A, ULN2002A, ULN2003A, ULN2004A, ULQ2003A, ULQ2004A High-Voltage High-Current Darlington Transistor Array Data Sheet, Texas Instruments, 1976.