SELF NAVIGATING AUTONOMOUS BOT
Major-Project Report by Arjun Surendran B080001EC
Deepak Venga B080027EC
Nandu Raj B080585EC
Priyanka G Das B080312EC
Sanjay George B080270EC
Under the guidance of

Dr. S. M. SAMEER

Submitted in Partial Fulfillment of the Requirements for the degree of
Bachelor of Technology
In
ELECTRONICS AND COMMUNICATION ENGINEERING

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
NATIONAL INSTITUTE OF TECHNOLOGY CALICUT
Kerala, India
April 2012

NATIONAL INSTITUTE OF TECHNOLOGY CALICUT
DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ENGINEERING

CERTIFICATE
This is to certify that this report titled SELF NAVIGATING AUTONOMOUS
BOT is a bona fide record of the major-project done by
Arjun Surendran    B080001EC
Deepak Venga       B080027EC
Nandu Raj          B080585EC
Priyanka G Das     B080312EC
Sanjay George      B080270EC

In partial fulfillment of the requirements for the award of Degree of Bachelor of Technology in Electronics and Communication Engineering from National Institute of Technology, Calicut.

Dr. S. M. Sameer
(Project Advisor)
Assistant Professor

Dr. P S Sathidevi
Professor & Head

April 2012
NIT Calicut


ACKNOWLEDGEMENT
We would like to thank Dr. S. M. Sameer, Assistant Professor, Department of Electronics and Communication Engineering, for his guidance and inspiration in helping us complete this project. We are also grateful to Dr. P S Sathidevi, Professor and Head, Department of Electronics and Communication Engineering, for providing us with the opportunity to work on this project and for permitting access to the required facilities. We would also like to thank the lab staff for their technical support and assistance.

Arjun Surendran
Deepak Venga
Nandu Raj
Priyanka G Das
Sanjay George


Abstract
Autonomous navigation of a mobile robot in an outdoor environment is one of the key issues in mobile robotics. The advantages of having a vehicle that can navigate without human intervention are many and varied, ranging from vehicles for use in hazardous industrial environments to battlefield surveillance vehicles and planetary rovers. Autonomous vehicles are the future, and a lot of research is going on in this field.
The whole concept of self-navigation is to develop a bot that follows the road path to reach particular GPS co-ordinates. Along its path, it works towards two objectives: firstly, to overcome different types of obstacles on the road; secondly, to find the minimum path from the source co-ordinates to the destination co-ordinates based on different routing algorithms. The idea of road following is to keep track of the road and to tackle the hindrances on it, so the bot should be able to continuously detect the road. The obstacles also have to be identified, and the course of the vehicle changed accordingly. Path minimisation is in order to reduce energy consumption: the bot must choose the shortest path from one point to another, which can be done only after training through all paths.
For the independent motion of the bot, it will have an array of sensors. The primary input will be a camera module which gives continuous frames. The secondary inputs are Sharp sensors for distance calibration and a GPS module for mapping and navigation.
The frames from the camera are processed for obstacle detection, navigating through obstacles and following the road. The report deals with the major aspects of autonomous navigation and routing algorithms. This is an attempt to design and fabricate a self-navigating autonomous bot.


Contents

Abstract

1 Introduction
  1.1 General Introduction
  1.2 Brief Literature Review
  1.3 Problem Definition
  1.4 Motivation
  1.5 Thesis Contribution
  1.6 Thesis Organization

2 Literature Survey
  2.1 Introduction
  2.2 Literature Review
    2.2.1 Global Positioning System (GPS)
    2.2.2 DGPS
    2.2.3 Edge Detection
    2.2.4 Road Tracking
    2.2.5 Routing Algorithms
    2.2.6 The Hardware
  2.3 System Modelling

3 Positioning
  3.1 Introduction
  3.2 Literature Review
    3.2.1 GPS Receiver
    3.2.2 Positioning using Sharp sensors
  3.3 Implementation and Results
    3.3.1 Current GPS location
    3.3.2 Sharp sensor Calibration

4 Road Following
  4.1 Introduction
  4.2 Literature Review
    4.2.1 Colour Filtering
    4.2.2 Morphological Transformations
    4.2.3 Canny edge detection
    4.2.4 Hough Transform
    4.2.5 Angle of Inclination
    4.2.6 Obstacle Avoidance
  4.3 Implementation and Results
    4.3.1 Edge Detection
    4.3.2 Morphological Transformations
    4.3.3 Road tracking without Hough Transform
    4.3.4 Road Tracking using Hough Transform

5 Path Optimisation
  5.1 Introduction
  5.2 Literature review
  5.3 Simulation and Results

6 Implementation of the Project
  6.1 Hardware Implementation
    6.1.1 Design of the bot
    6.1.2 Robot Peripherals
    6.1.3 Working of Hardware
  6.2 Software Implementation

7 Conclusion

Bibliography

List of Figures

2.1 Global Positioning System
2.2 Buildings obstructing LOS between satellites and GPS receiver
2.3 Sobel masks for detecting edges
2.4 Laplacian filters
2.5 Laplacian of Gaussian
2.6 Typical network with nodes and cost of paths
2.7 Flow chart of the processing algorithm

3.1 NMEA sentence
3.2 Connecting sensor to the ADC
3.3 Relation between voltage and distance for the sensor

4.1 Image subjected to dilation
4.2 Image undergoing erosion
4.3 Opening and closing operation on images
4.4 A 5 x 5 Gaussian Mask for Image smoothening
4.5 Representation of a line in image space
4.6 Family of lines passing through (x0, y0)
4.7 Points on same line
4.8 Sample input image chosen for edge detection
4.9 Edge detection using Canny algorithm
4.10 Simulation of the different morphological transformations
4.11 Input image from the camera frame
4.12 Result of road tracking algorithm applied on a frame of image
4.13 Performance evaluation of the road following algorithm showing false alarms and misses
4.14 Sample input image chosen for edge detection
4.15 Image converted to grayscale after colour filtering
4.16 Image subjected to opening
4.17 Canny edge detection of the image
4.18 Standard Hough Transform of the image
4.19 Probabilistic Hough Transform of the image
4.20 Performance of Hough Transform

5.1 Simulation of routing algorithms

6.1 Beagleboard and its usage
6.2 CAD design for the robot

List of Tables

3.1 NMEA Sentences
4.1 Morphological Transformations

Chapter 1

Introduction
1.1 General Introduction

Great interest has recently arisen in the design and development of autonomous land vehicles. An autonomous land vehicle is an unmanned vehicle, in other words a robot: it detects obstacles on the terrain and tackles them efficiently. Navigation is something which can be incorporated in an unmanned vehicle, so that the vehicle reaches a predefined destination point automatically. Autonomous navigation of a mobile robot in an outdoor environment is one of the key issues in mobile robotics. The advantages of having a vehicle that can navigate without human intervention are many and varied, ranging from vehicles for use in hazardous industrial environments to battlefield surveillance vehicles and planetary rovers. In the last decades there have been remarkable developments in the localization and mapping algorithms of navigation systems for indoor and outdoor mobile robots. In indoor cases, personal service robots perform missions such as guiding tourists in museums, cleaning rooms and nursing the elderly. In outdoor cases, mobile robots have been used for patrol, reconnaissance, surveillance, exploring planets, etc.

1.2 Brief Literature Review

The indoor environment has a variety of features, such as walls, doors and furniture, that can be used for mapping and navigation of a mobile robot. In contrast to the indoor case, it is hard to find any specific features in an outdoor environment without artificial landmarks. Fortunately, the existence of curbs on roadways is very useful for building a map and localising in an outdoor environment. The detected curb information can be used not only for map building and localization but also for navigating safely.
Many kinds of navigation algorithms for a mobile robot have been proposed. Most of the algorithms, however, are developed for autonomous navigation. Therefore, navigation algorithms for the teleoperated mobile robots that are usually used in hazardous and disaster fields are also necessary. A hybrid strategy for a reliable mobile robot consists of a mobile robot and a control station. The mobile robot sends image data to the control station; the control station receives and displays the image data, and the teleoperator then commands the mobile robot based on the image data. For the purpose of reliable outdoor navigation, the mobile robots should be able to navigate with commanded translational and rotational velocities, perceive static and varying environments, and avoid dynamic obstacles.
Also, the robot should make use of positioning data to identify its position on the ground. The Global Positioning System can be used to get the current coordinates of the machine; GPS and DGPS data can be accessed through GPS receivers. A real-life self-navigation system should also consider the reduction of path distance by shortest-path routing. In this report, a navigation algorithm for a mobile robot incorporating GPS and road detection is given. Depending on the size of obstacles, the distance between the robot and curbs obtained using a laser range finder, and the velocity and braking distance of the robot, the modes of the robot alternate between autonomous and teleoperated modes.

1.3 Problem Definition

The concept of an unmanned vehicle has always been of huge interest; such concepts are often depicted in films and magazines. The Google Car concept looks forward to the implementation of a driverless vehicle. From thoughts on these topics arose the topic of self-automated navigation. The name itself illustrates that the vehicle is unmanned, and the aim is efficient navigation of an unmanned vehicle.
The whole concept of self-navigation is to develop a bot that follows the road path to reach particular GPS co-ordinates. Along its path, it works towards two objectives: firstly, to overcome different types of obstacles on the road; secondly, to find the minimum path from the source co-ordinates to the destination co-ordinates based on different routing algorithms. The topic of self-automated navigation comprises three main stages.

1. GPS Positioning [1]: The Global Positioning System is used for tracking the vehicle's position. Estimating the position is essential, as the task is to travel from one coordinate to another.
2. Path Following: This includes obstacle detection and road following. The vehicle is supposed to keep track of the road and will have to encounter various obstacles on course, so the bot needs to be equipped with obstacle-detecting algorithms.
3. Optimising the Route: In order to make the unmanned robot efficient, the path from source to destination should be optimised.


1.4 Motivation

Autonomous navigation of a mobile robot in an outdoor environment is one of the key issues in mobile robotics. The advantages of having a vehicle that can navigate without human intervention are many and varied, ranging from vehicles for use in hazardous industrial environments to battlefield surveillance vehicles and planetary rovers. Autonomous vehicles are the future, and a lot of research is going on in this field.
The concept of an unmanned vehicle has always been of huge interest; such concepts are often depicted in films and magazines. The Google Car is a concept unmanned vehicle which can be used for public transport: it follows GPS coordinates to reach the destination, performing complicated road-tracking algorithms to map the road and minimise the path. Google expects that the increased accuracy of its automated driving system could help reduce the number of traffic-related injuries and deaths, while using energy and space on roadways more efficiently. A further motivation for the project is the increasing number of accidents due to reckless driving; an automated navigation system would address these problems in the transport sector. From thoughts on these topics arose the idea of building a small-scale autonomous navigation system which will take a vehicle from source to destination on its own.

1.5 Thesis Contribution

The whole thesis is directed at performing self-navigation in an outdoor environment. The three major steps in the process are:
1. Positioning of the vehicle by using the Global Positioning System.
2. Road Tracking for the navigation of the robot on the road, by distinguishing road from non-road and avoiding obstacles on course.
3. Path Optimisation in order to reduce the energy consumption, by finding the minimum-cost path to the destination.

1.6 Thesis Organization

In Chapter 2, the basic theoretical concepts of the project are dealt with. The chapter gives some details on GPS positioning and treats the fundamental concepts of the road-following techniques in detail. The various routing techniques that can be used for minimising the cost are also discussed.
Chapters 3, 4 and 5 deal with the three major stages of self-navigation: Positioning, Road Following and Path Optimization respectively. The algorithms used for the development of road tracking and obstacle avoidance are discussed in depth.
Chapter 6 deals with the implementation of the project. The hardware specifications and the design of the bot are depicted in detail, and the experimentation and results are discussed. Finally, the conclusion on the concept of self-navigation is given.

Chapter 2

Literature Survey
2.1 Introduction

"Mobile Robot Navigation" covers a large spectrum of different systems, requirements and solutions. The concept of autonomous navigation is a wide topic. The various steps involved in navigation are road tracking, obstacle avoidance, etc. The project dealt with here involves positioning and path optimisation along with the navigation concept. This chapter covers the basic theoretical facts regarding the different positioning systems, image processing techniques and path-optimising algorithms.

2.2 Literature Review

2.2.1 Global Positioning System (GPS)

Figure 2.1: Global Positioning System
The Global Positioning System (GPS) [1] [Fig. 2.1] is used to get information about the destination or the position of the vehicle. It is a satellite-based approach for finding the position.

Figure 2.2: Buildings obstructing LOS between satellites and GPS receiver

The receiver is implemented to access the GPS coordinates. But in a map-based approach, GPS and dead reckoning are combined, because GPS failures may occur frequently. In this context, a GPS failure means the inability of a GPS receiver to estimate its position. Failures occur whenever the signals sent by the satellites are obstructed [Fig. 2.2] or when precision factors are high [1].
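As an illustration of how coordinates are read from a receiver, the sketch below parses a $GPGGA NMEA sentence into decimal degrees. The sentence shown is a standard textbook example, not data logged by this project, and a real system would also validate the checksum and fix-quality field:

```python
def parse_gga(sentence):
    """Extract (latitude, longitude) in decimal degrees from a $GPGGA
    NMEA sentence (sketch; no checksum validation)."""
    fields = sentence.split(',')

    def to_degrees(value, hemi, deg_digits):
        # NMEA packs coordinates as (d)ddmm.mmmm: whole degrees
        # followed by decimal minutes.
        degrees = float(value[:deg_digits])
        minutes = float(value[deg_digits:])
        dec = degrees + minutes / 60.0
        return -dec if hemi in ('S', 'W') else dec

    lat = to_degrees(fields[2], fields[3], 2)   # ddmm.mmmm, N/S
    lon = to_degrees(fields[4], fields[5], 3)   # dddmm.mmmm, E/W
    return lat, lon

# Widely used example GGA sentence (48°07.038' N, 11°31.000' E).
lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

The same field-splitting approach extends to the other NMEA sentence types listed later in Table 3.1.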

2.2.2 DGPS

A Differential GPS (DGPS) system [2] is implemented so as to provide more accuracy to a mobile user. The DGPS system operates by having reference stations receive the satellite-broadcast GPS signal at a known site, and then transmit a correction, according to the error in the received signal, to mobile GPS users. As long as the mobile user is in the proximity of the stationary site, they will experience similar errors, and hence require similar corrections.
Typical DGPS accuracy is around 4 to 6 m, with better performance seen as the distance between the user and the beacon site decreases. But if the receivers are obstructed, DGPS also fails, and there the implementation of dead reckoning is required. Dead reckoning makes use of odometry to find the vehicle's location. Odometry is a positioning sensor which estimates both the position and orientation of the robot by integrating the number of left and right driving-wheel rotations. The coordinate system of the odometry is the same as that of the DGPS.
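The wheel-rotation integration described above can be sketched as follows for a differential-drive robot. The wheel radius and axle length here are illustrative assumptions, not the actual dimensions of the bot:

```python
import math

# Illustrative geometry (assumed values, not the bot's real dimensions).
WHEEL_RADIUS = 0.03   # metres
AXLE_LENGTH = 0.20    # metres between the two driving wheels

def update_pose(x, y, theta, d_left_rot, d_right_rot):
    """Advance the pose (x, y, heading) by one encoder reading.
    d_left_rot / d_right_rot are wheel rotations in radians."""
    dl = WHEEL_RADIUS * d_left_rot     # arc length of left wheel
    dr = WHEEL_RADIUS * d_right_rot    # arc length of right wheel
    dc = (dl + dr) / 2.0               # distance moved by robot centre
    dtheta = (dr - dl) / AXLE_LENGTH   # change in heading
    # Integrate using the mid-point heading for better accuracy.
    x += dc * math.cos(theta + dtheta / 2.0)
    y += dc * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Driving straight: both wheels rotate equally, so heading is unchanged
# and the robot advances 0.03 m * 10 rad = 0.3 m along x.
x, y, th = update_pose(0.0, 0.0, 0.0, 10.0, 10.0)
```

Since the errors of such integration grow without bound, this estimate is only useful to bridge short GPS/DGPS outages, as the text suggests.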

2.2.3 Edge Detection

The vision-based approach to edge detection [3] is discussed here. The purpose of edge detection is to find the edge between road and non-road features. The edge can be determined from the change in pixel values between road and non-road. Colour frames from the camera are converted to grayscale and then the edges are determined. There are various image processing tools for edge detection.
1. Sobel Operator:
The operator consists of a pair of 3x3 convolution kernels, one kernel simply being the other rotated by 90°. These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. Basically, these gradient values constitute the edges.
The magnitude, or edge strength, of the gradient is then approximated using the formula:

|G| = |Gx| + |Gy|    (2.1)

The formula for finding the edge direction is:

tan Θ = Gy / Gx    (2.2)

The convolution kernels used for the image gradients in the Sobel operator are shown in Figure 2.3.

Figure 2.3: Sobel masks for detecting edges

In simple terms, the operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible change from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how that edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation.
Mathematically, the gradient of a two-variable function (here the image intensity function) is at each image point a 2D vector with the components given by the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of the largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction.
This implies that the result of the Sobel operator at an image point in a region of constant image intensity is a zero vector, and at a point on an edge it is a vector which points across the edge, from brighter to darker values.
2. Laplacian Operator
The Laplacian[3] is a 2-D isotropic measure of the 2nd spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian Smoothing filter in order to reduce its sensitivity to noise. The operator normally takes a single graylevel image as input and produces another graylevel image as output. Since the input image is represented as a set of discrete pixels, we have to find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian.

Figure 2.4: Laplacian filters

The Laplacian is basically the second derivative of an image. Mathematically, it can be written as:

∇²f = ∂²f/∂x² + ∂²f/∂y²    (2.3)

Once we use the kernels shown above for calculating edges, areas of constant intensity give zero as the result. Since the Laplacian calculates the second derivative, it has a stronger response to fine detail, such as thin lines and points. The sign can be used to determine whether the transition is from light to dark or vice versa. The Laplacian is highly susceptible to noise, hence it is not used as such. It is used as the Laplacian of Gaussian, a compound operator that combines a smoothing operation, using a Gaussian-shaped, linear-phase FIR filter, with a differentiation operation, using a discrete Laplacian. The edges are identified by the locations of zero crossings. Figure 2.5 shows the Laplacian of Gaussian operator.
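The zero-crossing behaviour can be seen with a standard 4-neighbour discrete Laplacian kernel (one common form of the filters in Fig. 2.4). In this minimal pure-Python/NumPy sketch on a synthetic image, the response is zero in constant regions and changes sign across the edge:

```python
import numpy as np

# 4-neighbour discrete Laplacian kernel (a common approximation of Eq. 2.3).
LAP = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], dtype=float)

def laplacian(img):
    """Convolve the interior of a grayscale image with the Laplacian kernel."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(LAP * img[y - 1:y + 2, x - 1:x + 2])
    return out

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 5))
img[:, 3:] = 255.0
resp = laplacian(img)
# resp is zero in the flat regions and flips sign across the edge,
# giving the zero crossing used for edge localisation.
```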
Figure 2.5: Laplacian of Gaussian

The Sobel and Laplacian operators are highly susceptible to noise and are not commonly used. The most commonly used edge detection operator is the Canny edge detection operator. This method is discussed in detail later in Chapter 4.
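As a concrete illustration of the gradient computation above, here is a minimal pure-NumPy sketch of the Sobel operator using the |G| = |Gx| + |Gy| approximation of Eq. (2.1). The synthetic 5 x 5 test image is an assumption for the example; a real implementation would use an optimised library routine rather than Python loops:

```python
import numpy as np

# Sobel kernels (Fig. 2.3): KX responds to vertical edges,
# KY is KX rotated by 90 degrees and responds to horizontal edges.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude |G| = |Gx| + |Gy| (Eq. 2.1) for the interior
    pixels of a grayscale image."""
    h, w = img.shape
    mag = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(KX * patch)
            gy = np.sum(KY * patch)
            mag[y, x] = abs(gx) + abs(gy)
    return mag

# Synthetic image with a single vertical edge between columns 2 and 3.
img = np.zeros((5, 5))
img[:, 3:] = 255.0
edges = sobel_magnitude(img)
# The magnitude is zero in the flat region and large at the edge.
```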

2.2.4 Road Tracking

This method is to follow the lane of the road. It involves edge detection and road following. The approach classifies the pixels of an image as road or non-road. The line tracking approach depends on the geometric model of the road; the edge detection discussed above is actually a part of the line tracking approach. The road tracking algorithm [4] detects road and non-road features and classifies them accordingly. The various steps in road tracking after edge detection are discussed here.
1. Image Reduction: This step is actually done prior to edge detection. We create a pyramid of reduced-resolution R, G and B images; the image size is reduced from, say, 256 x 256 to 32 x 32. The picture is also converted to grayscale, as it is easier to process: a colour image needs to be processed in the R, G and B domains, while a grayscale image needs only one domain. Image reduction is used mainly to improve speed, but as a side effect the resulting smoothing reduces the effect of scene anomalies such as cracks in the pavement. Residual noise is later removed during Canny edge detection.
2. Edge Detection: After image reduction, the road edges are found using any of the edge detection methods discussed above. The best of the lot has to be selected for detecting the edges of the road. The Canny method involves image smoothing followed by edge detection.
3. Pixel Classification: Each pixel (in the 32 x 32 reduced image) is labeled as belonging to either the road or the non-road class by standard maximum-likelihood classification. Usually there are various types of road and non-road classes, but since we are using a grayscale image, there is one class for road and one for non-road. After thresholding the edge-detected image, white pixels are taken as road and the others as non-road. There is a problem that some shadows or dark features which are not road might be classified as road; this is handled by the next step, voting for road position.
4. Voting for Road Position: Once the edge directions are found and the pixel classification is done, the next step is decision making: deciding which region really is the road. The maximum-blob-size concept is used for finding the road. After pixel classification, the largest block is judged as road; usually the largest white contour area will be the road. But there may be some noise and some erroneous road and non-road features, and by varying the decision threshold we can eliminate such discrepancies.
This is the standard method for tracking the road. The standard method may create a lot of false alarms and misses. False alarms are non-road features which are mistakenly shown as road features during processing of the image; misses are road features which are wrongly shown as non-road features. So the algorithm has to be made more precise to reduce the number of false alarms and misses.
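The "voting for road position" step above amounts to picking the largest connected white region after thresholding. The following self-contained sketch uses a plain flood fill on a tiny synthetic mask; a real system would use an image-processing library's contour functions on the 32 x 32 frame:

```python
import numpy as np

def largest_blob(mask):
    """Flood-fill every white region in a binary mask and return a mask
    containing only the largest one -- the blob voted as 'road'."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Grow a new connected component from this seed pixel.
                stack, blob = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(blob) > len(best):
                    best = blob
    out = np.zeros_like(mask)
    for y, x in best:
        out[y, x] = True
    return out

# Thresholded 8x8 frame: a large road region plus one bright noise pixel.
mask = np.zeros((8, 8), dtype=bool)
mask[2:8, 2:6] = True   # road region: 24 pixels
mask[0, 7] = True       # isolated noise pixel
road = largest_blob(mask)
# The noise pixel is rejected because its blob is smaller than the road's.
```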

2.2.5 Routing Algorithms

Path optimisation is an important parameter in real-life scenarios where energy consumption should be reduced significantly. There may be more than one path from a source to the destination. The path should be planned in such a way that the total cost is minimum. The idea of routing protocols can be put to use here. There are a number of routing algorithms that can be used for shortest-path routing.

1. Dijkstra's Algorithm: Dijkstra's algorithm considers the map as a network consisting of various nodes [5]. For a given source vertex (node) in the graph, the algorithm finds the path with the lowest cost (i.e. the shortest path) between that vertex and every other vertex. It can also be used for finding the cost of the shortest path from a single source vertex to a single destination vertex, by stopping the algorithm once the shortest path to the destination vertex has been determined.
Suppose you want to find the shortest path between two intersections on a city map: a starting point and a destination. The procedure is conceptually simple. To start, mark the distance to every intersection on the map as infinity. This is done not to imply there is an infinite distance, but to note that that intersection has not yet been visited. Now, at each iteration, select a current intersection. For the first iteration the current intersection will be the starting point, and the distance to it (the intersection's label) will be zero. For subsequent iterations the current intersection will be the closest unvisited intersection to the starting point. From the current intersection, update the distance to every unvisited intersection that is directly connected to it. This is done by determining the sum of the distance between an unvisited intersection and the value of the current intersection, and relabeling the unvisited intersection with this value if it is less than its current value.
In effect, the intersection is relabeled if the path to it through the current intersection is shorter than the previously known paths. After you have updated the distances to each neighbouring intersection, mark the current intersection as visited and select the unvisited intersection with the lowest distance (from the starting point) as the current intersection. Nodes marked as visited are labeled with the shortest path from the starting point and will not be revisited.
Continue this process of updating the neighbouring intersections with the shortest distances, then marking the current intersection as visited and moving on to the closest unvisited intersection, until you have marked the destination as visited. Once you have marked the destination as visited (as is the case with any visited intersection), you have determined the shortest path to it from the starting point, and can trace your way back, following the arrows in reverse.

Figure 2.6: Typical network with nodes and cost of paths

2. Distance Vector Algorithm: This is also an algorithm to find the optimum path between two nodes in a network. This algorithm generates routing tables between successive nodes and continues finding the distance between the nodes. Comparing performance, Dijkstra's algorithm has lower time complexity than the Distance Vector Algorithm, and this has been proved. So the algorithm of this particular method is not discussed here.

3. A star Algorithm: A* is a computer algorithm [6] that is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. Noted for its performance and accuracy, it enjoys widespread use. It is an extension of Dijkstra's algorithm. It uses a distance-plus-cost heuristic function f(x) to determine the order in which the search visits nodes in the tree. The distance-plus-cost heuristic is the sum of two functions: the path-cost function g(x), which is the cost from the starting node to the current node, and an admissible "heuristic estimate" h(x) of the distance to the goal.
Starting with the initial node, it maintains a priority queue of nodes to be traversed, known as the open set. The lower f(x) for a given node x, the higher its priority. At each step of the algorithm, the node with the lowest f(x) value is removed from the queue, the f and h values of its neighbours are updated accordingly, and these neighbours are added to the queue. The algorithm continues until a goal node has a lower f value than any node in the queue (or until the queue is empty). The f value of the goal is then the length of the shortest path, since h at the goal is zero for an admissible heuristic. If the actual shortest path is desired, the algorithm may also update each neighbour with its immediate predecessor on the best path found so far; this information can then be used to reconstruct the path by working backwards from the goal node. Additionally, if the heuristic is monotonic, a closed set of nodes already traversed may be used to make the search more efficient. What distinguishes A* from Dijkstra's algorithm is speed: by directing the search towards the goal, A* typically expands far fewer nodes and so runs considerably faster in practice.
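The open set, the f(x) = g(x) + h(x) ordering and the closed set described above can be sketched as follows. The 2D grid, unit move costs and Manhattan-distance heuristic are assumptions made for illustration (Manhattan distance is admissible and monotonic for 4-connected unit-cost moves):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (blocked) cells.

    f(x) = g(x) + h(x), where h is the Manhattan distance to the goal.
    Returns (path_cost, path) or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]       # priority queue of (f, g, node)
    g_cost = {start: 0}
    prev = {}
    closed = set()                          # safe because h is monotonic
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            # Reconstruct the path by following predecessors backwards.
            path = [node]
            while node != start:
                node = prev[node]
                path.append(node)
            return g, path[::-1]
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    prev[(nr, nc)] = node
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A wall across row 1 forces the search around the right-hand side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
cost, path = a_star(grid, (0, 0), (2, 0))
```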
4. Greedy Best First Search Algorithm: Best-first search is a search algorithm which explores a graph by expanding the most promising node chosen according to a specified rule. Greedy algorithms appear in network routing: using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The algorithm is applied to networks in the same way as the other search algorithms.
The process starts from the parent node. Using a greedy algorithm, expand the first successor of the parent. After a successor is generated, if the successor's heuristic is better than its parent's, the successor is set at the front of the queue (with the parent reinserted directly behind it), and the loop restarts. Otherwise, the successor is inserted into the queue in a location determined by its heuristic value, and the procedure evaluates the remaining successors (if any) of the parent. In short, a greedy algorithm makes the choice which seems best at the moment and continues[6].
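A minimal sketch of greedy best-first search follows. The graph and the heuristic values h (pretend straight-line estimates to the goal) are hypothetical; note that, unlike Dijkstra or A*, edge costs are ignored entirely, so the returned path is not guaranteed to be the cheapest:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node whose
    heuristic value h(node) looks closest to the goal."""
    queue = [(h[start], start)]   # priority queue ordered by h only
    prev = {}
    seen = {start}
    while queue:
        _, node = heapq.heappop(queue)
        if node == goal:
            path = [node]
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                heapq.heappush(queue, (h[nbr], nbr))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}   # hypothetical estimates to goal D
path = greedy_best_first(graph, h, "A", "D")
```

With these heuristic values the search commits to C (h = 1) before B (h = 2) and reaches D through it.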
5. D star Algorithm: D* and its variants have been widely used for mobile robot and autonomous vehicle navigation[7]. The algorithm works by iteratively selecting a node from the OPEN list and evaluating it. It then propagates the node's changes to all of the neighbouring nodes and places them on the OPEN list. This propagation process is termed "expansion". In contrast to A*, which follows the path from start to finish, D* begins by searching backwards from the goal node. Each expanded node has a backpointer which refers to the next node leading to the target, and each node knows the exact cost to the target. When the start node is the next node to be expanded, the algorithm is done, and the path to the goal can be found by simply following the backpointers.

2.2.6 The Hardware

The core of the bot is a Beagle Board running Linux, which handles all communication with the host through an Internet connection and a Wi-Fi module. The USB-powered Beagle Board is a low-cost, fan-less single-board computer that delivers laptop-like performance and expandability without the bulk, expense, or noise of typical desktop machines. The system memory consists of a standard SD card which stores all routing data and images for accurate path routing.
A standard mapping system can also be installed for navigation instead of routing algorithms; both approaches will be tested. For independent motion, the bot has an array of sensors. The primary input is a camera module which supplies continuous frames. The secondary inputs are Sharp sensors for distance calibration and a GPS module for mapping and navigation. The frames from the camera are processed for obstacle detection, navigating around obstacles and following the road. The frames will be processed using the OpenCV image libraries or a comparable library. The bot actuators are basic 200/400 rpm DC motors with standard track wheels; the motor selection will be based on image processing speed. The component details and further hardware specifications are dealt with in Chapter 6.

2.3 System Modelling

The processing of the system is to be modelled. The system performs the following functions. On receiving the initial host command, the software reads the initial GPS coordinates. Then the minimum path to the destination is chosen from the already stored path routing data. After that, image processing techniques are used in order to track the road. The sensors then position the vehicle on the road by detecting obstacles.
On reaching the destination, the bot is commanded to stop by the host. The entire system is to be modelled in software in order to perform each of these functions. For image processing, the OpenCV library files are used to handle the images taken as input. The sensors detect obstacles and return range values, and the program decides how to command the wheels to turn. So the entire system is a sequence of different processes which are to be implemented in software.

13

The whole functions of the system can be modelled as flow diagram of events and processes. The key processes are GPS tracking, road following, obstacle avoidance etc.
The entire system flow is depicted as the flow diagram below.

Figure 2.7: Flow chart of the processing algorithm

14

Chapter 3

Positioning
3.1 Introduction

In this chapter, we analyze the different techniques that can be adopted for position estimation and the implementation of positioning in the mobile robot. Self-positioning systems can be divided into three location technologies: stand-alone (e.g., dead reckoning), satellite-based (e.g., GPS), and terrestrial radio-based (e.g., LORAN-C or cellular networks). Another class of positioning systems is a hybrid system employing two or more of these technologies, possibly augmented with specific sensors and a map matching system.
The most common application of map matching is automobile navigation. These systems typically fuse several sensors, in particular a GPS receiver, an odometer, a directional sensor and a digital-map database. The use of sensors for positioning on the road is also discussed here.

3.2 Literature Review

3.2.1 GPS Receiver

GPS receivers are used to obtain the coordinate position of the robot; this is in fact how the robot checks whether it has reached the destination. A GPS receiver can measure not only its position (xgps, ygps, zgps) but also its velocity V and heading direction θgps. While the position data is calculated from the TOF (Time of Flight) of the GPS radio waves, the velocity and heading direction are computed from the Doppler shift of the GPS radio waves. To study the properties of GPS position and heading data, the authors observed GPS measurements in a walkway environment between buildings. The observation showed that the measurement error of the GPS heading direction tends to be small even when that of the position data is large. This tendency may arise from the difference in measurement principle. Therefore, in the fusion, the authors propose that GPS position data and heading direction data be used independently for correcting the robot position.
The GPS receiver sends the data to a computer at 9600 bps via RS232C. The output data


format is the so-called NMEA-0183, which delivers a series of characters through an RS232C communication channel. NMEA 0183 is a combined electrical and data specification for communication between marine electronic devices such as echo sounders, sonars, anemometers, gyrocompasses, autopilots, GPS receivers and many other types of instruments.
The NMEA 0183[8] standard uses a simple ASCII, serial communications protocol that defines how data is transmitted in a ”sentence” from one ”talker” to multiple ”listeners” at a time. Through the use of intermediate expanders, a talker can have a unidirectional conversation with a nearly unlimited number of listeners, and using multiplexers, multiple sensors can talk to a single computer port.
The figure below gives an idea on a sample NMEA sentence received on the module.

Figure 3.1: NMEA sentence
The sample NMEA sentence is shown by the figure. Note that out of the different fields, the coordinates are shown by only two fields.


There are various kinds of NMEA sentences, depending on the purpose of the instrument.
For GPS information, there are certain typical sentences. The table 3.1 shows some standard NMEA sentences received on the computer.

Table 3.1: NMEA Sentences

Sentence  Contents
GGA       Latitude, Longitude, Height, Quality, Number of satellites
GSA       Mode (2D, 3D), Satellite IDs
VTG       Speed over ground, Track made good

By analyzing the GGA sentence, we can obtain the latitude, longitude and height with respect to the WGS-84 geographic coordinate system.

3.2.2 Positioning using Sharp Sensors

Sensors are mainly meant to keep track of obstacles; in effect, they position the vehicle within the lanes of the road. The robot is moved to a different lane when obstacles are detected. Sharp sensors, which have a longer range than simple IR sensors, can be used. With IR or laser ranging, the time taken for the beam to return is sensed and used as the decision metric. With Sharp sensors the processing is simpler, since the sensor reports distance directly: a Sharp sensor senses the distance to an obstacle when it is in range. The distance is stored in a decision matrix and the movement of the vehicle is altered accordingly.
The Sharp IR range finder works by triangulation. A pulse of light in the IR range is emitted and then reflected back (or not reflected at all). When the light returns, it comes back at an angle that depends on the distance of the reflecting object. Triangulation works by detecting this reflected beam angle: knowing the angle, the distance can then be determined. The typical sensing range is 10 cm to 80 cm. The sensor values can be used to decide the position of the vehicle on the road; for example, if a sensor reading is small, the vehicle has to shift its position to the left or right.

3.3 Implementation and Results

3.3.1 Current GPS Location

The GPS receiver is attached to the Beagle Board via a USB connection. The module continuously sends NMEA sentences to the computer, which can be received on the Beagle Board terminal. For instance, a received sentence is
$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47


According to the NMEA protocol, the GPS coordinates occupy four consecutive fields starting from the third, so they can be extracted from the sentence easily. The extracted coordinates are then processed together with the destination coordinates.
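The extraction step can be sketched in Python. The field layout follows the standard NMEA 0183 GGA definition (latitude as ddmm.mmmm, longitude as dddmm.mmmm, each followed by a hemisphere field), applied to the sample sentence above:

```python
def parse_gga(sentence):
    """Extract (latitude, longitude) in decimal degrees from a $GPGGA
    sentence.  Fields 2-5 (counting the sentence ID as field 0) hold
    latitude, N/S hemisphere, longitude and E/W hemisphere."""
    fields = sentence.split(",")
    lat_raw, lat_hem = fields[2], fields[3]
    lon_raw, lon_hem = fields[4], fields[5]
    # ddmm.mmmm -> degrees + minutes/60 (longitude uses three degree digits).
    lat = int(lat_raw[:2]) + float(lat_raw[2:]) / 60.0
    lon = int(lon_raw[:3]) + float(lon_raw[3:]) / 60.0
    if lat_hem == "S":
        lat = -lat
    if lon_hem == "W":
        lon = -lon
    return lat, lon

lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

For the sample sentence this yields roughly 48.1173° N, 11.5167° E.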

3.3.2 Sharp Sensor Calibration

Connect the sensor to the analog-to-digital converter as shown in the circuit below.

Figure 3.2: Connecting the sensor to the ADC

The potentiometer connected to the Vref pin on the ADC is used as a voltage divider to set the reference voltage to 2.55 volts. The ADC will then give a value of 0 to 255 for an input voltage of 0 to 2.55 volts, i.e. a resolution of 0.01 volts per step.
The sensor output follows a repeatable voltage-distance curve. To calibrate the sensor, the voltage values obtained at different distances were plotted, and the relation between voltage and distance was read from the plot.

Figure 3.3: Relation between voltage and distance for the sensor
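The conversion chain from ADC count to distance can be sketched as below. The 0.01 V/step resolution comes from the setup above; the slope and intercept of the straight-line fit are hypothetical placeholders, since the real coefficients come from the measured calibration plot of Figure 3.3:

```python
def adc_to_voltage(count):
    """ADC reading (0-255) to volts, at the 0.01 V per step resolution
    set by the 2.55 V reference."""
    return count * 0.01

def voltage_to_distance(volts, slope=-35.0, intercept=85.0):
    """Distance (cm) from a straight-line fit to the calibration plot.
    slope and intercept are HYPOTHETICAL values for illustration; fit
    them to the measured voltage/distance data before use."""
    return slope * volts + intercept

# Example: an ADC count of 150 corresponds to 1.5 V.
distance_cm = voltage_to_distance(adc_to_voltage(150))
```

The returned distance then feeds the decision matrix described in Section 3.2.2.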


Chapter 4

Road Following
4.1 Introduction

The idea of path following is not that simple. The unmanned vehicle is supposed to follow the road while tackling obstacles and changing its path accordingly. Several kinds of sensors are used to acquire information from the environment and carry out robot navigation with real-time obstacle avoidance: vision systems, 2D or 3D laser rangefinders, and combinations of them are used to detect obstacles in different environments. Road following techniques using cameras are also common.
A vision-based approach is used for navigation here, with a camera as the primary input. The frames from the camera are processed to track the road. There are three major steps in road following:
1. Edge Detection
2. Road Tracking
3. Obstacle avoidance

4.2 Literature Review

An algorithm for road tracking was developed from the basics of edge detection and road following principles. Crude edge detection of the image is highly error-prone, since the shadows of trees, small patches, etc. are detected as edges. The conventional road tracking method also generates many misses and false alarms, so a modified algorithm had to be developed. In this method, the image is cut in half and only the lower half is used for processing. The various steps in road detection are given below.

4.2.1 Colour Filtering

The input image is cut into half and the lower portion is taken for processing. The image is cropped so that a square portion from the center of the image is taken[3]. This is the

portion corresponding to the road features. The colour filter scans the entire image; pixels within the colour range of the square sample are retained and the remaining pixels are set to zero. This reduces the colour range of the image to that of the road. After colour filtering, the image is converted to grayscale.
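The sample-based filtering step can be sketched with NumPy on a toy frame (with OpenCV available, cv2.inRange performs the equivalent per-channel range test). The image, sample patch location and colour values are illustrative assumptions:

```python
import numpy as np

def colour_filter(img, sample):
    """Keep only pixels whose RGB values fall inside the min/max range
    of a sample patch assumed to lie on the road; zero out the rest."""
    lo = sample.reshape(-1, 3).min(axis=0)
    hi = sample.reshape(-1, 3).max(axis=0)
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    out = np.zeros_like(img)
    out[mask] = img[mask]       # road-coloured pixels survive
    return out, mask

# Toy 4x4 "frame": uniform road-grey pixels plus one bright outlier.
img = np.full((4, 4, 3), 120, dtype=np.uint8)
img[0, 0] = (250, 250, 250)          # non-road pixel
sample = img[2:4, 1:3]               # centre patch assumed to be road
filtered, mask = colour_filter(img, sample)
```

Here the single bright pixel falls outside the sample's colour range and is zeroed, while the 15 road-grey pixels survive.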

4.2.2 Morphological Transformations

Morphological transformations are a set of operations that process images based on shapes. They apply a structuring element to an input image and generate an output image, and are used to remove noise in an image and to identify bumps and holes in it.
1. Dilation[9] : This operation consists of convolving an image A with some kernel B, which can have any shape or size, usually a square or circle. The kernel B has a defined anchor point, usually its centre. As the kernel B is scanned over the image, we compute the maximal pixel value overlapped by B and replace the image pixel at the anchor point position with that maximal value. As you can deduce, this maximizing operation causes bright regions within an image to grow (hence the name dilation). Figure 4.1 shows an example of dilation.

Figure 4.1: Image subjected to dilation

2. Erosion[9] : This operation is the sister of dilation. It computes a local minimum over the area of the kernel. As the kernel B is scanned over the image, we compute the minimal pixel value overlapped by B and replace the image pixel under the anchor point with that minimal value. Analogously to dilation, when we apply the erosion operator to the original image, the bright areas of the image (the background, typically) get thinner, whereas the dark zones get bigger. Figure 4.2 shows an example of erosion.
3. Opening[9] : It is obtained by the erosion of an image followed by a dilation.

dst = open(src, element) = dilate(erode(src, element)) (4.1)

This is useful for removing small objects from the image; small bright spots on a dark background are removed as a result.

Figure 4.2: Image undergoing erosion

4. Closing[9] : It is obtained by the dilation of an image followed by an erosion.

dst = close(src, element) = erode(dilate(src, element)) (4.2)

This can be used to remove small dark regions scattered through the image. The following example shows what opening and closing do.

Figure 4.3: Opening and closing operation on images

The appropriate morphological transformation is selected according to the application. For road detection, opening or closing is used to remove the black speckle created by colour filtering; this removes wrongly retained pixels and hence unwanted colours from the image.
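Equation (4.1) above (opening = erosion then dilation) can be sketched on a binary mask with plain NumPy; OpenCV's cv2.morphologyEx(src, cv2.MORPH_OPEN, kernel) performs the same operation on real frames. The 3x3 kernel and the toy mask are illustrative assumptions:

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    neighbourhood is set (local minimum under the kernel)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out &= p[1 + dr:1 + dr + mask.shape[0],
                     1 + dc:1 + dc + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: local maximum under the kernel."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= p[1 + dr:1 + dr + mask.shape[0],
                     1 + dc:1 + dc + mask.shape[1]]
    return out

def opening(mask):
    # Equation (4.1): dilate(erode(mask))
    return dilate(erode(mask))

# A solid 5x5 "road" block plus one isolated speck of filtering noise.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
mask[0, 8] = True          # the speck that opening should remove
opened = opening(mask)
```

The solid block survives unchanged while the isolated speck, having no full neighbourhood, is erased, which is exactly the speckle-removal behaviour wanted after colour filtering.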

4.2.3 Canny Edge Detection

The Canny edge detection [3] algorithm is known to many as the optimal edge detector. There are certain criteria in Canny detection. The first and most obvious is a low error rate: it is important that edges occurring in images are not missed and that there are no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the edge pixels found by the detector and the actual edge is to be at a minimum. A third criterion is to have only one response to a single edge. This was introduced because the first two were not sufficient to completely

eliminate the possibility of multiple responses to an edge.
Based on these criteria, the canny edge detector first smoothes the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives.
The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum. The gradient array is now further reduced by hysteresis. Hysteresis is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds and if the magnitude is below the first threshold, it is set to zero (made a nonedge). The various steps involved in canny detection are:
1. The first step is to filter out any noise in the original image before trying to locate and detect any edges. And because the Gaussian filter can be computed using a simple mask, it is used exclusively in the Canny algorithm. Once a suitable mask has been calculated, the Gaussian smoothing can be performed using standard convolution methods. A convolution mask is usually much smaller than the actual image. As a result, the mask is slid over the image, manipulating a square of pixels at a time.
The larger the width of the Gaussian mask, the lower is the detector’s sensitivity to noise. The localization error in the detected edges also increases slightly as the
Gaussian width is increased.

Figure 4.4: A 5 x 5 Gaussian mask for image smoothing

2. After smoothing the image and eliminating the noise, the next step is to find the edge strength by taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient measurement on an image (discussed earlier). Then, the approximate absolute gradient magnitude at each point can be found.
The magnitude, or edge strength, of the gradient is then approximated using the formula:

|G| = |Gx| + |Gy| (4.3)

3. The direction of the edge is computed using the gradients in the x and y directions. However, a divide-by-zero error is generated whenever the gradient in the x direction is zero; in that case the edge direction is set to 90 degrees or 0 degrees, depending on the value of the gradient in the y direction. The formula for finding the edge direction is:

tan θ = Gy / Gx (4.4)

4. Once the edge direction is known, the next step is to relate the edge direction to a direction that can be traced in an image. So if the pixels of a 5x5 image are aligned as follows:

x x x x x
x x x x x
x x a x x
x x x x x
x x x x x

then, looking at pixel "a", there are only four possible directions when describing the surrounding pixels: 0 degrees (the horizontal direction), 45 degrees (along the positive diagonal), 90 degrees (the vertical direction), or 135 degrees (along the negative diagonal). The edge orientation is therefore resolved into whichever of these four directions it is closest to.
5. After the edge directions are known, nonmaximum suppression now has to be applied. Nonmaximum suppression is used to trace along the edge in the edge direction and suppress any pixel value (sets it equal to 0) that is not considered to be an edge.
This will give a thin line in the output image.
6. Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking up of an edge contour caused by the operator output fluctuating above and below the threshold. If a single threshold, T1 is applied to an image, and an edge has an average strength equal to T1, then due to noise, there will be instances where the edge dips below the threshold. Equally it will also extend above the threshold making an edge look like a dashed line. To avoid this, hysteresis uses 2 thresholds, a high and a low. Any pixel in the image that has a value greater than T1 is presumed to be an edge pixel, and is marked as such immediately. Then, any pixels that are connected to this edge pixel and that have a value greater than T2 are also selected as edge pixels.
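Steps 2-4 above (Sobel gradients, the |G| = |Gx| + |Gy| approximation of equation (4.3), and quantisation of the direction of equation (4.4) to the four traceable angles) can be sketched with NumPy; the synthetic step-edge image used to exercise it is an assumption for illustration:

```python
import numpy as np

def sobel_gradients(img):
    """Return (magnitude, quantised direction) per pixel.

    Gradient magnitude uses |G| = |Gx| + |Gy|; the direction is
    quantised to 0, 45, 90 or 135 degrees as described in step 4."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    p = np.pad(img.astype(float), 1)
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    # Correlate with the 3x3 Sobel kernels by shifting the padded image.
    for r in range(3):
        for c in range(3):
            win = p[r:r + H, c:c + W]
            gx += kx[r, c] * win
            gy += ky[r, c] * win
    mag = np.abs(gx) + np.abs(gy)
    # atan2 handles gx == 0 without a divide-by-zero error.
    theta = np.degrees(np.arctan2(gy, gx)) % 180.0
    quant = (np.round(theta / 45.0) % 4) * 45   # 0, 45, 90 or 135
    return mag, quant

# Vertical step edge between columns 2 and 3.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag, quant = sobel_gradients(img)
```

On this step edge the interior pixels next to the boundary get magnitude 4 with direction 0 degrees (a purely horizontal gradient), while pixels far from the edge get magnitude 0.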


4.2.4 Hough Transform

The Hough transform [10] is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm. Since we are dealing with road extraction, the Hough line transform is used; it detects straight lines. Before applying the transform, an edge detection pre-processing step is desirable.

1. A line in the image space can be expressed with two variables. For Hough transforms, we express lines in the polar system.

Figure 4.5: Representation of a line in image space

Hence, a line equation can be written as:

r = x cos θ + y sin θ (4.5)

2. In general, for each point (x0 , y0 ), we can define the family of lines that goes through that point as:

rθ = x0 cos θ + y0 sin θ (4.6)

3. If for a given (x0 , y0 ) we plot the family of lines that goes through it, we get a sinusoid. For instance, for x0 = 8 and y0 = 6 we get the plot in Fig. 4.6 (in the θ-r plane):


Figure 4.6: Family of line passing through(x0 , y0 )

4. If the curves of two different points intersect in the plane θ − r, that means that both points belong to a same line. For instance, following with the example above and drawing the plot for two more points: x1 = 9, y1 = 4 and x2 = 12, y2 = 3, we get:

Figure 4.7: Points on same line

5. In general, a line can be detected by finding the number of intersections between curves. The more curves that intersect at a point, the more image points lie on the line represented by that intersection. We can therefore define a threshold on the minimum number of intersections needed to detect a line.
6. This is what the Hough line transform does. It keeps track of the intersections between the curves of every point in the image; if the number of intersections at a point is above some threshold, it declares a line with the parameters (θ, rθ) of that intersection point. So by using the Hough transform we can obtain the edge of the road. Since the Hough transform reports a line only when its intersections exceed a particular threshold,

a line will be obtained on the edge of the road. As the edge detection is done earlier, the line along the road edge will be the one with the most votes, so by tuning the threshold, the road lanes can be extracted using the Hough transform.
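The voting procedure of steps 1-6 can be sketched as a minimal standard Hough line transform in NumPy (OpenCV's cv2.HoughLines implements the production version). The synthetic one-line edge image and the parameter resolutions are assumptions for illustration:

```python
import numpy as np

def hough_lines(edge_mask, threshold, theta_steps=180):
    """Minimal standard Hough line transform.

    Every edge pixel votes for all (theta, r) cells it could lie on,
    using r = x*cos(theta) + y*sin(theta) (equation 4.5); cells whose
    vote count reaches the threshold are reported as lines."""
    H, W = edge_mask.shape
    diag = int(np.ceil(np.hypot(H, W)))      # r ranges over [-diag, diag]
    thetas = np.deg2rad(np.arange(theta_steps))
    acc = np.zeros((2 * diag + 1, theta_steps), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rs + diag, np.arange(theta_steps)] += 1   # one vote per theta
    return [(int(r) - diag, float(np.rad2deg(thetas[t])))
            for r, t in zip(*np.nonzero(acc >= threshold))]

# Synthetic edge image: a horizontal line at y = 4.
edges = np.zeros((10, 10), dtype=bool)
edges[4, :] = True
lines = hough_lines(edges, threshold=10)
```

All ten edge pixels vote for the same cell near theta = 90 degrees, r = 4, so that line clears the threshold; raising the threshold is exactly the mechanism the text describes for isolating the dominant road edge.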

4.2.5 Angle of Inclination

The angle of inclination of the road has to be found to decide the path of the bot: the vehicle must turn according to the road angle so that it does not leave the road. For this, the image after the Hough transform is taken and a rectangular box is placed on the road. The angle of inclination of the road with respect to the box is taken as the road angle and processed. The angle varies from moment to moment as new image frames are loaded.

4.2.6 Obstacle Avoidance

A vision-based approach can also be used for obstacle detection. Camera-based detection[11] is more susceptible to weather changes, but if the road detection algorithm is used, obstacles can be identified as non-road features. The problem is that after colour filtering and opening of the image, small non-road features are suppressed, so some obstacles may not be identified. The idea is therefore to use the vision-based approach together with the Sharp sensors for maximum performance: the Sharp sensors detect all obstacles and report their distance, while the vision-based approach identifies the larger obstacles. The advantage of vision-based obstacle detection is that path planning for road following and obstacle avoidance can be done simultaneously.

4.3 Implementation and Results

4.3.1 Edge Detection

Simple edge detection was among the first image processing techniques studied, and it was performed to see what crude edge detection yields on the given input. A sample image was subjected to edge detection using the Canny algorithm; the image and the edge-detected result are depicted on the following page.
Crude edge detection has several problems. First, the detection is not accurate: every edge in the frame is reported, while for our purpose only the road edges are needed. The error rate is therefore huge, and the probabilities of false alarms and misses are very high. So normal edge detection, as such, is not advisable for road following.


Figure 4.8: Sample input image chosen for edge detection

Figure 4.9: Edge detection using Canny algorithm

4.3.2 Morphological Transformations

The different morphological transformations, namely dilation, erosion, opening and closing, were discussed earlier. Any of these transformations can be used for road tracking. But since we are doing real-time processing, the processing time should be small, so the morphological transformation is selected on the basis of the processing time each requires. The figure below shows the various morphological transformations applied on a sample image.

Figure 4.10: Simulation of the different morphological transformations

The transformations were applied on several sample images and the processing time for each was calculated.
Table 4.1: Morphological Transformations

Transformation   Processing time
Dilation         0.398 seconds
Erosion          0.386 seconds
Opening          0.589 seconds
Closing          0.668 seconds

Table 4.1 shows the average processing time for each of the morphological operations. Dilation and erosion have the least processing times, but it is not advisable to use them alone for our road tracking application, because each of them only brightens or darkens the image. An opening or closing operation should be used in order to remove the holes and misses in the image. As opening has a lower processing time than closing, it was selected as the morphological transformation for the road tracking approach.

4.3.3 Road Tracking without Hough Transform

The conventional road tracking approach does not use the Hough transform to determine the lanes of the road. As discussed earlier, it consists of colour filtering, edge detection, thresholding and dilation to find the edges of the road.
1. Road Tracking : The image below is taken as the input to the openCV for the image processing algorithm.

Figure 4.11: Input image from the camera frame

The first step is image reduction. After reducing the image size, colour filtering is done to extract each pixel value. Then thresholding is applied so that dark portions appear white and the rest dark. False alarms and misses are removed by masking or dilating; dilating makes the neighbourhood of a pixel the same as the pixel itself.
Finally, the processed image is split into contours and the contour with the maximum area is chosen as the road. This is equivalent to detecting the edges of the processed image.

Figure 4.12: Result of road tracking algorithm applied on a frame of image

2. Performance Evaluation: Results were gathered on a frame-by-frame basis by comparing the output of the road extraction algorithm with the real picture, pixel by pixel. For each pixel there are two possibilities: either both values agree, or they disagree. When they disagree, there are two cases:
(a) The road detection algorithm claims a point to be road when it is actually non-road, which is termed a False Alarm.
(b) The road detection algorithm claims a point to be non-road when it is actually part of the road, which is termed a Miss.
It was therefore essential to evaluate the performance of the algorithm. For this, a series of about 100 image frames was taken and the algorithm was applied to each. The numbers of false alarms and misses were calculated after thresholding each image, keeping the threshold constant across all images, giving the average probabilities of misses and false alarms at that threshold. The test was then repeated with the threshold varied over a range. Figure 4.13 shows a graphical representation of the error probabilities; it clearly shows that a threshold of 0.4 gives the maximum performance.


Figure 4.13: Performance Evaluation of the road following algorithm showing false alarms and misses
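The per-pixel comparison behind Figure 4.13 can be sketched as follows; the two toy binary masks (a ground-truth road region and a slightly shifted detection) are illustrative assumptions:

```python
import numpy as np

def error_rates(detected, truth):
    """Pixel-wise comparison of the algorithm's road mask against a
    ground-truth mask.  A false alarm is 'road' claimed on non-road;
    a miss is actual road claimed as non-road."""
    false_alarm = np.mean(detected & ~truth)
    miss = np.mean(~detected & truth)
    return false_alarm, miss

truth = np.zeros((10, 10), dtype=bool)
truth[5:, :] = True          # lower half of the frame is road
detected = np.zeros((10, 10), dtype=bool)
detected[4:9, :] = True      # detection shifted up by one row
p_fa, p_miss = error_rates(detected, truth)
```

Averaging these two probabilities over the roughly 100 test frames at each threshold setting produces the curves of Figure 4.13.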

4.3.4 Road Tracking using Hough Transform

The algorithm involving the Hough transform was used to determine the lanes of the road. The details of the algorithm have been discussed earlier. The algorithm was implemented using the OpenCV library files for image processing. The input image used to simulate the entire algorithm is given in figure 4.14.

Figure 4.14: Sample input image chosen for edge detection

The following steps were performed on the image one by one in order to obtain the road edges.


1. Colour Filtering : The image is colour filtered to obtain the colour range corresponding to the road, and after that the image is converted into grayscale.

Figure 4.15: Image converted to grayscale after colour filtering

2. Morphological Transformation : The gray image is then subjected to a morphological transformation. As discussed earlier, opening[9] has a lower processing time than closing, so the image was subjected to an opening operation.

Figure 4.16: Image subjected to opening
The opening operation was done using an elliptical structuring element; three structuring elements are available: rectangle, cross and ellipse. The kernel size was varied, measuring the performance at each setting.
3. Edge Detection : The image is then subjected to edge detection using the Canny edge operation. The threshold for edge detection was varied from 0 to 100, and the threshold at which acceptable detection occurs was taken.

Figure 4.17: Canny edge detection of the image

The image obtained after edge detection gives a crude idea of the road position. It is then subjected to the Hough transform to highlight the edges of the road.
4. Hough Transform: The Standard Hough Transform (SHT) is used to determine the parameters of features such as lines and curves within an image. A binary image is used as input, where each active pixel represents part of an edge feature. The SHT maps each of these pixels to many points in Hough (or parameter) space. The standard Hough transform shows the lines having a number of points above a particular threshold. The edge-detected image after applying the Hough transform is shown in figure 4.18.
As can be seen, the standard Hough transform has some serious performance issues, so we use the probabilistic Hough transform, which yields better performance. Instead of treating the edge pixels in a binary edge image equally, a weight is assigned to each edge pixel according to the surround suppression strength at that pixel; this weight can be used in the sampling stage, the voting stage, or both stages of the probabilistic Hough transform. The weight puts emphasis on those edge points located

33

Figure 4.18: Standard Hough Transform of the image

on clear boundaries between different objects, leading to higher probability of sampling from perceptually reasonable real lines in the edge image, as well as suppressed false peaks in Hough space formed by large amount of noise edges.

Figure 4.19: Probabilistic Hough Transform of the image


The result of applying the probabilistic Hough transform to the image, and its advantage over the standard Hough transform, is clearly visible. The threshold for the Hough transform was varied between 0 and 100, and the algorithm was tested on various images to obtain an optimum threshold value.
5. Performance evaluation : Results were gathered on a frame-by-frame basis by comparing the output of the road-extraction algorithm with the real picture pixel by pixel. The maximum-performance threshold was determined by continuously varying the threshold over a set of images. Maximum performance implies there are no false alarms. As seen in the earlier example, the performance of the probabilistic Hough transform is very high at that particular threshold.

Figure 4.20: Performance of Hough Transform

The graphical representation was obtained by applying Hough-transform-based road detection to various image frames. The threshold was varied, and the value giving the least error probability was assigned the maximum success probability. The process was repeated for a sequence of image frames and generalised in the diagram above, from which it is clear that a threshold value between 20 and 30 gives the maximum performance.
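The frame-by-frame evaluation described in step 5 amounts to a threshold sweep. A simplified sketch (using NumPy, with a synthetic ground-truth mask standing in for the hand-compared road pixels):

```python
import numpy as np

def frame_score(detected, truth):
    """Pixel-by-pixel comparison of a detected road mask against ground truth.
    Returns (hit_rate, false_alarm_rate)."""
    hits = np.logical_and(detected, truth).sum() / max(truth.sum(), 1)
    false_alarms = np.logical_and(detected, ~truth).sum() / max((~truth).sum(), 1)
    return hits, false_alarms

def best_threshold(frames, truths, detector, thresholds=range(0, 101, 5)):
    """Pick the threshold maximising hit rate with zero false alarms,
    averaged over a set of frames (the 'maximum performance' criterion)."""
    best_t, best_hit = None, -1.0
    for t in thresholds:
        hit = fa = 0.0
        for frame, truth in zip(frames, truths):
            h, f = frame_score(detector(frame, t), truth)
            hit += h
            fa += f
        hit /= len(frames)
        fa /= len(frames)
        if fa == 0.0 and hit > best_hit:
            best_t, best_hit = t, hit
    return best_t

# Toy detector: simple grayscale thresholding on a synthetic frame.
detector = lambda frame, t: frame > t
truth = np.zeros((10, 10), bool)
truth[5:, :] = True
frame = np.where(truth, 200, 30).astype(np.uint8)
print(best_threshold([frame], [truth], detector))
```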


Chapter 5

Path Optimisation
5.1 Introduction

Path optimisation is an important consideration in real-life scenarios where energy consumption should be reduced significantly. There may be more than one path from a source to the destination, and the path should be planned in such a way that the total cost is minimum. The idea of routing protocols can be put to use here: there are many routing algorithms that can be used for shortest-path routing. This chapter deals with the implementation of path optimisation for self-navigation.

5.2 Literature review

The idea of routing algorithms was discussed in detail in the earlier chapter. The important routing algorithms are:
1. Dijkstra’s Algorithm
2. A star Algorithm
3. Greedy Best First Search Algorithm
4. D star Algorithm
The algorithms were studied using real-life examples, and the best of them can be chosen for minimising the path. Initially, the bot has no predefined path memory and no record of the cost along different paths. The first travel of the bot is through a random path; the cost of each path is updated in memory after each travel, and the bot is trained on all possible paths from a source to a destination.
The bot, using the shortest-path routing algorithms, computes the cost for each path. The decision metric is formed from these costs, and the path with minimum cost is saved in memory. The cost function can be any parameter that affects energy efficiency.
Based on the nature of the application, the cost can be taken as distance, time or any other parameter.
In the case of self-navigation, the cost is set as the distance. The robot is equipped with an encoder wheel and a position encoder; the position encoder keeps track of the distance covered by the robot by tracking wheel rotations. The idea of flooding is used here to find the distance covered. The robot is tested on all paths, and the path with the minimum distance metric is taken as the optimum path.
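Converting encoder counts to distance is a one-line computation over the wheel circumference. A sketch using the 106 mm wheel diameter from Section 6.1.2 (the ticks-per-revolution value is an assumption for illustration; the project's encoder resolution is not stated):

```python
import math

WHEEL_DIAMETER_M = 0.106   # 106 mm wheels (see Section 6.1.2)
TICKS_PER_REV = 30         # assumed encoder resolution, for illustration

def distance_covered(ticks):
    """Convert position-encoder ticks to distance travelled in metres."""
    circumference = math.pi * WHEEL_DIAMETER_M
    return ticks * circumference / TICKS_PER_REV

print(round(distance_covered(300), 3))  # 10 wheel revolutions
```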

5.3 Simulation and Results

The various algorithms for the shortest path routing were discussed in previous sections.
Each of these algorithms was implemented in MATLAB. The aspects examined were the cost of the shortest path and the time taken for its calculation. The sample network for each method was created using a random function which generates a different network on each run. The following results were obtained after running these algorithms in MATLAB.

Figure 5.1: Simulation of routing algorithms

Fig. 5.1 shows the simulation results obtained. A map of 100×100 cells was created using a random function, with walls placed with probability 0.5 and a guaranteed successful path from start to stop. This was fed as input to the functions. The code was simulated for 75 sample networks and the average was taken.
Comparing the results, Dijkstra's algorithm showed the minimum cost on all simulations. There was, however, a slight difference in the number of nodes used by the three algorithms: Dijkstra's algorithm used the minimum number of nodes and achieved the minimum cost, while the other two algorithms had slightly higher cost.
Considering execution time, Dijkstra's algorithm had a huge delay compared with the other two. On average, the greedy BFS technique had the least execution delay. The A* algorithm was moderate in both delay and cost: it gave neither the least cost nor the least delay. For this scenario, however, A* performs well, since it combines low cost with comparatively low delay.


Chapter 6

Implementation of the Project
6.1 Hardware Implementation

The core of the bot is a BeagleBoard running Linux, which handles all communication with the host over the Internet through a Wi-Fi module. The USB-powered BeagleBoard is a low-cost, fan-less single-board computer that delivers laptop-like performance and expandability without the bulk, expense, or noise of typical desktop machines. The memory of the system consists of a standard SD card which stores all routing data and images for accurate path routing. The BeagleBoard and its standard uses are depicted in the figure below.

Figure 6.1: Beagleboard and its usage
A standard mapping system can also be installed for navigation instead of routing algorithms; both approaches will be tested. For its independent motion, the bot has an array of sensors. The primary input is a camera module which provides continuous frames. The secondary inputs are Sharp sensors for distance calibration and a GPS module for mapping and navigation. The frames from the camera are processed for obstacle detection, navigation around obstacles, and road following, using the OpenCV image-processing libraries. The bot actuators are basic DC motors of 200/400 rpm with standard track wheels; the motor selection is based on image-processing speed.

6.1.1 Design of the bot

The hardware of the bot was designed using SOLIDWORKS. The bot is 30 cm long and 20 cm wide and is fabricated on sheet metal of the specified length and width. Sharp sensors are mounted on the front edges of the machine. The front wheels are powered by two 300 rpm DC geared motors, while the rear wheels rotate freely on a shaft with the help of ball bearings. The camera is clamped on top of the machine at the front, and the BeagleBoard and associated accessories are mounted on the top. The entire system is powered from a 12 V motorcycle battery.
The design for the robot is given below.

Figure 6.2: CAD design for the robot


6.1.2 Robot Peripherals

The major components used for the navigation of the bot are:
1. BeagleBoard : The BeagleBoard[12] is a low-power open-source-hardware single-board computer. A modified version called the BeagleBoard-xM is used for this purpose. The BeagleBoard-xM measures 82.55 by 82.55 mm and has a faster CPU core (clocked at 1 GHz compared to the 720 MHz of the BeagleBoard), more RAM (512 MB compared to 256 MB), an onboard Ethernet jack, and a 4-port USB hub. The BeagleBoard-xM requires the memory and OS to be stored on a microSD card. The board runs Angstrom, an open-source operating system designed for embedded applications. The BeagleBoard-xM has the following specifications.
(a) Package-on-Package (POP) CPU/memory chip
i. Processor: TI DM3730 - 1 GHz ARM Cortex-A8 core
ii. 4 GB microSD card supplied with the BeagleBoard-xM, loaded with Angstrom
(b) Peripheral connections
i. USB OTG (mini AB)
ii. 4 USB ports
iii. Ethernet port
iv. MicroSD/MMC card slot
v. Stereo in and out jacks
vi. RS-232 port
vii. JTAG connector
viii. Power socket (5 V barrel connector type)
ix. Camera port

The board runs on 5 V power and has all the processing abilities of a single-board computer. The OpenCV library files were installed on the board for the image-processing task. The USB OTG connection can be used for BeagleBoard-to-computer communication.
2. 300 rpm DC motor : The DC motor used is a 300 rpm side-shaft heavy-duty motor. It runs smoothly from 4 V to 12 V and gives 300 rpm at 12 V. The motor has an 8 mm diameter, 17.5 mm long drive shaft with a D-shaped profile for excellent coupling.
3. Battery : The battery selected is a 12 V motorcycle battery. Since the BeagleBoard has higher current requirements, a normal 9 V battery cannot be used, so a 12 V motorcycle battery with a current rating of 2.5 A is used as the power supply.
4. Sharp sensors : These IR sensors are used for obstacle detection and distance calibration. They detect obstacles by transmitting IR rays, and the distance to the object is calibrated from the value returned by the sensor.

5. Camera : The camera module is used as the primary input to the BeagleBoard. It is a web camera which provides continuous frames for image processing.
6. Microcontroller board : The microcontroller board drives the motors and thus the wheels. The microcontroller used is an ATmega32, and the circuit controls the motor rotations. The board contains the microcontroller and the circuit elements necessary to clock the controller and to program different applications.
7. Wheels : The wheel is 106 mm in diameter and 44 mm thick with an 8 mm bore, a jumbo-sized wheel suitable for a low-cost all-terrain robot. It is compatible with almost all motors having an 8 mm diameter shaft.
8. GPS module : To keep track of the coordinates, a GPS module is mounted on the device. The module comes with a POT (Patch On Top) ceramic antenna, which makes it a small and complete solution for enabling GPS navigation on embedded devices and robots. It supports 66 channels and external antenna input for maximum sensitivity. The module comes with standard 2 mm DIP pin headers which provide an easy interface, and works over a TTL serial protocol usable with any microcontroller or PC. A USB cable is included for connection to a PC USB port.
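GPS modules of this kind stream standard NMEA 0183 sentences over the serial line. A hedged sketch of extracting latitude and longitude from a `$GPGGA` sentence (field layout per the NMEA GGA convention; checksum verification is omitted, and the example sentence values are illustrative):

```python
def parse_gpgga(sentence):
    """Extract (lat, lon) in decimal degrees from a $GPGGA NMEA sentence.
    Returns None when the fix-quality field reports no fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None  # no fix
    def dm_to_deg(dm, hemi):
        # NMEA packs degrees and minutes together: ddmm.mmmm / dddmm.mmmm
        val = float(dm)
        deg = int(val // 100)
        minutes = val - 100 * deg
        deg = deg + minutes / 60.0
        return -deg if hemi in ("S", "W") else deg
    lat = dm_to_deg(fields[2], fields[3])
    lon = dm_to_deg(fields[4], fields[5])
    return lat, lon

# Example sentence (illustrative values)
s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gpgga(s))
```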

6.1.3 Working of Hardware

The BeagleBoard has as its primary inputs the camera module and the GPS receiver, and is serially connected to the ATmega board. The microcontroller takes inputs from the Sharp sensors, and the motor driver is driven by the microcontroller on demand from the BeagleBoard or the Sharp sensors. Once the system is powered, the BeagleBoard starts processing the image frames from the camera. The GPS module continuously sends coordinates to the BeagleBoard, and the processor checks them against the destination coordinates.
After processing the road-detection code, the BeagleBoard sends the road direction to the microcontroller serially, and the microcontroller drives the motors according to this command. Meanwhile, the Sharp sensors report the distance to obstacles, and the microcontroller drives the motors to change the path so that the obstacles are avoided. A position encoder is mounted on the wheels to keep track of the distance moved; this is used for finding the distance covered and for taking decisions when obstacles are encountered.
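The serial hand-off from the BeagleBoard to the ATmega32 can be sketched as a small command protocol (the single-byte command values, port name, and baud rate below are assumptions for illustration, not the project's actual protocol):

```python
# Hypothetical single-byte steering commands sent from the BeagleBoard to the
# ATmega32 over UART; the actual byte values used in the project are not
# documented here.
COMMANDS = {"left": b"L", "right": b"R", "forward": b"F", "stop": b"S"}

def encode_direction(direction):
    """Encode a road-direction decision as a one-byte UART command."""
    return COMMANDS[direction]

def decode_direction(byte):
    """Inverse mapping, as the microcontroller firmware would perform it."""
    for name, b in COMMANDS.items():
        if b == byte:
            return name
    return "stop"  # fail safe: unknown commands stop the motors

# With pySerial, the BeagleBoard side would be roughly:
#   import serial
#   with serial.Serial("/dev/ttyS0", 9600, timeout=1) as port:  # assumed values
#       port.write(encode_direction("forward"))
print(decode_direction(encode_direction("left")))
```

Defaulting unknown bytes to "stop" is a deliberate fail-safe: a corrupted serial byte halts the motors rather than steering the bot unpredictably.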
The major difficulty was in powering the bot. As the vehicle has to run in an outdoor environment, the power source has to be mounted on the device, and the power specifications of the components differ: the BeagleBoard requires a continuous 5 V DC supply, the microcontroller board and the motor driver require 12 V, whereas the Sharp sensors need only 5 V DC. The problem was solved by using a 12 V motorcycle battery with a current rating of 2.5 A, which keeps the BeagleBoard's processing speed up; the system lags if too little current is supplied. The 12 V supply can be given directly to the motors and the microcontroller board, but for the BeagleBoard and the Sharp sensors the voltage has to be regulated down to 5 V, so a voltage regulator was used.

6.2 Software Implementation

The implementation of the robot hardware was discussed earlier. The hardware as such will not by itself trigger the navigation of the bot: road detection and path following have to be implemented in software for the efficient running of the vehicle. The image-processing functions are carried out using the OpenCV library files. As the BeagleBoard runs Ubuntu, the entire processing is done in Linux.
The concept of the road-tracking algorithm was discussed in detail earlier. Its implementation was facilitated by the OpenCV image-processing library. OpenCV has a good collection of image-processing tools and is easy to handle for anyone who knows basic coding techniques; its tutorials were valuable guides to the image-processing techniques used. Road detection is performed on the input frames from the camera, which is one of the BeagleBoard's primary inputs.
As the BeagleBoard runs Linux, it was easy to install OpenCV on it. The software takes the camera frames as input images and then processes them. The important points considered while developing the algorithm were:
1. Processing speed : The processing speed should be very high for road tracking. The vehicle is meant for road travel in real-life scenarios, so the frames should be processed as fast as possible, using the best available techniques.
2. Error performance : The tracking should be as accurate as possible. As discussed in Section 4.2, the image may be processed with a number of false alarms and misses. The algorithm should be robust, keeping the probability of false alarms and misses minimal for the detection threshold used.
The above-mentioned points were of huge importance while developing the algorithm. The conventional road-detection technique, which involves edge detection, pixel classification and voting for the road position, was first implemented in OpenCV. Its processing speed was found to be very low. The main drawbacks of this technique are:
1. Low processing speed : Since the technique is conventional and tedious, the process is slow. The pixels are first classified into road and non-road, and then classified again to obtain the road position, which takes more time.


2. High probability of error : As the technique is applied in real-life scenarios, the images taken for processing may not be clear, and there will be colour variations on and off the road. Since this algorithm does not smooth the image before processing, the probability of false alarms and misses may be higher; for example, many non-road features may be reported as road.
So the road-detection algorithm using the Hough transform was implemented; its implementation details and simulation results were discussed in chapter 4. The software for moving the robot was interfaced through microcontroller-based programming: the wheels are driven on commands from the microcontroller, and the BeagleBoard sends road information serially to the microcontroller to activate it. The rest of the operation depends on the hardware stability of the vehicle.


Chapter 7

Conclusion
The whole concept of self-navigation is to develop a bot that follows a road path to reach particular GPS coordinates. Along its path, it works towards two objectives: first, to overcome different types of obstacles on the road; second, to route the minimum path from the source coordinates to the destination coordinates based on routing algorithms.
This project was aimed at the self-navigation of an automated bot. The bot was implemented in hardware using a BeagleBoard and a microcontroller board. Self-navigation was facilitated by various image-processing techniques combined with Sharp IR sensors. The image-processing algorithms were implemented for tracking the road, and the OpenCV library files were used for road detection and obstacle-avoidance testing. The position of the mobile robot is first found using a GPS-based sensor. A Hough-transform-based approach has been used for navigating the bot on the road, and the algorithm was able to detect the road in various experimental setups.


Bibliography
[1] Y. Zhao, "Mobile phone location determination and its impact on intelligent transportation systems", IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 1, March 2000.
[2] J. Tanka, ”Navigation system with map matching method”, Proc. of the SAE Int.
Congress and Exposition, 1990.
[3] Rafael C. Gonzalez, Richard E. Woods, ”Digital Image Processing”, Pearson
Education, 3rd edition, 2008
[4] R. Bhatt, D. Gaw, and A. Meystel, "A real-time guidance system for an autonomous vehicle", Proceedings of the IEEE Int. Conf. on Robotics and Automation, pp. 369 - 373, 1987.
[5] James F. Kurose, Keith W. Ross, ”Computer Networking: A Top-Down Approach
Featuring the Internet”, Pearson Education, pp. 271 - 273, 3rd edition, 2005.
[6] Judea Pearl, ”Heuristics: Intelligent Search Strategies for Computer Problem
Solving”, Addison-Wesley, 1984.
[7] P. Hart, N. Nilsson and B. Raphael, ”A Formal Basis for the Heuristic Determination of Minimum Cost Paths”,IEEE Trans. Syst. Science and Cybernetics, pp. 100 - 107,
1968.
[8] Kazunori Ohno, Takashi Tsubouchit, Bunji Shigematsut, Shoichi Maeyamas and
Shinichi Yuta, H. Mori, ”Outdoor Navigation of a Mobile Robot between Buildings based on DGPS and Odometry Data Fusion”, Proceedings of the 2003 IEEE
International Conference on Robotics and Automation, pp. 1979 - 1983, Taipei,
Taiwan, September 14-19, 2003.
[9] A. Jain, ”Fundamentals of Digital Image Processing”, Prentice-Hall, Chap. 9, 1989.


[10] Petros Maragos and Ronald W. Schafer, ”Applications of morphological filtering to image processing and analysis”, Proceedings of the IEEE International Conference on Acoustics, Speech, Signal Processing, Tokyo, Japan, Apr. 1986, pp. 2067 - 2070.
[11] Zezhong Xu, Yanbin Zhuang, Huahua Chen, ”Obstacle Detection and Road Following using Laser Scanner”, Proceedings of the 6th World Congress on Intelligent Control and Automation, pp. 8630 - 8634, June 21 - 23, 2006.
[12] Gerald Coley, ”BeagleBoard System Reference Manual Rev C4”, BeagleBoard.org,
Revision 0.0, December 15, 2009.
[13] M. Bertozzi, A. Broggi, A. Fascioli, ”Vision-based intelligent vehicles: State of the art and perspectives”, Robot Automat System, vol.32, 2000.



Words: 39811 - Pages: 160

Free Essay

La Singularidad

...NOTE: This PDF document has a handy set of “bookmarks” for it, which are accessible by pressing the Bookmarks tab on the left side of this window. ***************************************************** We are the last. The last generation to be unaugmented. The last generation to be intellectually alone. The last generation to be limited by our bodies. We are the first. The first generation to be augmented. The first generation to be intellectually together. The first generation to be limited only by our imaginations. We stand both before and after, balancing on the razor edge of the Event Horizon of the Singularity. That this sublime juxtapositional tautology has gone unnoticed until now is itself remarkable. We're so exquisitely privileged to be living in this time, to be born right on the precipice of the greatest paradigm shift in human history, the only thing that approaches the importance of that reality is finding like minds that realize the same, and being able to make some connection with them. If these books have influenced you the same way that they have us, we invite your contact at the email addresses listed below. Enjoy, Michael Beight, piman_314@yahoo.com Steven Reddell, cronyx@gmail.com Here are some new links that we’ve found interesting: KurzweilAI.net News articles, essays, and discussion on the latest topics in technology and accelerating intelligence. SingInst.org The Singularity Institute for Artificial Intelligence: think tank devoted to increasing...

Words: 237133 - Pages: 949

Free Essay

Adventures

...The Adventures of Huckleberry Finn By Mark Twain Download free eBooks of classic literature, books and novels at Planet eBook. Subscribe to our free eBooks blog and email newsletter. NOTICE P ERSONS attempting to find a motive in this narra- tive will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot. BY ORDER OF THE AUTHOR, Per G.G., Chief of Ordnance.  The Adventures of Huckleberry Finn EXPLANATORY I N this book a number of dialects are used, to wit: the Missouri negro dialect; the extremest form of the backwoods Southwestern dialect; the ordinary ‘Pike County’ dialect; and four modified varieties of this last. The shadings have not been done in a hap- hazard fashion, or by guesswork; but painstakingly, and with the trustworthy guidance and support of personal familiarity with these several forms of speech. I make this explanation for the reason that without it many readers would suppose that all these characters were trying to talk alike and not succeeding. THE AUTHOR. Free eBooks at Planet eBook.com  The Adventures of Huckleberry Finn Scene: The Mississippi Valley Time: Forty to fifty years ago  The Adventures of Huckleberry Finn Chapter I Y OU don’t know about me without you have read a book by the name of The Adventures of Tom Sawyer; but that ain’t no matter. That book was made by Mr. Mark Twain, and he told...

Words: 115104 - Pages: 461

Premium Essay

Ggggggg

...Retailing in the 21st Century Manfred Krafft ´ Murali K. Mantrala (Editors) Retailing in the 21st Century Current and Future Trends With 79 Figures and 32 Tables 12 Professor Dr. Manfred Krafft University of Muenster Institute of Marketing Am Stadtgraben 13±15 48143 Muenster Germany mkrafft@uni-muenster.de Professor Murali K. Mantrala, PhD University of Missouri ± Columbia College of Business 438 Cornell Hall Columbia, MO 65211 USA mantralam@missouri.edu ISBN-10 3-540-28399-4 Springer Berlin Heidelberg New York ISBN-13 978-3-540-28399-7 Springer Berlin Heidelberg New York Cataloging-in-Publication Data Library of Congress Control Number: 2005932316 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springeronline.com ° Springer Berlin ´ Heidelberg 2006 Printed in Germany The use of general descriptive names, registered names, trademarks, etc. in this publication does not...

Words: 158632 - Pages: 635

Free Essay

Telco Regulation

...Tenth Anniversary Edition Tenth Anniversary Edition TELECOMMUNICATIONS REGULATION HANDBOOK TELECOMMUNICATIONS REGULATION HANDBOOK The Telecommunications Regulation Handbook is essential reading for anyone involved or concerned by the regulation of information and communications markets. In 2010 the Handbook was fully revised and updated to mark its tenth anniversary, in response to the considerable change in technologies and markets over the past 10 years, including the mobile revolution and web 2.0. The Handbook reflects modern developments in the information and communications technology sector and analyzes the regulatory challenges ahead. Designed to be pragmatic, the Handbook provides a clear analysis of the issues and identifies the best regulatory implementation strategies based on global experience. February 2011 – SKU 32489 Edited by Colin Blackman and Lara Srivastava Tenth Anniversary Edition TELECOMMUNICATIONS REGULATION HANDBOOK Edited by Colin Blackman and Lara Srivastava Telecommunications Regulation Handbook Tenth Anniversary Edition Edited by Colin Blackman and Lara Srivastava ©2011 The International Bank for Reconstruction and Development / The World Bank, InfoDev, and The International Telecommunication Union All rights reserved 1 2 3 4 14 13 12 11 This volume is a product of the staff of the International Bank for Reconstruction and Development / The World Bank, InfoDev, and The International Telecommunication...

Words: 132084 - Pages: 529