Algorithmics

BFS, DFS, Kruskal, Prim’s, Adjacency matrix, Adjacency List
Table of Contents
Analysis of the Problem
Graph Searching
BFS
DFS
Comparison of Algorithms
Features of BFS and DFS Algorithms
Minimum Spanning Tree
Prim's Algorithm
Kruskal's Algorithm
Features of Prim's and Kruskal's Algorithms
Application
Shortest Path Problem
Shortest Path Algorithms
Adjacency Matrix
Adjacency List
Unweighted and Undirected Breadth First Search (BFS)
Pseudo Code for Breadth First Search (BFS)
Analysis Complexity of BFS
Depth First Search (DFS)
Algorithm for DFS
Analysis Complexity of DFS
DIJKSTRA'S SINGLE SOURCE SHORTEST PATH
Algorithm for Dijkstra
Analysis
How Dijkstra's Efficiency could be improved?
Kruskal's Algorithm
Algorithm for Kruskal's Algorithm
Analysis Complexity of Kruskal's Algorithm
Prim's Algorithm
Pseudo Code for Prim's Algorithm
Analysis
Comparison of Time complexities with their analysis
Adjacency List and Adjacency Matrix
Description and Justification of chosen class
Definition of classes
Assumptions
Assumption of BFS
Assumption of DFS
Assumption of Prim's
Assumption of Kruskal's
References and Citations
Books
Websites

Analysis of the Problem
Various data structures are used to represent graphs in computer memory, such as the adjacency list, incidence list, adjacency matrix and incidence matrix. Different algorithms, such as Prim's algorithm, Dijkstra's algorithm and Kruskal's algorithm, are applied to these graphs to solve real-world problems. Graphs can be of different types, for example weighted or unweighted, directed or undirected.
Graph Searching
Graph traversal refers to the problem of visiting the nodes of a graph in a particular order in order to find a path or a particular node. A traversal continues until the last node has been visited, whereas a search stops as soon as the target node is found. Various algorithms exist for graph searching and path finding, and they are used in numerous real-world applications, for example in road and rail construction. Some of them are as follows:
BFS: Breadth-first search (BFS) is a general technique for traversing a graph. A BFS traversal of a graph G visits all the vertices and edges of G, determines whether G is connected, computes the connected components of G and computes a spanning forest of G.
BFS on a graph with n vertices and m edges takes O(n + m) time. BFS can be extended to solve other graph problems, for example finding and reporting a path with the minimum number of edges between two given vertices, or finding a simple cycle if one exists.
DFS: Depth-first search (DFS) is likewise a general technique for traversing a graph. A DFS traversal of a graph G visits all the vertices and edges of G, determines whether G is connected, computes the connected components of G and computes a spanning forest of G.
DFS on a graph with n vertices and m edges takes O(n + m) time. DFS can be extended to solve other graph problems, for example finding and reporting a path between two given vertices, or finding a cycle in the graph.
Comparison of Algorithms
Features of BFS and DFS Algorithms
Application                                            DFS    BFS
Spanning forest, connected components, paths, cycles   yes    yes
Shortest paths                                         -      yes
Biconnected components                                 yes    -
Cycle finding                                          yes    -
Non-tree edges are back edges                          yes    -
Non-tree edges are cross edges                         -      yes
Data structure used                                    stack  queue

Minimum Spanning Tree
A minimum spanning tree is a least-cost subset of the edges of a graph that connects all the nodes. One way to build it is to start by picking any node and adding it to the tree; then repeatedly pick the least-cost edge from a node in the tree to a node not in the tree, and add that edge and the new node to the tree; stop when all nodes have been added to the tree.
Prim's Algorithm: The algorithm was discovered by Vojtěch Jarník and later rediscovered by Robert Prim. It finds a minimum spanning tree of a connected weighted graph and proceeds as follows: all vertices are marked as not visited; any vertex v is chosen as the starting vertex and is marked as visited (defining a cluster C); the smallest-weight edge e = (v, u) that connects a vertex v inside the cluster C with a vertex u outside C is chosen and added to the MST, and u is added to C; the process is repeated until a spanning tree is formed.

Kruskal's Algorithm: The algorithm was devised by Joseph Kruskal. It also finds a minimum spanning tree of a connected weighted graph and proceeds as follows: it is an edge-based algorithm that adds the edges one at a time, in increasing order of weight. The algorithm maintains A, a forest of trees, and an edge is accepted if it connects vertices of distinct trees. We therefore need a data structure that maintains a partition, i.e. a collection S of disjoint sets, with the operations MakeSet(S, x): S ← S ∪ {{x}}; Union(Si, Sj): S ← S − {Si, Sj} ∪ {Si ∪ Sj}; and FindSet(S, x): returns the unique Si ∈ S such that x ∈ Si. At each step the algorithm adds the cheapest edge that connects two different trees of the forest.
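The three disjoint-set operations described above can be made concrete. The following is a minimal illustrative sketch in Python (the class and method names are my own, not from the text), using path compression in the find operation:

# Minimal disjoint-set (union-find) sketch supporting the three operations
# described above: MakeSet, FindSet and Union.
class DisjointSet:
    def __init__(self):
        self.parent = {}

    def make_set(self, x):
        # MakeSet(S, x): add the singleton set {x} to the collection.
        self.parent[x] = x

    def find_set(self, x):
        # FindSet(S, x): return the representative of the set containing x,
        # compressing the path along the way.
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        # Union(Si, Sj): merge the sets containing x and y.
        self.parent[self.find_set(x)] = self.find_set(y)

Kruskal's algorithm accepts an edge exactly when find_set applied to its two endpoints returns different representatives.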

Features of Prim's and Kruskal's Algorithms
PRIM'S ALGORITHM: Greedy algorithm. Starts from a single chosen vertex. Adds one vertex at a time, always via the minimum-weight edge leaving the tree, so the tree built is always connected and acyclic. The result is the spanning tree with lowest total weight.
KRUSKAL'S ALGORITHM: Greedy algorithm. Starts with a forest in which every vertex is its own tree. Adds one edge at a time in increasing order of weight, skipping edges that would create a cycle. The result is the spanning tree with lowest total weight.

Application
Prim's and Kruskal's algorithms are used where we have to find a connection of minimum total cost or distance in a weighted graph. They are important in building rail and road networks, electronic circuits, computer networks, and so on.
Shortest Path Problem
In graph theory, the shortest path problem is the problem of finding a path of minimum total weight between two nodes. There are many different shortest path algorithms; they are used in different situations and their efficiencies also differ.
Shortest Path Algorithms
A shortest path tree in G from a start node s is a tree (directed outward from s if G is a directed graph) such that the shortest path in G from s to any destination vertex t is the path from s to t in the tree. Dijkstra's algorithm, developed by Edsger Dijkstra, finds such shortest paths by always taking the shortest edge connecting a known node to an unknown node.

Adjacency Matrix:-

A B C D E F G H I J
A 0 1 0 0 1 0 0 1 0 0
B 0 0 1 0 0 0 0 0 0 0
C 0 0 0 1 0 1 0 0 0 0
D 0 0 0 0 0 1 0 0 0 0
E 0 0 1 0 0 0 0 1 1 0
F 0 0 0 0 0 0 1 0 0 1
G 0 0 0 1 0 0 0 0 0 1
H 0 0 0 0 0 0 0 0 0 0
I 0 0 1 0 0 1 0 1 0 1
J 0 0 0 0 0 0 0 0 0 0
Analysis
Total Number of vertices = 10
Vertices = (a)(b)(c)(d)(e)(f)(g)(h)(i)(j)
Matrix = 10*10
It is used for denser graphs.
In this data structure, a |V| × |V| matrix A is established such that a_ij = 1 when vertex i is adjacent to vertex j, and a_ij = 0 when vertex i is not adjacent to vertex j.
The storage required to implement an adjacency matrix is proportional to |V|². When G is represented as an adjacency matrix, the best time complexity you can expect for graph algorithms is O(|V|²), since this is the time required to access each element of the matrix exactly once.
Appropriate when |E| ≈ |V|² (dense graphs).
Faster to test whether an edge (x, y) exists; edge insertion or deletion is O(1).
Recommended for small or dense graphs, where the |V|² memory cost is acceptable.
Adjacency List:-
Analysis
Adjacency lists are linked lists, one for each node, containing the names of the nodes to which that node is connected. The heads of these lists are held in an array of pointers.
It is used for sparse graphs. The construction algorithm inserts each edge in E(G) into the adjacency list twice: once in the stated direction and once in the reverse direction. If G is a directed graph, the reverse insertion is omitted. The construction has a time complexity of O(|V| + |E|) if all basic operations take O(1) time.
Appropriate when |E| ≈ |V| or |E| ≈ |V| log |V| (sparse graphs).
Lists are convenient when we need to find the degree of a vertex and when we traverse the graph.
Recommended for large sparse graphs, where less memory is required.
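As an illustration of the two representations discussed above, the following Python sketch builds both an adjacency matrix and an adjacency list from a small made-up edge list (the graph here is only an example, not the one in the figures):

# Build an adjacency matrix and an adjacency list for a directed graph
# with vertices 0..n-1; the edge list is illustrative only.
edges = [(0, 1), (0, 4), (1, 2), (2, 3), (2, 5), (4, 2)]
n = 6

# Adjacency matrix: O(V^2) space, O(1) edge lookup.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = 1

# Adjacency list: O(V + E) space, suited to sparse graphs.
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)

print(matrix[0][1])   # 1, because the edge (0, 1) exists
print(adj[2])         # [3, 5], the neighbours of vertex 2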

Unweighted and Undirected Breadth First Search (BFS)
Step 1: All nodes in the graph are initialized:

u.color = white, u.dist = ∞, u.pred = NULL

A is chosen as the source vertex, so its distance is initialized to 0 and its parent to NULL.

SOURCE=A A.COLOR=GREY A.DISTANCE=0 A.PREDECESSOR=NIL
Now we visit A and explore all its white neighbour vertices, which are B, E and H, and push them into the queue.

Queue: A B E H

Step 2: Now we change the colour, distance and parent of each node we discovered and POP A from the queue.

Queue: B E H

A.COLOR=BLACK, A.DISTANCE=0, A.PREDECESSOR=NIL
B.COLOR=GREY, B.DISTANCE=1, B.PREDECESSOR=A
E.COLOR=GREY, E.DISTANCE=1, E.PREDECESSOR=A
H.COLOR=GREY, H.DISTANCE=1, H.PREDECESSOR=A

The next element is B, so B is popped from the queue and we explore its white neighbours, which is only C, so we push C into the queue.
Queue: E H C

Step 3:

Queue: E H C

Now that we have explored B, its colour is changed to black and we can pop the next element from the queue, which is E. After this we find the white neighbours of E and push them into the queue; here this is only I.
Queue: H C I

B.COLOR=BLACK, B.DISTANCE=1, B.PREDECESSOR=A
C.COLOR=GREY, C.DISTANCE=2, C.PREDECESSOR=B
E.COLOR=GREY, E.DISTANCE=1, E.PREDECESSOR=A
H.COLOR=GREY, H.DISTANCE=1, H.PREDECESSOR=A

Step 4:

Queue: H C I

E.COLOR=BLACK, E.DISTANCE=1, E.PREDECESSOR=A
I.COLOR=GREY, I.DISTANCE=2, I.PREDECESSOR=E
C.COLOR=GREY, C.DISTANCE=2, C.PREDECESSOR=B
H.COLOR=GREY, H.DISTANCE=1, H.PREDECESSOR=A

Now that E has been explored, its colour changes to black and the next element popped from the queue is H. We then look for white neighbours of H to push into the queue, but H has none, so nothing is pushed.
Queue: C I
Step 5:

Queue: C I

H.COLOR=BLACK, H.DISTANCE=1, H.PREDECESSOR=A
I.COLOR=GREY, I.DISTANCE=2, I.PREDECESSOR=E
C.COLOR=GREY, C.DISTANCE=2, C.PREDECESSOR=B

H is explored, so its colour is changed to black. Now the next element popped from the queue is C. We find all white neighbours of C, which are D and F, and push them into the queue.

Queue: I D F

Step 6:

C.COLOR=BLACK, C.DISTANCE=2, C.PREDECESSOR=B
D.COLOR=GREY, D.DISTANCE=3, D.PREDECESSOR=C
F.COLOR=GREY, F.DISTANCE=3, F.PREDECESSOR=C
I.COLOR=GREY, I.DISTANCE=2, I.PREDECESSOR=E

C is explored, so its colour is changed to black. Now the next element popped from the queue is I. We find all white neighbours of I, which is only J, and push it into the queue.

Queue: D F J

Change the colour, pred and dist of the new element.
Step 7:

Queue: D F J

I.COLOR=BLACK, I.DISTANCE=2, I.PREDECESSOR=E
J.COLOR=GREY, J.DISTANCE=3, J.PREDECESSOR=I
D.COLOR=GREY, D.DISTANCE=3, D.PREDECESSOR=C
F.COLOR=GREY, F.DISTANCE=3, F.PREDECESSOR=C

I is explored, so its colour is changed to black. Now the next element popped from the queue is D. We find all white neighbours of D, which is only G, and push it into the queue.
Queue: F J G

Change the colour, pred and dist of the new element.

Step 8:

Queue: F J G

D.COLOR=BLACK, D.DISTANCE=3, D.PREDECESSOR=C
J.COLOR=GREY, J.DISTANCE=3, J.PREDECESSOR=I
F.COLOR=GREY, F.DISTANCE=3, F.PREDECESSOR=C
G.COLOR=GREY, G.DISTANCE=4, G.PREDECESSOR=D

D is explored, so its colour is changed to black. Now the next element popped from the queue is F. We look for white neighbours of F to push into the queue, but F has none, so nothing is pushed.
Queue: J G
Step 9:

Queue: J G

F is explored, so its colour is changed to black. Now the next element popped from the queue is J. We look for white neighbours of J to push into the queue, but J has none, so nothing is pushed.

Step 10:

Queue: G

F.COLOR=BLACK, F.DISTANCE=3, F.PREDECESSOR=C
J.COLOR=GREY, J.DISTANCE=3, J.PREDECESSOR=I
G.COLOR=GREY, G.DISTANCE=4, G.PREDECESSOR=D

J is explored, so its colour is changed to black. Now the next element popped from the queue is G. We look for white neighbours of G to push into the queue, but G has none, so nothing is pushed.

Step 11:

G is explored, so its colour is changed to black. We try to pop the next element from the queue, but the queue is now empty, which means every vertex has been explored.

Pseudo Code for Breadth First Search (BFS)
BFS(V, E, s)
1. for each u in V − {s} ▷ for each vertex u in V[G] except s.
2. do color[u] ← WHITE
3. d[u] ← infinity
4. π[u] ← NIL
5. color[s] ← GRAY ▷ Source vertex discovered
6. d[s] ← 0 ▷ initialize
7. π[s] ← NIL ▷ initialize
8. Q ← {} ▷ Clear queue Q
9. ENQUEUE(Q, s)
10. while Q is non-empty
11. do u ← DEQUEUE(Q) ▷ remove the vertex at the head of Q
12. for each v adjacent to u ▷ scan the adjacency list of u
13. do if color[v] = WHITE ▷ if the colour is white, v has not been seen before
14. then color[v] ← GRAY
15. d[v] ← d[u] + 1
16. π[v] ← u
17. ENQUEUE(Q, v)
18. color[u] ← BLACK ▷ u is now fully explored

Analysis Complexity of BFS
We analyse the running time on an input graph G = (V, E). After initialization, no vertex is ever whitened again, which ensures that each vertex is enqueued at most once and hence dequeued at most once. The enqueue and dequeue operations take O(1) time, so the total time devoted to queue operations in the while-loop is O(V). Because the adjacency list of each vertex is scanned only when the vertex is dequeued, each adjacency list is scanned at most once. Since the sum of the lengths of all the adjacency lists is Θ(E), the total time spent in the for-loop inside the while-loop is O(E). The overhead of initialization is O(V), and thus the total running time of BFS is O(V + E).
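The pseudocode above can be turned into a short runnable sketch. This is an illustrative Python version (the adjacency-list format and the names are assumptions, not part of the original text); a vertex is effectively "white" until it appears in the dist map:

from collections import deque

# BFS sketch: returns distance and predecessor maps for every vertex
# reachable from the source s; adj maps each vertex to its neighbours.
def bfs(adj, s):
    dist = {s: 0}
    pred = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()              # DEQUEUE
        for v in adj[u]:             # scan the adjacency list of u
            if v not in dist:        # a "white" (undiscovered) vertex
                dist[v] = dist[u] + 1
                pred[v] = u
                q.append(v)          # ENQUEUE
    return dist, pred

# Example on a fragment of the graph traced above:
adj = {'A': ['B', 'E', 'H'], 'B': ['A', 'C'], 'C': ['B'],
       'E': ['A'], 'H': ['A']}
print(bfs(adj, 'A')[0])   # {'A': 0, 'B': 1, 'E': 1, 'H': 1, 'C': 2}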

Depth First Search (DFS)

Step 1: First we initialize the colour of all the nodes to white. The source vertex is A, which is now turned grey and marked with the number 1.

Step 2: After visiting A we start exploring the white nodes adjacent to A, i.e. B. It is marked as 2 and its parent node is A. Now that we have visited B it is marked grey and we start discovering the node adjacent to B, which is C.

Step 3: Now we visit C, so it is marked grey and given the number 3. After visiting this node we move to the white nodes adjacent to C, which are D and F.

Step 4: Now we visit node D, mark it as 4 and shade it grey.

Step 5: Now we visit the white node adjacent to D; there is only one, namely F. So we visit node F, mark it 5 and shade it grey.

Step 6: In the same way we keep moving to the next adjacent white node until we hit the last one and mark it; here G is marked 6 and then J is marked 7.

Step 7: Once we hit the last white adjacent node (J), we start to go back, giving each revisited node a second number (its finishing time) and shading it black; this is also known as backtracking. In this way J is finished (7/8), then G (6/9), F (5/10), D (4/11), C (3/12) and B (2/13).
Step 8: Once we reach back to the source node we start again from another adjacent white node that has not yet been marked grey, and keep going until we hit the last white adjacent node. Here E is discovered (14), then I (15) and H (16).

Step 9: Backtracking starts again from node H: H is finished (16/17), then I (15/18) and E (14/19).

Step 10: Final tree. Finally the source A is finished (1/20). The order in which the vertices were discovered is {A, B, C, D, F, G, J, E, I, H}.

Algorithm for DFS

DFS (V, E)
1. for each vertex u in V[G]
2. do color[u] ← WHITE
3. π[u] ← NIL
4. time ← 0
5. for each vertex u in V[G]
6. do if color[u] = WHITE
7. then DFS-Visit(u) ▷ build a new DFS-tree from u

DFS-Visit(u)

1. color[u] ← GRAY ▷ discover u
2. time ← time + 1
3. d[u] ← time
4. for each vertex v adjacent to u ▷ explore (u, v)
5. do if color[v] = WHITE
6. then π[v] ← u
7. DFS-Visit(v)
8. color[u] ← BLACK
9. time ← time + 1
10. f[u] ← time ▷ we are done with u
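A compact runnable sketch of the pseudocode above, in Python (the adjacency-list format and the names are assumptions of this sketch); it records the same discovery and finishing times that appear in the worked example:

# DFS sketch: returns discovery times d, finishing times f and predecessors
# for a graph given as an adjacency list mapping each vertex to its neighbours.
def dfs(adj):
    color = {u: 'WHITE' for u in adj}
    pred = {u: None for u in adj}
    d, f = {}, {}
    time = [0]                        # mutable counter shared by all visits

    def visit(u):
        color[u] = 'GRAY'             # discover u
        time[0] += 1
        d[u] = time[0]
        for v in adj[u]:              # explore edge (u, v)
            if color[v] == 'WHITE':
                pred[v] = u
                visit(v)
        color[u] = 'BLACK'            # finished with u
        time[0] += 1
        f[u] = time[0]

    for u in adj:
        if color[u] == 'WHITE':
            visit(u)                  # build a new DFS tree from u
    return d, f, pred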
Analysis Complexity of DFS
As soon as the target node is found, the algorithm terminates and reports that the destination node has been found. The target could be reached through many paths; which one is taken depends on which adjacent vertex is selected next at each step.
Lines 1-3 take Θ(V) time, because the loop depends only on V, and the second loop also takes Θ(V) time plus the time for the calls to DFS-VISIT. Across all calls to DFS-VISIT, the total number of edges scanned from the adjacency lists is Θ(E), so the total time spent in the adjacency lists is Θ(E). In terms of big-oh notation, the total running time of DFS is therefore O(V + E).

DIJKSTRA'S SINGLE SOURCE SHORTEST PATH

SOURCE=A
DISTANCE=0
PREDECESSOR=NIL
PRIORITY QUEUE=Q
SET S = Ø

In each queue snapshot below, the first row gives the tentative distance, the second row the vertex, and the third row its predecessor (N denotes NIL).

Dist:   0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
Vertex: A B C D E F G H I J
Pred:   N N N N N N N N N N

(1) A is the minimum in the queue, so put it in the set S and update the distances and predecessors of A's neighbours from the source. (Q) =

Dist:   3 ∞ ∞ 1 ∞ ∞ 5 ∞ ∞
Vertex: B C D E F G H I J
Pred:   A N N A N N A N N
S = {A}

Sorted by distance, (Q) =

Dist:   1 3 5 ∞ ∞ ∞ ∞ ∞ ∞
Vertex: E B H C D F G I J
Pred:   A A A N N N N N N

Extract E from queue(Q)
Priority Queue (Q) =
2 3 3 4 ∞ ∞ ∞ ∞
C B H I D F G J
E A E E N N N N
S = {A, E}

Extract C from queue(Q)
Priority Queue (Q) =

3 3 4 6 9 ∞ ∞
B H I F D G J
A E E C C N N
S = {A, E, C}

Extract B from queue(Q)
Priority Queue (Q) =
3 4 6 9 ∞ ∞
H I F D G J
E E C C N N
S = {A, E, C, B}

Extract H from queue(Q)
Priority Queue (Q) =

4 6 9 ∞ ∞
I F D G J
E C C N N
S = {A, E, C, B, H}

Extract I from queue(Q)
Priority Queue (Q) =
6 7 9 ∞
F J D G
C I C N
S = {A, E, C, B, H, I}

Extract F from queue(Q)
Priority Queue (Q) =
7 9 10
J D G
I C F
S = {A, E, C, B, H, I, F}

Extract J from queue(Q)
Priority Queue (Q) =
9 10
D G
C F
S = {A, E, C, B, H, I, F, J}

Extract D from queue(Q)
Priority Queue (Q) =
10
G
F
S = {A, E, C, B, H, I, F, J, D}

Extract G from queue(Q)
Priority Queue (Q) =
Ø
S = {A, E, C, B, H, I, F, J, D, G}

Algorithm for Dijkstra

DIJKSTRA(G, w, s)
1. INITIALIZE-SINGLE-SOURCE(G, s)
2. S ← Ø ▷ S will ultimately contain the vertices with final shortest-path weights from s
3. Q ← V[G] ▷ initialize the priority queue
4. while Q is not empty
5. do u ← EXTRACT-MIN(Q) ▷ pull out a new vertex
6. S ← S ∪ {u}
7. for each vertex v in Adj[u] ▷ perform relaxation
8. do RELAX(u, v, w) ▷ needs to call DECREASE-KEY
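The pseudocode can be sketched in Python with the heapq module as the priority queue. heapq has no DECREASE-KEY, so this sketch (an assumption of the example, not part of the original description) simply pushes a new entry whenever a distance improves and skips stale entries when they are popped:

import heapq

# Dijkstra sketch: adj maps each vertex u to a list of (v, weight) pairs.
def dijkstra(adj, s):
    dist = {u: float('inf') for u in adj}
    pred = {u: None for u in adj}
    dist[s] = 0
    pq = [(0, s)]
    done = set()                      # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)     # EXTRACT-MIN
        if u in done:
            continue                  # stale queue entry, ignore it
        done.add(u)
        for v, w in adj[u]:
            if du + w < dist[v]:      # RELAX(u, v, w)
                dist[v] = du + w
                pred[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, pred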
Analysis
A record is maintained of the vertices in increasing order of their distance from the source vertex. The shortest path tree is constructed edge by edge: at each step we add one new edge, corresponding to the construction of the shortest path to a new vertex.
Each edge in the adjacency list Adj[u] is examined exactly once during the course of the algorithm. Since the total number of edges in all the adjacency lists is |E|, there are a total of |E| iterations of this for-loop, each of which may trigger a DECREASE-KEY operation; the other operations inside the for-loop take O(1) time.
Time = |V| · T_EXTRACT-MIN + |E| · T_DECREASE-KEY
Time = |V| · O(V) + |E| · O(1) = O(V² + E)
The above running time results from assuming that Q is implemented as an array; the time depends on the Q implementation. The efficiency of Dijkstra's algorithm can be increased by using a binary heap or a Fibonacci heap, as discussed in the next section.

How Dijkstra's Efficiency could be improved?
Implementation   T_Extract-Min   T_Decrease-Key     Total
Array            O(V)            O(1)               O(V²)
Binary heap      O(lg V)         O(lg V)            O((V + E) lg V)
Fibonacci heap   O(lg V)         O(1) (amortized)   O(V lg V + E)

The efficiency can be improved by storing the graph as adjacency lists and using a binary heap, pairing heap, or Fibonacci heap as the priority queue used to find the next minimum-distance vertex. With a binary heap, each EXTRACT-MIN operation takes O(lg V) time and there are |V| such operations. The binary heap can be built in O(V) time. Each DECREASE-KEY operation (in RELAX) takes O(lg V) time and there are at most |E| such operations. With a Fibonacci heap, the amortized cost of each of the |V| EXTRACT-MIN operations is O(lg V).
The DECREASE-KEY operation in the subroutine RELAX then takes only O(1) amortized time for each of the |E| edges. Hence, the running time of the algorithm with a binary heap, provided the given graph is sparse, is O((V + E) lg V), which becomes O(E lg V) if all vertices in the graph are reachable from the source vertex.

Kruskal's Algorithm

Step 1: Put every vertex of the graph into its own set.

Vertices(V)={0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11}

Step 2: Now put all the edges of the graph into a set in ascending order of weight. Here each edge is written as (u,v):w, where w is its weight; in the edge lists of the later steps the weights appear on the line beneath the edges, in the same order.

E={ (3,4):1 , (2,3):2 , (4,5):2 , (0,1):3 , (1,4):3 , (6,7):3 , (8,9):3 , (0,3):4 , (2,6):4 , (4,8):4 , (0,2):5 , (3,7):5 , (5,9):5 , (8,11):5 , (1,5):6 , (7,8):6 , (6,10):6 , (7,10):7 , (10,11):8 , (9,11):9 }

Step 3:
The edge (3, 4) is taken. Both endpoints belong to separate sets, so union is applied as follows:

Vertices(V)={0},{1},{2},{3,4}{5},{6},{7},{8},{9},{10},{11}

Remove the edge (3, 4) from the set…

E={ (2,3) , (4,5) , (0,1) , (1,4) , (6,7) , (8,9) , (0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 2 2 3 3 3 3 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Include Edge into A (tree)

1

Step 4: Now take the edge (2,3) from the set. Since its endpoints are in different sets, union is applied:

Vertices(V)={0},{1},{2,3,4}{5},{6},{7},{8},{9},{10},{11}

Remove the edge (2, 3) from the set…

E={ (4,5) , (0,1) , (1,4) , (6,7) , (8,9) , (0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 2 3 3 3 3 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Now Add the edge in A..

2 1

Step 5. Now take edge(4,5). Both are from different sets and so union applied as :-

Vertices(V)={0},{1},{2,3,4,5},{6},{7},{8},{9},{10},{11}

Remove the edge (4, 5) from the set…

E={ (0,1) , (1,4) , (6,7) , (8,9) , (0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 3 3 3 3 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Now Add the edge in A..

2 1 2

Step 6: The next edge is (0,1); its endpoints belong to different sets, so union is applied:

Vertices(V)={0,1},{2,3,4,5},{6},{7},{8},{9},{10},{11}

Remove the edge (0, 1) from the set…

E={ (1,4) , (6,7) , (8,9) , (0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 3 3 3 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Now Add the edge in A..

3

2 1 2

Step 7: Now edge (1,4) is considered. This edge is found to be in different sets so union is

Vertices(V)={0,1,2,3,4,5},{6},{7},{8},{9},{10},{11}

Remove the edge (1, 4) from the set…

E={ (6,7) , (8,9) , (0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 3 3 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Now Add the edge in A..

3

3 2 1 2

Step 8: Now edge (6,7) is considered. This edge is found to be in different sets so union is

Vertices(V)={0,1,2,3,4,5},{6,7},{8},{9},{10},{11}

Remove the edge (6, 7) from the set…

E={ (8,9) , (0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 3 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Now Add the edge in A. 3

3 2 1 2

3

Step 9: Now edge (8,9) is considered. This edge is found to be in different sets so union is

Vertices(V)={0,1,2,3,4,5},{6,7},{8,9},{10},{11}

Remove the edge (8, 9) from the set…

E={(0,3) , (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , 4 4 4 5 5 5
(8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 6 6 6 7 8 9

Now Add the edge in A.. 3

3 2 1 2

3 3

Step 10: Now edge (0,3) is considered. Its endpoints are found to be in the same set, so it would create a cycle and we do not include it in the graph.

Vertices(V)={0,1,2,3,4,5},{6,7},{8,9},{10},{11}

Remove the edge (0,3) from the set…

E={ (2,6) , (4,8) , (0,2) , (3,7) , (5,9) , (8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 4 4 5 5 5 5 6 6 6 7 8 9

Step 11: Now edge (2,6) is considered. This edge is found to be in different set so the union is

Vertices(V)={0,1,2,3,4,5,6,7},{8,9},{10},{11}

Remove the edge (2,6) from the set…

E={(4,8) , (0,2) , (3,7) , (5,9) , (8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 4 5 5 5 5 6 6 6 7 8 9
Now Add the edge in A.. 3

3 2 1 2

4 3 3

Step 12: Now edge (4,8) is considered. This edge is found to be in different set so the union is

Vertices(V)={0,1,2,3,4,5,6,7,8,9},{10},{11}

Remove the edge (4,8) from the set…

E={ (0,2) , (3,7) , (5,9) , (8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 5 5 5 6 6 6 7 8 9
Now Add the edge in A.. 3

3 2 1 2

4 3 3

Step 13: Now edge (0,2) is considered. Its endpoints are found to be in the same set, so it would create a cycle and we do not include it in the graph.
Vertices(V)={0,1,2,3,4,5,6,7,8,9},{10},{11}

Remove the edge (0,2) from the set…

E={ (3,7) , (5,9) , (8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 5 5 5 6 6 6 7 8 9

Step 14: Now edge (3,7) is considered. Its endpoints are found to be in the same set, so it would create a cycle and we do not include it in the graph.
Vertices(V)={0,1,2,3,4,5,6,7,8,9},{10},{11}

Remove the edge (3, 7) from the set…

E={ (5,9) , (8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)}
5 5 6 6 6 7 8 9

Step 15: Now edge (5,9) is considered. Its endpoints are found to be in the same set, so it would create a cycle and we do not include it in the graph.
Vertices(V)={0,1,2,3,4,5,6,7,8,9},{10},{11}

Remove the edge (5, 9) from the set…

E={ (8,11) , (1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)}
5 6 6 6 7 8 9
Step 16: Now edge (8, 11) is considered. This edge is found to be in different set so the union is

Vertices(V)={0,1,2,3,4,5,6,7,8,9,11},{10}

Remove the edge (8, 11) from the set…

E={(1,5) , (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 6 6 6 7 8 9
Now Add the edge in A.. 3

3 2 1 2

4 4 3 3

5

Step 17: Now edge (1,5) is considered. Its endpoints are found to be in the same set, so it would create a cycle and we do not include it in the graph.
Vertices(V)={0,1,2,3,4,5,6,7,8,9,11},{10}

Remove the edge (1,5) from the set…

E={ ( (7,8) , (6,10) , (7,10) , (10,11) , (9,11)} 6 6 7 8 9

Step 18: Now edge (7,8) is considered. Its endpoints are found to be in the same set, so it would create a cycle and we do not include it in the graph.
Vertices(V)={0,1,2,3,4,5,6,7,8,9,11},{10}

Remove the edge (7, 8) from the set…

E={ ( (6,10) , (7,10) , (10,11) , (9,11)} 6 7 8 9

Step 19: Now edge (6, 10) is considered. This edge is found to be in different set so the union is

Vertices(V)={0,1,2,3,4,5,6,7,8,9,10,11}

Remove the edge (6, 10) from the set…

E= { (7,10) , (10,11) , (9,11)} 7 8 9
Now Add the edge in A.. 3

3 2 1 2

4 4 3 3

6 5

Step 20: After considering edges (7,10), (10,11) and (9,11), we find the endpoints of each edge in the same set; each would create a cycle, so none of them is included in the graph.
Vertices(V)={0,1,2,3,4,5,6,7,8,9,10,11}

Remove the edge (7,10),(10,11) and (9,11) from the set…

The final tree contains the edges (3,4), (2,3), (4,5), (0,1), (1,4), (6,7), (8,9), (2,6), (4,8), (8,11) and (6,10), with total weight 1+2+2+3+3+3+3+4+4+5+6 = 36.

Algorithm for Kruskal's Algorithm
MST-KRUSKAL(G, w) /* G = (V, E) */
1. A ← Ø
2. for each vertex v in V
3. do MAKE-SET(v)
4. sort the edges of E into nondecreasing order by weight w
5. for each edge (u, v) in E, taken in nondecreasing order by weight
6. do if FIND-SET(u) ≠ FIND-SET(v)
7. then A ← A ∪ {(u, v)}
8. UNION(u, v)
9. return A
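An illustrative runnable version of MST-KRUSKAL in Python, with the disjoint-set forest kept in a plain parent dictionary (the names and input format are assumptions of this sketch):

# Kruskal sketch: vertices is an iterable of vertex labels and edges is a
# list of (weight, u, v) triples; returns the edges of a minimum spanning tree.
def kruskal(vertices, edges):
    parent = {v: v for v in vertices}     # MAKE-SET for every vertex

    def find(x):                          # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):         # nondecreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # endpoints lie in different trees
            parent[ru] = rv               # UNION
            mst.append((u, v, w))
    return mst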

Analysis Complexity of Kruskal’s Algorithm
The running time of Kruskal's algorithm for a graph G = (V, E) depends on the implementation of the disjoint-set data structure. Initializing the set A takes O(1) time, and the time to sort the edges is O(E log E). (We account for the cost of the |V| MAKE-SET operations in the for-loop of lines 2-3 in a moment.) The main for-loop performs O(E) FIND-SET and UNION operations on the disjoint-set forest. Along with the |V| MAKE-SET operations, these take a total of O((V + E) α(V)) time, where α is a very slowly growing function. Because G is assumed to be connected, we have |E| ≥ |V| − 1, so the disjoint-set operations are dominated by the sort, and the running time of Kruskal's algorithm is O(E log E). Observing that |E| < |V|², we have log |E| = O(log V), and so we can restate the running time of Kruskal's algorithm as O(E log V).

Prim's Algorithm

MST-PRIM(G, w, r)
1. for each u ∈ V[G]
2. do key[u] ← ∞
3. π[u] ← NIL
4. key[r] ← 0
5. Q ← V[G]
6. while Q ≠ Ø
7. do u ← EXTRACT-MIN(Q)
8. for each v ∈ Adj[u]
9. do if v ∈ Q and w(u, v) < key[v]
10. then π[v] ← u
11. key[v] ← w(u, v)

PRIORITY QUEUE (Q) = (in each snapshot below, the first row lists the vertices, the second row their keys, and the third row their predecessors; N denotes NIL)
Vertex: 0 1 2 3 4 5 6 7 8 9 10 11
Key:    0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
Pred:   N N N N N N N N N N N N

0 is minimum so it is extracted Therefore, Queue(Q) is:
Q 1 2 3 4 5 6 7 8 9 10 11 3 5 4 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ 0 0 0 N N N N N N N N

1 is minimum so it is extracted Therefore, Queue(Q) is:
Q 2 3 4 5 6 7 8 9 10 11 5 4 3 6 ∞ ∞ ∞ ∞ ∞ ∞ 0 0 N N N N N N N N

4 is minimum so it is extracted Therefore, Queue(Q) is:
Q 2 3 5 6 7 8 9 10 11 5 1 2 ∞ ∞ ∞ ∞ ∞ ∞ 0 0 N N N N N N N

3 is minimum so it is extracted Therefore, Queue(Q) is:
Q 2 5 6 7 8 9 10 11 2 2 ∞ 5 4 ∞ ∞ ∞ 0 N N N N N N N

2 is minimum so it is extracted Therefore, Queue(Q) is:
Q 5 6 7 8 9 10 11 2 4 5 4 ∞ ∞ ∞ 4 2 3 4 N N N

5 is minimum so it is extracted. Therefore, Queue(Q) is:
Q 6 7 8 9 10 11 4 5 4 5 ∞ ∞ 2 3 4 5 N N

6 is minimum so it is extracted. Therefore, Queue(Q) is:
Q 7 8 9 10 11 3 4 5 6 ∞ 6 4 5 6 N

7 is minimum so it is extracted Therefore, Queue(Q) is:
Q 8 9 10 11 4 5 6 ∞ 4 5 6 N

8 is minimum so it is extracted Therefore, Queue(Q) is:
Q 9 10 11 3 6 5 8 6 8

9 is minimum so it is extracted Therefore, Queue(Q) is:
Q 10 11 6 5 6 8

11 is minimum so it is extracted Therefore, Queue(Q) is:
Q 10 6 6

10 is minimum so it is extracted Now, Queue (Q) is
Q=Ø
Therefore the Minimum Spanning Tree (MST) has minimum weight 3+3+2+1+2+4+3+6+4+3+5 = 36.

Pseudo Code for Prim's Algorithm
MST-PRIM(G, r) /* G = (V, E) */
1. for each u in V
2. do u.dist ← ∞
3. u.pred ← NIL
4. r.dist ← 0
5. create a min-priority queue Q for V keyed on dist
6. while Q is not empty
7. do u ← EXTRACT-MIN(Q)
8. for each v adjacent to u
9. do if v is in Q and weight of (u, v) < v.dist
10. then v.pred ← u
11. v.dist ← weight of (u, v)
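An illustrative runnable version of the pseudocode in Python, again using heapq; as in the Dijkstra sketch, stale queue entries are skipped instead of performing a true DECREASE-KEY (this is an assumption of the sketch):

import heapq

# Prim sketch: adj maps each vertex u to a list of (v, weight) pairs;
# r is the root. Returns the predecessor map and the total MST weight.
def prim(adj, r):
    in_tree = set()
    pred = {}
    pq = [(0, r, None)]               # (key, vertex, parent)
    total = 0
    while pq:
        key, u, p = heapq.heappop(pq) # EXTRACT-MIN
        if u in in_tree:
            continue                  # stale entry, ignore it
        in_tree.add(u)
        pred[u] = p
        total += key
        for v, w in adj[u]:
            if v not in in_tree:      # candidate edge leaving the tree
                heapq.heappush(pq, (w, v, u))
    return pred, total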

Analysis
Used to find a MST rooted at a given vertex r.
Grow a MST by adding vertices. In each step, a vertex that has least ‘distance’ to the growing tree is added.
Initialization and building the queue in the first for-loop take O(V); the while-loop executes |V| times and each EXTRACT-MIN call takes O(lg V), for O(V lg V) in total; each DECREASE-KEY (the update of v.dist) takes O(lg V) and there are at most |E| of them, for O(E lg V). The total time taken by Prim's algorithm is therefore O(V lg V) + O(E lg V) = O(E lg V).

Comparison of Time complexities with their analysis

Adjacency List and Adjacency Matrix
The simplest data structure for a graph is an adjacency matrix. For a graph
G = (V, E), a |V| × |V| matrix A is needed, with Aij = 1 if (vi, vj) ∈ E and Aij = 0 if (vi, vj) ∉ E. For an undirected graph the adjacency matrix is symmetric, because the edges have no direction.
One of the strengths of the adjacency matrix is that it can easily represent a weighted graph by changing the ones in the matrix to the edges' respective weights. However, a weight cannot be zero in this representation (otherwise we could not differentiate a zero-weight edge from "no connection" between two vertices). Also, an adjacency matrix requires exactly Θ(V²) space. For a dense graph, for which |E| is close to |V|², this can be a memory-efficient representation. However, if the graph is sparse, that is, |E| is much smaller than |V|², most of the entries in the adjacency matrix are zeros, resulting in wasted memory. A sparse graph is better represented with an adjacency list, which consists of an array of size |V| whose ith element corresponds to the vertex vi and points to a linked list storing the vertices adjacent to vi.

Breadth First Search (BFS) and Depth First Search (DFS)
As per the algorithms, the total running time of BFS is O(V + E) and the total running time of DFS is also O(V + E); the only difference is in the process. BFS enqueues and dequeues nodes in a FIFO queue, visiting vertices level by level as it builds the tree, while DFS goes as deep as possible along each branch (using recursion or a stack) and backtracks once a vertex has no unvisited neighbours.

Prim’s and Kruskal’s algorithms
The complexity of Prim's algorithm in big-oh notation is O(E log V) (as derived above), where E is the number of edges and V the number of vertices; it repeatedly selects the least-weight edge leaving the set of vertices already in the tree. The complexity of Kruskal's algorithm in big-oh notation is the same, O(E log V) (as derived above), but it sorts the edges by weight and considers them in that order, comparing the sets of their endpoints.

Description and Justification of chosen class
To execute algorithms at scale, and because of the inherent difficulty of providing efficient solutions to computational problems, we need to consider the amount of resources required. Problems are divided into different classes such as P, NP, NP-complete and NP-hard. A nondeterministic algorithm terminates unsuccessfully if no set of choices leads to a success signal; the time required for choice(1:n) is O(1). A deterministic interpretation of a non-deterministic algorithm can be made by allowing unbounded parallelism in computation.
According to this analysis, the given algorithms BFS, DFS, Dijkstra's, Prim's and Kruskal's come under the class P, as they solve decision problems using a deterministic approach in polynomial running time. The following table summarizes why all the given algorithms are in class P.

Problem           Algorithm    Complexity   Problem Type       Execution Machine   Polynomial   Class
MST               Kruskal's    O(E log V)   Decision problem   Deterministic       Yes          P
MST               Prim's       O(E log V)   Decision problem   Deterministic       Yes          P
Shortest path     Dijkstra's   O(V² + E)    Decision problem   Deterministic       Yes          P
Graph searching   BFS          O(V + E)     Decision problem   Deterministic       Yes          P
Graph searching   DFS          O(V + E)     Decision problem   Deterministic       Yes          P

Definition of classes
NP: The class of decision problems which can be solved by a non-deterministic polynomial algorithm.

NP-hard: The class of problems to which every NP problem reduces.

NP-complete (NPC): The class of problems which are NP-hard and belong to NP.

P class: The class of problems which can be solved by a deterministic polynomial algorithm, that is, problems that can be solved in time that is polynomial in the size of the input n; equivalently, the class of problems accepted by a deterministic Turing machine in polynomial time. In computational theory, the class P consists of all those decision problems that can be decided (i.e. answered "yes" or "no") on a deterministic sequential machine in an amount of time that is polynomial in the size of the input. All of the algorithms considered here fall under this broader category P.

Complexity Class P: A problem x with input size n is solvable in at most O(n^k) time for some constant k; that is, the worst-case running time is O(n^c) for a constant c. Problems in P are considered "tractable" (a problem not in P is not considered tractable). Examples: sorting, merging.

Assumptions
Assumption of BFS:
The breadth-first search algorithm assumes that the input graph G = (V, E) is represented using adjacency lists. The colour of each vertex u ∈ V is stored in the variable colour[u], and the predecessor of u is stored in the variable π[u]. If u has no predecessor (for example, if u = s or u has not been discovered), then π[u] = NIL. The distance from the source s to vertex u computed by the algorithm is stored in d[u]. The algorithm also uses a first-in, first-out queue Q to manage the set of discovered but unprocessed vertices. To keep track of progress, each vertex is coloured white, grey or black. At the start every vertex is white except the source. A vertex becomes non-white the first time it is encountered during the search: grey vertices are those currently being processed and black vertices are those that have been fully processed.
Assumption of DFS: The depth-first search algorithm assumes that the input graph G = (V, E) is represented using adjacency lists. Consider a directed graph G = (V, E). After a DFS of graph G we can put each edge into one of four classes: a tree edge is an edge in a DFS tree; a back edge connects a vertex to an ancestor in a DFS tree (note that a self-loop is a back edge); a forward edge is a non-tree edge that connects a vertex to a descendant in a DFS tree; a cross edge is any other edge in graph G, connecting vertices in two different DFS trees or two vertices in the same DFS tree neither of which is an ancestor of the other.
Assumption of Prim’s
The connected graph G and the root r of the minimum spanning tree to be grown are the inputs to the algorithm. During execution of the algorithm, all vertices that are not in the tree reside in a min-priority queue Q based on a key field. For each vertex v, key[v] is the minimum weight of any edge connecting v to the tree; by convention, key[v] = ∞ if there is no such edge. The field π[v] names the parent of v in the tree. During the algorithm the result A is kept implicitly as A = {(v, π[v]) : v ∈ V − {r} − Q}.
Assumption of Kruskal’s
Let the undirected graph be G = (V, E), where V is the set of vertices and E is the set of edges. For each edge (u, v) ∈ E, we have a weight w(u, v) specifying the weight or cost of connecting u and v. Starting from a forest in which each vertex is its own tree, the trees are merged until a single tree spans all the vertices in V.
References and Citations
Books:
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein (2001). Introduction to Algorithms. 2nd ed. McGraw-Hill, 527-536, 567-573.
Ellis Horowitz, Sartaj Sahni, Sanguthevar Rajasekaran (2008). Fundamentals of Computer Algorithms. 2nd ed. New Delhi: Universities Press (India) Pvt. Ltd., 15-17, 236-247.
Websites
Graph Theory http://www.southernct.edu/~fields/TeX-PDF/GraphTheory.pdf, Lecture Notes on GRAPH THEORY by Tero Harju, Department of Mathematics, University of Turku, FIN-20014 Turku, Finland, e-mail: harju@utu.fi, 2007. Last Accessed: 20-may- 2013
BFS
http://www.tech-faq.com/breadth-first-search-algorithm.html. Last Accessed: 18-may- 2013 http://www.cs.auckland.ac.nz/~ute/220ft/graphalg/node9.html#SECTION00021000000000000000 . Last Accessed: 19-may- 2013
DFS
http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/GraphAlgor/depthSearch.htm . Last Accessed: 19-may- 2013 http://www.tech-faq.com/depth-first-search-algorithm.html . Last Accessed: 20-may- 2013
Dijkstra's algorithm http://www-b2.is.tokushima-u.ac.jp/~ikeda/suuri/dijkstra/Dijkstra.shtml. Last Accessed: 23-may- 2013 http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/GraphAlgor/dijkstraAlgor.htm . Last Accessed: 19-may- 2013
Prim’s Algorithm http://students.ceid.upatras.gr/~papagel/project/prim.htm. Last Accessed: 22-may- 2013 http://www-b2.is.tokushima-u.ac.jp/~ikeda/suuri/dijkstra/Prim.shtml . Last Accessed: 22-may- 2013 http://www.brpreiss.com/books/opus4/html/page577.html . Last Accessed: 20-may- 2013
