SURJ

Learning to Select Robotic Grasps Using
Vision on the Stanford Artificial Intelligence
Robot
Lawson Wong1
Grasping is an essential ability for manipulation; for robots such as the Stanford Artificial
Intelligence Robot (STAIR) to be resourceful in real-world environments, they must know how to grasp. While this is a well-studied problem when a full 3-D model of the target object is known, it is difficult in real-world scenarios, where the robot must rely on imperfect perception to model the environment. This paper presents a novel approach to grasping that uses only local 3-D information acquired from sensors. Given data of the environment from 3-D sensors, our algorithm generates arm/hand configurations that may potentially achieve a good grasp, then computes features of these candidates to select the best candidate and execute its grasp. These features capture desirable properties of potential grasps based on sensor data, which our learning algorithm uses to predict how likely a grasp is to succeed. The algorithm was tested on STAIR in real-world grasping of single objects and of objects in cluttered environments, and significant improvements were found in both cases.
Introduction
As the field of artificial intelligence becomes increasingly advanced and integrated, it is time to revisit the half-century-old "AI Dream," in which intelligent robotic agents were envisioned to interact with the general human population. To this end, the Stanford Artificial Intelligence
Robot (STAIR) project aims to introduce robots into home and office environments, where they will facilitate and cooperate with people directly. In order for robots to have any non-trivial use in such environments, they must have the ability to manipulate objects, which is provided through robotic arms. An arm usually has a manipulator
“hand” attached at the end to allow finer manipulation and, more importantly, grasping. The ability to grasp is crucial; if we were unable to grasp with our hands, we would find it very difficult to perform essential tasks such as eating, and more complex actions such as cooking and working in an office would definitely be unachievable. A robust and infallible grasping system is therefore necessary for STAIR to
achieve its goal.

1 Stanford University
In this paper, a novel approach for robotic grasping will be discussed. By considering information acquired from our 3-D visual sensors, we developed a reliable and efficient grasping system for STAIR that works in unknown and cluttered environments.

Background

The problem of robotic grasping has been well studied over the past few decades. The conventional approach uses the forces applied by the fingers on the object at their contact points to determine whether a stable grasp can be achieved1.
While in theory this fully determines the result of the grasp, this approach is not practical because a complete and precise model of the target object is necessary. If the model is inaccurate, force computations will likely be incorrect. When working in unknown and dynamic real-world environments,
STAIR can only acquire a model of the environment through visual perception, which is subject to inaccuracies and incompleteness. In practice, applying force computations directly on these models leads to poor results. The limitations imposed by perception have spurred interest over the past two decades in vision-based grasping systems. In particular, it has been found that perception of 2-D planar objects usually suffers from fewer problems. For such objects, the object

Figure 1: STAIR grasping from a very cluttered environment.

Figure 3: Imperfect perception. Original bowl, and the point cloud obtained via vision (shown in simulation). Red points come from the Bumblebee2, gray points from the SwissRanger. Only some edges are picked up by the Bumblebee2, and neither the bowl surface nor the table is seen. The SwissRanger gives a much more complete bowl front face and table, but no other side of the bowl is seen. Interestingly, the two cameras complement each other in this scenario; however, the perception of the bowl is still far from complete.

Figure 2: STAIR. 7-dof Barrett WAM Arm and
4-dof 3-fingered BarrettHand with “open” spread pictured. The spread can be “closed” such that all 3 fingers will be at the top. Vision system mounted on robot frame; blue arrow marks
SwissRanger, green arrow marks Bumblebee2.

surface contour can be found reliably from vision. Criteria for successful grasps, derived from the mentioned theoretical force computations, can then be found for the object2,3. A similar approach was used by Kamon, Flash, and Edelman, where features indicative of successful grasps were computed given a 2-D image of the object4. A learnt model then used these features to compute an overall grasp quality, which predicted whether a grasp would succeed or not. While their results are promising, the methods are limited to
2-D objects and generalize poorly to the 3-D scenarios that STAIR faces.
Robot Description

The STAIR robot that this project targets consists of a 7-dof arm (WAM, by Barrett Technologies) situated on a mobile platform. The arm is equipped with a 3-fingered hand with 4 degrees of freedom, one for each finger and one for the spread of the fingers (varying between the fingers being adjacent to each other and two fingers being opposite the middle finger) (see Fig. 2). The arm is capable of reaching objects within a 1m radius. The hand can close its fingers inwards until the fingers hit an object, which is useful for grasping.

STAIR is also equipped with two cameras mounted on the robot frame. A stereo camera (Bumblebee2, by Point Grey Research) captures a 640×480 image using both its lenses and uses the image differences to compute the depth of each image pixel, thereby giving 3-D point information. We shall refer to the set of 3-D points returned by the camera as the scene's "point-cloud." The point-cloud returned by the stereo camera is very incomplete, as stereo correspondences cannot be found for regions without texture such as object surfaces and tabletops, and only the front face of objects can be detected (the back face is occluded). To compensate for this missing information, another camera (SwissRanger, by MESA Imaging) provides a 144×176 array of depth estimates by firing an infrared light source and measuring the time it takes to reflect back to the camera. While this gives a much more complete image of the scenario, the data points are relatively sparse, and object surfaces that absorb or scatter the light are undetected by the camera. While the point clouds from STAIR's vision system are relatively accurate, they clearly still suffer from large amounts of missing data, hence an approach that does not apply force computations to evaluate grasps is necessary (see Fig. 3).

Figure 4: 2-D image-based classifier identifying potential grasp points (depicted in red squares)5,6.

Approach

The objective is, given a model of the environment through visual perception, to determine a robot configuration (joint angles for the arm and hand) such that, when closing the fingers at that point (until they are fully closed or they hit an object), some object in the environment is successfully grasped. This configuration shall be denoted a "grasp." A successful grasp is defined here to be one where the object can be lifted up into the air (such that the table is not supporting it) without falling out of the hand. We split this problem into two parts: we first find a set of likely candidate configurations that may achieve a good grasp, then use features of these candidates and a learnt model to score each candidate, and finally execute the highest-scoring grasp. The first component has already been addressed by previous STAIR work on grasping5,6,7. Specifically, a
2-D image-based classifier uses the images from both cameras to select a set of corresponding 3-D points that are likely to be good grasp points (see
Fig. 4). Given a 3-D point, there are still many orientations at which the arm can reach that point (and very few result in successful grasps), hence orientations are uniformly sampled for each point. These point-orientation pairs are then converted to joint angles, giving the corresponding robot grasp configurations. This forms our candidate configuration set.
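One standard way to sample orientations uniformly over 3-D rotation space is Shoemake's method for drawing uniformly random unit quaternions; the sketch below illustrates this step under that assumption (the helper name and sample count are illustrative, not STAIR's actual code).

```python
import numpy as np

def sample_uniform_quaternion(rng: np.random.Generator) -> np.ndarray:
    """Draw a rotation uniformly at random over SO(3) (Shoemake's method)."""
    u1, u2, u3 = rng.random(3)
    return np.array([
        np.sqrt(1.0 - u1) * np.sin(2.0 * np.pi * u2),
        np.sqrt(1.0 - u1) * np.cos(2.0 * np.pi * u2),
        np.sqrt(u1) * np.sin(2.0 * np.pi * u3),
        np.sqrt(u1) * np.cos(2.0 * np.pi * u3),
    ])

# Pair each candidate 3-D grasp point with several sampled orientations;
# each (point, orientation) pair is later converted to joint angles by
# the arm's inverse kinematics.
rng = np.random.default_rng(0)
orientations = [sample_uniform_quaternion(rng) for _ in range(20)]
```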
The second component is inspired by the work of Kamon, Flash, and Edelman described in the Background section, where features of grasps are extracted and used to determine the "quality" of a grasp. The motivation is that while force computations on perceived objects perform poorly and are inefficient, there are local properties of a grasp that indicate whether grasping at that location will be successful. There are several advantages to using local information. First, the most important
3-D region to consider for grasping is the region where the grasp will occur; little can be gained by considering the ends of a stick that we grasp in the middle. Second, while the vision data is incomplete, its distribution of incompleteness is very skewed: a bowl will have most of its front face perceived by the SwissRanger, but most of the back half will be missing. Hence we can get a more complete model when we grasp at the front face. Finally, there are usually far fewer points in the local region, which significantly speeds up computation.

1. Acquire 2-D candidate grasp points set from camera images using classifier5,6,7
2. Use camera depth information to find corresponding 3-D candidate grasp points set
3. FOR each grasp point in 3-D candidate grasp points set DO
4.   Sample orientations from 3-D orientation space
5.   FOR each orientation sampled DO
6.     Use arm inverse kinematics to generate configuration with hand center near the 3-D grasp point and satisfying the 3-D orientation chosen
7.     Select a finger configuration (sample spread and finger opening) that does not result in arm and hand colliding with obstacles
8.     Add arm/hand configuration from 7 (if any) to candidate configuration set
9.   END FOR
10. END FOR
11. FOR each configuration in candidate configuration set DO
12.   Compute features using the configuration and its hand's local point cloud
13.   Score[grasp] := score from classifier given features from 12
14. END FOR
15. WHILE grasp not executed AND candidate configuration set not empty DO
16.   grasp* := argmax Score[grasp]
17.   Plan path to execute grasp* using Probabilistic Roadmap motion planner9
18.   IF plan successful THEN execute plan
19.   ELSE remove grasp* from candidate configuration set
20. END WHILE

Table 1: Algorithm for grasping an object
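To make the control flow of Table 1 concrete, the following Python sketch mirrors its steps; all robot-specific helpers (inverse kinematics, finger collision checking, the PRM planner, execution) are hypothetical stand-ins rather than STAIR's actual interfaces.

```python
import numpy as np

def grasp_pipeline(grasp_points_3d, sample_orientations, inverse_kinematics,
                   collision_free_fingers, compute_features, classifier,
                   plan_path, execute):
    """Sketch of Table 1: generate, score, and execute grasp candidates."""
    # Steps 3-10: build the candidate configuration set.
    candidates = []
    for point in grasp_points_3d:
        for orientation in sample_orientations(point):
            arm_config = inverse_kinematics(point, orientation)
            if arm_config is None:
                continue  # point/orientation pair is unreachable
            finger_config = collision_free_fingers(arm_config)
            if finger_config is not None:
                candidates.append((arm_config, finger_config))

    # Steps 11-14: score each candidate from its local point-cloud features.
    scores = np.array([classifier(compute_features(c)) for c in candidates])

    # Steps 15-20: try candidates best-first until one can be planned.
    for idx in np.argsort(scores)[::-1]:
        plan = plan_path(candidates[idx])  # e.g., a Probabilistic Roadmap
        if plan is not None:
            execute(plan)
            return candidates[idx]
    return None  # no executable grasp was found
```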

The previous work was limited to 2-D information, hence more sophisticated features, described in the next section, will be computed using our 3-D local point cloud. A supervised learning algorithm will then be used to train a classifier based on these features, which can then be used to predict a score between 0 and 1 for the quality of a candidate grasp. The described procedure for grasping an object is summarized in the algorithm in Table 1.8
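The choice of supervised learner is left open here; assuming, purely for illustration, a logistic regression over the 19 features (one natural choice whose outputs are probabilities in [0, 1]), training and scoring might look like the following (the training arrays are placeholders).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data: one row of the 19 features per labeled grasp,
# with y = 1 for successful grasps and y = 0 for failures.
rng = np.random.default_rng(0)
X = rng.random((300, 19))
y = (rng.random(300) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def grasp_score(features: np.ndarray) -> float:
    """Score in [0, 1]: predicted probability that the grasp succeeds."""
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])
```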
Features

Three main properties of grasping were considered. First, the grasp must be able to achieve good contact with the target object, otherwise the object may be entirely missed by the hand. Second, the grasp should be stable, so in particular an object should not be grasped at a tip or corner when that is unnecessary. Third, the grasp must be able to apply forces on the object effectively, which is dictated

by the direction and orientation of the grasp; for example, consider grasping a long tube along its axis versus perpendicular to its axis. A total of 19 features were developed under these three categories. The contact between the hand and the object can be approximated by the presence of point-cloud points inside the hand. Intuitively, the more points within the volume of the hand, the larger the grasping area and volume of the object, and hence the less likely a miss will occur. Similarly, if there are very

Figure 5: The cubes represent the local region.
The red points within the local region denote the edge region.


few points within the hand, the grasp may well fail, because the points may just have been noise (where the hand will grasp air) or a small tip of the object (where the hand should grasp some other part of the object). We therefore simply count the number of points within the local region, defined to be a sphere of 10cm radius centered at the hand center. Counting this region alone is, however, insufficient, as an object may be near the hand but not in the grasp (since the region is larger than the hand's grasp). Hence the points in the actual grasp region, i.e., the region inside the fingers, are also counted. The last region counted is a special "edge" region, defined as all points in the local region not extending further than the fingertips' reach (see Fig. 5). This region usually traces the edge of the object, hence the name. Note that this feature has certain drawbacks: small objects will naturally have fewer points but should not be undesirable to grasp; such subtleties are accounted for by the training set and the learning algorithm.
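A sketch of these three point counts, assuming the point cloud is an N×3 array in meters; the grasp-region test is simplified to a band around the hand center (the real test depends on the BarrettHand geometry), and the 6cm fingertip reach is an assumed value.

```python
import numpy as np

def region_point_counts(cloud, hand_center, finger_dir,
                        local_radius=0.10, fingertip_reach=0.06):
    """Count points in the local, grasp, and edge regions.

    cloud: (N, 3) perceived 3-D points; hand_center: (3,) hand center;
    finger_dir: unit vector pointing outward past the fingertips.
    """
    offsets = cloud - hand_center
    in_local = np.linalg.norm(offsets, axis=1) < local_radius
    along = offsets @ finger_dir  # signed distance along the finger axis
    # Edge region: local points not extending past the fingertips' reach.
    in_edge = in_local & (along < fingertip_reach)
    # Grasp region (simplified): a band between the fingers.
    in_grasp = in_local & (np.abs(along) < 0.04)
    return int(in_local.sum()), int(in_grasp.sum()), int(in_edge.sum())
```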

Figure 6: Definition of being above and below the hand. Red points denote regions strictly above and below the hand (not enclosed in hand).

Stability of a grasp depends on the distribution of the object within the hand, or in our case, the distribution of the point cloud. Ideally, about the center of the hand, the point cloud should be evenly distributed along all axes. The outward axis from the hand is accounted for by the previous feature; if not enough points are within the hand, especially within the edge region, the grasp will be marked as bad. The "horizontal" axis, defined to be the axis between the fingers (when the outer fingers are directly opposite the middle finger), is not too important: a skewed distribution along this axis only means that, when closing the fingers to grasp, the closer finger(s) will push the object towards the farther finger(s), which is not a problem. The final "vertical" axis, normal to the other two, does need to be accounted for. Denoting one side of this axis about the center as "above" and the other "below," we desire the number of points above and below the hand center to be near a 1:1 ratio (see Fig. 6). We therefore compute this feature as |n_above / (n_above + n_below) − 1/2|, the absolute difference between the ideal distribution (where points above = points below) and the actual one.
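Under the formula above, this feature is only a few lines (conventions as in the earlier sketch; the vertical axis is assumed to be a unit vector in the hand frame).

```python
import numpy as np

def vertical_balance(cloud, hand_center, vertical_axis):
    """|fraction of points above - 1/2|: 0 for a perfectly balanced grasp."""
    side = (cloud - hand_center) @ vertical_axis
    n_above = np.count_nonzero(side > 0)
    n_below = np.count_nonzero(side < 0)
    total = n_above + n_below
    if total == 0:
        return 0.5  # no points near the hand: treat as maximally skewed
    return abs(n_above / total - 0.5)
```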
We also consider a similar measure where we only count points strictly above and below the hand (not enclosed by the hand). These measures are also computed with both the local and edge

regions to increase robustness across different cases; for example, the second measure may be more useful when considering large objects. The previous feature category combined with this one therefore accounts for grasp stability. Apart from being stable, it is even more important that the forces of a grasp be applied effectively on the object. Intuitively, an object should be grasped at narrow sides and not at wide sides: at narrow places a tight closure on the object is easily achieved, whereas at wide sides this is difficult (if the side is wider than the hand, it is impossible). To capture this intuition, we consider the principal components of the local and edge regions. Using singular value decomposition (SVD), we obtain three orthonormal component directions ui with variances σi, with σ1 largest and σ3 smallest. The larger the variance, the more important the direction is in defining the region; for example, for a plate, u1 and u2 will lie on the face (with large σ1 and σ2), whereas u3 will be normal to the plate (small σ3) (see Fig. 7). If we consider the unit horizontal axis vector h (the axis running between the fingers), which is the direction in which the fingers close, we want h to be parallel to directions with small variances and orthogonal to those with large variances.

Figure 7: Example of principal component directions of a plate. u1 (left) and u2 (middle) lie on the plate, whereas u3 (right) is normal to the plate. Only this last direction gives good grasps.

We therefore compute the directional similarity si = |ui · h| for each component direction; si is large when ui and h are parallel. Hence we desire that s1 be 0 and s3 be 1, and we measure this by the absolute difference between each directional similarity and its ideal value: |s1 − 0| and |s3 − 1|.
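This computation is a direct use of the SVD; the sketch below assumes an N×3 array of region points and a unit vector h, with the pairing of ideal values to directions taken from the text.

```python
import numpy as np

def orientation_features(region_points, h):
    """Directional similarities si = |ui . h| from the region's SVD.

    Returns |s1 - 0| and |s3 - 1|, which should both be small
    for a well-oriented grasp.
    """
    centered = region_points - region_points.mean(axis=0)
    # Rows of Vt are the principal directions u1, u2, u3, ordered by
    # decreasing singular value (i.e., decreasing variance).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    s = np.abs(Vt @ h)
    return abs(s[0] - 0.0), abs(s[2] - 1.0)
```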

Depending on how large σ2 is, it may or may not be desirable to grasp in the direction of u2. These features therefore capture whether the grasp configuration has a good orientation. The features from all three categories were computed for a training set of 300 grasps, consisting of an equal number of good and bad grasps on plates, bowls, and wooden blocks; the learnt classifier achieved an 85% average test-set accuracy under 10-fold cross-validation.

Experimental Results

We first considered grasping single objects from 13 novel classes
(i.e., of different types from the training set) in a total of 150 experiments. These objects also differed significantly in size, shape, and appearance. In each trial, one object was placed randomly on a table in front of the robot. STAIR was able to achieve an overall grasp success rate of 76%, which is an improvement from the 70% achieved previously9.
Moreover, the success rate was much higher at 86% for objects that were
1.5-3 times the size of the hand. We also conducted grasping experiments in cluttered environments, which was the main objective of this project. In a total of 40 experiments, where more than 5 objects were placed randomly but close to each other, STAIR had to avoid hitting other objects and grasp a single object from the scenario.
Although this was a significantly harder task in terms of perception, manipulation, and planning, STAIR had a success rate of 75%. The videos and results of the experiments are at: http://stair.stanford.edu
Conclusion

We presented a robust and efficient algorithm that, given a 2-D image and 3-D point cloud of the environment from STAIR’s vision system, can generate candidate grasp configurations and use local point cloud features to select a good grasp. The algorithm has been tested in simulation and in real-world experiments on
STAIR, and has achieved significant improvement compared to previous systems, especially when grasping in cluttered environments. To further improve the algorithm, more features that describe general properties of grasps should be developed, and more grasp candidates should be searched and evaluated to increase the chances of finding and selecting an optimal grasp. In particular, instead of randomly sampling hand orientations uniformly from 3-D orientation space, better candidates can be found by applying heuristics to prune the search space.
Eventually, we also hope to provide
STAIR the sense of touch via force feedback, which would be extremely helpful in determining whether a secure grasp has been made. The challenge is to integrate all these components into a robust system without compromising efficiency.
Acknowledgments

More details of the algorithm and results can be found in Saxena,
Wong, Quigley et al.6, Saxena,
Driemeyer, and Ng7, and Saxena, Wong, and Ng8. This project would not have been possible without all members of the STAIR Perception-Manipulation team and their efforts to develop and expand the functionality of the STAIR robots. Special thanks also to Ashutosh
Saxena and Professor Andrew Ng for providing guidance for this project.

References

1. Bicchi A, Kumar V. Robotic grasping and contact: a review. IEEE Intl Conf on Robotics and Automation Proceedings 2000; 1:348-353.
2. Morales A, Chinellato E, Sanz PJ et al. Learning to predict grasp reliability for a multifinger robot hand by using visual features. Intl Conf on AI and Soft Computing 2004.
3. Chinellato E, Morales A, Fisher R et al. Visual quality measures for characterizing planar robot grasps. IEEE Trans on Systems, Man, and Cybernetics, Part C: Applications and Reviews 2005; 35:30-41.
4. Kamon I, Flash T, Edelman S. Learning to grasp using visual information. IEEE Intl Conf on Robots and Automation Proceedings 1994; 3:2470-2476.
5. Saxena A, Driemeyer J, Kearns J et al. Robotic grasping of novel objects. Advances in Neural Info Processing Systems 2007; 19:1209-1216.
6. Saxena A, Wong L, Quigley M et al. A vision-based system for grasping novel objects in cluttered environments. Intl Symposium of Robotics Research Proceedings 2007.
7. Saxena A, Driemeyer J, Ng AY. Robotic grasping of novel objects using vision. Intl Journal of Robotics Research 2008; 27(2):157-173.
8. Saxena A, Wong L, Ng AY. Learning grasp strategies with partial shape information. Assoc for Advancement in AI Proceedings 2008.
9. Schwarzer F, Saha M, Latombe JC. Adaptive dynamic collision checking for single and multiple articulated robots in complex environments. IEEE Trans on Robotics 2005; 21(3):338-353.

Lawson Wong is a junior and coterminal master's student at Stanford University majoring in computer science (with honors), and specializes in artificial intelligence. He hopes to ultimately understand what intelligence is and how to algorithmically replicate it, and currently plans to pursue a PhD in machine learning. Before studying at Stanford, Lawson spent his entire life in Hong Kong, where he developed a passion for mathematics, physics, and logic that remains to this day and occupies his time outside of computer science. He thinks that undergraduate teaching and research are extremely valuable and enriching learning experiences, and he thanks Professor Andrew Ng and Ashutosh Saxena for their guidance on the STAIR project. More information about Lawson can be found at http://www.stanford.edu/~lsw/.
