Applied Mathematics and Computation 214 (2009) 108–132


A comparative study of Artificial Bee Colony algorithm
Dervis Karaboga *, Bahriye Akay
Erciyes University, The Department of Computer Engineering, Melikgazi, 38039 Kayseri, Turkey

Abstract
Artificial Bee Colony (ABC) algorithm is one of the most recently introduced swarm-based algorithms. ABC simulates the intelligent foraging behaviour of a honeybee swarm. In this work, ABC is used for optimizing a large set of numerical test functions, and the results produced by the ABC algorithm are compared with the results obtained by the genetic algorithm, particle swarm optimization algorithm, differential evolution algorithm and evolution strategies. Results show that the performance of ABC is better than or similar to that of the other population-based algorithms, with the advantage of employing fewer control parameters. © 2009 Elsevier Inc. All rights reserved.

Keywords: Swarm intelligence; Evolution strategies; Genetic algorithms; Differential evolution; Particle swarm optimization; Artificial Bee Colony algorithm; Unconstrained optimization

1. Introduction

Population-based optimization algorithms find near-optimal solutions to difficult optimization problems by taking their motivation from nature. A common feature of all population-based algorithms is that a population of possible solutions to the problem is modified by applying some operators to the solutions, depending on information about their fitness; hence, the population is moved towards better solution areas of the search space. Two important classes of population-based optimization algorithms are evolutionary algorithms [1] and swarm intelligence-based algorithms [2].

Although Genetic Algorithm (GA) [3], Genetic Programming (GP) [4], Evolution Strategy (ES) [5,6] and Evolutionary Programming (EP) [7] are all popular evolutionary algorithms, GA is the most widely used in the literature. GA is based on genetic science and natural selection, and it attempts to simulate the phenomenon of natural evolution at the genotype level, while ES and EP simulate it at the phenotype level. One recently introduced evolutionary algorithm is the Differential Evolution (DE) algorithm [8-10], which has been proposed particularly for numerical optimization problems. In the basic GA, a selection operation is applied to the solutions evaluated by the evaluation unit; in this operation, the chance of a solution being selected as a parent depends on its fitness value. One of the main differences between GA and DE is that, in the selection operation of DE, all solutions have an equal chance of being selected as parents, regardless of their fitness values. In DE, each new solution produced competes with its parent, and the better one wins the competition.

In recent years, swarm intelligence has also attracted the interest of many researchers in related fields. Bonabeau defined swarm intelligence as "...any attempt to design algorithms or distributed problem-solving devices inspired by the collective behaviour of social insect colonies and other animal societies..." [11]. Bonabeau et al. focused their viewpoint on social insects alone, such as termites, bees, wasps and different ant species. However, a swarm can be considered any collection of interacting agents or individuals. An ant colony can be thought of as a swarm whose individual agents are ants; a flock of birds is a swarm of birds; an immune system [12] can be considered a swarm of cells and molecules, just as a crowd is a swarm of people. A popular swarm-intelligence-based algorithm is the Particle Swarm Optimization (PSO) algorithm, which was introduced by Eberhart and Kennedy in 1995 [13]. PSO is a population-based stochastic optimization technique and is well adapted to the optimization of nonlinear functions in multidimensional space.


PSO models the social behaviour of bird flocking or fish schooling. It has received significant interest from researchers in different areas and has been successfully applied to several real-world problems [14].

The classical example of a swarm is bees swarming around their hive, but the notion can be extended to other systems with a similar architecture. Several approaches have been proposed to model the specific intelligent behaviours of honeybee swarms, and they have been applied to solving combinatorial problems [15-23]. Tereshko considered a bee colony as a dynamical system gathering information from an environment and adjusting its behaviour accordingly [15]. Tereshko and Loengarov also established a robot idea based on the foraging behaviour of bees. Usually, all such robots are physically and functionally identical, so that any robot can replace any other. The swarm possesses significant tolerance: the failure of a single agent does not stop the performance of the whole system. Like insects, the robots individually have limited capabilities and limited knowledge of the environment; on the other hand, the swarm develops collective intelligence. The experiments showed that insect-like robots are successful in real robotic tasks. Tereshko and Loengarov also developed a minimal model of forage selection that leads to the emergence of collective intelligence; it consists of three essential components (food sources, employed foragers, and unemployed foragers) and defines two leading modes of behaviour: recruitment to a nectar source and abandonment of a source [16,17]. Teodorović suggested using bee swarm intelligence in the development of artificial systems aimed at solving complex problems in traffic and transportation [19,18]. Teodorović also proposed the Bee Colony Optimization (BCO) metaheuristic, which is capable of solving deterministic combinatorial problems as well as combinatorial problems characterized by uncertainty [20]. Drias et al. introduced a new intelligent approach, or metaheuristic, called Bees Swarm Optimization (BSO) and adapted it to the features of the maximum weighted satisfiability (max-sat) problem [21]. Similarly, Benatchba et al. introduced a metaheuristic based on the process of bees' reproduction to solve a 3-sat problem [22]. Wedde et al. presented a novel routing algorithm called BeeHive, inspired by the communicative and evaluative methods and procedures of honeybees [23]. In the BeeHive algorithm, bee agents travel through network regions called foraging zones; on their way, their information on the network state is delivered for updating the local routing tables.

The works mentioned in the previous paragraph introduced bee-inspired algorithms proposed for combinatorial problems. There are also three continuous optimization algorithms based on the intelligent behaviour of honeybee swarms in the literature [24-26]. Yang developed a Virtual Bee Algorithm (VBA) [24] to optimize only two-dimensional numeric functions: a swarm of virtual bees is generated and starts to move randomly in a two-dimensional search space; bees interact when they find some target nectar, and the solution of the problem is obtained from the intensity of these bee interactions. Pham et al. introduced the Bees Algorithm in 2005, which employs several control parameters [25]. For optimizing multi-variable and multi-modal numerical functions, Karaboga described an Artificial Bee Colony (ABC) algorithm [26] in 2005.
Basturk and Karaboga compared the performance of the ABC algorithm with those of GA [27]; PSO and PS-EA [28]; and DE, PSO and EA [29] on a limited number of test problems. They extended the ABC algorithm to constrained optimization problems in [30] and applied ABC to training neural networks [31,32]. The ABC algorithm was also used for designing IIR filters in [33] and for the leaf-constrained minimum spanning tree problem in [34]. In this work, a comprehensive comparative study of the performances of well-known evolutionary and swarm-based algorithms on a very large set of numerical functions is presented. In Section 2, the ES, GA, PSO and DE algorithms are briefly summarized. In Section 3, the foraging behaviour of real bees and then the ABC algorithm simulating this behaviour are described. Finally, in Sections 4 and 5, the simulation results obtained are presented and discussed, respectively.

2. Evolution strategies, genetic algorithm, differential evolution algorithm, particle swarm optimization algorithm

2.1. Evolution strategies

Evolution Strategy has been proposed for numerical optimization problems and is one of the oldest evolutionary algorithms. An ES produces an n-dimensional real-valued vector by mutating its parent with identical standard deviations for each vector element. The produced individual is evaluated, and a selection scheme is applied to determine which one will survive in the next generation while the other is discarded. This simple selection mechanism is expressed as (1 + 1)-selection. In a (μ + 1)-ES, which is a multi-membered evolution strategy, μ parent individuals recombine to form one offspring, which, after being mutated, eventually replaces the worst parent individual if it is better [35]. The canonical versions of ES (CES) are denoted by (μ/ρ, λ)-ES and (μ/ρ + λ)-ES. Here μ indicates the number of parents, ρ ≤ μ is the number of parents involved in the generation of an offspring, and λ is the number of offspring produced. The parents are deterministically chosen from the set of either the offspring (referred to as comma-selection, where μ < λ must hold) or both the parents and offspring (referred to as plus-selection). Selection is based on the fitness of individuals. The main steps of ES are given below:

1: Initialize Population
2: repeat
3:   Recombination
4:   Mutation
5:   Evaluation
6:   Selection
7: until requirements are met
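To make the loop above concrete, the following is a minimal (μ + λ)-style ES sketch in Python. The sphere objective, the fixed mutation step size and all parameter values are illustrative assumptions, not the canonical CES variants cited in the text:

```python
# Minimal (mu + lambda)-ES sketch (illustrative assumptions: sphere objective,
# intermediate recombination, fixed Gaussian step size sigma, plus-selection).
import random

def sphere(x):
    return sum(v * v for v in x)

def evolution_strategy(mu=5, lam=20, dim=10, sigma=0.1, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # Recombination: intermediate (mean) of two random parents
            p1, p2 = random.sample(population, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]
            # Mutation: identical standard deviation for each vector element
            child = [v + random.gauss(0, sigma) for v in child]
            offspring.append(child)
        # Plus-selection: the best mu of parents and offspring survive
        population = sorted(population + offspring, key=sphere)[:mu]
    return population[0]

print(sphere(evolution_strategy()))
```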


Mutation is carried out by adding a normally distributed random term to each vector. Evolution strategies using Cauchy mutation operators are called Fast Evolution Strategies (FES) [36]. Another mutation operator is the Self-Organising Maps (SOM)-based mutation operator, which is continuously trained on successful offspring only, thus reflecting the evolution path in a way that allows further successful candidates to be selected with high probability [37]. Kohonen's Self-Organising Mapping Evolution Strategy (KSOM-ES) is based on this adaptation of the mutation operator [38]. The Neural Gas networks Evolution Strategy (NG-ES) has been used to overcome the bad scaling property of KSOM-ES [39]. In Covariance Matrix Adaptation Evolution Strategies (CMA-ES), devised for optimal exploitation of local information, the step size can be selected by self-adaptation or by covariance matrix adaptation [40]. Evolution Strategy Learned with Automatic Termination (ESLAT) is another evolution strategy, which uses an automatic termination criterion based on a gene matrix [41].

2.2. Genetic algorithm

A basic GA consists of five components: a random number generator, a fitness evaluation unit, and genetic operators for reproduction, crossover and mutation. The basic algorithm is summarized below:

1: Initialize Population
2: repeat
3:   Evaluation
4:   Reproduction
5:   Crossover
6:   Mutation
7: until requirements are met

The initial population, required at the start of the algorithm, is a set of number strings generated by the random number generator. Each string is a representation of a solution to the optimization problem being addressed; binary strings are commonly employed. Associated with each string is a fitness value computed by the evaluation unit. A fitness value is a measure of the goodness of the solution that the string represents. The aim of the genetic operators is to transform this set of strings into sets with higher fitness values.

The reproduction operator performs a natural selection function known as seeded selection. Individual strings are copied from one set (representing a generation of solutions) to the next according to their fitness values: the higher the fitness value, the greater the probability of a string being selected for the next generation. The crossover operator chooses pairs of strings at random and produces new pairs. The simplest crossover operation is to cut the original parent strings at a randomly selected point and to exchange their tails; the number of crossover operations is governed by a crossover rate. The mutation operator randomly mutates or reverses the values of bits in a string; the number of mutation operations is determined by a mutation rate. A phase of the algorithm consists of applying the evaluation, reproduction, crossover and mutation operations, and a new generation of solutions is produced with each phase [42].
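A compact sketch of the operators just described (fitness-proportionate selection, single-point crossover, bit-flip mutation) follows. The OneMax objective and the parameter values are assumptions chosen only to keep the example short; they are not the experimental setup of Section 4:

```python
# Compact binary GA sketch: roulette-wheel (seeded) selection, single-point
# crossover, bit-flip mutation. OneMax objective is an illustrative assumption.
import random

def fitness(s):
    return sum(s)  # OneMax: count of 1-bits (to be maximized)

def genetic_algorithm(pop_size=50, n_bits=40, cx_rate=0.8, mut_rate=0.01, gens=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(s) for s in pop]
        total = sum(fits) or 1
        def select():
            # Fitness-proportionate selection: higher fitness, higher chance
            r, acc = random.uniform(0, total), 0
            for s, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return s[:]
            return pop[-1][:]
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(), select()
            if random.random() < cx_rate:          # single-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for s in (a, b):                        # bit-flip mutation
                nxt.append([bit ^ 1 if random.random() < mut_rate else bit
                            for bit in s])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

print(fitness(genetic_algorithm()))
```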
2.3. Differential evolution algorithm

The DE algorithm is a population-based algorithm, like genetic algorithms, using similar operators: crossover, mutation and selection. The main difference in constructing better solutions is that genetic algorithms rely on crossover while DE relies on the mutation operation. This main operation is based on the differences of randomly sampled pairs of solutions in the population. The algorithm uses the mutation operation as a search mechanism and the selection operation to direct the search toward the prospective regions of the search space.

The DE algorithm also uses a non-uniform crossover that can take child vector parameters from one parent more often than from the others. By using components of the existing population members to construct trial vectors, the recombination (crossover) operator efficiently shuffles information about successful combinations, enabling the search for an optimum to focus on the most promising area of the solution space. In DE, a population of solution vectors is randomly created at the start. This population is successively improved by applying the mutation, crossover and selection operators. Each trial solution produced by mutation and crossover competes with its parent, and the better one wins the competition. DE has received significant interest from researchers in different research areas and has been applied to several real-world problems [8,43]. The main steps of the DE algorithm are presented below:

1: Initialize Population
2: Evaluation
3: repeat
4:   Mutation
5:   Recombination
6:   Evaluation
7:   Selection
8: until requirements are met
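The scheme summarized above, and formalized in Eqs. (1)-(3) below, can be sketched as follows; the sphere objective, the bounds and the parameter values are assumptions for illustration:

```python
# Sketch of a DE/rand/1/bin-style loop (illustrative assumptions: sphere
# objective, F = 0.5, CR = 0.9, symmetric bounds).
import random

def sphere(x):
    return sum(v * v for v in x)

def differential_evolution(n=50, dim=10, F=0.5, CR=0.9, gens=300, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    cost = [sphere(x) for x in pop]
    for _ in range(gens):
        for i in range(n):
            # Mutation, Eq. (1): three mutually distinct indices, all != i
            r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
            mutant = [pop[r1][j] + F * (pop[r3][j] - pop[r2][j])
                      for j in range(dim)]
            # Binomial crossover, Eq. (3)
            trial = [mutant[j] if random.random() <= CR else pop[i][j]
                     for j in range(dim)]
            # Greedy selection: the trial competes with its parent
            c = sphere(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    return min(cost)

print(differential_evolution())
```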


Mutation: Each of the N parameter vectors undergoes mutation; the mutation operation expands the search space. A mutant solution vector x̂_i is produced by (1):

x̂_i = x_r1 + F·(x_r3 − x_r2),   (1)

where F is the scaling factor, having values in the range [0, 1], and the solution vectors x_r1, x_r2 and x_r3 are randomly chosen and must satisfy (2):

r1 ≠ r2 ≠ r3 ≠ i,   (2)

where i is the index of the current solution.

Crossover: The parent vector is mixed with the mutated vector to produce a trial vector by (3):

y_ji = x̂_ji if R_j ≤ CR;  y_ji = x_ji if R_j > CR,   (3)

where CR is the crossover constant, R_j is a randomly selected real number in [0, 1], and j denotes the jth component of the corresponding array. All solutions in the population have the same chance of being selected as parents, independent of their fitness values. The child produced after the mutation and crossover operations is evaluated; then its performance is compared with that of its parent, and the better one wins the competition. If the parent is still better, it is retained in the population.

2.4. Particle swarm optimization

In PSO, a population of particles starts to move in the search space by following the current optimum particles and changing positions in order to find the optima. The position of a particle refers to a possible solution of the function to be optimized, and evaluating the function at the particle's position provides the fitness of that solution. In every iteration, each particle is updated by following the best solution the particle has achieved so far (p⃗(t), the particle best) and the best achieved by the population (g⃗(t), the global best). When a particle takes only part of the population as its topological neighbours, the best value among them is a local best. The particles tend to move to good areas in the search space as information spreads through the swarm. A particle is moved to a new position calculated from the velocity updated at each time step t. This new position is calculated as the sum of the previous position and the new velocity, as in (4):

x⃗(t + 1) = x⃗(t) + v⃗(t + 1).   (4)

The velocity update is performed as indicated in Eq. (5):

v⃗(t + 1) = ω·v⃗(t) + φ₁·rand(0, 1)·(p⃗(t) − x⃗(t)) + φ₂·rand(0, 1)·(g⃗(t) − x⃗(t)).   (5)

The parameter ω is called the inertia weight and controls the magnitude of the old velocity v⃗(t) in the calculation of the new velocity, whereas φ₁ and φ₂ determine the significance of p⃗(t) and g⃗(t), respectively. Furthermore, v_i at any time step of the algorithm is constrained by the parameter v_max. The swarm in PSO is initialized by assigning each particle to a uniformly and randomly chosen position in the search space. Velocities are initialized randomly in the range [v_min, v_max]. The main steps of the procedure are:

1: Initialize Population
2: repeat
3:   Calculate fitness values of particles
4:   Modify the best particles in the swarm
5:   Choose the best particle
6:   Calculate the velocities of particles
7:   Update the particle positions
8: until requirements are met

Particles' velocities in each dimension are clamped to a maximum velocity v_max. If the sum of accelerations would cause the velocity in some dimension to exceed v_max, which is a parameter specified by the user, then the velocity in that dimension is limited to v_max.
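A minimal sketch of Eqs. (4) and (5) with velocity clamping follows; the sphere objective and all parameter values are illustrative assumptions:

```python
# Minimal PSO sketch implementing Eqs. (4)-(5) with velocity clamping.
# Sphere objective and parameter values are illustrative assumptions.
import random

def sphere(x):
    return sum(v * v for v in x)

def pso(n=30, dim=10, w=0.6, phi1=1.8, phi2=1.8, vmax=1.0, iters=300,
        lo=-5.0, hi=5.0):
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n)]
    pbest = [xi[:] for xi in x]                 # particle bests
    pcost = [sphere(xi) for xi in x]
    g = min(range(n), key=lambda i: pcost[i])   # global best
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for j in range(dim):
                # Eq. (5): inertia + cognitive + social components
                v[i][j] = (w * v[i][j]
                           + phi1 * random.random() * (pbest[i][j] - x[i][j])
                           + phi2 * random.random() * (gbest[j] - x[i][j]))
                v[i][j] = max(-vmax, min(vmax, v[i][j]))  # clamp to vmax
                x[i][j] += v[i][j]                        # Eq. (4)
            c = sphere(x[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = x[i][:], c
                if c < gcost:
                    gbest, gcost = x[i][:], c
    return gcost

print(pso())
```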


3. Artificial Bee Colony (ABC) algorithm

3.1. Behaviour of real bees

Tereshko developed a model of the foraging behaviour of a honeybee colony based on reaction–diffusion equations [15-17]. This model, which leads to the emergence of collective intelligence in honeybee swarms, consists of three essential components: food sources, employed foragers, and unemployed foragers, and defines two leading modes of colony behaviour: recruitment to a food source and abandonment of a source. Tereshko explains the main components of his model as follows:

1. Food sources: In order to select a food source, a forager bee evaluates several properties related to the food source, such as its closeness to the hive, the richness of the energy, the taste of its nectar, and the ease or difficulty of extracting this energy. For simplicity, the quality of a food source can be represented by a single quantity, although it depends on the various parameters mentioned above.

2. Employed foragers: An employed forager is employed at a specific food source which she is currently exploiting. She carries information about this specific source and shares it with other bees waiting in the hive. The information includes the distance, the direction and the profitability of the food source.

3. Unemployed foragers: A forager bee that looks for a food source to exploit is called unemployed. It can be either a scout, who searches the environment randomly, or an onlooker, who tries to find a food source by means of the information given by employed bees. The mean number of scouts is about 5-10%.

The exchange of information among bees is the most important occurrence in the formation of collective knowledge. While examining an entire hive, it is possible to distinguish some parts that commonly exist in all hives. The most important part of the hive with respect to exchanging information is the dancing area. Communication among bees about the quality of food sources occurs in the dancing area; the related dance is called the waggle dance. Since information about all the currently rich sources is available to an onlooker on the dance floor, she can watch numerous dances and choose to employ herself at the most profitable source. There is a greater probability of onlookers choosing more profitable sources, since more information circulates about the more profitable sources. Employed foragers share their information with a probability proportional to the profitability of the food source, and the sharing of this information through waggle dancing lasts longer for more profitable sources. Hence, recruitment is proportional to the profitability of a food source [17].

In order to better understand the basic behaviour characteristics of foragers, let us examine Fig. 1. Assume that there are two discovered food sources, A and B. At the very beginning, a potential forager starts as an unemployed forager, with no knowledge about the food sources around the nest. There are two possible options for such a bee:

i. It can be a scout and start searching around the nest spontaneously for food, due to some internal motivation or a possible external clue (S in Fig. 1).
ii. It can be a recruit, watching the waggle dances and then starting to search for a food source (R in Fig. 1).

Fig. 1. Behaviour of honeybee foraging for nectar.


After finding the food source, the bee memorizes the location and immediately starts exploiting it; the bee thus becomes an employed forager. The foraging bee takes a load of nectar from the source and returns to the hive, unloading the nectar into a food store. After unloading the food, the bee has the following options:

i. It might become an uncommitted follower after abandoning the food source (UF).
ii. It might dance and then recruit nest mates before returning to the same food source (EF1).
iii. It might continue to forage at the food source without recruiting other bees (EF2).

It is important to note that not all bees start foraging simultaneously. Experiments confirmed that new bees begin foraging at a rate proportional to the difference between the eventual total number of bees and the number presently foraging.

3.2. Artificial Bee Colony (ABC) algorithm

In the ABC algorithm, the position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees or onlooker bees is equal to the number of solutions in the population. At the first step, ABC generates a randomly distributed initial population P(C = 0) of SN solutions (food source positions), where SN denotes the number of employed bees or onlooker bees. Each solution x_i (i = 1, 2, ..., SN) is a D-dimensional vector, where D is the number of optimization parameters. After initialization, the population of positions (solutions) is subjected to repeated cycles, C = 1, 2, ..., MCN, of the search processes of the employed bees, the onlooker bees and the scout bees.

An employed bee produces a modification of the position (solution) in her memory, depending on local information (visual information), and tests the nectar amount (fitness value) of the new source (new solution). If the nectar amount of the new one is higher than that of the previous one, the bee memorizes the new position and forgets the old one; otherwise she keeps the position of the previous one in her memory. After all employed bees complete the search process, they share the nectar and position information of the food sources with the onlooker bees. An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. As in the case of the employed bee, she produces a modification of the position in her memory and checks the nectar amount of the candidate source; if its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one. The main steps of the algorithm are as below:

1: Initialize Population
2: repeat
3:   Place the employed bees on their food sources
4:   Place the onlooker bees on the food sources depending on their nectar amounts
5:   Send the scouts to the search area for discovering new food sources
6:   Memorize the best food source found so far
7: until requirements are met

In the ABC algorithm, each cycle of the search consists of three steps: sending the employed bees onto their food sources and evaluating their nectar amounts; selecting the food source regions by the onlookers after the nectar information of the food sources has been shared, and evaluating the nectar amounts of those sources; and determining the scout bees and then sending them randomly onto possible new food sources.
At the initialization stage, a set of food sources is randomly selected by the bees and their nectar amounts are determined. At the first step of the cycle, these bees come into the hive and share the nectar information of the sources with the bees waiting in the dance area. A bee waiting in the dance area to make a decision about choosing a food source is called an onlooker, and a bee going to the food source she visited just before is named an employed bee. After sharing her information with onlookers, every employed bee goes to the food source area she visited in the previous cycle, since that food source exists in her memory, and then chooses a new food source by means of visual information in the neighbourhood of the one in her memory and evaluates its nectar amount. At the second step, an onlooker prefers a food source area depending on the nectar information distributed by the employed bees in the dance area: as the nectar amount of a food source increases, the probability of that food source being chosen also increases. After arriving at the selected area, she chooses a new food source in the neighbourhood of the one in her memory depending on visual information, as in the case of employed bees; the determination of the new food source is carried out by the bees through visual comparison of food source positions. At the third step of the cycle, when the nectar of a food source is abandoned by the bees, a new food source is randomly determined by a scout bee and replaces the abandoned one. In our model, at each cycle at most one scout goes outside to search for a new food source, and the numbers of employed and onlooker bees are selected to be equal to each other. These three steps are repeated through a predetermined number of cycles, called the Maximum Cycle Number (MCN), or until a termination criterion is satisfied. An artificial onlooker bee chooses a food source depending on the probability value associated with that food source, p_i, calculated by the following expression (6):


p_i = fit_i / Σ_{n=1..SN} fit_n,   (6)

where fit_i is the fitness value of solution i, which is proportional to the nectar amount of the food source at position i, and SN is the number of food sources, which is equal to the number of employed bees or onlooker bees. In order to produce a candidate food position from the old one in memory, ABC uses the following expression (7):

v_ij = x_ij + φ_ij·(x_ij − x_kj),   (7)

where k ∈ {1, 2, ..., SN} and j ∈ {1, 2, ..., D} are randomly chosen indexes; although k is determined randomly, it has to be different from i. φ_ij is a random number in [−1, 1]. It controls the production of neighbour food sources around x_ij and represents a bee's visual comparison of two food positions. As can be seen from (7), as the difference between the parameters x_ij and x_kj decreases, the perturbation of the position x_ij decreases too. Thus, as the search approaches the optimum solution in the search space, the step length is adaptively reduced. If a parameter value produced by this operation exceeds its predetermined limit, the parameter can be set to an acceptable value; in this work, the value of a parameter exceeding its limit is set to its limit value. The food source whose nectar is abandoned by the bees is replaced with a new food source by the scouts. In ABC, this is simulated by producing a position randomly and replacing the abandoned one with it. In ABC, if a position cannot be improved further through a predetermined number of cycles, then that food source is assumed to be abandoned. The value of this predetermined number of cycles is an important control parameter of the ABC algorithm, called the "limit" for abandonment. Assume that the abandoned source is x_i and j ∈ {1, 2, ..., D}; then the scout discovers a new food source to replace x_i. This operation can be defined as in (8):

x_i^j = x_min^j + rand[0, 1]·(x_max^j − x_min^j).   (8)

After each candidate source position v_ij is produced and evaluated by the artificial bee, its performance is compared with that of its old one. If the new food source has equal or better nectar than the old source, it replaces the old one in memory; otherwise, the old one is retained. In other words, a greedy selection mechanism is employed as the selection operation between the old source and the candidate.

In total, the ABC algorithm employs four different selection processes:

(1) a global probabilistic selection process, in which the probability value is calculated by (6), used by the onlooker bees for discovering promising regions;
(2) a local probabilistic selection process, carried out in a region by the employed bees and the onlookers depending on visual information such as the colour, shape and fragrance of the flowers (sources), for determining a food source around the source in the memory as described by (7) (bees cannot identify the type of nectar source until they arrive at the right location and discriminate among the sources growing there by their scent);
(3) a local greedy selection process, carried out by onlooker and employed bees: if the nectar amount of the candidate source is better than that of the present one, the bee forgets the present one and memorizes the candidate source produced by (7); otherwise, she keeps the present one in memory;
(4) a random selection process, carried out by scouts as defined in (8).

It is clear from the above explanation that there are three control parameters in the basic ABC: the number of food sources, which is equal to the number of employed or onlooker bees (SN); the value of limit; and the maximum cycle number (MCN).

In the case of honeybees, the recruitment rate represents a measure of how quickly the bee colony finds and exploits a newly discovered food source. Artificial recruiting could similarly represent the speed with which feasible or good-quality solutions of difficult optimization problems can be discovered. The survival and progress of a bee colony depend on the rapid discovery and efficient utilization of the best food resources. Similarly, the successful solution of difficult engineering problems is connected to the relatively fast discovery of good solutions, especially for problems that need to be solved in real time. In a robust search process, exploration and exploitation must be carried out together. In the ABC algorithm, while onlookers and employed bees carry out the exploitation process in the search space, the scouts control the exploration process. Detailed pseudo-code of the ABC algorithm is given below:

1: Initialize the population of solutions x_i, i = 1, ..., SN
2: Evaluate the population
3: cycle = 1
4: repeat
5:   Produce new solutions v_i for the employed bees by using (7) and evaluate them
6:   Apply the greedy selection process for the employed bees
7:   Calculate the probability values p_i for the solutions x_i by (6)
8:   Produce the new solutions v_i for the onlookers from the solutions x_i selected depending on p_i and evaluate them
9:   Apply the greedy selection process for the onlookers
10:  Determine the abandoned solution for the scout, if it exists, and replace it with a new randomly produced solution x_i by (8)
11:  Memorize the best solution achieved so far
12:  cycle = cycle + 1
13: until cycle = MCN
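The pseudo-code above can be sketched as follows, using Eqs. (6)-(8) and the limit relation of Eq. (9) below. The sphere objective, the bounds and the fitness mapping fit = 1/(1 + f) are assumptions; the paper states only that fitness is proportional to nectar amount:

```python
# Sketch of the basic ABC cycle. Assumptions: sphere objective, symmetric
# bounds, fitness mapping fit = 1/(1 + f) for non-negative costs.
import random

def sphere(x):
    return sum(v * v for v in x)

def fit_of(cost):
    return 1.0 / (1.0 + cost) if cost >= 0 else 1.0 + abs(cost)

def abc(sn=25, dim=10, mcn=1000, lo=-5.0, hi=5.0):
    limit = sn * dim                                   # Eq. (9)
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(sn)]
    cost = [sphere(x) for x in foods]
    trial = [0] * sn
    best = min(cost)

    def neighbour(i):
        # Eq. (7): v_ij = x_ij + phi_ij * (x_ij - x_kj), phi_ij in [-1, 1]
        k = random.choice([s for s in range(sn) if s != i])
        j = random.randrange(dim)
        v = foods[i][:]
        v[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = max(lo, min(hi, v[j]))                  # clip to search bounds
        return v

    def greedy(i, v):
        c = sphere(v)
        if c < cost[i]:
            foods[i], cost[i], trial[i] = v, c, 0
        else:
            trial[i] += 1

    for _ in range(mcn):
        for i in range(sn):                            # employed bees
            greedy(i, neighbour(i))
        fits = [fit_of(c) for c in cost]
        total = sum(fits)
        for _ in range(sn):                            # onlookers, Eq. (6)
            r, acc, chosen = random.uniform(0, total), 0.0, 0
            for s, f in enumerate(fits):
                acc += f
                if acc >= r:
                    chosen = s
                    break
            greedy(chosen, neighbour(chosen))
        stale = max(range(sn), key=lambda s: trial[s]) # at most one scout
        if trial[stale] > limit:                       # Eq. (8)
            foods[stale] = [random.uniform(lo, hi) for _ in range(dim)]
            cost[stale], trial[stale] = sphere(foods[stale]), 0
        best = min(best, min(cost))
    return best

print(abc())
```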


4. Experiments 1: ABC vs. GA, DE and PSO

4.1. Settings

In all experiments in this section, the values of the common parameters used in each algorithm, such as population size and total evaluation number, were chosen to be the same. Population size was 50 and the maximum evaluation number was 500,000 for all functions. The other specific parameters of the algorithms are given below.

GA settings: In our experiments, we employed a binary-coded standard GA having evaluation, fitness scaling, seeded selection, random selection, crossover, mutation and elite units. Single-point crossover with a rate of 0.8 was employed. Mutation restores genetic diversity lost during the application of reproduction and crossover; the mutation rate in our experiments was 0.01. Stochastic uniform sampling was our selection method. The generation gap is the proportion of the population to be replaced; the chosen generation gap value in the experiments was 0.9.

DE settings: In DE, F is a real constant which affects the differential variation between two solutions; it was set to 0.5 in our experiments. The value of the crossover rate, which controls the change of the diversity of the population, was chosen to be 0.9, as recommended in [43].

PSO settings: The cognitive and social components (φ₁ and φ₂ in (5)) are constants that can be used to change the weighting between personal and population experience, respectively. In our experiments, the cognitive and social components were both set to 1.8. The inertia weight, which determines how the previous velocity of the particle influences the velocity in the next iteration, was 0.6, as recommended in [44].
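Collected in one place, the settings above might look as follows in a driver script; this is a sketch, and the dictionary layout and key names are ours, not from the authors' code:

```python
# Experiment settings from Section 4.1, gathered into one configuration
# object (the structure and names are illustrative assumptions).
settings = {
    "common": {"population_size": 50, "max_evaluations": 500_000, "runs": 30},
    "GA":  {"crossover_rate": 0.8, "mutation_rate": 0.01, "generation_gap": 0.9},
    "DE":  {"F": 0.5, "CR": 0.9},
    "PSO": {"phi1": 1.8, "phi2": 1.8, "inertia_weight": 0.6},
    "ABC": {"limit": "SN * D"},  # Eq. (9)
}
```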

Table 1
Benchmark functions used in Experiments 1. D: dimension; C: characteristic; U: unimodal; M: multimodal; S: separable; N: non-separable.

No | Function | Range | D | C | Formulation
1 | Stepint | [−5.12, 5.12] | 5 | US | f(x) = 25 + Σ_{i=1..5} ⌊x_i⌋
2 | Step | [−100, 100] | 30 | US | f(x) = Σ_i (⌊x_i + 0.5⌋)²
3 | Sphere | [−100, 100] | 30 | US | f(x) = Σ_i x_i²
4 | SumSquares | [−10, 10] | 30 | US | f(x) = Σ_i i·x_i²
5 | Quartic | [−1.28, 1.28] | 30 | US | f(x) = Σ_i i·x_i⁴ + random[0, 1)
6 | Beale | [−4.5, 4.5] | 2 | UN | f(x) = (1.5 − x₁ + x₁x₂)² + (2.25 − x₁ + x₁x₂²)² + (2.625 − x₁ + x₁x₂³)²
7 | Easom | [−100, 100] | 2 | UN | f(x) = −cos(x₁)·cos(x₂)·exp(−(x₁ − π)² − (x₂ − π)²)
8 | Matyas | [−10, 10] | 2 | UN | f(x) = 0.26(x₁² + x₂²) − 0.48·x₁x₂
9 | Colville | [−10, 10] | 4 | UN | f(x) = 100(x₁² − x₂)² + (x₁ − 1)² + (x₃ − 1)² + 90(x₃² − x₄)² + 10.1((x₂ − 1)² + (x₄ − 1)²) + 19.8(x₂ − 1)(x₄ − 1)
10 | Trid6 | [−D², D²] | 6 | UN | f(x) = Σ_{i=1..n} (x_i − 1)² − Σ_{i=2..n} x_i·x_{i−1}
11 | Trid10 | [−D², D²] | 10 | UN | same form as Trid6
12 | Zakharov | [−5, 10] | 10 | UN | f(x) = Σ_i x_i² + (Σ_i 0.5·i·x_i)² + (Σ_i 0.5·i·x_i)⁴
13 | Powell | [−4, 5] | 24 | UN | f(x) = Σ_{i=1..n/4} (x_{4i−3} + 10x_{4i−2})² + 5(x_{4i−1} − x_{4i})² + (x_{4i−2} − x_{4i−1})⁴ + 10(x_{4i−3} − x_{4i})⁴
14 | Schwefel 2.22 | [−10, 10] | 30 | UN | f(x) = Σ_i |x_i| + Π_i |x_i|
15 | Schwefel 1.2 | [−100, 100] | 30 | UN | f(x) = Σ_i (Σ_{j=1..i} x_j)²
16 | Rosenbrock | [−30, 30] | 30 | UN | f(x) = Σ_{i=1..n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]
17 | Dixon-Price | [−10, 10] | 30 | UN | f(x) = (x₁ − 1)² + Σ_{i=2..n} i·(2x_i² − x_{i−1})²
18 | Foxholes | [−65.536, 65.536] | 2 | MS | f(x) = [1/500 + Σ_{j=1..25} 1/(j + Σ_{i=1,2} (x_i − a_ij)⁶)]⁻¹
19 | Branin | [−5, 10] × [0, 15] | 2 | MS | f(x) = (x₂ − (5.1/4π²)·x₁² + (5/π)·x₁ − 6)² + 10(1 − 1/8π)·cos(x₁) + 10
20 | Bohachevsky1 | [−100, 100] | 2 | MS | f(x) = x₁² + 2x₂² − 0.3·cos(3πx₁) − 0.4·cos(4πx₂) + 0.7
21 | Booth | [−10, 10] | 2 | MS | f(x) = (x₁ + 2x₂ − 7)² + (2x₁ + x₂ − 5)²
22 | Rastrigin | [−5.12, 5.12] | 30 | MS | f(x) = Σ_i [x_i² − 10·cos(2πx_i) + 10]
23 | Schwefel | [−500, 500] | 30 | MS | f(x) = −Σ_i x_i·sin(√|x_i|)
24 | Michalewicz2 | [0, π] | 2 | MS | f(x) = −Σ_i sin(x_i)·(sin(i·x_i²/π))^{2m}, m = 10
25 | Michalewicz5 | [0, π] | 5 | MS | same form, m = 10
26 | Michalewicz10 | [0, π] | 10 | MS | same form, m = 10


ABC settings: Apart from the common parameters (population size and maximum evaluation number), the basic ABC used in this study employs only one control parameter, which is called limit.
Table 2
Benchmark functions used in Experiments 1 (continued). D: dimension; C: characteristic; U: unimodal; M: multimodal; S: separable; N: non-separable.

No | Function | Range | D | C | Formulation
27 | Schaffer | [−100, 100] | 2 | MN | f(x) = 0.5 + (sin²(√(x₁² + x₂²)) − 0.5) / (1 + 0.001(x₁² + x₂²))²
28 | Six Hump Camel Back | [−5, 5] | 2 | MN | f(x) = 4x₁² − 2.1x₁⁴ + (1/3)x₁⁶ + x₁x₂ − 4x₂² + 4x₂⁴
29 | Bohachevsky2 | [−100, 100] | 2 | MN | f(x) = x₁² + 2x₂² − 0.3·cos(3πx₁)·cos(4πx₂) + 0.3
30 | Bohachevsky3 | [−100, 100] | 2 | MN | f(x) = x₁² + 2x₂² − 0.3·cos(3πx₁ + 4πx₂) + 0.3
31 | Shubert | [−10, 10] | 2 | MN | f(x) = (Σ_{i=1..5} i·cos((i + 1)x₁ + i)) · (Σ_{i=1..5} i·cos((i + 1)x₂ + i))
32 | GoldStein-Price | [−2, 2] | 2 | MN | f(x) = [1 + (x₁ + x₂ + 1)²·(19 − 14x₁ + 3x₁² − 14x₂ + 6x₁x₂ + 3x₂²)] · [30 + (2x₁ − 3x₂)²·(18 − 32x₁ + 12x₁² + 48x₂ − 36x₁x₂ + 27x₂²)]
33 | Kowalik | [−5, 5] | 4 | MN | f(x) = Σ_{i=1..11} [a_i − x₁(b_i² + b_i·x₂)/(b_i² + b_i·x₃ + x₄)]²
34 | Shekel5 | [0, 10] | 4 | MN | f(x) = −Σ_{i=1..5} [(x − a_i)(x − a_i)ᵀ + c_i]⁻¹
35 | Shekel7 | [0, 10] | 4 | MN | f(x) = −Σ_{i=1..7} [(x − a_i)(x − a_i)ᵀ + c_i]⁻¹
36 | Shekel10 | [0, 10] | 4 | MN | f(x) = −Σ_{i=1..10} [(x − a_i)(x − a_i)ᵀ + c_i]⁻¹
37 | Perm | [−D, D] | 4 | MN | f(x) = Σ_{k=1..n} [Σ_{i=1..n} (iᵏ + β)·((x_i/i)ᵏ − 1)]²
38 | PowerSum | [0, D] | 4 | MN | f(x) = Σ_{k=1..n} [(Σ_{i=1..n} x_iᵏ) − b_k]²
39 | Hartman3 | [0, 1] | 3 | MN | f(x) = −Σ_{i=1..4} c_i·exp(−Σ_{j=1..3} a_ij·(x_j − p_ij)²)
40 | Hartman6 | [0, 1] | 6 | MN | f(x) = −Σ_{i=1..4} c_i·exp(−Σ_{j=1..6} a_ij·(x_j − p_ij)²)
41 | Griewank | [−600, 600] | 30 | MN | f(x) = (1/4000)·Σ_i x_i² − Π_i cos(x_i/√i) + 1
42 | Ackley | [−32, 32] | 30 | MN | f(x) = −20·exp(−0.2·√((1/n)·Σ_i x_i²)) − exp((1/n)·Σ_i cos(2πx_i)) + 20 + e
43 | Penalized | [−50, 50] | 30 | MN | f(x) = (π/n)·{10·sin²(πy₁) + Σ_{i=1..n−1} (y_i − 1)²·[1 + 10·sin²(πy_{i+1})] + (y_n − 1)²} + Σ_i u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a
44 | Penalized2 | [−50, 50] | 30 | MN | f(x) = 0.1·{sin²(πx₁) + Σ_{i=1..n−1} (x_i − 1)²·[1 + sin²(3πx_{i+1})] + (x_n − 1)²·[1 + sin²(2πx_n)]} + Σ_i u(x_i, 5, 100, 4)

Table 3
Benchmark functions used in Experiments 1 (continued). D: dimension; C: characteristic; U: unimodal; M: multimodal; S: separable; N: non-separable.

No | Function | Range | D | C | Formulation
45 | Langerman2 | [0, 10] | 2 | MN | f(x) = −Σ_{i=1..m} c_i·[exp(−(1/π)·Σ_{j=1..n} (x_j − a_ij)²)·cos(π·Σ_{j=1..n} (x_j − a_ij)²)]
46 | Langerman5 | [0, 10] | 5 | MN | same form as Langerman2
47 | Langerman10 | [0, 10] | 10 | MN | same form as Langerman2
48 | FletcherPowell2 | [−π, π] | 2 | MN | f(x) = Σ_{i=1..n} (A_i − B_i)², where A_i = Σ_{j=1..n} (a_ij·sin α_j + b_ij·cos α_j) and B_i = Σ_{j=1..n} (a_ij·sin x_j + b_ij·cos x_j)
49 | FletcherPowell5 | [−π, π] | 5 | MN | same form as FletcherPowell2
50 | FletcherPowell10 | [−π, π] | 10 | MN | same form as FletcherPowell2


A food source is assumed to be abandoned, and is not exploited anymore, when the limit is exceeded for that source. This means that a solution whose "trial" counter exceeds the limit value cannot be improved further. We defined a relation (9) for the limit value using the dimension of the problem and the colony size:

limit = SN × D,   (9)

where D is the dimension of the problem and SN is the number of food sources or employed bees.

Table 4
a and c parameters of the Langerman function (a_ij for i = 1, ..., 30 and j = 1, ..., 10; each line below gives one column j, with i running from 1 to 30).

a_ij, j=1: 9.681 9.400 8.025 2.196 8.074 7.650 1.256 8.314 0.226 7.305 0.652 2.699 8.327 2.132 4.707 8.304 8.632 4.887 2.440 6.306 0.652 5.558 3.352 8.798 1.460 0.432 0.679 4.263 9.496 4.138
a_ij, j=2: 0.667 2.041 9.152 0.415 8.777 5.658 3.605 2.261 8.858 2.228 7.027 3.516 3.897 7.006 5.579 7.559 4.409 9.112 6.686 8.583 2.343 1.272 7.549 0.880 8.057 8.645 2.800 1.074 4.830 2.562
a_ij, j=3: 4.783 3.788 5.114 5.649 3.467 0.720 8.623 4.224 1.420 1.242 0.508 5.874 2.017 7.136 4.080 8.567 4.832 0.170 4.299 6.084 1.370 5.756 9.817 2.370 1.336 8.774 5.523 7.286 3.150 2.532
a_ij, j=4: 9.095 7.931 7.621 6.979 1.863 2.764 6.905 1.781 0.945 5.928 4.876 4.119 9.570 2.641 0.581 0.322 5.768 8.967 1.007 1.138 0.821 9.857 9.437 0.168 7.217 0.249 3.049 5.599 8.270 9.661
a_ij, j=5: 3.517 2.882 4.564 9.510 6.708 3.278 0.584 4.124 1.622 9.133 8.807 4.461 9.825 1.882 9.698 7.128 7.050 9.693 7.008 4.350 1.310 2.279 8.687 1.701 7.914 8.081 2.968 8.291 5.079 5.611
a_ij, j=6: 9.325 2.672 4.711 9.166 6.349 5.283 8.133 0.932 4.698 1.826 4.632 7.496 1.150 5.943 8.542 8.392 6.715 9.867 1.427 3.134 1.063 2.764 4.167 3.680 3.615 7.461 7.225 5.200 1.231 5.500
a_ij, j=7: 6.544 3.568 2.996 6.304 4.534 7.474 6.071 8.129 6.228 4.060 5.808 8.817 1.395 7.273 8.077 1.472 1.711 7.508 9.398 7.853 0.689 1.284 2.570 1.231 9.981 4.416 6.730 9.214 5.731 6.886
a_ij, j=8: 0.211 1.284 6.126 6.054 0.276 6.274 6.888 8.658 9.096 5.204 6.937 0.690 3.885 7.691 8.515 8.524 4.323 7.770 8.480 6.061 8.819 1.677 6.540 2.390 9.198 0.652 4.199 8.272 9.494 2.341
a_ij, j=9: 5.122 7.033 0.734 9.377 7.633 1.409 4.187 1.208 0.972 8.713 3.291 6.593 6.354 2.880 9.231 2.277 4.405 8.382 9.950 7.457 8.833 1.244 0.228 2.499 5.292 4.002 9.614 4.398 1.883 9.699
a_ij, j=10: 2.020 7.374 4.982 1.426 1.567 8.208 5.448 5.762 7.637 8.247 7.016 9.789 0.109 0.564 4.670 7.826 4.591 6.740 1.675 2.258 9.070 1.234 0.027 0.064 1.224 4.644 9.229 4.506 9.732 6.500
c_i, i=1..30: 0.806 0.517 1.5 0.908 0.965 0.669 0.524 0.902 0.531 0.876 0.462 0.491 0.463 0.714 0.352 0.869 0.813 0.811 0.828 0.964 0.789 0.360 0.369 0.992 0.332 0.817 0.632 0.883 0.608 0.326

Table 5
A parameters of the Fletcher–Powell function (A_ij for i = 1, ..., 20 and j = 1, ..., 20; each line below gives one column j, with i running from 1 to 20).

A_ij, j=1: −79 91 −38 −78 −1 34 52 81 5 −50 24 −30 56 84 −21 35 −91 53 83 52
A_ij, j=2: 56 −9 8 −18 −43 −96 −46 47 −43 66 98 −63 3 1 −8 93 73 66 48 −52
A_ij, j=3: −62 −18 −12 −49 93 26 −69 35 −45 −47 −50 −32 88 73 −48 −98 51 23 67 −5
A_ij, j=4: −9 −59 −73 65 −18 −56 99 55 46 −75 68 −90 38 43 68 −88 68 10 27 19
A_ij, j=5: 92 99 40 66 −76 −36 −47 67 56 89 −97 −35 −14 84 −91 −8 96 −33 91 −25
A_ij, j=6: 48 −45 26 −40 −68 −85 −72 −13 −94 −16 −64 44 −15 −99 17 64 49 62 −33 15
A_ij, j=7: −22 88 −64 88 −42 −62 −11 33 −62 82 −24 −64 84 −35 −52 15 10 −73 −90 −1
A_ij, j=8: −34 −14 29 −95 22 13 55 14 52 6 81 57 −9 24 −99 69 −13 22 −34 −11
A_ij, j=9: −39 −29 −82 −57 46 93 −55 83 66 −85 −59 27 65 −78 −23 −65 −6 −65 39 8
A_ij, j=10: −40 26 −32 10 −14 78 91 −42 55 −62 −7 87 −20 −58 43 −86 −23 37 −36 −70
A_ij, j=11: −95 71 −89 −98 69 −43 −30 8 −86 −30 85 −70 −75 47 −8 58 50 −83 −68 −4
A_ij, j=12: −69 −65 −3 −11 27 96 7 −45 −29 31 −92 −39 −37 −83 −5 −44 −89 −65 17 −7
A_ij, j=13: −20 19 88 −16 −12 77 −35 −44 −52 −7 2 −18 74 94 −98 −9 19 59 −7 −4
A_ij, j=14: −66 45 98 −55 −26 65 23 12 −71 −75 61 −89 66 −86 −17 −94 −67 −51 14 −6
A_ij, j=15: −98 88 53 33 57 −34 −20 100 −91 −26 52 99 −44 −65 −62 68 36 −56 11 48
A_ij, j=16: −66 18 58 84 −13 −52 55 −9 −46 −24 −59 40 72 63 −79 −27 −97 98 −10 88
A_ij, j=17: −67 −11 45 21 0 82 61 −33 27 46 −91 14 74 −22 60 −79 0 −57 96 13
A_ij, j=18: 37 −81 −39 −43 1 18 −39 −11 −27 −95 74 −58 90 65 −18 −67 3 −11 98 −56
A_ij, j=19: −83 −10 34 45 56 −59 −58 21 6 −71 −99 −5 −83 50 54 −35 1 −48 −32 85
A_ij, j=20: −45 42 −23 100 17 −55 13 14 67 −57 −95 −42 −40 −40 74 −56 39 88 56 −65


4.2. Benchmark functions

In the field of evolutionary computation, it is common to compare different algorithms using a large test set, especially when the test involves function optimization. Attempting to design a perfect test set, in which every function is present in order to determine whether one algorithm is better than another on every problem, is a fruitless task. For that reason, when an algorithm is evaluated we must identify the kinds of problems on which its performance is good, in order to characterize the types of problems for which the algorithm is suitable [45]. We used 50 benchmark problems to test the performance of the GA, DE, PSO and ABC algorithms. This set is large enough to include many different kinds of problems: unimodal, multimodal, regular, irregular, separable, non-separable and multidimensional. The initial range, formulation, characteristics and dimensions of these problems are listed in Tables 1-3.

If a function has more than one local optimum, it is called multimodal. Multimodal functions are used to test the ability of algorithms to escape from local minima: if the exploration process of an algorithm is so poor that it cannot search the whole space efficiently, the algorithm gets stuck at local minima. Functions having flat surfaces are difficult for algorithms, as are multimodal functions, since the flatness of the function gives the algorithm no information to direct the search process towards the minima (Stepint, Matyas, PowerSum). Another group of test problems is separable/non-separable functions.
Table 6
B parameters of the Fletcher–Powell function (B_ij for i = 1, ..., 21 and j = 1, ..., 21; each line below gives one column j, with i running from 1 to 21).

B_ij, j=1: −65 59 21 −91 −79 −89 −76 −50 −23 −5 85 80 −66 80 94 −29 76 −8 92 −21 −86
B_ij, j=2: −11 67 −23 −75 99 −35 45 −88 −95 −57 −9 −47 −98 −95 85 54 −11 48 87 35 31
B_ij, j=3: 76 49 −80 20 −31 −55 74 93 99 −30 −14 38 −4 82 −31 −10 −48 95 63 16 −80
B_ij, j=4: 78 −45 86 −64 −8 75 12 68 62 −6 27 −6 96 5 37 −68 38 47 −63 −17 −79
B_ij, j=5: 30 52 86 −15 −67 15 −12 10 −37 −96 −45 43 −11 −68 −25 −54 −7 39 −80 54 −5
B_ij, j=6: 93 −33 −30 17 −72 −6 −69 −13 96 75 70 −59 88 −54 60 −24 86 −11 96 −22 11
B_ij, j=7: −86 −34 39 −89 −43 −53 2 84 27 25 55 91 −99 64 55 −16 −55 −72 −62 −93 −20
B_ij, j=8: −99 29 −73 36 −55 −56 71 −21 69 −6 26 −41 5 −2 −13 21 51 −95 71 27 9
B_ij, j=9: −37 −39 −91 −49 76 −96 75 65 −64 96 −87 90 5 5 48 32 26 −17 −58 88 52
B_ij, j=10: 52 −80 5 −2 −57 87 −60 14 −92 77 −98 −63 58 10 −23 33 8 33 17 0 −38
B_ij, j=11: −20 22 83 56 1 −90 −50 4 −12 −35 −25 11 −53 85 −50 −27 −96 65 −89 −67 67
B_ij, j=12: −10 7 −2 −6 −58 −93 23 92 87 −10 −12 −54 52 −33 84 48 99 96 −35 94 64
B_ij, j=13: −97 3 −45 76 3 52 0 11 93 82 60 33 −98 −54 −71 37 69 −52 −96 −24 −49
B_ij, j=14: −71 −19 −54 56 −59 −86 6 67 −19 82 −45 −61 −97 −30 54 −61 −84 −17 −79 −30 23
B_ij, j=15: 16 −15 −81 2 30 −38 44 −18 −99 97 −24 74 50 −65 47 97 41 −22 7 −90 −86
B_ij, j=16: 9 7 −8 −68 32 −55 −82 −51 −92 −39 −42 96 49 58 18 45 78 −15 46 −5 39
B_ij, j=17: −99 −83 14 −59 57 −53 37 4 −34 −65 −88 21 97 40 −67 −90 −53 −91 −74 −48 −97
B_ij, j=18: −84 −4 83 −70 29 94 91 21 −77 −8 −46 −77 −62 −21 −30 87 4 −41 88 45 76
B_ij, j=19: 90 84 73 48 66 98 84 −38 17 34 −95 −58 79 −84 5 −95 29 −16 93 −90 10
B_ij, j=20: −18 −60 45 2 50 4 −15 75 −72 72 53 −75 −10 −66 −46 85 38 23 −44 32 81
B_ij, j=21: −94 −4 32 24 −80 −79 −63 −59 29 65 28 −9 −80 −11 53 67 16 14 52 −81 20

Table 7
α parameters of the Fletcher–Powell function (α_j, j = 1, ..., 20):
−2.7910 2.5623 −1.0429 0.5097 −2.8096 1.1883 2.0771 −2.9926 0.0715 0.4142 −2.5010 1.7731 1.6473 0.4934 2.1038 −1.9930 0.3813 −2.2144 −2.5572 2.9449


A p-variable separable function can be expressed as the sum of p functions of one variable, while non-separable functions cannot be written in this form. Non-separable functions have interrelations among their variables and are therefore more difficult than separable functions. The dimensionality of the search space is another important issue for a problem [45].

In some functions the region containing the global minimum is very small compared to the whole search space (Easom, Michalewicz (m = 10), Powell), or the global minimum is very close to the local ones (Perm, Kowalik and Schaffer). As for multimodal functions, if an algorithm cannot explore the search space properly and cannot keep up with the direction changes in functions having a narrow curving valley (Beale, Colville, Rosenbrock), it fails on these kinds of problems. Another problem that algorithms suffer from is scaling, with differences of many orders of magnitude between the domain and the function hypersurface [46] (Goldstein-Price, Trid). The Fletcher–Powell and Langerman functions are non-symmetrical, and their local optima are randomly distributed; in this way, the objective function has no implicit symmetry advantages that might simplify optimization for certain algorithms. The Fletcher–Powell function achieves the random distribution of its optima through the choice of the values of the matrices [45]. The Quartic function is padded with noise, and the expected fitness depends on the random noise generator (Gaussian or uniform); the random noise ensures that the algorithm never gets the same value at the same point. Algorithms that do not do well on this test function will do poorly on noisy data. Step has one minimum and is a discontinuous function. The Penalized functions are difficult due to the combinations of different periods of the sine function [47].

In Tables 1-3, the characteristics of each function are given under the column titled C. In this column, M means that the function is multimodal, while U means that the function is unimodal; if the function is separable, the abbreviation S indicates this, and the letter N indicates that the function is non-separable. In our set, as seen from Tables 1-3, 33 functions are multimodal, 17 are unimodal, 14 are separable and 36 are non-separable. An increase in the dimension of a function increases its difficulty. The dimensions of the problems used can be found in Tables 1-3 under the column titled D.
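For concreteness, a few of the benchmarks in Tables 1-3 written out as Python functions; this is a sketch, and representing x as a list of floats is our own choice:

```python
# A few benchmark functions from Tables 1-3 (x is a list of floats).
import math

def rosenbrock(x):   # unimodal, non-separable, narrow curving valley
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):    # multimodal, separable
    return sum(v ** 2 - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):     # multimodal, non-separable
    s = sum(v ** 2 for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1
```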

Table 8
a parameters of the Foxholes function (a_ij, i = 1, 2; j = 1, ..., 25).

a_1j: −32 −16 0 16 32 −32 −16 0 16 32 −32 −16 0 16 32 −32 −16 0 16 32 −32 −16 0 16 32
a_2j: −32 −32 −32 −32 −32 −16 −16 −16 −16 −16 0 0 0 0 0 16 16 16 16 16 32 32 32 32 32

Table 9
a and b parameters of the Kowalik function.

i:      1      2      3      4      5      6      7      8      9      10     11
a_i:    0.1957 0.1947 0.1735 0.1600 0.0844 0.0627 0.0456 0.0342 0.0323 0.0235 0.0246
b_i⁻¹:  0.25   0.5    1      2      4      6      8      10     12     14     16


Table 4 presents the a and c parameters of the Langerman function, Tables 5-7 present the A, B and α parameters of the Fletcher–Powell function, and Tables 8-12 present the parameters of the Foxholes, Kowalik, Shekel family, Hartman3 and Hartman6 functions, respectively.

4.3. Results

In Experiments 1, we compared the GA, PSO, DE and ABC algorithms on the large set of functions described in the previous section and listed in Tables 1-3. Each experiment in this section was repeated 30 times with different random seeds, and the mean best values produced by the algorithms were recorded. In order to make the comparison clear, values below 10⁻¹² are assumed to be 0. The mean best values, standard deviations and standard errors of means are given in Tables 13-15.

In order to analyze whether there are significant differences between the results of the algorithms, we performed t-tests on pairs of algorithms. We calculated p-values for each function but did not directly compare each p-value to α to determine significance, because probabilistic algorithms can produce significantly different results even if they start from random values drawn from the same distribution [48]. Hence, we adopted the Modified Bonferroni Correction to address this issue. The Modified Bonferroni Correction ranks the calculated p-values in ascending order, and the significance level α is obtained by dividing the 0.05 level by the inverse rank; if p < α, the difference between the algorithms on that function is considered significant. These statistical tests are given in Tables 16-18.

From the results in Tables 13-15, on functions lying on flat surfaces such as Stepint and Matyas, all algorithms have equal performance, since all algorithms have operators providing diversity and variable step sizes. For producing the mutant vector, the DE algorithm uses the difference of randomly selected solution vectors, as in (1); the PSO algorithm uses random coefficients as multipliers for calculating difference vectors, as in (5); in the GA, the crossover operator provides this perturbation; and in the ABC algorithm, the operator in (7) performs it. Among these flat functions, on the PowerSum function none of the algorithms produced the optimum, but the result of the DE algorithm is better than those of the others. From Tables 16-18, for all algorithms there is no significant difference on the Goldstein-Price, Stepint, Beale, Easom, Matyas, Foxholes, Branin, Bohachevsky1, Booth, Six Hump Camel Back, Bohachevsky3, Shubert and Fletcher-Powell2 functions.
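A sketch of the correction procedure described above, under the reading that a p-value below its corrected level indicates significance; the p-values in the example call are made-up placeholders:

```python
# Modified Bonferroni Correction sketch: rank p-values ascending and compare
# each against 0.05 divided by its inverse rank (an interpretation of the
# procedure described in the text; the example p-values are placeholders).
def modified_bonferroni(p_values, alpha=0.05):
    m = len(p_values)
    ranked = sorted(enumerate(p_values), key=lambda t: t[1])
    results = {}
    for rank, (idx, p) in enumerate(ranked, start=1):
        inverse_rank = m - rank + 1          # smallest p gets largest divisor
        results[idx] = p < alpha / inverse_rank   # True means significant
    return results

print(modified_bonferroni([0.001, 0.04, 0.3]))
```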

Table 10
a and c parameters of the Shekel functions (a_ij, j = 1, ..., 4).

i  | a_i1 a_i2 a_i3 a_i4 | c_i
1  | 4    4    4    4    | 0.1
2  | 1    1    1    1    | 0.2
3  | 8    8    8    8    | 0.2
4  | 6    6    6    6    | 0.4
5  | 3    7    3    7    | 0.4
6  | 2    9    2    9    | 0.6
7  | 5    5    3    3    | 0.3
8  | 8    1    8    1    | 0.7
9  | 6    2    6    2    | 0.5
10 | 7    3.6  7    3.6  | 0.5

Table 11
a, c and p parameters of the 3-parameter Hartman function.

i | a_ij (j = 1, 2, 3) | c_i | p_ij (j = 1, 2, 3)
1 | 3   10 30 | 1   | 0.3689  0.1170 0.2673
2 | 0.1 10 35 | 1.2 | 0.4699  0.4387 0.7470
3 | 3   10 30 | 3   | 0.1091  0.8732 0.5547
4 | 0.1 10 35 | 3.2 | 0.03815 0.5743 0.8828

Table 12
a, c and p parameters of the 6-parameter Hartman function.

i | a_ij (j = 1, ..., 6)    | c_i | p_ij (j = 1, ..., 6)
1 | 10   3   17   3.5 1.7 8  | 1   | 0.1312 0.1696 0.5569 0.0124 0.8283 0.5886
2 | 0.05 10  17   0.1 8   14 | 1.2 | 0.2329 0.4135 0.8307 0.3736 0.1004 0.9991
3 | 3    3.5 1.7  10  17  8  | 3   | 0.2348 0.1415 0.3522 0.2883 0.3047 0.6650
4 | 17   8   0.05 10  0.1 14 | 3.2 | 0.4047 0.8828 0.8732 0.5743 0.1091 0.0381


Table 13
Statistical results of 30 runs obtained by the GA, PSO, DE and ABC algorithms. D: dimension; Min.: global minimum; each algorithm cell gives Mean / StdDev / SEM (mean of the best values / standard deviation of the best values / standard error of means).

No | Function | Range | D | Min. | GA | PSO | DE | ABC
1 | Stepint | [−5.12, 5.12] | 5 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
2 | Step | [−100, 100] | 30 | 0 | 1.17E+03 / 76.561450 / 13.978144 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
3 | Sphere | [−100, 100] | 30 | 0 | 1.11E+03 / 74.214474 / 13.549647 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
4 | SumSquares | [−10, 10] | 30 | 0 | 1.48E+02 / 12.4092893 / 2.265616 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
5 | Quartic | [−1.28, 1.28] | 30 | 0 | 0.1807 / 0.027116 / 0.004951 | 0.00115659 / 0.000276 / 5.04E−05 | 0.0013633 / 0.000417 / 7.61E−05 | 0.0300166 / 0.004866 / 0.000888
6 | Beale | [−4.5, 4.5] | 2 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
7 | Easom | [−100, 100] | 2 | −1 | −1 / 0 / 0 | −1 / 0 / 0 | −1 / 0 / 0 | −1 / 0 / 0
8 | Matyas | [−10, 10] | 2 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
9 | Colville | [−10, 10] | 4 | 0 | 0.014938 / 0.007364 / 0.001344 | 0 / 0 / 0 | 0.0409122 / 0.081979 / 0.014967 | 0.0929674 / 0.066277 / 0.012100
10 | Trid6 | [−D², D²] | 6 | −50 | −49.9999 / 2.25E−05 / 4.11E−06 | −50 / 0 / 0 | −50 / 0 / 0 | −50 / 0 / 0
11 | Trid10 | [−D², D²] | 10 | −210 | −209.476 / 0.193417 / 0.035313 | −210 / 0 / 0 | −210 / 0 / 0 | −210 / 0 / 0
12 | Zakharov | [−5, 10] | 10 | 0 | 0.013355 / 0.004532 / 0.000827 | 0 / 0 / 0 | 0 / 0 / 0 | 0.0002476 / 0.000183 / 3.34E−05
13 | Powell | [−4, 5] | 24 | 0 | 9.703771 / 1.547983 / 0.282622 | 0.00011004 / 0.000160 / 2.92E−05 | 2.17E−07 / 1.36E−07 / 2.48E−08 | 0.0031344 / 0.000503 / 9.18E−05
14 | Schwefel 2.22 | [−10, 10] | 30 | 0 | 11.0214 / 1.386856 / 0.253204 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
15 | Schwefel 1.2 | [−100, 100] | 30 | 0 | 7.40E+03 / 1.14E+03 / 208.1346 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
16 | Rosenbrock | [−30, 30] | 30 | 0 | 1.96E+05 / 3.85E+04 / 7029.106155 | 15.088617 / 24.170196 / 4.412854 | 18.203938 / 5.036187 / 0.033333 | 0.0887707 / 0.077390 / 0.014129
17 | Dixon-Price | [−10, 10] | 30 | 0 | 1.22E+03 / 2.66E+02 / 48.564733 | 0.66666667 / 1E−08 / 1.8257E−09 | 0.6666667 / 1E−09 / 1.8257E−10 | 0 / 0 / 0
18 | Foxholes | [−65.536, 65.536] | 2 | 0.998 | 0.998004 / 0 / 0 | 0.99800393 / 0 / 0 | 0.9980039 / 0 / 0 | 0.9980039 / 0 / 0
19 | Branin | [−5, 10] × [0, 15] | 2 | 0.398 | 0.397887 / 0 / 0 | 0.39788736 / 0 / 0 | 0.3978874 / 0 / 0 | 0.3978874 / 0 / 0
20 | Bohachevsky1 | [−100, 100] | 2 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0

From Table 16, on 20 functions there is no significant difference between the GA and ABC algorithms; on 28 functions ABC is better than GA, while GA is better than ABC on only 2 functions. From Table 17, on 22 functions the ABC and PSO algorithms show equal performance; on 4 of the remaining 28 functions PSO performs better than ABC, while ABC performs better on 24 functions. From Table 18, the DE and ABC algorithms show equal performance on 37 functions; on 5 functions DE is better than ABC, while on 8 functions ABC performs better.


Table 14 (Table 13 continued)
Statistical results of 30 runs obtained by GA, PSO, DE and ABC algorithms (functions 21–40). D: Dimension, Mean: Mean of the Best Values, StdDev: Standard Deviation of the Best Values, SEM: Standard Error of Means. Each algorithm cell gives Mean / StdDev / SEM.

| No | Function | Range | D | Min. | GA | PSO | DE | ABC |
|----|----------|-------|---|------|----|-----|----|-----|
| 21 | Booth | [-10, 10] | 2 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 |
| 22 | Rastrigin | [-5.12, 5.12] | 30 | 0 | 52.92259 / 4.564860 / 0.833426 | 43.9771369 / 11.728676 / 2.141353 | 11.716728 / 2.538172 / 0.463405 | 0 / 0 / 0 |
| 23 | Schwefel | [-500, 500] | 30 | -12569.5 | -11593.4 / 93.254240 / 17.025816 | -6909.1359 / 457.957783 / 83.611269 | -10266 / 521.849292 / 95.276209 | -12569.487 / 0 / 0 |
| 24 | Michalewicz2 | [0, π] | 2 | -1.8013 | -1.8013 / 0 / 0 | -1.5728692 / 0.119860 / 0.021883 | -1.801303 / 0 / 0 | -1.8013034 / 0 / 0 |
| 25 | Michalewicz5 | [0, π] | 5 | -4.6877 | -4.64483 / 0.097850 / 0.017865 | -2.4908728 / 0.256952 / 0.046913 | -4.683482 / 0.012529 / 0.002287 | -4.6876582 / 0 / 0 |
| 26 | Michalewicz10 | [0, π] | 10 | -9.6602 | -9.49683 / 0.141116 / 0.025764 | -4.0071803 / 0.502628 / 0.091767 | -9.591151 / 0.064205 / 0.011722 | -9.6601517 / 0 / 0 |
| 27 | Schaffer | [-100, 100] | 2 | 0 | 0.004239 / 0.004763 / 0.000870 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 |
| 28 | 6Hump CamelBack | [-5, 5] | 2 | -1.03163 | -1.03163 / 0 / 0 | -1.0316285 / 0 / 0 | -1.031628 / 0 / 0 | -1.0316285 / 0 / 0 |
| 29 | Bohachevsky2 | [-100, 100] | 2 | 0 | 0.06829 / 0.078216 / 0.014280 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 |
| 30 | Bohachevsky3 | [-100, 100] | 2 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 |
| 31 | Shubert | [-10, 10] | 2 | -186.73 | -186.731 / 0 / 0 | -186.73091 / 0 / 0 | -186.7309 / 0 / 0 | -186.73091 / 0 / 0 |
| 32 | GoldStein-Price | [-2, 2] | 2 | 3 | 5.250611 / 5.870093 / 1.071727 | 3 / 0 / 0 | 3 / 0 / 0 | 3 / 0 / 0 |
| 33 | Kowalik | [-5, 5] | 4 | 0.00031 | 0.005615 / 0.008171 / 0.001492 | 0.00049062 / 0.000366 / 6.68E-05 | 0.0004266 / 0.000273 / 4.98E-05 | 0.0004266 / 6.04E-5 / 1.10E-05 |
| 34 | Shekel5 | [0, 10] | 4 | -10.15 | -5.66052 / 3.866737 / 0.705966 | -2.0870079 / 1.178460 / 0.215156 | -10.1532 / 0 / 0 | -10.1532 / 0 / 0 |
| 35 | Shekel7 | [0, 10] | 4 | -10.4 | -5.34409 / 3.517134 / 0.642138 | -1.9898713 / 1.420602 / 0.259365 | -10.40294 / 0 / 0 | -10.402941 / 0 / 0 |
| 36 | Shekel10 | [0, 10] | 4 | -10.53 | -3.82984 / 2.451956 / 0.447664 | -1.8796753 / 0.432476 / 0.078959 | -10.53641 / 0 / 0 | -10.53641 / 0 / 0 |
| 37 | Perm | [-D, D] | 4 | 0 | 0.302671 / 0.193254 / 0.035283 | 0.03605158 / 0.048927 / 0.008933 | 0.0240069 / 0.046032 / 0.008404 | 0.0411052 / 0.023056 / 0.004209 |
| 38 | PowerSum | [0, D] | 4 | 0 | 0.010405 / 0.009077 / 0.001657 | 11.3904479 / 7.355800 / 1.342979 | 0.0001425 / 0.000145 / 2.65E-05 | 0.0029468 / 0.002289 / 0.000418 |
| 39 | Hartman3 | [0, 1] | 3 | -3.86 | -3.86278 / 0 / 0 | -3.6333523 / 0.116937 / 0.021350 | -3.862782 / 0 / 0 | -3.8627821 / 0 / 0 |
| 40 | Hartman6 | [0, 1] | 6 | -3.32 | -3.29822 / 0.050130 / 0.009152 | -1.8591298 / 0.439958 / 0.080325 | -3.226881 / 0.047557 / 0.008683 | -3.3219952 / 0 / 0 |
On the Beale function, which has a curving valley in the vicinity of the minimum, all algorithms show equal performance; but on the Colville function, which is of the same kind, only the PSO algorithm produces the minimum of the function. On the Rosenbrock function extended to 30 parameters, the ABC algorithm shows the highest performance compared to the others. These functions are non-separable functions having interdependence among the variables, as is the Griewank function.
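To make the interdependence concrete, the Rosenbrock sum can be written out directly from its standard formulation (also listed in Table 19); the sketch below is our own transcription, not code from the paper:

```python
def rosenbrock(x):
    """Rosenbrock function: the 100*(x[i+1] - x[i]**2)**2 term couples
    every variable to its neighbour, so no variable can be optimized
    independently of the others; this coupling is what makes the
    function non-separable."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```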


Table 15 (Table 13 continued)
Statistical results of 30 runs obtained by GA, PSO, DE and ABC algorithms (functions 41–50). D: Dimension, Mean: Mean of the Best Values, StdDev: Standard Deviation of the Best Values, SEM: Standard Error of Means. Each algorithm cell gives Mean / StdDev / SEM.

| No | Function | Range | D | Min. | GA | PSO | DE | ABC |
|----|----------|-------|---|------|----|-----|----|-----|
| 41 | Griewank | [-600, 600] | 30 | 0 | 10.63346 / 1.161455 / 0.212052 | 0.01739118 / 0.020808 / 0.003799 | 0.0014792 / 0.002958 / 0.000540 | 0 / 0 / 0 |
| 42 | Ackley | [-32, 32] | 30 | 0 | 14.67178 / 0.178141 / 0.032524 | 0.16462236 / 0.493867 / 0.090167 | 0 / 0 / 0 | 0 / 0 / 0 |
| 43 | Penalized | [-50, 50] | 30 | 0 | 13.3772 / 1.448726 / 0.2645 | 0.0207338 / 0.041468 / 0.007571 | 0 / 0 / 0 | 0 / 0 / 0 |
| 44 | Penalized2 | [-50, 50] | 30 | 0 | 125.0613 / 12.001204 / 2.191110 | 0.00767535 / 0.016288 / 0.002974 | 0.0021975 / 0.004395 / 0.000802 | 0 / 0 / 0 |
| 45 | Langerman2 | [0, 10] | 2 | -1.08 | -1.08094 / 0 / 0 | -0.679268 / 0.274621 / 0.050139 | -1.080938 / 0 / 0 | -1.0809384 / 0 / 0 |
| 46 | Langerman5 | [0, 10] | 5 | -1.5 | -0.96842 / 0.287548 / 0.052499 | -0.5048579 / 0.213626 / 0.039003 | -1.499999 / 0 / 0 | -0.938150 / 0.000208 / 3.80E-05 |
| 47 | Langerman10 | [0, 10] | 10 | NA | -0.63644 / 0.374682 / 0.068407 | -0.0025656 / 0.003523 / 0.000643 | -1.0528 / 0.302257 / 0.055184 | -0.4460925 / 0.133958 / 0.024457 |
| 48 | FletcherPowell2 | [-π, π] | 2 | 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 |
| 49 | FletcherPowell5 | [-π, π] | 5 | 0 | 0.004303 / 0.009469 / 0.001729 | 1457.88344 / 1269.362389 / 231.752805 | 5.988783 / 7.334731 / 1.339133 | 0.1735495 / 0.068175 / 0.012447 |
| 50 | FletcherPowell10 | [-π, π] | 10 | 0 | 29.57348 / 16.021078 / 2.925035 | 1364.45555 / 1325.379655 / 241.985 | 781.55028 / 1048.813487 / 191.486 | 8.2334401 / 8.092742 / 1.477526 |
On the non-symmetrical Langerman functions, for the 2-parameter case ABC and DE show equal performance, while for the 5- and 10-parameter cases the DE algorithm performs best. Another conclusion that can be drawn is that the efficiency of ABC becomes much clearer as the number of variables increases. As seen from Tables 1–3, there are 14 functions with 30 variables in the experiments. ABC outperforms DE on 6, PSO on 8 and GA on 14 of these 14 functions. Four of the functions on which each of ABC and DE is unsuccessful are unimodal (Colville, Zakharov, Powell and Quartic for ABC; Colville, Quartic, Rosenbrock and Dixon-Price for DE). ABC is unsuccessful on 5 multimodal functions, while DE is unsuccessful on 9 multimodal functions. Consequently, ABC is more successful than DE, PSO and GA, and the most robust, on the multimodal functions included in the set.

As mentioned in Section 4, a GA using binary representation was employed in the experiments. It should be noted that a GA using floating-point representation could be faster and more consistent from run to run and could provide higher precision for some specific problems [49].

As seen from Fig. 2, an overall evaluation can be made by using the mean absolute error values produced by the algorithms. In the calculation of the mean absolute errors, first the absolute differences between the values found by the algorithms and the optima of the corresponding functions were computed. Second, the total absolute error was found for each algorithm. Finally, the mean absolute error was calculated by dividing the total absolute error by the number of test functions. Since the minimum of the Langerman10 function is not available, the mean absolute error was calculated over 49 functions. Because there is a big difference between the mean absolute error value of the ABC algorithm and those of the other algorithms, the graph in Fig. 2 shows the logarithmic error values obtained by the algorithms. It is clear that ABC has the smallest mean absolute error; when the results of all functions are evaluated together, the ABC algorithm is the most successful one.
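A small sketch of this three-step calculation (our own illustration; the function name and the None convention for the unknown Langerman10 optimum are assumptions):

```python
def mean_absolute_error(found_values, optima):
    """Mean absolute error over the test set: sum the absolute gaps
    between the value found by an algorithm and the known optimum, then
    divide by the number of functions with a known minimum (49 here,
    since the optimum of Langerman10 is not available)."""
    errors = [abs(found - opt)
              for found, opt in zip(found_values, optima)
              if opt is not None]  # skip functions whose optimum is unknown
    return sum(errors) / len(errors)
```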

5. ABC vs. evolution strategies

5.1. Experiments 2

5.1.1. Settings of Experiments 2

The experiments in this section constitute a comparison of the ABC algorithm with Evolution Strategies. The versions of Evolution Strategies considered are CES, FES, CMA-ES and ESLAT, whose results were taken from the work presented in [41]. For each experiment, the ABC algorithm was run 50 times, and a run was terminated when it reached the maximum number of evaluations or when it reached the global minimum within a gap of $10^{-3}$, as used in [41].


Table 16
Significance test for GA and ABC. t: t-value of Student's t-test, SED: Standard Error of Difference, p: p-value calculated for the t-value, R.: Rank of p-value, I.R.: Inverse Rank of p-value, Sign.: Significance.

| No | Function | t | SED | p | R. | I.R. | New α | Sign. |
|----|----------|---|-----|---|----|------|-------|-------|
| 42 | Ackley | 451.107 | 0.033 | 0 | 1 | 50 | 0.001 | ABC |
| 2 | Step | 83.7021 | 13.978 | 0 | 2 | 49 | 0.00102 | ABC |
| 3 | Sphere | 81.921 | 13.55 | 0 | 3 | 48 | 0.001042 | ABC |
| 4 | SumSquares | 65.3244 | 2.266 | 0 | 4 | 47 | 0.001064 | ABC |
| 22 | Rastrigin | 63.5001 | 0.833 | 0 | 5 | 46 | 0.001087 | ABC |
| 23 | Schwefel | 57.3298 | 17.026 | 0 | 6 | 45 | 0.001111 | ABC |
| 44 | Penalized2 | 57.0767 | 2.191 | 0 | 7 | 44 | 0.001136 | ABC |
| 43 | Penalized | 50.5754 | 0.264 | 0 | 8 | 43 | 0.001163 | ABC |
| 41 | Griewank | 50.1456 | 0.212 | 0 | 9 | 42 | 0.00119 | ABC |
| 14 | Schwefel2.22 | 43.5277 | 0.253 | 0 | 10 | 41 | 0.00122 | ABC |
| 15 | Schwefel1.2 | 35.5539 | 208.135 | 0 | 11 | 40 | 0.00125 | ABC |
| 13 | Powell | 34.3237 | 0.283 | 0 | 12 | 39 | 0.001282 | ABC |
| 5 | Quartic | 29.9584 | 0.005 | 0 | 13 | 38 | 0.001316 | ABC |
| 16 | Rosenbrock | 27.884 | 7029.106 | 0 | 14 | 37 | 0.001351 | ABC |
| 17 | Dixon-Price | 25.1211 | 48.565 | 0 | 15 | 36 | 0.001389 | ABC |
| 10 | Trid6 | 24.3432 | 0 | 0 | 16 | 35 | 0.001429 | ABC |
| 12 | Zakharov | 15.8047 | 0.001 | 0 | 17 | 34 | 0.001471 | ABC |
| 36 | Shekel10 | 14.9813 | 0.448 | 0 | 18 | 33 | 0.001515 | ABC |
| 11 | Trid10 | 14.8387 | 0.035 | 0 | 19 | 32 | 0.001563 | ABC |
| 49 | FletcherPowell5 | 13.4681 | 0.013 | 0 | 20 | 31 | 0.001613 | GA |
| 35 | Shekel7 | 7.8781 | 0.642 | 0 | 21 | 30 | 0.001667 | ABC |
| 37 | Perm | 7.683 | 0.036 | 0 | 22 | 29 | 0.001724 | ABC |
| 50 | FletcherPowell10 | 6.512 | 3.277 | 0 | 23 | 28 | 0.001786 | ABC |
| 34 | Shekel5 | 6.3639 | 0.706 | 0 | 24 | 27 | 0.001852 | ABC |
| 26 | Michalewicz10 | 6.3391 | 0.026 | 0 | 25 | 26 | 0.001923 | ABC |
| 27 | Schaffer | 5.1869 | 0.008 | 0.000003 | 26 | 25 | 0.002 | ABC |
| 29 | Bohachevsky2 | 4.7821 | 0.014 | 0.000001 | 27 | 24 | 0.002083 | ABC |
| 38 | PowerSum | 4.3638 | 0.002 | 0.000053 | 28 | 23 | 0.002174 | ABC |
| 9 | Colville | 4.3138 | 0.018 | 0.000063 | 29 | 22 | 0.002273 | GA |
| 33 | Kowalik | 3.4778 | 0.001 | 0.000965 | 30 | 21 | 0.002381 | ABC |
| 47 | Langerman10 | 2.6201 | 0.073 | 0.011202 | 31 | 20 | 0.0025 | – |
| 40 | Hartman6 | 2.5977 | 0.009 | 0.011876 | 32 | 19 | 0.002632 | – |
| 25 | Michalewicz5 | 2.3973 | 0.018 | 0.019758 | 33 | 18 | 0.002778 | – |
| 32 | GoldStein-Price | 2.1 | 1.072 | 0.040089 | 34 | 17 | 0.002941 | – |
| 46 | Langerman5 | 0.5766 | 0.052 | 0.5644 | 35 | 16 | 0.003125 | – |
| 1 | Stepint | -Inf | 0 | 1 | 36 | 15 | 0.003333 | – |
| 6 | Beale | -Inf | 0 | 1 | 37 | 14 | 0.003571 | – |
| 7 | Easom | -Inf | 0 | 1 | 38 | 13 | 0.003846 | – |
| 8 | Matyas | -Inf | 0 | 1 | 39 | 12 | 0.004167 | – |
| 18 | Foxholes | -Inf | 0 | 1 | 40 | 11 | 0.004545 | – |
| 19 | Branin | -Inf | 0 | 1 | 41 | 10 | 0.005 | – |
| 20 | Bohachevsky1 | -Inf | 0 | 1 | 42 | 9 | 0.005556 | – |
| 21 | Booth | -Inf | 0 | 1 | 43 | 8 | 0.00625 | – |
| 24 | Michalewicz2 | -Inf | 0 | 1 | 44 | 7 | 0.007143 | – |
| 28 | 6HumpCamelBack | -Inf | 0 | 1 | 45 | 6 | 0.008333 | – |
| 30 | Bohachevsky3 | -Inf | 0 | 1 | 46 | 5 | 0.01 | – |
| 31 | Shubert | -Inf | 0 | 1 | 47 | 4 | 0.0125 | – |
| 39 | Hartman3 | -Inf | 0 | 1 | 48 | 3 | 0.016667 | – |
| 45 | Langerman | -Inf | 0 | 1 | 49 | 2 | 0.025 | – |
| 48 | FletcherPowell2 | -Inf | 0 | 1 | 50 | 1 | 0.05 | – |

The settings of the CES, FES, CMA-ES and ESLAT algorithms can be found in [41,36]. In the ABC algorithm, the colony size was 20 and the maximum evaluation number was 100,000 for all functions. The value of the limit parameter was determined by (9), as in the experiments in Section 4.

5.1.2. Benchmark functions

In Experiments 2, the functions listed in Table 19 were employed. Sphere, Schwefel 1.2, Schwefel 2.21, Schwefel 2.22, Rosenbrock, Step and Quartic are unimodal, high-dimensional problems. Schwefel, Rastrigin, Ackley, Griewank, Penalized and Penalized 2 are multimodal, high-dimensional functions. Foxholes, Kowalik, Six Hump Camel Back, Branin, Goldstein-Price, and the Hartman and Shekel family functions are multimodal, low-dimensional functions, as mentioned in Section 4.2.

5.1.3. Results of Experiments 2

In Experiments 2, the performance of the ABC algorithm was compared with those of the Evolution Strategies CES, FES, CMA-ES and ESLAT. The results of these algorithms, in terms of both solution quality and solution cost, were taken from [41], as mentioned above.


Table 17
Significance test for PSO and ABC. t: t-value of Student's t-test, SED: Standard Error of Difference, p: p-value calculated for the t-value, R.: Rank of p-value, I.R.: Inverse Rank of p-value, Sign.: Significance.

| No | Function | t | SED | p | R. | I.R. | New α | Sign. |
|----|----------|---|-----|---|----|------|-------|-------|
| 17 | Dixon-Price | 3.65E+08 | 0 | 0 | 1 | 50 | 0.001 | ABC |
| 23 | Schwefel | 67.6984 | 83.611 | 0 | 2 | 49 | 0.00102 | ABC |
| 26 | Michalewicz10 | 61.6014 | 0.092 | 0 | 3 | 48 | 0.001042 | ABC |
| 25 | Michalewicz5 | 46.827 | 0.047 | 0 | 4 | 47 | 0.001064 | ABC |
| 34 | Shekel5 | 37.4899 | 0.215 | 0 | 5 | 46 | 0.001087 | ABC |
| 36 | Shekel10 | 33.3766 | 0.259 | 0 | 6 | 45 | 0.001111 | ABC |
| 35 | Shekel7 | 32.4372 | 0.259 | 0 | 7 | 44 | 0.001136 | ABC |
| 5 | Quartic | 32.433 | 0.001 | 0 | 8 | 43 | 0.001163 | PSO |
| 13 | Powell | 31.3832 | 0 | 0 | 9 | 42 | 0.00119 | PSO |
| 22 | Rastrigin | 20.5371 | 2.141 | 0 | 10 | 41 | 0.00122 | ABC |
| 40 | Hartman6 | 18.2118 | 0.08 | 0 | 11 | 40 | 0.00125 | ABC |
| 47 | Langerman10 | 18.1285 | 0.024 | 0 | 12 | 39 | 0.001282 | ABC |
| 46 | Langerman5 | 11.1093 | 0.03 | 0 | 13 | 38 | 0.001316 | ABC |
| 39 | Hartman3 | 10.7463 | 0.021 | 0 | 14 | 37 | 0.001351 | ABC |
| 24 | Michalewicz2 | 10.4387 | 0.022 | 0 | 15 | 36 | 0.001389 | ABC |
| 38 | PowerSum | 8.4793 | 1.343 | 0 | 16 | 35 | 0.001429 | ABC |
| 45 | Langerman2 | 8.0112 | 0.05 | 0 | 17 | 34 | 0.001471 | ABC |
| 9 | Colville | 7.683 | 0.012 | 0 | 18 | 33 | 0.001515 | PSO |
| 12 | Zakharov | 7.4107 | 0 | 0 | 19 | 32 | 0.001563 | PSO |
| 49 | FletcherPowell5 | 6.2899 | 231.753 | 0 | 20 | 31 | 0.001613 | ABC |
| 50 | FletcherPowell10 | 5.6046 | 241.985 | 0 | 21 | 30 | 0.001667 | ABC |
| 41 | Griewank | 4.5778 | 0.004 | 0.000025 | 22 | 29 | 0.001724 | ABC |
| 16 | Rosenbrock | 3.3991 | 4.413 | 0.001228 | 23 | 28 | 0.001786 | ABC |
| 43 | Penalized | 2.7386 | 0.008 | 0.008182 | 24 | 27 | 0.001852 | – |
| 44 | Penalized2 | 2.581 | 0.003 | 0.012403 | 25 | 26 | 0.001923 | – |
| 42 | Ackley | 1.8257 | 0.09 | 0.073045 | 26 | 25 | 0.002 | – |
| 33 | Kowalik | 0.9453 | 0 | 0.34843 | 27 | 24 | 0.002083 | – |
| 37 | Perm | 0.5118 | 0.001 | 0.61073 | 28 | 23 | 0.002174 | – |
| 1 | Stepint | -Inf | 0 | 1 | 29 | 22 | 0.002273 | – |
| 2 | Step | -Inf | 0 | 1 | 30 | 21 | 0.002381 | – |
| 3 | Sphere | -Inf | 0 | 1 | 31 | 20 | 0.0025 | – |
| 4 | SumSquares | -Inf | 0 | 1 | 32 | 19 | 0.002632 | – |
| 6 | Beale | -Inf | 0 | 1 | 33 | 18 | 0.002778 | – |
| 7 | Easom | -Inf | 0 | 1 | 34 | 17 | 0.002941 | – |
| 8 | Matyas | -Inf | 0 | 1 | 35 | 16 | 0.003125 | – |
| 10 | Trid6 | -Inf | 0 | 1 | 36 | 15 | 0.003333 | – |
| 11 | Trid10 | -Inf | 0 | 1 | 37 | 14 | 0.003571 | – |
| 14 | Schwefel2.22 | -Inf | 0 | 1 | 38 | 13 | 0.003846 | – |
| 15 | Schwefel1.2 | -Inf | 0 | 1 | 39 | 12 | 0.004167 | – |
| 18 | Foxholes | -Inf | 0 | 1 | 40 | 11 | 0.004545 | – |
| 19 | Branin | -Inf | 0 | 1 | 41 | 10 | 0.005 | – |
| 20 | Bohachevsky1 | -Inf | 0 | 1 | 42 | 9 | 0.005556 | – |
| 21 | Booth | -Inf | 0 | 1 | 43 | 8 | 0.00625 | – |
| 27 | Schaffer | -Inf | 0 | 1 | 44 | 7 | 0.007143 | – |
| 28 | 6HumpCamelBack | -Inf | 0 | 1 | 45 | 6 | 0.008333 | – |
| 29 | Bohachevsky2 | -Inf | 0 | 1 | 46 | 5 | 0.01 | – |
| 30 | Bohachevsky3 | -Inf | 0 | 1 | 47 | 4 | 0.0125 | – |
| 31 | Shubert | -Inf | 0 | 1 | 48 | 3 | 0.016667 | – |
| 32 | GoldStein-Price | -Inf | 0 | 1 | 49 | 2 | 0.025 | – |
| 48 | FletcherPowell2 | -Inf | 0 | 1 | 50 | 1 | 0.05 | – |

Each experiment was repeated 50 times. The mean values and standard deviations of the function values found after 50 runs are presented in Table 20. When the value found by an algorithm reached the global minimum within a gap of $10^{-3}$, the algorithm was terminated. If an algorithm could not find the global minimum at this precision within the maximum evaluation number, the value found in the last cycle was recorded. The solution costs of these runs are given in Table 21. From the results in this table, it is clear that CMA-ES costs less on 12 functions, and CES and ESLAT cost less on 2 functions, while ABC costs less on 8 functions. Generally speaking, the cost of CMA-ES is lower than those of ESLAT, CES and ABC on the unimodal functions. This is because CMA-ES is a local method devised for optimal exploitation of local information [38]. When we focus on the success rates given in Table 22, it is seen that the performance of CMA-ES gets worse on multimodal functions. ESLAT has better performance on low-dimensional functions than CMA-ES, but on high-dimensional and multimodal functions the performance of ESLAT deteriorates in the same way as that of CMA-ES. The ABC algorithm achieved the best success rate on 20 functions, and on sixteen of these 20 functions it achieved a 100% success rate. ABC is successful on both unimodal and multimodal functions. Both CMA-ES and ESLAT achieved the best success rates on 9 functions. On two functions (Rosenbrock, Schwefel 2.21), CMA-ES has better performance than ABC.
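A minimal sketch of this stopping rule (our own illustration; the do_cycle callback and its return convention are assumed placeholders, not the paper's implementation):

```python
def run_with_gap_termination(do_cycle, f_min, max_evals=100_000, gap=1e-3):
    """Stop as soon as the best value so far is within `gap` of the known
    global minimum, or when the evaluation budget is exhausted; in the
    latter case the value found in the last cycle is what gets recorded."""
    best = float("inf")
    evals = 0
    while evals < max_evals and abs(best - f_min) > gap:
        # one optimization cycle: returns the new best value and the
        # number of function evaluations it spent
        best, used = do_cycle(best)
        evals += used
    return best, evals
```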


Table 18
Significance test for DE and ABC. t: t-value of Student's t-test, SED: Standard Error of Difference, p: p-value calculated for the t-value, R.: Rank of p-value, I.R.: Inverse Rank of p-value, Sign.: Significance.

| No | Function | t | SED | p | R. | I.R. | New α | Sign. |
|----|----------|---|-----|---|----|------|-------|-------|
| 17 | Dixon-Price | 3651483719 | 0 | 0 | 1 | 50 | 0.001 | ABC |
| 46 | Langerman5 | 14795.0659 | 0 | 0 | 2 | 49 | 0.00102 | DE |
| 13 | Powell | 34.1285 | 0 | 0 | 3 | 48 | 0.001042 | DE |
| 5 | Quartic | 32.1347 | 0.001 | 0 | 4 | 47 | 0.001064 | DE |
| 22 | Rastrigin | 25.284 | 0.463 | 0 | 5 | 46 | 0.001087 | ABC |
| 23 | Schwefel | 24.1769 | 95.276 | 0 | 6 | 45 | 0.001111 | ABC |
| 16 | Rosenbrock | 19.6993 | 0.92 | 0 | 7 | 44 | 0.001136 | ABC |
| 40 | Hartman6 | 10.9545 | 0.009 | 0 | 8 | 43 | 0.001163 | ABC |
| 47 | Langerman10 | 10.0513 | 0.06 | 0 | 9 | 42 | 0.00119 | DE |
| 38 | PowerSum | 6.6968 | 0 | 0 | 10 | 41 | 0.00122 | DE |
| 26 | Michalewicz10 | 5.8863 | 0.012 | 0 | 11 | 40 | 0.00125 | ABC |
| 49 | FletcherPowell5 | 4.3424 | 1.339 | 0.000057 | 12 | 39 | 0.001282 | ABC |
| 50 | FletcherPowell10 | 4.0384 | 191.492 | 0.00016 | 13 | 38 | 0.001316 | ABC |
| 41 | Griewank | 2.739 | 0.001 | 0.008173 | 14 | 37 | 0.001351 | – |
| 44 | Penalized2 | 2.7386 | 0.001 | 0.008182 | 15 | 36 | 0.001389 | – |
| 9 | Colville | 2.7046 | 0.019 | 0.008961 | 16 | 35 | 0.001429 | – |
| 25 | Michalewicz5 | 1.8257 | 0.002 | 0.073045 | 17 | 34 | 0.001471 | – |
| 37 | Perm | 1.8191 | 0.009 | 0.074059 | 18 | 33 | 0.001515 | – |
| 33 | Kowalik | 0 | 0 | 1 | 19 | 32 | 0.001563 | – |
| 1 | Stepint | -Inf | 0 | 1 | 20 | 31 | 0.001613 | – |
| 2 | Step | -Inf | 0 | 1 | 21 | 30 | 0.001667 | – |
| 3 | Sphere | -Inf | 0 | 1 | 22 | 29 | 0.001724 | – |
| 4 | SumSquares | -Inf | 0 | 1 | 23 | 28 | 0.001786 | – |
| 6 | Beale | -Inf | 0 | 1 | 24 | 27 | 0.001852 | – |
| 7 | Easom | -Inf | 0 | 1 | 25 | 26 | 0.001923 | – |
| 8 | Matyas | -Inf | 0 | 1 | 26 | 25 | 0.002 | – |
| 10 | Trid6 | -Inf | 0 | 1 | 27 | 24 | 0.002083 | – |
| 11 | Trid10 | -Inf | 0 | 1 | 28 | 23 | 0.002174 | – |
| 12 | Zakharov | -Inf | 0 | 1 | 29 | 22 | 0.002273 | – |
| 14 | Schwefel2.22 | -Inf | 0 | 1 | 30 | 21 | 0.002381 | – |
| 15 | Schwefel1.2 | -Inf | 0 | 1 | 31 | 20 | 0.0025 | – |
| 18 | Foxholes | -Inf | 0 | 1 | 32 | 19 | 0.002632 | – |
| 19 | Branin | -Inf | 0 | 1 | 33 | 18 | 0.002778 | – |
| 20 | Bohachevsky1 | -Inf | 0 | 1 | 34 | 17 | 0.002941 | – |
| 21 | Booth | -Inf | 0 | 1 | 35 | 16 | 0.003125 | – |
| 24 | Michalewicz2 | -Inf | 0 | 1 | 36 | 15 | 0.003333 | – |
| 27 | Schaffer | -Inf | 0 | 1 | 37 | 14 | 0.003571 | – |
| 28 | SixHumpCamelBack | -Inf | 0 | 1 | 38 | 13 | 0.003846 | – |
| 29 | Bohachevsky2 | -Inf | 0 | 1 | 39 | 12 | 0.004167 | – |
| 30 | Bohachevsky3 | -Inf | 0 | 1 | 40 | 11 | 0.004545 | – |
| 31 | Shubert | -Inf | 0 | 1 | 41 | 10 | 0.005 | – |
| 32 | GoldStein-Price | -Inf | 0 | 1 | 42 | 9 | 0.005556 | – |
| 34 | Shekel5 | -Inf | 0 | 1 | 43 | 8 | 0.00625 | – |
| 35 | Shekel7 | -Inf | 0 | 1 | 44 | 7 | 0.007143 | – |
| 36 | Shekel10 | -Inf | 0 | 1 | 45 | 6 | 0.008333 | – |
| 39 | Hartman3 | -Inf | 0 | 1 | 46 | 5 | 0.01 | – |
| 42 | Ackley | -Inf | 0 | 1 | 47 | 4 | 0.0125 | – |
| 43 | Penalized | -Inf | 0 | 1 | 48 | 3 | 0.016667 | – |
| 45 | Langerman2 | -Inf | 0 | 1 | 49 | 2 | 0.025 | – |
| 48 | FletcherPowell2 | -Inf | 0 | 1 | 50 | 1 | 0.05 | – |

ABC outperforms CMA-ES on 13 functions (Step, Schwefel, Rastrigin, Griewank, Penalized, Penalized 2, Foxholes, Kowalik, Goldstein-Price, Hartman 6, Shekel 5, Shekel 7 and Shekel 10). On 7 functions, they show equal performance. On 9 functions, ESLAT has a 100% success rate. On 1 function (Rosenbrock), ESLAT shows better performance than ABC, while ABC is better than ESLAT on 11 functions (Step, Schwefel, Rastrigin, Griewank, Penalized 2, Foxholes, Kowalik, Hartman 6, Shekel 5, Shekel 7 and Shekel 10). On 9 functions, they perform equally. The ABC algorithm has a higher success rate than CMA-ES and ESLAT because it carries out the exploration and exploitation processes together efficiently.

5.2. Experiments 3

5.2.1. Settings

In the experiments of this section, the performance of the ABC algorithm was compared with those of the SOM-ES, NG-ES and CMA-ES algorithms. The results of SOM-ES, NG-ES and CMA-ES were taken from the study described in [38]. The thresholds for satisfactory convergence were fixed to 40, 0.001 and 0.001 for functions 1, 2 and 3, respectively.


Fig. 2. Mean absolute errors of algorithms in Experiments 1.

Table 19
Benchmark functions used in Experiments 2. D: Dimension, C: Characteristic, U: Unimodal, M: Multimodal, S: Separable, N: Non-Separable.

1. Sphere, range [-100, 100], D = 30, C = US: $f(x) = \sum_{i=1}^{n} x_i^2$
2. Schwefel 2.22, range [-10, 10], D = 30, C = UN: $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$
3. Schwefel 1.2, range [-100, 100], D = 30, C = UN: $f(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$
4. Schwefel 2.21, range [-100, 100], D = 30, C = UN: $f(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$
5. Rosenbrock, range [-30, 30], D = 30, C = UN: $f(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$
6. Step, range [-100, 100], D = 30, C = US: $f(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$
7. Quartic, range [-1.28, 1.28], D = 30, C = US: $f(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$
8. Schwefel, range [-500, 500], D = 30, C = MS: $f(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$
9. Rastrigin, range [-5.12, 5.12], D = 30, C = MS: $f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$
10. Ackley, range [-32, 32], D = 30, C = MN: $f(x) = -20 \exp\left(-0.2 \sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$
11. Griewank, range [-600, 600], D = 30, C = MN: $f(x) = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$
12. Penalized, range [-50, 50], D = 30, C = MN: $f(x) = \tfrac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{1}{4}(x_i + 1)$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k(-x_i - a)^m$ if $x_i < -a$
13. Penalized 2, range [-50, 50], D = 30, C = MN: $f(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$
14. Foxholes, range [-65.536, 65.536], D = 2, C = MS: $f(x) = \left[ \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1}$
15. Kowalik, range [-5, 5], D = 4, C = MN: $f(x) = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$
16. Six Hump Camel Back, range [-5, 5], D = 2, C = MN: $f(x) = 4x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$
17. Branin, range [-5, 10] × [0, 15], D = 2, C = MS: $f(x) = \left( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \tfrac{1}{8\pi} \right) \cos x_1 + 10$
18. Goldstein-Price, range [-2, 2], D = 2, C = MN: $f(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2) \right] \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2) \right]$
19. Hartman 3, range [0, 1], D = 3, C = MN: $f(x) = -\sum_{i=1}^{4} c_i \exp\left[ -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right]$
20. Hartman 6, range [0, 1], D = 6, C = MN: $f(x) = -\sum_{i=1}^{4} c_i \exp\left[ -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right]$
21. Shekel 5, range [0, 10], D = 4, C = MN: $f(x) = -\sum_{i=1}^{5} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$
22. Shekel 7, range [0, 10], D = 4, C = MN: $f(x) = -\sum_{i=1}^{7} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$
23. Shekel 10, range [0, 10], D = 4, C = MN: $f(x) = -\sum_{i=1}^{10} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$


Table 20
Results obtained by CES [41], FES [36], ESLAT [41], CMA-ES [41] and ABC algorithms. D: Dimension, SD: Standard Deviation. Each algorithm cell gives Mean (SD).

| No | Function | Range | D | CES | FES | ESLAT | CMA-ES | ABC |
|----|----------|-------|---|-----|-----|-------|--------|-----|
| 1 | Sphere | [-100, 100] | 30 | 1.7e-26 (1.1e-25) | 2.5e-4 (6.8e-5) | 2.0e-17 (2.9e-17) | 9.7e-23 (3.8e-23) | 7.57E-04 (2.48E-04) |
| 2 | Schwefel 2.22 | [-10, 10] | 30 | 8.1e-20 (3.6e-19) | 6.0e-2 (9.6e-3) | 3.8e-5 (1.6e-5) | 4.2e-11 (7.1e-23) | 8.95E-04 (1.27E-04) |
| 3 | Schwefel 1.2 | [-100, 100] | 30 | 337.62 (117.14) | 1.4e-3 (5.3e-4) | 6.1e-6 (7.5e-6) | 7.1e-23 (2.9e-23) | 7.01E-04 (2.78E-04) |
| 4 | Schwefel 2.21 | [-100, 100] | 30 | 2.41 (2.15) | 5.5e-3 (6.5e-4) | 0.78 (1.64) | 5.4e-12 (1.5e-12) | 2.72E+00 (1.18E+00) |
| 5 | Rosenbrock | [-30, 30] | 30 | 27.65 (0.51) | 33.28 (43.13) | 1.93 (3.35) | 0.4 (1.2) | 9.36E-01 (1.76E+00) |
| 6 | Step | [-100, 100] | 30 | 0 (0) | 0 (0) | 2.0e-2 (0.14) | 1.44 (1.77) | 0.00E+00 (0.00E+00) |
| 7 | Quartic | [-1.28, 1.28] | 30 | 4.7e-2 (0.12) | 1.2e-2 (5.8e-3) | 0.39 (0.22) | 0.23 (8.7e-2) | 9.06E-02 (1.89E-02) |
| 8 | Schwefel | [-500, 500] | 30 | -8.00E+93 (4.9e+94) | -12556.4 (32.53) | -2.3e+15 (5.70E+15) | -7637.14 (895.6) | -12563.67335 (2.36E+01) |
| 9 | Rastrigin | [-5.12, 5.12] | 30 | 13.38 (43.15) | 0.16 (0.33) | 4.65 (5.67) | 51.78 (13.56) | 4.66E-04 (3.44E-04) |
| 10 | Ackley | [-32, 32] | 30 | 6.0e-13 (1.7e-12) | 1.2e-2 (1.8e-3) | 1.8e-8 (5.4e-9) | 6.9e-12 (1.3e-12) | 7.81E-04 (1.83E-04) |
| 11 | Griewank | [-600, 600] | 30 | 6.0e-14 (4.2e-13) | 3.7e-2 (5.0e-2) | 1.4e-3 (4.7e-3) | 7.4e-4 (2.7e-3) | 8.37E-04 (1.38E-03) |
| 12 | Penalized | [-50, 50] | 30 | 1.46 (3.17) | 2.8e-6 (8.1e-7) | 1.5e-12 (2.0e-12) | 1.2e-4 (3.4e-2) | 6.98E-04 (2.78E-04) |
| 13 | Penalized 2 | [-50, 50] | 30 | 2.40 (0.13) | 4.7e-5 (1.5e-5) | 6.4e-3 (8.9e-3) | 1.7e-3 (4.5e-3) | 7.98E-04 (2.13E-04) |
| 14 | Foxholes | [-65.536, 65.536] | 2 | 2.20 (2.43) | 1.20 (0.63) | 1.77 (1.37) | 10.44 (6.87) | 9.98E-01 (3.21E-04) |
| 15 | Kowalik | [-5, 5] | 4 | 1.3e-3 (6.3e-4) | 9.7e-4 (4.2e-4) | 8.1e-4 (4.1e-4) | 1.5e-3 (4.2e-3) | 1.18E-03 (1.45E-04) |
| 16 | SixHump | [-5, 5] | 2 | -1.0310 (1.2e-3) | -1.0316 (6.0e-7) | -1.0316 (9.7e-14) | -1.0316 (7.7e-16) | -1.031 (3.04E-04) |
| 17 | Branin | [-5, 10] × [0, 15] | 2 | 0.401 (3.6e-3) | 0.398 (6.0e-8) | 0.398 (1.0e-13) | 0.398 (1.4e-15) | 0.3985 (3.27E-04) |
| 18 | GoldsteinPrice | [-2, 2] | 2 | 3.007 (1.2e-2) | 3.0 (0) | 3 (5.8e-14) | 14.34 (25.05) | 3.000E+00 (3.09E-04) |
| 19 | Hartman 3 | [0, 1] | 3 | -3.8613 (1.2e-3) | -3.86 (4.0e-3) | -3.8628 (2.9e-13) | -3.8628 (4.8e-16) | -3.862E+00 (2.77E-04) |
| 20 | Hartman 6 | [0, 1] | 6 | -3.24 (5.8e-2) | -3.23 (0.12) | -3.31 (3.3e-2) | -3.28 (5.8e-2) | -3.322E+00 (1.35E-04) |
| 21 | Shekel 5 | [0, 10] | 4 | -5.72 (2.62) | -5.54 (1.82) | -8.49 (2.76) | -5.86 (3.60) | -10.151 (1.17E-02) |
| 22 | Shekel 7 | [0, 10] | 4 | -6.09 (2.63) | -6.76 (3.01) | -8.79 (2.64) | -6.58 (3.74) | -10.402 (3.11E-04) |
| 23 | Shekel 10 | [0, 10] | 4 | -6.42 (2.67) | -7.63 (3.27) | -9.65 (2.06) | -7.03 (3.74) | -10.535 (2.02E-03) |

The maximum number of function evaluations was 5000 for each run. In total, runs were repeated 10,000 times per test function for ABC. All settings were tuned as in [38] to make the comparison fair.

5.2.2. Benchmark functions

In Experiments 3, the functions given in Table 23 were employed. The functions in this set are low dimensional. The Rastrigin and Griewank functions are the same as in Experiments 1 and Experiments 2, while the Rosenbrock function used in the other experiments is modified by a Gaussian bump at (-1, -1). This modification causes a local minimum at (1, 1) and a global minimum at (-1, -1), and it makes the problem harder because the basin of the local minimum is larger than the basin of the global minimum [38].

5.2.3. Results

Table 24 presents the means and standard deviations of the function evaluation numbers of the algorithms tested, and the means and standard deviations of the success rates, where the success rate is the percentage of converged runs over 100 optimization runs. From the results, on the Modified Rosenbrock function, SOM-ES and NG-ES are more successful in terms of both solution quality and cost. However, on the other multimodal functions (Griewank and Rastrigin), ABC outperforms the SOM-ES, NG-ES and CMA-ES algorithms. On all three functions, ABC shows better performance in terms of success rate than CMA-ES. As in Experiments 2, while ABC has a slower convergence rate than CMA-ES, which uses local information to converge quickly, ABC has better performance in terms of global optimization.
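For concreteness, the modified Rosenbrock of Section 5.2.2 can be transcribed directly from the formulation in Table 23; the sketch below is our own illustration, not code from [38]:

```python
import math

def modified_rosenbrock(x, y):
    """Modified Rosenbrock of Table 23: the subtracted Gaussian term
    turns (-1, -1) into the global minimum, while a local minimum with
    a larger basin remains near (1, 1)."""
    gaussian_bump = 400.0 * math.exp(-((x + 1.0) ** 2 + (y + 1.0) ** 2) / 0.1)
    return 74.0 + 100.0 * (y - x ** 2) ** 2 + (1.0 - x) ** 2 - gaussian_bump
```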

Table 21
Solution costs in terms of evaluation number of CES [41], FES [36], ESLAT [41], CMA-ES [41] and ABC algorithms. StdFE: Standard Deviation of the ABC evaluation numbers.

| No | Function | CES Mean | FES Mean | ESLAT Mean | CMA-ES Mean | ABC Mean | ABC StdFE |
|----|----------|----------|----------|------------|-------------|----------|-----------|
| 1 | Sphere | 69,724 | 150,000 | 69,724 | 10,721 | 9264 | 1481 |
| 2 | Schwefel 2.22 | 60,859 | 200,000 | 60,859 | 12,145 | 12,991 | 673 |
| 3 | Schwefel 1.2 | 72,141 | 500,000 | 72,141 | 21,248 | 12,255 | 1390 |
| 4 | Schwefel 2.21 | 69,821 | 500,000 | 69,821 | 20,813 | 100,000 | 0 |
| 5 | Rosenbrock | 66,609 | 1,500,000 | 66,609 | 55,821 | 100,000 | 0 |
| 6 | Step | 57,064 | 150,000 | 57,064 | 2184 | 4853 | 1044 |
| 7 | Quartic | 50,962 | 300,000 | 50,962 | 667,131 | 100,000 | 0 |
| 8 | Schwefel | 61,704 | 900,000 | 61,704 | 6621 | 64,632 | 23,897 |
| 9 | Rastrigin | 53,880 | 500,000 | 53,880 | 10,079 | 26,731 | 9311 |
| 10 | Ackley | 58,909 | 150,000 | 58,909 | 10,654 | 16,616 | 1201 |
| 11 | Griewank | 71,044 | 200,000 | 71,044 | 10,522 | 36,151 | 17,128 |
| 12 | Penalized | 63,030 | 150,000 | 63,030 | 13,981 | 7340 | 2020 |
| 13 | Penalized 2 | 65,655 | 150,000 | 65,655 | 13,756 | 8454 | 1719 |
| 14 | Foxholes | 1305 | 10,000 | 1305 | 540 | 1046 | 637 |
| 15 | Kowalik | 2869 | 400,000 | 2869 | 13,434 | 6120 | 4564 |
| 16 | SixHump | 1306 | 10,000 | 1306 | 619 | 342 | 109 |
| 17 | Branin | 1257 | 10,000 | 1257 | 594 | 530 | 284 |
| 18 | Goldstein-Price | 1201 | 10,000 | 1201 | 2052 | 15,186 | 13,500 |
| 19 | Hartman 3 | 1734 | 10,000 | 1734 | 996 | 4747 | 16,011 |
| 20 | Hartman 6 | 3816 | 20,000 | 3816 | 2293 | 1583 | 457 |
| 21 | Shekel 5 | 2338 | 10,000 | 2338 | 1246 | 6069 | 13,477 |
| 22 | Shekel 7 | 2468 | 10,000 | 2468 | 1267 | 7173 | 9022 |
| 23 | Shekel 10 | 2410 | 10,000 | 2410 | 1275 | 15,392 | 24,413 |

Table 22
Success rates (%) of ESLAT [41], CMA-ES [41] and ABC algorithms.

| No | Function | ESLAT | CMA-ES | ABC |
|----|----------|-------|--------|-----|
| 1 | Sphere | 100 | 100 | 100 |
| 2 | Schwefel 2.22 | 100 | 100 | 100 |
| 3 | Schwefel 1.2 | 100 | 100 | 100 |
| 4 | Schwefel 2.21 | 0 | 100 | 0 |
| 5 | Rosenbrock | 70 | 90 | 0 |
| 6 | Step | 98 | 36 | 100 |
| 7 | Quartic | 0 | 0 | 0 |
| 8 | Schwefel | 0 | 0 | 86 |
| 9 | Rastrigin | 40 | 0 | 100 |
| 10 | Ackley | 100 | 100 | 100 |
| 11 | Griewank | 90 | 92 | 96 |
| 12 | Penalized | 100 | 88 | 100 |
| 13 | Penalized 2 | 60 | 86 | 100 |
| 14 | Foxholes | 60 | 0 | 100 |
| 15 | Kowalik | 94 | 88 | 100 |
| 16 | SixHump | 100 | 100 | 100 |
| 17 | Branin | 100 | 100 | 100 |
| 18 | Goldstein-Price | 100 | 78 | 100 |
| 19 | Hartman 3 | 100 | 100 | 100 |
| 20 | Hartman 6 | 94 | 48 | 100 |
| 21 | Shekel 5 | 72 | 40 | 98 |
| 22 | Shekel 7 | 72 | 48 | 100 |
| 23 | Shekel 10 | 84 | 52 | 96 |
| | Total (best success rates) | 9 | 9 | 20 |

Table 23
Benchmark functions used in Experiments 3. D: Dimension, C: Characteristic, U: Unimodal, M: Multimodal, S: Separable, N: Non-Separable.

1. Modified Rosenbrock, range [-2, 2], D = 2, C = UN: $f(x, y) = 74 + 100(y - x^2)^2 + (1 - x)^2 - 400\, e^{-((x+1)^2 + (y+1)^2)/0.1}$
2. Griewank, range [-100, 100], D = 2, C = MN: $f(x, y) = 1 + \tfrac{1}{200}(x^2 + y^2) - \cos(x)\cos\left(\tfrac{y}{\sqrt{2}}\right)$
3. Rastrigin, range [-5.12, 5.12], D = 2, C = MS: $f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$


Table 24
Statistical results of SOM-ES [38], NG-ES [38], CMA-ES [38] and ABC algorithms (mean number of evaluations and success rate, each ± standard deviation).

| Method | Mod. Rosenbrock Mean Evals | Succ. (%) | Griewank Mean Evals | Succ. (%) | Rastrigin Mean Evals | Succ. (%) |
|--------|---------------------------|-----------|---------------------|-----------|----------------------|-----------|
| 5×5 SOM-ES | 1600 ± 200 | 70 ± 8 | 130 ± 40 | 90 ± 7 | 180 ± 50 | 90 ± 7 |
| 7×7 SOM-ES | 750 ± 90 | 90 ± 5 | 100 ± 40 | 90 ± 8 | 200 ± 50 | 90 ± 8 |
| NG-ES (m=10) | 1700 ± 200 | 90 ± 7 | 180 ± 50 | 90 ± 10 | 210 ± 50 | 90 ± 8 |
| NG-ES (m=20) | 780 ± 80 | 90 ± 9 | 150 ± 40 | 90 ± 8 | 180 ± 40 | 90 ± 7 |
| CMA-ES | 70 ± 40 | 30 ± 10 | 210 ± 50 | 70 ± 10 | 100 ± 40 | 80 ± 9 |
| ABC | 1371 ± 2678 | 52 ± 5 | 1121 ± 960 | 99 ± 1 | 1169 ± 446 | 100 ± 0 |

6. Discussion and conclusion

In this work, the performance of the ABC algorithm was compared with those of the GA, PSO, DE and ES optimization algorithms. Although there are several improved versions of GA, DE and PSO in the literature, their standard versions were used, since the ABC algorithm presented in this work is its first, i.e. standard, version. Although the performance of the ABC algorithm can be improved by integrating useful heuristics, our purpose in this work was to compare the performance of the standard version of ABC with those of other well-known population-based algorithms.

In Experiments 1, we used the same population size and maximum evaluation number for all problems, although it is a known fact that these control parameters affect the performance of algorithms significantly, and in most comparative studies these parameter values are varied with respect to the dimension or other characteristics of the problems. The reason is that we assumed the users of an algorithm do not know much about the recommended values of these parameters for the problems to be optimized.

While GA and DE employ crossover operators to produce new or candidate solutions from the present ones, the ABC algorithm does not. ABC produces the candidate solution from its parent by a simple operation based on taking the difference of randomly determined parts of the parent and a randomly chosen solution from the population. This process increases the convergence speed of the search towards a local minimum.

In GA, DE and PSO, the best solution found so far is always kept in the population, and it can be used for producing new solutions in the case of DE and GA, or new velocities in the case of PSO. However, in ABC the best solution discovered so far is not always held in the population, since it might be replaced with a randomly produced solution by a scout; therefore, it might not contribute to the production of trial solutions.

Both DE and ABC employ a greedy selection process between the candidate and the parent solutions. In ABC, at the "employed bees" stage a trial solution is produced for each solution in the population, as in DE, without depending on the quality of the solutions. At the "onlooker bees" stage, the solutions with higher fitness values are used more often than those with lower fitness values to produce trial solutions. This means that the promising regions of the search space are searched in a shorter time and in more detail. This selection process is similar to the natural selection or to the seeded selection employed in GA.

In GA or DE, the mutation process creates a modification on a randomly selected part of a solution to provide the required diversity in the population. In ABC, there are two mechanisms that control the diversity in the population: (a) as in DE or GA, a randomly chosen part of a parent is modified with a randomly determined magnitude to obtain a trial solution; this modification is relatively small and is useful for local search and fine tuning; (b) rather than changing a part of a solution, a whole solution in the population is removed and a new, randomly produced one is inserted into the population by a scout; this mechanism provides the ABC algorithm with a global search ability and prevents the search from the premature convergence problem. This feature also weakens the dependency of the algorithm's performance on the population size.
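A minimal sketch of these two mechanisms (our own illustration under assumed details: the uniform [-1, 1] coefficient, the bounds handling and the function names are ours, not the exact expressions of the paper):

```python
import random

def produce_trial(population, i, lower, upper):
    """Mechanism (a): perturb one randomly chosen dimension of parent i
    using its difference with a randomly chosen neighbour k; the small,
    random-magnitude change serves local search and fine tuning."""
    trial = list(population[i])
    k = random.choice([j for j in range(len(population)) if j != i])
    j = random.randrange(len(trial))
    phi = random.uniform(-1.0, 1.0)
    trial[j] += phi * (trial[j] - population[k][j])
    trial[j] = min(max(trial[j], lower), upper)  # keep the trial inside the bounds
    return trial

def scout_solution(dim, lower, upper):
    """Mechanism (b): replace an abandoned solution with an entirely new
    random one; this is the global-search component of ABC."""
    return [random.uniform(lower, upper) for _ in range(dim)]
```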
Hence, there is a good balance between the local search process carried out by the artificial onlooker and employed bees and the global search process managed by the artificial scouts. Therefore, the ABC algorithm produces better results on multimodal and multivariable problems than the other algorithms considered in this paper.

Apart from the maximum evaluation number and population size, a standard GA has three more control parameters (crossover rate, mutation rate, generation gap), a standard DE has at least two control parameters (crossover rate, scaling factor) and a basic PSO has three control parameters (cognitive and social factors, inertia weight). In addition, the limit values for the velocities, v_max, have a significant effect on the performance of PSO. The ABC algorithm has only one control parameter (limit) apart from the colony size and the maximum cycle number. In the present work, we described an expression for determining the value of "limit" depending on the population (colony size) and the dimension of the problem. Therefore, ABC now has only two common control parameters: the maximum cycle number (MCN) and the colony size (SN). Consequently, ABC is as simple and flexible as DE and PSO, and it also employs fewer control parameters.

The ESs used in Experiments 2 and 3 employ recombination and mutation operators to produce offspring (new individuals), while ABC uses only a mutation operator. Although the basic version of ES is as simple as ABC, the improved versions used for comparison in this work are more complex than ABC; moreover, all of them employ more control parameters than ABC.

This work compared the performance of the ABC algorithm with those of the GA, DE, PSO and ES algorithms on a large set of unconstrained test functions. From the results obtained in this work, it can be concluded that the performance of the ABC algorithm is better than or similar to those of these algorithms, although it uses fewer control parameters, and that it can be efficiently used for solving multimodal and multidimensional optimization problems.


Acknowledgement

This work is supported by Erciyes University, the Department of Research Projects, under Contract FBA-06-22.

References
[1] A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, Springer, 2003.
[2] R.C. Eberhart, Y. Shi, J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.
[3] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[4] J.R. Koza, Genetic programming: a paradigm for genetically breeding populations of computer programs to solve problems, Technical Report STAN-CS-90-1314, Stanford University Computer Science Department, 1990.
[5] I. Rechenberg, Cybernetic Solution Path of an Experimental Problem, Library Translation, vol. 1122, Royal Aircraft Establishment, Farnborough, Hants, UK, 1965.
[6] H.P. Schwefel, Kybernetische evolution als strategie der experimentellen forschung in der stromungstechnik, Master's Thesis, Technical University of Berlin, Germany, 1965.
[7] L.J. Fogel, A.J. Owens, M.J. Walsh, Artificial Intelligence Through Simulated Evolution, John Wiley & Sons, New York, NY, 1966.
[8] R. Storn, K. Price, Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report, International Computer Science Institute, Berkeley, 1995.
[9] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341–359.
[10] K. Price, R. Storn, J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer Natural Computing Series, 2005.
[11] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, NY, 1999.
[12] L.N. De Castro, F.J. Von Zuben, Artificial immune systems, Part I: Basic theory and applications, Technical Report RT DCA 01/99, FEEC/Unicamp, Brazil, 1999.
[13] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: 1995 IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[14] Y. Fukuyama, S. Takayama, Y. Nakanishi, H. Yoshida, A particle swarm optimization for reactive power and voltage control in electric power systems, in: Genetic and Evolutionary Computation Conference, 1999, pp. 1523–1528.
[15] V. Tereshko, Reaction–diffusion model of a honeybee colony's foraging behaviour, in: M. Schoenauer (Ed.), Parallel Problem Solving from Nature VI, Lecture Notes in Computer Science, vol. 1917, Springer-Verlag, Berlin, 2000, pp. 807–816.
[16] V. Tereshko, T. Lee, How information mapping patterns determine foraging behaviour of a honeybee colony, Open Systems and Information Dynamics 9 (2002) 181–193.
[17] V. Tereshko, A. Loengarov, Collective decision-making in honeybee foraging dynamics, Computing and Information Systems Journal 9 (3) (2005).
[18] D. Teodorović, Transport modeling by multi-agent systems: a swarm intelligence approach, Transportation Planning and Technology 26 (4) (2003).
[19] P. Lučić, D. Teodorović, Transportation modeling: an artificial life approach, in: ICTAI, 2002, pp. 216–223.
[20] D. Teodorović, M. Dell'Orco, Bee colony optimization – a cooperative learning approach to complex transportation problems, in: 10th EWGT Meeting, Poznan, 3–16 September 2005.
[21] H. Drias, S. Sadeg, S. Yahi, Cooperative bees swarm for solving the maximum weighted satisfiability problem, in: 8th International Workshop on Artificial Neural Networks, IWANN 2005, Vilanova, Barcelona, Spain, June 8–10, 2005.
[22] K. Benatchba, L. Admane, M. Koudil, Using bees to solve a data-mining problem expressed as a max-sat one, in: First International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2005, Las Palmas, Canary Islands, Spain, June 15–18, 2005.
[23] H.F. Wedde, M. Farooq, Y. Zhang, Beehive: an efficient fault-tolerant routing algorithm inspired by honeybee behavior, in: Ant Colony Optimization and Swarm Intelligence, 4th International Workshop, ANTS 2004, Brussels, Belgium, September 5–8, 2004.
[24] X.S. Yang, Engineering optimizations via nature-inspired virtual bee algorithms, in: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, Lecture Notes in Computer Science, vol. 3562, Springer, Berlin/Heidelberg, 2005, pp. 317–323.
[25] D.T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, M. Zaidi, The bees algorithm, Technical Report, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
[26] D. Karaboga, An idea based on honeybee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[27] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: IEEE Swarm Intelligence Symposium 2006, Indianapolis, Indiana, USA, May 2006.
[28] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of Global Optimization 39 (3) (2007) 459–471.
[29] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing 8 (1) (2008) 687–697.
[30] D. Karaboga, B. Basturk, Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, in: Advances in Soft Computing: Foundations of Fuzzy Logic and Soft Computing, LNCS, vol. 4529/2007, Springer-Verlag, 2007, pp. 789–798.
[31] D. Karaboga, B. Basturk Akay, C. Ozturk, Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks, in: Modeling Decisions for Artificial Intelligence, LNCS, vol. 4617/2007, Springer-Verlag, 2007, pp. 318–329.
[32] D. Karaboga, B. Basturk Akay, An artificial bee colony (ABC) algorithm on training artificial neural networks, in: 15th IEEE Signal Processing and Communications Applications, SIU 2007, Eskisehir, Turkiye, June 2007, pp. 1–4.
[33] N. Karaboga, A new design method based on artificial bee colony algorithm for digital IIR filters, Journal of The Franklin Institute 346 (4) (2009) 328–348.
[34] A. Singh, An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem, Applied Soft Computing 9 (2) (2009) 625–631.
[35] T. Back, H.P. Schwefel, An overview of evolutionary algorithms for parameter optimization, Evolutionary Computation 1 (1) (1993) 1–23.
[36] X. Yao, Y. Liu, Fast evolution strategies, Control and Cybernetics 26 (3) (1997) 467–496.
[37] T. Kohonen, Self-Organizing Maps, Springer-Verlag, New York, 1995.
[38] M. Milano, P. Koumoutsakos, J. Schmidhuber, Self-organizing nets for optimization, IEEE Transactions on Neural Networks 15 (3) (2004) 758–765.
[39] T. Martinetz, K. Schulten, A neural-gas network learns topologies, in: T. Kohonen et al. (Eds.), Artificial Neural Networks, Elsevier, North-Holland, The Netherlands, 1991, pp. 397–402.
[40] N. Hansen, A. Ostermeier, Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation, in: IEEE International Conference on Evolutionary Computation (ICEC) Proceedings, 1996, pp. 312–317.
[41] A. Hedar, M. Fukushima, Evolution strategies learned with automatic termination criteria, in: Proceedings of SCIS-ISIS 2006, Tokyo, Japan, 2006.
[42] D.T. Pham, D. Karaboga, Optimum design of fuzzy logic controllers using genetic algorithms, Journal of Systems Engineering 1 (1991) 114–118.
[43] D. Corne, M. Dorigo, F. Glover, New Ideas in Optimization, McGraw-Hill, 1999.
[44] J. Vesterstrom, R. Thomsen, A comparative study of differential evolution, particle swarm optimization and evolutionary algorithms on numerical benchmark problems, in: IEEE Congress on Evolutionary Computation (CEC 2004), vol. 3, Piscataway, New Jersey, June 2004, pp. 1980–1987.
[45] D.O. Boyer, C.H. Martínez, N.G. Pedrajas, Crossover operator for evolutionary algorithms based on population features, Journal of Artificial Intelligence Research 24 (2005) 1–48.


[46] A.D. Junior, R.S. Silva, K.C. Mundim, L.E. Dardenne, Performance and parameterization of the algorithm simplified generalized simulated annealing, Genetics and Molecular Biology 27 (4) (2004) 616–622.
[47] J.G. Digalakis, K.G. Margaritis, An experimental study of benchmarking functions for genetic algorithms, International Journal of Computer Mathematics 79 (4) (2002) 403–416.
[48] D. Bratton, J. Kennedy, Defining a standard for particle swarm optimization, in: IEEE Swarm Intelligence Symposium, SIS 2007, Honolulu, HI, 2007, pp. 120–127.
[49] Z. Michalewicz, C. Janikow, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer-Verlag, New York, 1996.
