

File:Safari ants.jpg
Ant behavior was the inspiration for the metaheuristic optimization technique

The ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.

This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and it constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis [1] [2], the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result several algorithms have emerged, drawing on various aspects of the behavior of ants.

Overview

Summary

In the real world, ants (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep travelling at random, but to instead follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).

Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus the pheromone density remains high, as it is laid on the path as fast as it can evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained.

Thus, when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.

Detailed

File:Aco branches.svg

The original idea comes from observing the exploitation of food resources among ants, in which ants’ individually limited cognitive abilities have collectively been able to find the shortest path between a food source and the nest.

  1. The first ant finds the food source (F) via any path (a), then returns to the nest (N), leaving behind a pheromone trail (b).
  2. Ants indiscriminately follow four possible paths, but the reinforcement of the trail makes the shortest route more attractive.
  3. Ants take the shortest route; long portions of the other paths lose their pheromone trails.

In a series of experiments on a colony of ants with a choice between two paths of unequal length leading to a source of food, biologists observed that ants tended to use the shortest route. [3] [4] A model explaining this behaviour is as follows:

  1. An ant (called "blitz") runs more or less at random around the colony;
  2. If it discovers a food source, it returns more or less directly to the nest, leaving in its path a trail of pheromone;
  3. These pheromones are attractive; nearby ants will be inclined to follow, more or less directly, the track;
  4. Returning to the colony, these ants will strengthen the route;
  5. If two routes to the same food source are possible, the shorter one will, in the same amount of time, be travelled by more ants than the longer route;
  6. The short route will be increasingly enhanced, and therefore become more attractive;
  7. The long route will eventually disappear, as pheromones are volatile;
  8. Eventually, all the ants have determined and therefore "chosen" the shortest route.

Ants use the environment as a medium of communication. They exchange information indirectly by depositing pheromones, all detailing the status of their "work". The information exchanged has a local scope: only an ant located where the pheromones were left has a notion of them. This system is called "Stigmergy" and occurs in many societies of social animals (it has been studied in the case of the construction of pillars in the nests of termites). The mechanism to solve a problem too complex to be addressed by single ants is a good example of a self-organized system. This system is based on positive feedback (the deposit of pheromone attracts other ants that will strengthen it themselves) and negative feedback (dissipation of the route by evaporation prevents the system from thrashing). Theoretically, if the quantity of pheromone remained the same over time on all edges, no route would be chosen. However, because of feedback, a slight variation on an edge will be amplified and thus allow the choice of an edge. The algorithm will move from an unstable state in which no edge is stronger than another, to a stable state where the route is composed of the strongest edges.

Extensions of the concept

Here are some of the most popular variations of ant colony optimization algorithms.

Elitist ant system

This is the earliest modification of the algorithm. In it, the ants that produce the best solution at each iteration are declared elite ants, and they are given the right to deposit a special pheromone along their routes.

Max-min ant system (MMAS)

The model is modified by adding maximum and minimum pheromone amounts [τ<sub>max</sub>, τ<sub>min</sub>]. Pheromone is deposited only along the route corresponding to the globally best solution or the best solution of the current iteration. All edges are initialized to τ<sub>max</sub> and reinitialized to τ<sub>max</sub> when nearing stagnation. [5]
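
As an illustration of the max-min rule, here is a minimal sketch (not taken from the cited paper; the function name, parameter values and array layout are assumptions) of a pheromone update that deposits only along the best tour and then clamps every trail value into [τ_min, τ_max]:

 import numpy as np

 def mmas_pheromone_update(tau, best_tour, best_length, rho=0.02, tau_min=0.01, tau_max=1.0):
     """Illustrative MAX-MIN style update: evaporate everywhere, deposit only
     along the best tour, then clamp all trails into [tau_min, tau_max]."""
     tau *= (1.0 - rho)                        # evaporation on every edge
     for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
         tau[a, b] += 1.0 / best_length        # deposit only along the best tour
         tau[b, a] += 1.0 / best_length        # symmetric problem: update both directions
     np.clip(tau, tau_min, tau_max, out=tau)   # enforce the [tau_min, tau_max] bounds
     return tau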

Proportional pseudo-random rule

It has been presented above. [6]

Rank-based ant system

All solutions are ranked according to their fitness function. The amount of pheromone deposited is then weighted for each solution, such that solutions with better fitness values receive more pheromone than those with worse values.
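
A minimal sketch of this rank-based weighting, under illustrative assumptions (a pheromone matrix tau, a list of tours with their lengths; the helper name and the choice of w are hypothetical): only the w best-ranked tours deposit pheromone, scaled by rank.

 def rank_based_deposit(tau, tours, lengths, w=6):
     """Illustrative rank-based deposit: sort tours by length (fitness),
     let only the w best deposit, weighted so that better ranks deposit more."""
     ranked = sorted(zip(tours, lengths), key=lambda pair: pair[1])[:w]   # rank by tour length
     for rank, (tour, length) in enumerate(ranked):
         weight = w - rank                     # rank 0 (best tour) gets the largest weight
         for a, b in zip(tour, tour[1:] + tour[:1]):
             tau[a, b] += weight / length
             tau[b, a] += weight / length
     return tau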

Continuous orthogonal ant colony (COAC)

The pheromone deposit mechanism of the continuous orthogonal ant colony allows the ants to search for solutions in a collaborative and effective way. By using an orthogonal design method, the ants can explore their chosen regions of the feasible domain rapidly and efficiently, with enhanced global search capability and higher accuracy.

The orthogonal design method and the adaptive radius adjustment method can also be extended to other optimization algorithms, delivering wider advantages in solving practical problems.[7]

Convergence

For some versions of the algorithm, it is possible to prove that it is convergent (i.e., it is able to find the global optimum in finite time). The first convergence proof for an ant colony algorithm was given in 2000, for the graph-based ant system (GBAS), followed later by proofs for the ACS and MMAS algorithms. As with most metaheuristics, it is very difficult to estimate the theoretical speed of convergence. In 2004, Zlochin and his colleagues[8] showed that ACO-type algorithms could be assimilated to methods of stochastic gradient descent, cross-entropy and estimation of distribution algorithms. They proposed grouping these metaheuristics under the term "model-based search".

Applications

File:Knapsack ants.svg
Knapsack problem. The ants prefer the smaller drop of honey over the more abundant, but less nutritious, sugar.

Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding and vehicle routing, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multiple objectives and parallel implementations. They have also been used to produce near-optimal solutions to the travelling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches to similar problems when the graph may change dynamically; the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.

As a very good example, consider the travelling salesman problem. The first ACO algorithm was called the Ant system [9] and it aimed to solve this problem, in which the goal is to find the shortest round-trip linking a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips along the cities. At each stage, the ant chooses to move from one city to another according to some rules:

  1. It must visit each city exactly once;
  2. A distant city has less chance of being chosen (the visibility);
  3. The more intense the pheromone trail laid out on an edge between two cities, the greater the probability that that edge will be chosen;
  4. Having completed its journey, the ant deposits more pheromones on all edges it traversed, if the journey is short;
  5. After each iteration, trails of pheromones evaporate.
File:Aco TSP.svg

Example pseudo-code and formulae

 procedure ACO_MetaHeuristic
   while (not_termination)
      generateSolutions()   // each ant constructs a candidate solution
      daemonActions()       // optional centralized actions, e.g. local search
      pheromoneUpdate()     // evaporation followed by pheromone deposit
   end while
 end procedure

Edge Selection:

An ant will move from node [math]\displaystyle{ i }[/math] to node [math]\displaystyle{ j }[/math] with probability

[math]\displaystyle{ p_{i,j} = \frac { (\tau_{i,j}^{\alpha}) (\eta_{i,j}^{\beta}) } { \sum_{k \in \mathrm{allowed}} (\tau_{i,k}^{\alpha}) (\eta_{i,k}^{\beta}) } }[/math]

where

[math]\displaystyle{ \tau_{i,j} }[/math] is the amount of pheromone on edge [math]\displaystyle{ i,j }[/math]

[math]\displaystyle{ \alpha }[/math] is a parameter to control the influence of [math]\displaystyle{ \tau_{i,j} }[/math]

[math]\displaystyle{ \eta_{i,j} }[/math] is the desirability of edge [math]\displaystyle{ i,j }[/math] (a priori knowledge, typically [math]\displaystyle{ 1/d_{i,j} }[/math], where d is the distance)

[math]\displaystyle{ \beta }[/math] is a parameter to control the influence of [math]\displaystyle{ \eta_{i,j} }[/math]
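
A minimal sketch of this selection rule in Python, assuming a pheromone matrix tau, a distance matrix dist, and that allowed holds the nodes the ant has not yet visited (the sum in the denominator above runs over exactly those nodes); the function name and default parameter values are illustrative assumptions:

 import numpy as np

 def choose_next_node(i, allowed, tau, dist, alpha=1.0, beta=2.0):
     """Pick the next node j from `allowed` with probability proportional to
     (tau[i, j] ** alpha) * (eta[i, j] ** beta), where eta[i, j] = 1 / dist[i, j]."""
     allowed = list(allowed)
     eta = 1.0 / dist[i, allowed]              # desirability: inverse distance of each candidate edge
     weights = (tau[i, allowed] ** alpha) * (eta ** beta)
     probs = weights / weights.sum()           # normalise over the allowed (not yet visited) nodes
     return int(np.random.choice(allowed, p=probs))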

Pheromone update

[math]\displaystyle{ \tau_{i,j} = (1-\rho)\tau_{i,j} + \Delta \tau_{i,j} }[/math]

where

[math]\displaystyle{ \tau_{i,j} }[/math] is the amount of pheromone on a given edge [math]\displaystyle{ i,j }[/math]

[math]\displaystyle{ \rho }[/math] is the rate of pheromone evaporation

and [math]\displaystyle{ \Delta \tau_{i,j} }[/math] is the amount of pheromone deposited, typically given by

[math]\displaystyle{ \Delta \tau^{k}_{i,j} = \begin{cases} 1/L_k & \mbox{if ant }k\mbox{ travels on edge }i,j \\ 0 & \mbox{otherwise} \end{cases} }[/math]

where [math]\displaystyle{ L_k }[/math] is the cost of the [math]\displaystyle{ k }[/math]th ant's tour (typically length).
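
Putting the two formulas together, the following sketch fleshes out the pseudo-code above for a small symmetric TSP instance. It reuses the choose_next_node sketch from the previous section; all helper names and parameter values are illustrative assumptions rather than part of the original text.

 def construct_tour(n, tau, dist, alpha, beta):
     """One ant builds a round-trip by repeatedly applying the edge-selection rule."""
     tour = [0]
     while len(tour) < n:
         allowed = [j for j in range(n) if j not in tour]
         tour.append(choose_next_node(tour[-1], allowed, tau, dist, alpha, beta))
     return tour

 def tour_length(tour, dist):
     return sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))

 def aco_iteration(tau, dist, n_ants=10, alpha=1.0, beta=2.0, rho=0.5):
     """generateSolutions() and pheromoneUpdate() for one iteration (no daemon actions)."""
     n = dist.shape[0]
     tours = [construct_tour(n, tau, dist, alpha, beta) for _ in range(n_ants)]
     tau *= (1.0 - rho)                        # evaporation: tau <- (1 - rho) * tau
     for tour in tours:
         length = tour_length(tour, dist)
         for a, b in zip(tour, tour[1:] + tour[:1]):
             tau[a, b] += 1.0 / length         # deposit: delta tau = 1 / L_k on traversed edges
             tau[b, a] += 1.0 / length
     return min(tours, key=lambda t: tour_length(t, dist))

With tau initialised to a small constant matrix and dist a symmetric distance matrix, repeated calls to aco_iteration tend to concentrate pheromone on the shorter tours.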

Other examples

The ant colony algorithm was originally used mainly to produce near-optimal solutions to the travelling salesman problem and, more generally, the problems of combinatorial optimization.

Scheduling problem

  • Job-shop scheduling problem (JSP)[10]
  • Open-shop scheduling problem (OSP)[11][12]
  • Permutation flow shop problem (PFSP)[13]
  • Single machine total tardiness problem (SMTTP)[14]
  • Single machine total weighted tardiness problem (SMTWTP)[15][16][17]
  • Resource-constrained project scheduling problem (RCPSP)[18]
  • Group-shop scheduling problem (GSP)[19]
  • Single-machine total tardiness problem with sequence dependent setup times (SMTTPDST)[20]

Vehicle routing problem

  • Capacitated vehicle routing problem (CVRP)[21][22][23]
  • Multi-depot vehicle routing problem (MDVRP)[24]
  • Period vehicle routing problem (PVRP)[25]
  • Split delivery vehicle routing problem (SDVRP)[26]
  • Stochastic vehicle routing problem (SVRP)[27]
  • Vehicle routing problem with pick-up and delivery (VRPPD)[28][29]
  • Vehicle routing problem with time windows (VRPTW)[30][31][32]

Assignment problem

  • Quadratic assignment problem (QAP)[33]
  • Generalized assignment problem (GAP)[34][35]
  • Frequency assignment problem (FAP)[36]
  • Redundancy allocation problem (RAP)[37]

Set problem

  • Set covering problem(SCP)[38][39]
  • Set partition problem (SPP)[40]
  • Weight constrained graph tree partition problem (WCGTPP)[41]
  • Arc-weighted l-cardinality tree problem (AWlCTP)[42]
  • Multiple knapsack problem (MKP)[43]
  • Maximum independent set problem (MIS)[44]

Others

  • Classification[45]
  • Connection-oriented network routing[46]
  • Connectionless network routing[47][48]
  • Data mining[49][50]
  • Discounted cash flows in project scheduling[51]
  • Grid Workflow Scheduling Problem[52]
  • Image processing[53] [54]
  • Intelligent testing system[55]
  • System identification[56][57]
  • Protein Folding[58][59]
  • Power Electronic Circuit Design[60]

A difficulty in definition


File:Aco shortpath.svg

With an ACO algorithm, the shortest path in a graph, between two points A and B, is built from a combination of several paths. It is not easy to give a precise definition of what is or is not an ant colony algorithm, because the definition may vary according to the authors and uses. Broadly speaking, ant colony algorithms are regarded as population-based metaheuristics with each solution represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between each iteration. In their versions for combinatorial problems, they use an iterative construction of solutions. According to some authors, the thing which distinguishes ACO algorithms from other relatives (such as estimation of distribution algorithms or particle swarm optimization) is precisely their constructive aspect. In combinatorial problems, it is possible that the best solution will eventually be found, even though no single ant would prove effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists. The collective behaviour of social insects remains a source of inspiration for researchers. The wide variety of algorithms (for optimization or not) seeking self-organization in biological systems has led to the concept of "swarm intelligence", which is a very general framework in which ant colony algorithms fit.

Stigmergy algorithms

In practice there is a large number of algorithms claiming to be "ant colonies", without always sharing the general framework of canonical ant colony optimization (ACO). In practice, the use of an exchange of information between ants via the environment (a principle called "Stigmergy") is deemed enough for an algorithm to belong to the class of ant colony algorithms. This principle has led some authors to create the term "value" to organize methods and behaviour based on the search for food, larval sorting, division of labour and cooperative transportation.[61]

Related methods

  • Genetic algorithms (GA) maintain a pool of solutions rather than just one. The process of finding superior solutions mimics that of evolution, with solutions being combined or mutated to alter the pool of solutions, with solutions of inferior quality being discarded.
  • Simulated annealing (SA) is a related global optimization technique which traverses the search space by generating neighboring solutions of the current solution. A superior neighbor is always accepted. An inferior neighbor is accepted probabilistically based on the difference in quality and a temperature parameter. The temperature parameter is modified as the algorithm progresses to alter the nature of the search.
  • Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest fitness of those generated. To prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
  • Artificial immune system (AIS) algorithms are modeled on vertebrate immune systems.
  • Particle swarm optimization (PSO), a Swarm intelligence method
  • Intelligent Water Drops (IWD), a swarm-based optimization algorithm based on natural water drops flowing in rivers
  • Gravitational Search Algorithm (GSA), a Swarm intelligence method
  • Ant colony clustering method (ACCM), a method that makes use of a clustering approach, extending ACO.

History

Chronology of ant colony optimization algorithms: 1989 collective behaviour; 1991-1992 Ant System (AS); 1995 continuous problems (CACO); 1996 Ant Colony System (ACS); 1996 MAX-MIN Ant System (MMAS); 2000 proof of convergence (GBAS); 2001 multi-objective.

  • 1959, Pierre-Paul Grassé invented the theory of Stigmergy to explain the behavior of nest building in termites[62];
  • 1983, Deneubourg and his colleagues studied the collective behavior of ants[63];
  • 1988, Moyson and Manderick publish an article on self-organization among ants[64];
  • 1989, the work of Goss, Aron, Deneubourg and Pasteels on the collective behavior of Argentine ants, which would give the idea of ant colony optimization algorithms[3];
  • 1989, implementation of a model of foraging behavior by Ebling and his colleagues [65];
  • 1991, M. Dorigo proposed the Ant System in his doctoral thesis (which was published in 1992[2]). A technical report extracted from the thesis and co-authored by V. Maniezzo and A. Colorni [66] was published five years later[9];
  • 1996, publication of the article on Ant System[9];
  • 1996, Hoos and Stützle invent the MAX-MIN Ant System [5];
  • 1997, Dorigo and Gambardella publish the Ant Colony System [6];
  • 1997, Schoonderwoerd and his colleagues developed the first application to telecommunication networks [67];
  • 1998, Dorigo launches first conference dedicated to the ACO algorithms[68];
  • 1998, Stützle proposes initial parallel implementations [69];
  • 1999, Bonabeau, Dorigo and Theraulaz publish a book dealing mainly with artificial ants [70]
  • 2000, special issue of the Future Generation Computer Systems journal on ant algorithms[71]
  • 2000, first applications to scheduling, sequential ordering and constraint satisfaction;
  • 2000, Gutjahr provides the first evidence of convergence for an ant colony algorithm[72]
  • 2001, the first use of ACO algorithms by companies (Eurobios and AntOptima);
  • 2001, Iredi and his colleagues published the first multi-objective algorithm [73]
  • 2002, first applications to schedule design and Bayesian networks;
  • 2002, Bianchi and her colleagues suggested the first algorithm for stochastic problems[74];
  • 2004, Zlochin and Dorigo show that some algorithms are equivalent to stochastic gradient descent, cross-entropy and estimation of distribution algorithms [8]
  • 2005, first applications to protein folding problems.

References


Selected publications

  • M. Dorigo, 1992. Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy.
  • M. Dorigo, V. Maniezzo & A. Colorni, 1996. "Ant System: Optimization by a Colony of Cooperating Agents", IEEE Transactions on Systems, Man, and Cybernetics–Part B, 26 (1): 29–41.
  • M. Dorigo & L. M. Gambardella, 1997. "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem". IEEE Transactions on Evolutionary Computation, 1 (1): 53–66.
  • M. Dorigo, G. Di Caro & L. M. Gambardella, 1999. "Ant Algorithms for Discrete Optimization". Artificial Life, 5 (2): 137–172.
  • E. Bonabeau, M. Dorigo and G. Theraulaz, 1999. Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press. ISBN 0-19-513159-2
  • M. Dorigo & T. Stützle, 2004. Ant Colony Optimization, MIT Press. ISBN 0-262-04219-3
  • M. Dorigo, 2007. "Ant Colony Optimization". Scholarpedia.
  • C. Blum, 2005 "Ant colony optimization: Introduction and recent trends". Physics of Life Reviews, 2: 353-373
  • M. Dorigo, M. Birattari & T. Stützle, 2006 Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique. TR/IRIDIA/2006-023
  • Mohd Murtadha Mohamad, "Articulated Robots Motion Planning Using Foraging Ant Strategy", Journal of Information Technology - Special Issues in Artificial Intelligence, Vol. 20, No. 4, pp. 163-181, December 2008, ISSN 0128-3790.

External links


  1. A. Colorni, M. Dorigo and V. Maniezzo, Distributed Optimization by Ant Colonies, Proceedings of the First European Conference on Artificial Life, Paris, France, Elsevier Publishing, 134-142, 1991.
  2. M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy, 1992.
  3. S. Goss, S. Aron, J.-L. Deneubourg and J.-M. Pasteels, The self-organized exploratory pattern of the Argentine ant, Naturwissenschaften, volume 76, pages 579-581, 1989.
  4. J.-L. Deneubourg, S. Aron, S. Goss and J.-M. Pasteels, The self-organizing exploratory pattern of the Argentine ant, Journal of Insect Behavior, volume 3, page 159, 1990.
  5. T. Stützle and H.H. Hoos, MAX MIN Ant System, Future Generation Computer Systems, volume 16, pages 889-914, 2000.
  6. M. Dorigo and L.M. Gambardella, Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem, IEEE Transactions on Evolutionary Computation, volume 1, number 1, pages 53-66, 1997.
  7. X. Hu, J. Zhang, and Y. Li (2008). Orthogonal methods based ant colony search for solving continuous optimization problems. Journal of Computer Science and Technology, 23(1), pp.2-18.
  8. M. Zlochin, M. Birattari, N. Meuleau, and M. Dorigo, Model-based search for combinatorial optimization: A critical survey, Annals of Operations Research, vol. 131, pp. 373-395, 2004.
  9. M. Dorigo, V. Maniezzo, and A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics--Part B, volume 26, number 1, pages 29-41, 1996.
  10. D. Martens, M. De Backer, R. Haesen, J. Vanthienen, M. Snoeck, B. Baesens, Classification with Ant Colony Optimization, IEEE Transactions on Evolutionary Computation, volume 11, number 5, pages 651—665, 2007.
  11. B. Pfahring, “Multi-agent search for open scheduling: adapting the Ant-Q formalism,” Technical report TR-96-09, 1996.
  12. C. Blum, “Beam-ACO, Hybridizing ant colony optimization with beam search. An application to open shop scheduling,” Technical report TR/IRIDIA/2003-17, 2003.
  13. T. Stützle, “An ant approach to the flow shop problem,” Technical report AIDA-97-07, 1997.
  14. A. Baucer, B. Bullnheimer, R. F. Hartl and C. Strauss, “Minimizing total tardiness on a single machine using ant colony optimization,” Central European Journal for Operations Research and Economics, vol.8, no.2, pp.125-141, 2000.
  15. M. den Besten, “Ants for the single machine total weighted tardiness problem,” Master’s thesis, University of Amsterdam, 2000.
  16. M. den Besten, T. Stützle and M. Dorigo, “Ant colony optimization for the total weighted tardiness problem,” Proceedings of PPSN-VI, Sixth International Conference on Parallel Problem Solving from Nature, vol. 1917 of Lecture Notes in Computer Science, pp.611-620, 2000.
  17. D. Merkle and M. Middendorf, “An ant algorithm with a new pheromone evaluation rule for total tardiness problems,” Real World Applications of Evolutionary Computing, vol. 1803 of Lecture Notes in Computer Science, pp.287-296, 2000.
  18. D. Merkle, M. Middendorf and H. Schmeck, “Ant colony optimization for resource-constrained project scheduling,” Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2000), pp.893-900, 2000.
  19. C. Blum, “ACO applied to group shop scheduling: a case study on intensification and diversification,” Proceedings of ANTS 2002, vol. 2463 of Lecture Notes in Computer Science, pp.14-27, 2002.
  20. C. Gagné, W. L. Price and M. Gravel, “Comparing an ACO algorithm with other heuristics for the single machine scheduling problem with sequence-dependent setup times,” Journal of the Operational Research Society, vol.53, pp.895-906, 2002.
  21. P. Toth, D. Vigo, “Models, relaxations and exact approaches for the capacitated vehicle routing problem,” Discrete Applied Mathematics, vol.123, pp.487-512, 2002.
  22. J. M. Belenguer, and E. Benavent, “A cutting plane algorithm for capacitated arc routing problem,” Computers & Operations Research, vol.30, no.5, pp.705-728, 2003.
  23. T. K. Ralphs, “Parallel branch and cut for capacitated vehicle routing,” Parallel Computing, vol.29, pp.607-629, 2003.
  24. S. Salhi and M. Sari, “A multi-level composite heuristic for the multi-depot vehicle fleet mix problem,” European Journal for Operations Research, vol.103, no.1, pp.95-112, 1997.
  25. E. Angelelli and M. G. Speranza, “The periodic vehicle routing problem with intermediate facilities,” European Journal for Operations Research, vol.137, no.2, pp.233-247, 2002.
  26. S. C. Ho and D. Haugland, “A tabu search heuristic for the vehicle routing problem with time windows and split deliveries,” Computers & Operations Research, vol.31, no.12, pp.1947-1964, 2004.
  27. N. Secomandi, “Comparing neuro-dynamic programming algorithms for the vehicle routing problem with stochastic demands,” Computers & Operations Research, vol.27, no.11, pp.1201-1225, 2000.
  28. W. P. Nanry and J. W. Barnes, “Solving the pickup and delivery problem with time windows using reactive tabu search,” Transportation Research Part B, vol.34, no. 2, pp.107-121, 2000.
  29. R. Bent and P.V. Hentenryck, “A two-stage hybrid algorithm for pickup and delivery vehicle routing problems with time windows,” Computers & Operations Research, vol.33, no.4, pp.875-893, 2003.
  30. A. Bachem, W. Hochstattler and M. Malich, “The simulated trading heuristic for solving vehicle routing problems,” Discrete Applied Mathematics, vol. 65, pp.47-72, 1996.
  31. S. C. Hong and Y. B. Park, “A heuristic for bi-objective vehicle routing with time window constraints,” International Journal of Production Economics, vol.62, no.3, pp.249-258, 1999.
  32. R. A. Rusell and W. C. Chiang, “Scatter search for the vehicle routing problem with time windows,” European Journal for Operations Research, vol.169, no.2, pp.606-622, 2006.
  33. T. Stützle, “MAX-MIN Ant System for the quadratic assignment problem,” Technical Report AIDA-97-4, FB Informatik, TU Darmstadt, Germany, 1997.
  34. R. Lourenço and D. Serra “Adaptive search heuristics for the generalized assignment problem,” Mathware & soft computing, vol.9, no.2-3, 2002.
  35. M. Yagiura, T. Ibaraki and F. Glover, “An ejection chain approach for the generalized assignment problem,” INFORMS Journal on Computing, vol. 16, no. 2, pp. 133–151, 2004.
  36. K. I. Aardal, S. P. M.van Hoesel, A. M. C. A. Koster, C. Mannino and Antonio. Sassano, “Models and solution techniques for the frequency assignment problem,” A Quarterly Journal of Operations Research, vol.1, no.4, pp.261-317, 2001.
  37. Y. C. Liang and A. E. Smith, “An ant colony optimization algorithm for the redundancy allocation problem (RAP),” IEEE Transactions on Reliability, vol.53, no.3, pp.417-423, 2004.
  38. G. Leguizamon and Z. Michalewicz, “A new version of ant system for subset problems,” Proceedings of the 1999 Congress on Evolutionary Computation(CEC 99), vol.2, pp.1458-1464, 1999.
  39. R. Hadji, M. Rahoual, E. Talbi and V. Bachelet “Ant colonies for the set covering problem,” Abstract proceedings of ANTS2000, pp.63-66, 2000.
  40. V Maniezzo and M Milandri, “An ant-based framework for very strongly constrained problems,” Proceedings of ANTS2000, pp.222-227, 2002.
  41. R. Cordone and F. Maffioli,“Colored Ant System and local search to design local telecommunication networks,” Applications of Evolutionary Computing: Proceedings of Evo Workshops, vol.2037, pp.60-69, 2001.
  42. C. Blum and M.J. Blesa, “Metaheuristics for the edge-weighted k-cardinality tree problem,” Technical Report TR/IRIDIA/2003-02, IRIDIA, 2003.
  43. S. Fidanova, “ACO algorithm for MKP using various heuristic information”, Numerical Methods and Applications, vol.2542, pp.438-444, 2003.
  44. G. Leguizamon, Z. Michalewicz and Martin Schutz, “An ant system for the maximum independent set problem,” Proceedings of the 2001 Argentinian Congress on Computer Science, vol.2, pp.1027-1040, 2001.
  45. D. Martens, M. De Backer, R. Haesen, J. Vanthienen, M. Snoeck, B. Baesens, "Classification with Ant Colony Optimization", IEEE Transactions on Evolutionary Computation, volume 11, number 5, pages 651—665, 2007.
  46. G. D. Caro and M. Dorigo, “Extending AntNet for best-effort quality-of-service routing,” Proceedings of the First Internation Workshop on Ant Colony Optimization (ANTS’98), 1998.
  47. G.D. Caro and M. Dorigo “AntNet: a mobile agents approach to adaptive routing,” Proceedings of the Thirty-First Hawaii International Conference on System Science, vol.7, pp.74-83, 1998.
  48. G. D. Caro and M. Dorigo, “Two ant colony algorithms for best-effort routing in datagram networks,” Proceedings of the Tenth IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS’98), pp.541-546, 1998.
  49. R. S. Parpinelli, H. S. Lopes and A. A Freitas, “An ant colony algorithm for classification rule discovery,” Data Mining: A heuristic Approach, pp.191-209, 2002.
  50. R. S. Parpinelli, H. S. Lopes and A. A Freitas, “Data mining with an ant colony optimization algorithm,” IEEE Transaction on Evolutionary Computation, vol.6, no.4, pp.321-332, 2002.
  51. W. N. Chen, J. ZHANG and H. Chung, “Optimizing Discounted Cash Flows in Project Scheduling--An Ant Colony Optimization Approach”, IEEE Transactions on Systems, Man, and Cybernetics--Part C: Applications and Reviews Vol.40 No.5 pp.64-77, Jan. 2010.
  52. W. N. Chen and J. ZHANG “Ant Colony Optimization Approach to Grid Workflow Scheduling Problem with Various QoS Requirements”, IEEE Transactions on Systems, Man, and Cybernetics--Part C: Applications and Reviews, Vol. 31, No. 1,pp.29-43,Jan 2009.
  53. S. Meshoul and M Batouche, “Ant colony system with extremal dynamics for point matching and pose estimation,” Proceeding of the 16th International Conference on Pattern Recognition, vol.3, pp.823-826, 2002.
  54. H. Nezamabadi-pour, S. Saryazdi, and E. Rashedi, “ Edge detection using ant algorithms”, Soft Computing, vol. 10, no.7, pp. 623-628, 2006.
  55. Xiao. M.Hu, J. ZHANG, and H. Chung, “An Intelligent Testing System Embedded with an Ant Colony Optimization Based Test Composition Method”, IEEE Transactions on Systems, Man, and Cybernetics--Part C: Applications and Reviews, Vol. 39, No. 6, pp. 659-669, Dec 2009.
  56. L. Wang and Q. D. Wu, “Linear system parameters identification based on ant system algorithm,” Proceedings of the IEEE Conference on Control Applications, pp.401-406, 2001.
  57. K. C. Abbaspour, R. Schulin, M. T. Van Genuchten, “Estimating unsaturated soil hydraulic parameters using ant colony optimization,” Advances In Water Resources, vol.24, no.8, pp.827-841, 2001.
  58. X. M. Hu, J. ZHANG,J. Xiao and Y. Li, “Protein Folding in Hydrophobic-Polar Lattice Model: A Flexible Ant- Colony Optimization Approach ”, Protein and Peptide Letters, Volume 15, Number 5, 2008, Pp. 469-477.
  59. A. Shmygelska, R. A. Hernández and H. H. Hoos, “An ant colony algorithm for the 2D HP protein folding problem,” Proceedings of the 3rd International Workshop on Ant Algorithms/ANTS 2002, Lecture Notes in Computer Science, vol.2463, pp.40-52, 2002.
  60. J. ZHANG, H. Chung, W. L. Lo, and T. Huang, “Extended Ant Colony Optimization Algorithm for Power Electronic Circuit Design”, IEEE Transactions on Power Electronic. Vol.24,No.1, pp.147-162, Jan 2009.
  61. A. Ajith; G. Crina; R. Vitorino (editors), Stigmergic Optimization, Studies in Computational Intelligence, volume 31, 299 pages, 2006. ISBN 978-3-540-34689-0
  62. P.-P. Grassé, La reconstruction du nid et les coordinations inter-individuelles chez Belicositermes natalensis et Cubitermes sp. La théorie de la Stigmergie : Essai d’interprétation du comportement des termites constructeurs, Insectes Sociaux, number 6, pp. 41-80, 1959.
  63. J.-L. Deneubourg, J.M. Pasteels and J.C. Verhaeghe, Probabilistic Behaviour in Ants: a Strategy of Errors?, Journal of Theoretical Biology, number 105, 1983.
  64. F. Moyson, B. Manderick, The collective behaviour of Ants: an Example of Self-Organization in Massive Parallelism, Proceedings of the AAAI Spring Symposium on Parallel Models of Intelligence, Stanford, California, 1988.
  65. M. Ebling, M. Di Loreto, M. Presley, F. Wieland, and D. Jefferson, An Ant Foraging Model Implemented on the Time Warp Operating System, Proceedings of the SCS Multiconference on Distributed Simulation, 1989.
  66. M. Dorigo, V. Maniezzo and A. Colorni, Positive feedback as a search strategy, technical report number 91-016, Dip. Elettronica, Politecnico di Milano, Italy, 1991.
  67. R. Schoonderwoerd, O. Holland, J. Bruten and L. Rothkrantz, Ant-based load balancing in telecommunication networks, Adaptive Behaviour, volume 5, number 2, pages 169-207, 1997.
  68. M. Dorigo, ANTS’ 98, From Ant Colonies to Artificial Ants: First International Workshop on Ant Colony Optimization, ANTS 98, Brussels, Belgium, October 1998.
  69. T. Stützle, Parallelization Strategies for Ant Colony Optimization, Proceedings of PPSN-V, Fifth International Conference on Parallel Problem Solving from Nature, Springer-Verlag, volume 1498, pages 722-731, 1998.
  70. É. Bonabeau, M. Dorigo and G. Theraulaz, Swarm intelligence, Oxford University Press, 1999.
  71. M. Dorigo, G. Di Caro and T. Stützle, Special issue on "Ant Algorithms", Future Generation Computer Systems, volume 16, number 8, 2000.
  72. W.J. Gutjahr, A graph-based Ant System and its convergence, Future Generation Computer Systems, volume 16, pages 873-888, 2000.
  73. S. Iredi, D. Merkle and M. Middendorf, Bi-Criterion Optimization with Multi Colony Ant Algorithms, Evolutionary Multi-Criterion Optimization, First International Conference (EMO'01), Zurich, Springer Verlag, pages 359-372, 2001.
  74. L. Bianchi, L.M. Gambardella and M. Dorigo, An ant colony optimization approach to the probabilistic traveling salesman problem, PPSN-VII, Seventh International Conference on Parallel Problem Solving from Nature, Lecture Notes in Computer Science, Springer Verlag, Berlin, Germany, 2002.