Nature-inspired algorithms provide interesting ways to optimise complex functions. In one of my courses, Scientific Computing, we were given a function with no analytically known solution and asked to optimise it with the Firefly algorithm.
The Firefly algorithm was proposed by Xin-She Yang in 2008.
The algorithm is based on the metaphor of a firefly swarm and rests on the following assumptions:
- All fireflies are unisex and can therefore be attracted to every other firefly.
- The attractiveness of a firefly is proportional to its brightness. Since brightness decreases with distance, fireflies are attracted to each other based on apparent brightness.
- If a firefly is not attracted to any other firefly during an iteration, it moves randomly.
The brightness of a firefly is associated with the function value.
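In Yang's paper, "Firefly Algorithms for Multimodal Optimization", the apparent brightness seen over a distance r decays as I(r) = I0 * exp(-gamma*r^2), where gamma is the absorption coefficient, and a dimmer fly i moves towards a brighter fly j according to x_i <- x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*(rand - 1/2), where beta0 is the attractiveness at r = 0 and alpha scales the random step. My implementation below follows this structure, with a small variation in how the random step is applied.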
The pseudocode of the algorithm can thus be described as:
- Initialise the objective function f(x)
- Generate a population of fireflies
- Formulate their intensities, I, associated with the objective function f(x)
- Define an absorption coefficient (which decreases the intensity over distance)
- While the stopping criterion is not met:
  - Loop i over the number of fireflies
    - Loop j over the rest of the fireflies (from i+1)
      - Calculate the two flies' apparent brightness
      - Compare the two flies' brightness to see which attracts the other
      - If fly i is brighter than fly j, fly j moves towards i
      - Otherwise fly i moves towards fly j
- Find the most attractive firefly and repeat until the stopping criterion is met
These functions were simply provided by one of the lecturers, and I have not been able to find documentation for them. The first, xin_she_yang, has an analytically known solution, while xin_she_yang2 does not. This makes them good for testing: implement the algorithm on the first function to check that it can find the known solution, and then test it on the second function.
from numpy import array

def xin_she_yang(x):
    # xin_she_yang: The Xin-She Yang multi-dimensional
    # function, which is notoriously difficult to find the
    # global minimum of. This (original) version has the
    # analytically known solution (easy to see by inspection)
    # x(i)=1 for all i.
    rn = array([0.8444218515250481, 0.7579544029403025,
                0.420571580830845, 0.25891675029296335,
                0.5112747213686085, 0.4049341374504143,
                0.7837985890347726, 0.30331272607892745,
                0.4765969541523558, 0.5833820394550312,
                0.9081128851953352, 0.5046868558173903,
                0.28183784439970383, 0.7558042041572239,
                0.6183689966753316, 0.25050634136244054,
                0.9097462559682401, 0.9827854760376531])
    f = 0
    for i in range(len(x) - 1):
        f = f + (1 - x[i])**2 + 100*rn[i]*(x[i+1] - x[i]**2)**2
    return f
from numpy import array

def xin_she_yang2(x):
    # xin_she_yang2: A variant of the Xin-She Yang multi-
    # dimensional function, which is notoriously difficult to
    # find the global minimum of. This variant does NOT have
    # an analytically known solution.
    rnd = array([[0.8444218515250481, 0.7579544029403025,
                  0.420571580830845, 0.25891675029296335,
                  0.5112747213686085, 0.4049341374504143,
                  0.7837985890347726, 0.30331272607892745,
                  0.4765969541523558, 0.5833820394550312],
                 [0.9081128851953352, 0.5046868558173903,
                  0.28183784439970383, 0.7558042041572239,
                  0.6183689966753316, 0.25050634136244054,
                  0.9097462559682401, 0.9827854760376531,
                  0.5468815192049839, 0.9575068354342976]])
    f = 0
    for i in range(len(x) - 1):
        f = f + (1 - x[i])**2 + 100*rnd[0, i]*(x[i+1] - rnd[1, i]*x[i]**2)**2
    return f
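As a quick sanity check (my own addition, not part of the assignment), the original function should evaluate to exactly zero at the known solution x(i) = 1:

import numpy as np
from xin_she_yang import xin_she_yang

x_star = np.ones(10)         # The analytically known minimum: x(i) = 1 for all i
print(xin_she_yang(x_star))  # Every term contains a factor of (1 - 1), so this prints 0.0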
Implementation of the algorithm
Start by importing the necessary packages:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fmin

exp = np.exp
sqrt = np.sqrt
ones = np.ones
Then, I found it useful to make a script that can run with either function based on the value of a single parameter. I use the parameter case to indicate this, and use an if-statement to initialise the objective function.
case = 0  # case=1 uses the original function; any other value uses the modified one.

if case == 1:
    from xin_she_yang import xin_she_yang  # Import test case
    f = xin_she_yang                       # Define objective function
    print('Working with the original case')   # Print to monitor which case we work with
else:
    from xin_she_yang2 import xin_she_yang2  # Import modified case
    f = xin_she_yang2                        # Define objective function
    print('Working with the modified case')  # Print to monitor which case we work with

x0 = 0.5*ones(10)
xmin = fmin(func=f, x0=x0, xtol=1e-6, maxiter=10000, maxfun=10000, disp=0)  # Find firefly in minimum
f_min = f(xmin)  # Save minimum for reference

it = 0          # Iteration counter, starts at zero
beta0 = 1       # This value is defined in the assignment text
gamma = 0       # Same with this
alpha = 2       # Ela's optimal alpha value
delta = .9985   # Ela's optimal delta value
number_of_fireflies = 20
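A note on the reference value: scipy's fmin implements the Nelder-Mead simplex method, which is a local optimiser, so f_min is a reference minimum rather than a guaranteed global one. For measuring the relative error of the firefly swarm, that is good enough.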
Next, I initialise a generation of 20 fireflies spread out over the search area.
search_area = np.linspace(-alpha, alpha, number_of_fireflies)  # 20 linearly spaced starting values
X = [search_area[x]*np.ones([10, 1]) for x in range(number_of_fireflies)]  # Define a set of fireflies: one 10-dimensional fly per starting value, spread evenly rather than randomly
Now, I formulate an intensity for each firefly:
I0 = np.zeros([20, 1])  # Preallocate space for the intensity at each fly's 'own place'
for i in range(len(X)):  # Calculate the brightness at 'its own place', I_0
    x = X[i]  # Note: it has proved important to pick the i'th vector out of the list and define it as x, rather than indexing the list directly
    I0[i] = 1/(f(x) + 1)  # The fly should be 'brighter'/more attractive as the function value decreases
Next, I define a couple of helper functions to use in my while loop. These functions calculate the distance between fireflies, update the brightness of a firefly when it has taken a new position, and calculate the apparent brightness of a firefly seen from a certain distance away.
def r(i, j):
    # Calculate the Euclidean distance between fireflies i and j
    xi = X[i]
    xj = X[j]
    r = 0
    for k in range(len(xi)):
        r += (xj[k] - xi[k])**2
    return sqrt(r)

def I_0(i):
    # Update the brightness of firefly i at 'its own place'
    x = X[i]
    return 1/(f(x) + 1)

def I(I0i, i, j):
    # Calculate the apparent brightness of a firefly with own intensity I0i,
    # seen over the distance r(i, j). This expression is taken from
    # "Firefly Algorithms for Multimodal Optimization", eq. (3).
    return I0i*exp(-gamma*(r(i, j)**2))
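One thing worth noting: with gamma = 0, as set above, the factor exp(-gamma*r**2) is always 1, so the apparent brightness reduces to each fly's own intensity I0. The distance function only starts to matter if you experiment with gamma > 0.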
I want my algorithm to minimise the relative error of the function value, using the result of scipy.optimize's fmin as an 'upper limit' to compare against. Therefore, I initialise a couple of variables to keep track of during my while loop.
best = []  # Track the best function value per iteration
best_function_value = f(X[np.argmax(I0)])
best.append(best_function_value)
rel_err = 1
rel = []  # Track the relative error per iteration
I then set up a while loop to optimise the function based on the pseudocode I wrote above. I like to keep track of how my algorithm is progressing, so I print out the current optimal value every 500th iteration. This way, I will quickly notice if anything isn't working.
print('Starting while loop')
while rel_err > 10**(-6):  # While the relative error is too large
    if it % 500 == 0:  # Monitor progress
        print(it, 'iterations and current best minimum at', best_function_value)
    for i in range(len(X)):  # Loop over the number of fireflies
        for j in range(i+1, len(X)):  # Loop over the rest of the fireflies (skipping pairs we've already compared)
            Ii = I(I0[i], i, j)  # The apparent intensity of the i'th firefly
            Ij = I(I0[j], i, j)  # The apparent intensity of the j'th firefly
            xi = X[i]  # Position of the i'th firefly
            xj = X[j]  # Position of the j'th firefly
            if Ij > Ii:  # If the fly we compare with is more attractive than the i'th firefly, move the i'th firefly towards it
                p1 = (xj - xi)
                p2 = alpha*(np.random.rand(10, 1) - 0.5)
                X[i] = xi + p1 - p2  # Update i's position
                I0[i] = I_0(i)  # Update i's intensity at 'its own place'
            else:
                p1 = (xi - xj)
                p2 = alpha*(np.random.rand(10, 1) - 0.5)
                X[j] = xj + p1 - p2  # Update j's position
                I0[j] = I_0(j)  # Update j's intensity at 'its own place'
    alpha = alpha*delta  # Shrink the random-step size
    it = it + 1  # Update counter
    best_function_value = f(X[np.argmax(I0)])
    best.append(best_function_value)
    rel_err = (best_function_value - f_min)/f_min
    rel.append(rel_err)
Finally, I make a plot of the progression per iteration to evaluate the efficiency of my implementation. For the modified case, this is how the algorithm progresses:
plt.plot(best)
plt.xlabel('#iterations')
plt.ylabel('Best function value')
plt.title('Firefly optimisation')
plt.tight_layout()
plt.show()
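Since the relative error is also stored in rel, a logarithmic plot of its magnitude (my own addition, reusing the variables above) gives a clearer picture of the convergence:

plt.figure()
plt.semilogy(np.abs(rel))  # abs() because the swarm can dip below the fmin reference, making rel_err negative
plt.xlabel('#iterations')
plt.ylabel('|Relative error|')
plt.title('Convergence of the Firefly optimisation')
plt.tight_layout()
plt.show()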
There are minor differences between the original Firefly algorithm and my implementation, but I used the overall analogy of a firefly swarm to actually find good solutions for the given functions.