<![CDATA[Solving global optimization problems with differential evolution (DE) algorithm]]> solving-global-optimization-problems-with-differential-evolution-algorithm solving-global-optimization-problems-with-differential-evolution-algorithm Differential evolution (DE) is a relatively recent stochastic, population-based evolutionary method introduced by Storn and Price in the mid-1990s [1]. It follows the standard evolutionary algorithm flow, with some significant differences in the mutation and selection processes. The simplicity of DE comes from the fact that it has only two tunable control parameters, the mutation factor $F$ and the crossover probability $CR$ (besides the population size $NP$). The fundamental idea behind DE is the use of vector differences: randomly selected vectors are subtracted, and their difference is used to perturb the parent vector and probe the search space. Several variants of DE have been proposed so far [2, 3], but the following analysis focuses on the nominal approach (DE/rand/1/bin). According to this scheme, each member of the population undergoes mutation and crossover. Once crossover has occurred, the offspring is compared to its parent, and whichever has the better fitness moves on to the next generation (selection) (see Fig. 1).

Figure 1: Differential evolution process.

Mutation

After initialization, each member of the population undergoes mutation and a donor vector $\vec{v}_{i, G + 1}$ is generated as: \begin{equation} \vec{v}_{i, G + 1} = \vec{x}_{p, G} + F \cdot (\vec{x}_{q, G} - \vec{x}_{r, G}) \label{eq:mutation} \end{equation} where $G$ is the current generation, $NP$ is the population size and $F$ is a real constant, called the mutation factor. The integers $p$, $q$ and $r$ are chosen randomly from the interval $[1, NP]$ such that the indices $i$, $p$, $q$ and $r$ are mutually distinct.
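
As an illustration, the mutation step of Eq. (\ref{eq:mutation}) can be written in a few lines of Python/NumPy. This is only a minimal sketch (not a reference implementation): the population is assumed to be stored as an $NP \times D$ NumPy array, and the value of $F$ is an illustrative choice.

import numpy as np

def mutate(pop, i, F=0.8):
    """Create the donor vector for member i (DE/rand/1 scheme)."""
    NP = pop.shape[0]
    # pick three distinct indices p, q, r, all different from i
    candidates = [k for k in range(NP) if k != i]
    p, q, r = np.random.choice(candidates, size=3, replace=False)
    return pop[p] + F * (pop[q] - pop[r])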

Crossover

In the next step the crossover operator is applied, generating the trial vector $\vec{u}_{i, G + 1}$ whose $j$-th component is taken either from $\vec{v}_{i(j), G + 1}$ (with probability $CR$) or from $\vec{x}_{i(j), G}$: \begin{equation} \begin{aligned} u_{i(j), G + 1} &= \left\{ \begin{array}{ll} v_{i(j), G + 1} \mbox{, } & \text{if} \;\; r_{j} \leq CR \;\; \text{or} \;\; j = J_{r} \\ x_{i(j), G} \mbox{, } & \text{if} \;\; r_{j} > CR \;\; \text{and} \;\; j \ne J_{r} \end{array} \right. \\ i &= 1, 2, \dots, NP, \quad j = 1, 2, \dots, D \end{aligned} \label{eq:crossover} \end{equation} where $r_{j} \sim U[0, 1]$ is a uniform random number drawn for each component and $J_{r}$ is a random integer from $[1, 2, \dots, D]$ which ensures that $\vec{u}_{i, G + 1}$ inherits at least one component from $\vec{v}_{i, G + 1}$ and therefore differs from $\vec{x}_{i, G}$. $D$ is the problem dimension.
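
Again as a rough sketch, and assuming the same NumPy import and population layout as above, binomial crossover can be implemented as follows (the $CR$ value is illustrative):

def crossover(target, donor, CR=0.9):
    """Binomial crossover between a target vector and its donor vector."""
    D = target.size
    j_rand = np.random.randint(D)        # guarantees at least one donor component
    mask = np.random.rand(D) <= CR
    mask[j_rand] = True
    return np.where(mask, donor, target)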

Selection

The last step of the generation procedure is the application of the selection operator, where the parent vector $\vec{x}_{i, G}$ is compared to the trial vector $\vec{u}_{i, G + 1}$: \begin{equation} \begin{aligned} \vec{x}_{i, G + 1} &= \left\{ \begin{array}{ll} \vec{u}_{i, G + 1} \mbox{, } & \text{if} \;\; f(\vec{u}_{i, G + 1}) \leq f(\vec{x}_{i, G}) \\ \vec{x}_{i, G} \mbox{, } & \text{otherwise} \end{array} \right. \\ i &= 1, 2, \dots, NP \end{aligned} \label{eq:selection} \end{equation} where $f(\vec{x})$ is the objective function to be optimized; without loss of generality, the implementation described in Eq. (\ref{eq:selection}) corresponds to a minimization problem. The whole process is summarized in the following algorithm:
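
Below is a compact Python sketch of the complete DE/rand/1/bin loop, using the mutate and crossover helpers defined above. It is an illustrative implementation rather than the authors' reference code; the control parameter values, the box-constraint handling via clipping and the fixed number of generations are all arbitrary choices.

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, generations=200):
    """Minimize f over a box defined by bounds = [(low_1, high_1), ...]."""
    lo, hi = np.array(bounds, dtype=float).T
    D = len(bounds)
    pop = lo + np.random.rand(NP, D) * (hi - lo)      # random initialization
    fitness = np.array([f(x) for x in pop])
    for G in range(generations):
        for i in range(NP):
            donor = mutate(pop, i, F)                 # mutation
            trial = np.clip(crossover(pop[i], donor, CR), lo, hi)  # crossover
            f_trial = f(trial)
            if f_trial <= fitness[i]:                 # selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]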

References

  1. Storn R. and Price K. (1997). Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341-359.
  2. Price K., Storn R. M., and Lampinen J. A. (2005). Differential Evolution - A Practical Approach to Global Optimization. Springer.
  3. Das S. and Suganthan P. (2011). Differential evolution: A survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 15(1), 4-31.
]]>
MG 2015-03-28 00:00:00
<![CDATA[Solvers for linear systems: The conjugated gradient algorithm]]> solvers-for-linear-systems-the-conjugated-gradient-algorithm solvers-for-linear-systems-the-conjugated-gradient-algorithm Many of the large systems of linear equations encountered in science and engineering arise from the discretization of partial differential equations. To this end, a whole range of numerical techniques has been developed in order to solve a linear system of the form: \begin{equation} \textbf{A} \textbf{x} = \textbf{b} \label{eq:linear_eq} \end{equation} where $\textbf{A} \in \Re^{n \times n}$. There are two broad categories of algorithms used to solve \eqref{eq:linear_eq}: direct and iterative methods. The choice between a direct and an iterative method depends on the efficiency of the algorithm, the particular structure of the matrix $\textbf{A}$ and the trade-off between computational time and memory. The term iterative method refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step; for large sparse systems they are in general more efficient. There are two types of iterative methods. Stationary methods are older and simpler to understand and implement, but usually not as effective. Nonstationary methods are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective.

Below is a short classification of these methods:

  1. Stationary methods
    1. Gauss-Seidel
    2. Jacobi
    3. Successive Overrelaxation (SOR)
    4. Symmetric Successive Overrelaxation (SSOR)
  2. Non-Stationary methods
    1. Conjugate Gradient (CG)
    2. Preconditioned Conjugate Gradient (PCG)
    3. Conjugate Gradient Squared (CGS)
    4. BiConjugate Gradient (BiCG)
    5. Biconjugate Gradient Stabilized (Bi-CGSTAB)
    6. Chebyshev Iteration
    7. Minimum Residual (MINRES) and Symmetric LQ (SYMMLQ)
    8. Conjugate Gradient on the Normal Equations: CGNE and CGNR
    9. Generalized Minimal Residual (GMRES)
    10. Quasi-Minimal Residual (QMR)

The conjugate gradient algorithm

In this page the conjugate gradient (CG) method is introduced as an effective method for symmetric positive definite matrices $\textbf{A}$. A real symmetric matrix $\textbf{A}\in \Re^{n \times n}$ is called positive definite if: \begin{equation} \textbf{x}^{T}\textbf{A}\textbf{x} > 0 \quad \forall \, \textbf{x} \in \Re^{n}, \; \textbf{x} \neq \textbf{0} \label{eq:positive_definite} \end{equation} The method proceeds by generating vector sequences of iterates (i.e., successive approximations to the solution), residuals corresponding to the iterates, and search directions used in updating the iterates and residuals. Although the length of these sequences can become large, only a small number of vectors needs to be kept in memory. In every iteration of the method, two inner products are performed in order to compute update scalars that are defined to make the sequences satisfy certain orthogonality conditions. On a symmetric positive definite linear system these conditions imply that the distance to the true solution is minimized in some norm.

A python implementation of conjugate gradient algorithm

A common implementation of the CG algorithm is given in the following Python snippet.
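
The snippet below is a minimal sketch of the textbook (unpreconditioned) CG iteration; the stopping tolerance, the iteration cap and the reporting of the iteration history are illustrative choices.

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive definite matrix A."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                        # initial residual
    p = r.copy()                         # initial search direction
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # step length
        x = x + alpha * p                # update the iterate
        r = r - alpha * Ap               # update the residual
        rs_new = r @ r
        print(k + 1, x, np.sqrt(rs_new)) # iteration, current iterate, residual norm
        if np.sqrt(rs_new) < tol:        # convergence check on the residual norm
            break
        p = r + (rs_new / rs_old) * p    # new search direction
        rs_old = rs_new
    return x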

For the linear solution of the system: \begin{equation} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \end{equation}
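
with a call such as the following (the initial guess $\textbf{x}_0 = [2, 1]^T$ is inferred from the first iterate reported in the table below):

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, x0=np.array([2.0, 1.0]))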

the Python snippet gives:

Iteration   $x_{1}$        $x_{2}$        norm
1           0.23564955     0.33836858     8.0019370424E-01
2           0.09090909     0.63636364     5.5511151231E-17

It is worth noting that the exact solution of the system has been reached after only 2 iterations! This is expected: in exact arithmetic, CG converges in at most $n$ iterations for an $n \times n$ symmetric positive definite system, and here $n = 2$.

References

  1. Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C. and van der Vorst, H. (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition, SIAM, Philadelphia, PA.
  2. Saad, Y. (2003). Iterative Methods for Sparse Linear Systems, 2nd Edition, SIAM.
]]>
MG 2014-11-02 00:00:00
<![CDATA[Improving BibTeX export from Zotero]]> improving-bibtex-export-from-zotero improving-bibtex-export-from-zotero Zotero [zoh-TAIR-oh] is a free, easy-to-use tool to help you collect, organize, cite, and share your research sources. I use it as my standard solution for managing references in scientific documents written in LaTeX. Once I have my references in the Zotero library, I can easily sort them into folders and export them directly to BibTeX format. When using the default BibTeX translator (you will find it inside Zotero Installation Folder/translators/BibTeX.js), I normally apply the following hacks:

Change the cite key format: To export a short format of citation keys, containing only the author name and the year, I change the line:

var citeKeyFormat = "%a_%t_%y";

To this:

var citeKeyFormat = "%a%y";

Export special characters correctly:

The LaTeX compiler may not recognize special characters such as ñ if Zotero writes them verbatim into the BibTeX file, so you should choose a suitable character encoding during the export process so that such characters are written in their escaped form, e.g. "\~{n}". To this end, the Western character encoding is the most appropriate. Go to Export Options and check the box Display character encoding option on export. Then, from the dropdown menu that shows up in the export dialog, pick the Western character encoding. See the image (Windows case; the same applies on Linux systems).

You can download my modified BibTeX.js translator, which includes the aforementioned changes along with some other minor modifications.

]]>
MG 2014-09-29 00:00:00
<![CDATA[Tips to Improve Overall System Performance in Ubuntu Systems]]> tips-to-improve-overall-system-performance-in-ubuntu-systems tips-to-improve-overall-system-performance-in-ubuntu-systems After extensive use and a lot of modifications to the core system, I want to share some tweaks to speed up your Ubuntu system. Please note that by applying these tweaks you're not, of course, turning your system into a rocket :), but you may achieve a considerable speed-up in some cases. To check your overall system performance, there are a lot of free tools (e.g. bootchart).

Change swappiness value

This tweak is noticeable on computers with relatively little RAM (1 GB or less), where the system accesses the hard disk too much because of its use of virtual memory, called swap. Ubuntu's inclination to use the swap is determined by the swappiness value. The lower the value, the longer it takes before Ubuntu starts using the swap. On a scale of 0-100, the default value is 60, which is much too high for normal desktop use (and only fit for servers). You can check the current swappiness value by typing in the terminal:

cat /proc/sys/vm/swappiness

To change the swappiness value and improve the cache management, type in the terminal:

sudo pico /etc/sysctl.conf

and scroll to the bottom of the file to add the following lines:

# Decrease swap usage to a workable level
vm.swappiness = 10
# Improve cache management
vm.vfs_cache_pressure = 50
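
These settings normally take effect on the next boot; to apply them immediately you can reload the configuration from /etc/sysctl.conf with:

sudo sysctl -p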

Install Preload

Preload is an adaptive readahead daemon. It monitors the applications that users run and, by analyzing this data, predicts which applications users might run next, fetching those binaries and their dependencies into memory for faster startup times. To install preload simply run the following command in a terminal:

sudo apt-get install preload

The configuration file for Preload is kept in /etc/preload.conf and the default values should be fine for most people. But if you want to tweak the operation of Preload, you can visit the official site and look up the available configuration options.

You can also download and run a bash script that collects all the above tweaks, after making it executable with the following command:

sudo chmod +x ubuntu_perfomance.sh
]]>
MG 2014-08-26 00:00:00
<![CDATA[Aria2: Lightweight Multi-Protocol Download Utility Operated in Command-Line]]> aria2-lightweight-multi-protocol-download-utility-operated-in-command-line aria2-lightweight-multi-protocol-download-utility-operated-in-command-line One of my favorite command-line download utilities on Linux systems is aria2. Aria2 has advanced features, with support for downloading a file in multiple chunks from multiple servers, as well as for torrents. I have been using it quite often, especially for torrent downloading. To install aria2 on an Ubuntu system execute as root:

# apt-get install aria2
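
As a quick usage example (the URL is only a placeholder), aria2 can fetch a file over several connections per server with the -x option:

aria2c -x 4 http://example.com/some-large-file.iso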

Aria2 can be combined with WebUI-Aria2, a web interface to manage your aria2 instance. WebUI-Aria2 is very simple to install, and you will be able to add files, magnet links or torrents to download right from your web browser. Assuming that you are still root and the Apache2 server is running, execute the following:

Download webui-aria2 using git in the /var/www directory

# cd /var/www 
# git clone https://github.com/ziahamza/webui-aria2

If you navigate with your browser to the corresponding URL of the webui-aria2 folder on your server, you will see the following info message:

Oh Snap! Could not connect to the aria2 RPC server. Will retry in 10 secs. You might want to check the connection settings by going to Settings > Connection Settings

This is because you need to launch aria2 as a background service to listen for incoming connections. In order to avoid anyone else using your aria2 instance, you can generate a random token as follows:

openssl rand -base64 32

Run aria2 as a daemon

After that you need to run aria2 as a daemon, passing the token generated above via the --rpc-secret option so that only clients that know it can use the RPC interface (the same token must then be entered in WebUI-Aria2's connection settings):

# aria2c --enable-rpc --rpc-listen-all --rpc-secret=<your token> --daemon

For further configuration and parameterization check also the Aria2 Documentation.

]]>
MG 2017-08-26 00:00:00
<![CDATA[A Python script to convert svg files to pdf at once]]> a-python-script-to-convert-svg-files-to-pdf-at-once a-python-script-to-convert-svg-files-to-pdf-at-once For my publications, I'm using Inkscape to create vector graphics in svg format, and I then convert them to pdf format in order to import them into a LaTeX document. Quite often the number of graphics files in a document is very large, so every time I correct an svg file I have to re-convert it to the corresponding pdf. To avoid doing this by hand, I took advantage of Inkscape's powerful CLI and wrote the following all_svg2pdf script in Python.

Download all_svg2pdf.py

The script finds all svg files in a root folder and below it, and converts them to corresponding pdf files without using Inkscape's GUI.
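
A minimal sketch of what such a script can look like is shown below; it assumes the pre-1.0 Inkscape command-line syntax (--export-pdf), which was current when this post was written, and that the inkscape executable is on the PATH (newer Inkscape releases use --export-filename instead):

import os
import subprocess

ROOT = "."  # folder to search; change this to your graphics directory

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        if name.lower().endswith(".svg"):
            svg_path = os.path.join(dirpath, name)
            pdf_path = os.path.splitext(svg_path)[0] + ".pdf"
            # Inkscape < 1.0 CLI; for Inkscape >= 1.0 use:
            # ["inkscape", svg_path, "--export-filename=" + pdf_path]
            subprocess.call(["inkscape", "--file=" + svg_path,
                             "--export-pdf=" + pdf_path])
            print("Converted", svg_path, "->", pdf_path)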

You can use the script freely and modify it to your needs. Make sure that Inkscape is installed on your machine; you can check this by issuing the following terminal command:

~$ which inkscape 
/usr/bin/inkscape # You will see this output if inkscape is present

If not, you can install it through the following terminal command:

~$ sudo apt-get install inkscape

Windows users have to add the path to Inkscape's executable to the %PATH% system variable manually.

]]>
MG 2014-08-28 00:00:00
<![CDATA[A collection of test functions for single objective optimization problems]]> a-collection-of-test-functions-for-single-objective-optimization-problems a-collection-of-test-functions-for-single-objective-optimization-problems A collection of 15 test functions taken from the mathematical literature on Global Optimization which can be used to validate the performance of optimization algorithms.

Ackley No.1 function

This is a multimodal minimization problem defined as follows: \begin{equation*} f(\textbf{x}) = -a e^{-b\sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2}} - e^{\frac{1}{D} \sum_{i=1}^{D} cos(2\pi c x_i)} + a + e \end{equation*} with $x_{i} \in [-35, 35]$ and recommended values $a = 20, b = 0.2, c = 1.0$.
The global optimum is $f(0, \dots, 0) = 0$.
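
As a sanity check, the function is straightforward to code up; the following is a small Python/NumPy sketch using the recommended parameter values above:

import numpy as np

def ackley(x, a=20.0, b=0.2, c=1.0):
    x = np.asarray(x, dtype=float)
    term1 = -a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
    term2 = -np.exp(np.mean(np.cos(2.0 * np.pi * c * x)))
    return term1 + term2 + a + np.e

print(ackley([0.0, 0.0]))   # -> 0.0 (up to rounding), the global optimum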

Beale function

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = (x_1 x_2 - x_1 + 1.5)^2 + (x_1 x_2^2 - x_1 + 2.25)^2 + (x_1 x_2^3 - x_1 + 2.625)^2 \end{equation*} with $x_1, x_2 \in [-5, 5]$ and global optimum $f(3.0, 0.5) = 0$.

Booth function

This is a unimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2 \end{equation*} with $x_1, x_2 \in [-10, 10]$ and global optimum $f(1.0, 3.0) = 0$.

Bukin No.6 function

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = 100 \sqrt{\left| x_2 - 0.01x_1^2 \right|} + 0.01 \left| x_1 + 10 \right| \end{equation*} with $x_1 \in [-15, -5]$ and $x_2 \in [-3, 3]$ and global optimum $f(-10.0, 1.0) = 0$.

Himmelblau function

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2 \end{equation*} with $x_1, x_2 \in [-5, 5]$ and global optimum $f(3.0, 2.0) = 0$ (one of four equivalent global minima).

Goldstein-Price function

This is a multimodal minimization problem defined as follows: \begin{equation*} \begin{aligned} f(x_1, x_2) = &\left[1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \\ &\times \left[30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right] \end{aligned} \end{equation*} with $x_1, x_2 \in [-2, 2]$ and global optimum $f(0.0, -1.0) = 3.0$.

Levi No.13 function

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = sin(3\pi x_1)^2 + (x_1 - 1)^2 (1 + sin(3 \pi x_2)^2) + (x_2 - 1)^2 (1 + sin(2 \pi x_2)^2) \end{equation*} with $x_1, x_2 \in [-10, 10]$ and global optimum $f(1.0, 1.0) = 0.0$.

Matyas function

This is a unimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2 \end{equation*} with $x_1, x_2 \in [-10, 10]$ and global optimum $f(0.0, 0.0) = 0.0$.

McCormick function

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5 x_1 + 2.5 x_2 + 1 \end{equation*} with $x_1 \in [-1.5, 4.0]$ and $x_2 \in [-3.0, 4.0]$ and global optimum $f(-0.54719, -1.54719) = -1.9133$.

Rastrigin function

This is a highly multimodal minimization problem defined as follows: \begin{equation*} f(\textbf{x}) = 10D + \sum_{i=1}^{D}\left[x_i^2 - 10 cos(2\pi x_i)\right] \end{equation*} with $x_i \in [-5.12, 5.12]$ and global optimum $f(0.0, \dots, 0.0) = 0$.

Rosenbrock function

This is a multimodal minimization problem defined as follows:

\begin{equation*} f(\textbf{x}) = \sum_{i=1}^{D-1}[100(x_i^2 - x_{i+1})^2 + (x_{i} - 1)^2] \end{equation*} with $x_i \in [-2, 2]$. The global optimum is $f(1.0, \dots, 1.0) = 0$.

Schaffer No.2 function

This is a multimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = 0.5 + \frac{sin^2 (x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2} \end{equation*} with $x_1, x_2 \in [-100, 100]$ and global optimum $f(0.0, 0.0) = 0.0$.

Schaffer No.4 function

This is a multimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = 0.5 + \frac{cos^2\big(sin(x_1^2 - x_2^2)\big) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2} \end{equation*} with $x_1, x_2 \in [-100, 100]$ and global optimum $f(0.0, 1.253115) = 0.292579$.

Styblinski-Tang function

This is a multimodal minimization problem defined as follows:

\begin{equation*} f(\textbf{x}) = \frac{1}{2}\sum_{i=1}^{D} (x_i^4 - 16x_i^2 + 5x_i) \end{equation*} with $x_i \in [-5, 5]$ and global optimum $f(-2.903534, -2.903534) = -78.332$ for $D = 2$.

Zettl function

This is a unimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = 0.25 x_1 + (x_1^2 - 2 x_1 + x_2^2)^2 \end{equation*} with $x_1, x_2 \in [-5, 10]$ and global optimum $f(-0.029896, 0.0) = -0.003791$.
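
As an example of how these functions can be used to validate an optimizer, the following sketch (assuming SciPy is installed) minimizes the Himmelblau function with SciPy's built-in differential evolution solver and checks that one of the known global minima is recovered; the seed is fixed only to make the run reproducible.

import numpy as np
from scipy.optimize import differential_evolution

def himmelblau(x):
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
result = differential_evolution(himmelblau, bounds, seed=1)
print(result.x, result.fun)   # one of the four global minima, f close to 0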

References

  1. Jamil M. and Yang X.S. (2013). A literature survey of benchmark functions for global optimization problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2), 150-194.
  2. Floudas C.A. and Pardalos P.M. (1987). A Collection of Test Problems for Constrained Global Optimization Algorithms, Springer.
  3. http://en.wikipedia.org/wiki/Test_functions_for_optimization.
]]>
MG 2015-10-30 00:00:00
<![CDATA[Some useful snippets in python language]]> some-useful-snippets-in-python-language some-useful-snippets-in-python-language Find package version

To find the version of a specific package, simply run the following snippet:

import pkg_resources
pkg_resources.get_distribution("package_name").version
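
For example, to check an installed package's version directly from the command line (using requests here purely as an illustration; substitute any package you have installed):

python -c "import pkg_resources; print(pkg_resources.get_distribution('requests').version)"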

If you want to share your Python script without also sharing the source, you can compile it to a .pyc file with the following command.

python -m compileall .

If the directory name (the current directory here, denoted with ".") is omitted, the module compiles everything found on sys.path.

Solving compilation errors when installing new python packages

We must admit that it's a bit clumsy to install additional Python packages on a Windows system. The preferred way to do this is with the pip install command. But sometimes, when trying to install a package either with pip install or by downloading the source package and running setup.py install, you may come across the following message:

error: Unable to find vcvarsall.bat

This means that the package requires compilation and Python searches for an installed Visual Studio compiler. To overcome this, you can point Python to an existing Visual Studio installation by setting the VS90COMNTOOLS environment variable before executing pip install (or running the corresponding setup.py), using one of the following terminal commands:

:: If you have Visual Studio 2010 installed
set VS90COMNTOOLS=%VS100COMNTOOLS%
:: If you have Visual Studio 2012 installed
set VS90COMNTOOLS=%VS110COMNTOOLS%
]]>
MG 2014-08-26 00:00:00