**Figure 1**: Differential evolution process.

After initialization, each member of the population undergoes mutation and a donor vector $\vec{v}_{i, G + 1}$ is generated as follows: \begin{equation} \vec{v}_{i, G + 1} = \vec{x}_{p, G} + F \cdot (\vec{x}_{q, G} - \vec{x}_{r, G}) \label{eq:mutation} \end{equation} $G$ is the current generation, $NP$ is the population size and $F$ is a real constant, called the **mutation factor**. The integers $p$, $q$ and $r$ are chosen randomly from the interval $[1, NP]$ so that the indices $i$, $p$, $q$ and $r$ are mutually distinct.

In the next step the crossover operator is applied by generating the trial vector $\vec{u}_{i, G + 1}$, whose $j$-th component is taken from either $\vec{v}_{i, G + 1}$ or $\vec{x}_{i, G}$ with probability $CR$: \begin{equation} \begin{aligned} u_{i(j), G + 1} &= \left\{ \begin{array}{ll} v_{i(j), G + 1} \mbox{, } & \text{if} \;\; r_{j} \leq CR \;\; \text{or} \;\; j = J_{r} \\ x_{i(j), G} \mbox{, } & \text{if} \;\; r_{j} > CR \;\; \text{and} \;\; j \ne J_{r} \end{array} \right. \\ i &= 1, 2, \dots, NP, \quad j = 1, 2, \dots, D \end{aligned} \label{eq:crossover} \end{equation} where $r_{j} \sim U[0, 1]$ and $J_{r}$ is a random integer from $\{1, 2, \dots, D\}$ which ensures that $\vec{u}_{i, G + 1} \neq \vec{x}_{i, G}$, i.e. that at least one component is inherited from the donor vector. $D$ is the problem dimension.

The last step of the generation procedure is the application of the selection operator, where the current vector $\vec{x}_{i, G}$ is compared to the trial vector $\vec{u}_{i, G + 1}$: \begin{equation} \begin{aligned} \vec{x}_{i, G + 1} &= \left\{ \begin{array}{ll} \vec{u}_{i, G + 1} \mbox{, } & \text{if} \;\; f(\vec{u}_{i, G + 1}) \leq f(\vec{x}_{i, G}) \\ \vec{x}_{i, G} \mbox{, } & \text{otherwise} \end{array} \right. \\ i &= 1, 2, \dots, NP \end{aligned} \label{eq:selection} \end{equation} where $f(\vec{x})$ is the objective function to be optimized; without loss of generality, the implementation described in Eq. (\ref{eq:selection}) corresponds to a minimization problem.
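The three operators above can be put together into a compact DE/rand/1/bin loop. The snippet below is a minimal illustrative sketch (the function name `differential_evolution` and the default parameter values are assumptions, not prescribed by the text), using the sphere function as a stand-in objective:

```python
import numpy as np

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimize f over the box `bounds` with the DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    D = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    # Initialization: NP random vectors inside the search box.
    x = lo + rng.random((NP, D)) * (hi - lo)
    fx = np.array([f(xi) for xi in x])
    for _ in range(generations):
        for i in range(NP):
            # Mutation: three mutually distinct members, all different from i.
            p, q, r = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
            v = x[p] + F * (x[q] - x[r])
            # Binomial crossover: J forces at least one donor component.
            J = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[J] = True
            u = np.clip(np.where(mask, v, x[i]), lo, hi)
            # Selection: greedy replacement.
            fu = f(u)
            if fu <= fx[i]:
                x[i], fx[i] = u, fu
    best = int(np.argmin(fx))
    return x[best], fx[best]

# Example: minimize the sphere function in 5 dimensions.
xbest, fbest = differential_evolution(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 5)
```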

- Storn, R. and Price, K. (1997). Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. *Journal of Global Optimization*, 11(4), 341-359.
- Price, K., Storn, R. M., and Lampinen, J. A. (2005). Differential Evolution: A Practical Approach to Global Optimization. Springer.
- Das, S. and Suganthan, P. (2011). Differential evolution: A survey of the state-of-the-art. *IEEE Transactions on Evolutionary Computation*, 15(1), 4-31.

Below is a short classification of these methods:

- Stationary methods
    - Gauss-Seidel
    - Jacobi
    - Successive Overrelaxation (SOR)
    - Symmetric Successive Overrelaxation (SSOR)
- Non-stationary methods
    - Conjugate Gradient (CG)
    - Preconditioned Conjugate Gradient (PCG)
    - Conjugate Gradient Squared (CGS)
    - BiConjugate Gradient (BiCG)
    - BiConjugate Gradient Stabilized (Bi-CGSTAB)
    - Chebyshev Iteration
    - Minimum Residual (MINRES) and Symmetric LQ (SYMMLQ)
    - Conjugate Gradient on the Normal Equations (CGNE and CGNR)
    - Generalized Minimal Residual (GMRES)
    - Quasi-Minimal Residual (QMR)

In this page the conjugate gradient (CG) method is introduced, as an effective method for symmetric positive definite matrices $\textbf{A}$. A real matrix $\textbf{A}\in \Re^{n \times n}$ is called positive definite if: \begin{equation} \textbf{x}^{T}\textbf{A}\textbf{x} > 0 \quad \forall \, \textbf{x} \in \Re^{n}, \quad \textbf{x} \neq \textbf{0} \label{eq:positive_definite} \end{equation} The method proceeds by generating vector sequences of iterates (i.e., successive approximations to the solution), residuals corresponding to the iterates, and search directions used in updating the iterates and residuals. Although the length of these sequences can become large, only a small number of vectors needs to be kept in memory. In every iteration of the method, two inner products are performed in order to compute update scalars that are defined to make the sequences satisfy certain orthogonality conditions. On a symmetric positive definite linear system these conditions imply that the distance to the true solution is minimized in some norm.
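As a side note (not part of the original page), the definition can be checked numerically: a Cholesky factorization exists exactly when a symmetric matrix is positive definite, so `numpy.linalg.cholesky` can serve as a quick test:

```python
import numpy as np

def is_positive_definite(A):
    """True iff the symmetric matrix A is (numerically) positive definite."""
    try:
        # Cholesky succeeds exactly for symmetric positive definite matrices.
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[4.0, 1.0], [1.0, 3.0]])))  # the example system matrix
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # indefinite: eigenvalues 3 and -1
```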

A common implementation of the CG algorithm is given in the following Python snippet.
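The sketch below is a plain (unpreconditioned) CG loop; the function name and the printed output format are illustrative choices, but starting from the initial guess $(2, 1)$ it reproduces the iterates tabulated further down. Note the two inner products per iteration mentioned above.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-12, max_iter=100):
    """Solve A x = b for symmetric positive definite A with plain CG."""
    x = np.array(x0, dtype=float)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length (first inner product)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r                 # second inner product
        print(k, x, np.sqrt(rs_new))
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate search direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, x0=[2.0, 1.0])
```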

For the linear solution of the system: \begin{equation} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \end{equation}

the Python snippet gives:

| Iteration | $x_{1}$ | $x_{2}$ | norm |
|-----------|------------|------------|------------------|
| 1 | 0.23564955 | 0.33836858 | 8.0019370424E-01 |
| 2 | 0.09090909 | 0.63636364 | 5.5511151231E-17 |

It is worth noting that the exact solution of the system has been reached after 2 iterations!

- Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C. and van der Vorst, H. (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition, SIAM, Philadelphia, PA.
- Saad, Y. (2003). Iterative Methods for Sparse Linear Systems, 2nd Edition, SIAM.

`var citeKeyFormat = "%a_%t_%y";`

To this:

`var citeKeyFormat = "%a%y";`

**Export special characters correctly:**

Because the LaTeX compiler cannot recognize special characters such as ñ, even though Zotero outputs them into the BibTeX file, you should use a suitable character encoding during the export process so that such characters are written as "\~{n}". To this end, Western character encoding is the most appropriate. Go to Export Options and check the box *Display character encoding option on export*. Then, from the dropdown menu that shows up in the export dialog, pick *Western character encoding*. See the image (shown for Windows; the same applies on Linux systems).

You can download the BibTeX.js translator with some other minor modifications, including the aforementioned changes.

This tweak is noticeable on computers with relatively low RAM (1 GB or less), when the
system accesses the hard disk too much because of the virtual memory, called *swap*.
Ubuntu's inclination to use the swap is determined by the swappiness value. The lower
the value, the longer it takes before Ubuntu starts using the swap. On a scale of 0-100
the default value is 60, which is much too high for normal desktop use (it is only fit
for servers). You can check the current swappiness value by typing in the terminal:

`cat /proc/sys/vm/swappiness`

To change the swappiness value and improve the cache management, type in the terminal:

`sudo pico /etc/sysctl.conf`

and scroll to the bottom of the file to add the following lines (they take effect after a reboot, or immediately after running `sudo sysctl -p`):

```
# Decrease swap usage to a workable level
vm.swappiness = 10

# Improve cache management
vm.vfs_cache_pressure = 50
```

Preload is an *adaptive readahead daemon*. It monitors applications that users run,
and by analyzing this data, predicts what applications users might run, and fetches those
binaries and their dependencies into memory for faster startup times. In order to install
preload simply run the following command in terminal:

`sudo apt-get install preload`

The configuration file for Preload is kept in /etc/preload.conf and the default values should be fine for most people. If you want to tweak the operation of Preload, you can visit the official site and look up the options of the configuration file.

You can also download a bash script collecting all the above tweaks, and run it after making it executable with the following command:

`sudo chmod +x ubuntu_perfomance.sh`

`# apt-get install aria2`

Aria2 can be combined with WebUI-Aria2, a web interface to manage your aria2 instance. WebUI-Aria2 is very simple to install, and you will be able to add files, magnet links or torrents to download right from your web browser. Assuming that you are still root and the Apache2 server is running, execute the following:

```
# cd /var/www
# git clone https://github.com/ziahamza/webui-aria2
```

If you navigate with your browser to the corresponding url of webui-aria2 folder on your server, you will see the following info message:

> Oh Snap! Could not connect to the aria2 RPC server. Will retry in 10 secs. You might want to check the connection settings by going to Settings > Connection Settings

This is because you will need to launch aria2 as a background service listening for incoming connections. In order to prevent anyone else from using your aria2 instance, you can generate a random token as follows:

`openssl rand -base64 32`

After that you need to run aria2 as a daemon, passing the generated token through the `--rpc-secret` option:

`# aria2c --enable-rpc --rpc-listen-all --rpc-secret=<token> --daemon`

For further configuration and parameterization check also the Aria2 Documentation.

The script finds all svg files in a root folder and below it, and converts them to corresponding pdf files without using Inkscape's GUI.

You can use the script freely, and modify it to your needs. Be sure that Inkscape has been installed to your machine. You can check this, by giving the following terminal command:

```
~$ which inkscape
/usr/bin/inkscape    # you will see this output if Inkscape is present
```

If not, you can install it through the following terminal command:

`~$ sudo apt-get install inkscape`

Windows users have to add Inkscape's executable path to the %PATH% system variable manually.
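The script itself can be sketched in Python as follows. This is a sketch, not the original: the function names are illustrative, and the `--export-pdf` flag assumes the classic Inkscape 0.9x command line (Inkscape 1.0+ replaced it with `--export-filename`):

```python
import os
import subprocess

def pdf_path_for(svg_path):
    """Map e.g. figures/plot.svg -> figures/plot.pdf."""
    root, _ext = os.path.splitext(svg_path)
    return root + ".pdf"

def convert_all(top):
    """Find every .svg under `top` and convert it to a sibling .pdf via Inkscape."""
    for dirpath, _dirnames, filenames in os.walk(top):
        for name in filenames:
            if name.lower().endswith(".svg"):
                svg = os.path.join(dirpath, name)
                # Inkscape 0.9x CLI; for 1.0+ use:
                #   ["inkscape", svg, "--export-filename", pdf_path_for(svg)]
                subprocess.run(["inkscape", svg, "--export-pdf", pdf_path_for(svg)],
                               check=True)
```

Running `convert_all(".")` from the root folder converts everything below it.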

This is a multimodal minimization problem defined as follows: \begin{equation*} f(\textbf{x}) = -a e^{-b\sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2}} - e^{\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi c x_i)} + a + e \end{equation*} with $x_{i} \in [-35, 35]$ and recommended values $a = 20, b = 0.2, c = 1.0$.

The global optimum is $f(0, \dots, 0) = 0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = (x_1 x_2 - x_1 + 1.5)^2 + (x_1 x_2^2 - x_1 + 2.25)^2 + (x_1 x_2^3 - x_1 + 2.625)^2 \end{equation*} with $x_1, x_2 \in [-5, 5]$ and global optimum $f(3.0, 0.5) = 0$.

This is a unimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2 \end{equation*} with $x_1, x_2 \in [-10, 10]$ and global optimum $f(1.0, 3.0) = 0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = 100 \sqrt{|x_2 - 0.01x_1^2|} + 0.01 |x_1 + 10| \end{equation*} with $x_1 \in [-15, -5]$ and $x_2 \in [-3, 3]$ and global optimum $f(-10.0, 1.0) = 0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2 \end{equation*} with $x_1, x_2 \in [-5, 5]$ and global optimum $f(3.0, 2.0) = 0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} \begin{aligned} f(x_1, x_2) = {} & [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)] \\ \times \, & [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)] \end{aligned} \end{equation*} with $x_1, x_2 \in [-2, 2]$ and global optimum $f(0.0, -1.0) = 3.0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = sin(3\pi x_1)^2 + (x_1 - 1)^2 (1 + sin(3 \pi x_2)^2) + (x_2 - 1)^2 (1 + sin(2 \pi x_2)^2) \end{equation*} with $x_1, x_2 \in [-10, 10]$ and global optimum $f(1.0, 1.0) = 0.0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2 \end{equation*} with $x_1, x_2 \in [-10, 10]$ and global optimum $f(0.0, 0.0) = 0.0$.

This is a multimodal minimization problem defined as follows: \begin{equation*} f(x_1, x_2) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5 x_1 + 2.5 x_2 + 1 \end{equation*} with $x_1 \in [-1.5, 4.0]$ and $x_2 \in [-3.0, 4.0]$ and global optimum $f(-0.547198, -1.547198) = -1.913223$.

This is a highly multimodal minimization problem defined as follows: \begin{equation*} f(\textbf{x}) = 10D + \sum_{i=1}^{D}[x_i^2 - 10\cos(2\pi x_i)] \end{equation*} with $x_i \in [-5.12, 5.12]$ and global optimum $f(0, \dots, 0) = 0$.

This is a multimodal minimization problem defined as follows:

\begin{equation*} f(\textbf{x}) = \sum_{i=1}^{D-1}[100(x_i^2 - x_{i+1})^2 + (x_{i} - 1)^2] \end{equation*} with $x_i \in [-2, 2]$. The global optimum is $f(1, \dots, 1) = 0$.

This is a unimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = 0.5 + \frac{sin^2 (x_1^2 + x_2^2) - 0.5}{1 + 0.001(x_1^2 + x_2^2)^2} \end{equation*} with $x_1, x_2 \in [-100, 100]$ and global optimum $f(0.0, 0.0) = 0.0$.

This is a unimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = 0.5 + \frac{cos^2\big(sin(x_1^2 - x_2^2)\big) - 0.5}{ 1 + 0.001(x_1^2 + x_2^2)^2} \end{equation*} with $x_1, x_2 \in [-100, 100]$ and global optimum $f(0.0, 1.253115) = 0.292579$.

This is a multimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = \frac{1}{2}\sum_{i=1}^{D} (x_i^4 - 16x_i^2 + 5x_i) \end{equation*} with $x_1, x_2 \in [-5, 5]$ and global optimum $f(-2.903534, -2.903534) = -78.332$.

This is a unimodal minimization problem defined as follows:

\begin{equation*} f(x_1, x_2) = 0.25 x_1 + (x_1^2 - 2 x_1 + x_2^2)^2 \end{equation*} with $x_1, x_2 \in [-5, 10]$ and global optimum $f(-0.029896, 0.0) = -0.003791$.
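As a quick sanity check (an aside, not part of the original collection), a few of the formulas above can be coded directly and evaluated at their reported optima:

```python
import numpy as np

def booth(x1, x2):
    return (x1 + 2*x2 - 7)**2 + (2*x1 + x2 - 5)**2

def himmelblau(x1, x2):
    return (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2

def matyas(x1, x2):
    return 0.26*(x1**2 + x2**2) - 0.48*x1*x2

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return 10*x.size + np.sum(x**2 - 10*np.cos(2*np.pi*x))

# All four evaluate to (numerically) zero at their global optima.
values = [booth(1.0, 3.0), himmelblau(3.0, 2.0), matyas(0.0, 0.0), rastrigin([0.0]*5)]
```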

- Jamil M. and Yang X.S. (2013). A literature survey of benchmark functions for global optimization problems, International Journal of Mathematical Modelling and Numerical Optimisation, 4(2), 150-194.
- Floudas C.A. and Pardalos P.M. (1987). A Collection of Test Problems for Constrained Global Optimization Algorithms, Springer.
- http://en.wikipedia.org/wiki/Test_functions_for_optimization.

To find the version of a specific package, simply run the following snippet:

```
import pkg_resources
pkg_resources.get_distribution("package_name").version
```
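On Python 3.8 and later, the standard library's `importlib.metadata` offers the same lookup without the external `pkg_resources` dependency:

```python
from importlib.metadata import version, PackageNotFoundError

try:
    print(version("pip"))   # version string of any installed distribution
except PackageNotFoundError:
    print("package is not installed")
```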

If you want to share your Python script without also sharing the source, you can compile it to its .pyc version with the following command:

`python -m compileall .`

If the directory name (the current directory here, denoted with ".") is omitted, the module compiles everything found on sys.path.

We must admit that installing additional Python packages on a Windows system is a bit clumsy. The most convenient way to do it is with the *pip install* command. But sometimes, if you try to install a package with either *pip install* or by downloading the source package and running *setup.py install*, you may receive the following message:

`error: Unable to find vcvarsall.bat`

This means that these packages require compilation, and Python searches for an installed Visual Studio compiler. To overcome this you can force Python to use an installed Visual Studio version by setting the correct path in the VS90COMNTOOLS environment variable before executing the *pip install* command or calling the corresponding *setup.py*:

```
:: If you have Visual Studio 2010 installed
set VS90COMNTOOLS=%VS100COMNTOOLS%
:: If you have Visual Studio 2012 installed
set VS90COMNTOOLS=%VS110COMNTOOLS%
```
