Embedding Monotonicity and Concavity in the Training of Neural Networks by Means of Genetic Algorithms: Application to Multiphase Flow
Computers & Chemical Engineering, accepted (2004)
Laurentiu A. Tarca, Bernard P.A. Grandjean and Faïçal Larachi
Department of Chemical Engineering & CERPIC
Laval University - Québec, Canada G1K 7P4
Abstract
It is well established that including monotonicity constraints in neural network (NN) development, when such a priori information is available, increases confidence in the predictions and prevents overfitting. An elegant procedure to guarantee mathematically monotonic NNs with respect to some of their inputs is to constrain the signs of some of the network's weights so that the network remains monotonic over the entire input domain. In this work, besides monotonicity, second-order information, namely concavity, is used to guide a genetic algorithm combined with a hill-climbing optimizer in identifying the weights of the neural network. Monotonicity and concavity are key conditions in establishing phenomenological correlations when the available training data are insufficient to cover the n-dimensional input space in depth. In such instances, classical training procedures fail to unveil the main tendencies in the data and may suffer from local overfitting. In this work, the proof of concept of embedding monotonicity and concavity information in the training of NNs by means of genetic algorithms is illustrated by correlating the total liquid holdup in counter-current gas-liquid towers containing randomly packed beds.
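The sign-constraint idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' code: for a one-hidden-layer network with a sigmoid activation, the output derivative with respect to input i is a sum of terms v_j * s'(z_j) * w_ji, and since s'(z) > 0 everywhere, forcing v_j * w_ji >= 0 for every hidden unit j makes the output non-decreasing in that input over the whole domain. The function and variable names below are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def monotone_net(x, W, b, v, c, increasing_inputs):
    """Evaluate a one-hidden-layer network after projecting its hidden
    weights onto the sign constraints that guarantee the output is
    non-decreasing in the inputs listed in `increasing_inputs`."""
    W = W.copy()
    # Force v_j * w_ji >= 0 for each constrained input i by giving
    # w_ji the same sign as the corresponding output weight v_j.
    for i in increasing_inputs:
        W[:, i] = np.sign(v) * np.abs(W[:, i])
    return v @ sigmoid(W @ x + b) + c

# Random weights: 5 hidden units, 2 inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))
b = rng.normal(size=5)
v = rng.normal(size=5)
c = 0.0

# Numerical check: the output is non-decreasing along input 0.
xs = np.linspace(-3.0, 3.0, 50)
ys = [monotone_net(np.array([x, 0.5]), W, b, v, c, [0]) for x in xs]
assert all(y2 >= y1 - 1e-12 for y1, y2 in zip(ys, ys[1:]))
```

A genetic algorithm, as used in the paper, can search directly in this sign-constrained weight space, so monotonicity holds for every candidate it evaluates; the concavity condition is a further restriction on the curvature and is not shown here.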
Keywords
Neural networks; monotonicity; concavity;
genetic algorithm; liquid holdup; multiphase flow reactor
You can download the LH_CCPBed.zip file, which contains an Excel worksheet simulator to compute the liquid holdup in randomly packed beds with counter-current flow. You may also download our Excel worksheet simulators for Trickle-bed or Flooded-bed reactors.