Scientific Paper / Artículo Científico


pISSN: 1390-650X / eISSN: 1390-860X









Francisco Javier Sanchez-Ruiz1,*, Elizabeth Argüelles Hernández1,

José Terrones-Salgado1, Luz Judith Fernández Quíroz1


Received: 06-03-2023, Received after review: 08-05-2023, Accepted: 29-05-2023, Published: 01-07-2023




The integration of artificial intelligence techniques introduces fresh perspectives for the implementation of these methods. This paper presents the combination of neural networks and evolutionary strategies to create what are known as evolutionary artificial neural networks (EANNs). In the process, the excitation function of the neurons was modified to allow asexual reproduction, so that the neurons evolved and developed significantly. The resulting temperature controller for a batch polymerization reactor producing polymethylmethacrylate (PMMA) by free radicals was compared with two conventional controllers, PID and GMC, demonstrating that artificial intelligence-based controllers can be applied. These controllers provide better results than conventional controllers without requiring transfer functions for the controlled process.


Keywords: ANNs, Evolved Neural Networks, Batch Reactor, Excitation Function, PMMA














1,* Laboratorio de Investigación Biotecnoambiental, Grupo de Investigación de Bioenergía e Inteligencia Artificial, Universidad Popular Autónoma del Estado de Puebla, México.

Corresponding author:


Suggested citation: Sanchez-Ruiz, F.; Argüelles Hernández, E.; Terrones-Salgado, J. and Fernández Quíroz, L. “Red neuronal artificial evolutiva para el control de temperatura en un reactor batch de polimerización,” Ingenius, Revista de Ciencia y Tecnología, N.° 30, pp. 9-18, 2023. doi:



1. Introduction


Artificial neural networks are systems inspired by the cognitive and problem-solving capacity of the human brain, with the difference that an artificial neural network is more robust than the human brain. Systems based on artificial intelligence and artificial neural networks (ANNs) can exhibit overlearning of the process dynamics, an adjustment feature similar to that of the human brain [1–4].

Neural networks can be of different types, and their selection depends on the characteristics required of the network and of the process; the more robust network tends to be the most appropriate choice. However, as happens in the human brain [5–9], artificial neural networks can also present non-adjustable learning, called overlearning, which means that the neural network incorporates into its adjustment calculated values that are not representative of the system. Avoiding overlearning is an important part of research on artificial neural networks, and this search has given rise to new types of artificial neural networks that combine other artificial intelligence techniques, such as fuzzy logic, evolutionary algorithms, and optimization techniques (stochastic methods), to improve the response of the neural network [10–14].

A neural network can be used for pattern and image recognition or as a controller. In this work, an artificial neural network is combined with an evolution technique, resulting in an evolutionary artificial neural network, an approach known as neuroevolutionary control [15].

This study compares the excitation functions applied to neuroevolutionary controllers against conventional controllers such as PID and GMC, and shows that systems based on artificial intelligence and evolutionary neural networks provide a better fit than the traditional PID control and the non-conventional GMC control. The study thus raises a new way of applying intelligent systems to the control of processes with unpredictable dynamics [16–18].


2. Materials and Methods


An evolutionary neural network controller is based on an artificial neural network, which consists of a neuronal representation function built on the input weights (values xi) and an excitation function σ; the latter has the feature of propagating the weights toward their maxima and minima [19–22].

where w0 represents the value of the initial weight (bias). The excitation function σ can be replaced to minimize the error, thereby avoiding overlearning. Figure 1 schematically shows the main structure of an artificial neural network; the neuron output is given by equation (1), which in standard form reads y = σ(Σi wi xi + w0).




Figure 1. Basic structure of Artificial Neural Networks (ANNs)
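The neuron just described can be sketched in a few lines; this is a minimal illustration of the structure in Figure 1, assuming the standard weighted-sum form of equation (1), not the authors' own code:

```python
import math

def neuron_output(x, w, w0, sigma=math.tanh):
    """Single artificial neuron: weighted sum of the inputs x with
    synaptic weights w plus the initial weight w0, passed through the
    excitation function sigma."""
    return sigma(sum(wi * xi for wi, xi in zip(w, x)) + w0)

# Example: two inputs with a tangential (tanh) excitation function
y = neuron_output(x=[0.5, -0.2], w=[0.8, 0.3], w0=0.1)
```

Swapping `sigma` for another function is what the text means by replacing the excitation function to minimize the error.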


The neural network controller does not require linearizing the model according to a given mathematical control law; this is mainly because the evolutionary artificial neural network first linearizes the model in such a way that it adequately represents the dynamics of the process. Figure 2 shows the use of a neural network as a controller [23].


Figure 2. Neural network controller scheme


The response of the control based on evolutionary neural networks makes use of evolutionary algorithms, which establish the foundation of the evolution of the neural network and modify the basic network.

When using the three methods of evolution in a neural network, it must be clear that the neural network works dynamically; in this case only the first two evolution techniques are used, mainly because a static neural network is employed. The static neural network does not modify its neural weights and interconnections, but it does evolve, generating new information-storage neurons [24].



The evolutionary techniques are implemented in the neural network through an evolution algorithm with a series of established steps. Figure 3 shows the sequential stages for obtaining the structure of an evolutionary artificial neural network. The first stage of the structuring depends on obtaining the synaptic weights, which can be found through a supervised or an unsupervised process. For non-linear models such as fermentation reactors, the values are obtained through an unsupervised algorithm based on the variables of the mathematical model (temperature, pressure, yeast growth, concentration, etc.), that is, those obtained from its solution. The weights from the mathematical model are then used in the neural network training process, which is combined with the evolution algorithm, and the cycle is repeated until the most suitable structure is found (the phenotype of the evolved neural network). This implies that the system can still evolve: if the control variables present disturbances, the structure of the evolutionary artificial neural network adjusts according to its own evolutionary process. This is done by the evolution method, a thread in which the values of the trained weights are analyzed to generate a new generation of evolved neurons that adjust to the dynamics of the system [25].
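The structuring cycle of Figure 3 can be sketched as follows; all function names here are illustrative placeholders under the stated assumptions, not the authors' implementation:

```python
def evolve_structure(train, evaluate, evolve, init_weights,
                     tol=1e-3, max_states=50):
    """Sketch of the EANN structuring cycle: train the synaptic
    weights, evaluate the fit against the process model, and let the
    structure evolve until a suitable phenotype is found."""
    weights, states = init_weights(), 0
    while states < max_states:
        weights = train(weights)      # e.g., Levenberg-Marquardt training
        if evaluate(weights) < tol:   # suitable phenotype found
            break
        weights = evolve(weights)     # generate a new state of evolution
        states += 1
    return weights, states

# Toy run: each state of evolution halves the (scalar) "error"
w, s = evolve_structure(train=lambda w: w,
                        evaluate=lambda w: abs(w),
                        evolve=lambda w: w / 2,
                        init_weights=lambda: 1.0)
```

The returned `states` counter corresponds to the "states of evolution" reported later in the Results section.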


To set up a control system based on evolutionary neural networks, the architecture of a primary neural network is first established, consisting of one input neuron, one neuron in the hidden layer, and one neuron in the output layer. This architecture is chosen because, from prior knowledge, the only variable to control is the temperature of the polymerization reactor [26].

Once established, the basic architecture is set as a progenitor for each neuron, equation (2).




where σi is the excitation function, wi are the synaptic (input) weights, yt is the time series of propagated synaptic weights, and ei is the training error.

The expansions of the weights and of the excitation function are shown in equations (3) and (4), which show how the weights can be expanded in three dimensions; that is, the neural network can propagate information in three dimensions, very similar to what happens in the human brain [14].


The progenitor is as follows: Equation (5).




Figure 3. Flow diagram of the main structure of the EANN






Three different excitation functions were studied: the tangential function, equation (6); the logarithmic function, equation (7); and the radial basis function, equation (8).








where wi represents the slope of the excitation function. In general, the input weights of the artificial neural network can be obtained from a reference model or from the experimental data of the system to be controlled.
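Since the exact forms of equations (6)–(8) are not reproduced above, the following sketch assumes the standard textbook forms of the three excitation functions, with `w` acting as the slope parameter:

```python
import math

def tangential(x, w=1.0):
    """Tangential (hyperbolic tangent) excitation function."""
    return math.tanh(w * x)

def logarithmic(x, w=1.0):
    """Logarithmic-sigmoid excitation function."""
    return 1.0 / (1.0 + math.exp(-w * x))

def radial_basis(x, w=0.5):
    """Gaussian radial basis excitation; a slope of 0.5 is the value
    reported for the best controller in the Results section."""
    return math.exp(-((w * x) ** 2))
```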

The evolution of the neural network is performed using the progenitor reproduction equation, which evolves through the states and the evolution technique, equation (9).




where S and T represent the selected progenitors and ηi is a uniform random number between zero and one [15].
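With progenitors S and T and a uniform ηi, equation (9) corresponds to an intermediate-recombination step; a minimal sketch, assuming the offspring weight takes the form S + ηi(T − S):

```python
import random

def reproduce(S, T, rng=random.random):
    """Progenitor reproduction sketch for equation (9): each offspring
    weight lies between the two selected progenitors S and T, mixed by
    a uniform random eta_i in [0, 1]."""
    return [s + rng() * (t - s) for s, t in zip(S, T)]

child = reproduce([0.2, 0.8], [0.6, 0.4])
```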

The training of the neural network is performed with the Levenberg-Marquardt (LM) algorithm, an accelerated type of training that can function simultaneously as gradient-descent training and as a Quasi-Newton (BFGS) method. LM training implements Newton's method based on a second-order Taylor series expansion, thus obtaining a Hessian matrix of the network weights; the Hessian is then approximated through a Jacobian matrix that involves the different training steps [26]. The LM method can be expressed with the scaling factor μk as in equation (10): w_{k+1} = w_k − [Jᵀ(w_k) J(w_k) + μ_k I]⁻¹ Jᵀ(w_k) e(w_k).
where J(wk) is the Jacobian matrix, I is the identity matrix, and e(wk) is the synaptic weight error.
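Equation (10) can be sketched numerically as a single damped Gauss-Newton update; this is a generic illustration using the symbols defined above, not the authors' Matlab code:

```python
import numpy as np

def lm_step(w, jacobian, error, mu):
    """One Levenberg-Marquardt update, equation (10):
    w_{k+1} = w_k - [J^T J + mu*I]^(-1) J^T e(w_k)."""
    J = jacobian(w)                      # Jacobian of the errors
    e = error(w)                         # synaptic weight error vector
    H = J.T @ J + mu * np.eye(len(w))    # damped Hessian approximation
    return w - np.linalg.solve(H, J.T @ e)

# Toy linear fit: errors e(w) = X w - y, so J(w) = X
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([1.0, 2.0])
w = np.zeros(2)
for _ in range(50):
    w = lm_step(w, jacobian=lambda w: X, error=lambda w: X @ w - y, mu=0.1)
```

A large μk makes the step behave like gradient descent; a small μk recovers the (Quasi-)Newton behaviour mentioned in the text.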

The evolutionary neural network model is applied to a free-radical polymerization reactor of methyl methacrylate (MMA) for obtaining poly(methyl methacrylate) (PMMA). The mathematical model consists of mass balances, energy balances, and kinetic equations; this model is a direct way to implement the control with neural networks, and it is also used to obtain the control laws for the GMC- and PID-type controllers [27].


Mass Balance: equations (11) and (12).






Energy Balance: equations (13) and (14).






Kinetic Equations: equations (15) to (25).


























Unlike other conventional control systems, the implementation of a control system based on artificial intelligence (AI), such as evolutionary artificial neural networks, only requires training and knowledge of the behaviour of the process variables to be controlled. In the case of a fermentation reactor, the temperature, the growth kinetics of the microorganisms, and the concentration of the final fermentation product must be considered. For this reason, the approach breaks with the conventional tuning paradigms of a traditional control system, although it must be ensured that overlearning does not occur recurrently; this can be minimized by the training algorithm, which self-adjusts to the dynamics of the fermentative process itself.

The control laws for the two types of controllers, the conventional PID and the not-so-conventional GMC [18], are given by the following equations. The classic tuning methods, or heuristics, for determining the PID control parameters were presented by Ziegler and Nichols (1942); they have significantly impacted practice and are still used today, not so much for direct application as for a base and reference for comparison, since they do not give a good tuning. These methods characterize the process dynamics using two parameters, from which the parameters of the PID controller are determined.

The frequency response method was implemented to characterize the system dynamics in a closed loop using only the proportional control action. Kp is increased, starting from zero, until the system presents sustained oscillations at the output, which takes the loop to the limit of its stability. The parameters that characterize the system dynamics are Ku and Tu, where Ku is the proportional gain that makes the system oscillate in a sustained manner, with oscillation period Tu. Once these parameters are determined, the PID parameters are obtained with Table 1 [19].
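Since the tuning-rule table itself is not reproduced here, the following sketch uses the classic Ziegler-Nichols closed-loop rules (Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8) as an assumption:

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols closed-loop PID tuning from the
    ultimate gain Ku and the oscillation period Tu."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0                 # integral time
    Td = Tu / 8.0                 # derivative time
    return Kp, Kp / Ti, Kp * Td   # Kp, Ki, Kd

Kp, Ki, Kd = ziegler_nichols_pid(Ku=2.0, Tu=10.0)
```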


Table 1. ISE and States of Evolution


PID control: equation (26).




GMC control: equation (27).
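The two control laws can be sketched in discrete form as follows; the PID form is the textbook one, and the GMC reference-trajectory form is the standard one from the literature, assumed here for equation (27):

```python
def pid_control(e, e_prev, integral_e, dt, Kp, Ki, Kd):
    """Textbook PID law, equation (26): proportional, integral, and
    derivative action on the error e = T_sp - T."""
    return Kp * e + Ki * integral_e + Kd * (e - e_prev) / dt

def gmc_target(e, integral_e, K1, K2):
    """Standard GMC reference trajectory: the control input is
    back-calculated from the process model so that dT/dt tracks
    K1*e + K2*integral(e)."""
    return K1 * e + K2 * integral_e
```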




3. Results and Discussion


The reason for training a neural network is to establish the parameters of linearization of the mathematical model of the polymerization reactor. Figure 4 shows the linearization of the mathematical model through the training of the evolutionary neural network; the first training generates the parameters sequentially to establish the evolution of the network itself.




Figure 4. Responses of Plant and EANN (a) Logarithmic function, (b) Gaussian Radial Basis, (c) Tangential function.


The training, validation, and testing plots provide the main trends of the evolutionary neural network (Figure 5). They show that training with the tangential-type function (Figure 5a) has an adjustment similar to the 0.79274 of the logarithmic-type function (Figure 5d). The training adjustment of the Gaussian radial basis function is at a value of 0.99957, indicating that the radial basis function has less tendency to overlearning (Figure 5g). The validation and testing plots show similar behaviour for the three functions.



Figure 5. Training, Validation, and Testing of EANN


The results obtained by the evolutionary neural network are shown in Table 1, which presents the architecture of the artificial neural network and the states of evolution, the same for the different excitation functions. It should be mentioned that the simulations were conducted in Matlab, changing the number of outputs of the neural network in a range from 2 to 4 outputs per neuron, which are obtained from the evolutionary states. The states of evolution provide the total number of neurons evolved for the control system.

One of the advantages of implementing a control system with evolutionary artificial neural networks lies mainly in obtaining simple neural structures that evolve according to the dynamics of the process itself. That is, with a basic design (neuronal phenotype) it is possible to adapt to dynamics and non-linearities without requiring constant training, yielding a closed- or open-loop self-learning system, as required by the process to be controlled. This is not possible for a conventional control system such as the PID, which needs to be tuned in a closed loop; this can take time and still not adjust to the dynamics of the process to be controlled.

The three excitation functions were analyzed with different slopes, together with their influence on the states of evolution; similarly, the Integral Square Error (ISE) was calculated to determine which neural network best adjusts to the dynamics of the process.

Similarly, the integral square error (ISE) analysis was performed for the two controllers used in the comparison (Table 2), the PID and the GMC; this error provides the information necessary to establish how well each adjusts to the dynamics of the process.
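The ISE used in Tables 1 and 2 can be approximated from a sampled error trajectory; a minimal sketch:

```python
def integral_square_error(errors, dt):
    """Integral Square Error: discrete approximation of the integral
    of e(t)^2 over the simulation horizon, with sampling step dt."""
    return sum(e * e for e in errors) * dt
```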


Table 2. ISE PID control and GMC


The selected structure of the neural network presents the smallest errors and, consequently, the smallest number of states of evolution: the evolutionary neural network with a Gaussian radial basis function with a value of α = 0.5, resulting in a total of 27 states of evolution. That these states of evolution are at a minimum indicates that the added neurons do not present overlearning, since part of the error propagates toward the forward weights and is taken as part of the training itself. The number of outputs of the selected neural network is 3; this number of outputs is also a good selection because it mainly prevents new neurons from being formed that may cause overlearning.

Figure 6 shows the response of the neuroevolutionary control with the Gaussian radial basis function and three neural network outputs; it can be noted that it adequately reaches the set-point in a short period of time without presenting overshoot in the controller response.

Figures 7 and 8 show the response of the controller for the other two excitation functions with different slopes and numbers of outputs (see Table 1). The response of the control with the tangential function (Figure 7) and four outputs presents an overshoot in the jacket temperature, although the control does settle at the set-point values. In contrast, the response of the controller with the logarithmic-type function, with a slope of 1.5 and the same number of outputs, shows that the control response adjusts to the set-point without presenting overshoot; it is very similar to the response given by the radial basis function with a slope of 0.5. The limitation of the control with the logarithmic-type function is that it presents too many states of evolution, which causes the evolutionary neural network to tend toward overlearning.


Figure 6. Temperature profile of the EANN control using a radial basis function.


Figure 7. Temperature profile of the EANN control using a tangential function.




Figure 8. Temperature profile of the EANN control using a logarithmic function.


The response of the PID control is shown in Figure 9; it is worth mentioning that the tuning of the PID control was performed using the Ziegler-Nichols technique [19] for each of the controller constants. In this response it is observed that the overshoot presented by this controller is large compared with the neural network control, which is mainly attributable to the tuning technique, which causes the overshoot to tend to increase. The GMC also requires tuning because this control depends on a controller constant, for which the Lee-Sullivan technique [18] was used. The response of the GMC control is shown in Figure 10; in comparison with the PID, the GMC does not present overshoot in its control response, which reaffirms that the response of a controller is sensitive to its tuning technique.

Figure 9. Temperature profile using a PID controller


Figure 10. Temperature profile using a GMC controller


In addition to the minimization of the ISE, the disturbances that can occur in the system are essential to measure the response capacity of the control system. For the process studied in this work, the three control systems (PID, GMC, and EANN) were subjected to disturbances and their adjustment capacity was measured, with the lowest dead time (Td) and a minimum frequency; the disturbance applied was a negative step change. The PID controller fails to return to the set-point (Figure 11), causing control instability; the GMC follows the trajectory of the change in the set-point with variations in its sequence (Figure 12); and the evolutionary neural network system follows the set-point change sequentially without presenting any variation (Figure 13). The introduction of perturbations shows the robustness of the control system, indicating that evolutionary neural networks with a Gaussian radial basis function are an option that offers better results compared with the PID and GMC controls.


Figure 11. PID controller with disturbance.



Figure 12. GMC controller with disturbance.


Figure 13. EANN controller with disturbance.


4. Conclusion


This work presented a new control alternative based on evolutionary artificial neural networks through the use of different excitation functions and the modification of the number of outputs of the neural network; using these two variables, the trend in the evolution of the neural network was observed. In the case study, the polymerization reactor for obtaining poly(methyl methacrylate) (PMMA) is a system with barely predictable dynamics. Conventional control systems such as the PID, and others not so traditional such as the GMC, can produce good results but are limited by the tuning of the controller. This was not the case for the control based on evolutionary neural networks, which properly adjusts to the very dynamics of the process without the need for constant tuning; in addition, control based on evolutionary neural networks also decreases the overshoot with respect to conventional-type controls such as the PID and GMC.

Concerning the different types of excitation functions, it is noted that the more complex function, the radial basis type, generates a better response from the controller based on evolutionary neural networks. Compared with the control using the logarithmic-type function, the difference lies in the minimization of the states of evolution: for the radial basis function, the number of outputs is smaller and the states of evolution are at a minimum, which indicates a low probability that the network shows a tendency to overlearning and thus fails to give a good control response.

Another option for training the evolutionary artificial neural network is deep learning, which can generally be used under the supervised training scheme; it can be an alternative for this type of system if there are enough weights to carry out the projection of the dynamics of the process, all under a supervised training process with simple neural structures or architectures. Another alternative to deep learning is a neural network evolution algorithm, which can be a new field of study for the control of nonlinear processes.




References

[1] J. Narkiewicz, M. Sochacki, and B. Zakrzewski, “Generic model of a satellite attitude control system,” International Journal of Aerospace Engineering, vol. 2020, p. 5352019, Jul 2020. [Online]. Available:

[2] N. F. Salahuddin, A. Shamiri, M. A. Hussain, and N. Mostoufi, “Hybrid fuzzy-GMC control of gas-phase propylene copolymerization in fluidized bed reactors,” Chemical Engineering Journal Advances, vol. 8, p. 100161, 2021. [Online]. Available:

[3] M. S. Mahmoud, M. Maaruf, and S. El-Ferik, “Neuro-adaptive fast terminal sliding mode control of the continuous polymerization reactor in the presence of unknown disturbances,” International Journal of Dynamics and Control, vol. 9, no. 3, pp. 1167–1176, Sep 2021. [Online]. Available:

[4] E. S. Yadav, P. Shettigar J, S. Poojary, S. Chokkadi, G. Jeppu, and T. Indiran, “Data-driven modeling of a pilot plant batch reactor and validation of a nonlinear model predictive controller for dynamic temperature profile tracking,” ACS Omega, vol. 6, no. 26, pp. 16 714–16 721, 2021, PMID: 34250331. [Online]. Available:

[5] M. Maaruf, M. M. Ali, and F. M. Al-Sunni, “Artificial intelligence-based control of continuous polymerization reactor with input dead-zone,” International Journal of Dynamics and Control, vol. 11, no. 3, pp. 1153–1165, Jun 2023. [Online]. Available:

[6] P. Shettigar J, K. Lochan, G. Jeppu, S. Palanki, and T. Indiran, “Development and validation of advanced nonlinear predictive control algorithms for trajectory tracking in batch polymerization,” ACS Omega, vol. 6, no. 35, pp. 22 857–22 865, 2021. [Online]. Available:

[7] H. Wang and Y. Chen, “Application of artificial neural networks in chemical process control,” Asian Journal of Research in Computer Science, vol. 14, no. 1, pp. 22–37, 2022. [Online]. Available:

[8] M. L. Dietrich, A. Brandolin, C. Sarmoria, and M. Asteasuain, “Mathematical modelling of rheological properties of low-density polyethylene produced in high-pressure tubular reactors,” IFAC-PapersOnLine, vol. 54, no. 3, pp. 378–382, 2021, 16th IFAC Symposium on Advanced Control of Chemical Processes ADCHEM 2021. [Online]. Available:  

[9] P. Shettigar J, J. Kumbhare, E. S. Yadav, and T. Indiran, “Wiener-neural-network-based modeling and validation of generalized predictive control on a laboratory-scale batch reactor,” ACS Omega, vol. 7, no. 19, pp. 16 341–16 351, 2022. [Online]. Available:

[10] D. Q. Gbadago, J. Moon, M. Kim, and S. Hwang, “A unified framework for the mathematical modelling, predictive analysis, and optimization of reaction systems using computational fluid dynamics, deep neural network and genetic algorithm: A case of butadiene synthesis,” Chemical Engineering Journal, vol. 409, p. 128163, 2021. [Online]. Available:

[11] M. García-Carrillo, A. B. Espinoza-Martínez, L. F. Ramos-de Valle, and S. Sánchez-Valdés, “Simultaneous optimization of thermal and electrical conductivity of high density polyethylene-carbon particle composites by artificial neural networks and multi-objective genetic algorithm,” Computational Materials Science, vol. 201, p. 110956, 2022. [Online]. Available: 

[12] L. Ghiba, E. N. Drăgoi, and S. Curteanu, “Neural network-based hybrid models developed for free radical polymerization of styrene,” Polymer Engineering & Science, vol. 61, no. 3, pp. 716–730, 2021. [Online]. Available:


[13] K. Bi, S. Zhang, C. Zhang, H. Li, X. Huang, H. Liu, and T. Qiu, “Knowledge expression, numerical modeling and optimization application of ethylene thermal cracking: From the perspective of intelligent manufacturing,” Chinese Journal of Chemical Engineering, vol. 38, pp. 1–17, 2021. [Online]. Available:

[14] K. Ahmad, H. R. Ghatak, and S. M. Ahuja, “Response surface methodology (RSM) and artificial neural network (ANN) approach to optimize the photocatalytic conversion of rice straw hydrolysis residue (RSHR) into vanillin and 4-hydroxybenzaldehyde,” Chemical Product and Process Modeling, 2022. [Online]. Available:

[15] E. M. de Medeiros, H. Noorman, R. Maciel Filho, and J. A. Posada, “Production of ethanol fuel via syngas fermentation: Optimization of economic performance and energy efficiency,” Chemical Engineering Science: X, vol. 5, p. 100056, 2020. [Online]. Available: 

[16] S. Greydanus, M. Dzamba, and J. Yosinski, “Hamiltonian neural networks,” in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019. [Online]. Available:

[17] J. Zeng, L. Cao, M. Xu, T. Zhu, and J. Z. H. Zhang, “Complex reaction processes in combustion unraveled by neural network-based molecular dynamics simulation,” Nature Communications, vol. 11, no. 1, p. 5713, Nov 2020. [Online]. Available:

[18] H. Wang and R. Mo, “Review of neural network algorithm and its application in reactive distillation,” Asian Journal of Chemical Sciences, vol. 9, no. 3, pp. 20–29, 2021. [Online]. Available:

[19] I. Moreno and J. Serracín, “Dr. Santiago Ramón y Cajal,” Prisma Tecnológico, vol. 12, no. 1, pp. 86–87, 2021. [Online]. Available:

[20] V. Buhrmester, D. Münch, and M. Arens, “Analysis of explainers of black box deep neural networks for computer vision: A survey,” Machine Learning and Knowledge Extraction, vol. 3, no. 4, pp. 966–989, 2021. [Online]. Available:




[21] H. Chen, C. Fu, J. Zhao, and F. Koushanfar, “Deepinspect: A black-box trojan detection and mitigation framework for deep neural networks,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Conferences on Artificial Intelligence Organization, 7 2019, pp. 4658–4664. [Online]. Available:

[22] E. Zihni, V. I. Madai, M. Livne, I. Galinovic, A. A. Khalil, J. B. Fiebach, and D. Frey, “Opening the black box of artificial intelligence for clinical decision support: A study predicting stroke outcome,” PLOS ONE, vol. 15, no. 4, pp. 1–15, 04 2020. [Online]. Available:

[23] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, H. Shahrzad, A. Navruzyan, N. Duffy, and B. Hodjat, “Chapter 15 - evolving deep neural networks,” in Artificial Intelligence in the Age of Neural Networks and Brain Computing, R. Kozma, C. Alippi, Y. Choe, and F. C. Morabito, Eds. Academic Press, 2019, pp. 293–312. [Online]. Available:

[24] Bilal, M. Pant, H. Zaheer, L. Garcia-Hernandez, and A. Abraham, “Differential evolution: A review of more than two decades of research,” Engineering Applications of Artificial Intelligence, vol. 90, p. 103479, 2020. [Online]. Available:

[25] A. Bashar, “Survey on evolving deep learning neural network architectures,” Journal of Artificial Intelligence, vol. 1, no. 2, pp. 73–82, 2019. [Online]. Available:

[26] Y. Sun, B. Xue, M. Zhang, and G. G. Yen, “Evolving deep convolutional neural networks for image classification,” IEEE Transactions on Evolutionary Computation, vol. 24, no. 2, pp. 394–407, 2020. [Online]. Available:

[27] E. Ekpo and I. Mujtaba, “Evaluation of neural networks-based controllers in batch polymerization of methyl methacrylate,” Neurocomputing, vol. 71, no. 7, pp. 1401–1412, 2008, Progress in Modeling, Theory, and Application of Computational Intelligence. [Online]. Available: