A constraint multi-objective evolutionary optimization of a state-of-the-art dew point cooler using digital twins

This study pioneers the development of digital twins using a Feed-forward Neural Network (FFNN) and a multi-objective evolutionary optimization (MOEO) using a Genetic Algorithm (GA) for a counter-flow Dew Point Cooler with a novel Guideless Irregular Heat and Mass Exchanger (GIDPC). The digital twins take the intake air characteristics, i.e., temperature and relative humidity, as well as the main operating and design parameters, i.e., intake air velocity, working air fraction, height of the HMX, channel gap, and number of layers, as the inputs. The GIDPC's cooling capacity, coefficient of performance (COP), dew point efficiency, wet-bulb efficiency, supply air temperature and surface area of the layers are selected as outputs. The optimum values of the aforementioned operating and design parameters are identified by the MOEO to maximise the cooling capacity, COP and wet-bulb efficiency, and to minimise the surface area of the layers, in four identified climates within the Köppen-Geiger climate classification, namely: tropical rainforest, arid, Mediterranean hot summer and hot summer continental climates. The monthly and annual performances of the system under the identified optimum conditions are compared with the base system, and the results show annual improvements of up to 72.75% in COP and 23.57% in surface area. In addition, the annual power


Background
To provide comfortable indoor air quality, air conditioners are needed in modern buildings, but a substantial share of the energy supplied to buildings, i.e., up to 50% [1], is consumed by air conditioning systems. The energy intensiveness of conventional Mechanical Vapor Compression (MVC) air conditioners [2] has led researchers toward an efficient replacement [3]. Evaporative cooling systems, with direct evaporative cooling (DEC) and indirect evaporative cooling (IEC) types, were introduced as environmentally friendly cooling systems in the past decades [2,4,5]. The IECs are preferred owing to their superiority in keeping the humidity within acceptable levels [6,7]. The necessity of inventing more efficient cooling systems resulted in the introduction of Dew Point Coolers (DPCs), with a remarkable potential for cooling the intake air down to its dew point temperature [8,9]. The M-cycle Heat and Mass Exchanger (HMX) was the core initiative of this technology; it causes a significant decrease in the dew point and wet-bulb temperatures of the air in the wet channel, leading to up to 30% higher cooling efficiency [10], with two main types: cross-flow and counter-flow [11].

Literature review: dew point cooler
The first research on this technology was conducted within the Coolerado® project in the USA [10], where a cross-flow DPC reached wet-bulb and dew-point efficiencies of 80% and 50%, respectively. Zhao et al. [12] identified the northern and western regions of China as suitable for DPC operation. Riangvilaikul and Kumar [13] experimentally concluded that the wet-bulb and dew point efficiencies of a DPC were in the ranges of 92-114% and 58-84%, respectively. Bruno [14] investigated the applicability of a DPC prototype in both commercial and residential buildings. Jradi et al. [15] achieved wet-bulb efficiencies of 70-117% with supply air flow rates of 300-1500 m³·h⁻¹ in a cross-flow DPC. Pandelidis et al. [16] studied the effect of the inlet air parameters on the performance of HMXs in different types of M-Cycle systems. Lin et al. [17] concluded that the saturation point of the working air is independent of the intake air conditions. Xu et al. [18] conducted two studies introducing a novel super-performance DPC with 30-60% higher performance. It is also reported that the COP of the proposed system can reach 52.5 at the ideal operating condition with a working air ratio of 0.36 [19]. In an experimental study, Lin et al. [20] found that the performance of the selected cross-flow DPC was negatively affected by wet conditions. The exergy flow and efficiency ratio of the cross-flow DPC under various conditions [21], and the effect of sprayed water in the wet channel of a counter-flow DPC [22], were investigated. Other studies [23,24] were also carried out in which the key governing dimensionless numbers and correlations for the transient and steady-state characteristics were proposed. Wan et al. [25] selected two DPCs with two different air flow configurations and compared the cooling effectiveness and product temperature of both types.
A thermodynamic analysis of a hybrid membrane liquid desiccant dehumidification and DPC system was carried out, in which the targeted supply air temperature of 20.0-28.0 °C with a humidity ratio of less than 12.0 g/kg was reached [26]. Wan et al. [27] calculated the heat and mass transfer coefficients in a counter-flow DPC with the NTU-Le-R method, with maximum discrepancies of 6%. Liu et al. [28] reported wet-bulb efficiency and COP improvements of a counter-flow DPC of 29.3% and 34.6% respectively, compared with commercial DPCs. In addition, Liu et al. [29] identified the best operating conditions for the selected counter-flow DPC.

Artificial Intelligence in DPC, and the identified gap
Over the past decade, Artificial Intelligence (AI) has been brought into DPC technology, which has led to outstanding results in performance prediction and optimum operation of DPCs. Pandelidis and Anisimov [30] used the Response Surface Methodology (RSM) for a cross-flow M-cycle heat exchanger. It was concluded that the performance of the system was mainly affected by the supply air mass flow rate, inlet air temperature and relative humidity. Sohani et al. [31] used the Group Method of Data Handling-type neural network (GMDH) and Multi Objective Optimization (MOO) methods to predict the supply air properties and optimize the performance of a cross-flow DPC. It was concluded that the COP and cooling capacity were improved by 8.1% and 6.9% respectively. Jafarian et al. [32] also used the same method for a counter-flow DPC, which could predict the supply air temperature; in addition, using MOO, the COP and specific area of the cooler were optimized. Sohani et al. [33] compared the performance of counter-regenerative and cross-flow DPCs after identifying their optimum operating conditions; as a result, the proper climate for each DPC was introduced. Sohani et al. [34] also presented an hourly optimization method for DPCs employing the MOO. Pakari and Ghani [35] used regression models to predict the performance of a counter-flow DPC. An optimization-based study [36] revealed that the optimal channel length and working ratio for the considered DPC were 0.50 m and 0.40 respectively.
A review of the existing literature revealed that studies on DPCs have mostly concentrated on commercial HMXs. However, the novel counter-flow GIDPC has the best performance in terms of COP, i.e., 52.5 [18,19]. Xu et al. [18] pioneered the GIDPC through numerical and experimental studies [19], and Akhlaghi et al. [37] proposed a data-driven model based on Multiple Polynomial Regression (MPR). However, to date, no optimization algorithm considering both operating and design parameters has been developed for the GIDPC. Thus, the lack of a robust AI model that can identify the optimum operating and design parameters of the GIDPC in diverse climates is identified as an outstanding gap for this state-of-the-art DPC. Identification of the dedicated optimum parameters in each climate will reduce the construction cost and improve the GIDPC efficiency in terms of power consumption and cooling performance.
Therefore, this study pioneers a constrained multi-objective evolutionary optimization (MOEO) and digital twins for GIDPCs. The digital twins are developed using the Feed-forward Neural Network (FFNN) method, and the MOEO is developed based on the Genetic Algorithm (GA) to, firstly, predict the performance of the system in any random operating condition using a big dataset and, secondly, identify the optimum values of the operating and design parameters in diverse climates. A validated numerical model is used to construct the big dataset for training the FFNN model. The input parameters of the dataset include the main operating parameters, i.e., temperature, relative humidity and velocity of the intake air, and the key design parameters, i.e., working air fraction, HMX height, channel gap, and number of layers in the HMX structure.
Having developed the digital twins, the MOEO is used to find the optimum operating and design parameters to maximise the cooling capacity, COP and wet-bulb efficiency, and to minimise the surface area of the layers, as the objectives of the MOEO, in four suitable climates for GIDPC operation based on the Köppen-Geiger classification [38].
The remaining parts are organised as follows: in , the GIDPC and the associated numerical model are

System description (GIDPC)
In this section, a 4-kW counter-flow DPC with a guideless irregular HMX (GIDPC) is explained. The GIDPC consists of a novel corrugated HMX, product and exhaust air fans, and a water supply/distribution system (which comprises a water distributor, a circulating water pump and a water tank). Among these, the HMX is the key innovative part of the GIDPC. The schematic drawing of the HMX of the GIDPC is shown in Fig. 1. The proposed HMX is constructed from numerous layers which form the wet and dry channels for the cooling and evaporation processes. Each wet channel is formed by two facing wet surfaces, while each dry channel is formed by the two adjacent dry surfaces. In operation, the intake air enters the dry channel at a specified temperature and humidity and, while passing through the dry channel, loses its heat to the adjacent wet channels, undergoing a remarkable decrease in temperature. At the end of the dry channel the intake air is divided into two parts: working air and supply air. The working air flows into the adjacent wet channel, while the rest leaves the channel as the supply air. The amount of working air in the wet channel is specified by the working air fraction; this air receives a considerable amount of heat transferred from the dry channel, as well as moisture from the surfaces of the wet channels. On completion of the heat and moisture transfer, the working air leaves the wet channel as warm and humidified air, called the exhaust air. Compared to the traditional flat-plate HMX, the guideless irregular HMX has some remarkable advantages. For instance, removal of the supporting guides in the channels has led to a considerable reduction in air flow resistance.
Moreover, the heat transfer area has increased as a result of the corrugated surfaces. A super-performance wet material layer used to cover the wet surfaces, i.e., Coolmax fabric, provides a higher water absorption capacity, a larger diffusion area and a higher evaporation rate. As a result of this high absorption capacity, an intermittent water supply scheme has been implemented in the water distribution system, which can minimize the water usage and the water pump power consumption. It was shown that under the standard test condition [19], i.e., an intake air dry-bulb temperature of 37.8 °C and wet-bulb temperature of 21.1 °C, the prototype of the GIDPC achieved a wet-bulb efficiency of 114% and a dew point efficiency of 75%. In addition, a significant increase in the COP value, i.e., 52.5, was achieved at the optimal working air ratio of 0.364, compared to the commercial DPC with the same dimensions (52.5 vs. 18).

Numerical model
The finite element method is employed to discretise the traditional mass and energy equations, and Newton iteration is applied to each computational element to pursue the equilibrium state of the heat and mass transfer phenomena, with some simplifying assumptions: heat transfer between the HMX and the surroundings is ignored; heat and mass transfer occur in steady state; the convective heat transfer through the channel walls is in the vertical direction only; the walls are impenetrable; the thermal resistance of the walls is neglected; and the air within the channels is treated as an incompressible gas. The numerical model is developed by applying the following equations to each of the selected computational elements along the channels [18].

The air enthalpy difference between the inlet and outlet of a dry element is equal to the total heat transfer between the airflow in the dry element and the channel walls, as shown in Eq. (1):

m_dry (i_dry,in − i_dry,out) = h A_e (T_air,dry − T_wall)  (1)

The water evaporated from the wet surface is driven by the humidity-ratio difference, Eq. (2):

m_ev = h_m ρ_wet σ A_e (hum_w − hum_air,wet)  (2)

where h_m is the convective mass transfer coefficient between the working airflow and the wet channel surface, ρ_wet is the density of the air in the wet channel, hum_w and hum_air,wet are the humidity ratios of the working air at the wet-wall temperature and at the wet-channel air temperature respectively, and σ is the wettability of the surface material. The convective mass transfer coefficient is expressed as a function of the convective heat transfer coefficient and the Lewis number, h_m = h / (ρ c_p Le^n), where n = 1/3. The convective heat transfer coefficient between the airflow and the channel wall mainly depends on the flow regime (which is laminar in this study, i.e., 52.31 < Re_dry < 1209 and 5.38 < Re_wet < 1131) and can be calculated using Eq. (3):

h = Nu λ / d_e  (3)

where Nu is the Nusselt number, which depends on the airflow regime [18], and λ and d_e (m) are the thermal conductivity and the equivalent diameter respectively.
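The transfer-coefficient relations above can be sketched in code as follows. This is a minimal illustration: the function and variable names are assumptions, and the parallel-plate Nusselt number used in the example is an illustrative laminar-flow value, not one taken from the paper.

```python
# Sketch of the per-element transfer coefficients, Eq. (3) and the Lewis relation.

def heat_transfer_coeff(nu: float, k_air: float, d_e: float) -> float:
    """Eq. (3): convective heat transfer coefficient h = Nu * lambda / d_e."""
    return nu * k_air / d_e

def mass_transfer_coeff(h: float, rho: float, cp: float, le: float,
                        n: float = 1.0 / 3.0) -> float:
    """Lewis-relation estimate of the convective mass transfer coefficient,
    h_m = h / (rho * cp * Le**n), with n = 1/3 as stated in the text."""
    return h / (rho * cp * le ** n)

# Example: laminar channel flow (Nu ~ 8.24 for parallel plates, an assumed value)
h = heat_transfer_coeff(nu=8.24, k_air=0.026, d_e=0.005)   # W m^-2 K^-1
hm = mass_transfer_coeff(h, rho=1.2, cp=1005.0, le=0.86)   # m s^-1
```

Here `k_air` stands for the thermal conductivity λ of air and `d_e` for the equivalent diameter; the air properties are typical room-condition values used purely for illustration.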

The energy balance of the air in the wet channel is considered by calculating the air enthalpy difference between the inlet and outlet of a wet element through Eq. (4), which is equal to the sum of the heat transferred from the dry element to the wet element and the change of airflow enthalpy in the wet element caused by evaporation:

m_wet (i_wet,out − i_wet,in) = q_dry-wet + m_ev i_ev  (4)

where m_wet = r m_in is the mass flow rate in the wet channel and r is the working air fraction of the intake air.

As shown in Eq. (5), the amount of water evaporated from the wet element surface is equal to the variation of the water flow rate between the inlet and outlet of the computational wet element:

m_ev = m_water,in − m_water,out  (5)

The water enthalpy difference between the inlet and outlet of a wet element is caused by the heat transfer between the water and the airflow in the dry/wet channels as well as the latent heat of the evaporated water, as expressed in Eq. (6):

m_water,in i_water,in − m_water,out i_water,out = q_water-air + m_ev h_fg  (6)

where h_fg is the latent heat of the evaporated water.
Generally, the performance of IECs is evaluated by several common formulae provided by ASHRAE [39], in which the cooling capacity and COP are the main performance parameters. The system performance can also be evaluated using other metrics such as the wet-bulb and dew point efficiencies. The cooling capacity can be expressed by Eq. (7) as follows:

Q = c_p (1 − r) m_in (T_dry,in − T_dry,out)  (7)

where Q is the cooling capacity, c_p is the specific heat capacity, T_dry,in is the intake air temperature in the dry channel, T_dry,out is the outlet air temperature in the dry channel, r is the working air fraction, and m_in is the mass flow rate of intake air in the dry channel.
COP can be expressed by Eq. (8) as follows:

COP = Q / (W_fan + W_pump)  (8)

where W_fan and W_pump are the electrical power consumed by the fan and the pump respectively.
The wet-bulb efficiency evaluates the system's capability in reducing the intake air temperature to its wet-bulb temperature. Similarly, the dew point efficiency considers the system's potential in reducing the intake air temperature to its dew point temperature, as shown in Eq. (9):

ε_wb = (T_dry,in − T_dry,out) / (T_dry,in − T_wb,in),  ε_dp = (T_dry,in − T_dry,out) / (T_dry,in − T_dp,in)  (9)

where ε_wb is the wet-bulb efficiency and T_wb,in is the wet-bulb temperature of the intake air in the dry channel, and ε_dp is the dew point efficiency and T_dp,in is the dew point temperature of the intake air in the dry channel.
The pressure drop along the channels is calculated using Eq. (10):

ΔP = (ξ + f l / d_h) ρ v² / 2  (10)

where ΔP is the pressure drop, ξ is the coefficient of local resistance, f is the coefficient of friction resistance, d_h is the hydraulic diameter, ρ is the density and v is the air velocity.
The surface area of the layers is a parameter considered to control the cost, and it can be calculated using Eq. (11) as follows:

A = n H w  (11)

where A is the surface area, n is the number of layers, H is the height of the HMX and w represents the width of the surface. In the current GIDPC, the width of the corrugated surface is taken as 0.39 m [19].
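The performance metrics of Eqs. (7)-(11) can be sketched as plain functions. The variable names (`r` for the working air fraction, `m_dot` for the intake mass flow rate, and so on) and the numbers in the example are assumptions for illustration; the product form A = n·H·w follows the symbol list given for Eq. (11).

```python
# Minimal sketch of the GIDPC performance metrics, Eqs. (7)-(11).

def cooling_capacity(m_dot, cp, t_in, t_out, r):
    """Eq. (7): Q = cp * (1 - r) * m_dot * (T_in - T_out); only the
    supply-air share (1 - r) of the intake flow delivers cooling."""
    return cp * (1.0 - r) * m_dot * (t_in - t_out)

def cop(q, w_fan, w_pump):
    """Eq. (8): COP = Q / (W_fan + W_pump)."""
    return q / (w_fan + w_pump)

def wet_bulb_efficiency(t_in, t_out, t_wb):
    """Eq. (9): eps_wb = (T_in - T_out) / (T_in - T_wb)."""
    return (t_in - t_out) / (t_in - t_wb)

def dew_point_efficiency(t_in, t_out, t_dp):
    """Eq. (9): eps_dp = (T_in - T_out) / (T_in - T_dp)."""
    return (t_in - t_out) / (t_in - t_dp)

def pressure_drop(zeta, f, length, d_h, rho, v):
    """Eq. (10): dP = (zeta + f * l / d_h) * rho * v**2 / 2."""
    return (zeta + f * length / d_h) * rho * v * v / 2.0

def layer_surface_area(n_layers, height, width=0.39):
    """Eq. (11): A = n * H * w, with w = 0.39 m for the current GIDPC."""
    return n_layers * height * width

# Example for a single operating point (illustrative numbers only)
q = cooling_capacity(m_dot=0.5, cp=1005.0, t_in=37.8, t_out=22.0, r=0.36)
performance = cop(q, w_fan=80.0, w_pump=15.0)
```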

Proposed methods: digital twins and multi objective evolutionary optimization
This section makes two main contributions: (I) the digital twins are developed using the FFNN to follow the system behaviour; (II) the MOEO using GA is applied to obtain the optimal operating and design parameters of the system in diverse climates.

Digital Twins can be defined as a digital replication of a physical entity. They can also be combined with the Internet of Things (IoT) and/or augmented reality; however, in the simplest case, a digital twin is just a system identification used for different purposes such as abnormality detection and system optimization [40]. Black-box, grey-box and white-box models are the three classes of system identification, which is the core of a digital twin. The FFNN is used here as a black-box, data-driven model to build the digital twins. Neural Networks (NNs) are machine learning algorithms which can be used for data-driven prediction, regression and classification. NNs are inspired by the human brain and are multi-layer networks of neurons constructed from an input layer which accepts the input variables, one or more hidden layers, and an output layer which produces the output variables.
The architecture of an FFNN, a specific type of NN, is depicted in Fig. 2, where each neuron within each layer is connected to every neuron in the following layer. During initialization, each connection is weighted by a random value, which is updated during the training procedure to reach the best fit with the lowest possible error. In addition, there is a bias parameter which adjusts the weighted sum of the inputs to the neuron; the bias is a constant, initialised randomly, which helps the model fit the given data.
The input to a neuron is the weighted sum of the outputs of the previous layer plus a bias:

z_j = Σ_{i=1..n} w_ij x_i + b_j

where w_ij represents the weight connecting neuron i to neuron j in the next layer, n represents the number of connections, and b_j is the corresponding bias. Activation functions are attached to each neuron; their role is to determine the importance of each neuron's input in predicting the outputs and to normalize the outputs of the neurons. Different activation functions were compared, and it was found that the performance of the network in terms of Mean Square Error (MSE) is best when the activation function is the hyperbolic tangent sigmoid for the current GIDPC big dataset, which operates through the function below:

a_j = 2 / (1 + e^(−2 z_j)) − 1

where a_j is the activated value of neuron j.
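The forward pass described above (weighted sum plus bias, followed by the hyperbolic tangent sigmoid) can be sketched as follows; the layer sizes are arbitrary illustrations of the seven-input, six-output structure, not the tuned architecture of the paper.

```python
import numpy as np

def tansig(z):
    """Hyperbolic tangent sigmoid: 2 / (1 + exp(-2 z)) - 1, identical to tanh(z)."""
    return 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0

def forward(x, weights, biases):
    """Propagate input x through the layers; weights[i] has shape (n_out, n_in)."""
    a = x
    for w, b in zip(weights, biases):
        a = tansig(w @ a + b)   # weighted sum plus bias, then activation
    return a

# Randomly initialised weights and biases, as in the initialization step above
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 7)), rng.normal(size=4)   # 7 inputs -> 4 hidden
w2, b2 = rng.normal(size=(6, 4)), rng.normal(size=6)   # 4 hidden -> 6 outputs
y = forward(np.ones(7), [w1, w2], [b1, b2])
```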
The randomly selected weights and biases are iteratively optimized through the back-propagation process until the considered evaluation metric, e.g., the mean square error, is minimised. Back-propagation is an essential step in minimising the errors and maximising the model's generalization [41]. Holdout cross-validation is used to divide the big dataset into three subsets: a training set (70%), a validation set (15%) and a testing set (15%). The training set is used to estimate the network weights, while the validation set is used to monitor the network and track the minimum error during the iterations until training is stopped. The test set is unseen by the network; its task is to decrease the bias and generate unbiased estimates of future predictive performance and generalizability. The test set is used at the end of the iteration process to evaluate the performance of the model on an independently drawn sample.
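The 70/15/15 holdout split can be sketched as an index partition; the shuffling seed and sample count are placeholders.

```python
import numpy as np

def holdout_split(n_samples, seed=42):
    """Shuffle sample indices and split them 70% / 15% / 15%
    into training, validation and test subsets."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.70 * n_samples)
    n_val = int(0.15 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = holdout_split(10_000)
```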
Bayesian Regularization (BR) is used to optimize the weight and bias values; it is a combination of Bayesian methods and NNs that determines the optimal regularization parameters. The BR technique places prior distributions on the model parameters and minimises the regularized objective [42]:

F = β E_D + α E_W

where D represents the big dataset, i.e., X represents the inputs and Y represents the outputs, E_D is the sum of squared estimation errors, M represents the network structure, and β and α are the estimated hyper-parameters. E_W is the sum of squares of the weights, which is intended to decrease the probability of overfitting [43].

A density function is used to update the weights according to Bayes' rule. The posterior distribution of w given α, β, D and M can be written as:

P(w | D, α, β, M) = P(D | w, β, M) P(w | α, M) / P(D | α, β, M)

where P(D | w, β, M) is the likelihood function of w, i.e., the probability of observing the data given w, P(w | α, M) is the prior distribution of the weights under M, and P(D | α, β, M) is a normalization factor, or the evidence for the hyper-parameters α and β.
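The regularized objective minimised by BR can be written compactly as a function of the errors and the weights. This is a sketch of the objective only; the re-estimation of the hyper-parameters α and β, which is the Bayesian part of the procedure, is not shown.

```python
import numpy as np

def br_objective(y, y_hat, weights, alpha, beta):
    """Regularized objective F = beta * E_D + alpha * E_W, where E_D is the
    sum of squared estimation errors and E_W the sum of squared weights."""
    e_d = float(np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2))
    e_w = float(sum(np.sum(w ** 2) for w in weights))
    return beta * e_d + alpha * e_w

# Example with one tiny weight matrix (illustrative numbers only)
f = br_objective([1.0, 2.0], [0.9, 2.1],
                 [np.array([[0.5, -0.5]])], alpha=0.01, beta=1.0)
```

A larger α penalises large weights more strongly, which is how E_W reduces the probability of overfitting.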
In this study, as listed in Table 1, the main operating and design parameters, i.e., temperature, relative humidity and velocity of the intake air, working air fraction, HMX height, channel gap, and number of layers in the HMX structure, are all considered as input parameters. Additionally, the main performance parameters, i.e., supply air temperature, cooling capacity, COP, dew point efficiency, wet-bulb efficiency and surface area of the layers, are considered as output parameters. The big dataset is created by the validated numerical model [18] using newly defined operating ranges based on the literature [32,37], with the purpose of covering wider ranges.
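The dataset construction can be sketched as sampling the seven inputs within their operating ranges and recording the six outputs of the numerical model. The ranges below are hypothetical placeholders (the actual ranges are those of Table 1), and `numerical_model` is a stub standing in for the validated solver of [18].

```python
import random

INPUT_RANGES = {               # HYPOTHETICAL illustration only, not Table 1 values
    "intake_temp_C":      (25.0, 45.0),
    "intake_rh":          (0.10, 0.70),
    "intake_velocity_ms": (1.0, 3.3),
    "working_air_frac":   (0.1, 0.5),
    "hmx_height_m":       (0.5, 1.5),
    "channel_gap_m":      (0.003, 0.006),
    "n_layers":           (50, 300),
}

def numerical_model(sample):
    """Stub for the validated numerical model: returns the six output parameters."""
    return {"T_supply": 0.0, "Q": 0.0, "COP": 0.0,
            "eps_dp": 0.0, "eps_wb": 0.0, "area": 0.0}

def build_dataset(n_samples, seed=0):
    """Sample the inputs uniformly within their ranges and attach the outputs."""
    random.seed(seed)
    rows = []
    for _ in range(n_samples):
        x = {k: random.uniform(lo, hi) for k, (lo, hi) in INPUT_RANGES.items()}
        rows.append({**x, **numerical_model(x)})
    return rows

data = build_dataset(5)
```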

Multi objective evolutionary optimization using genetic algorithm
Generally, optimization techniques are classified into four main categories [44]: constrained, multimodal, multi-objective and combinatorial. They can also be categorised into classical and metaheuristic optimization [45]. In this study, constrained multi-objective evolutionary optimization (MOEO) using the Genetic Algorithm (GA), known as one of the random-based Evolutionary Algorithms (EAs), is selected as the optimization tool.
Optimization helps the GIDPC reach its maximum potential by identifying the optimum values of the operating and design parameters; it can deal with system nonlinearity and avoid the local minima of the problem. The optimization is performed in MATLAB, the correctness of which was validated in different studies [46,47]. The convergence of the MOEO is investigated through the cost versus the number of iterations.

Table 1. Big dataset specifications (input parameters with their minimum and maximum values).
The cooling capacity, COP, wet-bulb efficiency and surface area of the layers are selected as objectives, as they inherently consider the economic and engineering characteristics of the system simultaneously. The reason for selecting the cooling capacity and COP is to maximise the cooling performance and minimise the power consumption of the system simultaneously. Although the cooling capacity is included in the COP calculation, considering the COP alone would lead to irrational results, as the focus might shift solely to reducing the power consumption. Moreover, maximising the wet-bulb efficiency minimises the supply air temperature of the GIDPC. Eventually, minimising the surface area of the layers leads to a lower production cost. Considering a single objective can result in irrational solutions by ignoring crucial trade-offs in identifying the optimum values. For instance, the cooling capacity of a DPC can be improved by increasing the length of the channels, whereas longer channels can lead to a lower COP and a higher pressure drop (more fan power) [48]. Thus, a multi-objective optimization is necessary to find the best balance between the objectives.
The optimization function is defined by fitness function and the constraint function. The trained FFNN, is the fitness function to be optimized which sets the variables of the problem and the optimization objectives. The constraint function implements the parameters defined ranges as restrictions on the fitness function.
In the present optimization method, the input parameters are taken as the genotype and the output parameters are considered the phenotype. Out of the seven input parameters listed in Table 1, the temperature and relative humidity of the intake air vary by climate, and the remaining five input parameters are chosen as decision variables. Hence, for each specific climate, an MOEO is performed, which results in a unique optimum design for that climate.
In each generation, the selection function picks the most valuable genes, which are chosen as the parents of the next generation, and the multi-point crossover procedure is then performed on them. In addition, random genes are added to the population by the mutation function, and this procedure is repeated until the ultimate criteria are established. Different conditions can be set to stop this process; in this study it was reaching the maximum of 200 iterations. The flowchart of the optimization process is shown in Fig. 3. In addition, the configured settings and parameters for the proposed optimization are summarized in Table 2. Trial and error is the most common way to select the listed parameters; however, the plot of cost versus iterations, the system's nonlinearity and the number of inputs were the main factors in selecting these parameters. The cost function in this study is considered as a weighted combination of the normalized objectives, where T and RH are predefined based on the climates, w_i is the weight of each objective, and R_Q, R_COP, R_ε and R_A are used to normalize the output values (objectives). A detailed illustration of the methods is discussed in Section 3.

Table 2. Genetic Algorithm settings.
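The GA loop described above (selection, multi-point crossover, random mutation, stopping after 200 generations) can be sketched as a toy example. The bounds and the cost function are placeholders; in the paper the fitness is the trained FFNN evaluated on the five decision variables, and the constraint function enforces the parameter ranges.

```python
import random

# Hypothetical bounds for five decision variables (placeholders, not Table 1)
BOUNDS = [(1.0, 3.3), (0.1, 0.5), (0.5, 1.5), (0.003, 0.006), (100, 300)]

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def cost(ind):
    """Placeholder weighted sum of normalized objectives (to be minimised)."""
    return sum((x - lo) / (hi - lo) for x, (lo, hi) in zip(ind, BOUNDS))

def crossover(a, b):
    """Multi-point (two-point) crossover of two parents."""
    cut1, cut2 = sorted(random.sample(range(len(a)), 2))
    return a[:cut1] + b[cut1:cut2] + a[cut2:]

def mutate(ind, rate=0.1):
    """Resample each gene within its bounds with a small probability."""
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def ga(pop_size=40, generations=200, seed=1):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # selection: keep the fittest half
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=cost)

best = ga()
```

The elitist selection (the fittest half survives unchanged) guarantees the best cost never worsens between generations, mirroring the converging cost-versus-iterations plot used to monitor the MOEO.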

Results and discussions

Selected climates
According to the Köppen-Geiger climate classification [38] and considering the defined ranges, out of the seven existing climates, the warm periods of four different climates, i.e., tropical rainforest, arid, Mediterranean hot summer and hot summer continental, are identified as suitable for DPC operation. One representative city is selected for each climate and, in each city, the warm months for GIDPC operation are identified. The criterion for selecting the operating months is the commonly defined ranges of the temperature and relative humidity of the intake air (see Table 1). The four suitable classifications and their representative climates and cities, as well as the operating months, are all shown in Fig. 4. In addition, the monthly temperature and relative humidity of the representative cities [49] and the corresponding average values over the operating months are summarized in Table 3.

Fig. 4. Selected climates and their representative cities.

Table 3. Monthly and average weather data of each city [49]: T (°C) and RH (−) for Miami, Doha, Rome and Beijing.
The accuracy of the candidate networks is evaluated by the mean square error:

MSE = (1/N) Σ (Y − Ŷ)²

where Y represents the real value of the considered output, Ŷ represents its value predicted by the FFNN, and N represents the number of operating conditions.

Comparison of different NN models: an additional model was constructed to compare with the selected model. It can be seen that, although that model was improved, i.e., with an MSE of 0.03, no significant accuracy was added to the network. Hence, model No. 9 is selected for the performance prediction of the GIDPC and the GA optimization.
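The evaluation metric above can be sketched directly; the example values are arbitrary.

```python
import numpy as np

def mse(y, y_hat):
    """Mean square error between real outputs Y and FFNN predictions Y_hat
    over N operating conditions: (1/N) * sum((Y - Y_hat)**2)."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.mean((y - y_hat) ** 2))

error = mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
```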

FFNN model validation: comparison of the supply air temperature
The developed NN model is validated against the numerical model, which was itself validated experimentally on a 4-kW GIDPC. Although the FFNN model is inherently validated by being trained and validated on the big dataset constructed by the numerical model, to illustrate this validation the predicted supply air temperature is compared between the FFNN and numerical models. The idea of selecting the supply air temperature as the comparison parameter is based on the key role of this factor in system performance evaluation: the supply air temperature enters directly into the performance parameter calculations, e.g., the cooling capacity, and its value is influenced by other key parameters such as the intake air conditions, the working air fraction and the HMX dimensions [39].
Therefore, the supply air temperatures predicted by the two models are compared in each climate.

Comparison of the supply air temperature of the base system by the numerical and digital twins models in the operating months.

Optimization results
The average climate data, which were listed in Table 3, are taken to operate the MOEO in order to identify the optimum decision variables in each city. The reason for taking the average data instead of the monthly data is that a single GIDPC unit with optimum operating and design parameters will be introduced for each representative city. The MOEO is operated for different weight values, which sum to one, in order to choose the best possible cost function. In this study, the priority is to choose the approach in which the majority of the objectives hold better values than the base system. Therefore, the results, as listed in Tables

Optimum intake air velocity
The intake air velocity is a factor with a remarkable impact on system performance, as it directly affects the cooling capacity and the rate of heat and mass transfer within the HMX. A higher velocity is associated with a larger pressure drop, which results in more power consumption and consequently lower COP values, which are undesirable in the optimization and performance evaluation of DPCs. Thus, calibrating the air velocity is challenging, as investigated by Xu et al. [18], and a robust trade-off considering the effect of several parameters is required to identify the optimum value in each climate. The GA revealed that the optimum air velocity is almost 2 m/s in all climates, which is lower than the 3 m/s velocity of the base system. The tendency of the GA to give a lower value for the air velocity was somewhat expected, as higher COP values are targeted. Hence, it can be concluded that in the GA trade-off the lower range of the intake air velocity is weighted more heavily than the maximum allowable value of 3.3 m/s.

Optimum working air ratio
The working air ratio is defined as the ratio of the exhaust air to the total intake air. A higher working air ratio leads to less supply air flow, and consequently more temperature drop occurs in the intake air flowing inside the HMX dry channels. As a result, at a very high working air ratio the dew point efficiency increases, but the COP and cooling capacity values decrease; in addition, the low supply air flow remains an unfavourable issue. Thus, similarly to the air velocity, calibrating the working air ratio is another important challenge in DPC operation, requiring a trade-off between the other involved parameters in different climates. The working air fraction in the base system is taken as 0.44, based on the experimental study of the M30 (Coolerado, USA) DPC [10]. The GA revealed that the optimum working air ratio ranges from 0.21 to 0.25, lower than the 0.44 of the base system operating condition. This means that less working air and more supply air, compared to the base system operating condition, lead to better system performance. The optimum working air ratio holds almost the same value of 0.21 in Miami and Rome, whereas it is 0.25 in Doha and 0.23 in Beijing.

Optimum HMX height
A higher HMX height normally results in better DPC performance [18] in terms of cooling capacity by providing more heat transfer area in the HMX sheets; on the contrary, it simultaneously leads to a higher pressure drop along the heat exchanger, higher fan power, a larger surface area and higher construction costs [50].

Optimum channel gap and number of layers
A smaller channel gap causes a higher pressure drop and consequently results in higher fan power and lower COP values. On the contrary, a larger channel gap leads to a higher mass flow rate and a higher cooling capacity. Similarly, more layers can be considered an important factor in increasing the pressure drop, surface area and construction cost; in addition, an increase in these parameters leads to more evaporation area and more heat transfer from the dry channel to the wet channel. Therefore, like the previous decision

System operation in optimum conditions
The identified optimum operating and design parameters reveal that remarkable improvements occur in the COP and surface area values, while the other performance parameters, i.e., cooling capacity, dew point and wet-bulb efficiencies, and supply air temperature, remain almost unchanged, as shown in Figs. 6-11. The main reason for this behaviour lies in the fact that the changes in the main operating and design parameters, i.e., the reductions in working air fraction, air velocity and HMX height, have sacrificed the unchanged performance parameters. However, they have caused a remarkable improvement in COP and surface area, which leads to a significant reduction in power consumption and production cost. The system behaviour is discussed in detail, first by studying the monthly performance of the system under the identified optimum conditions in each region, and second by investigating the optimization effect on the annual performance of the system.

Fig. 7
Surface area improvement comparison between the base and optimised conditions.

Fig. 8
Monthly cooling capacity comparison between the base and optimised conditions.

Fig. 9
Monthly supply air temperature comparison between the base and optimised conditions.

Fig. 10
Monthly dew point efficiency comparison between the base and optimised conditions.

Fig. 11
Monthly wet-bulb efficiency comparison between the base and optimised conditions.

As shown in Figs. 10 and 11, the dew point and wet-bulb efficiencies have decreased by up to 11.94% and 11.45%, respectively. Although a remarkable improvement was recorded after the optimization, the humid conditions in all operating months mean that the negative effects of high relative humidity [18,51] persist. In Doha, where the temperature reaches … °C in June and July and the relative humidity is in the range of 0.41-0.71, an unstable performance of the GIDPC was recorded over the operating months. For instance, as shown in Fig. 6(b), the COP ranged from 9 in December to 27.25 in June. The reason for the unsatisfying behaviour in December is that the GIDPC was operated at a temperature of 25 °C, at which a low cooling capacity was expected; the wet condition of this month was a further reason for the poor performance. Conversely, the warmest and driest condition in June is the main reason for the best GIDPC performance in Doha. Under the identified optimum conditions, however, the system performance is remarkably improved: the best performance was again recorded in June, with an improved cooling capacity of 76.05, and the poor performance under the unfavourable December condition was also improved significantly, with an optimized COP of 23.97. In addition, as seen in Fig. 7, … °C in July.

In Beijing, it can be estimated that the GIDPC will have an inconstant performance over the operating months. As can be seen from Fig. 6(d), the COP of the optimum system varies from 24.71 in August to 44.13 in May. The reason for this behaviour lies in the fact that the wettest condition in August led to the system's poorest performance, while, as expected, the driest condition in May allowed the system to demonstrate its full potential. The optimization has increased the system performance substantially, with the maximum increase of 28.94 in COP occurring in June. This means that although July holds the warmest temperature, i.e., 31 °C, the base system was not designed to exploit the full potential of the GIDPC in this month. The best performance of the system was recorded in May, with a COP of 44.13, whereas, despite the remarkable improvement, the poorest performance remained in August, where the COP is 24.71. In addition, as seen in Fig. 7, …

Annual investigation
Having analysed the monthly effects of the optimization, the annual figures need to be taken into account to determine whether it is worthwhile to have a unique design for the GIDPC in different climates. Thus, the average cooling capacity and COP of the base and optimum systems in the operating months are compared, as shown in Fig. 12. In addition, the annual average values of the cooling capacity, supply air temperature, dew point and wet-bulb efficiencies, as well as the power saving of the GIDPC in optimum conditions, which is mainly due to the improved COP and surface area values, are summarized in Table 9. The annual figures are the averages of the monthly values of the performance parameters in each city.
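The annual averaging described above can be sketched as follows; the monthly COP values are purely hypothetical placeholders, not data from the paper:

```python
# Hedged sketch: annual figures computed as averages of the monthly values of a
# performance parameter over the operating months, as described in the text.
def annual_average(monthly_values):
    """Average a performance parameter over the operating months."""
    return sum(monthly_values) / len(monthly_values)

def improvement_pct(base, optimised):
    """Relative improvement of the optimised system over the base system (%)."""
    return 100.0 * (optimised - base) / base

base_cop = [9.0, 12.5, 18.0, 27.25]    # hypothetical monthly COPs, base system
opt_cop = [23.97, 26.0, 30.5, 36.0]    # hypothetical monthly COPs, optimised
annual_gain = improvement_pct(annual_average(base_cop), annual_average(opt_cop))
```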

Fig. 12
Annual improvement of COP in all climates.

Conclusion
A constraint multi-objective evolutionary optimization using digital twins was developed for the state-of-the-art GIDPC. The optimum operating and design parameters for GIDPCs were the gap identified through the detailed literature review, which is now filled by this study. The digital twins were trained on a comprehensive dataset created by a validated numerical model of a 4-kW GIDPC, and the main optimum operating and design parameters in diverse climates were found. The developed hybrid model was then implemented to demonstrate the monthly and annual GIDPC improvements in all climates. The main outcomes of this study are summarized as follows:

•
Out of the several classes in the Köppen-Geiger climate classification, four climates suitable for GIDPC operation were identified, i.e., tropical rainforest (Miami), arid (Doha), Mediterranean hot summer (Rome) and hot summer continental (Beijing), and the system operation in the selected warm operating months was investigated.

•
The FFNN model with four layers was selected as the predictive tool: an input layer containing the seven operating and design parameters, two hidden layers with 45 neurons each, and an output layer containing five performance parameters of the GIDPC. The network was trained in a supervised manner on the generated dataset.
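The described topology (7 inputs, two hidden layers of 45 neurons, 5 outputs) can be sketched as a plain forward pass; the activation choice (tanh hidden layers, linear output) and the initialisation are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

# Hedged sketch of the stated FFNN topology: 7 -> 45 -> 45 -> 5.
rng = np.random.default_rng(0)
sizes = [7, 45, 45, 5]   # input, two hidden layers, output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """One forward pass: tanh on the hidden layers, linear output layer."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ w + b)
    return x @ weights[-1] + biases[-1]

# One normalised input vector (temperature, RH, velocity, working-air fraction,
# HMX height, channel gap, number of layers) maps to five predicted outputs.
y = forward(np.zeros(7))
```

In practice such a surrogate is what makes the GA-based search cheap: each candidate parameter set is evaluated by a forward pass instead of a full numerical simulation.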