Abstract
Climate change poses a significant challenge to wind energy production. It involves long-term, noticeable changes in key climatic factors such as wind power, temperature, wind speed, and wind patterns. Addressing climate change is essential to safeguarding our environment, societies, and economies. In this context, accurately forecasting temperature and wind power becomes crucial for ensuring the stable operation of wind energy systems and for effective power system planning and management. Numerous forecasting approaches have been proposed, including both traditional forecasting models and deep learning models. Traditional forecasting models have limitations because they cannot describe the complex nonlinear relationships in climatic data, resulting in low forecasting accuracy. Deep learning techniques offer promising nonlinear processing capabilities in weather forecasting. To further advance the integration of deep learning in climate change forecasting, we have developed a hybrid model called CNN-ResNet50-LSTM, comprising a Convolutional Neural Network (CNN), a deep convolutional network (ResNet50), and a Long Short-Term Memory (LSTM) model to predict two climate change factors: temperature and wind power. The experiment was conducted using three publicly available datasets: the Wind Turbine Scada (Scada) dataset, the Saudi Arabia Weather History (SA) dataset, and the Wind Power Generation Data for 4 Locations (WPG) dataset. Forecasting accuracy is evaluated using several metrics, including the coefficient of determination (\(R^2\)), Mean Squared Error (MSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), and Root Mean Squared Error (RMSE). The proposed CNN-ResNet50-LSTM model was also compared to five regression models: Dummy Regressor (DR), Kernel Ridge Regressor (KRR), Decision Tree Regressor (DTR), Extra Trees Regressor (ETR), and Stochastic Gradient Descent Regressor (SGDR). Findings revealed that the CNN-ResNet50-LSTM model achieved the best performance, with \(R^2\) scores of 98.84% for wind power forecasting in the Scada dataset, 99.01% for temperature forecasting in the SA dataset, and 98.58% for temperature forecasting and 98.53% for wind power forecasting in the WPG dataset. The CNN-ResNet50-LSTM model demonstrated promising potential in forecasting both temperature and wind power. Additionally, we applied the CNN-ResNet50-LSTM model to predict climate changes up to 2030 using historical data, providing insights that highlight its potential for future forecasting and decision-making.
Introduction
The fast pace of industrial development, together with the excessive consumption of traditional energy sources such as oil and natural gas and the pollution they cause, highlights the importance of renewable energy. Renewable energy has gained increasing attention globally and has been anticipated to address the energy and environmental crises. Accurate wind speed estimation has become more and more important as wind power is integrated into electricity markets. Over a facility’s lifetime, a 1% inaccuracy in the predicted wind speed might result in a loss of $12,000,0001. The random and uncontrollable nature of wind speed makes forecasting extremely challenging2. Wind turbines are powered by wind energy, which is then converted into electrical energy according to the fundamentals of wind power generation3. Accurate wind speed forecasts are therefore essential to reduce reliance on power grids, minimize wind turbine losses, and increase annual energy production (AEP)3. Moreover, load-balancing scheduling and wind farm regulation are just a few applications that precise wind energy forecasting can help with4. Ensuring reliable wind speed forecasting requires the development of robust models capable of long-term wind speed prediction4. There are two types of wind speed prediction methods: traditional prediction models, which include physical and statistical prediction techniques, and emerging artificial intelligence models5. Traditional models predict wind speed using physical parameters such as climate and season6. However, their complexity and the large computations required over enormous numbers of parameters make efficient and accurate wind prediction difficult. Numerical Weather Prediction (NWP) uses physical rules to simulate weather conditions and make large-scale forecasts7. However, its poor spatial resolution prevents precise and accurate forecasts at specific local areas. The shortcomings of physical prediction models are successfully addressed by the development of statistical prediction models. Statistical prediction models, often referred to as data-driven or “black box” models, do not consider physical parameters such as topography, climate, or seasons7. Physical-statistical models, which combine NWP and statistical models, are being investigated as another approach to address the restrictions mentioned above8,9. Such models leverage data outputs from NWPs as predictor variables in statistical models, resulting in higher accuracy5. Statistical models include the Kalman filter method and the persistence method. The persistence model is effective in predicting wind speed for the short term, but it is not suitable for medium- and long-term wind speed prediction because it only gives the forecast value for the next moment8,10,11. The Kalman filter method uses linear equations to create a wind speed prediction model. It is therefore effective for linear processes that follow a Gaussian distribution and is not suitable for complicated wind speed situations12,13,14,15,16. Some models employ a hybrid approach, integrating the best features of both physical and statistical approaches9.
Machine learning methods for wind speed are divided into two categories. The first comprises deterministic methods, which employ algorithms to produce point values, such as recurrent neural networks (RNN), long short-term memory (LSTM), support vector machines (SVM), back propagation neural networks (BPNN), and fuzzy neural networks17,18. The second category uses uncertainty-aware models, such as Bayesian simulators, to predict wind speed and quantify uncertainty9. A single algorithmic technique for forecasting wind speed may not provide reliable forecasts. As a result, numerous hybrid methods, including combining methods and ensemble methods (EMs), have been developed to improve wind speed prediction capabilities5. Predictive performance has been successfully increased through the application of ensemble learning. It combines multiple models to create a more powerful model and performs better when the adopted base model does not provide great precision. Advanced ensemble learning models include neural networks, reinforcement learning, multi-objective optimization, and boosting6. Several studies have adopted hybrid models to improve the performance of wind predictions. Zhang, Y. M., and Wang, H.6 adopted a hybrid model combining the Empirical Wavelet Transform (EWT), LSTM, Regularized Extreme Learning Machine (RELM), and Inverse Empirical Wavelet Transform (IEWT), which showed stronger predictive performance. For one-sample-ahead prediction, they were able to obtain a Mean Absolute Percentage Error (MAPE) of 2.52%, a notable improvement over the 8.5% MAPE of an LSTM model6.
This study proposed a novel hybrid CNN-ResNet50-LSTM model that is used to forecast climate change with a particular emphasis on temperature and wind power. The contributions of this study can be outlined as follows:
-
1.
A novel deep learning model CNN-ResNet50-LSTM is proposed that integrates the strengths of three models: CNN, ResNet50, and LSTM. CNN excels at efficiently extracting relevant features from datasets and identifying spatial relationships in climate data. ResNet50 introduces skip connections or shortcuts that span multiple layers, effectively addressing the vanishing gradient problem and enabling the training of much deeper networks. LSTM is used to capture long-term dependencies and temporal sequences within data, which is crucial for climate data characterized by time-series properties.
-
2.
Unlike other deep learning models that focus on forecasting a single factor in climate change, the CNN-ResNet50-LSTM model is used to enable forecasting of both temperature and wind power, providing a more comprehensive understanding of climate change behavior.
-
3.
Extensive experimental tests and evaluation comparisons of the proposed model are conducted with five other regression models: DR, ETR, DTR, SGDR, and KRR, using several metrics, including \(R^2\), MSE, MAE, MedAE, and RMSE. The experimental findings revealed that the developed model provides competitive performance compared to other models.
-
4.
Three different benchmark datasets are used for evaluation: Scada, SA, and WPG. The Scada dataset is used for wind power forecasting, the SA dataset for temperature forecasting, and the WPG dataset for both temperature and wind power forecasting. The datasets are normalized using Z-score normalization, which scales the data points to enable fair comparison. Each dataset was divided into three subsets: training (70%), validation (15%), and testing (15%).
-
5.
The proposed CNN-ResNet50-LSTM model can forecast climate changes up to 2030 using historical data, highlighting its promise for future prediction tasks.
Most existing studies focus on either CNNs, LSTMs, or ResNet-based models independently, whereas our study introduces a novel hybrid CNN-ResNet50-LSTM model that captures both spatial and temporal dependencies in climate data. In addition, most studies have focused only on forecasting wind speed, neglecting wind power and temperature. This study fills that gap by combining both climate factors, giving a more complete view of climate change forecasting. Unlike existing models that mostly deal with short-term forecasts, this study shows the ability to predict climate trends up to 2030, helping support long-term planning.
Related work
This section discusses several studies and methodologies that are used for wind prediction. Li et al.7 built a hybrid model of Empirical Wavelet Transform (EWT), Long Short-Term Memory (LSTM), Regularized Extreme Learning Machine (RELM), and Inverse Empirical Wavelet Transform (IEWT), which achieved stronger predictive performance than the single models used in other wind speed prediction studies. They were able to obtain a MAPE of 0.0252 for one-sample-ahead prediction. Zhu et al.19 employed a hybrid deep learning model to forecast short-term wind speeds based on the empirical wavelet transform, recurrent neural networks, and error correction. Their model performed better than the Autoregressive Integrated Moving Average (ARIMA) model and other models. Yu et al.10 combined an Elman Recurrent Neural Network (ERNN) and Wavelet Packet Decomposition (WPD) for wind speed forecasting. The proposed model achieved better results than other models. Liu et al.8 employed a discrete wavelet transform and a long short-term memory network to predict wind power. The discrete wavelet transform divides wind power data into sub-signals, and an independent LSTM is used for each sub-signal to predict wind power. Yang et al.11 proposed a novel numerical weather prediction (NWP) correction mechanism for wind power prediction. A double clustering method was developed to predict wind power across various scenarios by separating changing weather conditions. The forecasting model was applied to a wind farm in western Inner Mongolia, China, and improved the accuracy of wind power predictions, reducing RMSE and MAE by an average of 0.0593 and 0.0482, respectively. While the multiple clustering approach and NWP correction mechanism can be adapted to specific datasets, they might not be suitable for other wind power forecasting situations or regions12. Different machine learning and deep learning algorithms are used to improve the performance of wind prediction results. The bidirectional GRU (BiGRU) model is used for predicting wind power. Yu et al.13 developed a framework for wind prediction using several algorithms: a Random Forest (RF) to screen wind parameters, variational modal decomposition (VMD) optimized using the whale optimization algorithm (WOA), and a BiGRU enhanced by an attention mechanism. The attention mechanism improves BiGRU’s focus on essential information. The suggested model improves accuracy and reduces MAPE by 86.81% compared to BiGRU13. Table 1 shows various studies that represent the state-of-the-art prediction models for climate change.
Methodology
The escalating challenges of climate change and their impacts on the environment, economy, and society pose an urgent need for procedures and policies to mitigate its negative effects. The development of deep learning models for effective forecasting of climate change is required to provide an in-depth understanding of climate change behavior. This study enhances our understanding of climate change, with particular emphasis on temperature and wind power, by utilizing deep learning techniques to provide an effective and accurate forecasting model for temperature and wind power. The proposed CNN-ResNet50-LSTM model combines CNN, ResNet50, and LSTM to capture spatial and temporal features, which enables accurate forecasting of temperature and wind power. We evaluate the effectiveness of the CNN-ResNet50-LSTM model by comparing it with five individual regression models: DR, ETR, DTR, SGDR, and KRR. We used several key performance metrics to evaluate the models, such as MSE, MAE, MedAE, RMSE, and \(R^2\). The experimental results show that the CNN-ResNet50-LSTM model outperforms the five regression models in predicting both temperature and wind power. Figure 1 depicts a graphical abstract of the proposed methodology for forecasting temperature and wind power. The methodology framework consists of the following steps:
-
1.
Dataset features collection.
-
2.
Scaling data using Z-score normalization.
-
3.
Split the dataset into three distinct subsets: 70% for training, 15% for validation, and 15% for testing (a minimal split sketch follows this list).
-
4.
Train the CNN-ResNet50-LSTM model and the five regression models.
-
5.
Performance evaluation using several metrics: MSE, MAE, MedAE, RMSE, and \(R^2\).
-
6.
Evaluating the model’s performance in forecasting climate change of the two main factors: temperature and wind power.
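As a minimal illustration of the 70/15/15 split in step 3 above (a hedged sketch; the array sizes and random seed are illustrative, not those of the study):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative data: 1,000 samples with 9 features each.
X, y = np.random.rand(1000, 9), np.random.rand(1000)

# First carve off 30%, then split that portion half-and-half into validation and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=42)
print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```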
Datasets
In this study, we have used three datasets: Scada, SA, and WPG for forecasting temperature and wind power.
Wind turbine scada dataset
This dataset is used to predict wind power and is available at https://www.kaggle.com/datasets/berkerisen/wind-turbine-scada-dataset. The dataset consists of 50,530 records and 4 features, namely wind power, wind speed, theoretical power, and wind direction. The dataset features are described as follows:
-
Wind_Power (kW): “The power generated by the turbine at a given time”.
-
Wind_Speed (m/s): “The wind speed used for energy generation”.
-
Theoretical_Power (kWh): “The power generated by a turbine at a particular wind speed, as specified by the manufacturer”.
-
Wind Direction (°): “The direction of the wind at the turbine’s hub height”.
Table 2 describes the statistical overview of the scada dataset features.
Figure 2 shows the heatmap matrix that represents the correlation between features of the Scada dataset.
Saudi Arabia weather history (SA) dataset
This is an hourly weather dataset covering 2017 to 2019, used to predict temperature, and is available at https://www.kaggle.com/datasets/esraamadi/saudi-arabia-weather-history. The dataset consists of 249,023 records and 14 features. The dataset features are described as follows:
-
Year: “The year of the data collection”.
-
Month: “The month of the data collection”.
-
Day: “The day of the data collection”.
-
Hour: “The hour of the data collection”.
-
Minute: “The minute of the data collection”.
-
Temperature: “The average global temperature”.
-
Wind speed: “Wind speed determines the velocity of the wind”.
-
Visibility: “The distance at which an object or light may be clearly seen”.
-
Barometer pressure: “The pressure of air within the atmosphere of Earth”.
Table 3 describes the statistical features of the SA dataset.
Figure 3 shows the heatmap matrix that represents the correlation between features of SA dataset.
Wind power generation data for 4 locations (WPG) dataset
The dataset utilized in this study, referenced at27, constitutes a time series spanning from 2017 to 2021. It encompasses 9 distinct features and a total of 43,800 records. This extensive dataset provides a robust foundation for developing predictive models aimed at understanding and forecasting climate-related phenomena, particularly focusing on temperature and wind power scenarios. The dataset features are described as follows:
-
Temperature: “Temperature (degrees Fahrenheit) is measured at a height of 2 m above the ground”.
-
Relativehumidity_2m: “Relative humidity (percentage) at a height of 2 m”.
-
Dewpoint_2m: “Dew point in degrees Fahrenheit at a height of 2 m”.
-
Windspeed_10m: “Wind speed measured at a height of 10 m”.
-
Windspeed_100m: “Wind speed measured at a height of 100 m above the ground”.
-
Winddirection_10m: “Wind direction in degrees (0 to 360) at a height of 10 m above the ground”.
-
Winddirection_100m: “Wind direction in degrees (0 to 360) at a height of 100 m above the ground”.
-
Windgusts_10m: “Gusts of wind in meters per second at a height of 10 m above the ground”.
-
Wind Power: “The turbine output has been normalized, scaled so that it is between 0 and 1”.
Table 4 describes the statistical features of the WPG dataset.
Figure 4 shows the heatmap analysis of the dataset features, which visualizes the relationships between different attributes and helps identify patterns in exploratory data analysis. Figure 5 shows the plot of temperature against the years, while Fig. 6 shows the plot of wind power against the years.
Z-score normalization
Normalization is an important preprocessing technique for machine learning, especially for datasets with features that have varying scales28. Normalization transforms data values onto a common scale, ensuring that all features contribute equally to the model and preventing large-scale features from dominating the learning process29. The mathematical formula for Z-score normalization is given by Eq. 1:
where, in Eq. 1, \(Z\) is the transformed (normalized) data, \(x\) stands for the original input value from the dataset, and \(min(x)\) and \(max(x)\) are the minimum and maximum values of the provided input dataset.
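A minimal preprocessing sketch is shown below; it applies the Z-score (standardization) form named in the section title via scikit-learn’s StandardScaler, alongside min-max scaling for the 0–1 range described above (the feature values are illustrative only):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Illustrative feature matrix (e.g., wind speed, temperature, pressure columns).
X = np.array([[3.2, 25.0, 1012.0],
              [7.8, 31.5, 1008.0],
              [5.1, 28.2, 1010.0]])

z_scaled = StandardScaler().fit_transform(X)   # Z-score: (x - mean) / std per feature
mm_scaled = MinMaxScaler().fit_transform(X)    # Min-max: (x - min) / (max - min), 0-1 range
print(z_scaled.round(3))
print(mm_scaled.round(3))
```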
Convolutional neural network
A Convolutional Neural Network (CNN) is a deep learning model that excels at processing structured data, particularly for tasks that require spatial feature extraction. It is employed across a variety of fields and domains30,31. The CNN architecture learns spatial hierarchies of features, which it applies to new data, through backpropagation by employing multiple building blocks31 (a minimal code sketch follows this list):
-
1.
Convolutional Layers: These layers are the essential building blocks of a CNN. They use a series of learnable filters, also known as kernels, that slide over the input data to conduct convolution operations. Several equal-sized filters are applied, and each filter is utilized to recognize a certain pattern such as edges. The convolutional operation captures the spatial hierarchies within the data via the local connectivity pattern between neurons of adjacent layers, essentially reducing the parameter space compared to fully connected layers.
-
2.
Activation Function: The Rectified Linear Unit (ReLU) is the most commonly used activation function; it converts all negative values to zero and keeps all positive values. ReLU helps improve training by enabling faster convergence in deep convolutional neural networks.
-
3.
Pooling Layers: Pooling, also known as downsampling, is an important step in CNNs that reduces the spatial size of feature maps. Max pooling, the most common method, selects the maximum value from each patch. This reduces computational complexity, decreases the number of parameters, and lowers the dimensionality of the feature space.
-
4.
Fully Connected Layers: In a neural network, fully connected layers follow convolutional and pooling layers to execute higher-level reasoning. By connecting each neuron to the preceding layer, they combine localized features into a global representation that captures interactions between distant features.
-
5.
Output Layer: The output layer of a CNN is the final layer that generates the prediction output. The Softmax activation function is frequently used to convert raw scores into probabilities, making the outputs interpretable.
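The following hedged Keras sketch assembles these building blocks into a tiny network; the layer sizes and the three-class softmax output are illustrative and are not the configuration of the proposed model:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal CNN assembling the building blocks described above (illustrative sizes).
cnn = keras.Sequential([
    keras.Input(shape=(64, 1)),                         # structured 1-D input
    layers.Conv1D(16, kernel_size=3, padding="same"),   # convolutional layer: learnable filters
    layers.Activation("relu"),                          # activation: negatives set to zero
    layers.MaxPooling1D(pool_size=2),                   # pooling: downsample the feature maps
    layers.Flatten(),
    layers.Dense(32, activation="relu"),                # fully connected: global reasoning
    layers.Dense(3, activation="softmax"),              # output layer: class probabilities
])
cnn.summary()
```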
ResNet50
ResNet50 is a deep neural network with 50 layers, including convolutional, batch normalization, ReLU activation, and fully connected layers32. It uses residual blocks with shortcut connections to solve the vanishing gradient problem. These shortcut connections help the network learn more effectively, even with hundreds of layers33.
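As a hedged illustration, the pretrained ResNet50 backbone can be loaded as a frozen feature extractor with Keras; the input shape and pooling choice here are assumptions for illustration only:

```python
from tensorflow.keras.applications import ResNet50

# Load ResNet50 with ImageNet weights as a feature extractor; the classification head is dropped.
backbone = ResNet50(include_top=False, weights="imagenet",
                    input_shape=(64, 64, 3), pooling="avg")
backbone.trainable = False  # freeze the pretrained residual blocks for transfer learning
print(backbone.output_shape)  # (None, 2048) pooled feature vector per input
```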
Long short-term memory
Long Short-Term Memory (LSTM) is an extension of Recurrent Neural Networks (RNNs) that learns long-term dependencies in the input data and addresses the vanishing gradient problem34. The LSTM consists of three gates. The forget gate uses a sigmoid activation function to determine which data should be removed from the cell’s memory34; the current input and the hidden state are taken into consideration when making this choice, and its output ranges between 0 and 1, where values close to 0 indicate forgetting and values close to 1 indicate retaining information. The input gate decides whether new information should be stored in memory; it consists of a sigmoid layer that selects which information to update and a tanh activation that generates a vector of candidate values to be added to memory. Finally, the output gate applies a sigmoid function to decide which parts of the LSTM’s memory contribute to the output, after which a tanh function scales the selected values to a range between –1 and 135.
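In standard notation (with input \(x_t\), previous hidden state \(h_{t-1}\), cell state \(c_t\), sigmoid \(\sigma\), and element-wise product \(\odot\)), the gate computations described above can be summarized as:

\[
\begin{aligned}
f_t &= \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right), \quad \tilde{c}_t = \tanh\left(W_c\,[h_{t-1}, x_t] + b_c\right) && \text{(input gate and candidate values)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)}\\
o_t &= \sigma\left(W_o\,[h_{t-1}, x_t] + b_o\right), \quad h_t = o_t \odot \tanh(c_t) && \text{(output gate and hidden state)}
\end{aligned}
\]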
The proposed CNN-ResNet50-LSTM model
This study presents an enhanced deep learning framework for forecasting climate change based on two key factors: temperature and wind power. The proposed CNN-ResNet50-LSTM model integrates the strengths of three components: CNN for extracting detailed spatial patterns, ResNet50 for advanced deep feature learning, and LSTM for processing sequential data.
The architecture of the proposed CNN-ResNet50-LSTM model is shown in Fig. 7 and discussed in the following steps:
-
1.
Input Layer: Input vectors of length 9, shaped as (1, 9, 1), enabling multivariate input processing.
-
2.
Convolutional Layers: Three layers with ReLU activation: the first with 256 filters (kernel size 15), the second with 128 filters (kernel size 10), and the third with 32 filters (kernel size 7). These extract hierarchical features and reduce overfitting.
-
3.
Max Pooling Layer: 3×3 pooling reduces spatial dimensions and highlights key features.
-
4.
ResNet50 Layer: Processes feature maps using residual blocks for deep feature learning and stability. Pretrained weights improve generalization.
-
5.
LSTM Layer: 256 hidden units model temporal dependencies and sequence patterns.
-
6.
Fully Connected Layer: A dense layer with 32 ReLU-activated neurons combines features and prevents overfitting.
-
7.
Output Layer: A single neuron with linear activation outputs continuous forecasts (i.e., temperature or wind power).
The performance of the CNN-ResNet50-LSTM model is largely influenced by several hyperparameters, including the learning rate, batch size, and number of epochs. A batch size of 256 allows for efficient training with a balance between convergence speed and memory usage. A learning rate of 0.01 determines the step size for weight adjustments at each iteration. The Adam optimizer, which is effective thanks to its adaptive learning rates, is used to improve the efficiency of the training process. Finally, the model is trained over 50 epochs, giving it enough iterations to learn and fine-tune its parameters.
Figure 7 demonstrates the architecture of CNN-ResNet50-LSTM model. The pseudocode for CNN-ResNet50-LSTM model is demonstrated in Algorithm 1.
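A minimal Keras sketch of this pipeline is given below. It assumes `padding='same'` so that kernel sizes larger than the 9-step input remain valid, and it stands in for the ResNet50 stage with a small identity-style residual block, since the reshaping needed to feed 1-D feature maps into the pretrained 2-D ResNet50 backbone is not specified here; the synthetic data and training call are illustrative only:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters):
    # Identity-style residual block standing in for the ResNet50 stage (simplifying assumption).
    shortcut = x
    y = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

inputs = layers.Input(shape=(9, 1))                          # 9 input features as one sequence
x = layers.Conv1D(256, 15, padding="same", activation="relu")(inputs)
x = layers.Conv1D(128, 10, padding="same", activation="relu")(x)
x = layers.Conv1D(32, 7, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(pool_size=3)(x)                      # pooling stage (pool size 3)
x = residual_block(x, 32)                                    # deep residual feature learning
x = layers.LSTM(256)(x)                                      # temporal dependencies
x = layers.Dense(32, activation="relu")(x)                   # fully connected layer
outputs = layers.Dense(1, activation="linear")(x)            # continuous forecast

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse", metrics=["mae"])

# Illustrative training call on random data; replace with the normalized dataset splits.
X = np.random.rand(1000, 9, 1).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, validation_split=0.15, batch_size=256, epochs=50, verbose=0)
```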
Individual machine learning regression models
Kernel ridge regressor (KRR)
The Kernel Ridge Regressor (KRR) is a combination of ridge regression and kernel methods constructed to tackle regression problems that require capturing non-linear relationships in the data36. It uses kernel functions to map the input into a higher-dimensional feature space, allowing complex patterns that linear models cannot capture to be identified. This makes KRR particularly effective for applications in which nonlinearity is prevalent, enhancing the capacity of the model to understand intricate data structures.
The fundamental mechanism of the KRR framework is based on kernel functions, or the “kernel trick,” which enables KRR to operate in an infinite-dimensional space and makes modeling extremely complex relationships simple by implicitly transforming the input data into a high-dimensional space without explicitly providing the new coordinates of the data in that space. Polynomial kernels and the Radial Basis Function (RBF) are two frequently utilized functions in kernels37.
Decision tree regressor (DTR)
The Decision Tree Regressor (DTR) predicts continuous values by recursively splitting the data into branches that form a tree structure. Decision nodes create decisions with multiple branches, whereas leaf nodes produce outcomes with no additional branches38.
Extra trees regressor (ETR)
The Extra Trees Regressor (ETR) is an ensemble model that predicts continuous values. It generates multiple decision trees and aggregates results from multiple decision trees to output predictions39. Unlike other models, ETR randomly selects split points for each feature rather than finding the best splits. This helps reduce overfitting and improves generalization. ETR performs well with large datasets and is effective at modeling complex data relationships40.
Stochastic gradient descent regressor (SGDR)
Stochastic Gradient Descent Regressor (SGDR) is a machine learning model applied for regression tasks. It is called stochastic because it evaluates the gradients using randomly selected samples41. SGDR can be applied to various models, including logistic regression, linear regression, and neural networks42.
Dummy regressor (DR)
The Dummy Regressor is a simple model used to set a baseline for evaluating the performance of other regression models. It does not learn any patterns from the data. Instead, it makes predictions using predefined rules based on a chosen strategy43. One common strategy is to always predict the mean value of the target variable from the training data44.
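A hedged scikit-learn sketch comparing these five baselines on synthetic data follows; the hyperparameters and data are illustrative and are not the tuned settings reported in this study:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=2000, n_features=9, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

regressors = {
    "DR":   DummyRegressor(strategy="mean"),        # baseline: always predicts the training mean
    "KRR":  KernelRidge(kernel="rbf", alpha=1.0),   # kernel trick for non-linear relationships
    "DTR":  DecisionTreeRegressor(max_depth=8),     # decision nodes split, leaf nodes predict
    "ETR":  ExtraTreesRegressor(n_estimators=100, random_state=0),  # randomized split points
    "SGDR": SGDRegressor(max_iter=1000, tol=1e-3),  # stochastic gradient updates on random samples
}
for name, reg in regressors.items():
    reg.fit(X_train, y_train)
    print(f"{name}: R2 = {r2_score(y_test, reg.predict(X_test)):.3f}")
```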
Computational complexity
Table 5 demonstrates the computational complexity for the proposed CNN-ResNet50-LSTM model and the traditional regression models, DR, KRR, DTR, ETR, and SGDR.
Table 6 compares the training time and inference time for the proposed CNN-ResNet50-LSTM model and the traditional regression models, DR, KRR, DTR, ETR, and SGDR. As seen in Table 6, CNN-ResNet50-LSTM is the fastest model in both training and inference, making it ideal for real-time applications.
Evaluation metrics
Several evaluation metrics are used to measure how well the proposed CNN-ResNet50-LSTM model performs compared to other models. These metrics help us understand the accuracy and reliability of predictions in different ways. They include: Mean Squared Error (MSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), Root Mean Squared Error (RMSE), R-squared (R²), Root Mean Squared Relative Error (RMSRE), Mean Squared Relative Error (MSRE), Mean Absolute Relative Error (MARE), and Root Mean Squared Percentage Error (RMSPE). The definitions and explanations of these metrics are presented in Eqs. (2–10):
where, in Eqs. (2–10), n denotes the dataset sample size, and \(Act_i\) and \(pre_i\) represent the \(i^{th}\) actual and forecasted values, respectively.
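A hedged implementation sketch of these metrics is shown below; the first five use scikit-learn directly, while the relative and percentage variants assume their standard definitions (normalizing each error by the actual value), since Eqs. (7–10) are not reproduced here:

```python
import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             median_absolute_error, r2_score)

def report(act, pre):
    act, pre = np.asarray(act, float), np.asarray(pre, float)
    mse = mean_squared_error(act, pre)
    mae = mean_absolute_error(act, pre)
    medae = median_absolute_error(act, pre)
    rmse = np.sqrt(mse)
    r2 = r2_score(act, pre)
    rel = (act - pre) / act                  # relative errors (assumes no zero actual values)
    msre, mare = np.mean(rel ** 2), np.mean(np.abs(rel))
    rmsre = np.sqrt(msre)
    rmspe = 100.0 * rmsre                    # expressed as a percentage
    return dict(MSE=mse, MAE=mae, MedAE=medae, RMSE=rmse, R2=r2,
                MSRE=msre, MARE=mare, RMSRE=rmsre, RMSPE=rmspe)

print(report([24.1, 25.3, 23.8, 26.0], [24.4, 25.0, 23.5, 26.3]))
```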
Results and discussion
We conducted our experiments using Jupyter Notebook version 7.2.1. The experiments were carried out on a Windows 10 PC equipped with an Intel Core i7 processor, 32 GB of RAM, and an Nvidia RTX 2080 GPU. For model development and training, we used the deep learning frameworks Keras and TensorFlow. Additional Python libraries, namely NumPy, Pandas, Matplotlib, and Scikit-learn, were used for data preprocessing, analysis, and visualization. In this study, we introduced an improved deep learning model called CNN-ResNet50-LSTM to forecast climate change based on two key factors: temperature and wind power. We compared the performance of the proposed model with five other regression models: KRR, DTR, ETR, SGDR, and DR. The evaluation was based on several performance metrics, including Mean Squared Error (MSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), Root Mean Squared Error (RMSE), and R-squared (\(R^2\)). Experimental results show that the proposed CNN-ResNet50-LSTM model outperforms the five individual regression models for both factors: temperature and wind power. Table 7 shows the hyperparameter values for each regression model utilized in this study.
In this paper, we used the grid search approach to experiment with different values for key hyperparameters. Table 8 demonstrates the range of the hyperparameters tested during model tuning using the grid search approach.
Table 9 demonstrates the tuned hyperparameters for CNN-ResNet50-LSTM model using the grid search approach.
Table 10 shows the performance of the proposed CNN-ResNet50-LSTM model and five traditional machine learning regression models (ETR, SGDR, DTR, KRR, and DR) in forecasting temperature and wind power on the Scada, SA, and WPG datasets. We used the evaluation metrics outlined in Eqs. (2–6) to assess their performance. The proposed CNN-ResNet50-LSTM model outperformed the other models in forecasting temperature and wind power across the three datasets. For the Scada dataset, the proposed CNN-ResNet50-LSTM model achieves the lowest MSE (0.0302), MAE (0.1005), MedAE (0.0876), and RMSE (0.1737), and the highest \(R^2\) (98.84%). In contrast, the DR model performed the worst, with a higher MSE (0.1062), MAE (0.8142), MedAE (0.4881), and RMSE (0.3258), and the lowest \(R^2\) (92.05%). For the SA dataset, the CNN-ResNet50-LSTM model achieved the best results with the lowest MSE (0.0399), MAE (0.1099), MedAE (0.0983), and RMSE (0.1997), and the highest \(R^2\) (99.01%). The DR and KRR models performed the worst: DR recorded an MSE of 0.1251, MAE of 0.7894, MedAE of 0.7551, RMSE of 0.3536, and the lowest \(R^2\) (91.03%), while KRR recorded an MSE of 0.1036, MAE of 0.6142, MedAE of 0.5092, RMSE of 0.3218, and an \(R^2\) of 92.15%. Finally, in the WPG dataset, the proposed CNN-ResNet50-LSTM model came out on top in forecasting temperature, achieving the best results with the lowest MSE (0.0087), MAE (0.0824), MedAE (0.0713), and RMSE (0.0932), and the highest \(R^2\) (98.58%), whereas the DR model performed the worst, with a higher MSE (0.2684), MAE (1.8629), MedAE (1.0591), and RMSE (0.5180), and the lowest \(R^2\) (89.83%). Likewise, the proposed CNN-ResNet50-LSTM model achieved the best results in forecasting wind power in the WPG dataset, with the lowest MSE (0.0103), MAE (0.0814), MedAE (0.0686), and RMSE (0.1015), and the highest \(R^2\) (98.53%), while the DR model performed the worst, with a higher MSE (0.3681), MAE (0.9494), MedAE (0.8692), and RMSE (0.6067), and the lowest \(R^2\) (88.94%). These results show that the proposed CNN-ResNet50-LSTM model forecasts both temperature and wind power far more accurately than the other five traditional regression models.
Table 11 shows the performance of the proposed CNN-ResNet50-LSTM model and five traditional machine learning regression models (ETR, SGDR, DTR, KRR and DR) in forecasting temperature and wind power for Scada, SA, and WPG datasets. We used evaluation metrics outlined in Eqs. (7–10) to assess their performance. The proposed CNN-ResNet50-LSTM model outperformed other models in forecasting temperature and wind power across three datasets: Scada, SA, and WPG. The proposed CNN-ResNet50-LSTM model achieves the lowest MSRE, MARE, RMSRE, and the highest RMSPE for the three datasets. On the other hand, the DR model performed the worst, with the highest MSRE, MARE, RMSRE, and the lowest RMSPE for the three datasets.
Table 12 demonstrates the performance of the proposed CNN-ResNet50-LSTM model and five advanced deep learning models (CNN, LSTM, GRU, ResNet50 and VGG19) in forecasting temperature and wind power for Scada, SA, and WPG datasets. We used evaluation metrics, namely, MSE, MAE, MedAE, and R2 to assess their performance. The proposed CNN-ResNet50-LSTM model outperformed other models in forecasting temperature and wind power across three datasets: Scada, SA, and WPG. The proposed CNN-ResNet50-LSTM model achieves the lowest MSE, MAE, MedAE, and the highest R2 for the three datasets. On the other hand, the VGG19 model performed the worst, with the highest MSE, MAE, MedAE, and the lowest R2 for the three datasets.
The R² convergence curves produced by the model are shown in Figs. 8, 9, 10 and 11, which provide important information about the training behavior and performance of the proposed CNN-ResNet50-LSTM model. They strengthen the study findings and allow for a more thorough understanding of the effectiveness of the CNN-ResNet50-LSTM model in addressing climate change, particularly temperature and wind power. Figures 12, 13, 14 and 15 show the training and validation curves for MSE and MAE against the number of epochs using the proposed CNN-ResNet50-LSTM model for wind power and temperature forecasting in the Scada, SA, and WPG datasets. The curves indicate that the model learns and generalizes well from the training data. Figures 16, 17, 18 and 19 show a comparison of the actual temperature and wind power versus the forecasting performance using the CNN-ResNet50-LSTM model for the Scada, SA, and WPG datasets. The close alignment of these values shows that the model performs reliably on new, unseen data, providing accurate forecasts.
Weather patterns are changing, and global temperatures and sea levels will continue to rise. Deep learning models can help anticipate the future of climate change. These mathematical climate change models employ historical weather data to improve the accuracy of weather forecasts. The proposed CNN-ResNet50-LSTM model aims to provide accurate future forecasting of temperature and wind power. Figure 20 shows actual temperature data (in blue) from 2017 to 2021, and the future temperature forecasting (in orange) from 2021 to 2030 using the proposed CNN-ResNet50-LSTM model. The blue section displays historical temperature patterns with clear seasonal variations. As we move into the orange section, the model forecasts future temperatures, capturing both short-term and long-term trends. Projections indicate a gradual increase in temperatures over the years, indicating a possible warming trend. This visualization highlights how well the proposed CNN-ResNet50-LSTM model transitions from historical data to future forecasts, demonstrating its accuracy in forecasting temperature trends through 2030. Figure 21 shows actual wind power data (in blue) from 2017 to 2021, and the future wind power forecasting (in orange) from 2021 to 2030 using the proposed CNN-ResNet50-LSTM model. The blue section displays historical wind power patterns, which remain relatively stable and high over the four-year period. As we move into the orange section, the model’s forecasts show significant fluctuations in future wind power. These forecasts capture both short-term variations and long-term trends, indicating periods of both high and low wind power availability. This visualization highlights the proposed CNN-ResNet50-LSTM model’s ability to transition smoothly from historical data to future forecasts, demonstrating its reliability in forecasting wind power trends up to the year 2030.
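One hedged way to produce such multi-year projections is recursive (iterative) forecasting, where each one-step prediction is fed back as input for the next step; the window length, sampling interval, and `trained_model` below are illustrative assumptions rather than the exact rollout procedure used for Figs. 20 and 21:

```python
import numpy as np

def recursive_forecast(model, history, n_steps, window=9):
    """Roll a one-step forecaster forward: each prediction is appended and reused as input."""
    series = list(np.asarray(history, dtype="float32")[-window:])
    preds = []
    for _ in range(n_steps):
        x = np.array(series[-window:], dtype="float32").reshape(1, window, 1)
        next_val = float(model.predict(x, verbose=0).ravel()[0])
        preds.append(next_val)
        series.append(next_val)
    return np.array(preds)

# Example rollout: roughly nine more years of monthly values (12 * 9 = 108 steps).
# future = recursive_forecast(trained_model, monthly_history, n_steps=108)
```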
Forecasting the future of climate change with deep learning algorithms has major real-world implications. Here are some significant benefits of accurate future prediction:
-
Accurate forecasting of global climate change can inform policymakers about potential outcomes and environmental implications, leading to more effective policies and long-term strategies.
-
Climate change can lead to increased plant disease, causing significant negative impacts to plant species, production of food, and ecosystem sustainability45. Using deep learning algorithms helps in advance planning and developing strategies for disease management.
-
Accurate forecasting of climate change factors such as temperature and wind power can predict potential extreme weather occurrences. For example, deep learning algorithms have been effectively used for flood prediction based on temperature and rainfall intensity46.
-
Understanding associations between different climate factors and natural resources can assist in natural resource safeguarding and enhancing agriculture productivity47. Deep learning models can help optimize the allocation of natural resources such as water sources and food production and inform sustainable management.
-
Accurate forecasting of climate change-related factors can lead to better economic planning and decision-making, fostering a more sustainable economy. Deep learning approaches have been found to be successful in forecasting renewable energy generation and electricity demand48.
There is a considerable need to enhance the literature on deep learning modeling for forecasting climate change, as well as to compare the performance of forecasting deep learning models. Furthermore, abundant clean data is required for optimal outcomes of deep learning models. In addition, the integration of fields such as the Internet of Things (IoT) and machine learning (ML) techniques is crucial to advance the domain of climate change forecasting. Figures 22, 23 and 24 demonstrate the feature importance for the SCADA, SA, and WPG datasets.
Sensitivity analysis
Table 13 presents the results of a sensitivity analysis conducted on the Scada dataset to evaluate how changes in key input features (Wind Speed, Wind Direction, and Theoretical Power) affect the Root Mean Squared Error (RMSE) and coefficient of determination (R²) for wind power forecasting. The Feature column represents the input variable being perturbed. The Perturbation column represents the percentage change applied to the feature values (+5%, −5%, +10%, −10%). The RMSE Change (%) column represents the percentage change in RMSE due to the perturbation; a higher RMSE indicates a decrease in model accuracy. The R² Change (%) column represents the percentage change in R² due to the perturbation; a lower R² means the model explains less variance in wind power predictions. Wind Speed has the strongest impact on wind power forecasting: a +10% increase in wind speed results in a 4.3% increase in RMSE (higher error) and a 2.5% drop in R², meaning the model struggles more with prediction accuracy, while a −10% decrease in wind speed improves R² by 2.1%, meaning predictions become slightly more accurate. Wind Direction has a moderate effect: a +10% change in wind direction increases RMSE by 3.2%, showing that directional changes impact power generation, and R² drops by 1.9%, although the effect is smaller than for wind speed. Theoretical Power has a smaller but noticeable impact: a +5% increase in theoretical power increases RMSE by 1.8%, while a −5% decrease slightly improves R² by 0.9%; the changes in RMSE and R² are relatively lower compared to Wind Speed.
Table 14 presents the results of a sensitivity analysis on the Saudi Arabia Weather History (SA) dataset, evaluating how perturbations in key features (Temperature, Wind Speed, and Barometer Pressure) affect the Root Mean Squared Error (RMSE) and coefficient of determination (R²) in temperature forecasting. Temperature is the most influential feature: a +10% increase in temperature leads to a 3.8% increase in RMSE (higher prediction error) and a 2.2% decrease in R², meaning the model struggles more, while a −10% decrease in temperature improves R² by 1.9%, suggesting that the model is highly dependent on temperature stability. Wind Speed has a strong effect on temperature forecasting: a +10% increase in wind speed increases RMSE by 4.5%, showing that wind fluctuations significantly impact temperature predictions, and R² drops by 2.6%, meaning the model becomes less reliable; a −10% decrease in wind speed improves R² by 2.3%, confirming that wind dynamics play an important role in temperature changes. Barometer Pressure has the least effect: a +5% increase in barometer pressure only increases RMSE by 1.1%, while R² decreases by 0.6%, meaning barometer pressure has a minimal impact on temperature predictions; a −5% decrease in barometer pressure slightly improves R² by 0.5%, reinforcing that pressure variations do not significantly influence temperature forecasting.
Table 15 presents the results of a sensitivity analysis on the Wind Power Generation (WPG) dataset, evaluating how perturbations in key features (Temperature, Wind Speed at 10 m and 100 m, Relative Humidity, and Wind Direction) affect the Root Mean Squared Error (RMSE) and coefficient of determination (R²) in temperature and wind power forecasting. Temperature is a major factor in both tasks: a +10% increase in temperature results in a 4.1% increase in RMSE (higher prediction error) and a 2.6% decrease in R², meaning the model struggles more, while a −10% decrease in temperature improves R² by 2.2%, showing that small temperature fluctuations significantly impact prediction accuracy. Wind Speed (10 m) has a strong effect, but Wind Speed (100 m) has an even greater impact: a +10% increase in wind speed at 10 m results in a 3.8% increase in RMSE and a 2.3% drop in R², whereas a +10% increase in wind speed at 100 m has a more pronounced effect, increasing RMSE by 4.7% and reducing R² by 2.9%. This confirms that higher-altitude wind speeds (100 m) have a stronger influence on wind power generation than lower-altitude wind speeds (10 m), which is expected because wind turbines operate at higher elevations. Relative Humidity has a moderate effect on forecasting accuracy: a +10% increase in relative humidity increases RMSE by 2.6% and decreases R² by 1.8%, while a −10% decrease improves R² by 1.4%; although humidity affects temperature and atmospheric energy transfer, its effect is not as strong as that of temperature and wind speed. Wind Direction affects predictions to a lesser extent than Wind Speed: a +10% change in wind direction increases RMSE by 3.1% and decreases R² by 1.9%, and a −10% change improves R² by 1.6%. Since wind turbines depend on both wind speed and direction, directional changes contribute to power fluctuations, but not as significantly as wind speed itself.
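A hedged sketch of the perturbation procedure behind these tables is given below: a single feature column is scaled by ±5% or ±10% and the resulting percentage changes in RMSE and R² are reported (the `model`, test arrays, and feature index are placeholders):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def sensitivity(model, X_test, y_test, feature_idx, perturbations=(0.05, -0.05, 0.10, -0.10)):
    """Report the % change in RMSE and R2 when one feature is perturbed by the given fractions."""
    base_pred = model.predict(X_test)
    base_rmse = np.sqrt(mean_squared_error(y_test, base_pred))
    base_r2 = r2_score(y_test, base_pred)
    rows = []
    for p in perturbations:
        X_pert = X_test.copy()
        X_pert[:, feature_idx] *= (1.0 + p)          # scale the chosen feature column
        pred = model.predict(X_pert)
        rmse = np.sqrt(mean_squared_error(y_test, pred))
        r2 = r2_score(y_test, pred)
        rows.append((p, 100 * (rmse - base_rmse) / base_rmse, 100 * (r2 - base_r2) / base_r2))
    return rows  # (perturbation, RMSE change %, R2 change %)
```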
Limitation and future work
The accuracy of climate forecasting models heavily depends on the quality and availability of historical climate datasets. Inconsistent, missing, or noisy data can affect the reliability of predictions. The proposed CNN-ResNet50-LSTM model integrates three deep learning architectures, requiring substantial computational resources for training and inference, which may limit its accessibility for researchers with limited hardware. Climate change is influenced by numerous unpredictable factors, making it challenging to model long-term trends accurately. External factors such as natural disasters or anthropogenic activities can cause deviations from historical patterns. The model’s forecasting capability is based on past climate trends, which may not always capture sudden shifts or extreme weather events. The datasets used for training are limited to specific locations and time frames, which may introduce biases that affect the generalization of the model to different climatic regions. While the model performs well on the selected datasets, its effectiveness in entirely different geographic regions with varying climatic conditions remains to be fully tested.

Future research will explore the integration of additional environmental factors such as precipitation, humidity, and atmospheric pressure to enhance prediction accuracy. Implementing real-time data streams will allow the model to adapt dynamically to changing climate conditions, improving forecasting accuracy. Extending the study to a more diverse set of global datasets will help assess the model’s generalization and adaptability to different climate zones. Exploring model optimization techniques such as pruning, quantization, or cloud-based solutions can help reduce computational overhead while maintaining high prediction accuracy. Future work may incorporate explainable AI (XAI) techniques to improve model interpretability, ensuring that climate forecasts are not only accurate but also transparent for policymakers and scientists.
Scalability of the proposed CNN-ResNet50-LSTM model
The proposed CNN-ResNet50-LSTM model is designed to handle large-scale climate datasets efficiently. The combination of CNN, ResNet50, and LSTM ensures that the model can process large climate datasets while capturing both spatial and temporal dependencies. This structure allows it to scale effectively when applied to higher-dimensional climate data. The model demonstrated low training and inference times (Table 6), making it feasible for real-time forecasting applications. With parallel processing on GPUs, it can be efficiently deployed in operational climate monitoring systems. The model can be trained on additional datasets from different regions without significant modifications, and the use of Z-score normalization ensures that the model generalizes well to datasets with different scales. To evaluate the robustness of our model, this study utilized three distinct datasets from different sources, covering varying climatic conditions. The CNN-ResNet50-LSTM model performed consistently well across all datasets, suggesting its ability to generalize to diverse climate scenarios. Since climate conditions vary across regions, retraining the model with location-specific historical data would enhance forecasting accuracy. The model can be fine-tuned with transfer learning to adapt to new environments efficiently. While temperature and wind power are universal climate variables, additional region-specific features can be incorporated to improve forecasting accuracy in specific locations. The model can be integrated with Internet of Things (IoT) networks and weather stations to continuously update forecasts based on real-time sensor data, making it suitable for dynamic climate monitoring.
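A minimal sketch of such region-specific fine-tuning via transfer learning is shown below, assuming `model` is an already-trained Keras network (such as the architecture sketched earlier) and that the regional data have been normalized in the same way:

```python
from tensorflow.keras.optimizers import Adam

def fine_tune_for_region(model, X_region, y_region, trainable_tail=3, epochs=10):
    """Freeze all but the last few layers of a trained model and retrain them on regional data."""
    for layer in model.layers[:-trainable_tail]:
        layer.trainable = False               # keep the pretrained spatial/temporal features
    model.compile(optimizer=Adam(learning_rate=1e-3), loss="mse", metrics=["mae"])
    return model.fit(X_region, y_region, validation_split=0.15,
                     batch_size=256, epochs=epochs, verbose=0)
```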
Conclusion
Climate change and fluctuations in weather and temperature have emerged as one of the most pressing environmental challenges facing the world today. These changes include rising temperatures, the increased occurrence of natural catastrophes such as storms and floods, and changing patterns of rainfall. All these factors pose a significant threat to daily life, economies, and ecosystems. The proposed CNN-ResNet50-LSTM model integrates the CNN, ResNet50, and LSTM deep learning models to provide accurate temperature and wind power forecasts, with the goal of understanding the future behavior of climate change in terms of temperature and wind power, as well as enhancing deep learning-based approaches for climate change forecasting. Experimental studies were performed on three different datasets for temperature and wind power: Scada, SA, and WPG. The performance of the proposed model was verified using several metrics, including \(R^2\), MSE, MAE, MedAE, and RMSE. According to the findings, the proposed deep learning framework performed better than five traditional regression models (DR, ETR, DTR, SGDR, and KRR) across the three datasets, with \(R^2\) scores of 98.84% for wind power forecasting in the Scada dataset, 99.01% for temperature forecasting in the SA dataset, and 98.58% for temperature forecasting and 98.53% for wind power forecasting in the WPG dataset.
Data availability
References
Shouman, E. R. M. Wind power forecasting models. In Wind Turbines-Advances and Challenges in Design, Manufacture and Operation. IntechOpen. (2022).
Ding, Y., Ye, X. W., Guo, Y., Zhang, R. & Ma, Z. Probabilistic method for wind speed prediction and statistics distribution inference based on SHM data-driven. Probab. Eng. Mech. 73, 103475 (2023).
Suo, L. et al. Wind speed prediction by a swarm intelligence based deep learning model via signal decomposition and parameter optimization using improved chimp optimization algorithm. Energy 276, 127526 (2023).
Zheng, Y. et al. New ridge regression, artificial neural networks and support vector machine for wind speed prediction. Adv. Eng. Softw. 179, 103426 (2023).
Zhang, Z., Wang, J., Wei, D., Luo, T. & Xia, Y. A novel ensemble system for short-term wind speed forecasting based on Two-stage Attention-Based recurrent neural network. Renew. Energy. 204, 11–23 (2023).
Zhang, Y. M. & Wang, H. Multi-head attention-based probabilistic CNN-BiLSTM for day-ahead wind speed forecasting. Energy 278, 127865 (2023).
Li, Y., Wu, H. & Liu, H. Multi-step wind speed forecasting using EWT decomposition, LSTM principal computing, RELM subordinate computing and IEWT reconstruction. Energy. Conv. Manag. 167, 203–219 (2018).
Liu, Y. et al. Wind power short-term prediction based on LSTM and discrete wavelet transform. Appl. Sci. 9 (6), 1108 (2019).
De Giorgi, M. G., Ficarella, A. & Tarantino, M. Assessment of the benefits of numerical weather predictions in wind power forecasting based on statistical methods. Energy 36 (7), 3968–3978 (2011).
Yu, C., Li, Y., Xiang, H. & Zhang, M. Data mining-assisted short-term wind speed forecasting by wavelet packet decomposition and Elman neural network. J. Wind Eng. Ind. Aerodyn. 175, 136–143 (2018).
Yang, M., Guo, Y. & Huang, Y. Wind power ultra-short-term prediction method based on NWP wind speed correction and double clustering division of transitional weather process. Energy 282, 128947 (2023).
Habib, M. A. & Hossain, M. J. Revolutionizing wind power Prediction—The future of energy forecasting with advanced deep learning and strategic feature engineering. Energies 17 (5), 1215 (2024).
Yu, M. et al. A novel framework for ultra-short-term interval wind power prediction based on RF-WOA-VMD and BiGRU optimized by the attention mechanism. Energy 269, 126738 (2023).
Kariniotakis, G. et al. What performance can be expected by short-term wind power prediction models depending on site characteristics? In European wind energy conference EWEC 2004. (2004), November.
Barbounis, T. G., Theocharis, J. B., Alexiadis, M. C. & Dokopoulos, P. S. Long-term wind speed and power forecasting using local recurrent neural network models. IEEE Trans. Energy Convers. 21 (1), 273–284 (2006).
Chen, N., Qian, Z., Nabney, I. T. & Meng, X. Wind power forecasts using Gaussian processes and numerical weather prediction. IEEE Trans. Power Syst. 29 (2), 656–665 (2013).
Elshewey, A. M. et al. A Novel WD-SARIMAX model for temperature forecasting using daily delhi climate dataset. Sustainability, 15(1), p.757. (2022).
Shams, M. Y. et al. A machine learning-based model for predicting temperature under the effects of climate change. In The Power of Data: Driving Climate Change with Data Science and Artificial Intelligence Innovations (61–81). Cham: Springer Nature Switzerland. (2023).
Zhu, C. & Zhu, L. Wind speed Short-Term prediction based on empirical wavelet transform, recurrent neural network and error correction. J. Shanghai Jiaotong Univ. (Science). 29 (2), 297–308 (2024).
Oo, Z. Z. & Phyu, S. Time series prediction based on Facebook prophet: a case study, temperature forecasting in myintkyina. Int. J. Appl. Math. Electron. Computers. 8 (4), 263–267 (2020).
Zhang, H. et al. Temperature forecasting correction based on operational GRAPES-3km model using machine learning methods. Atmosphere, 13(2), p.362. (2022).
Nitsure, S. P., Londhe, S. N. & Khare, K. C. Prediction of sea water levels using wind information and soft computing techniques. Appl. Ocean Res. 47, 344–351 (2014).
Alawadi, S. et al. A comparison of machine learning algorithms for forecasting indoor temperature in smart buildings. Energ. Syst., pp.1–17. (2020).
Liyew, C. M. & Melese, H. A. Machine learning techniques to predict daily rainfall amount. J. Big Data. 8, 1–11 (2021).
Nyasulu, C., Diattara, A., Traore, A., Deme, A. & Ba, C. Towards Resilient Agriculture to Hostile Climate Change in the Sahel Region: A Case Study of Machine Learning-Based Weather Prediction in Senegal. Agriculture, 12(9), p.1473. (2022).
Tarek, Z. et al. Wind power prediction based on machine learning and deep learning models. Computers Mater. Continua. 75 (1), 715–732 (2023).
https://www.kaggle.com/datasets/mubashirrahim/wind-power-generation-data-forecasting
Ali, P. J. M., Faraj, R. H., Koya, E., Ali, P. J. M., & Faraj, R. H. (2014). Data normalization and standardization: a technical report. Mach. Learn. Tech. Rep. 1(1), 1–6.
Cabello-Solorzano, K., Ortigosa de Araujo, I., Peña, M., Correia, L., & J. Tallón-Ballesteros, A. (2023, August). The impact of data normalization on the accuracy of machine learning algorithms: a comparative analysis. In International conference on soft computing models in industrial and environmental applications (pp. 344-353). Cham: Springer Nature Switzerland.
Alzakari, S. A., Alhussan, A. A., Qenawy, A. S. T. & Elshewey, A. M. Early Detection of Potato Disease Using an Enhanced Convolutional Neural Network-Long Short-Term Memory Deep Learning Model. Potato Research, pp.1–19. (2024).
Li, Z., Liu, F., Yang, W., Peng, S. & Zhou, J. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans. Neural Networks Learn. Syst. 33 (12), 6999–7019 (2021).
Theckedath, D. & Sedamkar, R. R. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Computer Science, 1(2), p.79. (2020).
Mascarenhas, S. & Agarwal, M. November. A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for Image Classification. In 2021 International conference on disruptive technologies for multi-disciplinary research and applications (CENTCON) (Vol. 1, pp. 96–99). IEEE. (2021).
Siami-Namini, S., Tavakoli, N., & Namin, A. S. The performance of LSTM and BiLSTM in forecasting time series. In 2019 IEEE International conference on big data (Big Data) (pp. 3285–3292). (IEEE, 2019).
Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D: Nonlinear Phenom. 404, 132306 (2020).
Vovk, V. Kernel ridge regression. In Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik (105–116). Berlin, Heidelberg: Springer Berlin Heidelberg. (2013).
Exterkate, P. Model selection in kernel ridge regression. Comput. Stat. Data Anal. 68, 1–16 (2013).
Kushwah, J. S. et al. Comparative study of regressor and classifier with decision tree using modern tools. Materials Today: Proceedings, 56, pp.3571–3576. (2022).
Mastelini, S.M., Nakano, F.K., Vens, C. and de Leon Ferreira,A.C.P. (2022). Online extra trees regressor. IEEE Trans. Neural Netw. Learn. Syst. 34(10), 6755–6767.
Jafari, S. & Byun, Y. C. Efficient state of charge Estimation in electric vehicles batteries based on the extra tree regressor: A data-driven approach. Heliyon 10 (4), e25949 (2024).
Jain, P., Kakade, S. M., Kidambi, R., Netrapalli, P. & Sidford, A. Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification. J. Mach. Learn. Res. 18 (223), 1–42 (2018).
Tian, Y., Zhang, Y. & Zhang, H. Recent advances in stochastic gradient descent in deep learning. Mathematics, 11(3), p.682. (2023).
Modupe, O. D. A dummy variable regression on students’ academic performance. Transnatl. J. Sci. Technol. 2 (6), 47–54 (2012).
Mardhiyyah, Y. S., Rasyidi, M. A. & Hidayah, L. Factors affecting crowdfunding investor number in agricultural projects: the dummy regression model. Jurnal Manajemen Agribisnis. 17 (1), 14–14 (2020).
Singh, B. K., Delgado-Baquerizo, M., Egidi, E., Guirado, E. & Leach, J. E. Hongwei Liu, and Pankaj trivedi. Climate change impacts on plant pathogens, food security and paths forward. Nat. Rev. Microbiol. 21 (10), 640–656 (2023).
Sankaranarayanan, S. et al. Flood prediction based on weather parameters using deep learning. J. Water Clim. Change. 11 (4), 1766–1783 (2020).
Zubaidi, S. L. et al. Ahmed, and Khalid Hashim. A method for predicting long-term municipal water demands under climate change. Water Resour. Manage. 34, 1265–1279 (2020).
Nam, K. J., Hwangbo, S., ChangKyoo & Yoo A deep learning-based forecasting model for renewable energy scenarios to guide sustainable energy policy: A case study of Korea. Renew. Sustain. Energy Rev. 122, 109725 (2020).
Acknowledgements
Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Funding
Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Author information
Authors and Affiliations
Contributions
All authors have equally contributed.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Elshewey, A.M., Jamjoom, M.M. & Alkhammash, E.H. An enhanced CNN with ResNet50 and LSTM deep learning forecasting model for climate change decision making. Sci Rep 15, 14372 (2025). https://doi.org/10.1038/s41598-025-97401-9
DOI: https://doi.org/10.1038/s41598-025-97401-9