The present study proposes and thoroughly examines a novel approach for the effective hybridization of solar and wind sources based on hydrogen storage to increase grid stability and lower peak load. The suggested smart system comprises a parabolic trough collector, a vanadium chloride thermochemical cycle, a hydrogen storage tank, alkaline fuel cells, thermal energy storage, and an absorption chiller. Additionally, the system includes a wind turbine to power the electrolyzer unit and minimize the size of the solar system. A rule-based control technique establishes an intelligent two-way connection with energy networks to compensate for energy expenses throughout the year. The transient system simulation (TRNSYS) tool and the Engineering Equation Solver program are used to conduct a comprehensive techno-economic-environmental assessment of a Swedish residential building. A four-objective optimization in MATLAB, based on the grey wolf algorithm coupled with an artificial neural network, is used to determine the best trade-off between the indicators. According to the results, the primary energy saving, carbon dioxide reduction rate, overall cost, and purchased energy are 80.6 %, 219 %, 14.8 $/h, and 24.9 MWh at optimal conditions. The scatter distribution indicates that fuel cell voltage and collector length should be kept at the lower end of their domains and that electrode area is an ineffective parameter. The suggested renewable-driven smart system can meet the building's needs for 70 % of the year and sell excess production to the local energy network, making it a feasible alternative. Solar energy is far less effective than wind energy at producing hydrogen for storage over the winter, demonstrating the benefit of combining renewable energy sources to fulfill demand. By lowering CO2 emissions by 61,758 kg, the recommended smart renewable system is predicted to save 7719 $ in environmental costs, equivalent to 6.9 ha of new reforestation.
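The grey wolf algorithm mentioned above can be sketched in a minimal single-objective form. The paper couples it with an ANN surrogate over four objectives; the implementation below is an illustrative textbook version with hypothetical names, not the authors' code:

```python
import numpy as np

def grey_wolf_optimize(cost, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimize `cost` over box constraints `bounds` with a basic grey wolf optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([cost(w) for w in wolves])
        # The three best wolves (alpha, beta, delta) guide the pack.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / n_iter)  # exploration factor decays from 2 to 0
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                candidates.append(leader - A * D)
            # New position: average of the three leader-guided moves.
            wolves[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
    fitness = np.array([cost(w) for w in wolves])
    best = wolves[np.argmin(fitness)]
    return best, float(cost(best))
```

A multi-objective variant would replace the scalar `cost` with a Pareto-ranking step over the four indicators.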
This article presents and thoroughly examines an innovative, practical, cost-effective, and energy-efficient smart heating, ventilation, and air conditioning (HVAC) system. The fundamental component of this concept is a state-of-the-art method called Deep Green Cooling, which uses deep drilling to exploit the ground's heating and cooling potential directly, without machinery or heat pumps. This method satisfies demand with minimal energy use, environmental impact, and operational cost. To effectively oversee and regulate energy production, storage, and utilization, the system includes an intelligent control unit with multiple smart controllers and valves. A high-temperature cooling resource with a supply temperature of 16 °C facilitates renewable energy deployment and improves compatibility with the intelligent automation unit. The technical, environmental, and financial aspects of the suggested smart office building system in the southern region of Uppsala, Sweden, are evaluated using TRNSYS software. According to the results, boreholes provide more than 28.5 % of the building's energy requirements by exploiting the ground's ability to deliver affordable, dependable seasonal thermal energy. The district heating network satisfies the remaining demand of 787.2 MWh, highlighting the benefit of combining conventional and renewable energy sources for increased supply security and dependability. The borehole thermal energy storage system meets the building's entire cooling need, underscoring the importance of high-temperature cooling systems. The most expensive part of the system is the borehole thermal energy storage, which accounts for over half of the total investment. Thanks to the elimination of machinery and heat pumps, the system has a reasonable payback period of ten years, proving its long-term profitability and cost-effectiveness.
With 3138 MWh of ground-source heating and cooling, the system saves 17,962 USD by reducing CO2 emissions by about 143.7 t, sufficient to grow 16.3 ha of trees throughout the payback period.
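As a quick consistency check on the stated figures, the avoided-emission cost rate implied by the abstract is roughly 125 USD per tonne of CO2 (the helper name below is illustrative):

```python
def env_cost_saving(co2_tonnes, usd_per_tonne):
    """Environmental cost avoided for a given CO2 reduction (simple product)."""
    return co2_tonnes * usd_per_tonne

# Figures quoted in the abstract: 143.7 t of CO2 avoided, 17,962 USD saved,
# implying an environmental cost rate of about 125 USD per tonne.
implied_rate = 17962 / 143.7
```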
Fluctuations in electricity prices create arbitrage opportunities for compressed CO2 energy storage (CCES) systems. However, previous studies often neglected the dynamic characteristics of CCES systems, leading to inaccurate assessments. This paper addresses this gap by evaluating CCES arbitrage while accounting for these dynamic characteristics. We introduce a novel indicator, the state of charge (SOC), into a mixed-integer linear programming (MILP) optimization model to capture the dynamics. Using real electricity prices, the model optimizes the CCES operation strategy for maximum profit. The results demonstrate that a CCES system with a 267 MWh capacity could achieve a total income of 22.5 MEUR in 2022, with a net present value (NPV) of 258.1 MEUR over 35 years, a payback time of 2 years, and an average round-trip efficiency (ARTE) of 77.0 %. Sensitivity analysis reveals that the sizes of the compressor, the expander, and the high-pressure gas tank significantly affect the arbitrage potential. In contrast, the steady-state model suggests that the CCES system could yield a higher NPV of 573.7 MEUR, a shorter payback time of 1 year, and a higher ARTE of 87.0 %. This underscores the importance of integrating dynamic characteristics into the design and arbitrage assessment of CCES systems.
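The price-arbitrage scheduling idea with an SOC state can be illustrated with a small linear program. This is a sketch only: it is the LP relaxation (the binary charge/discharge mode variables of a true MILP are dropped), and all capacities, efficiencies, and prices are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def optimize_arbitrage(prices, e_max=1.0, p_max=0.5, eta_c=0.9, eta_d=0.9):
    """Schedule hourly charge/discharge against prices, tracking stored energy.

    Variables per hour t: charge c_t, discharge d_t, stored energy s_t (the SOC
    state), with dynamics s_t = s_{t-1} + eta_c*c_t - d_t and s_{-1} = 0.
    """
    T = len(prices)
    p = np.asarray(prices, dtype=float)
    # x = [c_0..c_{T-1}, d_0..d_{T-1}, s_0..s_{T-1}]; minimize cost - revenue.
    cost = np.concatenate([p, -eta_d * p, np.zeros(T)])
    A_eq = np.zeros((T, 3 * T))
    for t in range(T):
        A_eq[t, t] = -eta_c          # -eta_c * c_t
        A_eq[t, T + t] = 1.0         # + d_t
        A_eq[t, 2 * T + t] = 1.0     # + s_t
        if t > 0:
            A_eq[t, 2 * T + t - 1] = -1.0  # - s_{t-1}
    b_eq = np.zeros(T)
    bounds = [(0, p_max)] * (2 * T) + [(0, e_max)] * T
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    c, d = res.x[:T], res.x[T:2 * T]
    profit = float(eta_d * p @ d - p @ c)
    return c, d, profit
```

With a cheap-then-expensive price series the optimizer charges early and discharges late, which is the arbitrage behavior the paper quantifies.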
The compressed carbon dioxide (CO2) energy storage (CCES) system has attracted increasing attention in recent years. CCES offers a green, large-scale energy storage route for accommodating the intermittency of renewable power generation. In particular, the use of CO2 as the working medium also provides a green solution for massive carbon capture and storage. This paper further analyzes the applicability and feasibility of a novel CCES system that efficiently and economically utilizes both pressure energy and thermal energy. Thermodynamic and cost evaluations of the energy conversion cycle were carried out. A genetic algorithm was employed to perform multi-objective optimization of the novel energy conversion cycle with thermal energy storage, maximizing exergy efficiency and economic profit. Results reveal that the net output power increases monotonically with turbine inlet temperature, while the unit product cost decreases monotonically with it. The multi-objective optimization recommends an overall exergy efficiency of 60.5% and a unit product cost of 0.23 $/kWh. Moreover, the scatter distribution of the decision variables consistently favors a higher compressor outlet pressure.
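The trade-off the genetic algorithm resolves, maximizing exergy efficiency while minimizing unit product cost, amounts to keeping only non-dominated designs. A minimal Pareto filter sketch (the points below are illustrative, not the paper's data):

```python
def pareto_front(points):
    """Return non-dominated (efficiency, cost) pairs.

    A point is dominated if some other point has efficiency >= and cost <=,
    with at least one inequality strict (maximize efficiency, minimize cost).
    """
    front = []
    for eff, cost in points:
        dominated = any(
            (e2 >= eff and c2 <= cost) and (e2 > eff or c2 < cost)
            for e2, c2 in points
        )
        if not dominated:
            front.append((eff, cost))
    return front
```

A GA such as NSGA-II applies this ranking generation by generation; the recommended (60.5%, 0.23 $/kWh) design is one point picked from the final front.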
Molten salts play a key role in the heat transfer and thermal energy storage processes of concentrated solar power plants. In this work, a novel composite material was prepared by adding micron-sized magnesium particles into Li2CO3-Na2CO3-K2CO3 molten salt, and the heat transfer and thermal energy storage properties of the composite were studied experimentally. A stable composite nanofluid was obtained, and a thermal conductivity of 0.728 W/(m·K) at 973 K, an enhancement of 31%, was achieved for the Mg/molten carbonate nanofluid. The strengthening mechanism behind the thermal conductivity was revealed using ab initio molecular dynamics. It is found that the main bonding interactions exist between Mg atoms and O atoms at the surface of the Mg particles. A compressed ion layer with a more compact and ordered ionic structure forms around the Mg particles, and the Brownian motion of the Mg particles induces micro-convection of the surrounding carbonate ions. These factors enhance thermal conduction by increasing the probability and frequency of ion collisions. This work provides guidance for further studies and applications of metal/molten salt composites with enhanced heat transfer and thermal energy storage capacity.
Predicting the performance of Li-ion batteries over their lifetime is necessary for the design and optimal operation of integrated energy systems such as electric vehicles and energy grids. Several models with different levels of complexity and predictive power have been suggested in the literature. In particular, electrochemical models suffer from high computational costs, while empirical models lack physical meaning. In the present work, a semi-empirical model is proposed that retains the computational efficiency of empirical approaches (a low number of fitting parameters, low-order algebraic equations) while providing insight into the processes occurring in the battery during operation. The proposed model is successfully validated on experimental battery cycles, specifically under capacity fade > 20% and dynamic cycling at different temperatures. Performance comparable to state-of-the-art empirical models is achieved both in computational time and in the correlation coefficient R². In addition, analyzing the evolution of the fitting parameters as a function of cycle number identifies the processes limiting overall battery degradation for all protocols considered. The suggested model is thus suitable for implementation in system modelling and can serve as an informative tool for improved design and operational strategies.
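The fitting-parameter workflow can be sketched with a simple semi-empirical fade law. Square-root-of-cycle-number capacity loss is a common form for SEI-growth-dominated degradation; it is used here as an illustrative assumption, not necessarily the paper's model, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def capacity_model(n, q0, k):
    """Illustrative semi-empirical fade law: capacity after n cycles.

    q0 is the initial capacity; k scales sqrt(n) fade (SEI-growth style).
    """
    return q0 - k * np.sqrt(n)

# Synthetic "measured" capacities generated from a known ground truth
# (q0 = 1.0, k = 0.008) with small Gaussian measurement noise.
rng = np.random.default_rng(1)
cycles = np.arange(0, 1001, 50)
measured = capacity_model(cycles, 1.0, 0.008) + rng.normal(0, 0.002, cycles.size)

# Low-order fit: only two parameters, recovered by nonlinear least squares.
params, _ = curve_fit(capacity_model, cycles, measured, p0=[1.0, 0.01])
```

Tracking how fitted parameters like `k` drift across cycling protocols is what lets such a model point to the limiting degradation process.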
Preheating batteries in electric vehicles under cold weather conditions is one of the key measures for improving the performance and lifetime of lithium-ion batteries. Preheating can generally be divided into external and internal heating, depending on the location of the heat source. External heating methods are usually characterized by low system complexity but long heating times and high energy loss, while internal heating methods achieve shorter heating times, higher heating efficiency, and lower thermally induced aging, but at a higher safety risk. By reviewing recent progress in preheating methods for lithium-ion batteries, this paper provides insights for developing new preheating techniques and guidance for selecting among them.
The performance of lithium-ion batteries (LIBs) is sensitive to the operating temperature, and the design and operation of battery thermal management systems rely on accurate information about LIB temperatures. This study proposes a data-driven model based on a neural network (NN) for estimating the temperature profile of a LIB module. Only one temperature measurement is needed for the battery module, which keeps the cost low. The method has been tested on battery modules consisting of prismatic and cylindrical batteries. Good accuracy is observed overall: the root mean square error (RMSE) of the estimated temperatures is less than 0.8 °C across different operating conditions, ambient temperatures, and heat dissipation conditions.
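The single-sensor estimation idea can be sketched with a linear least-squares estimator standing in for the paper's neural network: one sensed temperature plus the load current predicts the whole module's temperature profile. Everything below (data, coefficients, function names) is synthetic and illustrative:

```python
import numpy as np

def fit_temperature_estimator(X, Y):
    """Fit a least-squares map from [sensed T, current] to all cell temperatures."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    W, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return W

def predict(W, X):
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return A @ W

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic 4-cell module: each cell temperature depends linearly on the
# sensed temperature and the current, plus 0.1 °C measurement noise.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(10, 40, 200), rng.uniform(0, 5, 200)])
true_W = np.array([[1.0, 1.1, 0.9, 1.05],
                   [0.5, 0.8, 0.3, 0.6],
                   [0.2, -0.5, 1.0, 0.1]])
Y = np.hstack([X, np.ones((200, 1))]) @ true_W + rng.normal(0, 0.1, (200, 4))

W = fit_temperature_estimator(X, Y)
err = rmse(Y, predict(W, X))  # on this synthetic data, well under 0.8 °C
```

An NN replaces the linear map when the sensor-to-module relationship is nonlinear, but the single-sensor input/output structure is the same.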
Vehicle-to-grid (V2G) technology enables electric vehicles (EVs) to serve as flexible load and storage resources, which is expected to play a pivotal role in pursuing carbon neutrality. However, existing studies on the effects of V2G at different stages of carbon neutrality are insufficient, and the optimal adoption period for V2G in the context of electricity market trading has not been discussed. To fill this gap, this study proposes a new methodology for analyzing the multi-dimensional effects of V2G on the path to carbon neutrality. The model is a novel application of a partial market equilibrium model to depict the interaction between electricity suppliers and V2G adopters. Applied to a case study of China, the results demonstrate that: (1) EVs with V2G can substitute 22.2 %–30.1 % of energy storage and accelerate the phase-out of coal-fired power. (2) V2G can effectively mitigate electricity price fluctuations, and more fast-charging infrastructure strengthens this effect. (3) V2G only becomes attractive when the renewable energy penetration rate reaches 80 %; otherwise, it cannot effectively reduce the total social cost and carbon emissions. (4) In the carbon neutrality scenario with limited emissions, the emission reduction effect of V2G is weakened, but its economic benefit keeps increasing.
Thermal runaway (TR) stands as a critical risk in battery applications. Even though various battery thermal management systems (BTMSs) have been proposed to mitigate thermal runaway propagation, a comprehensive comparison remains elusive. This study evaluates the performance of three types of BTMSs in five configurations: liquid cooling with cold plates on the bottom (BTMS-1a), liquid cooling with cold plates on the sides (BTMS-1b), liquid cooling with cold plates between batteries (BTMS-1c), thermal insulation materials between batteries (BTMS-2), and phase change materials between batteries (BTMS-3). The highest temperature, propagation time, temperature uniformity, cooling rate, mass energy density, and volume energy density are used as key performance indicators for comparison. In general, BTMS-2 and BTMS-3 show advantages in energy density; however, their TR suppression and thermal management performance is poor. BTMS-1c can suppress TR effectively at high flow rates but can lead to poor temperature uniformity. Suggestions are also provided on selecting BTMSs for different applications: BTMS-1b is recommended for small EVs; BTMS-1c for large EVs and large-scale battery energy storage systems (BESSs); and BTMS-3 for small BESSs.
Nickel-rich polycrystalline LiNixCoyMn1-x-yO2 (PC-NCM, 0.8 ≤ x < 1) particles suffer capacity degradation due to intergranular cracks, which catalyze side reactions at the fresh interfaces and diminish battery performance. Understanding the mechanisms behind crack evolution is essential for mitigating these issues. Real-time crack observation is crucial for this understanding, yet long-term monitoring has remained out of reach. This study develops a versatile method using an optical in-situ reaction cell, modified from a coin cell structure, to enable long-term, real-time tracking of volume changes, crack evolution, and lithium-ion diffusion in the particle. The method provides new insights into the evolution of intergranular cracks and their mechanisms in PC-NCM811 particles. Based on the stress origin, intergranular cracks can be categorized into main cracks, microcracks, and cracks at the boundaries of inactive domains. Main cracks stem from strain mismatches caused by asynchronous domains during initial activation, and their subsequent propagation is driven by alternating stresses from cycling. Microcracks initiate from stress concentration at grain boundaries due to abrupt volume contraction during the charging process, and volume changes along the a-axis exacerbate their irreversible propagation at a high state of charge. Optical imaging shows that regions with limited lithium-ion diffusion align with boundary cracks caused by uneven lithium-ion concentrations at high C-rates. These findings emphasize the value of long-term, real-time observation for understanding electrochemical-mechanical interactions. The observation and analysis method can be applied to investigate crack evolution in various materials under different conditions, facilitating the optimization of material design and the formulation of effective cycling protocols.