Browsing by Author "Elkamel, Ali"
Now showing 1 - 20 of 42
Item: A Robust Optimization Approach for Planning the Transition to Plug-in Hybrid Electric Vehicles (Institute of Electrical and Electronics Engineers (IEEE), 2011-02-28). Authors: Hajimiragha, Amir H.; Canizares, Claudio A.; Fowler, Michael W.; Moazeni, Somayeh; Elkamel, Ali.

This paper proposes a new technique for analyzing the electricity and transport sectors within a single integrated framework, to realize an environmentally and economically sustainable integration of plug-in hybrid electric vehicles (PHEVs) into the electric grid while considering the most relevant planning uncertainties. The method is based on a comprehensive robust optimization planning model that incorporates the constraints of both the electricity grid and the transport sector. The proposed model is justified and described in some detail, and applied to the real case of Ontario, Canada, to determine the Ontario grid's potential to support PHEVs over the planning horizon 2008-2025.
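The abstract describes the robust planning formulation only at a high level. A minimal sketch of the underlying idea, assuming a scenario-based worst-case treatment of uncertain PHEV charging load (the costs, loads, and variable names below are illustrative, not the paper's model):

```python
# Illustrative worst-case (robust) capacity planning sketch using PuLP.
# Not the paper's model; scenario data and costs are invented.
import pulp

scenarios = {"low": 800.0, "base": 1100.0, "high": 1500.0}  # MW of PHEV charging load
build_cost = 1.2e6   # $/MW of new grid capacity (assumed)
existing = 600.0     # MW of spare capacity already available (assumed)

prob = pulp.LpProblem("robust_phev_planning", pulp.LpMinimize)
new_cap = pulp.LpVariable("new_capacity_MW", lowBound=0)

prob += build_cost * new_cap  # minimize investment cost

# Robust feasibility: enough capacity in every uncertainty scenario
for name, load in scenarios.items():
    prob += existing + new_cap >= load, f"meet_load_{name}"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(f"build {new_cap.value():.0f} MW to cover the worst case")
```

The real model is a multi-period MILP over grid and transport constraints; the sketch only shows how "robust" reduces to feasibility across all scenarios at once.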
Item: Allocation of Hydrogen Produced via Power-to-Gas Technology to Various Power-to-Gas Pathways (University of Waterloo, 2018-09-04). Authors: Al-Zakwani, Suaad; Elkamel, Ali.

Demand for renewable energy systems is accelerating, and they will account for a significant share of future power systems aimed at enhancing and decarbonizing the world's energy supply. Unlike conventional power plants, electricity output from renewable sources cannot be adjusted easily to match consumer demand, because renewable resources are intermittent, short-term seasonal power sources. Accordingly, a rapid increase in surplus power is expected in the future. The Canadian province of Ontario, in line with global efforts, has targeted an 80% reduction in greenhouse gas emissions by 2050 compared to 1990 levels, and one key step toward this goal is harnessing more renewable energy for power generation. Instead of losing the surplus power or exporting it for low returns, its storage and utilization in other sectors urgently need to be explored. Power-to-Gas technology offers a possible solution for optimal use of the energy surplus: it is efficient at the national consumption scale, and global acceptance of Power-to-Gas as an energy storage and transportation technology is growing noticeably. In short, Power-to-Gas is a potential means of holding intermittent, weather-dependent renewable energies such as wind, solar, and hydro in a storable chemical form. The main concept behind Power-to-Gas is to use surplus electricity to decompose water molecules into their primary components, hydrogen and oxygen. Power-to-Gas is not only a storage technology; its role extends to other energy streams including transportation, industrial use, injection into the natural gas grid as pure hydrogen, and renewable natural gas. The current study investigated four specific Power-to-Gas pathways: Power-to-Gas to mobility fuel, Power-to-Gas to industry, Power-to-Gas to the natural gas pipeline for use as hydrogen-enriched natural gas, and Power-to-Gas to renewable natural gas (i.e., methanation). The study quantifies the hydrogen volumes at three production capacity factors (67%, 80%, and 96%) obtained by utilizing Ontario's surplus baseload electricity. Five scenarios (A-E) for allocating the produced hydrogen to the four Power-to-Gas pathways are investigated, and their economic and environmental aspects considered.

Allocation scenario A, in which the hydrogen assigned to each pathway is constrained by a specific demand, is based on Ontario's energy plans for pollution management in line with international efforts to reduce global warming impacts. Scenarios B-E allocate the produced hydrogen entirely to one of mobility fuel, industrial feedstock, injection into the natural gas grid, or renewable natural gas synthesis, respectively. The study also examines the economic feasibility and carbon offset of the Power-to-Gas (PtG) pathways in each scenario. Hydrogen is assumed to be produced at three capacity factors: 67% (16 h/day), 80% (19 h/day), and 96% (23 h/day). The surplus baseload electricity for 2017 at each capacity factor is converted to hydrogen via water electrolysis, yielding approximately 170 kilo-tonnes (kt), 193 kt, and 227 kt of hydrogen, respectively. Results indicate that the Power-to-Gas to mobility fuel pathway in scenarios A and B has the potential to be implemented: utilizing hydrogen produced via Power-to-Gas for refueling light-duty vehicles is a profitable business case, with an average positive net present value of $4.5 billion, a five-year payback time, and a 20% internal rate of return. Moreover, this PtG pathway promises a potential reduction of 2,215,916 tonnes of CO2 from road travel. In the scenario that utilizes Ontario's surplus electricity to produce hydrogen for industrial demand, results indicate that supply could meet 82%, 93%, and 110% of the industrial hydrogen demand at the three capacity factors, respectively. Nevertheless, hydrogen production through PtG remains costly compared to cheaper available alternatives, namely hydrogen produced via steam methane reforming; Power-to-Gas for industry projects should therefore be supported by government incentives that encourage clean energy utilization. In addition, although using hydrogen-enriched natural gas or renewable natural gas instead of conventional natural gas could offset large amounts of carbon, their capital and operational costs are extremely high, resulting in negative net present values and very long payback times.
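As a rough illustration of the screening arithmetic behind figures like the hydrogen tonnages and the NPV quoted above, a back-of-the-envelope sketch; the electrolyzer energy intensity, surplus energy, cash flows, and discount rate are all assumptions, not the thesis's inputs:

```python
# Illustrative PtG pathway screening; every number here is an assumption.
energy_per_kg = 50.0    # kWh of electricity per kg H2 (typical electrolyzer figure)
surplus_mwh = 9.6e6     # assumed annual surplus electricity, MWh
h2_tonnes = surplus_mwh * 1000 / energy_per_kg / 1000  # tonnes of H2 per year

capex = 2.0e9           # $ up-front investment (assumed)
net_cash = 0.6e9        # $ net revenue per year (assumed)
rate = 0.10             # discount rate
years = 15

npv = -capex + sum(net_cash / (1 + rate) ** t for t in range(1, years + 1))
payback = capex / net_cash
print(f"{h2_tonnes:,.0f} t H2/yr, NPV = ${npv/1e9:.2f}B, simple payback = {payback:.1f} yr")
```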
Item: Comparing Non-Steady State Emissions under Start-Up and Shut-Down Operating Conditions with Steady State Emissions (University of Waterloo, 2018-04-24). Authors: Obaid, Juwairia; Elkamel, Ali; Anderson, William B.

Although often neglected, the non-steady state operations of industrial facilities are more likely to result in increased emissions and process safety incidents than steady state operations. Regulatory authorities such as the United States Environmental Protection Agency and the Ontario Ministry of the Environment and Climate Change do not require industrial facilities to assess and report emissions under non-steady state operating conditions such as start-up and shut-down events. It is demonstrated that emissions under non-steady state operation can be higher than those under steady state operation and that non-steady state emissions have the potential to exceed applicable regulatory emission limits. A literature review comparing non-steady state emissions under start-up and shut-down conditions with steady state emissions has been conducted for several industrial sectors. Where available, trends have been developed to identify the circumstances, i.e., the industrial sector and contaminant, under which the assessment and consideration of emissions from start-up and shut-down events is necessary for each industry. The thesis also compares the two most commonly used air dispersion models, AERMOD and CALPUFF, using a case study approach, and recommends CALPUFF as the more conservative choice. CALPUFF is then used to model the greenhouse gas emissions from the full load operation (steady state) and start-up conditions (non-steady state) of a combined cycle power plant to identify the worst-case emissions scenario. The studies conclude that emissions under both steady state and non-steady state operating conditions must be modelled and assessed, so that the impacts of released emissions are studied in a conservative manner that takes all scenarios into account and captures the worst case. The studies demonstrate that the worst-case operating condition may differ for each contaminant: some contaminants have higher emissions during steady state operation, while others have higher emissions during non-steady state operation, depending on the nature of the industrial process and the type of contaminant. Considering these different operating scenarios is particularly important when emissions associated with non-steady state operation have the potential to exceed applicable regulatory emission limits and possibly cause an adverse impact on public health and the environment. Therefore, emissions under both steady state and non-steady state operating conditions must be assessed, controlled, and reported to the regulatory authorities to ensure that worst-case emissions are addressed, preventing them from adversely impacting public health and the environment. The study recommends that regulatory authorities require industrial facilities to assess their emissions under non-steady state as well as steady state operating conditions, so that emissions under both are controlled below the applicable regulatory limits.
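AERMOD and CALPUFF are regulatory-grade models (a steady-state plume model and a Lagrangian puff model, respectively) far beyond a few lines of code, but the textbook Gaussian plume that underlies dispersion screening is easy to sketch. A minimal version, assuming Briggs-style rural class-D dispersion coefficients and an invented source:

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h):
    """Textbook steady-state Gaussian plume concentration (g/m^3).

    q: emission rate (g/s), u: wind speed (m/s), h: effective stack height (m),
    x: downwind, y: crosswind, z: receptor height (m). The dispersion sigmas use
    an assumed rural class-D power-law fit; real models derive them from met data.
    """
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # assumed Briggs rural, class D
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    # Ground reflection: image source at -h
    vertical = np.exp(-(z - h)**2 / (2 * sigma_z**2)) + np.exp(-(z + h)**2 / (2 * sigma_z**2))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration 2 km downwind of an assumed source
print(gaussian_plume(q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, h=50.0))
```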
Item: Data Driven Modelling and Optimization of MEA Absorption Process for CO2 Capture (University of Waterloo, 2020-05-27). Authors: Shalaby, Abdelhamid; Douglas, Peter; Elkamel, Ali.

Global warming is a rising issue, and many research studies aim to reduce greenhouse gas emissions. Carbon capture and storage technologies have improved throughout the years and contribute to solving this problem. In this work, surrogate models of a post-combustion carbon capture unit are developed for operational optimization. Previous work included mechanistic and detailed modeling of steady-state and dynamic systems; control structures and optimization approaches have also been studied, and various solvents such as MEA, DEA, and MDEA have been tested and simulated to determine the efficiency and behavior of the system. In this work, a dynamic model of the MEA process developed by Nittaya (2014) and Harun (2012) is used to generate operational data. The system is simulated using gPROMS v5.1 with six PI controllers. The model illustrated that regeneration of the solvent is the most energy-consuming part of the process.

Because of changes in electricity supply and demand, and the importance of achieving a specific capture percentage (%CC) and carbon dioxide purity as outputs of this process, surrogate models are developed and used to predict the outputs and to optimize the operating conditions of the process. Multiple machine learning, data-driven models have been developed using simulation data generated after a proper choice of the operating variables and the important outputs. Steady-state and transient-state models have been developed and evaluated; they were used to predict the outputs of the process and later to optimize its operating conditions. The flue gas flow rate, temperature, and pressure, the reboiler pressure, and the reboiler and condenser duties were selected as the operating (input) variables of the system, and the system energy requirements, %CC, and the purity of carbon dioxide were selected as the outputs. For steady-state modeling, an artificial neural network (ANN) model with backpropagation and momentum was developed to predict the process outputs. The ANN model was compared to other machine learning models such as Gaussian process regression (GPR) with rational quadratic, squared exponential, and Matern kernels, and tree regression. The ANN outperformed all the other models in prediction accuracy; however, the other models' regression coefficients (R2) never fell below 0.95. For dynamic modelling, recurrent neural networks (RNN) were used to predict the outputs of the system, trained with two algorithms: Levenberg-Marquardt (LM) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). The RNN was able to predict the outputs of the system accurately. Sequential quadratic programming (SQP) and a genetic algorithm (GA) were used to optimize over the surrogate models and determine the optimum operating conditions, with the objective of maximizing CO2 purity and %CC while minimizing the system energy requirements.
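A compact sketch of the surrogate-then-optimize pattern the abstract describes, here with a Gaussian process surrogate and SciPy's SLSQP (an SQP implementation); the two inputs, the synthetic response, and the bounds are placeholders for the real flue-gas and reboiler variables:

```python
# Hedged sketch: fit a surrogate to (inputs -> energy) data, then optimize it
# with SQP. Synthetic data stand in for the gPROMS simulations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform([0.5, 100.0], [1.5, 200.0], size=(80, 2))   # e.g. flue gas flow, reboiler duty
y = (X[:, 0] - 1.0) ** 2 + 0.01 * (X[:, 1] - 150.0) ** 2     # stand-in energy response

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 30.0])).fit(X, y)

res = minimize(
    lambda x: surrogate.predict(x.reshape(1, -1))[0],  # surrogate replaces the simulator
    x0=[1.2, 180.0],
    method="SLSQP",
    bounds=[(0.5, 1.5), (100.0, 200.0)],
)
print("optimum inputs:", res.x)
```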
Item: Data-driven Optimization: Applications to Energy Infrastructure and Process Industry (University of Waterloo, 2021-12-20). Authors: Alkatheri, Mohammed; Elkamel, Ali; Douglas, Peter.

Nowadays, the existence of and ease of access to massive amounts of data encourage data-driven solutions. As optimization has always been based on the interchange between models and data, high-level optimization tasks such as planning and scheduling stand to benefit greatly from information mined from massive data sets. Big data tools (i.e., machine learning) have proven superior to traditional data tools in dealing with vast amounts of data and data with undefined structure, and in capturing important information in an efficient, computationally tractable manner. Therefore, in this work, big data tools are applied to the challenges associated with planning models of energy infrastructure that incorporate renewable resources and with chemical engineering processes, namely uncertainty handling, multiscale modelling, and unit process equation complexity. A data-driven stochastic optimization framework that leverages big data in the design and operation of power generation planning is proposed. A k-means clustering algorithm is adopted to generate uncertainty scenarios for the stochastic optimization framework. These scenarios are used as inputs to the stochastic model, which is formulated as a mixed integer linear program (MILP) and solved using GAMS.

The proposed approach is applied to different power planning models that include unit commitment (UC) characteristics, where the size of the uncertainty scenario set is reduced. Results show that the proposed approach is an effective tool for generating reduced-size stochastic scenario sets. The design and operation of the energy hub problem involves integrating decision levels with different time scales, which usually leads to multiscale models that are computationally expensive; multiscale (i.e., planning and scheduling) energy hub systems that incorporate renewable energy resources are even more challenging to model because of the high intermittency of renewable energy. A mathematical programming-based general clustering approach is applied to reduce the size of multiple-attribute demand data and tackle the computational complexity of multiscale energy hub problems. A multiscale, multiple-attribute energy hub incorporating hydrogen storage is modelled as a MILP stochastic optimization problem under wind uncertainty. Different case studies are generated under different environmental considerations to assess the efficiency of the clustering approach and the stochastic formulation; the assessments conclude that the clustering approach effectively reduces the size of the original model while maintaining good results. Recent advancements in supervised machine learning have demonstrated the ability to achieve accurate and efficient predictions. Therefore, in this study, these tools are also employed as alternative approaches to model a specific application in the gas industry: a natural gas condensate stabilization process, modelled from operating data. Natural gas condensate treatment involves a stabilization process in which light-end components are removed, reducing the condensate vapour pressure to meet storage and transportation specifications. Different supervised machine learning models are developed to predict the performance of two industrial condensate stabilizer units, using large datasets of operating data on input-output variables from the two units. The main purpose of these models is to predict the important parameters of the final stabilized liquid, and the results showcase their capability to offer reliable and accurate predictions. Finally, a data-driven, surrogate-based optimization framework is developed, in which the machine learning models serve as a convenient replacement for detailed first-principle models, to find the variable values corresponding to minimal operational energy consumption. The proposed framework can help the gas industry simultaneously achieve process efficiency, profitability, and safety.
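The k-means scenario-generation step lends itself to a short sketch: cluster historical profiles into a handful of representative scenarios, take probabilities from cluster sizes, and feed both into the stochastic MILP. The synthetic demand data below stand in for real measurements:

```python
# Hedged sketch of k-means scenario reduction for stochastic planning.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
days = 365
# Synthetic daily 24-h demand profiles: a sinusoidal shape plus noise
profiles = 100 + 20 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 5, (days, 24))

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
scenarios = km.cluster_centers_                      # k representative 24-h profiles
probs = np.bincount(km.labels_, minlength=k) / days  # scenario probabilities

for s, p in zip(scenarios, probs):
    print(f"p = {p:.2f}, peak = {s.max():.1f} MW")
```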
Item: Deep Recurrent Neural Networks for Fault Detection and Classification (University of Waterloo, 2018-12-20). Authors: Mireles Gonzalez, Jorge Ivan; Budman, Hector; Elkamel, Ali.

Deep learning is one of the fastest growing research topics in process systems engineering due to the ability of deep learning models to represent and predict non-linear behavior in many applications. However, the application of these models in chemical engineering is still in its infancy. Thus, a key goal of this work is assessing the capabilities of deep-learning based models in chemical engineering applications.

The specific focus of the current work is the detection and classification of faults in a large industrial plant involving several chemical unit operations. Towards this goal, we compare the efficacy of a deep learning based algorithm with other state-of-the-art multivariate statistical techniques for fault detection and classification. The comparison is conducted using simulated data from a chemical benchmark case study often used to test fault detection algorithms, the Tennessee Eastman Process (TEP). A real-time online scheme is proposed that enhances the detection and classification of all the faults occurring in the simulation. This is accomplished by formulating a fault-detection model capable of describing the dynamic nonlinear relationships among the measurable output and manipulated variables of the Tennessee Eastman Process, both during faults and in their absence. In particular, we focus on faults that cannot be correctly detected and classified by traditional statistical methods or by simpler artificial neural networks (ANN). To increase the detectability of these faults, a deep recurrent neural network (RNN) is programmed that uses dynamic information of the process along a pre-specified time horizon. We first studied the effect of the number of samples fed into the RNN, in order to capture more of the faults' dynamic information, and showed that accuracy increases with this number: average classification rates were 79.8%, 80.3%, 81%, and 84% for the RNN with 5, 15, 25, and 100 samples, respectively. To increase the classification accuracy of difficult-to-observe faults, we developed a hierarchical structure in which faults are grouped into subsets and classified with separate models for each subset. To improve the classification of faults whose responses have a low signal-to-noise ratio, excitation was added to the process through a pseudo-random signal (PRS). Applying the hierarchical structure increases the signal-to-noise ratio of faults 3 and 9, which translates into classification accuracy improvements of 43.0% and 17.2%, respectively, for 100 samples, and of 8.7% and 23.4% for 25 samples. Applying a PRS to excite the system dramatically increased the classification rate of the normal state to 88.7% and of fault 15 to 76.4%. The proposed method is therefore able to considerably improve both the detection and classification accuracy of several observable faults, as well as of faults considered unobservable by other detection algorithms. Overall, the comparison of the deep learning algorithms with dynamic PCA (principal component analysis) techniques showed a clear superiority of deep learning in classifying faults in nonlinear dynamic processes. Finally, we apply the same techniques to different operational modes of the TEP simulation, achieving comparable improvements in classification accuracy.
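A minimal sketch of the windowed RNN classifier idea, assuming the usual TEP dimensions (52 measured and manipulated variables, 20 faults plus the normal state) and an LSTM cell in place of whatever architecture the thesis used; all shapes and sizes are illustrative:

```python
# Sketch of an RNN fault classifier over sliding windows of process data.
import torch
import torch.nn as nn

class FaultRNN(nn.Module):
    def __init__(self, n_vars=52, hidden=64, n_classes=21):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, window, n_vars)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last hidden state

model = FaultRNN()
window = torch.randn(8, 25, 52)           # batch of 8 windows of 25 samples each
logits = model(window)
print(logits.argmax(dim=1))               # predicted fault class per window
```

Longer windows carry more of the fault's dynamic signature, which matches the abstract's finding that accuracy rises from 5-sample to 100-sample windows.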
Item: Design Optimization for Spatial Arrangement of Used Nuclear Fuel Containers (University of Waterloo, 2020-05-27). Authors: Leong, Jeremy; Ponnambalam, Kumaraswamy; Elkamel, Ali.

Canada's proposed deep geological repository is a multiple-barrier system designed to isolate used nuclear fuel containers (UFCs) indefinitely, with no release of radionuclides for at least one million years.

Placing UFCs together as densely as possible is ideal for limiting repository size and cost. However, because of heat generation from radioactive decay and material limitations, a key design criterion is that the maximum temperature inside the repository must not exceed 100 °C. To satisfy that criterion, design optimization of the spatial arrangement of UFCs in a crystalline rock repository is performed. Spatial arrangement pertains to: (i) the spacing between UFCs, (ii) the separation between placement rooms underground, and (iii) the locations of variously aged UFCs that generate heat at different rates. Most studies have considered UFCs to be identical in age at placement into the repository, and parameter analyses have been performed to evaluate repository performance under probable geological conditions. In this work, the various ages of UFCs and the uncertainties in spacing-related design variables are the focus. Techniques for the actual placement of UFCs in the deep geological repository based on their age, and methods for repository risk analysis using yield optimization, are developed. The thermal evolution inside the repository is simulated using a finite element model. With many components inside the massive repository, planned for upwards of 95,000 UFCs, direct optimization of the model is impractical or even infeasible because it is computationally expensive to evaluate; surrogate optimization is used to overcome that burden by reducing the number of detailed evaluations required to reach the optimal designs. Two placement cases are studied: (i) UFCs all having been discharged from a Canadian Deuterium Uranium reactor for 30 years, a worst-case scenario, and (ii) UFCs having been discharged between 30 and 60 years. Design options with UFC spacing of 1–2 m and placement room separation of 10–40 m are explored. The placement locations of the variously aged UFCs are specified using either sinusoidal (cosine) functions or Kumaraswamy probability density functions. Yield optimization under assumed design variable tolerances and distributions is performed to minimize the probability of a system failure, which occurs when the maximum temperature constraint of 100 °C is exceeded. This method allows variabilities from the manufacturing and construction of the repository components that affect the design variables to be taken into account, incorporating a stochastic aspect into the design optimization that surrogate optimization alone would not include. Several distributions for the design variables are surveyed, including uniform, normal, and skewed distributions, all of which are approximated by Kumaraswamy distributions.
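The yield idea reduces to a Monte Carlo count: sample the spacing variables from their assumed distributions and estimate the probability that the peak temperature stays below 100 °C. A sketch using inverse-CDF Kumaraswamy sampling, with an invented monotone temperature surrogate standing in for the finite element model:

```python
# Hedged yield-estimation sketch; the temperature surrogate is invented.
import numpy as np

rng = np.random.default_rng(2)

def kumaraswamy(a, b, size):
    # Inverse-CDF sampling: F(x) = 1 - (1 - x**a)**b on (0, 1)
    u = rng.uniform(size=size)
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

n = 100_000
spacing = 1.0 + 1.0 * kumaraswamy(2.0, 2.0, n)      # UFC spacing, 1-2 m
room_sep = 10.0 + 30.0 * kumaraswamy(2.0, 3.0, n)   # room separation, 10-40 m

# Invented monotone surrogate: peak temperature falls as spacings grow
t_max = 115.0 - 8.0 * spacing - 0.35 * room_sep + rng.normal(0, 1.0, n)

yield_frac = np.mean(t_max <= 100.0)   # fraction of designs meeting the 100 C limit
print(f"estimated yield: {yield_frac:.3f}")
```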
Item: Designing Nano-Structural Composites as Advanced Anode Materials for Highly Efficient and Stable Lithium-ion Batteries (University of Waterloo, 2021-01-29). Authors: Attia, Elhadi; Chen, Zhongwei; Elkamel, Ali.

With the continued increase in energy demand for portable electronics, grid storage, and electric vehicles, more attention is being placed on the development of advanced energy conversion and storage systems such as metal-ion batteries and fuel cells. Recently, lithium-ion batteries (LIBs) have dominated the electronics market, including consumer electronics, power tools, and medical devices, and LIBs are also used in the transportation sector in electric vehicles (EVs) and electric bicycles. High capacity retention and long cycle life are essential, especially for the EV market.

However, because of the limited energy density and high cost of large LIB packs, current battery technology is not satisfactory for widespread application in EVs. Development of battery technology with high energy density and low-cost materials can therefore lead to significant improvements in the performance and lifetimes of products that use LIBs. To improve the energy density of LIBs, the conventional anode material (graphite) needs to be replaced by novel electrode materials and improved electrode designs with higher capacity and more reliable performance. Silicon (Si) is an exciting and promising candidate active material for the negative electrode of next-generation LIBs due to its natural abundance, high safety, low cost, environmental friendliness, and high theoretical specific capacity of 4200 mAh g-1, compared with 372 mAh g-1 for graphite. However, the critical challenge with Si is the huge volume change during lithiation and delithiation, which causes mechanical fracture and delamination of the electrode. In addition, solid electrolyte interphase (SEI) formation disrupts the electrical contact between Si particles during cycling, leading to degradation of the electrode and rapid capacity fading. These issues limit the wide commercialization of Si as an anode material for LIBs. In this thesis work, several categories of advanced nanostructured materials are designed and developed to serve as conductive networks for nanostructured Si morphologies with high capacity and better mechanical stability, enabling the next generation of LIBs. The thesis starts with a brief introduction to LIBs, followed by the objectives and approaches of this PhD project. A literature review covers the main battery components and the operating principles of rechargeable LIBs, with a focus on electrode material development. A survey of experimental procedures, characterization techniques, and performance testing procedures is provided, followed by the specific research projects, giving readers a comprehensive overview of the field and detailed plans for developing novel advanced electrode materials for high-energy-density, reliable rechargeable LIBs. The first approach of this thesis focuses on developing flexible and conductive carbon networks to improve the stability of Si-based anodes. A polymer blend of polyvinylpyrrolidone (PVP) and polyacrylonitrile (PAN) was self-assembled onto the surface of Si nanoparticles (SiNPs), generating a very intimate coating of silicon dioxide and a nitrogen-rich carbon shell upon slow heat treatment. This methodology capitalizes on the surface interaction of PVP with SiNPs to provide a sturdy nanoarchitecture: the addition of PVP improves the stability and adhesion of PAN to the carbon-based matrix surrounding the Si particles, enhancing the stability of the Si anode. In addition to being a very scalable fabrication process, this novel blend of PVP and PAN allowed for an electrode with high reversibility.

Compared with a standard Si/PVDF electrode framework, the PVP/PAN material demonstrated a significantly superior first discharge capacity of 2736 mAh g-1, high Coulombic efficiency, and excellent rate capability, as well as excellent cycle stability over 600 cycles at a high rate of 3000 mA g-1. Even with these considerable improvements to the Si-based anode, the electrode capacity, long-term cycle stability, and areal capacity still needed improvement. In the second part of the thesis, a multifunctional composite binder was developed by cross-linking a poly(acrylic acid) (PAA) and carboxymethyl cellulose (CMC) spine with PAN through a slow heat treatment process. The composite binder strongly interacts with Si, providing a sturdy structure with efficient pathways for both Li-ion and electron transport. The cross-linked carboxyl groups from PAA and CMC offer a robust 3D cross-linked network, anchoring SiO2-coated Si nanoparticles onto a highly porous carbon scaffold and forming a stable solid electrolyte interphase. This composite anode not only exhibits a high initial capacity of 3472.6 mAh g-1 with an initial Coulombic efficiency of 89.1%, but also provides excellent cycling stability over 650 cycles at a high current density of 3000 mA g-1. While excellent rate performance and dramatic enhancement of the Si-based anode were obtained by cross-linking CMC-PAA with g-PAN, the cycle life and capacity were further improved using reinforcement additives. In the last part of the thesis, a novel multi-leveled, web-like morphology is reported as a robust and highly stable 3D interconnected network for mass-producing a nanostructured Si composite anode. This sturdy composite, consisting of nano-sized Si particles (NSi), nitrogen-doped carbon nanotubes (N-CNTs), and graphenized polyacrylonitrile (g-PAN), is prepared via a simple, low-cost method as a negative electrode for LIBs. The NSi@N-CNT/g-PAN composite integrates the benefits of its components: the NSi active material delivers high capacity; the nitrogen-functionalized N-CNTs act as electron highways and as a flexible network connecting the NSi particles; and the nitrogen-rich g-PAN provides nitrogen-doped graphene sheets that wrap the whole NSi@N-CNT network. The stable interaction between the Si particles and N-CNTs enhances electron transport, while g-PAN effectively improves the capacity and conductivity of the whole electrode and provides a porous skeleton for convenient ion diffusion, leading to long battery life. Only when all three components are introduced is a significant enhancement in performance observed. This nanocomposite anode exhibits superior cycling stability, with a reversible capacity of ~1361 mAh g-1 over a remarkably long life of 1100 cycles at a high current density of 3000 mA g-1. Moreover, high-loading cycling of up to 3 mAh cm-2 at ~1 mgSi cm-2 was achieved at a current density of 500 mA g-1. This effective strategy could potentially be applied to large-scale production of high-performance electrodes for LIBs.
Item: Development, Modeling, Analysis, and Optimization of a Novel Inland Desalination with Zero Liquid Discharge for Brackish Groundwaters (University of Waterloo, 2017-01-18). Authors: Elsaid, Khaled; Elkamel, Ali.

Groundwater is the major source of domestic water supply in many countries worldwide.

In the absence of surface water supplies, the use of groundwater for domestic, agricultural, and even industrial purposes becomes essential, especially in rural communities. Groundwater supplies are typically of good quality, and the quality is reasonably uniform throughout the year compared to surface water, making it suitable for direct use or simple treatment. A disadvantage of groundwater is its dissolved salt content, as many sources have moderate-to-high salinity. The high salinity makes the water brackish, requiring desalination before use; this has led to wide use of groundwater desalination to produce good-quality water in many regions of the world. A problem of desalination processes, however, is the generation of a concentrate stream, sometimes called brine or reject, which must be properly managed. Managing brine from brackish groundwater desalination is a significant issue for plants located far from the coast (i.e., inland plants) or far from a public channel for discharge. Options for brine disposal from inland desalination plants include evaporation ponds, deep-well injection, disposal to municipal sewers, and irrigation of plants tolerant to high salinity. Each of these disposal methods can cause environmental problems such as groundwater contamination, decline in crop yields from agricultural lands, formation of eyesores, reduced efficiency of biological wastewater treatment, and treated sewage effluent rendered unsuitable for irrigation. As a result, brine management for inland desalination of brackish groundwater is critical, and the need for affordable and environmentally benign inland desalination has become crucial in many regions worldwide. This work develops an efficient and environmentally benign process for inland desalination of brackish groundwater that approaches zero liquid discharge (ZLD), maximizing the water produced and minimizing the volume of concentrate effluent. The technical approach utilizes two-stage reverse osmosis (RO) units with intermediate chemical treatment of the brine stream, designed to remove most of the scale-forming constituents that foul the membrane surface in RO and limit its water recovery, thereby enabling further recovery of water in the secondary RO unit. The treatment process proposed in this work is based on advanced lime softening, which can effectively remove scale-forming constituents as well as heavy metals and natural organic matter that might be present in the brine. The process is applied to the brine produced from the 1st-stage RO, i.e., the primary brine stream, to minimize the volume of the stream to be treated chemically, which in turn reduces the required capacity of the treatment equipment. Analysis of groundwater quality and of the scale-forming constituents present in the brine stream upon desalination has been performed; it revealed that in most cases of brackish groundwater desalination, recovery is limited by scaling due to calcium sulfate (gypsum) and amorphous silica. Thus, the main objective of the chemical treatment of the brine stream is the removal of calcium, sulfate, and silica. Advanced lime softening based on high lime doses along with sodium aluminate, as in the ultra-high lime with alumina (UHLA) process, is proposed for the chemical treatment of the brine.

Bench-scale experiments were conducted to evaluate the effectiveness of the proposed chemical treatment for removing scale-forming constituents, particularly calcium, sulfate, and silica, by studying the factors affecting removal efficiency from synthetic solutions containing sulfate only, silica only, and a model brine solution. The results revealed that the proposed process is very effective, generally achieving rapid, high removals of calcium, sulfate, and silica of more than 80% within 2 h under different experimental conditions. In addition, beneficial uses of the different solid byproducts formed are investigated by qualitatively and quantitatively analyzing the resulting solids. This offers the potential to lower both costs and disposal problems, as the solids formed can be considered a value-added product rather than solid waste that must be properly managed. Results show that the solid precipitate contains a wide range of solids generally composed of calcium, magnesium, and aluminum along with carbonate, sulfate, and silicate, with several potential applications such as soil sub-grade material and in the cement industry. An equilibrium model simulating the chemical treatment process, able to predict the required chemical reagent doses and effluent water quality for a given influent water quality and treatment level, was developed using the OLI Stream Analyzer; the model was found to predict the performance of the chemical treatment at equilibrium conditions well. A rigorous membrane separation model was developed in Aspen Custom Modeler to model RO desalination more accurately, and was combined with the equilibrium model to formulate a complete 1st-stage RO / chemical treatment / 2nd-stage RO process model. The complete, validated model was then used to simulate the performance of the proposed zero liquid discharge desalination process fully and accurately.

The present work yields three novel achievements. First, it introduces a very effective intermediate chemical treatment that efficiently removes sulfate from the brine. Most previously proposed intermediate treatments remove sulfate as calcium sulfate, i.e., gypsum; in the process introduced here, sulfate is removed in calcium-aluminum-sulfate complexes of very low solubility, making the brine highly undersaturated with respect to gypsum, lowering the fouling propensity in the secondary RO, and thus maximizing the overall recovery. The chemical treatment has also been successfully modeled to better simulate its performance for the different brine qualities usually encountered in brackish groundwater desalination, given the highly location-specific nature of groundwater quality. Second, the membrane model treats the species present in water as ions, accounting for monovalent and divalent ions separately and obtaining a different permeability coefficient for each ion's transport through the membrane. This differs from most RO models, which reduce transport through the membrane to only water and salt permeability coefficients, and it yields better, more refined modeling and simulation of the RO separation, as the RO membrane interacts differently with the different ions present in water. Third, the complete process model, combining the equilibrium model of the chemical treatment with the membrane separation model, shows very promising results: a high-recovery desalination of about 93.5%, suitable for drinking water purposes, which is about 90% higher than most values reported in the literature. This reduces the brine volume from 25% of the feed in conventional desalination to only 6.5% in the proposed process, i.e., a brine volume reduction of 74% relative to conventional inland desalination and 35% relative to other high-recovery processes, at reasonable chemical treatment levels.
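The overall-recovery arithmetic for a two-stage RO train with intermediate treatment is a one-line mass balance; the stage recoveries below are illustrative values chosen to land near the quoted 93.5%, not figures from the thesis:

```python
# Overall recovery of a two-stage RO process: R = R1 + (1 - R1) * R2.
r1 = 0.75          # primary RO recovery (fraction of feed; assumed)
r2 = 0.74          # secondary RO recovery on the treated brine (assumed)
overall = r1 + (1 - r1) * r2
brine = 1 - overall
print(f"overall recovery = {overall:.1%}, final brine = {brine:.1%} of feed")
# -> overall recovery = 93.5%, final brine = 6.5% of feed
```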
Item: Economic Model Predictive Control of Chemical Processes (University of Waterloo, 2015-11-19). Authors: Santander, Omar; Budman, Hector; Elkamel, Ali.

The objective of any chemical process is to transform raw materials into more valuable products subject not only to physical and environmental constraints but also to economic and safety constraints. To meet all these constraints in the presence of disturbances, the process must be controlled. Although many control techniques are available, Model Predictive Control (MPC) is widely used in industry due to advantages such as optimal handling of interactions in multivariable systems and of process constraints. Generally, the MPC strategy is implemented within a hierarchical structure, where it receives set points or targets from the Real Time Optimization (RTO) layer and then maintains the process at these targets by calculating optimal control moves. However, the set point from the RTO may not be the best operating point, or it may not be reachable, motivating the integration of the RTO and MPC calculations into a single computation layer. This work focuses on integrating RTO and MPC into one optimization problem, an approach referred to in the literature as Economic Model Predictive Control (EMPC). The term "economic" reflects that the objective function used for optimization includes an economic objective of the kind generally used in RTO calculations. In this thesis, we propose an EMPC algorithm that calculates manipulated variable values to optimize an objective consisting of a combination of a steady-state and a dynamic economic cost, with a weight factor balancing the two terms; the cost is defined such that, once the best economic steady state is reached, the objective is influenced only by the dynamic economic cost. An additional feature of the proposed algorithm is that asymptotic stability is enforced online through four special constraints within the optimization problem: (1) positive definiteness of the matrix P defining the Lyapunov function; (2) contraction of the Lyapunov function with respect to set point changes; (3) contraction of the matrix P with respect to time; and (4) a Lyapunov stability condition. The last constraint both ensures the decrease of the Lyapunov function and accounts for robustness of the algorithm with respect to model error (uncertainty). A particular novelty of this algorithm is that it constantly calculates a best set point with respect to which stability is ensured by the aforementioned constraints. In contrast to other algorithms reported in the literature, the proposed algorithm does not require terminal constraints or cost terms that penalize deviations from fixed set points, which often lead to conservative closed-loop performance.

To account for unmeasured disturbances entering the process, changes in parameters are also explored, and the algorithm is devised to compensate for these changes through parameter updating. The parameters are included as additional decision variables within the optimization problem without the need for an external observer, and the stability of the parameter estimation is ensured through the set point constraint mentioned above. To demonstrate its capabilities, the proposed algorithm is tested on two case studies: a simpler one involving a system of four nonlinear ODEs describing an isothermal nonlinear reactor, and a larger problem involving a non-isothermal Williams-Otto reactor with parallel reactions, whose dynamics are described by a set of nonlinear ODEs for the process temperature and the concentrations of the different species. The simulations for the isothermal reactor showed that the proposed algorithm not only outperformed alternative formulations in terms of an economic function, but also addressed their limitations; in addition, when a parameter changed, the algorithm adapted to it in finite time. For the non-isothermal reactor, the simulations demonstrated that the best steady state could be computed and that the states were steered to it while satisfying the online stability property.
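One plausible mathematical reading of the four online stability constraints (the notation here is assumed, not taken from the thesis): with a Lyapunov function $V_k(x) = (x - x_s)^\top P_k (x - x_s)$ centered on the currently computed best set point $x_s$, the optimization at step $k$ would enforce

$$
\begin{aligned}
& P_k \succ 0 && \text{(1) positive definiteness of } P\\
& V_k(x_k) \le V_{k-1}(x_k) && \text{(2) contraction w.r.t. set point changes}\\
& P_k \preceq P_{k-1} && \text{(3) contraction of } P \text{ with time}\\
& V_k(x_{k+1}) - V_k(x_k) \le -\alpha\, V_k(x_k), \quad 0 < \alpha < 1 && \text{(4) Lyapunov decrease condition}
\end{aligned}
$$

with (4) written conservatively enough to absorb bounded model error, matching the robustness role the abstract assigns to the last constraint.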
Item: Energy Management and Environmental Sustainability of the Canadian Oil Sands Industry (University of Waterloo, 2018-04-10). Authors: Elsholkami, Mohamed; Elkamel, Ali.

By 2030, worldwide energy demand is expected to double, and fossil fuels will inevitably still play a major role in this transition. The Canadian oil sands, the second largest proven oil reserves, are a major pillar of energy and economic security in North America. Their development on a large scale is hindered by associated environmental impacts, which include greenhouse gas emissions, water usage, and the management of by-products of downstream operations (e.g., sulfur, petroleum coke). In this work, optimization techniques are employed to address the management of various environmental issues while minimizing the cost of operations of the oil sands industry. In this context, the thesis makes four principal contributions. First, an extensive review is conducted of potential renewable energy production pathways that can be integrated into the energy infrastructure of the oil sands. Renewable technologies such as wind, geothermal, hydro, bioenergy, and solar are considered the most environmentally benign options for energy production and would contribute to achieving significant carbon emission reductions. A mixed integer non-linear optimization model is developed to simultaneously optimize the capacity expansion and new investment decisions for both conventional and renewable energy technologies, and to determine the optimal configurations of oil producers. The rolling horizon approach is used for the consecutive planning of multiple operational periods. To illustrate its applicability, the model is applied to a case study based on operational data for oil sands operators in Alberta for the period 2010-2025. Second, a generalized optimization model is developed for the energy planning of energy-intensive industries. An extensive superstructure incorporates conventional, renewable, and nuclear technologies, as well as gasification of alternative fuels (e.g., petroleum coke, asphaltenes), for the production of energy in the form of power, heat, and hydrogen. Various carbon mitigation measures are incorporated, including carbon capture and sequestration and the purchase of carbon credits to satisfy emission targets, and the superstructure allows excess energy commodities to be sold in competitive markets. The superstructure is represented by a multi-period mixed integer optimization model whose objective is to identify the optimal set of energy supply technologies that satisfies a set of demands and emission targets at minimum cost. Time-dependent parameters are incorporated in the formulation, including energy demands, fuel prices, emission targets, carbon tax, and construction lead times. The model is applied to a case study based on oil sands operations over the planning period 2015-2050, using a scenario-based approach to investigate the effects of variability in energy demand levels, various carbon mitigation policies, and variability in fuel and energy commodity prices. Third, a multi-objective, multi-period mixed integer linear programming model is developed for the integrated planning and scheduling of the oil sands energy infrastructure incorporating intermittent renewable energy. The contributions of various energy sources, including conventional, renewable, and nuclear, are investigated using a scenario-based approach. Power-to-gas energy storage is incorporated to manage surplus power from intermittent renewable sources, particularly wind. The wind-electrolysis system incorporates two hydrogen recovery pathways: power-to-gas and power-to-gas-to-power using natural gas generators. The model takes into account interactions with the local Alberta grid by incorporating unit commitment constraints for the grid's existing power generation units. Three objective functions are considered: total system cost, grid operating cost, and total emissions. The epsilon constraint method is used to solve the multi-objective aspect of the proposed model, as sketched below. Fourth, extensive research has been done on the components that constitute the sulfur supply chain, including sulfur recovery, storage, forming, and distribution. These components are integrated within a single framework to assist in the design optimization of sulfur supply chains, a starting point for understanding the trade-offs involved from an optimization point of view. Optimization and mathematical modeling techniques are implemented to produce a decision support system indicating the optimal design and configuration of sulfur supply chains. The resulting single-period mixed-integer linear programming model minimizes total capital and operating costs, and is illustrated through a case study based on Alberta's Industrial Heartland. A deterministic approach in an uncertain environment is implemented to investigate the effect of supply and demand variability on the design of the supply chain, applied to two scenarios: steady-state operation and sulfur surplus accumulation. The model identifies the locations of forming facilities; the forming, storage, and transportation technologies; and their capacities.

The contributions of this thesis are intended to support effective carbon mitigation policy making and to address the environmental sustainability of the oil sands industry.
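The epsilon constraint method mentioned above has a compact skeleton: keep cost as the objective, move emissions into a constraint, and sweep its bound to trace the cost-emissions Pareto front. A toy two-technology sketch (all coefficients invented, far simpler than the thesis's multi-period MILP):

```python
# Hedged epsilon-constraint sketch with PuLP; data are placeholders.
import pulp

demand = 100.0                      # energy demand, arbitrary units
cost = {"gas": 40.0, "wind": 70.0}  # $ per unit produced (assumed)
emis = {"gas": 0.5, "wind": 0.0}    # tCO2 per unit produced (assumed)

for eps in [50.0, 35.0, 20.0, 5.0]:             # emissions caps (tCO2)
    prob = pulp.LpProblem("energy_plan", pulp.LpMinimize)
    x = {t: pulp.LpVariable(t, lowBound=0) for t in cost}
    prob += pulp.lpSum(cost[t] * x[t] for t in cost)           # objective: cost
    prob += pulp.lpSum(x[t] for t in cost) >= demand           # meet demand
    prob += pulp.lpSum(emis[t] * x[t] for t in cost) <= eps    # epsilon constraint
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(f"cap {eps:5.1f} t: cost = ${pulp.value(prob.objective):,.0f}, "
          f"wind share = {x['wind'].value() / demand:.0%}")
```

Tightening the cap forces the cleaner, more expensive technology into the mix, which is exactly the trade-off curve the multi-objective model reports.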
Item: Evaluating the Potential Environmental and Human Toxicity of Solvents Proposed for use in Post-Combustion Carbon Capture (University of Waterloo, 2025-01-28). Authors: Ghiasi, Fatima; Elkamel, Ali.

Carbon dioxide emitted by industrial activities is a growing concern due to its effects on the global climate, and firms are being urged to lower their carbon footprint. Post-combustion carbon capture is being explored as a method for the power and materials industries to decarbonize, and its most mature technique is amine absorption. Different amines are being explored for potential use within post-combustion carbon capture units. Many biological molecules are amines, and amines that resemble them can disrupt biological processes, harming organisms. In addition, if an amine is soluble in lipids, it can persist within the food chain and cause long-term toxic effects that are not immediately visible. A total of 151 solvents were compared based on four properties: volatility, lipophilicity, mutagenicity, and neuroactivity, with machine learning models trained to predict these values. Due to their hydrophilicity, amino acids were determined to have the lowest potential for causing environmental toxicity.
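In the same spirit as the property-prediction step described above, a minimal sketch of training a classifier on molecular descriptors; the descriptor matrix, the labels, and the descriptor names in the comment are placeholders, not the thesis's data or features:

```python
# Hedged sketch of descriptor-based toxicity-property prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(151, 10))   # stand-in descriptors (e.g. logP, molar mass; assumed)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in mutagenic / non-mutagenic label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```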
Item: Examining the Importance of Understanding During Training: An Industrial Perspective (University of Waterloo, 2017-09-14). Authors: Chan, Keziah; Cao, Shi; Elkamel, Ali.

Human error is all but inevitable in virtually any task, since humans are almost always directly or indirectly involved in the process. Researchers have examined many past accidents and catastrophic events for the types of human error involved, and found inadequate training, deviation from procedures, and insufficient knowledge, especially in critical and emergency situations, to be among the most common root causes of these incidents. However, within the research aimed at resolving the lack of operator knowledge, studies have yet to examine how to equip operators with this knowledge and the extent to which training on important knowledge and concepts can improve process operation and prevent human error. The objective of this research is to examine the impact of improving operators' understanding on four aspects of their process operation: (1) performance, (2) adherence to instructions, (3) emergency response, and (4) retention of learned knowledge and skills. In particular, this study focuses on the use of training manuals as a method of improving operators' understanding. An experiment was conducted in which participants were trained to operate a hydraulic pump system using either an explanatory training manual, which describes both 'what' needs to be done and 'why' it needs to be done, or a procedural training manual, which describes only 'what' needs to be done. Participants were then asked to manipulate process variables to achieve production requirements while meeting operating criteria, in scenarios exemplifying both real-world normal operation and emergency situations. The results indicate that the type of manual and educational background had no significant effect on participants' operation time, the accuracy of their control operations, or the appropriateness of their response to an emergency scenario.

However, the type of manual had a significant effect on procedural adherence: participants using an explanatory manual adhered to procedures more closely than those using a procedural manual, though these findings were not replicated for adherence to wait times. All participants significantly increased their understanding of the process after the training session, with similar levels of knowledge retained after approximately two weeks, and chemical engineering participants outscored those from other faculties overall on the questionnaires. These findings identify the usefulness of incorporating explanatory information in a training manual and the aspects of process operation that improve with an increase in operator understanding. The work provides evidence of the importance of operator training and understanding of vital concepts, and of its impact across the production process, and offers insight into the development of better and more appropriate training programs.
Item: Graphene and Glass Flake Nanocomposites Coatings for Corrosion Mitigation in Chloride Rich Environments (University of Waterloo, 2018-08-02). Authors: Alhumade, Hesham; Elkamel, Ali; Yu, Aiping.

Motivated by the need for protective coatings with enhanced protection properties, especially corrosion resistance, in the oil and gas industry, this research focuses on the synthesis and evaluation of various polymer composites as protective coatings on different metal substrates in chloride-rich environments. In many areas of application, including the oil and gas industry, metal substrates are continuously exposed to deterioration factors including corrosion, impact, and thermal and UV degradation, and the rates of deterioration can be further accelerated in certain environments. For example, the rate of metal deterioration due to corrosion accelerates in a chloride-rich environment, significantly reducing the life span of metal substrates in many fields, including oil and gas. For instance, in offshore operations, drilling rigs are continually exposed to chloride-rich ocean waves, which can accelerate corrosion of the rigs' many metal components. Therefore, various corrosion mitigation techniques, including protective coatings, are used to attenuate the corrosion rate and extend the life span of metal substrates. Protective coatings can themselves be exposed to degradation factors including UV, thermal degradation, and impact, so other protective properties of the prepared coatings were evaluated in addition to corrosion resistance. The studies focused on incorporating pristine graphene and glass flake into different polymer resins, such as epoxy and polyetherimide, and evaluated the composites as protective coatings on different metal substrates, such as copper, stainless steel 304, and cold rolled steel. Furthermore, the studies investigated enhancing the protective properties of the composite coatings by surface modification and functionalization of the fillers, to improve the level of interaction between the polymer resin and the fillers.

The synthesized composites are characterized using X-ray diffraction (XRD) and Fourier transform infrared (FTIR) techniques, while the dispersion of the fillers in the polymeric matrices is examined using transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The corrosion protection properties of the composite coatings are examined using electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV) or potentiodynamic techniques. The interfacial adhesion between the metal substrates and the coatings is evaluated according to the ASTM D3359 standard, while impact resistance and UV degradation are evaluated according to the ASTM D2794 and ASTM D4587 standards, respectively. The thermal degradation behavior of the coatings is evaluated by measuring the rate of degradation, or weight loss, using thermal gravimetric analysis (TGA), and by examining the influence of the fillers on the glass transition temperature using differential scanning calorimetry (DSC). The studies reveal that incorporating the different fillers enhances the corrosion resistance of the polymer resins, as well as other properties such as impact resistance, thermal stability, and UV degradation resistance, and that the level of enhancement can be increased by raising the filler loading. Interestingly, however, increasing the filler loading may negatively affect important properties such as interfacial adhesion: higher loadings can weaken the adhesion between the protective coating and the coated metal substrate. Contributions of this research include the preparation and examination of nanocomposite coatings on different metal substrates incorporating pristine nano-fillers such as graphene and glass flake, and the first report of new, simple recipes for the surface functionalization of graphene oxide and glass flake before their use in nanocomposite coatings with enhanced protective properties, including corrosion resistance and thermal stability.
This research demonstrated the use of graphene oxide membranes supported on polyethersulfone films (GO/PES) for treating high-salinity water, a simulated produced water model (PWM), and PWM with simulated foulants via pervaporation separation. The membranes showed a maximum water flux of 47.8 L m⁻² h⁻¹ for NaCl solutions in pervaporation testing operated at 60 °C, with salt and organic rejections of 99.9% and 56%, respectively. In long-term pervaporation tests over 72 hours, the membranes showed a decline of 50–60% from the initial flux in the worst case. In-depth investigation of the Zn²⁺ crosslinker showed that it hydrolyzes to Zn(OH)₂ as long-term pervaporation progresses, with much of it being leached out. Consequently, since GO membranes are not stable in water, they remain challenging to deploy industrially. A more stable GO membrane in the aqueous phase was therefore proposed. The membrane's stability was enhanced by divalent and trivalent metal cations (Zn²⁺ and Fe³⁺ crosslinkers, respectively) and by partial reduction under vacuum. Two fabrication orders were investigated: crosslinking rGO (method I) and reducing M⁺–rGO (method II). The prepared membranes were characterized and their performance evaluated. Fe³⁺–rGO prepared by method II showed the best organic solute rejection, at 69%. In a 12-hour long-term pervaporation experiment, Zn²⁺–rGO membranes showed a flux drop of only 6%, whereas the Zn²⁺–GO membrane's flux dropped by 24%. Additionally, the stability of the membranes was tested via an abrasion method using a rotary wheel abrader; these experiments revealed that Fe³⁺–rGO membranes had the greatest mechanical integrity, with an abrasion resistance of 95% relative to the initial control (non-reduced and non-crosslinked) GO/PES membrane.Item Identification of Dynamic Metabolic Flux Balance Models Based on Parametric Sensitivity Analysis(University of Waterloo, 2016-06-15) Martinez Villegas, Ricardo; Elkamel, Ali; Budman, HectorA dynamic system can be described by a dynamic mathematical model involving a set of physicochemical parameters. Parametric sensitivity analysis studies the effect of changes in these parameters on model outputs of interest. If a system is operated within a region of high sensitivity, any small change in the parameter values drastically affects the output; hence, it is essential to be able to predict this sensitivity when designing, operating, or optimizing a system based on the model. Dynamic biological models that describe gene regulation, signalling, and metabolic networks depend strongly on a large number of parameters, and most of these models are highly nonlinear and involve a high-dimensional state space. Conventional parametric sensitivity analysis, which examines the effect of each parameter independently at one specific moment, is generally inaccurate since it ignores correlations between parameters; it is therefore very important to account for correlations when conducting a parametric sensitivity analysis. Model parameters are never known accurately and are consequently typically described by a range of values. Some parameters may be measured directly, but even then they will exhibit variability due to noise, e.g. a flow rate measured by a noisy flow meter.
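To make concrete the one-at-a-time analysis criticized above, the sketch below perturbs each parameter of a toy exponential-growth model independently and reports normalized finite-difference sensitivities. The model, parameter values, and step size are illustrative assumptions, not elements of the thesis; the point is only that each parameter is varied in isolation, ignoring correlations.

```python
import numpy as np

def model(params, t=10.0):
    """Toy output: biomass after time t under exponential growth.
    params = [x0 (initial biomass), mu (growth rate)] -- illustrative only."""
    x0, mu = params
    return x0 * np.exp(mu * t)

def oat_sensitivities(params, rel_step=1e-6):
    """One-at-a-time normalized sensitivities S_i = (p_i / y) * dy/dp_i.
    Each parameter is perturbed independently, ignoring correlations --
    exactly the limitation the correlated analysis in the thesis addresses."""
    params = np.asarray(params, dtype=float)
    y0 = model(params)
    sens = np.empty_like(params)
    for i, p in enumerate(params):
        perturbed = params.copy()
        h = rel_step * p
        perturbed[i] = p + h
        sens[i] = (model(perturbed) - y0) / h * (p / y0)
    return sens

print(oat_sensitivities([0.1, 0.4]))  # ~[1.0, 4.0]: output scales linearly
# with x0 and exponentially with mu over the 10 h horizon.
```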
The variability in the values of model parameters that cannot be measured directly arises from two main sources: (i) noise in the data, and (ii) process disturbances that translate directly or indirectly into changes in the parameters. In the presence of measurement noise, identifying model parameters from data yields parameter values known only within bounds, at different levels of confidence. Process disturbances may affect a parameter's value directly, e.g. changes in the initial concentration of a metabolite in a batch culture, or indirectly, e.g. changes in oxygen transfer due to changes in aeration rates. This thesis focuses on the identification of model parameters for biochemical systems. Models describing such systems are based on the biochemical reactions occurring within an organism, which produce or consume the components essential to grow, reproduce, preserve cell structures, and respond to environmental changes; this group of reactions is collectively referred to as a metabolic network. Dynamic Flux Balance Analysis, the particular modeling method that is the focus of the current work, can be used to study microbial metabolic networks. This type of mathematical model can simulate the metabolism of an individual cell by describing the flux distribution inside a cellular network. The approach is based on maximizing a biologically motivated objective, such as growth rate or ATP production, subject to constraints on the rates of change of certain metabolites (a generic sketch of this formulation is given below). Several other approaches have been developed to simulate the responses of cells to different stimuli; nevertheless, compared to these, Dynamic Flux Balance Analysis has the advantage of a relatively small number of parameters to calibrate against data, resulting in lower sensitivity to noise and requiring smaller data sets for calibration. In view of these advantages, this thesis focuses on this modeling approach, which is becoming increasingly popular in biotechnology and systems biology. The research presented focuses on the robust identification of dynamic metabolic flux models based on parametric sensitivity analysis; the case study chosen to illustrate the proposed method is diauxic growth of Escherichia coli in a batch culture. The approach shows how to identify the parameters of the dynamic model through a parametric sensitivity analysis that explicitly accounts for correlations in the data. Sensitivity is quantified by a parameter sensitivity spectrum, and the parameters are then ranked on this basis to assess whether a subset of them can be eliminated from further analysis. Finally, identification of the remaining significant parameters is based on maximizing an overall parametric sensitivity measure subject to set-based constraints derived from the available data.
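As a point of reference for the modeling approach described above, the generic static-optimization form of dynamic flux balance analysis couples an inner flux-balance linear program to outer extracellular mass balances, as sketched below. This is a textbook-style sketch under generic notation, not the thesis's exact E. coli model.

```latex
% Generic DFBA (static optimization approach) -- a textbook-style sketch,
% not the thesis's exact model. At each time step an inner LP is solved:
%   v - intracellular flux vector, S - stoichiometric matrix,
%   c - objective weights (e.g., growth rate or ATP production),
% and the optimal fluxes drive the extracellular dynamics:
%   X - biomass concentration, z - extracellular metabolite concentrations.
\[
  \max_{v}\; c^{\top} v
  \quad \text{s.t.} \quad S v = 0, \qquad v_{\min}(z) \le v \le v_{\max}(z),
\]
\[
  \frac{dX}{dt} = \mu(v)\, X, \qquad \frac{dz}{dt} = A\, v\, X,
\]
% where the flux bounds encode uptake kinetics and the constraints on
% metabolite rates of change mentioned in the abstract.
```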
The parametric sensitivity method is global in the sense that it examines the simultaneous variation of all the model's outputs instead of focusing on output variables one at a time.Item Implementation of Power-to-Gas to Reduce Carbon Intensity and Increase Renewable Content in Liquid Petroleum Fuels(University of Waterloo, 2017-08-28) Alsubaie, Abdullah; Elkamel, AliPower-to-gas (PtG) is an emerging energy storage concept that can convert surplus and intermittent renewable power into marketable hydrogen while also providing other ancillary services for the electrical grid. In the case of Ontario, excess power arises during periods of low electricity demand as a result of substantial baseload nuclear generation and the increasing integration of intermittent renewable sources into the electrical grid. This thesis develops simulations and analyses of Ontario's energy system to illustrate the use of PtG when its electrolytic hydrogen is employed in the gasoline production cycle to reduce the carbon intensity of the production process and increase the renewable content of this traditional transportation fuel. The work includes a case study of a simulated refinery to evaluate the production cost and life cycle emissions of different production scenarios involving the deployment of polymer electrolyte membrane (PEM) electrolyzers to meet the refinery's hydrogen demand. The study also examines using the province's surplus baseload generation (SBG), which currently results in net exports to neighboring jurisdictions, together with curtailed wind and nuclear generation capacity, to meet the overall demand of the refining industry. Furthermore, a comparative assessment is conducted of blending 10% corn ethanol versus supplying electrolytic hydrogen via PtG in terms of the 'well to wheel' (WTW) impacts of gasoline fuel, according to the metrics of total energy use, greenhouse gas emissions, and criteria air pollutants. The study finds that steam methane reforming (SMR) provides lower-cost hydrogen as a result of currently low natural gas prices, even under a stringent carbon-pricing policy; however, electrolytic hydrogen production shows the potential to curb significant carbon emissions as a substitute for SMR hydrogen. At the level of a single refinery, using electrolytic hydrogen is comparable to removing as many as 35,000 gasoline passenger vehicles from the road when 130 PEM electrolyzer units (1 MW nameplate capacity each) are installed. The analysis also shows that PtG has the potential to supply the province's refineries with their entire hydrogen demand using only a fraction of the surplus power, particularly when making use of available seasonal storage, at least for the next four years. Moreover, PtG is found to decrease natural gas consumption in the gasoline cycle by 4.6% and to increase the renewable content of gasoline by extending the utilization of wind and hydro power. Deploying electrolytic hydrogen reduces gasoline carbon intensity by 0.5 gCO2e per MJ of fuel; applied to annual gasoline sales in Ontario, this offers a reduction of 0.26 megatonnes of greenhouse gas (GHG) emissions per year. PtG may also contribute to lowering VOC, NOx, PM10, and PM2.5 criteria air pollutant emissions from the gasoline cycle, which cannot be achieved by blending corn-based ethanol.
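To put the 130-unit figure above in scale, the sketch below estimates annual electrolytic hydrogen output from installed PEM capacity. The specific energy consumption (~55 kWh per kg H2) and the capacity factor are illustrative assumptions in the typical PEM range, not values taken from the thesis.

```python
# Back-of-the-envelope hydrogen output from PEM electrolyzers.
# Assumptions (illustrative, not from the thesis):
#   - 130 units x 1 MW nameplate, as in the refinery case study
#   - specific energy consumption ~55 kWh per kg H2 (typical PEM range)
#   - 60% capacity factor, reflecting operation on surplus baseload power

units = 130                # PEM electrolyzer units
unit_power_mw = 1.0        # nameplate capacity per unit (MW)
capacity_factor = 0.60     # assumed fraction of the year at full load
kwh_per_kg_h2 = 55.0       # assumed specific energy consumption

annual_energy_kwh = units * unit_power_mw * 1000 * 8760 * capacity_factor
annual_h2_tonnes = annual_energy_kwh / kwh_per_kg_h2 / 1000

print(f"Annual hydrogen output: {annual_h2_tonnes:,.0f} t/yr")
# ~12,400 t/yr under these assumptions -- the order of magnitude behind
# the refinery-scale comparisons quoted above.
```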
Accordingly, the results of this thesis outline the benefits of using power-to-gas to mitigate the existing issue of surplus power generation. Utilizing the excess electricity to produce hydrogen for refinery end use also increases the utilization of CO2-free energy and the renewable content of gasoline within its life cycle production scheme.Item Improved Dynamic Latent Variable Modeling for Process Monitoring, Fault Diagnosis and Anomaly Detection(University of Waterloo, 2024-01-04) Zhang, Haitian; Zhu, Qinqin; Elkamel, AliDue to the rapid advancement of modern industrial processes, the large number of measured variables increases the complexity of systems, progressively driving the development of multivariate statistical analysis (MSA) methods that exploit valuable information from the collected data for predictive modeling, fault detection, and diagnosis, such as partial least squares (PLS), canonical correlation analysis (CCA), and their extensions. However, these methods suffer from certain issues, including the irrelevant information extracted by PLS and CCA's inability to exploit quality information. Latent variable regression (LVR) was designed to address these issues, but it has not been fully and systematically studied. A concurrent kernel LVR (CKLVR) with a regularization term is designed for collinear and nonlinear data to construct a full decomposition of the original nonlinear data space and to provide comprehensive information about the system. Further, since dynamics are inevitable in practical industrial processes, a dynamic auto-regressive LVR (DALVR) is also proposed, based on regularized LVR, to capture dynamic variations in both process and quality data. A comprehensive monitoring framework and a fault diagnosis and causal analysis scheme based on DALVR are developed, and their superiority is demonstrated in case studies involving the Tennessee Eastman process, Dow's refining process, and a three-phase flow facility. In addition to MSA approaches, autoencoder (AE) technology is extensively used in complicated processes to handle the expanding dimensionality caused by the increasing complexity of industrial applications. Beyond modeling and fault diagnosis, anomaly detection also draws great attention as a means to maintain performance, avoid economic losses, and ensure safety during industrial operation. In view of its advantages in dimensionality reduction and feature retention, AE technology is widely applied for anomaly detection monitoring. Considering both the high dimensionality and the dynamic relations between elements in the hidden layer, an improved autoencoder with a dynamic hidden layer (DHL-AE) is proposed and applied for anomaly detection monitoring. Two case studies, involving the Tennessee Eastman process and wind data, show the effectiveness of the proposed algorithm.Item Improved Slow Feature Analysis for Process Monitoring(University of Waterloo, 2022-08-22) Saafan, Hussein; Elkamel, Ali; Zhu, QinqinUnsupervised multivariate statistical analysis models are valuable tools for process monitoring and fault diagnosis. Among them, slow feature analysis (SFA) is widely studied and used due to its explicit statistical properties; it aims to extract slowly varying, invariant features of temporally varying signals. This inclusion of dynamics in the model is important when working with process data, where new samples are highly correlated with previous ones.
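For readers unfamiliar with SFA, the classical optimization problem it solves (due to Wiskott and Sejnowski) is sketched below as background; this is the standard formulation, not a statement of the thesis's extended variants.

```latex
% Classical SFA problem (Wiskott & Sejnowski, 2002): find outputs
% y_i(t) = g_i(x(t)) that vary as slowly as possible.
%   <.> denotes temporal averaging; \dot{y} is the time derivative.
\[
  \min_{g_i}\; \big\langle \dot{y}_i^{\,2} \big\rangle
  \quad \text{s.t.} \quad
  \langle y_i \rangle = 0, \quad
  \langle y_i^{2} \rangle = 1, \quad
  \langle y_i y_j \rangle = 0 \;\; (j < i),
\]
% where the constraints exclude trivial constant solutions and force
% successive slow features to be decorrelated.
```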
However, the existing variants of SFA cannot exploit the increasingly tremendous data volumes of modern industries, since they require the data to be fed in as a whole during the training stage. Further, sparsity is desirable to provide interpretable models and prevent overfitting. To address these issues, a novel algorithm for inducing sparsity in SFA is first introduced, referred to as manifold sparse SFA (MSSFA). The non-smooth sparse SFA objective function is optimized using proximal gradient descent, and the SFA constraint is fulfilled using manifold optimization (a generic sketch of the proximal step appears below). An associated fault detection and diagnosis framework is developed that retains the unsupervised nature of SFA. Compared to SFA, sparse SFA (SSFA), and sparse principal component analysis (SPCA), MSSFA shows superior performance in computational complexity, interpretability, fault detection, and fault diagnosis on the Tennessee Eastman process (TEP) and three-phase flow facility (TPFF) data sets, and its sparsity is much improved over SFA and SSFA. Furthermore, to exploit the increasing number of collected samples efficiently, a covariance-free incremental SFA (IncSFA) is adapted in this work, which handles massive data efficiently and has a feature-updating complexity that is linear in the data dimensionality. An IncSFA-based process monitoring scheme is also proposed for anomaly detection, and a new incremental MSSFA (IncMSSFA) algorithm is introduced that uses the same monitoring scheme. These two algorithms are compared against recursive SFA (RSFA), which can also process data incrementally. The efficiency of IncSFA-based monitoring is demonstrated with the TEP and TPFF data sets. The inclusion of sparsity in IncMSSFA provides superior monitoring performance at the cost of quadratic complexity in the data dimensionality; this is still an improvement over the cubic complexity of RSFA.Item Integration of Hydrogen Technology into Large Scale Industrial Manufacturing in Ontario(University of Waterloo, 2022-01-12) Preston, Nicholas; Fowler, Michael; Elkamel, AliPower-to-Gas is particularly applicable in Ontario's energy market due to the abundance of curtailed renewable energy. During off-peak hours this yields not only low-carbon but also low-cost electricity, making hydrogen generation a highly profitable and environmentally friendly venture. Despite these benefits, there has yet to be full-scale adoption of Power-to-Gas technology, both globally and in the local market. Eliminating this hesitation requires diverse, profitable proof-of-concept installations as well as addressing public uncertainty regarding the inherent safety of the technology. It is the objective of this thesis to address these concerns by demonstrating the versatility of hydrogen in different energy system configurations, showing how layered revenue streams can produce profits in the face of policy uncertainty, and outlining the risks and the control methods available to mitigate the safety concerns associated with hydrogen. The first paper presented in this thesis addresses whether a business case with strong financial returns is possible for a finished-goods manufacturer; it demonstrates the potential to capitalize on multiple revenue streams under a single investment and highlights ancillary benefits including reduced air pollution and electrical grid balancing.
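Returning to the sparsity mechanism flagged in the MSSFA abstract above: for an L1-regularized objective, the proximal operator is elementwise soft-thresholding, and the generic proximal gradient update takes the form sketched below. The toy least-squares objective, step size, and data are assumptions for illustration only; MSSFA's actual objective and its manifold constraint are not reproduced here.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1: elementwise soft-thresholding.
    This is the step that zeroes out small entries and induces sparsity."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def proximal_gradient(grad_f, w0, step, lam, n_iter=200):
    """Generic proximal gradient descent for  min_w f(w) + lam * ||w||_1.
    grad_f: gradient of the smooth part f (a toy least-squares term below)."""
    w = w0.copy()
    for _ in range(n_iter):
        w = soft_threshold(w - step * grad_f(w), step * lam)
    return w

# Toy demo: recover a sparse weight vector from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ w_true + 0.05 * rng.standard_normal(100)

grad = lambda w: X.T @ (X @ w - y) / len(y)
w_hat = proximal_gradient(grad, np.zeros(20), step=0.1, lam=0.02)
print(np.nonzero(np.abs(w_hat) > 1e-3)[0])  # expected: [ 2  7 11]
```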
This design was developed for an automotive manufacturer, requiring a total capital investment of $2,620,448 and resulting in a payback period of 2.8 years. Based on a sensitivity analysis, the annual revenue from selling hydrogen at $1.50 to $12 per kg H2 ranges from $54,741 to $437,928. In the modelled carbon tax program, CO2 allowances can be sold at $18 to $30 per tonne CO2, and the model predicts a CO2 offset of 2,359.7 tonnes. The second paper develops a case study that further expands on the use of a single pathway: hydrogen-enriched natural gas. This paper analyzes the integration of an electrolyzer unit into a manufacturer's CHP microgrid, explores the impact of a carbon tax on its feasibility, and carries out a failure mode and effects analysis to highlight the safety of the technology. Currently realizable capital incentives can yield IRRs as high as 13.76% with net present values of approximately $750,000. To achieve financial feasibility, the carbon price in Ontario must meet or exceed $60 per tonne CO2e. In all economically feasible cases, the system, operating under an optimal storage coefficient and operational limit, produced an emission offset greater than 3,000 tonnes CO2 per year.
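Using the figures quoted for the first case study, the sketch below reproduces the simple payback arithmetic and adds an NPV at an assumed discount rate. The implied flat cash-flow profile, the 8% discount rate, and the 10-year horizon are illustrative assumptions, not values from the thesis.

```python
# Simple payback and NPV check for the automotive PtG case study.
# Capital cost and payback period are the quoted figures; the implied
# annual cash flow, 10-year horizon, and 8% discount rate are assumptions.

capex = 2_620_448          # total capital investment ($, quoted)
payback_years = 2.8        # quoted payback period
annual_cash_flow = capex / payback_years   # ~$935,874/yr implied by payback

discount_rate = 0.08       # assumed
horizon_years = 10         # assumed project life

npv = -capex + sum(
    annual_cash_flow / (1 + discount_rate) ** t
    for t in range(1, horizon_years + 1)
)
print(f"Implied annual cash flow: ${annual_cash_flow:,.0f}")
print(f"NPV over {horizon_years} yr at {discount_rate:.0%}: ${npv:,.0f}")
```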