Civil and Environmental Engineering

Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906

This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.

Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).

Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.

Browse

Recent Submissions

Now showing 1 - 20 of 930
  • Item
    Electrochemical Modeling of Bioenergy Generation from Wastewater by Microbial Fuel Cells
    (University of Waterloo, 2025-05-09) Li, Yiming
    As global water scarcity and environmental pollution continue to escalate, innovative wastewater treatment technologies are needed to ensure sustainable water resource management. Conventional wastewater treatment methods, such as activated sludge processes, are energy-intensive, costly, and contribute significantly to greenhouse gas emissions. Microbial fuel cells (MFCs) present a promising alternative, harnessing electroactive bacteria to simultaneously degrade organic pollutants and generate electricity. By leveraging microbial metabolism, MFCs can convert chemical energy in wastewater into usable electrical energy, offering a dual benefit of pollution reduction and renewable energy production. This study focuses on developing a numerical simulation framework to optimize MFC performance, with an emphasis on real-world application at the Guelph Water Resource Recovery Centre (WRRC). A steady-state microbial fuel cell model was developed and validated using experimental data from previous studies. The model employs a finite difference method to solve mass balance equations for key reactants and products, including acetate, dissolved CO₂, protons, and oxygen. The simulation results highlight the influence of various operational parameters—such as substrate concentration, internal resistance, wastewater flow rate, and temperature—on the performance of a dual-chamber MFC. The study further compares MFC efficiency with conventional wastewater treatment processes, demonstrating a significantly higher chemical oxygen demand (COD) removal rate in MFCs (0.0633 kg COD/m³/day), which is approximately 4.7 times greater than that observed at the WRRC. The results emphasize the role of microbial activity and electrochemical interactions in optimizing power generation and pollutant degradation. Key limitations such as oxygen transport restrictions, internal resistance, and pH imbalances were identified, suggesting areas for improvement in MFC design. 
Numerical simulations were further extended to model full-scale integration within the WRRC, providing insights into the feasibility of MFC technology as an alternative treatment strategy. Despite challenges in large-scale deployment, MFCs show strong potential for reducing wastewater treatment energy demands and mitigating environmental impacts. This research contributes to the advancement of MFC applications in wastewater treatment by demonstrating the effectiveness of numerical modeling in predicting and optimizing system performance.
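The finite-difference approach described above can be sketched in miniature: a steady-state diffusion-reaction balance for a single species (e.g., acetate) across the anode region, with first-order consumption. This is an illustrative sketch only; the grid size, coefficients, boundary conditions, and the `solve_steady_state` helper are assumptions for demonstration, not the thesis model.

```python
def solve_steady_state(C_bulk, D, k, L, n=50):
    """Finite-difference solution of D*C'' = k*C on [0, L]:
    steady-state diffusion with first-order consumption.
    BCs: C(0) = C_bulk (bulk liquid), C'(L) = 0 (no flux at electrode)."""
    h = L / (n - 1)
    # Tridiagonal coefficients for D*(C[i-1] - 2*C[i] + C[i+1])/h^2 - k*C[i] = 0
    a = [0.0] * n  # sub-diagonal
    b = [0.0] * n  # diagonal
    c = [0.0] * n  # super-diagonal
    d = [0.0] * n  # right-hand side
    b[0], d[0] = 1.0, C_bulk            # Dirichlet BC at x = 0
    for i in range(1, n - 1):
        a[i] = D / h**2
        b[i] = -2.0 * D / h**2 - k
        c[i] = D / h**2
    # Zero-flux BC at x = L via a ghost node (C[n] = C[n-2])
    a[n - 1] = 2.0 * D / h**2
    b[n - 1] = -2.0 * D / h**2 - k
    # Thomas algorithm: forward sweep, then back substitution
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    C = [0.0] * n
    C[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        C[i] = (d[i] - c[i] * C[i + 1]) / b[i]
    return C

# Illustrative parameters: a thin (0.1 mm) layer with slow consumption,
# so the profile stays close to the bulk concentration.
profile = solve_steady_state(C_bulk=1.0, D=1e-9, k=1e-4, L=1e-4)
```

The same tridiagonal structure extends to the other species balances (dissolved CO₂, protons, oxygen) by swapping in their diffusivities and reaction terms.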
  • Item
    Upscaling and downscaling snow processes with machine learning in watershed models
    (University of Waterloo, 2025-05-08) Burdett, Hannah
    Hydrologic models play a vital role in understanding and predicting the movement of water within watersheds, providing essential insights for effective management and sustainability of water resources. However, watersheds exhibit significant heterogeneity in their landscape properties and complex responses to spatiotemporal variations in climatic inputs. This variability introduces a gap between the representation of physical processes at the point scale and their behaviour at the watershed scale, making it challenging to accurately capture the full complexity of the hydrologic cycle across different spatial scales. Bridging this gap requires the identification of effective scaling approaches tailored to capture the complexities across scales. Scaling approaches seek to translate information from one scale to another, whether moving from a smaller to a larger scale (upscaling) or from a larger to a smaller scale (downscaling). Although various approaches in the literature have been applied to develop scaling methods for forcing variables, such as precipitation and temperature, and fluxes (e.g., evapotranspiration), there is a notable gap in deriving and applying scaling techniques for snow-related variables, such as snow water equivalent (SWE), snowmelt, or sublimation. Addressing this gap may help improve hydrologic model accuracy in snow-dominated regions, where snow dynamics significantly influence water availability and watershed resources. The primary objective of this thesis is to develop, implement, and evaluate machine learning-based upscaling methodologies to aid in understanding the relationship between local-scale snow-related variables, landscape heterogeneity, and the large-scale hydrologic response of a catchment. Such methods are useful for effectively simulating the net impact of local variability in snow processes without resorting to fine-resolution models.
A secondary focus of this research is to identify the conditions under which emergent constitutive relationships specific to snow-related fluxes are (or are not) valid and to assess the transferability of these relationships. Finally, this work introduces a machine learning-based downscaling approach that refines large-scale mean model outputs into localized snow states and fluxes. Together, these scaling techniques explore the potential of machine learning to address challenges in hydrologic scaling specific to snow-related fluxes.
  • Item
    Real-Time Short-Term Intersection Turning Movement Flows Forecasting Using Deep Learning Models for Advanced Traffic Management and Information Systems
    (University of Waterloo, 2025-05-07) Zhang, Ce
    Traffic congestion remains a persistent challenge in urban transportation systems, causing excessive travel delays, increased fuel consumption, and severe environmental pollution. To address these issues, Advanced Traffic Management and Information Systems (ATMIS) have been developed, integrating real-time traffic monitoring, adaptive control strategies, and data-driven decision-making to enhance overall traffic efficiency. A crucial component of ATMIS is the real-time forecasting of intersection Turning Movement Flows (TMFs), which provides essential data for optimizing signal timings, improving vehicle routing, and implementing proactive congestion mitigation strategies. By leveraging accurate TMFs predictions, transportation agencies can dynamically adjust traffic signals, enhance intersection operations, and reduce delays, ultimately improving urban mobility and minimizing environmental impacts. While numerous traffic forecasting models exist, they face significant limitations in capturing the complex spatial and temporal patterns inherent in intersection-level TMFs, as they primarily rely on historical traffic data without adequately modeling these dependencies. Moreover, most existing approaches fail to incorporate exogenous factors, such as weather conditions, road characteristics, and other time-dependent variables, which significantly influence traffic flow but are often ignored. These shortcomings lead to poor generalization performance when applied to hold-out intersections (few-shot) and unseen regions (zero-shot), making them less effective in real-world dynamic traffic environments. To overcome these challenges, this study systematically develops and evaluates a deep learning-based TMFs forecasting framework designed for improved generalization and interpretability. 
First, we employ a Parallel Bidirectional LSTM (PB-LSTM) with a multilayer perceptron (MLP) to capture both long-term seasonality and spatial dependencies, thereby enhancing the model's transferability across different locations and improving performance across hold-out intersections. Second, we integrate an encoder-decoder architecture using the Deep Autoregressive (DeepAR) model, which enables probabilistic forecasting and quantifies uncertainty, ensuring robust predictions under varying traffic conditions. Third, we leverage the Temporal Fusion Transformer (TFT) to assess the relative importance of external covariates, such as weather conditions and road characteristics, improving interpretability and model reliability by identifying speed zone, road category, hour of the day, and temperature as key influential factors. Finally, we explore the potential of TimesFM, a decoder-only model, to enhance zero-shot learning capabilities, demonstrating strong performance in previously unseen intersections and new city datasets, particularly when enhanced with EMD and RF. To evaluate model performance, we conduct a series of experiments, including hold-out intersection tests, cross-city generalization assessments, and evaluations under extreme weather conditions, to assess robustness and adaptability. Experimental results highlight the effectiveness of integrating exogenous factors and hybrid modeling approaches in improving real-time TMFs forecasting accuracy, generalizability, and robustness under dynamic conditions. These insights provide valuable contributions to the development of scalable and interpretable deep learning models for intersection-level traffic flow prediction, supporting more adaptive and data-efficient traffic management strategies.
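As a small illustration of how probabilistic forecasts such as DeepAR's are commonly scored, the quantile (pinball) loss penalizes under- and over-prediction asymmetrically and is minimized when the forecast equals the true quantile. The traffic counts and function below are hypothetical, not data or code from the study.

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: under-prediction is weighted by q,
    over-prediction by (1 - q); lower is better."""
    total = 0.0
    for y, f in zip(y_true, y_pred):
        diff = y - f
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)

# Hypothetical observed left-turn counts and two quantile forecasts
obs      = [120, 135, 150, 160]
p50_fcst = [118, 140, 149, 158]   # median forecast
p90_fcst = [140, 155, 170, 182]   # upper-quantile forecast

loss_p50 = pinball_loss(obs, p50_fcst, 0.5)
loss_p90 = pinball_loss(obs, p90_fcst, 0.9)
```

Averaging this loss over several quantiles gives a single score for the whole predictive distribution, which is one common way to compare probabilistic forecasters under varying traffic conditions.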
  • Item
    Effect of Biofilm Formation on the Sorption of Per- and Polyfluoroalkyl Substances to Colloidal Activated Carbon
    (University of Waterloo, 2025-04-29) Moran, Erica Lynne
    Per- and polyfluoroalkyl substances (PFAS) are a class of contaminants that have garnered increasing concern due to their widespread presence and harmful effects on humans and ecosystems. PFAS enter the environment via many different pathways, with the release of PFAS-containing aqueous firefighting foams being a major source of groundwater contamination. Because PFAS are highly resistant to most chemical and biological degradation processes, they are currently removed from groundwater mainly by ex-situ adsorption, which is expensive and energy intensive. Recently, activated carbon (AC) permeable reactive barriers (PRBs) have been proposed and used in-situ to limit the downgradient migration of PFAS by groundwater. AC PRBs are created by injecting powdered activated carbon (PAC) or colloidal activated carbon (CAC) into the subsurface to generate a stationary zone that removes PFAS by adsorption. As with any adsorption technology, however, PFAS breakthrough will occur once adsorptive sites in the barrier are exhausted. To improve our understanding of the ability of AC PRBs to adsorb PFAS and their longevity, there is a need for research that evaluates the adsorption of PFAS on AC and the factors affecting this process. The research reported in this thesis focused on one potential influencing factor, namely biofilm. Specifically, the objectives of this study were first, to evaluate if a biofilm can form on small (<5 µm) CAC particles, and second, to examine the impact that biofilm may have on the adsorption of PFAS to CAC. To address the first objective, the growth of Pseudomonas putida (P. putida), an aerobic bacterium, in the absence of particulate and in the presence of either CAC or fine silica was investigated. P. putida was selected because it has been shown to readily form a biofilm, is not infectious to humans, is commonly found in the environment, and has applications in the bioremediation of organic contaminants.
Analyses of the bacterial samples by confocal laser scanning microscopy (CLSM) indicated that the bacteria remained planktonic when no particulate was present but formed a biofilm consisting of cells and CAC or sand particles held together by extracellular polymeric substances (EPS). Over seven days of growth, the biofilm formed on CAC increased in thickness and decreased in roughness as it developed and formed more cohesive structures. Results suggest that P. putida is capable of forming a biofilm on CAC particles. Rather than the classical depiction of a biofilm adhered to a single surface, the P. putida biofilm was formed on an aggregate of CAC particles, which were held together by EPS. To address the second objective, the adsorption of perfluorooctane sulfonate (PFOS, a hydrophobic PFAS) and perfluoropentane carboxylate (PFPeA, a hydrophilic PFAS) on virgin and biofilm-coated CAC was investigated. P. putida was grown in the presence of CAC, and either PFOS or PFPeA was added to the microcosms once a biofilm was formed. Because the adsorption of PFAS to CAC is known to be impacted by the presence of dissolved organic carbon (DOC), experiments were also conducted to determine the impact of broth (used for culture growth) concentration on the extent of PFAS sorption to CAC and the development of the biofilm. In the experiments without bacteria, the amount of PFOS adsorbed to CAC decreased as the concentration of broth was increased. The relationship between aqueous and sorbed PFOS could not be described by a linear, Freundlich, or Langmuir isotherm model, likely due to competitive sorption between the DOC present in the broth and PFOS. In the experiments with P. putida, it was observed that as the broth concentration increased, the biofilm became thicker and smoother, as the additional broth appeared to have aided biofilm development.
Subsequent experiments, conducted with 3 mg/L broth and 80 mg/L broth (which represented low and high DOC concentrations, respectively), revealed that the majority of PFOS sorption on virgin and biofilm-coated CAC occurred during the first three days, and the biofilm resulted in a decrease in PFOS adsorption. This decreased adsorption is presumed to be due to biofilm blocking sorption sites. For PFPeA, limited sorption occurred, and no significant difference was observed between the amount adsorbed in the bacteria-free CAC and P. putida-containing CAC systems. The difference in sorption between PFOS and PFPeA was attributed to decreased hydrophobic interactions between CAC and the shorter fluorinated tail of PFPeA. The results of this study improve our understanding of how biofilm may impact CAC PRBs implemented for the management of PFAS. Biofilm can form on cell-sized particles and, as a result, may reduce the adsorption of long-chain compounds, such as PFOS. The effect of biofilm on the adsorption of short-chain compounds, such as PFPeA, may be less prominent than for PFOS, as the extent of sorption is comparatively limited. Further investigation is required to evaluate the impacts of biofilm on CAC sorption of other PFAS, the interactions of biofilm with other groundwater parameters, and the extent to which biofilm plays a role in the longevity of CAC PRBs in column or field scale studies.
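For reference, the three isotherm forms the abstract says could not describe the PFOS data can be written as short functions relating aqueous concentration C to sorbed mass q. Any parameter values used here are hypothetical; this sketch only illustrates the model shapes, not the study's fitting procedure.

```python
def linear_isotherm(C, Kd):
    """Linear isotherm: q = Kd * C (constant partitioning)."""
    return Kd * C

def freundlich(C, Kf, n):
    """Freundlich isotherm: q = Kf * C^(1/n).
    n > 1 gives a concave (favourable) curve with no saturation limit."""
    return Kf * C ** (1.0 / n)

def langmuir(C, qmax, KL):
    """Langmuir isotherm: q = qmax * KL * C / (1 + KL * C).
    Monolayer model that saturates at qmax as C grows."""
    return qmax * KL * C / (1.0 + KL * C)
```

Competitive sorption (here, DOC competing with PFOS for sites) violates the single-solute assumption behind all three forms, which is consistent with the poor fits reported above.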
  • Item
    Saturation-Dependent Thermal Conductivity of Southern Ontario Soils
    (University of Waterloo, 2025-04-25) Islam, Zahidul
    Soil thermal conductivity is an important parameter in geotechnical and environmental engineering applications, influencing the performance of underground energy storage, ground heat exchangers, and other subsurface thermal systems. Through geotechnical characterization and laboratory measurements, this study investigates the thermal conductivity of 20 soil samples collected from seven locations in Southern Ontario. The key soil properties, including texture, moisture content, and bulk density, were analyzed to understand their impact on thermal conductivity. Measured thermal conductivity values were compared with published regression-based and normalized models to assess their predictive accuracy across diverse soil types. A statistical evaluation incorporating root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²) was performed to identify the best-performing models. The results indicate that the Lu et al. (2014) and Yoon et al. (2018) models are the most reliable regression-based models, demonstrating strong correlations with measured data, minimal bias, and low error margins. Among normalized models, the Côté and Konrad (2005) model exhibited superior adaptability and lower prediction errors, while Johansen’s (1975) model performed well but required calibration for extreme soil compositions. The results emphasize the significant influence of soil texture and moisture content on thermal conductivity, with silty and sandy soils exhibiting higher values due to their mineral composition and structural properties. The best-performing models effectively captured these variations, highlighting their applicability in geotechnical and environmental engineering.
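The three error metrics used in this kind of model comparison can be computed directly from paired measured and predicted values. A minimal sketch follows, with hypothetical conductivity values rather than the study's measurements:

```python
import math

def rmse(obs, pred):
    """Root mean square error: penalizes large deviations quadratically."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error: average magnitude of the residuals."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. model-predicted conductivities, W/(m·K)
measured  = [1.2, 1.8, 2.1, 0.9, 1.5]
predicted = [1.3, 1.7, 2.0, 1.0, 1.6]
scores = (rmse(measured, predicted), mae(measured, predicted),
          r_squared(measured, predicted))
```

Ranking candidate models by these three scores together, rather than any single one, guards against a model that minimizes average error while carrying a systematic bias.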
  • Item
    Assessing the Prevalence of Energy Hardship in Canada: An Enhanced Methodology Integrating Energy Modeling
    (University of Waterloo, 2025-04-25) Al Humidi, Sara
    The building sector has focused on addressing climate mitigation through the electrification and decarbonization of households, mainly by upgrading building envelopes and replacing combustion-based systems with electric heat pumps. However, the impact of climate change could result in more households falling into energy hardship, underscoring the need for an equitable transition. A household falls under energy hardship if its energy expenditure ratio exceeds the defined threshold, regardless of its total household income. Energy hardship encompasses both energy poverty and energy burden. Thus, a household experiences energy poverty if i) its energy expenditure ratio exceeds the defined threshold and ii) the total household income is below low-income cut-offs. A household experiences an energy burden if i) its energy expenditure ratio exceeds the defined threshold and ii) the total household income is above low-income cut-offs. Techno-economic factors such as energy costs, type of fuel, building age and envelope condition, and type of heating and cooling system in a household contribute to energy hardship. Socioeconomic factors such as income, education, and race are also catalysts to the problem, making energy poverty a multidimensional issue with great implications for public health, social equity, and environmental sustainability. This study aims to quantify energy hardship in Canada in 2019 and 2021 and identify the building and household characteristics experiencing energy poverty. Further analysis was completed for Ontario, Canada, to establish a correlation and quantify the impact that climate change and household electrification (e.g., switching from a natural gas furnace to a heat pump) have on energy hardship. The study identified key indicators of energy poverty and burden, confirming that household income is the most critical factor. Nearly 40% of Canadian households with an income of CAD$29,000 fall under energy poverty.
Older dwellings, which tend to be leaky with poor insulation and outdated HVAC systems, contribute to higher energy consumption. Single-detached homes with major repair requirements are likely to be energy burdened (17%). Additionally, socioeconomic factors play a role, with one-person households (31%) being the most affected by energy hardship. Education and employment also indirectly impacted energy poverty (10%); households with higher education and full-time employment were less likely to be energy poor. The Ontario-specific analysis mirrored national trends, revealing that energy burden is more pronounced in rural areas (36%). In addition, an energy simulation study was performed for a median energy-poor household in Ontario. The study investigated two scenarios: a business-as-usual scenario where the household performed minimal energy efficiency upgrades, and an electrification and decarbonization scenario where energy-poor households implemented measures such as envelope renovations and switching to a fully electric heating system (i.e., a cold climate air source heat pump). The energy modeling results revealed the importance of income levels in alleviating energy hardship. Regardless of the level of energy efficiency measures applied, the median energy-poor household remained in energy poverty after building enclosure and airtightness improvements, whether maintaining a natural gas furnace or fuel switching to a heat pump (11.2% and 14.3%, respectively). Households earning CAD$50,000 after tax came out of energy hardship after insulation and airtightness upgrades. However, the adoption of an electric heat pump worsened energy hardship by doubling the cost of electricity, despite reducing energy use intensity by nearly 50%. These results indicate that energy efficiency measures alone are not enough to lift households out of energy poverty (or hardship, in general) in Ontario.
By analyzing the prevalence, causes, and impacts of energy poverty in Canada and Ontario, this study aims to develop a replicable methodology that provides evidence-based insights to inform policy decisions and support the development of effective interventions that are inclusive and equitable for all households.
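The classification rules defined in the abstract (hardship when the energy expenditure ratio exceeds a threshold, split into poverty or burden by the low-income cut-off) can be expressed as a small function. The 6% threshold below is an assumed illustrative value, not the study's calibrated one, and the dollar figures in the example are hypothetical.

```python
def classify_energy_hardship(energy_cost, income, low_income_cutoff,
                             threshold=0.06):
    """Classify a household per the definitions above.
    threshold: energy expenditure ratio cut-off (illustrative 6% default).
    low_income_cutoff: income level separating poverty from burden."""
    ratio = energy_cost / income
    if ratio <= threshold:
        return "no hardship"
    # Ratio exceeds the threshold: income decides poverty vs. burden
    return "energy poverty" if income < low_income_cutoff else "energy burden"

# Hypothetical households (annual energy cost, after-tax income, cut-off)
low_income  = classify_energy_hardship(2500, 29000, 32000)
high_income = classify_energy_hardship(7000, 90000, 32000)
efficient   = classify_energy_hardship(4000, 90000, 32000)
```

This makes the asymmetry explicit: two households with the same expenditure ratio land in different categories purely because of where their income sits relative to the cut-off.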
  • Item
    Glass Fiber-Reinforced Polymer (GFRP) Reinforced Concrete Corner Joints Subjected to Opening Moments
    (University of Waterloo, 2025-04-24) Bashbishi, Lamar
    Concrete corner joints are elements in structures that transfer forces between their adjoining members. In recent decades, glass fiber-reinforced polymer (GFRP) has been gaining popularity due to its corrosion resistance and light weight. However, its linear elastic properties and lack of practical bond slip theory make it challenging for engineers to properly detail GFRP reinforcement in corner joints. Previous studies on GFRP-reinforced concrete closing joints have been conducted at the University of Waterloo; however, no research has been conducted on GFRP-reinforced concrete opening joints. The experimental program presented in this thesis consists of eight full-scale corner joint specimens which were subjected to monotonic opening moments. The specimens are divided into two groups based on their tensile rebar geometry within the cantilever slab: Type A specimens with straight tensile bars, and Type B specimens with hooked tensile bars. Within each group, the specimens were constructed with one of the following: a) an unreinforced joint panel, b) bent bars perpendicular to the inner corner, c) confining stirrups within the joint panel, or d) both bent bars and confining stirrups. The aim of the study was to determine the effects of each design choice on corner joint behaviour. Test results showed that increasing the development length of GFRP using hooked bars reduced bond-slip and increased joint strength and deformability. When primary tensile cracks were constrained using bent bars or confining stirrups, the main influence on joint strength became the strength of the concrete. Joints that contained bent bars perpendicular to the inner corner exhibited more consistent post-peak responses and had higher joint deformability than their base specimens. Joints with confining stirrups saw reduced widths of shear cracks as well as reduced bond-slip of the bars they confined.
Further studies on GFRP-reinforced concrete joints must be conducted, including the study of the effect of different bent bar areas and sizes, different member geometries, as well as different GFRP development lengths/anchorage methods.
  • Item
    Investigating the Performance of Straight and Bent GFRP Bars as Flexural Reinforcement for Glulam Beams
    (University of Waterloo, 2025-04-22) Shrimpton, Catherine
    The heightened interest in using wood as a sustainable building material has contributed to an increased demand for glued-laminated timber (glulam). Despite this, fundamental research is required on how to rehabilitate and retrofit deficient structural wooden members to extend the service life of the structure. This research focuses on the effects of reinforcement configurations consisting of glass-fibre reinforced polymer (GFRP) bars on the flexural behaviour of glulam beams. Of particular interest are the effects of reinforcement length, adhesive type, and knurling on the failure modes of the reinforced members when compared to unreinforced glulam. A total of eighteen pullout specimens were tested to investigate the effects of adhesive and knurling patterns on bond strength, and fourteen full-scale glulam beams were tested to failure under four-point static bending, including four unreinforced and ten GFRP-reinforced. The pullout test results showed that the texture and density of an adhesive played a critical role in the overall behaviour, with specimens using a fluid adhesive performing better than those using a dense one. The addition of GFRP reinforcement to the glulam beams contributed to an increase in strength, failure displacement, and stiffness by factors ranging between 1.16 – 1.30, 1.04 – 1.24, and 1.13 – 1.18, respectively, in comparison to unreinforced glulam, irrespective of the failure mode obtained. The effects of reinforcement length and termination point showed that the change from short to long bars resulted in improvements in maximum resistance and stiffness by factors of 1.10 and 1.19, respectively, for the bent bar reinforced specimens, and insignificant improvements for the straight bar reinforced specimens.
Additionally, the change from straight to bent bars resulted in improvements in maximum resistance and stiffness by factors of 1.06 and 1.03, respectively, for the specimens with longer lengths of bars, and insignificant improvements for the specimens with short lengths of bars. The addition of knurling in the full-scale GFRP-reinforced beams resulted in increases of 1.07 and 1.03 for the maximum resistance and stiffness, in comparison to beams without knurling. Additionally, a change in failure mode from shear to flexure was observed with the addition of knurling. A material model was developed to predict the flexural behaviour of unreinforced and GFRP-reinforced glulam beams, and the two proposed approaches were shown to generally capture the overall behaviour, with a tendency to overpredict displacements at initial failure. Finally, the expected improvement in tensile failure strains in flexure due to the reinforcement was not observed, owing to the mixed failure modes of shear and flexure. Strains from the digital image correlation system were observed to be lower than those measured by localized strain gauges, suggesting that measuring strains over a large area is critical.
  • Item
    Towards SLAM-Centric Inspection of Infrastructure
    (University of Waterloo, 2025-04-15) Charron, Nicholas
    The inspection and maintenance of civil infrastructure are essential for ensuring public safety, minimizing economic losses, and extending the lifespan of critical assets such as bridges and parking garages. Traditional inspection methods rely heavily on manual visual assessments, which are often subjective, labor-intensive, and inconsistent. These limitations have driven the development of robotic-aided inspection techniques that leverage mobile robotics, sensor fusion, computer vision, and machine learning to enhance inspection efficiency and accuracy. Despite advancements in robotic-aided inspection, existing works often focus on isolated components of the inspection process—such as improving data collection or automating defect detection—without providing a complete end-to-end solution. Many approaches utilize robotics to capture 2D images for inspection, but these lack spatial context, making it difficult to accurately locate, quantify, and track defects over multiple inspections. Other works extend this by detecting defects within images; however, without a robust 3D representation, defects cannot be precisely geolocated or measured in real-world dimensions, limiting their utility for long-term monitoring. While some studies explore 3D mapping for inspection, the majority rely on image-only Structure-from-Motion, which is known to be unreliable for generating dense and accurate maps, or are restricted to mapping along 2D surfaces, thereby failing to capture the full complexity of infrastructure assets. This thesis introduces a novel SLAM (Simultaneous Localization and Mapping)-centric framework for robotic infrastructure inspection, addressing these limitations by integrating lidar, cameras, and inertial measurement units (IMUs) into a mobile robotic platform. This system enables precise and repeatable localization, 3D mapping, and automated inspection of infrastructure assets. 
Three key challenges that hinder the development of a practical SLAM-centric inspection system are identified and addressed in this work. The first challenge pertains to the design and implementation of SLAM-centric robotic systems. This thesis demonstrates how sensor selection and configuration can be optimized to simultaneously support both high-accuracy SLAM and high-quality inspection data collection. Additionally, it establishes a robotic platform-agnostic design, allowing for flexibility across different infrastructure inspection applications. The second challenge involves precise and reliable calibration of camera-lidar systems, particularly when sensors have non-overlapping fields of view, as is the case with the proposed inspection systems. To address this, a novel target-based extrinsic calibration technique is developed, leveraging a motion capture system to achieve high-precision calibration across both sensing modalities. This ensures accurate sensor fusion, yielding geometrically consistent inspection outputs. The third challenge is the development of a complete end-to-end inspection methodology. This research implements state-of-the-art online camera-lidar-IMU SLAM, with an added offline refinement process and a decoupled mapping framework. This approach enables the generation of high-quality 3D maps that are specifically tailored for infrastructure inspection by prioritizing accuracy, density, and low noise in the map. Machine learning-based defect detection is then integrated into the pipeline, coupled with a novel 3D map labeling method that transfers visual and defect information onto the 3D inspection map. Finally, an automated defect quantification and tracking system is introduced, allowing for defects to be monitored across multiple inspection cycles, completing the full end-to-end inspection workflow.
The proposed SLAM-centric inspection system is validated through extensive real-world experiments on infrastructure assets, including a bridge and a parking garage. Results demonstrate that the system generates highly accurate, repeatable, and metrically consistent inspection data, significantly improving upon traditional manual inspection methods. By enabling automated defect detection, precise localization, and long-term defect tracking within a robust 3D mapping framework, this research represents a paradigm shift in infrastructure assessment—transitioning from qualitative visual inspections to scalable, data-driven, and quantitative condition monitoring. Ultimately, this thesis advances the field of robotic infrastructure inspection by presenting a comprehensive SLAM-centric framework that integrates state-of-the-art sensing, calibration, and mapping techniques. The findings have broad implications for the future of automated infrastructure management, providing a foundation for intelligent inspection systems that can enhance the efficiency, reliability, and safety of civil infrastructure maintenance worldwide.
  • Item
    Advancing Structural Engineering Through Data-Driven Methodologies: Seismic Vulnerability Assessment and Backbone Curve Determination
    (University of Waterloo, 2025-04-11) Elyasi, Niloofar
    Structural engineering has traditionally relied on analytical and experimental methods to ensure the safety of structures. These methods, while effective, often require significant resources, time, and expertise, limiting their applicability across diverse contexts. Meanwhile, vast amounts of data collected from surveys, experimental studies, and seismic events remain largely underutilized, providing a unique opportunity to develop advanced data-driven methodologies. This thesis aims to harness the potential of the available data repositories to address critical challenges in structural engineering, with a focus on seismic vulnerability assessment and backbone curve determination. Through the use of machine learning (ML), this thesis introduces innovative methodologies at both the system and component levels. A rapid visual screening (RVS) framework is developed to quickly assess the seismic vulnerability of low-rise reinforced concrete (RC) buildings. By incorporating ML models, this framework outperforms traditional evaluation methods with higher accuracy and broader applicability. Using post-earthquake survey datasets from a variety of seismic events, it proposes a region-independent tool, eliminating reliance on subjective judgments and region-specific calibrations. For backbone curve determination, used for analyzing the seismic behavior of RC columns, this thesis introduces a novel ML-based methodology. By employing experimental datasets and advanced regression techniques, it offers a practical and efficient alternative to the conventional methods. This approach not only predicts backbone curve parameters with high accuracy but also ensures accessibility for broader applications, especially in resource-limited environments. In summary, this thesis bridges system-level and component-level challenges, underscoring the potential of data-driven approaches in structural engineering. 
By providing a foundation for integrating innovative approaches into the field, this research advances both academic insights and practical applications. These contributions respond to the demand for efficient and reliable solutions, supporting safer structures and more effective resource management in modern structural engineering practices.
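The backbone-curve determination step can be pictured as a regression from column descriptors to curve parameters. The sketch below is a deliberately minimal stand-in, assuming invented features (axial load ratio, reinforcement ratio, shear span ratio) and invented targets (yield and peak drift); the thesis's actual models use more advanced ML regression on real experimental datasets.

```python
import numpy as np

# Hypothetical training set: each row describes a tested RC column by
# (axial load ratio, longitudinal reinforcement ratio, shear span ratio);
# targets are two invented backbone parameters (yield drift %, peak drift %).
X = np.array([[0.10, 0.015, 3.0],
              [0.20, 0.020, 4.0],
              [0.30, 0.025, 2.5],
              [0.15, 0.018, 3.5],
              [0.25, 0.022, 3.0]])
Y = np.array([[0.50, 1.8],
              [0.45, 1.5],
              [0.35, 1.1],
              [0.48, 1.6],
              [0.40, 1.3]])

# Ordinary least squares with an intercept, one model per parameter;
# a simple stand-in for the thesis's more advanced ML regressors.
A = np.hstack([X, np.ones((X.shape[0], 1))])     # design matrix with bias
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)     # shape (4, 2)

def predict_backbone(features):
    """Predict (yield drift, peak drift) for a new column's descriptors."""
    f = np.append(np.asarray(features, dtype=float), 1.0)
    return f @ coef

pred = predict_backbone([0.18, 0.019, 3.2])
print(pred)
```

In practice a tree ensemble or neural network would replace the linear fit, but the input/output structure of the problem is the same.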
  • Item

    Chemo-rheological Characterization of Asphalt Binders Using Different Aging Processes
    (University of Waterloo, 2025-03-17) Sharma, Aditi; Baaj, Hassan; Tavassoti, Pejoohan
The performance and longevity of asphalt pavements depend heavily on the properties of asphalt binders, which are affected by aging, binder modifications, and the incorporation of reclaimed asphalt pavement (RAP) materials. However, significant gaps exist in understanding the long-term chemical and rheological changes induced by aging processes (particularly with respect to differences between thermo-oxidative aging and UV exposure), and in the use and standardization of chemical analytical techniques such as Fourier Transform Infrared (FTIR) and Nuclear Magnetic Resonance (NMR) spectroscopy for binder characterization. Furthermore, the behaviour of RAP-virgin binder blends, along with the influence of bio-based rejuvenators and anti-aging additives under different aging conditions, remains underexplored. Addressing these gaps is crucial to developing more durable, sustainable pavements. This thesis bridges these research gaps through a comprehensive investigation of chemo-rheological binder characterization, combining experimental testing with advanced analytical tools and varying aging methods. The findings offer essential insights into binder aging, rejuvenation strategies, and modification techniques, with significant implications for pavement durability and environmental sustainability. The first chapter presents an evaluation of Attenuated Total Reflection-Fourier Transform Infrared (ATR-FTIR) spectroscopy combined with functional group and multivariate analysis techniques to characterize asphalt binders. The research identifies challenges in repeatability across binder sources and aging states, demonstrating the importance of standardized protocols for improving reliability. Repeatability as described by AASHTO standards is listed in the precision and bias statement as single-operator precision.
This is the allowable difference between two test results measured under repeatability conditions (same asphalt binder, measured by the same operator, on the same piece of equipment, in the same lab). Principal Component Analysis (PCA) and k-means clustering successfully classified binder types and aging states, with large quantity (LQ) sample preparation yielding more consistent results than small quantity (SQ) preparation. These findings underscore the need for uniform procedures in binder analysis, addressing inconsistencies prevalent in the current literature. The second part of the thesis investigates the impact of Styrene-Butadiene-Styrene (SBS) polymer modification on binder performance and oxidative resistance. Using Nuclear Magnetic Resonance (NMR) and ATR-FTIR spectroscopy, along with PCA and Partial Least Squares Regression (PLSR), the research highlights the ability of SBS to enhance high-temperature performance and slow thermo-oxidative aging. This work not only confirms previous findings on SBS but also provides new insights into the molecular interactions contributing to aging resistance. The study fills a gap in understanding how SBS-modified binders behave under various aging scenarios, offering a deeper perspective on polymer-modified asphalt technologies. The thesis also addresses a critical gap related to UV-induced aging, which has been underexplored in comparison to thermo-oxidative aging. A novel UV aging chamber was developed to simulate real-world environmental conditions, incorporating UV exposure, water spray cycles, and controlled heating at 70°C. Comparative analysis revealed that different additives exhibit varying effectiveness under UV and thermo-oxidative conditions. Zinc diethyldithiocarbamate (ZDC) showed strong resistance to thermo-oxidative aging but limited efficacy under UV aging, while ascorbic acid (Vit. C) accelerated aging under UV exposure, contrary to expectations.
These findings emphasize the challenges involved in designing effective anti-aging strategies for asphalt binders, demonstrating the value of combining conventional rheological tests with spectroscopic techniques and further highlighting the need for more targeted approaches to additive selection and development. This thesis advances the understanding of asphalt binder behaviour and aging processes by integrating chemical, rheological, and multivariate analysis techniques. It offers critical contributions to the standardization of binder characterization protocols, the optimization of polymer-modified asphalt technologies, and the development of more effective anti-aging strategies. The research also demonstrates the potential of machine learning and artificial intelligence (AI) in predicting binder performance from spectroscopic data using multivariate analysis, paving the way for future innovations in asphalt binder characterization. In conclusion, the work in this thesis addresses significant gaps in the literature, providing new insights into aging mechanisms, additive/rejuvenation strategies, and RAP binder interactions. By combining chemical analysis, rheological testing, and multivariate techniques, this research contributes both to academic knowledge and practical pavement engineering, promoting the development of more sustainable, long-lasting asphalt pavements.
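The PCA plus k-means workflow described above can be sketched with synthetic spectra. Everything here is illustrative: the band positions, noise level, and two-group split are assumptions rather than data from the study, and PCA and k-means are implemented minimally in NumPy rather than with the study's software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for ATR-FTIR spectra: an "unaged" group and an "aged"
# group that differs in a carbonyl-like band near 1700 cm^-1. Band positions,
# widths, and noise are assumptions, not measurements from the study.
wavenumbers = np.linspace(600, 4000, 200)
base = np.exp(-((wavenumbers - 2900) / 80.0) ** 2)        # C-H stretch band
carbonyl = np.exp(-((wavenumbers - 1700) / 30.0) ** 2)    # oxidation marker
unaged = base + 0.05 * rng.standard_normal((10, 200))
aged = base + 0.8 * carbonyl + 0.05 * rng.standard_normal((10, 200))
spectra = np.vstack([unaged, aged])                       # 20 spectra

# PCA via SVD on mean-centred spectra: keep the first two components.
centred = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T

# Minimal k-means (k = 2) on the PC scores, initialized at two extremes.
centers = scores[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((scores[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in (0, 1)])

print(labels)
```

Because the aging marker dominates the between-group variance, the first principal component separates the two states and clustering recovers them, which mirrors how aging states were classified in the study.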
  • Item
    LiDAR-Driven Calibration of Microscopic Traffic Simulation for Balancing Operational Efficiency and Prediction of Traffic Conflicts
    (University of Waterloo, 2025-01-21) Farag, Natalie; Bachmann, Christian; Fu, Liping
Microscopic traffic simulation is a proactive tool for road safety assessment, offering an alternative to traditional crash data analysis. Microsimulation models, such as PTV VISSIM, replicate traffic scenarios and conflicts under various conditions, thereby aiding in the assessment of driving behavior and traffic management strategies. When integrated with tools like the Surrogate Safety Assessment Model (SSAM), these models estimate potential conflicts. Research often focuses on calibrating these models based on traffic operation metrics, such as speed and travel time, while neglecting safety performance parameters. This thesis investigates the effects of calibrating microsimulation models for both operational metrics, including travel time and speed, and safety metrics, including traffic conflicts and Post Encroachment Time (PET) distributions, using LiDAR sensor data. The calibration process involves three phases: performance-only calibration, combined performance and safety calibration, and safety-only calibration. The results show that incorporating safety-focused parameters enhances the model's ability to replicate observed conflict patterns. The study highlights the trade-offs between operational efficiency and safety, with adjustments to parameters like standstill distance improving safety outcomes without significantly compromising operational metrics. Furthermore, there is a substantial difference in the calibrated minimum distance headway for the safety model, highlighting the trade-off between operational efficiency and safety. While the operational calibration focuses on optimizing flow, the safety calibration prioritizes realistic conflict simulation, even at the cost of reduced flow efficiency. The research emphasizes the importance of accurately simulating real-world driver behavior through adjustments to parameters like the probability and duration of temporary lack of attention.
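Post Encroachment Time itself is a simple surrogate measure, and the conflict-counting idea can be sketched in a few lines. The timestamps and the 1.5 s threshold below are assumptions for illustration, not values from the thesis or from SSAM.

```python
# Post Encroachment Time (PET): the gap between the moment the first road
# user leaves a conflict point and the moment the second arrives at it.
# Timestamps and the 1.5 s threshold are illustrative assumptions.
def post_encroachment_time(t_first_leaves, t_second_arrives):
    """Return PET in seconds; smaller values indicate more severe conflicts."""
    return t_second_arrives - t_first_leaves

events = [(12.4, 13.1), (40.0, 44.9), (71.2, 72.0)]   # (leave, arrive) pairs
pets = [post_encroachment_time(a, b) for a, b in events]
conflicts = [p for p in pets if p <= 1.5]             # count severe conflicts
print(len(conflicts))
```

Calibrating against the full PET distribution, as done in the thesis, compares observed and simulated histograms of such values rather than just the conflict count.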
  • Item
    Reduced Order Geomechanics Models
    (University of Waterloo, 2025-01-14) Hatefi Ardakani, Saeed; Gracie, Robert
Computational techniques are commonly used for real-time simulation of complex geomechanics problems, such as hydraulic dilation stimulation. A significant challenge in this realm is that high-fidelity mathematical models or full order models (FOMs) are computationally expensive as they must span multiple spatial and temporal length scales, often including nonlinearities and thermo-hydro-mechanical processes. The computationally intensive nature of these simulations continues to pose challenges in parameter estimation, uncertainty quantification, and optimization applications, where hundreds to thousands of simulations are required to achieve a solution. Intrusive reduced order models (ROMs) have emerged as a method to derive and train a computationally efficient surrogate/proxy model using the FOM. This thesis seeks to bridge the gap in existing intrusive ROMs in reservoir engineering by introducing efficient ROMs that are capable of capturing hydro-mechanical coupling behavior and path-dependent plastic deformation of rocks. A complex case involving hydraulic dilation stimulation is used to show the efficiency and accuracy of the ROM in addressing coupling, plasticity, and permeability enhancement features. First, an efficient and accurate ROM is proposed for nonlinear porous media flow problems, with specific application to a two-dimensional layered reservoir with a two-well system. Standard projection-based intrusive ROMs without hyper-reduction, such as proper orthogonal decomposition-Galerkin (POD-Galerkin), have not demonstrated efficacy in reducing the computational cost of the ROM for nonlinear problems. In this context, we combine POD-Galerkin with the discrete empirical interpolation method (DEIM) as a hyper-reduction technique to reduce the size of the system of equations and accelerate the computation of nonlinear terms (residual force vector and its Jacobian).
A column-reduced Jacobian DEIM technique is employed to interpolate the Jacobian, leading to a significant reduction in the computational time of the online stage. The ROM is parameterized for the nonlinear transient injection rate (pumping schedule). Offline, training data is generated by FOM runs with simple constant injection rates. Online, the ROM demonstrates high accuracy and efficiency for complex and time-varying pumping schedules, including sinusoidal, high-frequency, and time-discontinuous pumping schedules that lie outside of the training regime. It is shown that the POD-DEIM ROM has about 10^3 times fewer degrees of freedom (DoFs) and is approximately 190 times faster than the FOM for a reservoir model with 3*10^4 DoFs, while maintaining an accurate solution in the online stage. The accuracy and efficiency of the POD-DEIM ROM motivate its potential use as a surrogate model in the real-time control and monitoring of fluid injection processes. Intrusive ROMs have faced considerable difficulties in accurately capturing the history-dependent nonlinear evolution of plastic strain. In the second objective, an intrusive ROM is developed and evaluated for a Drucker-Prager plasticity model, in which material properties and cyclic load path are parametric inputs. By constructing multiple local DEIM (LDEIM) approximations in combination with clustering and classifier techniques, a fast and accurate ROM is achieved. The FOM consists of a two-dimensional finite element analysis (FEA) of a deformable solid with Drucker-Prager plasticity. Offline, the temporal and parameterized training data generated from FOM runs are classified using the k-means clustering algorithm, whereby LDEIM basis vectors are computed. Online, a nearest neighbor classifier identifies the appropriate LDEIM. The ROM has three hyper-parameters (the size of the ROM, the number of clusters, and the number of DEIM measurement points per cluster), influencing both accuracy and speed-up.
In a micromechanics porous media problem, parameterized by Young’s modulus and hardening modulus, the ROM’s performance is demonstrated for inputs within and outside of the training domain; error and speed-up vary with inputs: accuracy is highest for inputs within the training domain (errors of 1.0-3.5% versus 1.0-9.2% outside it), while speed-up varies from 106 to 134 times. For a cyclic plasticity problem, parameterized by load path, the ROM exhibits stable and accurate online performance with a substantial speed-up for test load paths. For FOMs with ~10^3 and ~5*10^4 DoFs, speed-ups are 11 and 770 times, respectively. Larger speed-ups seem likely for larger FOMs. Finally, the ROM for nonlinear transient porous media flow as a diffusion problem is coupled with the ROM for plasticity to develop a novel ROM formulation for poroplasticity problems. This ROM aims to significantly reduce the computational costs of nonlinear and fully-coupled hydro-mechanical simulations in large-scale reservoirs. The developed mathematical model integrates a coupled system of equations from a two-dimensional FEA of momentum and mass balance equations equipped with Drucker-Prager plasticity and stress-dependent permeability enhancement models. The proposed ROM combines various ROMs, including POD-Galerkin to reduce the number of DoFs, DEIM to accelerate the computation of nonlinear terms, and local POD and local DEIM (LPOD/LDEIM) for further reductions in poroplasticity problems. LPOD and LDEIM classify the parameterized training data, obtained from offline FOM runs, into multiple subspaces with similar dynamic features. A new strategy for clustering and classification techniques tailored to the coupled formulation framework is introduced. The advantages of this ROM are demonstrated in a large-scale application involving hydraulic dilation stimulation of a reservoir with a horizontal well pair. The ROM is parameterized not only by the material properties but also by the injection rate.
Its effectiveness is evaluated for more realistic use cases, where the ROM remains efficient for injection rates that extend beyond the training data. In large-scale subsurface flow modeling of hydraulic dilation stimulation, a speed-up of ~400 times is achieved, with the ROM reducing the model dimension from 10^5 DoFs to 100 DoFs. This substantial computational saving enables real-time analysis with the ROM and becomes even more valuable in multi-query problems, where the model must be executed for multiple inputs and system configurations. This ROM has high potential for accelerating various problems, such as uncertainty quantification, design, history matching, and well control optimization. It is also recommended that the proposed ROM be adopted for other real-world subsurface applications, including conventional and unconventional oil and gas production, hydraulic fracturing, and carbon storage.
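The core POD-DEIM machinery described above (a reduced basis from snapshot SVD, plus greedy interpolation-point selection) can be sketched on a toy one-parameter problem. The 1-D profile family, parameter range, and basis size below are all assumptions; the thesis applies these ideas to finite element reservoir models, not to this toy.

```python
import numpy as np

# Snapshot matrix: full-order "states" for several training parameter values.
# The FOM here is a toy 1-D profile family; the real FOM in the thesis is a
# finite element reservoir model with coupled physics.
x = np.linspace(0.0, 1.0, 500)
params = np.linspace(0.5, 2.0, 12)
snapshots = np.stack([np.exp(-mu * x) + 0.05 * mu * np.sin(np.pi * x)
                      for mu in params], axis=1)          # shape (500, 12)

# POD: left singular vectors of the snapshot matrix form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 4
basis = U[:, :r]                                          # r << 500 DoFs

# Greedy DEIM point selection over the basis vectors: each new point is
# where the interpolation residual of the next basis vector is largest.
pts = [int(np.argmax(np.abs(basis[:, 0])))]
for j in range(1, r):
    c = np.linalg.solve(basis[pts, :j], basis[pts, j])
    residual = basis[:, j] - basis[:, :j] @ c
    pts.append(int(np.argmax(np.abs(residual))))

# Reconstruct an unseen state (mu = 1.3) from only the r sampled entries.
u_new = np.exp(-1.3 * x) + 0.05 * 1.3 * np.sin(np.pi * x)
coeffs = np.linalg.solve(basis[pts, :], u_new[pts])
u_rec = basis @ coeffs
err = np.max(np.abs(u_rec - u_new))
print(err)
```

The payoff mirrors the thesis's results at toy scale: the online solve touches only r sample points and r basis coefficients instead of all 500 degrees of freedom.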
  • Item
    Finding Specific Industrial Objects in Point Clouds using Machine Learning and Procedural Scene Generation
    (University of Waterloo, 2025-01-06) Lopez Morales, Daniel; Haas, Carl; Narasimhan, Sriram
In the era of Industry 4.0 and the rise of Digital Twins (DT), the demand for enriched point cloud data has grown significantly. Point clouds allow seamless integration into Building Information Modeling (BIM) workflows, offering deeper insights into structures and enhancing the value of documentation, analysis, and asset management processes. However, several persistent challenges limit the current effectiveness of point cloud methods in industrial settings. The first major challenge is the difficulty in identifying specific objects within point clouds. Finding and labeling individual objects in a complex 3D environment is technically demanding and fraught with various issues. Manually processing these point clouds to locate specific objects is labor-intensive, time-consuming, and susceptible to human error. In large-scale industrial environments, the complexity of layouts and the volume of data make these manual methods impractical for efficient and accurate results. The second major challenge lies in the scarcity of industrial point cloud datasets necessary for training machine learning-based segmentation networks. Automating point cloud enrichment through machine learning relies heavily on the availability of high-quality datasets specific to industrial applications. Unfortunately, comprehensive datasets of this kind are either unavailable or proprietary, creating a significant barrier to developing effective segmentation networks. Furthermore, the few current datasets often lack flexibility, being limited to only the areas that have been scanned. This rigidity, combined with the time-consuming process of manually segmenting data, slows down the development and deployment of scalable machine-learning solutions for point cloud segmentation. These limitations highlight the need for more flexible and adaptive solutions to efficiently address object detection, asset tracking, and inventory management in dynamic industrial scenarios.
This research addresses these challenges by developing open-access, weight-balanced class datasets specifically designed for 3D point cloud segmentation in industrial environments. The datasets integrate synthetic data with real-world industrial scans, offering a solution to the problem of imbalanced class distributions, which often hinder the accuracy of neural networks. Two methodologies for synthetic datasets were developed, one with random object placement and the second through a procedural generation pipeline, which includes rules for object placement and rules for generating tube structures for industrial elements, filling the scene with various objects of variable geometric features to understand the different effects that make a dataset realistic. This procedural generation technique provides a flexible method for dataset creation that can be adapted for different objects, point cloud scales, point densities, and noise levels. The dataset improves the generalization capabilities of machine learning models, making them more robust in identifying and segmenting objects within industrial settings. The second part of the research presents a methodology for efficiently and accurately identifying specific objects in point cloud scenes. This object-finding methodology is crucial for multiple applications, including object detection, pose estimation, and asset tracking. Traditional methods struggle with generalization, often failing to differentiate between unique objects and general classes. The proposed methodology for specific object finding utilizes a point transformer network for point cloud segmentation and a fully convolutional geometric features network to enhance geometrical features using color.
A key innovation in this process is the use of a color-based iterative closest point (ICP) algorithm on the output of the fully convolutional geometric features network. This enables precise matching of segmented objects with a point cloud template, ensuring accurate object identification.
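The closing alignment step is a standard ICP loop, which the following sketch illustrates in plain NumPy on a synthetic rigid transform. It is point-to-point ICP with the Kabsch (SVD) rigid-fit solution; the colour weighting and learned features of the actual pipeline are omitted, and all inputs are generated, not real scans.

```python
import numpy as np

rng = np.random.default_rng(2)

# A template object and a rigidly transformed copy "found" in the scene.
# Real inputs would be a segmented scan and an object template; the colour
# weighting used in the actual pipeline is omitted from this toy version.
template = rng.uniform(-1.0, 1.0, size=(50, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.03, 0.02])
scene = template @ R_true.T + t_true

# Point-to-point ICP: alternate nearest-neighbour matching with the
# best-fit rigid transform (Kabsch / SVD solution).
R, t = np.eye(3), np.zeros(3)
for _ in range(30):
    moved = template @ R.T + t
    nn = np.argmin(((moved[:, None] - scene[None]) ** 2).sum(-1), axis=1)
    src, dst = moved, scene[nn]
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflection
    R, t = R_step @ R, R_step @ (t - sc) + dc

residual = np.max(np.abs(template @ R.T + t - scene))
print(residual)
```

A colour-based variant adds colour distance to the matching cost, which is what makes the method robust when geometry alone is ambiguous.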
  • Item
    Land-to-Water Linkages: Nutrient Legacies and Water Quality Across Anthropogenic Landscapes
    (University of Waterloo, 2025-01-06) Byrnes, Danyka; Basu, Nandita
An increasing population and the intensification of agriculture have driven rapid changes in land use and increases in excess nutrients in the environment. Globally, excess nutrients in inland and coastal waters have led to persistent issues of eutrophication, ecosystem degradation, hypoxia, and drinking water toxicity. Over the past few decades, we have seen policies set to mitigate the degradation in water quality. The existing paradigm of water quality management is based on decades of research finding a linear relationship between net nitrogen inputs to the landscape and stream nitrogen exports. For instance, in the U.S., in response to these nutrient problems, working groups have spent approximately a trillion dollars to improve water quality by upgrading wastewater treatment plants and implementing nutrient management plans to decrease watershed nitrogen and phosphorus inputs. Despite concerted efforts, in many cases we have not seen marked improvements in water quality. In cases where water quality has improved, it is frequently after decades of nutrient management. The lack of, or delayed, water quality improvement suggests the importance of other drivers in modulating the relationship between nutrient inputs and watershed exports. Indeed, watershed nutrient loads are not just a function of current-year nitrogen inputs but can also depend on the history of inputs to the watershed. However, we still have little understanding of how nutrient inputs relate to exports and of the extent to which accumulated stores of nitrogen and phosphorus influence this relationship. The central theme of my research has been an exploration of the history of anthropogenic nutrient use and the relationship between nutrient inputs and the response in water quality.
Specifically, I have focused on the role of current nutrient inputs versus historical nutrient use in impacting water quality at the watershed scale, as well as the various landscape and climate controls that can mediate responses to changes in management. My research objectives are to (1) develop a multi-decadal mass balance of nitrogen and phosphorus at the sub-watershed scale across the contiguous U.S. in order to investigate (2) the relationship between watershed nitrogen inputs and export and the drivers of changes in watershed nitrogen export, (3) the magnitude, spatial distribution, and drivers of nitrogen retention and legacy stores, and (4) the use and management of phosphorus in agricultural landscapes in the context of both food security and environmental health. I began by developing county-scale nitrogen and phosphorus surplus datasets, TREND-N and TREND-P, for the contiguous U.S., with surplus defined as the difference between anthropogenic inputs (fertilizer, manure, domestic inputs, biological nitrogen fixation, and atmospheric deposition) and non-hydrological export (crop and pasture uptake). In Chapter 2, I present updates to the previously published TREND-N county-scale nitrogen mass balance dataset, improving the crop and pasture uptake and livestock excretion methods. In Chapter 3, I develop a new county-scale phosphorus surplus dataset using similar methods. These datasets were then downscaled to a 250 m gridded-scale dataset, known as gTREND-Nitrogen and gTREND-Phosphorus, a step led by my collaborator Shuyu Chang. These novel datasets serve as the foundational data for the subsequent chapters. Next, in Chapter 4, I explored the relationship between net nitrogen inputs and nitrogen export for over 400 watersheds across the U.S.
I used the newly developed nitrogen surplus dataset to understand how watershed-scale nitrogen surplus magnitudes and exports change over time and examine how the relationships are influenced by both natural and anthropogenic controls within watersheds. To achieve this, I used a set of 492 watersheds with nitrogen input and export data spanning from 1990 to 2017. We found that 284 watersheds had a significant (p<0.1) increasing or decreasing trend in both nitrogen surplus and nitrogen load. Of these watersheds, we identified 62 where both nitrogen surplus and export have been significantly increasing over the last two decades. These input-driven watersheds are characterized by high livestock density, agricultural area, and tile drainage. In contrast, nitrogen surplus and export have been decreasing in 127 "bright spot" watersheds, characterized by high population density and urban land use. Nitrogen surplus is also decreasing in 60 "transitioning" watersheds, but export is increasing as nitrogen surplus decreases. We argue that these watersheds are transitioning from agriculture to more urban areas, such that fertilizer inputs have decreased, but the higher nitrogen export is driven by legacy nitrogen stores. Finally, we found 35 watersheds demonstrating a delayed response, with nitrogen export decreasing despite an increase in nitrogen surplus. Climate appears to be the driver of response in these watersheds, with aridity likely driving lower nitrogen export, despite increasing inputs. The four typologies of nitrogen inputs and export relationships suggest that watersheds can act as filters and modulate the movement of nitrogen. Our results provide insights into the complex dynamics of nitrogen surplus and export relationships, as well as how the landscape, climate, and legacy nitrogen can influence these relationships. 
In Chapter 4, I analyzed relationships between changes in nitrogen inputs and export to understand what drives changes in watershed export, finding that legacy stores may be modulating the watershed response to changing net nitrogen inputs. However, we have limited knowledge of the magnitude and spatial distribution of legacy stores across North America. Therefore, in Chapter 5, we quantified how much nitrogen retention (the mass of nitrogen stored in legacy pools plus nitrogen lost to denitrification) has accumulated in watersheds, and where it may be accumulating. To achieve this, we used existing datasets and machine learning algorithms to calculate the mass of ‘retained’ nitrogen in the landscape, defined as the nitrogen stored in the soil organic nitrogen pool, the groundwater pool, or lost through denitrification. Specifically, we built a random forest modeling framework trained on the watersheds’ nitrogen surplus and components, loads, and characteristics to predict nitrogen loads at the HUC8 scale across the U.S. We calculated retention for each HUC8 as the difference between nitrogen surplus and predicted loads, and found that nitrogen retention is highest in the Midwestern and Eastern U.S. because of low exports in regions with high agricultural inputs or high population density. Next, we used a data-driven approach to estimate legacy stores by allocating retained nitrogen mass into legacy pools. We partition nitrogen retention in the Upper Mississippi region HUC8 watersheds into the mass stored in the groundwater pool, the soil organic nitrogen pool, and the mass lost to denitrification. We found that, on average, 42% of the mass is stored in the soil organic nitrogen pool, 16.5% is stored in the groundwater pool, and 40% is lost to denitrification. While these two chapters focused on nitrogen, in my final chapter we shifted to explore phosphorus use in agricultural systems.
In my final chapter, we used the new gridded phosphorus surplus and components dataset to explore current and historical agricultural phosphorus use and management in landscapes within the context of both food security and environmental health. To characterize the extent of phosphorus depletion and excess, we employed indicators such as annual and cumulative phosphorus surplus and phosphorus use efficiency (PUE). We found that the evolution of agricultural phosphorus management is shaped by changing fertilizer management, the proliferation of concentrated animal operations, climate, and the landscape's memory of past phosphorus use. We further integrated both cumulative phosphorus surplus and PUE into a framework to quantify phosphorus sustainability in intensively managed landscapes. We found that in the 1980s, much of the agricultural land was undergoing ‘intensification,’ with positive and increasing cumulative stores because phosphorus inputs exceeded crop uptake (PUE < 1). By 2017, 29.5% of the agricultural land was undergoing ‘recovery’ and had positive cumulative phosphorus stores that were being depleted through improved phosphorus management (PUE > 1). However, 70% of the agricultural area in the U.S. is still undergoing ‘intensification,’ particularly in areas with more of their inputs from livestock manure, pointing to the need to treat manure as a resource instead of the current approach of treating it as a waste product. By using novel datasets, we have been able to explore nutrient use across space and time and its impact on food security and environmental outcomes. I have made significant contributions towards expanding the discussion of nutrient use and fate, understanding the magnitude and distribution of cumulative net nutrient input stores in the landscape, as well as the ways in which intrinsic watershed properties, climate, land management, and historical nutrient use can modulate the relationship between inputs and export.
Overall, my findings underscore the importance of nuanced, place-based, and context-dependent nutrient management strategies, with a focus on manure management, to address the diverse challenges of different agricultural systems and prevent unintended environmental consequences.
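The surplus bookkeeping and the PUE-based sustainability framework described above can be sketched as a small calculation. All masses below are invented illustrations (kg/ha/yr), and the three-way phase labels are a simplified reading of the intensification/recovery distinction in the abstract.

```python
# Annual nutrient mass balance in the spirit of TREND-N/TREND-P:
# surplus = anthropogenic inputs - crop and pasture uptake. All numbers
# are invented illustrations (kg/ha/yr), not values from the datasets.
def annual_balance(fertilizer, manure, fixation, deposition, uptake):
    inputs = fertilizer + manure + fixation + deposition
    return inputs - uptake, uptake / inputs   # (surplus, use efficiency)

def phase(cumulative_store, use_efficiency):
    """Simplified version of the cumulative-surplus / PUE framework."""
    if cumulative_store > 0 and use_efficiency < 1:
        return "intensification"   # stores still building up (PUE < 1)
    if cumulative_store > 0 and use_efficiency >= 1:
        return "recovery"          # legacy stores being drawn down (PUE > 1)
    return "depletion"             # uptake mining native soil stores

# Two hypothetical counties: one still intensifying, one recovering.
s1, pue1 = annual_balance(fertilizer=60, manure=40, fixation=15,
                          deposition=5, uptake=90)    # surplus 30, PUE 0.75
s2, pue2 = annual_balance(fertilizer=30, manure=10, fixation=10,
                          deposition=5, uptake=60)    # surplus -5, PUE > 1
print(phase(300 + s1, pue1), phase(200 + s2, pue2))
```

The key point the framework captures is that an annual deficit (negative surplus) is only "recovery" when a positive legacy store exists to be drawn down.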
  • Item
    Erosion Risk Modelling: An Improved Screening Tool for Urban Watershed Management
    (University of Waterloo, 2025-01-02) Thirimanne, Hettige Dona Thiruni Dulara; MacVicar, Bruce
Urbanization alters hydrological responses by increasing impervious surfaces, leading to elevated runoff, altered streamflow regimes, and heightened flood risks (Paul & Meyer, 2001; Walsh et al., 2012). The impact of land-use changes is a crucial consideration for urban watershed management (Bochis-Micu & Pitt, 2005; Walsh et al., 2012). SPINpy 2 is a screening tool that utilizes digital elevation model (DEM)-based methods of stream power mapping from Vocal Ferencevic and Ashmore (2012) to integrate land-use data into its modelling framework. This study presents the development of two of SPINpy 2's Land Use (LU) based analyses: i) the No Stormwater Management (NSM) Scenario and ii) the Engineered Stormwater Management Pond (ESM) Scenario. Incorporating Nature-based Solutions (NbS), such as stormwater management ponds, into SPINpy 2 allows the tool to model measures that alleviate the adverse effects of urbanization by promoting infiltration and stabilizing stream banks. This feature is particularly valuable for urban watersheds at high erosion risk, where NbS can help reduce the effects of impervious surfaces, lower flood risks, and stabilize channels. SPINpy 2 facilitates the modelling of NbS, assessing their effects on stream power, discharge, and erosion sensitivity and providing a decision-support tool for urban watershed managers. It helps evaluate the long-term benefits of NbS in reducing runoff and enhancing ecosystem resilience. By modelling the effects of reducing peak flows on erosion risk, SPINpy 2 simulates how stormwater management measures can mitigate erosion and offers insights into effective strategies for enhancing channel stability. The model was applied to urbanized watersheds such as Cooksville Creek to assess its utility in high-risk environments. The simulation results provide insights into the potential of NbS to reduce flood risks and improve channel stability.
The application of SPINpy 2 demonstrated that incorporating NbS significantly mitigates the impacts of urbanization. Comparisons between scenarios with and without NbS interventions highlighted the importance of infiltration-based solutions in stabilizing stream channels and reducing sediment transport. SPINpy 2 also provided spatially explicit maps showing locations of high erosion risk and areas where NbS would be most effective. The findings underscore the potential of SPINpy 2 as a decision-support tool for urban watershed managers. By simulating the impacts of land-use changes and NbS interventions, SPINpy 2 offers a proactive approach to addressing hydrological and geomorphological challenges posed by urbanization. The ability to model diverse NbS scenarios enhances the tool's applicability in high-risk watersheds, such as Cooksville Creek, where impervious surfaces dominate and flood risks are heightened. The results demonstrate that NbS can substantially reduce runoff and stabilize channels, promoting ecosystem resilience and sustainable development. Overall, SPINpy 2 serves as a screening tool for decision-makers, enabling them to simulate and evaluate the impacts of land-use changes and NbS interventions, promoting sustainable development and environmental stewardship in urban environments. Its comprehensive approach allows watershed managers to tackle the unique challenges posed by urbanization and supports the development of cost-effective and environmentally sound infrastructure and policies. This proactive, integrative approach positions SPINpy 2 as a key resource for managing urban watersheds.
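The stream power calculation at the heart of such DEM-based screening can be sketched directly. The drainage-area-to-discharge regression coefficients below are invented placeholders; in SPINpy 2 the discharge and slope would come from hydrologic data and the DEM rather than from this toy function.

```python
# Stream power screening in the spirit of DEM-based mapping
# (Vocal Ferencevic and Ashmore, 2012): total stream power per unit
# channel length is Omega = rho * g * Q * S. The drainage-area regression
# for discharge is a hypothetical placeholder, not SPINpy 2's method.
RHO_G = 1000.0 * 9.81          # water density (kg/m^3) times gravity (m/s^2)

def discharge_from_area(area_km2, coeff=0.9, exponent=0.8):
    """Toy regional regression Q = c * A^e, in m^3/s (parameters assumed)."""
    return coeff * area_km2 ** exponent

def total_stream_power(area_km2, slope):
    """Omega = rho * g * Q * S, in watts per metre of channel length."""
    return RHO_G * discharge_from_area(area_km2) * slope

# Screening two reaches; urbanization or a stormwater pond would be
# represented by raising or lowering the effective discharge.
upstream = total_stream_power(area_km2=12.0, slope=0.004)
downstream = total_stream_power(area_km2=45.0, slope=0.002)
print(round(upstream, 1), round(downstream, 1))
```

Mapping this quantity along every DEM-derived channel cell, and re-running it with pond-attenuated flows, is essentially what the NSM and ESM scenarios compare.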
  • Item
    Development of Tools for Infrastructure Asset Management Cross-Asset Trade-off Analysis and Universal Performance Measure for Public Agencies
    (University of Waterloo, 2024-12-13) Posavljak, Milos; Tighe, Susan
    While the modern monetary system, the limited liability corporation, and modern public infrastructure trace their beginnings to past centuries, use of the term "asset" in reference to public infrastructure began only at the end of the 20th century. With advances in computer technology, New Zealand and Australian public agencies were initial adopters and were the first to benefit from mass data availability on road infrastructure. Soon after, North America and the rest of the developed world followed in adopting what today is commonly referred to as infrastructure asset management practice. Infrastructure's purpose remains the same as before: to support economic growth and societal accessibility. However, the new perspective of viewing it as an asset, rather than an almost naturally occurring, passive societal commodity, has brought demand for increased transparency and evidence-based decision-making. Appropriate timing and action relative to an asset's performance and societal growth demands require a complex socio(org)-technical system to maximize the asset's benefits to society and minimize the risk of it turning into a societal liability. The thesis presents an original approach to improving an organization's decision-making capabilities by operationalizing asset management processes within vertical and horizontal public agency structures, one that uses organizational behaviour theory and operational analysis, together with civil engineering industry experience and engineering risk and reliability knowledge, to develop corporate, data-driven asset performance measures. A novel horizontal information flow is mapped and introduced as the operationalizing asset management framework. It is used as a guide to shine a light on the asset management process complexities at the tactical and operational levels of organizations.
A new operational perspective on the definition of asset management is argued, one which sees it as an equal partnership between engineering and financial professionals, reinforced with administrative policies and procedures. The effects of the division of labour are reflected in the academic fields of engineering as well. Intra-departmental specializations within civil engineering include transportation, structural, and hydrology, with further branching within each. With respect to infrastructure asset management this is a necessity, as public agencies typically hold a portfolio of varying assets, ranging from roads, water distribution, sewer management, and facilities to parks, to name a few, for which different knowledge and skills are necessary to provide the expert-level management sought and claimed by managing agencies. As such, subject matter experts along with finance professionals make up the core team, which functions within a compartmentalized structure of administrative policies and procedures. These two degrees of compartmentalization, one in the academic and the other in the corporate setting, have yielded organizationally siloed asset management processes competing for a single source of funding: public monies. Given that all assets are equally important in providing a singular infrastructure system, as experienced by the citizenry, the questions of which asset takes priority over another, and why, arise when there is a lack of funding for all within a particular time span. The research originally argued the need to use the inherent objectivity of monetary value to provide an objective method of cross-asset trade-off analysis to answer the "which", while organizational theory and engineering experience are used to create new value from the untapped potential of existing organizational processes, establishing one objective, level playing field from which evidence-based decisions can rapidly be made and catalogued in answering the "why".
The research journey identified a significant bottleneck with the cross-asset item. Specifically, "field inspection of information" showed that the forecasting tools available to municipalities, within single asset classes, do not satisfy minimal scientific standards. Subsequently, it is argued that this is a naturally occurring limitation of the sample space rather than a "continuous improvement item". The research found that forecasting infrastructure spending needs according to the scientifically unreliable Age-Based approach overestimates them by 335%, compared with the scientifically reliable Consumer-Based approach, which is grounded in engineering risk and reliability.
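The 335% figure above is a ratio statement: a forecast that overestimates by 335% is 4.35 times the reference value. The snippet below shows only that arithmetic; the dollar figures are hypothetical, not from the thesis.

```python
# Illustrative arithmetic only: "overestimates by 335%" means the Age-Based
# forecast equals 4.35 x the Consumer-Based one. The sample figures below are
# hypothetical, not data from the thesis.

def percent_overestimate(forecast: float, reference: float) -> float:
    """Percent by which `forecast` exceeds `reference`."""
    return (forecast - reference) / reference * 100.0

consumer_based = 10.0e6            # hypothetical spending need, $
age_based = 4.35 * consumer_based  # 4.35x the reference
overestimate = percent_overestimate(age_based, consumer_based)  # 335.0
```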
  • Item
    Advancing the efficient development and deployment of hydrologic and hydraulic models for large scale and real-time applications
    (University of Waterloo, 2024-12-12) Chlumsky, Robert; Craig, James R.; Tolson, Bryan
    Hydrologic and hydraulic models are tools typically applied, respectively, to simulate streamflow and to determine the depths and locations of flooding. While these tools are crucially important for predicting flood events, they require niche expertise and a high degree of effort to be developed and deployed effectively. This thesis aims to streamline the effort required to develop and deploy quality models within the typical workflows that support the simulation of flood events. First, the selection of optimal model structures within hydrologic models is addressed. The blended hydrologic model, in which the selection of mathematical equations representing processes in the hydrologic cycle occurs as part of a calibration exercise, is tested and shown to benefit both model performance and scientific utility. Secondly, the blended model is improved through an extensive empirical experiment which delivers a high-performing blended version 2 model. This model achieves high performance scores across the contiguous USA without a need to adjust the model structure, greatly reducing the time-consuming practical step of manually selecting the optimal model structure for a given watershed. Finally, a novel method for hydraulic modelling and flood mapping is introduced. Improved geospatial methods are paired with a one-dimensional hydraulic model solver and then benchmarked against conventional methods. The result is shown to provide improved accuracy of flood maps while maintaining a computational runtime that is suitable for large-scale and real-time applications. Overall, it is anticipated that this research benefits the development of crucial tools for predicting and simulating flooding.
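The blended-model idea described above, selecting among process equations through calibration, can be pictured as a weighted combination of candidate flux equations whose weights are themselves calibration parameters constrained to sum to one. The sketch below uses two toy infiltration-style equations as stand-ins; it is not the thesis's actual formulation or solver.

```python
# Minimal sketch of a "blended" process flux: the simulated flux is a weighted
# average of several candidate process equations, with the weights treated as
# calibration parameters on a simplex (summing to 1). The candidate equations
# here are simplified placeholders, not the thesis's actual formulations.
import numpy as np

def blended_flux(candidates, weights):
    """Combine candidate flux estimates using simplex weights (sum to 1)."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must lie on a simplex"
    return float(np.dot(w, candidates))

# Two toy candidate infiltration equations evaluated for the same state.
storage, capacity, rainfall = 40.0, 100.0, 10.0
candidates = [
    rainfall * (1.0 - storage / capacity),        # linear saturation excess
    rainfall * (1.0 - (storage / capacity) ** 2), # nonlinear variant
]
flux = blended_flux(candidates, weights=[0.3, 0.7])
```

During calibration, the weights would be adjusted alongside the other parameters, so the "model structure" emerges from the optimization rather than being fixed in advance.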
  • Item
    Increasing Nutrient Circularity and Reducing Water Pollution Through Anaerobic Digesters
    (University of Waterloo, 2024-12-11) Wallace, Nettie; Basu, Nandita; Mai, Juliane
    While the intensification of agricultural practices over the last few decades has increased livestock and crop production, it has also led to unintended environmental consequences such as harmful algal blooms, drinking water contamination, and increased emissions of greenhouse gases. Much of the increase in crop and livestock production can be attributed to a shift towards specialized agriculture, which has resulted in the decoupling and spatial separation of livestock and crop systems. The spatial separation of the two systems has disrupted the circular flow of nutrients in agricultural systems. Relinking the livestock-nutrient economy has been identified as a strategy to reduce the overall environmental burden of the sector. The use of anaerobic digesters to manage livestock manure presents a promising pathway towards the recoupling of crop and livestock systems. Anaerobic digesters, also referred to as biodigesters, use anaerobic decomposition to transform organic wastes into valuable by-products. During the digestion process, methane, a potent greenhouse gas emitted in traditional manure management, is captured to produce biogas, a source of renewable energy. The process also produces digestate, a nutrient-rich effluent that can be applied to cropland as a fertilizer source. The nutrient-dense nature of digestate and the potential revenue from biogas production enable it to be transported economically over a greater distance than untreated manure, thereby providing a pathway to enhance nutrient circularity in spatially separated livestock and crop systems. However, there is concern that digestate use can result in greater nitrogen leaching losses than manure. The work presented in this thesis estimates the nitrogen leaching losses from corn and soybean cropland across 263 regions in Ontario and assesses the water quality implications of manure and digestate land-application.
To do this, a DeNitrification-DeComposition (DNDC) model was developed for each region. The models were calibrated individually to observed crop yields from 2011 to 2021. The calibrated models were able to capture the general magnitude and annual variation in reported corn and soybean yields across the study region. The median error between simulated and observed crop yields across all regions was 5.8% (mean absolute percent error). Corn crops were provided with synthetic fertilizer at an optimal rate, as determined by calibration. The results of the calibration showed that observed crop yields across the study region could be met through the application of 69% of the nitrogen fertilizer purchased in Ontario in 2021. This finding suggests that corn nitrogen requirements are met through the application of purchased synthetic fertilizer, while manure is applied to cropland in excess of crop needs. Next, I used livestock population data to estimate the quantity of manure nitrogen produced in each region. Using the calibrated DNDC models, I simulated a number of scenarios exploring various manure and digestate distribution configurations across the landscape. The results of this work show that when digestate was substituted for manure and subject to the same transportation constraints, the amount of nitrogen lost by leaching across the study region increased by 6% (from 46.77 to 49.42 kt N/yr). However, when the digestate distribution configuration was altered to reflect re-distribution from a centralized biodigester and its ability to be transported over a greater distance, the amount of nitrogen lost through leaching across the study region was reduced by 7% (to 43.42 kt N/yr). These findings show that when digestate was used as a direct substitute for manure and applied at equal rates based on total nitrogen content, it contributed to increased nitrogen leaching losses.
However, when the distribution of the digestate was considered at a regional scale and the system dynamics of the biodigester were accounted for, the use of digestate reduced the total nitrogen leaching losses across the study region. This research shows that biodigesters can benefit water quality when considered at a regional scale.
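The percent changes reported above follow directly from the scenario totals quoted in the abstract; the snippet below simply reproduces that arithmetic from the reported kt N/yr values.

```python
# Reproducing the reported percent changes in nitrogen leaching from the
# scenario totals quoted in the abstract (kt N/yr); arithmetic only.

def percent_change(new: float, baseline: float) -> float:
    """Signed percent change of `new` relative to `baseline`."""
    return (new - baseline) / baseline * 100.0

baseline = 46.77            # manure scenario
digestate_same = 49.42      # digestate, same transport constraints as manure
digestate_redist = 43.42    # digestate, centralized-biodigester redistribution

increase = percent_change(digestate_same, baseline)    # ~ +5.7%, reported as 6%
decrease = percent_change(digestate_redist, baseline)  # ~ -7.2%, reported as -7%
```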
  • Item
    Improving Short-term Streamflow Forecasting with Wavelet Transforms: A Large Sample Evaluation
    (University of Waterloo, 2024-12-11) You, John; Quilty, John
    Accurate streamflow forecasting is instrumental to water management, including flood preparation and drought monitoring. The past two decades have seen a steady rise in the application of machine learning (ML) models to streamflow forecasting, given their ability to model highly nonlinear relationships, moderate data requirements, and accuracy. Successful application of ML to streamflow forecasting requires the modeler to select appropriate features (e.g., precipitation and air temperature) to use in an ML model. Since the original features can be insufficient, adding new features derived from existing ones, also known as feature engineering, is often needed to improve the accuracy of streamflow forecasts. Wavelet transforms (WTs) have become promising feature engineering methods for streamflow forecasting since they can decompose time series (e.g., precipitation) into multiple sub-series (coefficients). Each set of coefficients captures changes across different timescales (e.g., monthly, seasonal), allowing the variance of the original time series to be associated with specific timescales. The different coefficients extracted by WTs are then used as features in an ML model, often improving the accuracy of streamflow forecasts compared to using the original features alone. Furthermore, different wavelet filters can capture different behaviours of a given time series (e.g., trends, short-term transients), making some more suitable than others for a given application (e.g., streamflow forecasting). This leaves the modeler with the task of finding the right wavelet filter(s) for their application. Despite many existing studies coupling WTs and ML for streamflow forecasting, none have explored a large, hydro-climatically diverse sample of catchments to evaluate the impact of WTs on streamflow forecasting performance.
Due to the small number of catchments included in existing streamflow forecasting studies using WT-based ML models, it is not clear how the performance of the adopted models generalizes to catchments with different characteristics. In addition, approximately 90% of studies using WTs for hydrological forecasting misuse WTs. The most common issue is not taking proper precautions to address look-ahead bias (i.e., the use of ‘future data’), invalidating the forecasts for real-world applications. Thus, this thesis seeks to address the abovementioned gaps in the literature by undertaking a large sample case study involving 620 catchments across the contiguous United States, using best practices for WT-based streamflow forecasting at the daily timescale. The WT-generated features are used in long short-term memory networks (LSTMs) to produce streamflow forecasts. LSTMs are selected due to their exceptional streamflow forecasting performance compared to other commonly adopted models, as noted in the literature. In total, three LSTM configurations are considered: baseline LSTM (B-LSTM), wavelet LSTM (W-LSTM), and grid search LSTM (GS-LSTM). In the first configuration, a baseline LSTM model is developed for each catchment. In the second configuration, 33 different wavelet filters are used to engineer features based on several hydro-meteorological features (e.g., precipitation and air temperature), resulting in 33 different W-LSTM models for each catchment. For each catchment, the 33 different W-LSTM models are compared to the B-LSTM model to evaluate the impact of WTs on streamflow forecasting performance. In the third configuration, the B-LSTM models undergo hyper-parameter selection using grid search. This setup is used to test whether grid search has a greater impact on streamflow forecasting performance than WTs. All configurations are applied to one- and three-day-ahead streamflow forecasting.
For the one-day-ahead forecast horizon, W-LSTM improves upon B-LSTM performance in 97% of catchments and improves upon GS-LSTM in 50% of catchments. For a forecast horizon of three days ahead, W-LSTM improves upon B-LSTM performance in 97% of catchments and improves upon GS-LSTM in 41% of catchments. When considering only catchments where the B-LSTM model meets a minimum performance threshold (i.e., out-of-sample Nash-Sutcliffe Efficiency, OOS NSE, greater than 0.4), then for a forecast horizon of one day ahead, W-LSTM improves upon GS-LSTM in 60% of catchments, while for a forecast horizon of three days ahead, W-LSTM improves upon GS-LSTM in 70% of catchments. Certain wavelet filters perform better than others. For instance, the W-LSTM using the Morris Minimum Bandwidth 4.2 filter outperforms B-LSTM in over 50% of catchments (where B-LSTM has an OOS NSE greater than 0.4) for both forecast horizons. Overall, WTs provide the greatest improvement to forecasting performance (for both the one- and three-day-ahead forecast horizons) in the D (snowy climates) and B (dry climates) Köppen classification regions. This finding presents a clear direction for researchers and practitioners when deciding whether WTs will benefit their streamflow forecasting models in their regions. This thesis is the first to use a large sample of catchments to demonstrate that WTs are useful for improving ML-based streamflow forecasts. These models can be used for reservoir management, early flood warning systems, irrigation, navigation, and many other water management applications. Future work can explore the combined optimization of wavelet filters and LSTM hyper-parameters to improve further upon the performance of the models reported in this thesis.
Another worthwhile endeavor is to focus on modifications to the LSTM, such as quantile loss functions, Monte Carlo dropout connections, conformal prediction, and/or Bayesian methods to generate probabilistic forecasts enabling risk-based solutions to water management problems.
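The look-ahead-bias precaution described above, computing wavelet coefficients without using "future data", can be illustrated with a causal Haar-like smooth/detail pair in which each coefficient at time t depends only on samples at or before t. This is a simplified stand-in for the boundary-corrected wavelet filters used in practice, not the thesis's implementation.

```python
# Sketch of look-ahead-safe wavelet-style feature engineering: each coefficient
# at time t is computed only from samples at or before t (a causal Haar-like
# smooth/detail pair). A simplified stand-in for proper MODWT filters, not the
# thesis's exact method.
import numpy as np

def causal_haar_features(x: np.ndarray, scale: int = 2):
    """Return (smooth, detail) series built from past/current samples only."""
    n = len(x)
    smooth = np.full(n, np.nan)  # NaN until enough history exists
    detail = np.full(n, np.nan)
    for t in range(2 * scale - 1, n):
        recent = x[t - scale + 1 : t + 1].mean()            # last `scale` samples
        earlier = x[t - 2 * scale + 1 : t - scale + 1].mean()  # the `scale` before
        smooth[t] = 0.5 * (recent + earlier)   # low-frequency component
        detail[t] = 0.5 * (recent - earlier)   # change across the two windows
    return smooth, detail

flow = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
smooth, detail = causal_haar_features(flow, scale=2)
```

A symmetric (non-causal) filter centred on t would instead average samples on both sides of t, silently leaking future streamflow into the features — exactly the misuse the thesis guards against.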