
Civil and Environmental Engineering

Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906

This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.

Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).

Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.


Recent Submissions

Now showing 1 - 20 of 935
  • Item
    Municipal Wastewater Sludge as a Sustainable Bioresource: Spatial Analysis Across Ontario
    (University of Waterloo, 2025-10-08) Granito Gimenes, Camila
    Effective wastewater sludge management is critical for sustainable wastewater treatment, nutrient recovery, and environmental protection. However, Ontario lacks data on sludge generation and nutrient content, particularly across diverse facility sizes and treatment processes. This study aims to fill that gap by estimating wastewater sludge generation, nitrogen and phosphorus content, and disposal practices across 548 municipal wastewater treatment plants (WWTPs) in Ontario. Using a combination of facility-level annual reports, the Wastewater Systems Effluent Regulations (WSER) database, Ontario Clean Water Agency (OCWA) records, and the National Pollutant Release Inventory (NPRI), treatment-specific coefficients were developed from a subset of plants with complete data to extrapolate sludge generation and nitrogen and phosphorus mass for facilities lacking direct data. For 2022, the most recent year with complete data, Ontario's WWTPs generated an estimated 356,265 ± 35,859 dry metric tons of sludge, a per capita generation of 23.5 kg/person/year that falls within the range reported in the literature (17.8–31.0 kg/person/year). Nutrient content analysis revealed median concentrations of 29 kg of phosphorus and 42 kg of nitrogen per metric ton of dry sludge, resulting in an estimated 9,937 ± 1,837 metric tons of phosphorus and 15,302 ± 9,044 metric tons of nitrogen generated per year in wastewater sludge. Over 50% of these nutrients are concentrated in larger, anaerobic digester-equipped facilities located primarily in Southern Ontario. Incineration accounts for the end use of 30% of the total sludge generated, resulting in the loss of its nutrients. In contrast, agricultural disposal, practiced by 140 facilities, allows for nutrient recovery from 26% of the total sludge generated. Spatial and process-level analysis revealed that plant size and stabilization method are predictors of disposal type.
Large plants (influent ≥ 37,850 m³/day), which are more likely to operate aerobic or anaerobic digesters, tend to adopt more sustainable disposal methods when conditions permit (e.g., during appropriate seasons). In contrast, small facilities (influent ≤ 3,785 m³/day) often lack advanced stabilization and are more likely to rely on less sustainable practices such as landfilling. Many of these facilities also lack consistent reporting, making it difficult to track sludge generation and disposal pathways. By quantifying sludge generation and nutrient flows across Ontario, this study provides a baseline for evidence-based decision-making. The data can be used by municipalities and regulators to identify areas with high biosolids generation and data gaps, and to target specific regions for further study or investment. These findings highlight the need for provincial-level data transparency and targeted strategies to promote nutrient circularity in municipal sludge management, particularly by addressing the data and resource gaps at smaller facilities. While this study provides a valuable province-level overview, a key limitation is the reliance on extrapolated data for facilities lacking complete records, underscoring the need for improved, standardized reporting and new methodologies as more data become available.
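The headline figures above follow from simple mass-balance arithmetic. A minimal sketch, assuming a served population of roughly 15.15 million (an assumed figure; the abstract does not state the population used). Applying the median nutrient concentrations to the total lands near, but not exactly on, the reported nutrient estimates, since those were computed per facility:

```python
# Sketch: back-of-envelope provincial sludge and nutrient balance using
# totals reported in the abstract. The population is an assumption.
sludge_dry_t = 356_265            # dry metric tons/year (reported)
population = 15_150_000           # assumed served population

per_capita_kg = sludge_dry_t * 1000 / population
print(f"per-capita generation = {per_capita_kg:.1f} kg/person/year")

# Reported median contents: 29 kg P and 42 kg N per dry metric ton
p_t = sludge_dry_t * 29 / 1000    # metric tons P/year
n_t = sludge_dry_t * 42 / 1000    # metric tons N/year
print(f"P = {p_t:,.0f} t/yr, N = {n_t:,.0f} t/yr")
```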
  • Item
    Automating Construction Material Sourcing and Distribution for Circularity
    (University of Waterloo, 2025-09-23) Olumo, Adama
    Circularity in the construction industry is developing, with increasing emphasis on extracting resources from existing infrastructure. Given the growing amount of resources embedded in the current housing stock, sustainability within the industry is critical. To support the large-scale reuse of Reclaimed Construction Materials (RCMs) through active reuse strategies, it is essential to develop tools and frameworks for sourcing RCMs. This study contributes to that effort by providing insights into the creation of such frameworks and emphasizing the value of material reuse within the construction sector. Although material reuse is considered an excellent circular strategy, its application across the industry still faces technical, social, and environmental limitations. A significant drawback of material reuse is the complexity of finding RCMs that fit a design with limited alterations required for use. Furthermore, the environmental and economic cost of acquiring and reusing RCMs is taxing compared to acquiring New Construction Materials (NCMs). Additionally, there is limited insight into other options for restoring existing building resources before replacement. Therefore, this thesis develops decision support frameworks for component-level and assembly-level assessment of RCMs. The component-level assessment tool is designed to integrate 3D scanning, Optimization Programming Languages (OPL), Life Cycle Assessment (LCA), and Building Information Modeling (BIM) tools to create an enhanced digital supply sourcing system whereby RCMs at secondary sources can be found with basic required information, such as cost, proximity, and dimensions, to enable planning and implementation. The component-level assessment framework is refined and extended through a policy assessment study, demonstrating its adaptability to diverse challenges presenting both risks and potential benefits for policy implementation.
This thesis is fundamentally based on real-world data gathered from RCM stores. It challenges current building design practices that deem material reuse a problematic approach by enabling flexible sourcing of used and new building materials and by providing an assessment framework for selecting appropriate restoration strategies. The approach shifts the social perspective toward considering partial integration of RCMs, at varying levels, in new building projects.
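The core sourcing decision such a framework automates can be sketched as a filter-and-select step: keep reclaimed candidates within dimensional tolerance and a distance limit, pick the cheapest, and fall back to a new material otherwise. The catalogue entries, field names, and thresholds below are hypothetical illustrations, not data or logic from the thesis:

```python
# Sketch: minimal RCM-vs-NCM sourcing decision. All values are hypothetical.
need = {"length_mm": 2400, "max_km": 50, "tol_mm": 25, "ncm_cost": 40.0}
rcm_catalogue = [
    {"length_mm": 2390, "km": 12, "cost": 18.0},
    {"length_mm": 2450, "km": 35, "cost": 15.0},  # fails tolerance check
    {"length_mm": 2100, "km": 8,  "cost": 10.0},  # fails tolerance check
]

def pick(need, catalogue):
    # Filter to candidates that fit dimensionally and are close enough
    fits = [r for r in catalogue
            if abs(r["length_mm"] - need["length_mm"]) <= need["tol_mm"]
            and r["km"] <= need["max_km"]]
    best = min(fits, key=lambda r: r["cost"], default=None)
    # Reuse only if a fitting reclaimed option beats the new-material cost
    return best if best and best["cost"] < need["ncm_cost"] else "NCM"

print(pick(need, rcm_catalogue))
```

A fuller framework would add transport-distance carbon terms from the LCA side and solve the many-components-to-many-sources assignment with an OPL model rather than a greedy pick.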
  • Item
    Methods for Modelling Wetlands in Hydrologic Models
    (University of Waterloo, 2025-09-22) Tucker, Madeline Gabriela
    Wetlands are abundant natural systems that serve as important ecosystems, mechanisms for nutrient filtering and storage, and providers of flood mitigation services. Wetlands strongly influence the hydrologic response and water balance of a landscape. The practice of water resources management often relies on numerical computer models that represent hydrologic features within a watershed, such as wetlands, lakes, and rivers, to accurately simulate the movement of water. However, the representation of wetlands in hydrologic models is challenged by their small-scale nature, numerous classification schemes that are not readily associated with a water balance conceptual model, and sometimes complex hydrology. A shortcoming of existing wetland modelling studies is the lack of multiple wetland types being represented, often due to the complexity that accompanies wetland classification schemes. In this study, we address three research objectives: 1) to inventory existing wetland modelling methods and develop a catalogue of conceptual-numerical wetland modelling methods in hydrology based on wetland classifications and numerical water balance equations, 2) to implement conceptual-numerical wetland modelling methods in a regional hydrologic model case study and evaluate model performance to determine the impact of wetlands on simulation results, and 3) to examine how available wetland mapping products can inform wetland modelling. A hydrologic model of the Nipissing watershed in Ontario was built using the Raven Hydrologic Framework and calibrated against both high- and low-flow objective functions in a multi-objective calibration across three modelling scenarios.
The first modelling scenario (Scenario 1) contained no wetland representation; the second (Scenario 2) contained explicit representation of one wetland conceptual-numerical model type; and the third (Scenario 3) contained explicit representation of three wetland conceptual-numerical model types based on the connectivity of wetlands to modelled streams and lakes. Calibration results indicated good model performance for all scenarios, as an adequate performance threshold of 0.50 was achieved for both the Kling-Gupta Efficiency (KGE) and the log-transformed Nash-Sutcliffe Efficiency (logNSE). In calibration, Scenario 2 most often outperformed Scenario 1 (the no-wetland scenario) and Scenario 3 (the most complex wetland scenario) at individual calibration gauges, due to Pareto solution uncertainty and site-specific properties. Validation results indicated that Scenario 3 most often outperformed the other two scenarios across multiple performance metrics at individual flow gauges and handled low flows especially well, as shown by low-flow performance metrics and hydrographs. This is attributed to Scenario 3 storing the most water in wetland depressions of all the scenarios through abstraction, lateral diversion of water accounting for wetland contributing areas, and groundwater process parameters set up for each simulated wetland type. The median and spread of percent bias across all flow gauges decreased significantly, by 15%, from Scenario 2 to Scenario 3, highlighting the importance of low-flow accuracy to hydrologic model performance. Flow duration curves and hydrographs plotted by flow gauge demonstrated that site-specific properties of the study area and individual gauge drainage areas can impact simulation results. No relationship was found between gauge drainage area, wetland coverage percent by area, and model performance at individual gauges in this study.
Four wetland mapping datasets in Ontario were compared to select a wetland data input to the Nipissing model. By comparing each wetland dataset, a formalized checklist is provided for modellers to use as a reference when making similar comparisons between their own wetland mapping products. It is recommended that wetland mapping product comparisons for project suitability be performed by first comparing wetland coverage between datasets using the wetland polygon coverage by area, then comparing spatial variability between datasets by inspecting areas of overlap and non-overlap, and finally comparing data attributes, particularly wetland classifications and any discrepancies between dataset attributes. While the results of this study demonstrate the importance of low flow accuracy to model performance through the representation of wetlands, improvements could be made to aid future studies. It is recommended that future studies select a watershed with high quality flow and meteorological data, basins with varying wetland coverage, and little to no water regulation influence (e.g., hydroelectric dams). It is also recommended that the wetland conceptual-numerical models presented in this thesis be further tested on watersheds of different sizes, different combinations of wetland types, and varying degrees of complexity.
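The two performance metrics used above have standard closed forms. A generic sketch (not code from the thesis), with a tiny illustrative flow series:

```python
import math

def kge(sim, obs):
    # Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    # where r is correlation, alpha the ratio of standard deviations,
    # and beta the ratio of means. KGE = 1 is a perfect fit.
    n = len(obs)
    mo, ms = sum(obs)/n, sum(sim)/n
    so = math.sqrt(sum((o - mo)**2 for o in obs)/n)
    ss = math.sqrt(sum((s - ms)**2 for s in sim)/n)
    r = sum((s - ms)*(o - mo) for s, o in zip(sim, obs))/(n*ss*so)
    alpha, beta = ss/so, ms/mo
    return 1 - math.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

def log_nse(sim, obs, eps=1e-6):
    # NSE on log-transformed flows, which weights low-flow errors heavily
    lo = [math.log(o + eps) for o in obs]
    ls = [math.log(s + eps) for s in sim]
    m = sum(lo)/len(lo)
    return 1 - sum((a - b)**2 for a, b in zip(ls, lo))/sum((a - m)**2 for a in lo)

obs = [1.0, 2.0, 4.0, 3.0, 1.5]   # illustrative observed flows (m3/s)
sim = [1.1, 1.9, 3.8, 3.2, 1.4]   # illustrative simulated flows
print(f"KGE = {kge(sim, obs):.3f}, logNSE = {log_nse(sim, obs):.3f}")
```

Calibrating to both metrics at once, as in the thesis, trades off high-flow fit (which dominates KGE) against low-flow fit (which dominates logNSE).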
  • Item
    Quantifying and Mitigating Uncertainty in Crash Risk Prediction for Road Safety Analysis
    (University of Waterloo, 2025-09-17) Aminghafouri, Reza
    Road safety analysis is a cornerstone of traffic safety management programs like Vision Zero, which aim to eliminate fatalities and serious injuries on roadways. Central to road safety analysis is the ability to accurately predict crash risk; however, this task is challenged by significant uncertainty arising from the random nature of crashes (aleatoric uncertainty) and limitations in data and modeling (epistemic uncertainty). These uncertainties can lead to the misidentification of hazardous locations, resulting in false positives and negatives, and the inefficient allocation of limited safety resources. While numerous statistical models exist for risk prediction, most traditional crash-based approaches provide simple point estimates, failing to formally quantify the inherent uncertainty in their predictions. Proactive conflict-based analysis has emerged as a promising alternative that avoids direct reliance on sparse crash data, but its application introduces new methodological challenges. The reliability of conflict-based predictions is not well understood, and key methodological choices, such as the duration of data collection and the selection of analytical thresholds for Extreme Value Theory (EVT) models, introduce significant, often unaddressed, uncertainty into the results. To overcome these challenges, this thesis systematically develops and evaluates a framework to quantify, investigate, and reduce critical sources of uncertainty in road safety analysis. First, to quantify the impact of uncertainty on network screening, a frequentist approach is employed to establish a joint confidence region (CR) for hotspot rankings, moving beyond simple point estimates. This is achieved by first estimating the confidence interval (CI) of risk for each location using a hierarchical Full Bayesian (FB) model that considers both crash frequency and severity. 
Second, this research investigates a primary source of data uncertainty in conflict-based analysis by systematically assessing the relationship between sample size and prediction reliability using a unique, year-long LiDAR dataset and a Bayesian Peak-Over-Threshold (POT) EVT model. Third, to address methodological uncertainty in EVT, an automated and objective approach for threshold selection is developed and validated, comparing a Sequential Goodness-of-Fit Selection Method (SGFSM) with an Automatic L-moment Ratio Selection Method (ALRSM) to reduce analytical subjectivity. The analysis demonstrates that explicitly accounting for uncertainty can lead to substantially different hotspot identifications, revealing that rankings based on point estimates alone may be unreliable. The sample size analysis reveals that the common practice of using short-term conflict data is inadequate for reliable collision predictions, a finding that challenges the validity of a significant portion of the existing literature on conflict-based safety analysis. Finally, the automated threshold selection approach, particularly the L-moment-based approach, proves to be a robust and objective method that improves the accuracy of crash risk estimation. Collectively, this research provides researchers and practitioners with an evidence-based methodology to understand, quantify, and mitigate key uncertainties in road safety analysis, fostering more reliable safety assessments and a more effective allocation of resources.
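The L-moment-based threshold idea can be illustrated with sample L-moment ratios computed from probability-weighted moments: scan candidate thresholds and compare the exceedances' L-kurtosis against the value the Generalized Pareto distribution implies for their L-skewness. This is a generic sketch of that diagnostic, not the thesis's ALRSM procedure, and the conflict values are fabricated for illustration:

```python
def l_moment_ratios(x):
    # Sample L-skewness (tau3) and L-kurtosis (tau4) via the standard
    # unbiased probability-weighted-moment estimators; needs len(x) >= 4.
    x = sorted(x)
    n = len(x)
    b = [0.0]*4
    for i, v in enumerate(x, start=1):
        b[0] += v
        b[1] += v*(i-1)/(n-1)
        b[2] += v*(i-1)*(i-2)/((n-1)*(n-2))
        b[3] += v*(i-1)*(i-2)*(i-3)/((n-1)*(n-2)*(n-3))
    b = [bi/n for bi in b]
    l2 = 2*b[1] - b[0]
    l3 = 6*b[2] - 6*b[1] + b[0]
    l4 = 20*b[3] - 30*b[2] + 12*b[1] - b[0]
    return l3/l2, l4/l2

# Illustrative conflict-severity values (not data from the study)
conflicts = [0.2, 0.5, 0.9, 1.3, 1.8, 2.2, 2.7, 3.1,
             3.6, 4.5, 5.2, 6.8, 8.1, 9.9, 12.4]
for u in (0.0, 1.0, 2.0):
    exc = [c - u for c in conflicts if c > u]
    t3, t4 = l_moment_ratios(exc)
    # Generalized Pareto relation: tau4 = tau3*(1 + 5*tau3)/(5 + tau3)
    gpd_t4 = t3*(1 + 5*t3)/(5 + t3)
    print(f"u={u}: tau3={t3:.3f} tau4={t4:.3f} GPD tau4={gpd_t4:.3f}")
```

A selector of this family would pick the lowest threshold whose (tau3, tau4) pair falls acceptably close to the GPD curve, removing the analyst's subjective choice.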
  • Item
    Computer Vision Based High-Fidelity Mapping and 3D Reconstruction for Civil Infrastructure Inspection
    (University of Waterloo, 2025-08-26) Bajaj, Rishabh
    Globally, infrastructure is deteriorating due to aging structures and delays in timely and effective rehabilitation. As a result, there has been a growing demand for efficient and scalable methods to assess the condition of civil infrastructure. Traditional inspection practices, which rely heavily on manual visual assessments, are time-consuming, labour-intensive, and prone to human error. As a result, there is growing interest in automating traditional inspection processes. A common approach involves using 3D reconstructions of civil structural components through off-the-shelf Structure-from-Motion (SfM) software to create digital twins for measurement and analysis. However, this method faces several challenges. First, current SfM software used for 3D reconstruction is a black box: it is not optimized for the visually degenerate surfaces common in civil infrastructure, and it lacks transparency for error diagnosis when a reconstruction fails. Second, existing methods often fail to provide end-to-end support for extracting measurements that comply with structural inspection manuals. This thesis proposes an open-access, end-to-end framework for enabling vision-based structural inspection through high-fidelity 3D reconstructions. The motivation is to address key technical and scientific challenges in adopting computer vision tools for infrastructure assessment by providing field-deployable and standards-compliant solutions that enhance both visualization and quantification of structural conditions. The methodologies developed in this thesis support inspection of structural components at small, medium, and large scales. The first and second parts of this thesis address small-scale inspection. In the first part, high-fidelity reconstructions from smartphone-based LiDAR sensors are utilized to extract concrete surface roughness profiles.
Point cloud processing methods are calibrated against existing subjective field tools used by inspectors to ensure compatibility and enable classification of roughness profiles within current inspection frameworks. The second part addresses deployment challenges by introducing a reconstruction tool that only uses images. This enables small-scale reconstruction in environments where LiDAR or high-end equipment is unavailable. By removing hardware constraints and validating the proposed tools through field deployment, this thesis demonstrates the practical feasibility of an open, vision-based inspection workflow for performing surface roughness measurements. The third part focuses on medium-scale inspection. An image-based 3D reconstruction pipeline is developed, followed by integration with an interactive segmentation algorithm. The AI-based segmentation method is integrated with 3D models to detect and quantify a common defect, concrete spalling, in accordance with structural inspection standards. Finally, a large-scale multi-resolution map (MRM) reconstruction workflow is developed for constructing 3D maps with varying resolutions by integrating LiDAR sensor-based 3D maps (coarse resolution) with maps built from images (fine resolution). This method uses a novel image-based localization algorithm to precisely align two maps into a cohesive 3D point cloud. To facilitate MRM, an in-house, cost-effective and portable backpack-based scanner and mapper is designed to collect large-scale colourized LiDAR maps. Experimental results are presented for each method, including field tests, demonstrating the accuracy and utility of the proposed system for real-world inspections. The major contribution of this work is bridging the gap between academic research and practical implementation in infrastructure inspection, advancing toward a more intelligent, scalable, and accessible inspection paradigm.
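Once a surface patch is reconstructed, roughness can be summarized by standard profile statistics such as mean absolute deviation (Ra) and RMS deviation (Rq) of heights about the mean plane. This is a generic sketch of that reduction, not the thesis's calibrated pipeline, and the height deviations are fabricated stand-ins for scanned points:

```python
import math

# Illustrative z-deviations (mm) of points on a near-planar concrete patch
z = [0.2, -0.1, 0.4, -0.3, 0.1, 0.0, -0.2, 0.3, -0.4, 0.0]

mean_z = sum(z)/len(z)                 # reference (mean) plane height
dev = [v - mean_z for v in z]
ra = sum(abs(d) for d in dev)/len(dev)             # mean absolute roughness
rq = math.sqrt(sum(d*d for d in dev)/len(dev))     # RMS roughness
print(f"Ra = {ra:.3f} mm, Rq = {rq:.3f} mm")
```

In practice the mean plane would be a least-squares fit to the 3D points, and the resulting statistic would be mapped onto the inspection manual's qualitative roughness classes, as the thesis's calibration step does.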
  • Item
    Electrochemical Modeling of Bioenergy Generation from Wastewater by Microbial Fuel Cells
    (University of Waterloo, 2025-05-09) Li, Yiming
    As global water scarcity and environmental pollution continue to escalate, innovative wastewater treatment technologies are needed to ensure sustainable water resource management. Conventional wastewater treatment methods, such as activated sludge processes, are energy-intensive, costly, and contribute significantly to greenhouse gas emissions. Microbial fuel cells (MFCs) present a promising alternative, harnessing electroactive bacteria to simultaneously degrade organic pollutants and generate electricity. By leveraging microbial metabolism, MFCs can convert chemical energy in wastewater into usable electrical energy, offering a dual benefit of pollution reduction and renewable energy production. This study focuses on developing a numerical simulation framework to optimize MFC performance, with an emphasis on real-world application at the Guelph Water Resource Recovery Centre (WRRC). A steady-state microbial fuel cell model was developed and validated using experimental data from previous studies. The model employs a finite difference method to solve mass balance equations for key reactants and products, including acetate, dissolved CO₂, protons, and oxygen. The simulation results highlight the influence of various operational parameters—such as substrate concentration, internal resistance, wastewater flow rate, and temperature—on the performance of a dual-chamber MFC. The study further compares MFC efficiency with conventional wastewater treatment processes, demonstrating a significantly higher chemical oxygen demand (COD) removal rate in MFCs (0.0633 kg COD/m³/day), which is approximately 4.7 times greater than that observed at the WRRC. The results emphasize the role of microbial activity and electrochemical interactions in optimizing power generation and pollutant degradation. Key limitations such as oxygen transport restrictions, internal resistance, and pH imbalances were identified, suggesting areas for improvement in MFC design. 
Numerical simulations were further extended to model full-scale integration within WRRC, providing insights into the feasibility of MFC technology as an alternative treatment strategy. Despite challenges in large-scale deployment, MFCs show strong potential for reducing wastewater treatment energy demands and mitigating environmental impacts. This research contributes to the advancement of MFC applications in wastewater treatment by demonstrating the effectiveness of numerical modeling in predicting and optimizing system performance.
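The abstract describes solving mass balance equations by finite differences. A minimal sketch of that kind of solve, for a steady-state diffusion-reaction balance D·c'' = k·c with a fixed concentration at the bulk face and no flux at the electrode; all parameter values are illustrative assumptions, not the model's:

```python
import math

# 1D steady diffusion with first-order substrate consumption, solved on a
# uniform grid by central differences and the Thomas (tridiagonal) algorithm.
D, k, L, c0, N = 1e-9, 2e-3, 1e-3, 1.0, 201   # m^2/s, 1/s, m, mol/m^3, nodes
h = L / (N - 1)

sub = [0.0]*N; dia = [0.0]*N; sup = [0.0]*N; rhs = [0.0]*N
dia[0], rhs[0] = 1.0, c0                       # Dirichlet: c(0) = c0
for i in range(1, N - 1):                      # interior: c[i-1] - (2 + kh^2/D)c[i] + c[i+1] = 0
    sub[i], dia[i], sup[i] = 1.0, -(2.0 + k*h*h/D), 1.0
sub[N-1], dia[N-1] = -1.0, 1.0                 # no-flux: c[N-1] = c[N-2]

for i in range(1, N):                          # forward elimination
    w = sub[i] / dia[i-1]
    dia[i] -= w * sup[i-1]
    rhs[i] -= w * rhs[i-1]
conc = [0.0]*N
conc[-1] = rhs[-1] / dia[-1]                   # back substitution
for i in range(N - 2, -1, -1):
    conc[i] = (rhs[i] - sup[i]*conc[i+1]) / dia[i]

m = math.sqrt(k / D)                           # analytical check:
exact = c0 / math.cosh(m * L)                  # c(L) = c0 / cosh(mL)
print(f"c(L) numeric = {conc[-1]:.4f}, analytic = {exact:.4f}")
```

The actual model couples several such balances (acetate, dissolved CO₂, protons, oxygen) to the electrochemical kinetics; the sketch only shows the discretization machinery.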
  • Item
    Upscaling and downscaling snow processes with machine learning in watershed models
    (University of Waterloo, 2025-05-08) Burdett, Hannah
    Hydrologic models play a vital role in understanding and predicting the movement of water within watersheds, providing essential insights for effective management and sustainability of water resources. However, watersheds exhibit significant heterogeneity in their landscape properties and complex responses to spatiotemporal variations in climatic inputs. This variability introduces a gap between the representation of physical processes at the point scale and their behaviour at the watershed scale, making it challenging to accurately capture the full complexity of the hydrologic cycle across different spatial scales. Bridging this gap requires the identification of effective scaling approaches tailored to capture the complexities across scales. Scaling approaches seek to translate information from one scale to another, whether moving from a smaller to a larger scale (upscaling) or from a larger to a smaller scale (downscaling). Although various approaches in the literature have been applied to develop scaling methods for forcing variables, such as precipitation and temperature, and fluxes (e.g., evapotranspiration), there is a notable gap in deriving and applying scaling techniques for snow-related variables, such as snow water equivalent (SWE), snowmelt, or sublimation. Addressing this gap may help in improving hydrologic model accuracy in snow-dominated regions, where snow dynamics significantly influence water availability and watershed resources. The primary objective of this thesis is to develop, implement, and evaluate machine learning-based upscaling methodologies to aid in understanding the relationship between local-scale snow-related variables, landscape heterogeneity, and the large-scale hydrologic response of a catchment. Such methods are useful for effectively simulating the net impact of local variability in snow processes without resorting to fine-resolution models.
A secondary focus of this research is to identify the conditions under which emergent constitutive relationships specific to snow-related fluxes are (or are not) valid and to assess the transferability of these relationships. Finally, this work introduces a machine learning-based downscaling approach that refines large-scale mean model outputs into localized snow states and fluxes. Together, these scaling techniques explore the potential of machine learning to address challenges in hydrologic scaling specific to snow-related fluxes.
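The two scaling operations the thesis builds on can be shown in their simplest form: upscaling collapses fine-resolution SWE to an area-weighted catchment mean, and downscaling redistributes that mean using a sub-grid pattern. The thesis learns these mappings with machine learning; the fixed weights below are illustrative assumptions only:

```python
# Sketch: area-weighted upscaling and pattern-based downscaling of SWE.
swe_fine = [120.0, 95.0, 60.0, 30.0]   # mm, four sub-units of a catchment
areas    = [0.4, 0.3, 0.2, 0.1]        # area fractions (sum to 1)

# Upscale: area-weighted catchment mean
swe_mean = sum(s*a for s, a in zip(swe_fine, areas))

# Downscale: redistribute the mean with an assumed sub-grid pattern,
# normalized so the area-weighted mean of the weights is 1 (conserves mass)
weights = [1.25, 1.05, 0.85, 0.60]
norm = sum(w*a for w, a in zip(weights, areas))
swe_down = [swe_mean*w/norm for w in weights]

print(f"catchment mean = {swe_mean:.1f} mm, downscaled = {swe_down}")
```

The learned versions replace the fixed weights with functions of landscape attributes (elevation, aspect, land cover), which is what lets them transfer between catchments when the emergent relationships hold.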
  • Item
    Real-Time Short-Term Intersection Turning Movement Flows Forecasting Using Deep Learning Models for Advanced Traffic Management and Information Systems
    (University of Waterloo, 2025-05-07) Zhang, Ce
    Traffic congestion remains a persistent challenge in urban transportation systems, causing excessive travel delays, increased fuel consumption, and severe environmental pollution. To address these issues, Advanced Traffic Management and Information Systems (ATMIS) have been developed, integrating real-time traffic monitoring, adaptive control strategies, and data-driven decision-making to enhance overall traffic efficiency. A crucial component of ATMIS is the real-time forecasting of intersection Turning Movement Flows (TMFs), which provides essential data for optimizing signal timings, improving vehicle routing, and implementing proactive congestion mitigation strategies. By leveraging accurate TMFs predictions, transportation agencies can dynamically adjust traffic signals, enhance intersection operations, and reduce delays, ultimately improving urban mobility and minimizing environmental impacts. While numerous traffic forecasting models exist, they face significant limitations in capturing the complex spatial and temporal patterns inherent in intersection-level TMFs, as they primarily rely on historical traffic data without adequately modeling these dependencies. Moreover, most existing approaches fail to incorporate exogenous factors, such as weather conditions, road characteristics, and other time-dependent variables, which significantly influence traffic flow but are often ignored. These shortcomings lead to poor generalization performance when applied to hold-out intersections (few-shot) and unseen regions (zero-shot), making them less effective in real-world dynamic traffic environments. To overcome these challenges, this study systematically develops and evaluates a deep learning-based TMFs forecasting framework designed for improved generalization and interpretability. 
First, we employ a Parallel Bidirectional LSTM (PB-LSTM) with multilayer perceptron (MLP) to capture both long-term seasonality and spatial dependencies, thereby enhancing the model's transferability across different locations, improving performance across hold-out intersections. Second, we integrate an encoder-decoder architecture using Deep Autoregressive (DeepAR) model, which enables probabilistic forecasting and quantifies uncertainty, ensuring robust predictions under varying traffic conditions. Third, we leverage the Temporal Fusion Transformer (TFT) to assess the relative importance of external covariates, such as weather conditions and road characteristics, improving interpretability and model reliability by identifying speed zone, road category, hour of the day, and temperature as key influential factors. Finally, we explore the potential of TimesFM, a decoder-only model, to enhance zero-shot learning capabilities, demonstrating strong performance in previously unseen intersections and new city datasets, particularly when enhanced with EMD and RF. To evaluate model performance, we conduct a series of experiments, including hold-out intersection tests, cross-city generalization assessments, and evaluations under extreme weather conditions, to assess robustness and adaptability. Experimental results highlight the effectiveness of integrating exogenous factors and hybrid modeling approaches in improving real-time TMFs forecasting accuracy, generalizability, and robustness under dynamic conditions. These insights provide valuable contributions to the development of scalable and interpretable deep learning models for intersection-level traffic flow prediction, supporting more adaptive and data-efficient traffic management strategies.
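A standard sanity check in evaluations like those above is comparing a model's error against a naive seasonal baseline (e.g., the same interval one week earlier). A minimal sketch with fabricated turning-movement counts, not data from the study:

```python
# Sketch: mean absolute error of a model forecast vs. a naive
# same-time-last-week baseline. All counts are illustrative.
actual   = [42, 55, 61, 58, 47, 39]   # observed 15-min turning counts
model    = [40, 57, 59, 60, 45, 41]   # hypothetical model forecast
lastweek = [38, 60, 52, 66, 50, 33]   # naive seasonal baseline

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

print(f"model MAE = {mae(model, actual):.2f}, "
      f"naive MAE = {mae(lastweek, actual):.2f}")
```

Only when a deep model beats such baselines on hold-out intersections and unseen cities, as in the experiments described above, does the added complexity pay off.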
  • Item
    Effect of Biofilm Formation on the Sorption of Per- and Polyfluoroalkyl Substances to Colloidal Activated Carbon
    (University of Waterloo, 2025-04-29) Moran, Erica Lynne
    Per- and polyfluoroalkyl substances (PFAS) are a class of contaminants that have garnered increasing concern due to their widespread presence and harmful effects on humans and ecosystems. PFAS enter the environment via many different pathways, with the release of PFAS-containing aqueous firefighting foams being a major source of groundwater contamination. Because PFAS are highly resistant to most chemical and biological degradation processes, they are currently removed from groundwater mainly by ex-situ adsorption, which is expensive and energy intensive. Recently, activated carbon (AC) permeable reactive barriers (PRBs) have been proposed and used in-situ to limit the downgradient migration of PFAS by groundwater. AC PRBs are created by injecting powdered activated carbon (PAC) or colloidal activated carbon (CAC) into the subsurface to generate a stationary zone that removes PFAS by adsorption. As with any adsorption technology, however, PFAS breakthrough will occur once adsorptive sites in the barrier are exhausted. To improve our understanding of the ability of AC PRBs to adsorb PFAS and their longevity, there is a need for research that evaluates the adsorption of PFAS on AC and the factors affecting this process. The research reported in this thesis focused on one potential influencing factor, namely biofilm. Specifically, the objectives of this study were first, to evaluate if a biofilm can form on small (<5 µm) CAC particles, and second, to examine the impact that biofilm may have on the adsorption of PFAS to CAC. To address the first objective, the growth of Pseudomonas putida (P. putida), an aerobic bacterium, in the absence of particulate and in the presence of either CAC or fine silica was investigated. P. putida was selected because it has been shown to readily form a biofilm, is not infectious to humans, is commonly found in the environment, and has applications in the bioremediation of organic contaminants.
Analyses of the bacterial samples by confocal laser scanning microscopy (CLSM) indicated that the bacteria remained planktonic when no particulate was present but formed a biofilm consisting of cells and CAC or sand particles held together by extracellular polymeric substances (EPS). Over seven days of growth, the biofilm formed on CAC increased in thickness and decreased in roughness as it developed and formed more cohesive structures. Results suggest that P. putida is capable of forming a biofilm on CAC particles. Rather than the classical depiction of a biofilm adhered to a single surface, the P. putida biofilm was formed on an aggregate of CAC particles, which were held together by EPS. To address the second objective, the adsorption of perfluorooctane sulfonate (PFOS, a hydrophobic PFAS) and perfluoropentane carboxylate (PFPeA, a hydrophilic PFAS) on virgin and biofilm-coated CAC was investigated. P. putida was grown in the presence of CAC, and either PFOS or PFPeA was added to the microcosms once a biofilm was formed. Because the adsorption of PFAS to CAC is known to be impacted by the presence of dissolved organic carbon (DOC), experiments were also conducted to determine the impact of broth (used for culture growth) concentration on the extent of PFAS sorption to CAC and the development of the biofilm. In the experiments without bacteria, the amount of PFOS adsorbed to CAC decreased as the concentration of broth was increased. The relationship between aqueous and sorbed PFOS could not be described by a linear, Freundlich, or Langmuir isotherm model, likely due to competitive sorption between the DOC present in the broth and PFOS. In the experiments with P. putida, it was observed that as the broth concentration increased, the biofilm became thicker and smoother, as the additional broth appeared to have aided biofilm development.
Subsequent experiments, conducted with 3 mg/L broth and 80 mg/L broth (which represented low and high DOC concentrations, respectively), revealed that the majority of PFOS sorption on virgin and biofilm-coated CAC occurred during the first three days, and that the biofilm resulted in a decrease in PFOS adsorption. This decreased adsorption is presumed to be due to biofilm blocking sorption sites. For PFPeA, limited sorption occurred, and no significant difference was observed between the amount adsorbed in the bacteria-free CAC and P. putida-containing CAC systems. The difference in sorption between PFOS and PFPeA was attributed to decreased hydrophobic interactions between CAC and the shorter fluorinated tail of PFPeA. The results of this study improve our understanding of how biofilm may impact CAC PRBs implemented for the management of PFAS. Biofilm can form on cell-sized particles and, as a result, may reduce the adsorption of long-chain compounds, such as PFOS. The effect of biofilm on the adsorption of short-chain compounds, such as PFPeA, may be less prominent than for PFOS, as the extent of sorption is comparatively limited. Further investigation is required to evaluate the impacts of biofilm on CAC sorption of other PFAS, the interactions of biofilm with other groundwater parameters, and the extent to which biofilm plays a role in the longevity of CAC PRBs in column or field scale studies.
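The isotherm screening mentioned above (linear, Freundlich, Langmuir) can be sketched briefly. The example below fits a Freundlich isotherm by linear regression in log space; the concentration and sorbed-mass values are hypothetical placeholders, not measurements from the thesis.

```python
import numpy as np

# Hypothetical equilibrium sorption data for a PFAS on activated carbon:
# aqueous concentration C_eq (mg/L) and sorbed mass q (mg/g). Not thesis data.
c_eq = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
rng = np.random.default_rng(0)
q = 12.0 * c_eq ** (1.0 / 2.5) * (1.0 + 0.02 * rng.standard_normal(c_eq.size))

# The Freundlich isotherm q = K_f * C^(1/n) is linear in log space:
# log q = log K_f + (1/n) log C, so fit a straight line by least squares.
slope, intercept = np.polyfit(np.log(c_eq), np.log(q), 1)
k_f = float(np.exp(intercept))   # Freundlich capacity parameter
n = float(1.0 / slope)           # Freundlich intensity parameter
```

A poor fit of all three isotherm forms, as reported above, would show up as systematic curvature in the log-log residuals rather than random scatter.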
  • Item
    Saturation-Dependent Thermal Conductivity of Southern Ontario Soils
    (University of Waterloo, 2025-04-25) Islam, Zahidul
    Soil thermal conductivity is an important parameter in geotechnical and environmental engineering applications, influencing the performance of underground energy storage, ground heat exchangers, and other subsurface thermal systems. Through geotechnical characterization and laboratory measurements, this study investigates the thermal conductivity of 20 soil samples collected from seven locations in Southern Ontario. Key soil properties, including texture, moisture content, and bulk density, were analyzed to understand their impact on thermal conductivity. Measured thermal conductivity values were compared with published regression-based and normalized models to assess their predictive accuracy across diverse soil types. A statistical evaluation incorporating root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²) was performed to identify the best-performing models. The results indicate that the models of Lu et al. (2014) and Yoon et al. (2018) are the most reliable regression-based models, demonstrating strong correlations with measured data, minimal bias, and low error margins. Among normalized models, the Côté and Konrad (2005) model exhibited superior adaptability and lower prediction errors, while Johansen’s (1975) model performed well but required calibration for extreme soil compositions. The results emphasize the significant influence of soil texture and moisture content on thermal conductivity, with silty and sandy soils exhibiting higher values due to their mineral composition and structural properties. The best-performing models effectively captured these variations, highlighting their applicability in geotechnical and environmental engineering.
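The statistical comparison described above reduces to three standard error metrics. A minimal sketch (the measured and predicted conductivity values below are made up for illustration, not the study's data):

```python
import numpy as np

def evaluate_model(k_measured, k_predicted):
    """Return (RMSE, MAE, R^2) for predicted vs. measured thermal conductivity."""
    k_m = np.asarray(k_measured, dtype=float)
    k_p = np.asarray(k_predicted, dtype=float)
    residuals = k_m - k_p
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    mae = float(np.mean(np.abs(residuals)))
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((k_m - k_m.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

# Illustrative values only (W/m·K): measured vs. one model's predictions
k_meas = [0.8, 1.2, 1.6, 2.1, 2.4]
k_pred = [0.9, 1.1, 1.7, 2.0, 2.5]
rmse, mae, r2 = evaluate_model(k_meas, k_pred)
```

Ranking candidate models then amounts to computing these three numbers per model and preferring low RMSE/MAE and high R².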
  • Item
    Assessing the Prevalence of Energy Hardship in Canada: An Enhanced Methodology Integrating Energy Modeling
    (University of Waterloo, 2025-04-25) Al Humidi, Sara
    The building sector has focused on addressing climate mitigation through the electrification and decarbonization of households, mainly by upgrading building envelopes and replacing combustion-based systems with electric heat pumps. However, the impact of climate change could result in more households falling into energy hardship, underscoring the need for an equitable transition. A household falls under energy hardship if its energy expenditure ratio exceeds the defined threshold, regardless of its total household income. Energy hardship encompasses both energy poverty and energy burden. Thus, a household experiences energy poverty if i) its energy expenditure ratio exceeds the defined threshold and ii) the total household income is below low-income cut-offs. A household experiences an energy burden if i) its energy expenditure ratio exceeds the defined threshold and ii) the total household income is above low-income cut-offs. Techno-economic factors such as energy costs, type of fuel, building age and envelope condition, and type of heating and cooling system in a household contribute to energy hardship. Socioeconomic factors such as income, education, and race also contribute to the problem, making energy poverty a multidimensional issue with significant implications for public health, social equity, and environmental sustainability. This study aims to quantify energy hardship in Canada in 2019 and 2021 and identify the building and household characteristics associated with energy poverty. Further analysis was completed for Ontario, Canada, to establish a correlation and quantify the impact that climate change and household electrification (e.g., switching from a natural gas furnace to a heat pump) have on energy hardship. The study identified key indicators of energy poverty and burden, confirming that household income is the most critical factor. Nearly 40% of Canadian households with an income of CAD$29,000 fall under energy poverty.
Older dwellings, which tend to be leaky with poor insulation and outdated HVAC systems, contribute to higher energy consumption. Single-detached homes, with major repair requirements, are likely to be energy burdened (17%). Additionally, socioeconomic factors play a role, with one-person households (31%) being the most affected by energy hardship. Education and employment also indirectly impacted energy poverty (10%); households with a higher education and full-time employment were less likely to be energy poor. The Ontario-specific analysis mirrored national trends, revealing that energy burden is more pronounced in rural areas (36%). In addition, an energy simulation study was performed for a median energy-poor household in Ontario. The study investigated two scenarios: a business-as-usual scenario where the households performed minimal energy efficiency upgrades, and an electrification and decarbonization scenario where energy-poor households implemented measures such as envelope renovations and switching to a fully electric heating system (i.e., a cold climate air source heat pump). The energy modeling results revealed the importance of income levels in alleviating energy hardship. Regardless of the level of energy efficiency measures applied, the median energy-poor household remained in energy poverty after building enclosure and airtightness improvement, whether maintaining a natural gas furnace or fuel switching to a heat pump (11.2% and 14.3%, respectively). Households earning CAD$50,000 after tax came out of energy hardship after insulation and airtightness upgrades. However, the adoption of an electric heat pump worsened energy hardship by doubling electricity costs, despite reducing energy use intensity by nearly 50%. It was concluded that energy efficiency measures alone are not enough to lift households out of energy poverty (or hardship, in general) in Ontario.
By analyzing the prevalence, causes, and impacts of energy poverty in Canada and Ontario, this study aims to develop a replicable methodology that provides evidence-based insights to inform policy decisions and support the development of effective interventions that are inclusive and equitable for all households.
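The two-step definition above (expenditure ratio against a threshold, then income against a low-income cut-off) can be sketched as a small classifier. The 6% ratio threshold and the CAD$32,000 cut-off below are illustrative placeholders, not the values used in the thesis.

```python
def classify_household(annual_energy_cost, after_tax_income,
                       ratio_threshold=0.06, low_income_cutoff=32000.0):
    """Classify a household per the two-step energy-hardship definition.

    The 6% threshold and CAD$32,000 cut-off are illustrative assumptions.
    """
    ratio = annual_energy_cost / after_tax_income
    if ratio <= ratio_threshold:
        return "no hardship"
    # Hardship splits by income: below the cut-off it is energy poverty,
    # above it the household is energy burdened.
    if after_tax_income < low_income_cutoff:
        return "energy poverty"
    return "energy burden"
```

For example, a household spending CAD$3,000 on energy out of CAD$25,000 of after-tax income (ratio 0.12) would be classified as energy poor under these placeholder thresholds.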
  • Item
    Glass Fiber-Reinforced Polymer (GFRP) Reinforced Concrete Corner Joints Subjected to Opening Moments
    (University of Waterloo, 2025-04-24) Bashbishi, Lamar
    Concrete corner joints are elements in structures that transfer forces between their adjoining members. In recent decades, glass fiber-reinforced polymer (GFRP) has been gaining popularity due to its corrosion resistance and light weight. However, its linear elastic properties and lack of practical bond slip theory make it challenging for engineers to properly detail GFRP reinforcement in corner joints. Previous studies on GFRP-reinforced concrete closing joints have been conducted at the University of Waterloo; however, no research has been conducted on GFRP-reinforced concrete opening joints. The experimental program presented in this thesis consists of eight full-scale corner joint specimens which were subjected to monotonic opening moments. The specimens were divided into two groups based on their tensile rebar geometry within the cantilever slab: Type A specimens with straight tensile bars, and Type B specimens with hooked tensile bars. Within each group, the specimens were constructed with one of the following: a) an unreinforced joint panel, b) bent bars perpendicular to the inner corner, c) confining stirrups within the joint panel, or d) both bent bars and confining stirrups. The aim of the study was to determine the effects of each design choice on corner joint behaviour. Test results showed that increasing the development length of GFRP using hooked bars reduced bond-slip and increased joint strength and deformability. When primary tensile cracks were constrained using bent bars or confining stirrups, the main influence on joint strength became the strength of the concrete. Joints that contained bent bars perpendicular to the inner corner exhibited more consistent post-peak responses and had higher joint deformability than their base specimens. Joints with confining stirrups saw reduced widths of shear cracks as well as reduced bond-slip of the bars they confined.
Further studies on GFRP-reinforced concrete joints must be conducted, including the study of the effect of different bent bar areas and sizes, different member geometries, as well as different GFRP development lengths/anchorage methods.
  • Item
    Investigating the Performance of Straight and Bent GFRP Bars as Flexural Reinforcement for Glulam Beams
    (University of Waterloo, 2025-04-22) Shrimpton, Catherine
    The heightened interest in using wood as a sustainable building material has contributed to increased demand for glued-laminated timber (glulam). However, fundamental research is still required on how to rehabilitate and retrofit deficient structural wooden members to extend the service life of the structure. This research focuses on the effects of reinforcement configurations consisting of glass-fibre reinforced polymer (GFRP) bars on the flexural behaviour of glulam beams. Of particular interest are the effects of reinforcement length, adhesive type, and knurling on the failure modes of the reinforced members when compared to unreinforced glulam. A total of eighteen pullout specimens were tested to investigate the effects of adhesive and knurling patterns on bond strength, and fourteen full-scale glulam beams were tested to failure under four-point static bending, including four unreinforced and ten GFRP-reinforced. The pullout test results showed that the texture and density of an adhesive played a critical role in the overall behaviour, with improved performance in specimens using a more fluid adhesive in comparison to those with a dense one. The addition of GFRP reinforcement to the glulam beams contributed to an increase in strength, failure displacement, and stiffness by factors ranging between 1.16 – 1.30, 1.04 – 1.24, and 1.13 – 1.18, respectively, in comparison to unreinforced glulam irrespective of the failure mode obtained. The effects of reinforcement length and termination point showed that the change from short to long bars resulted in improvements in maximum resistance and stiffness by factors of 1.10 and 1.19, respectively, for the bent bar reinforced specimens, and insignificant improvements for the straight bar reinforced specimens.
Additionally, the change from straight to bent bars resulted in improvements in maximum resistance and stiffness by factors of 1.06 and 1.03, respectively, for the specimens with longer lengths of bars, and insignificant improvements for the specimens with short lengths of bars. The addition of knurling in the full-scale GFRP-reinforced beams resulted in increases by factors of 1.07 and 1.03 in maximum resistance and stiffness, in comparison to beams without knurling. Additionally, a change in failure mode from shear to flexure was observed with the addition of knurling. A material model was developed to predict the flexural behaviour of unreinforced and GFRP-reinforced glulam beams, and the two proposed approaches were shown to generally capture the overall behaviour with a tendency to overpredict displacements at initial failure. Finally, the improvement in tensile failure strains in flexure due to the reinforcement was not observed, owing to the mixed failure modes of shear and flexure. Strains from the digital image correlation system were observed to be lower than those measured by localized strain gauges, suggesting that measuring strains over a large area is critical.
  • Item
    Towards SLAM-Centric Inspection of Infrastructure
    (University of Waterloo, 2025-04-15) Charron, Nicholas
    The inspection and maintenance of civil infrastructure are essential for ensuring public safety, minimizing economic losses, and extending the lifespan of critical assets such as bridges and parking garages. Traditional inspection methods rely heavily on manual visual assessments, which are often subjective, labor-intensive, and inconsistent. These limitations have driven the development of robotic-aided inspection techniques that leverage mobile robotics, sensor fusion, computer vision, and machine learning to enhance inspection efficiency and accuracy. Despite advancements in robotic-aided inspection, existing works often focus on isolated components of the inspection process—such as improving data collection or automating defect detection—without providing a complete end-to-end solution. Many approaches utilize robotics to capture 2D images for inspection, but these lack spatial context, making it difficult to accurately locate, quantify, and track defects over multiple inspections. Other works extend this by detecting defects within images; however, without a robust 3D representation, defects cannot be precisely geolocated or measured in real-world dimensions, limiting their utility for long-term monitoring. While some studies explore 3D mapping for inspection, the majority rely on image-only Structure-from-Motion, which is known to be unreliable for generating dense and accurate maps, or are restricted to mapping along 2D surfaces, thereby failing to capture the full complexity of infrastructure assets. This thesis introduces a novel SLAM (Simultaneous Localization and Mapping)-centric framework for robotic infrastructure inspection, addressing these limitations by integrating lidar, cameras, and inertial measurement units (IMUs) into a mobile robotic platform. This system enables precise and repeatable localization, 3D mapping, and automated inspection of infrastructure assets. 
Three key challenges that hinder the development of a practical SLAM-centric inspection system are identified and addressed in this work. The first challenge pertains to the design and implementation of SLAM-centric robotic systems. This thesis demonstrates how sensor selection and configuration can be optimized to simultaneously support both high-accuracy SLAM and high-quality inspection data collection. Additionally, it establishes a robotic platform-agnostic design, allowing for flexibility across different infrastructure inspection applications. The second challenge involves precise and reliable calibration of camera-lidar systems, particularly when sensors have non-overlapping fields of view, as is the case with the proposed inspection systems. To address this, a novel target-based extrinsic calibration technique is developed, leveraging a motion capture system to achieve high-precision calibration across both sensing modalities. This ensures accurate sensor fusion, yielding geometrically consistent inspection outputs. The third challenge is the development of a complete end-to-end inspection methodology. This research implements state-of-the-art online camera-lidar-IMU SLAM, with an added offline refinement process and a decoupled mapping framework. This approach enables the generation of high-quality 3D maps that are specifically tailored for infrastructure inspection by prioritizing accuracy, density, and low noise in the map. Machine learning-based defect detection is then integrated into the pipeline, coupled with a novel 3D map labeling method that transfers visual and defect information onto the 3D inspection map. Finally, an automated defect quantification and tracking system is introduced, allowing for defects to be monitored across multiple inspection cycles—completing the full end-to-end inspection workflow.
The proposed SLAM-centric inspection system is validated through extensive real-world experiments on infrastructure assets, including a bridge and a parking garage. Results demonstrate that the system generates highly accurate, repeatable, and metrically consistent inspection data, significantly improving upon traditional manual inspection methods. By enabling automated defect detection, precise localization, and long-term defect tracking within a robust 3D mapping framework, this research represents a paradigm shift in infrastructure assessment—transitioning from qualitative visual inspections to scalable, data-driven, and quantitative condition monitoring. Ultimately, this thesis advances the field of robotic infrastructure inspection by presenting a comprehensive SLAM-centric framework that integrates state-of-the-art sensing, calibration, and mapping techniques. The findings have broad implications for the future of automated infrastructure management, providing a foundation for intelligent inspection systems that can enhance the efficiency, reliability, and safety of civil infrastructure maintenance worldwide.
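A core operation in the map-labeling step described above is projecting 3D lidar or map points into a camera image using the calibrated extrinsics. A minimal pinhole-projection sketch follows; the transform and intrinsic matrix are illustrative placeholders, not the thesis's calibration values.

```python
import numpy as np

def project_point(p_lidar, T_cam_lidar, K):
    """Project a 3D point from the lidar frame into pixel coordinates.

    T_cam_lidar: 4x4 homogeneous lidar-to-camera extrinsic transform.
    K: 3x3 camera intrinsic matrix.
    Returns (u, v), or None if the point is behind the image plane.
    """
    p_h = np.append(np.asarray(p_lidar, dtype=float), 1.0)
    p_cam = (T_cam_lidar @ p_h)[:3]
    if p_cam[2] <= 0.0:
        return None  # not visible to this camera
    uv = K @ (p_cam / p_cam[2])
    return float(uv[0]), float(uv[1])

# Illustrative calibration: identity extrinsics and a simple pinhole model
T = np.eye(4)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
```

Running the inverse of this mapping over detected defect pixels is, conceptually, how 2D detections get attached to the 3D inspection map.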
  • Item
    Advancing Structural Engineering Through Data-Driven Methodologies: Seismic Vulnerability Assessment and Backbone Curve Determination
    (University of Waterloo, 2025-04-11) Elyasi, Niloofar
    Structural engineering has traditionally relied on analytical and experimental methods to ensure the safety of structures. These methods, while effective, often require significant resources, time, and expertise, limiting their applicability across diverse contexts. Meanwhile, vast amounts of data collected from surveys, experimental studies, and seismic events remain largely underutilized, providing a unique opportunity to develop advanced data-driven methodologies. This thesis aims to harness the potential of the available data repositories to address critical challenges in structural engineering, with a focus on seismic vulnerability assessment and backbone curve determination. Through the use of machine learning (ML), this thesis introduces innovative methodologies at both the system and component levels. A rapid visual screening (RVS) framework is developed to quickly assess the seismic vulnerability of low-rise reinforced concrete (RC) buildings. By incorporating ML models, this framework outperforms traditional evaluation methods with higher accuracy and broader applicability. Using post-earthquake survey datasets from a variety of seismic events, it proposes a region-independent tool, eliminating reliance on subjective judgments and region-specific calibrations. For backbone curve determination, used for analyzing the seismic behavior of RC columns, this thesis introduces a novel ML-based methodology. By employing experimental datasets and advanced regression techniques, it offers a practical and efficient alternative to the conventional methods. This approach not only predicts backbone curve parameters with high accuracy but also ensures accessibility for broader applications, especially in resource-limited environments. In summary, this thesis bridges system-level and component-level challenges, underscoring the potential of data-driven approaches in structural engineering. 
By providing a foundation for integrating innovative approaches into the field, this research advances both academic insights and practical applications. These contributions respond to the demand for efficient and reliable solutions, supporting safer structures and more effective resource management in modern structural engineering practices.
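A minimal sketch of the component-level idea above: regress a backbone-curve parameter on column features. The features, target relationship, and data below are synthetic placeholders fit by ordinary least squares; the thesis uses richer experimental datasets and more advanced regression techniques.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical RC-column features: [axial load ratio, longitudinal reinf. ratio]
X = rng.uniform([0.05, 0.01], [0.5, 0.04], size=(40, 2))
# Synthetic target: a backbone parameter (e.g., drift at peak strength),
# generated from an assumed linear relationship plus noise
y = 0.03 - 0.02 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0.0, 1e-4, 40)

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    """Predict the backbone parameter for new feature rows."""
    x = np.atleast_2d(x)
    return np.column_stack([np.ones(len(x)), x]) @ coef
```

In practice the same pattern (fit on experimental data, predict for new columns) is what makes such models usable in resource-limited settings.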
  • Item
    Chemo-rheological Characterization of Asphalt Binders Using Different Aging Processes
    (University of Waterloo, 2025-03-17) Sharma, Aditi; Baaj, Hassan; Tavassoti, Pejoohan
    The performance and longevity of asphalt pavements depend heavily on the properties of asphalt binders, which are affected by aging, binder modifications, and the incorporation of reclaimed asphalt pavement (RAP) materials. However, significant gaps exist in understanding the long-term chemical and rheological changes induced by aging processes (particularly with respect to differences between thermo-oxidative aging and UV exposure), and in the use/standardization of chemical analytical techniques such as Fourier Transform Infrared (FTIR) and Nuclear Magnetic Resonance (NMR) spectroscopy for binder characterization. Furthermore, the behaviour of RAP-virgin binder blends, along with the influence of bio-based rejuvenators and anti-aging additives under different aging conditions, remains underexplored. Addressing these gaps is crucial to developing more durable, sustainable pavements. This thesis bridges these research gaps through a comprehensive investigation of chemo-rheological binder characterization, combining experimental testing with advanced analytical tools and varying aging methods. The findings offer essential insights into binder aging, rejuvenation strategies, and modification techniques, with significant implications for pavement durability and environmental sustainability. The first chapter presents an evaluation of Attenuated Total Reflection-Fourier Transform Infrared (ATR-FTIR) spectroscopy combined with functional group and multivariate analysis techniques to characterize asphalt binders. The research identifies challenges in repeatability across binder sources and aging states, demonstrating the importance of standardized protocols for improving reliability. Repeatability, as described by AASHTO standards, is listed in the precision and bias statement as single operator precision.
This is the allowable difference between two test results measured under the repeatability conditions (same asphalt binder, measured by the same operator, on the same piece of equipment in the same lab). Principal Component Analysis (PCA) and k-means clustering successfully classified binder types and aging states, with large quantity (LQ) sample preparation yielding more consistent results than small quantity (SQ) preparation. These findings underscore the need for uniform procedures in binder analysis, addressing inconsistencies prevalent in the current literature. The second part of the thesis investigates the impact of Styrene-Butadiene-Styrene (SBS) polymer modification on binder performance and oxidative resistance. Using Nuclear Magnetic Resonance (NMR) and ATR-FTIR spectroscopy, along with PCA and Partial Least Squares Regression (PLSR), the research highlights the ability of SBS to enhance high-temperature performance and slow thermo-oxidative aging. This work not only confirms previous findings on SBS but also provides new insights into the molecular interactions contributing to aging resistance. The study fills a gap in understanding how SBS-modified binders behave under various aging scenarios, offering a deeper perspective on polymer-modified asphalt technologies. The thesis also addresses a critical gap related to UV-induced aging, which has been underexplored in comparison to thermo-oxidative aging. A novel UV aging chamber was developed to simulate real-world environmental conditions, incorporating UV exposure, water spray cycles, and controlled heating at 70°C. Comparative analysis revealed that different additives exhibit varying effectiveness under UV and thermo-oxidative conditions. Zinc diethyldithiocarbamate (ZDC) showed strong resistance to thermo-oxidative aging but limited efficacy under UV aging, while ascorbic acid (Vit. C) accelerated aging under UV exposure, contrary to expectations.
These findings emphasize the challenges involved in designing effective anti-aging strategies for asphalt binders, demonstrating the value of combining conventional rheological tests with spectroscopic techniques and further highlighting the need for more targeted approaches to additive selection and development. This thesis advances the understanding of asphalt binder behaviour and aging processes by integrating chemical, rheological, and multivariate analysis techniques. It offers critical contributions to the standardization of binder characterization protocols, the optimization of polymer-modified asphalt technologies, and the development of more effective anti-aging strategies. The research also demonstrates the potential of machine learning and artificial intelligence (AI) in predicting binder performance from spectroscopic data using multivariate analysis, paving the way for future innovations in asphalt binder characterization. In conclusion, the work in this thesis addresses significant gaps in the literature, providing new insights into aging mechanisms, additive/rejuvenation strategies, and RAP binder interactions. By combining chemical analysis, rheological testing, and multivariate techniques, this research contributes both to academic knowledge and practical pavement engineering, promoting the development of more sustainable, long-lasting asphalt pavements.
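The PCA-plus-k-means classification used above can be sketched compactly. The spectra below are synthetic (two "aging states" separated by added absorbance in one band region), standing in for real ATR-FTIR data; PCA is computed via an SVD and clustered with a minimal NumPy k-means.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical ATR-FTIR absorbance spectra: 30 samples x 200 wavenumbers.
# Two synthetic "aging states" differ by extra absorbance in one band region
# (loosely mimicking a carbonyl-band growth). Not the thesis's data.
unaged = rng.normal(0.20, 0.02, (15, 200))
aged = rng.normal(0.20, 0.02, (15, 200))
aged[:, 80:100] += 0.30
spectra = np.vstack([unaged, aged])

# PCA via SVD of the mean-centred data matrix
centred = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T  # first two principal-component scores

# Minimal k-means (k=2) on the PC scores
centroids = scores[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([scores[labels == k].mean(axis=0) for k in range(2)])
```

With a clear spectral difference, the two aging states separate along the first principal component and the clusters recover the true grouping.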
  • Item
    LiDAR-Driven Calibration of Microscopic Traffic Simulation for Balancing Operational Efficiency and Prediction of Traffic Conflicts
    (University of Waterloo, 2025-01-21) Farag, Natalie; Bachmann, Christian; Fu, Liping
    Microscopic traffic simulation is a proactive tool for road safety assessment, offering an alternative to traditional crash data analysis. Microsimulation models, such as PTV VISSIM, replicate traffic scenarios and conflicts under various conditions, thereby aiding in the assessment of driving behavior and traffic management strategies. When integrated with tools like the Surrogate Safety Assessment Model (SSAM), these models estimate potential conflicts. Research often focuses on calibrating these models based on traffic operation metrics, such as speed and travel time, while neglecting safety performance parameters. This thesis investigates the effects of calibrating microsimulation models for both operational metrics, including travel time and speed, and safety metrics, including traffic conflicts and the Post Encroachment Time (PET) distribution, using LiDAR sensor data. The calibration process involves three phases: performance calibration, performance and safety calibration, and safety-only calibration. The results show that incorporating safety-focused parameters enhances the model's ability to replicate observed conflict patterns. The study highlights the trade-offs between operational efficiency and safety, with adjustments to parameters like standstill distance improving safety outcomes without significantly compromising operational metrics. Furthermore, there is a substantial difference in the calibrated minimum distance headway for the safety model, highlighting the trade-off between operational efficiency and safety. While the operational calibration focuses on optimizing flow, the safety calibration prioritizes realistic conflict simulation, even at the cost of reduced flow efficiency. The research emphasizes the importance of accurately simulating real-world driver behavior through adjustments to parameters like the probability and duration of temporary lack of attention.
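Post Encroachment Time, the surrogate safety measure above, is simply the gap between the first road user leaving a conflict area and the second entering it. A sketch, using an illustrative SSAM-style screening threshold rather than the value calibrated in the thesis:

```python
def post_encroachment_time(t_exit_first, t_enter_second):
    """PET (s): time between the first road user leaving a conflict area
    and the second road user entering it. Smaller PET = more severe conflict."""
    return t_enter_second - t_exit_first

def is_conflict(pet, threshold_s=5.0):
    """Flag a PET-based conflict. The 5 s threshold is an illustrative
    SSAM-style default, not the thesis's calibrated value."""
    return 0.0 <= pet <= threshold_s
```

Calibrating the safety side of the model then amounts to adjusting driver-behavior parameters until the simulated PET distribution matches the LiDAR-observed one.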
  • Item
    Reduced Order Geomechanics Models
    (University of Waterloo, 2025-01-14) Hatefi Ardakani, Saeed; Gracie, Robert
    Computational techniques are commonly used for real-time simulation of complex geomechanics problems, such as hydraulic dilation stimulation. A significant challenge in this realm is that high-fidelity mathematical models or full order models (FOMs) are computationally expensive as they must span multiple spatial and temporal length scales, often including nonlinearities and thermo-hydro-mechanical processes. The computationally intensive nature of these simulations continues to pose challenges in parameter estimation, uncertainty quantification, and optimization applications, where hundreds to thousands of simulations are required to achieve a solution. Intrusive reduced order models (ROMs) have emerged as a method to derive and train a computationally efficient surrogate/proxy model using the FOM. This thesis seeks to bridge the gap in existing intrusive ROMs in reservoir engineering by introducing efficient ROMs that are capable of capturing hydro-mechanical coupling behavior and path-dependent plastic deformation of rocks. A complex case involving hydraulic dilation stimulation is used to show the efficiency and accuracy of the ROM in addressing coupling, plasticity, and permeability enhancement features. First, an efficient and accurate ROM is proposed for nonlinear porous media flow problems, with specific application to a two-dimensional layered reservoir with a two-well system. Standard projection-based intrusive ROMs without hyper-reduction, such as proper orthogonal decomposition-Galerkin (POD-Galerkin), have not demonstrated efficacy in reducing the computational cost of the ROM for nonlinear problems. In this context, we combine POD-Galerkin with the discrete empirical interpolation method (DEIM) as a hyper-reduction technique to reduce the size of the system of equations and accelerate the computation of nonlinear terms (residual force vector and its Jacobian).
A column-reduced Jacobian DEIM technique is employed to interpolate the Jacobian, leading to a significant reduction in the computational time of the online stage. The ROM is parameterized for the nonlinear transient injection rate (pumping schedule). Offline, training data are generated by FOM runs with simple constant injection rates. Online, the ROM demonstrates high accuracy and efficiency for complex and time-varying pumping schedules, including sinusoidal, high-frequency, and time-discontinuous pumping schedules that are located outside of the training regime. It is shown that the POD-DEIM ROM has about 10^3 times fewer degrees of freedom (DoFs) and is approximately 190 times faster than the FOM for a reservoir model with 3*10^4 DoFs, while maintaining an accurate solution in the online stage. The accuracy and efficiency of the POD-DEIM motivate its potential use as a surrogate model in the real-time control and monitoring of fluid injection processes. Intrusive ROMs have faced considerable difficulties in accurately capturing the history-dependent nonlinear evolution of plastic strain. In the second objective, an intrusive ROM is developed and evaluated for a Drucker-Prager plasticity model, in which material properties and cyclic load path are parametric inputs. By constructing multiple local DEIM (LDEIM) approximations in combination with clustering and classifier techniques, a fast and accurate ROM is achieved. The FOM consists of a two-dimensional finite element analysis (FEA) of a deformable solid with Drucker-Prager plasticity. Offline, the temporal and parameterized training data generated from FOM runs are classified using the k-means clustering algorithm, whereby LDEIM basis vectors are computed. Online, a nearest neighbor classifier identifies the appropriate LDEIM. The ROM has three hyper-parameters (the size of the ROM, the number of clusters, and the number of DEIM measurement points per cluster), influencing both accuracy and speed-up.
In a micromechanics porous media problem, parameterized by Young's modulus and hardening modulus, the ROM's performance is demonstrated for inputs both within and outside of the training domain; error and speed-up vary with the inputs: accuracy is highest for inputs within the training domain (errors of 1.0-3.5% versus 1.0-9.2% outside it), while speed-up ranges from 106 to 134 times. For a cyclic plasticity problem, parameterized by load path, the ROM exhibits stable and accurate online performance with substantial speed-ups for test load paths. For FOMs with ~10^3 and ~5*10^4 DoFs, the speed-ups are 11 and 770 times, respectively; larger speed-ups are expected for larger FOMs. Finally, the ROM for nonlinear transient porous media flow, treated as a diffusion problem, is coupled with the ROM for plasticity to develop a novel ROM formulation for poroplasticity problems. This ROM aims to significantly reduce the computational cost of nonlinear, fully coupled hydro-mechanical simulations in large-scale reservoirs. The developed mathematical model integrates a coupled system of equations from a two-dimensional FEA of the momentum and mass balance equations, equipped with Drucker-Prager plasticity and stress-dependent permeability enhancement models. The proposed ROM combines several reduction techniques: POD-Galerkin to reduce the number of DoFs, DEIM to accelerate the computation of nonlinear terms, and local POD and local DEIM (LPOD/LDEIM) for further reductions in poroplasticity problems. LPOD and LDEIM classify the parameterized training data, obtained from offline FOM runs, into multiple subspaces with similar dynamic features. A new clustering and classification strategy tailored to the coupled formulation is introduced. The advantages of this ROM are demonstrated in a large-scale application involving hydraulic dilation stimulation of a reservoir with a horizontal well pair. The ROM is parameterized not only by the material properties but also by the injection rate. 
Its effectiveness is evaluated for realistic use cases, where the ROM remains efficient for injection rates that extend beyond the training data. In large-scale subsurface flow modeling of hydraulic dilation stimulation, a speed-up of ~400 times is achieved, with the ROM reducing the model dimension from 10^5 DoFs to 100 DoFs. This substantial computational saving permits real-time analysis with the ROM and becomes even more valuable in multi-query problems, where the model must be executed for many inputs and system configurations. This ROM has high potential for accelerating problems such as uncertainty quantification, design, history matching, and well control optimization. It is also recommended that the proposed ROM be adopted for other real-world subsurface applications, including conventional and unconventional oil and gas production, hydraulic fracturing, and carbon storage.
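The coupled reduction can be pictured as a block-Galerkin projection: each field (displacement, pressure) keeps its own basis, and the coupled system is projected onto the block-diagonal combined basis. The sketch below uses random symmetric positive definite stand-ins for the mechanical and flow blocks and eigenvectors as placeholder bases; a real poroplasticity ROM would use POD bases from snapshots and DEIM for the nonlinear terms.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, nq = 80, 40                      # displacement / pressure DoFs (toy sizes)
A = rng.standard_normal((nu, nu))
K_uu = A @ A.T + nu * np.eye(nu)     # SPD stand-in for the stiffness block
B = rng.standard_normal((nq, nq))
K_pp = B @ B.T + nq * np.eye(nq)     # SPD stand-in for the flow block
K_up = rng.standard_normal((nu, nq)) # hydro-mechanical coupling block
K = np.block([[K_uu, K_up], [K_up.T, K_pp]])
b = rng.standard_normal(nu + nq)
z_fom = np.linalg.solve(K, b)        # full-order reference solve

# Field-wise reduced bases (placeholder orthonormal bases; a real ROM
# would use POD modes of displacement and pressure snapshots).
ru, rp = 20, 15
Vu = np.linalg.eigh(K_uu)[1][:, :ru]
Vp = np.linalg.eigh(K_pp)[1][:, :rp]
V = np.block([[Vu, np.zeros((nu, rp))],
              [np.zeros((nq, ru)), Vp]])

# Block-Galerkin reduced solve: (V^T K V) y = V^T b, then z ~ V y.
z_rom = V @ np.linalg.solve(V.T @ K @ V, V.T @ b)
```

By construction the reduced solution satisfies Galerkin orthogonality: the full residual K z_rom - b is orthogonal to the span of V, which is the defining property the reduced solve enforces.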
  • Item
    Finding Specific Industrial Objects in Point Clouds using Machine Learning and Procedural Scene Generation
    (University of Waterloo, 2025-01-06) Lopez Morales, Daniel; Haas, Carl; Narasimhan, Sriram
    In the era of Industry 4.0 and the rise of Digital Twins (DT), the demand for enriched point cloud data has grown significantly. Point clouds allow seamless integration into Building Information Modeling (BIM) workflows, offering deeper insights into structures and enhancing the value of documentation, analysis, and asset management processes. However, several persistent challenges limit the effectiveness of current point cloud methods in industrial settings. The first major challenge is the difficulty of identifying specific objects within point clouds. Finding and labeling individual objects in a complex 3D environment is technically demanding, and manually processing point clouds to locate specific objects is labor-intensive, time-consuming, and susceptible to human error. In large-scale industrial environments, the complexity of layouts and the volume of data make such manual methods impractical for efficient and accurate results. The second major challenge lies in the scarcity of industrial point cloud datasets needed to train machine learning-based segmentation networks. Automating point cloud enrichment through machine learning relies heavily on the availability of high-quality datasets specific to industrial applications. Unfortunately, comprehensive datasets of this kind are either unavailable or proprietary, creating a significant barrier to developing effective segmentation networks. Furthermore, the few existing datasets often lack flexibility, being limited to the areas that have been scanned. This rigidity, combined with the time-consuming process of manually segmenting data, slows the development and deployment of scalable machine-learning solutions for point cloud segmentation. These limitations highlight the need for more flexible and adaptive solutions to efficiently address object detection, asset tracking, and inventory management in dynamic industrial scenarios. 
This research addresses these challenges by developing open-access, class-balanced datasets specifically designed for 3D point cloud segmentation in industrial environments. The datasets integrate synthetic data with real-world industrial scans, offering a solution to the problem of imbalanced class distributions, which often hinders the accuracy of neural networks. Two methodologies for generating synthetic datasets were developed: one with random object placement, and a second based on a procedural generation pipeline with rules for object placement and for generating tube structures for industrial elements. The procedural pipeline fills the scene with objects of varied geometric features, making it possible to study which characteristics make a dataset realistic, and it provides a flexible method for dataset creation that can be adapted to different objects, point cloud scales, point densities, and noise levels. The resulting datasets improve the generalization capabilities of machine learning models, making them more robust in identifying and segmenting objects within industrial settings. The research thus comprises two parts: a methodology for efficiently and accurately identifying specific objects in point cloud scenes, and the two methodologies for creating open-access industrial datasets designed to train segmentation networks. The object-finding methodology is crucial for multiple applications, including object detection, pose estimation, and asset tracking. Traditional methods struggle with generalization, often failing to differentiate between unique objects and general classes. The proposed methodology for specific object finding utilizes a point transformer network for point cloud segmentation and a fully convolutional geometric features network that enhances geometric features with color. 
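The rule-based placement idea can be sketched compactly: sample object sizes and positions, reject placements whose footprints overlap an existing object, and sample labelled, noise-perturbed points from each accepted object. The box primitives, densities, and noise model below are assumptions for illustration; the actual pipeline also generates tube structures and richer geometry.

```python
import numpy as np

def place_objects(num, extent, size_range, rng, max_tries=200):
    """Rule-based placement: sampled boxes are accepted only if their
    footprints do not overlap any previously placed object."""
    placed = []
    while len(placed) < num:
        for _ in range(max_tries):
            size = rng.uniform(size_range[0], size_range[1], 3)   # w, l, h
            pos = rng.uniform(0, extent - size[:2].max(), 2)
            box = (pos[0], pos[1], pos[0] + size[0], pos[1] + size[1], size[2])
            if all(box[2] <= q[0] or q[2] <= box[0] or
                   box[3] <= q[1] or q[3] <= box[1] for q in placed):
                placed.append(box)
                break
        else:                       # could not place another object
            break
    return placed

def synthesize_cloud(boxes, pts_per_obj, noise, rng):
    """Sample labelled points from each box volume, plus Gaussian sensor noise."""
    pts, labels = [], []
    for lab, (x0, y0, x1, y1, h) in enumerate(boxes):
        p = rng.uniform([x0, y0, 0.0], [x1, y1, h], (pts_per_obj, 3))
        pts.append(p + rng.normal(0.0, noise, p.shape))
        labels.append(np.full(pts_per_obj, lab))
    return np.vstack(pts), np.concatenate(labels)

rng = np.random.default_rng(0)
boxes = place_objects(5, extent=10.0, size_range=(0.5, 1.5), rng=rng)
cloud, labels = synthesize_cloud(boxes, pts_per_obj=500, noise=0.01, rng=rng)
```

Because every point carries its generating object's label, per-class point counts (and hence class balance) can be controlled directly at generation time.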
A key innovation in this process is applying a color-based iterative closest point (ICP) algorithm to the output of the fully convolutional geometric features network. This enables precise matching of segmented objects against a point cloud template, ensuring accurate object identification.
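A simplified stand-in for colour-assisted ICP can make the idea concrete: the nearest-neighbour search scores candidate matches on position and colour jointly, and each iteration applies a closed-form (Kabsch) rigid update. This is not the exact formulation used in the thesis; the weight `w` and the synthetic template are assumptions for illustration.

```python
import numpy as np

def icp_color(src, dst, src_c, dst_c, w=1.0, iters=30):
    """Point-to-point ICP whose correspondence search scores position and
    colour jointly (weight w); each step is a closed-form Kabsch update."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src @ R.T + t
        a = np.hstack([cur, w * src_c])   # joint position+colour descriptor
        b = np.hstack([dst, w * dst_c])
        nn = np.argmin(((a[:, None] - b[None]) ** 2).sum(-1), axis=1)
        P, Q = cur, dst[nn]
        mp, mq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        Rk = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        R, t = Rk @ R, Rk @ t + (mq - Rk @ mp)
    return R, t

# Synthetic template and a rigidly transformed, colour-matched copy.
rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, (150, 3))
col = rng.uniform(0, 1, (150, 3))
ang = 0.25
Rt = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rt.T + np.array([0.1, -0.05, 0.0])
R, t = icp_color(src, dst, col, col)
err = np.linalg.norm(src @ R.T + t - dst) / np.linalg.norm(dst)
```

The colour term breaks geometric ambiguities: two points that are close in space but differently coloured are no longer confused during correspondence search.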
  • Item
    Land-to-Water Linkages: Nutrient Legacies and Water Quality Across Anthropogenic Landscapes
    (University of Waterloo, 2025-01-06) Byrnes, Danyka; Basu, Nandita
    An increasing population and the intensification of agriculture have driven rapid changes in land use and increases in excess nutrients in the environment. Globally, excess nutrients in inland and coastal waters have led to persistent problems of eutrophication, ecosystem degradation, hypoxia, and drinking water toxicity. Over the past few decades, policies have been put in place to mitigate this degradation of water quality. The existing paradigm of water quality management is based on decades of research finding a linear relationship between net nitrogen inputs to the landscape and stream nitrogen exports. For instance, in the U.S., in response to these nutrient problems, working groups have spent approximately a trillion dollars to improve water quality by upgrading wastewater treatment plants and implementing nutrient management plans to decrease watershed nitrogen and phosphorus inputs. Despite these concerted efforts, in many cases we have not seen marked improvements in water quality, and where water quality has improved, it is frequently only after decades of nutrient management. The lack of, or delay in, water quality improvement suggests that other drivers modulate the relationship between nutrient inputs and watershed exports. Indeed, watershed nutrient loads are not just a function of current-year nitrogen inputs but can also depend on the history of inputs to the watershed. However, we still have a limited understanding of how nutrient inputs relate to exports, and of the extent to which accumulated stores of nitrogen and phosphorus influence this relationship. The central theme of my research is an exploration of the history of anthropogenic nutrient use and the relationship between nutrient inputs and the response in water quality. 
Specifically, I have focused on the role of current nutrient inputs versus historical nutrient use in shaping water quality at the watershed scale, as well as the various landscape and climate controls that can mediate responses to changes in management. My research objectives are to (1) develop a multi-decadal mass balance of nitrogen and phosphorus at the sub-watershed scale across the contiguous U.S. in order to investigate (2) the relationship between watershed nitrogen inputs and export and the drivers of changes in watershed nitrogen export, (3) the magnitude, spatial distribution, and drivers of nitrogen retention and legacy stores, and (4) the use and management of phosphorus in agricultural landscapes in the context of both food security and environmental health. I began by developing county-scale nitrogen and phosphorus surplus datasets, TREND-N and TREND-P, for the contiguous U.S., with surplus defined as the difference between anthropogenic inputs (fertilizer, manure, domestic inputs, biological nitrogen fixation, and atmospheric deposition) and non-hydrological export (crop and pasture uptake). In Chapter 2, I present updates to the previously published TREND-N county-scale nitrogen mass balance dataset, improving the crop and pasture uptake and livestock excretion methods. In Chapter 3, I develop a new county-scale phosphorus surplus dataset using similar methods. These datasets were then downscaled to 250 m gridded datasets, known as gTREND-Nitrogen and gTREND-Phosphorus, a step led by my collaborator Shuyu Chang. These novel datasets serve as the foundation for the subsequent chapters. Next, in Chapter 4, I explored the relationship between net nitrogen inputs and nitrogen export for over 400 watersheds across the U.S. 
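The surplus bookkeeping described above is a simple mass balance. The sketch below shows the arithmetic for one hypothetical county-year; the field names and values are illustrative, not the actual TREND-N schema.

```python
def n_surplus(rec):
    """Annual nitrogen surplus (kg N/ha/yr): anthropogenic inputs minus
    non-hydrological export. Field names are illustrative, not the
    actual TREND-N schema."""
    inputs = (rec["fertilizer"] + rec["manure"] + rec["domestic"]
              + rec["bnf"] + rec["deposition"])
    uptake = rec["crop_uptake"] + rec["pasture_uptake"]
    return inputs - uptake

# One hypothetical county-year (all values invented for illustration).
county = {"fertilizer": 45.0, "manure": 20.0, "domestic": 3.0,
          "bnf": 15.0, "deposition": 8.0,
          "crop_uptake": 60.0, "pasture_uptake": 10.0}
surplus = n_surplus(county)   # 91 - 70 = 21 kg N/ha/yr left on the landscape
```

Summing this quantity over years gives the cumulative surplus used later as a proxy for legacy nutrient stores.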
I used the newly developed nitrogen surplus dataset to understand how watershed-scale nitrogen surplus magnitudes and exports change over time and examine how the relationships are influenced by both natural and anthropogenic controls within watersheds. To achieve this, I used a set of 492 watersheds with nitrogen input and export data spanning from 1990 to 2017. We found that 284 watersheds had a significant (p<0.1) increasing or decreasing trend in both nitrogen surplus and nitrogen load. Of these watersheds, we identified 62 where both nitrogen surplus and export have been significantly increasing over the last two decades. These input-driven watersheds are characterized by high livestock density, agricultural area, and tile drainage. In contrast, nitrogen surplus and export have been decreasing in 127 "bright spot" watersheds, characterized by high population density and urban land use. Nitrogen surplus is also decreasing in 60 "transitioning" watersheds, but export is increasing as nitrogen surplus decreases. We argue that these watersheds are transitioning from agriculture to more urban areas, such that fertilizer inputs have decreased, but the higher nitrogen export is driven by legacy nitrogen stores. Finally, we found 35 watersheds demonstrating a delayed response, with nitrogen export decreasing despite an increase in nitrogen surplus. Climate appears to be the driver of response in these watersheds, with aridity likely driving lower nitrogen export, despite increasing inputs. The four typologies of nitrogen inputs and export relationships suggest that watersheds can act as filters and modulate the movement of nitrogen. Our results provide insights into the complex dynamics of nitrogen surplus and export relationships, as well as how the landscape, climate, and legacy nitrogen can influence these relationships. 
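The four typologies follow directly from the signs of the two trends. The helper below uses an ordinary least-squares slope as a stand-in for the trend test actually applied, and takes the significance flags as inputs rather than computing them (the p < 0.1 criterion would be applied upstream).

```python
def trend(y):
    """Ordinary least-squares slope over time steps 0..n-1 (a stand-in
    for the trend test applied to the watershed time series)."""
    n = len(y)
    tm, ym = (n - 1) / 2, sum(y) / n
    num = sum((i - tm) * (yi - ym) for i, yi in enumerate(y))
    den = sum((i - tm) ** 2 for i in range(n))
    return num / den

def typology(surplus_trend, load_trend, sig_surplus, sig_load):
    """Map significant trend directions to the four watershed typologies."""
    if not (sig_surplus and sig_load):
        return "no significant trend"
    if surplus_trend > 0 and load_trend > 0:
        return "input-driven"
    if surplus_trend < 0 and load_trend < 0:
        return "bright spot"
    if surplus_trend < 0 and load_trend > 0:
        return "transitioning"
    return "delayed response"
```

For example, a watershed whose surplus series declines while its load series rises is classed as "transitioning", matching the legacy-driven behaviour described above.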
In Chapter 4, I analyzed relationships between changes in nitrogen inputs and export to understand what drives changes in watershed export, finding that legacy stores may be modulating the watershed response to changing net nitrogen inputs. However, we have limited knowledge of the magnitude and spatial distribution of legacy stores across North America. Therefore, in Chapter 5, we quantified how much nitrogen retention (the mass of nitrogen stored in legacy pools plus the nitrogen lost to denitrification) has accumulated in watersheds, and where it may be accumulating. To achieve this, we used existing datasets and machine learning algorithms to calculate the mass of ‘retained’ nitrogen in the landscape, defined as the nitrogen stored in the soil organic nitrogen pool or the groundwater pool, or lost through denitrification. Specifically, we built a random forest modeling framework, trained on watershed nitrogen surplus and its components, loads, and characteristics, to predict nitrogen loads at the HUC8 scale across the U.S. We then calculated retention for each HUC8 watershed as the difference between nitrogen surplus and predicted loads, and found that nitrogen retention is highest in the Midwestern and Eastern U.S. because of low exports in regions with high agricultural inputs or high population density. Next, we used a data-driven approach to estimate legacy stores by allocating the retained nitrogen mass to its legacy pools. We partitioned nitrogen retention in the Upper Mississippi region HUC8 watersheds into the mass stored in the groundwater pool, the mass stored in soil organic nitrogen pools, and the mass lost to denitrification. We found that, on average, 42% of the mass is stored in the soil organic nitrogen pool, 16.5% is stored in the groundwater pool, and 40% is lost to denitrification. While these two chapters focused on nitrogen, my final chapter shifts to explore phosphorus use in agricultural systems. 
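Retention follows from subtracting predicted loads from surplus. The sketch below uses a k-nearest-neighbour regressor on invented watershed attributes as a lightweight stand-in for the random forest described above; the attributes, the synthetic load function, and the sizes are all assumptions for illustration.

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=3):
    """k-nearest-neighbour regressor: a lightweight stand-in for the
    random forest that predicts watershed N loads from attributes."""
    d = ((X[:, None, :] - X_train[None]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

rng = np.random.default_rng(0)
# Invented watershed attributes: [N surplus, agricultural fraction, runoff].
X_train = rng.uniform(0, 1, (200, 3))
y_train = 0.3 * X_train[:, 0] * X_train[:, 2]   # synthetic "observed" N loads
X_new = rng.uniform(0, 1, (10, 3))              # ungauged watersheds

load_hat = knn_predict(X_train, y_train, X_new)
retention = X_new[:, 0] - load_hat   # retention = surplus - predicted load
```

A subsequent allocation step would then split each watershed's retention among soil organic nitrogen, groundwater, and denitrification, as described in the text.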
In my final chapter, we used the new gridded phosphorus surplus and components dataset to explore current and historical agricultural phosphorus use and management within the context of both food security and environmental health. To characterize the extent of phosphorus depletion and excess, we employed indicators such as annual and cumulative phosphorus surplus and phosphorus use efficiency (PUE). We found that the evolution of agricultural phosphorus management is shaped by changing fertilizer management, the proliferation of concentrated animal operations, climate, and the landscape's memory of past phosphorus use. We further integrated cumulative phosphorus surplus and PUE into a framework for quantifying phosphorus sustainability in intensively managed landscapes. We found that in the 1980s, much of the agricultural land was undergoing ‘intensification,’ with positive and increasing cumulative stores because phosphorus inputs exceeded crop uptake (PUE < 1). By 2017, 29.5% of the agricultural land was undergoing ‘recovery,’ with positive cumulative phosphorus stores that were being depleted through improved phosphorus management (PUE > 1). However, 70% of the agricultural area in the U.S. is still undergoing ‘intensification,’ particularly in areas that receive more of their inputs from livestock manure, pointing to the need to treat manure as a resource rather than, as at present, a waste product. By using these novel datasets, we have been able to explore nutrient use across space and time and its impact on food security and environmental outcomes. I have made significant contributions toward expanding the discussion of nutrient use and fate, understanding the magnitude and distribution of cumulative net nutrient stores in the landscape, and showing how intrinsic watershed properties, climate, land management, and historical nutrient use can modulate the relationship between inputs and export. 
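The cumulative-surplus/PUE framework reduces to a simple two-way classification. A minimal sketch follows; the boundary handling and the catch-all ‘other’ bucket are illustrative rather than the exact framework definition.

```python
def pue(p_inputs, p_uptake):
    """Phosphorus use efficiency: crop uptake divided by P inputs."""
    return p_uptake / p_inputs

def classify(cum_surplus, pue_value):
    """Trajectory classes from the cumulative-surplus/PUE framework."""
    if cum_surplus > 0 and pue_value < 1:
        return "intensification"   # legacy P stores still growing
    if cum_surplus > 0 and pue_value > 1:
        return "recovery"          # legacy P stores being drawn down
    return "other"                 # regimes not distinguished in this sketch

phase = classify(500.0, pue(30.0, 24.0))   # PUE = 0.8 -> "intensification"
```

A grid cell with accumulated stores and PUE below one is still intensifying, while the same stores with PUE above one indicate recovery through drawdown.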
Overall, my findings underscore the importance of nuanced, place-based, and context-dependent nutrient management strategies, with a focus on manure management, to address the diverse challenges of different agricultural systems and prevent unintended environmental consequences.