Search Results
-
101. [Article] Intrastate Armed Conflict and Peacebuilding in Nepal: An Assessment of the Political and Economic Agency of Women
The proliferation of intrastate armed conflicts has been one of the significant threats to global peace, security, and governance. Such conflicts may trigger resource exploitation, environmental degradation, ...
- Title:
- Intrastate Armed Conflict and Peacebuilding in Nepal: An Assessment of the Political and Economic Agency of Women
- Author:
- Luintel, Gyanu Gautam
- Year:
- 2016
The proliferation of intrastate armed conflicts has been one of the significant threats to global peace, security, and governance. Such conflicts may trigger resource exploitation, environmental degradation, human rights violations, human and drug trafficking, and terrorism. Women may suffer disproportionately from armed conflicts due to their unequal social status. While they endure the same effects of the conflict as the rest of the population, they also become targets of gender-based violence. However, women can also be active agents of armed conflict and perpetrate violence. Therefore, political and scientific communities at the national and international levels are now increasingly interested in developing a better understanding of the role of women in, and the effects on them from, armed conflict. A better understanding of the roles of women in conflict would help to prevent conflicts and promote peace. Following in-depth interviews with civil society members who witnessed the decade-long armed conflict between the Communist Party of Nepal-Maoist (CPN-M) and the Government of Nepal (GoN) (1996-2006) and the peacebuilding process thereafter, I assess the political and economic agency of women, particularly in terms of their role in, and the impact on them from, the armed conflict and peacebuilding processes. My research revealed that a large number of women, particularly those from rural areas, members of socially oppressed groups, the poor, and those of productive age (i.e., 14-45 years), participated in the armed conflict as combatants, political cadres, motivators, and members of the cultural troupe in the CPN-M, despite deeply entrenched patriarchal values in Nepali society. The GoN also recruited women in combatant roles who took part in the armed conflict. Women joined the armed conflict voluntarily, involuntarily, or as a survival strategy. Women who did not participate directly in the armed conflict were affected in many different ways.
They were required to perform multiple tasks and unconventional roles at both household and community levels, particularly due to the absence or shortage of men in rural areas, as men were killed, disappeared, or displaced. At the household level, women performed the role of household head, both politically and economically. However, in most cases the economic agency of women was negatively affected. At the community level, women's roles as peacebuilders and as members of community-based organizations and civil society organizations either increased or decreased depending on the situation. Despite the active participation of women in formal and informal peacebuilding processes at different levels, they were excluded from most of the high-level formal peace processes. However, they were able to address some of the women's issues (e.g., access to parental property, inclusion in the state governance mechanism) at the constitutional level. The armed conflict changed gender relations to some extent, and some women acquired new status, skills, and power by assuming new responsibilities. However, these changes were gained at the cost of grave violations of human rights and gender-based violence committed by the warring sides. Also, the gains made by women were short-lived, and their situation often returned to the status quo in the post-conflict period.
-
102. [Article] Reefscapes of Fear: Predation Risk and Reef Heterogeneity Interact to Shape Herbivore Foraging Behaviour
Predators can exert strong direct and indirect effects on ecological communities by intimidating their prey. The nature of predation risk effects is often context dependent, but in some ecosystems these ...
- Title:
- Reefscapes of Fear: Predation Risk and Reef Heterogeneity Interact to Shape Herbivore Foraging Behaviour
- Author:
- Catano, Laura B., Rojas, Maria C., Malossi, Ryan J., Peters, Joseph R., Heithaus, Michael R., Fourqurean, James W., Burkepile, Deron E.
- Year:
- 2016
Predators can exert strong direct and indirect effects on ecological communities by intimidating their prey. The nature of predation risk effects is often context dependent, but in some ecosystems these contingencies are often overlooked. Risk effects are often not uniform across landscapes or among species. Indeed, they can vary widely across gradients of habitat complexity and with different prey escape tactics. These context dependencies may be especially important for ecosystems such as coral reefs that vary widely in habitat complexity and have species-rich predator and prey communities. With field experiments using predator decoys of the black grouper (Mycteroperca bonaci), we investigated how reef complexity interacts with predation risk to affect the foraging behaviour and herbivory rates of large herbivorous fishes (e.g. parrotfishes and surgeonfishes) across four coral reefs in the Florida Keys (USA). In both high- and low-complexity areas of the reef, we measured how herbivory changed with increasing distance from the predator decoy to examine how herbivorous fishes reconcile the conflicting demands of avoiding predation vs. foraging within a reefscape context. We show that with increasing risk, herbivorous fishes consumed dramatically less food (ca. 90% less) but fed at a faster rate when they did feed (ca. 26% faster). Furthermore, we show that fishes foraging closest to the predator decoy were 40% smaller than those that foraged at farther distances. Thus, smaller individuals showed a muted response to predation risk compared to their larger counterparts, potentially due to their decreased risk of predation or lower reproductive value (i.e. the asset protection principle). Habitat heterogeneity mediated risk effects differently for different species of herbivores, with predation risk more strongly suppressing herbivore feeding in more complex areas and for individuals at higher risk of predation.
Predators appear to create a reefscape of fear that changes the size structure of herbivores towards smaller individuals, increases individual feeding rates, but suppresses overall amounts of primary producers consumed, potentially altering patterns of herbivory, an ecosystem process critical for healthy coral reefs.
-
103. [Article] Towards Effective Design Treatment for Right Turns at Intersections with Bicycle Traffic
The overall goal of this research was to quantify the safety performance of alternative traffic control strategies to mitigate right-turning vehicle-bicycle crashes at signalized intersections in Oregon. ...
- Title:
- Towards Effective Design Treatment for Right Turns at Intersections with Bicycle Traffic
- Author:
- Hurwitz, David, Jannat, Mafruhatul, Warner, Jennifer, Monsere, Christopher M., Razmpa, Ali
- Year:
- 2015
The overall goal of this research was to quantify the safety performance of alternative traffic control strategies to mitigate right-turning vehicle-bicycle crashes at signalized intersections in Oregon. The ultimate aim was to provide useful design guidance to potentially mitigate these collision types at the critical intersection configurations. This report includes a comprehensive review of more than 150 scientific and technical articles that relate to bicycle-motor vehicle crashes. A total of 504 right-hook crashes were identified from vehicle path information in the Oregon crash data from 2007-2011, then mapped and reviewed in detail to identify the frequency and severity of crashes by intersection lane configuration and traffic control. Based on these efforts, a two-stage experiment was developed in the Oregon State University (OSU) high-fidelity driving simulator to investigate the causal factors of right-hook crashes at signalized intersections with a striped bike lane and no right-turn lane, and then to identify and evaluate alternative design treatments that could mitigate the occurrence of right-hook crashes. Experiment 1 investigated motorist- and environment-related causal factors of right-hook crashes using three different motorist performance measures: (1) visual attention, (2) situational awareness (SA), and (3) crash avoidance behavior. Data were collected from 51 participants (30 male and 21 female) turning right 820 times in 21 different experimental scenarios. It was determined that the worst-case right-hook scenario occurred when a bicycle was approaching the intersection at a higher speed (16 mph) and positioned in the blind zone of the motorist. In crash and near-crash situations (measured by time-to-collision), the most common cause was a failure of the driver to actively search for the adjacent bicyclist (situational awareness level 1), although failures were also determined to occur due to failures of projection (i.e., incorrectly assuming that the bicycle would yield or that there was enough time to turn in front of the bicycle). Elements of driver performance and gap acceptance collected in the first-stage simulator experiment were field validated to provide additional confidence in the findings. The research reviewed 144 hours of video and identified 43 conflicts where the time-to-collision (TTC) measured less than 5 seconds. When field observations of scenarios most similar to those in the simulator were isolated, the analysis indicated that the distribution of the TTC values observed in the simulator was consistent with those observed in the field. Experiment 2 evaluated several possible design treatments (specifically signage, pavement markings, curb radii, and protected intersection designs) based on the visual attention of motorists, their crash avoidance behavior, and the severity of the observed crashes. Data were collected from 28 participants (18 male and 10 female) turning right 596 times across 22 scenarios. The resulting analysis of the driver performance indicators suggests that while various driver performance metrics can be measured robustly, and all of the treatments had some positive effect on measured driver performance, it is not yet clear how to map the magnitudes of the differences to expected crash outcomes. Additional work is recommended to address the limitations of this study and to further consider the potential effects of the right-hook crash mitigation strategies from this research.
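The conflict screening above hinges on a simple kinematic quantity: time-to-collision, the distance gap divided by the closing speed. A minimal sketch under a constant-speed assumption (the function name and the numbers are illustrative, not taken from the report):

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Time-to-collision in seconds: distance gap divided by closing speed.

    Returns infinity when the gap is not closing (speed <= 0).
    """
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

# Hypothetical event: a right-turning vehicle 18 m from the conflict
# point, closing on the bicyclist's path at 4 m/s.
ttc = time_to_collision(18.0, 4.0)

# The study flagged conflicts where TTC measured less than 5 seconds.
conflict = ttc < 5.0
```

In field practice the closing speed would be estimated frame-to-frame from video, so TTC is recomputed continuously rather than once per event.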
-
104. [Article] Application of Numerical Modeling to Study River Dynamics: Hydro-Geomorphological Evolution Due to Extreme Events in the Sandy River, Oregon
The Sandy River (OR) is a coastal tributary of the Columbia River and drains a steep, 1316-square-kilometer watershed on the western side of Mount Hood (elevation range 3 m to 1800 m). The ...
- Title:
- Application of Numerical Modeling to Study River Dynamics: Hydro-Geomorphological Evolution Due to Extreme Events in the Sandy River, Oregon
- Author:
- Muhammad, Sarkawt Hamarahim
- Year:
- 2017
The Sandy River (OR) is a coastal tributary of the Columbia River and drains a steep, 1316-square-kilometer watershed on the western side of Mount Hood (elevation range 3 m to 1800 m). The system exhibits highly variable flow: its average discharge is ~40 m3/s, and the highest recorded discharge was 1739 m3/s in 1964. In this study I model the geomorphic sensitivity of an 1800 m reach located downstream of the former Marmot Dam, which was removed in 2007. The hydro-geomorphic response to major floods has implications for system management and aquatic life. Studying hydro-geomorphic change requires a systematic approach. Here, I define flows and flood hydrographs for specified return intervals based on the observed hydrologic record, and then examine potential hydro-geomorphic changes using a numerical model. A Pearson Type III distribution is used to calculate flood magnitudes for the 100-, 75-, 50-, 25-, 10-, and 2-year return periods. Extreme event hydrographs are derived by fitting derived and observed flood hydrographs to the gamma distribution curve. Sediment transport and geomorphology are then modeled numerically with Nays2DH, a solver that is part of the iRIC software. Because the model is computationally intensive, I model the domain at five different spatial grid resolutions (1.5 m, 2 m, 3 m, 4 m, and 5 m) to find a proper grid resolution. We choose 4 m as the optimum grid resolution, based on the convergence of model results. The model is run with the extreme event hydrographs for the six return periods above. For result visualization and analysis, we focus on flow properties and bed elevation at peak flow and at the end of each event. For both times for each event, important flow and sediment transport parameters are visualized for the entire domain in plan form and at eight cross-sections at 200 m intervals. Finally, we divide the geomorphic response into areas of erosion and deposition.
Linear regression analyses of mean erosion and deposition values at peak flow for all extreme events yield R2 values of 0.981 for erosion and 0.986 for deposition. The mean erosion and deposition depth at the end of the events is modeled by nonlinear regression, with correlation coefficients of 0.965 for erosion and 0.998 for deposition. The regression models provide a direct understanding of the impacts of different floods on the geomorphic response of the river domain. Examination of the model as a whole suggests that the amount of erosion and deposition in the bed and banks is a function of channel geometry, bank and bed geology, and riparian area condition, and strongly depends on the amount of flow through the channel.
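The flood-frequency step can be sketched with `scipy.stats.pearson3`: fit the distribution to annual peak discharges, then read off quantiles at the study's return periods. The peak series below is invented for illustration; the thesis fit the observed Sandy River record, and its exact fitting procedure may differ:

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (m^3/s); the thesis used the
# observed Sandy River gauge record instead.
peaks = np.array([310., 450., 520., 690., 880., 400., 1739., 620., 530., 760.,
                  340., 980., 410., 560., 700., 830., 390., 470., 1100., 640.])

# Fit a Pearson Type III distribution (shape parameter = skew).
skew, loc, scale = stats.pearson3.fit(peaks)

# Discharge quantiles for the return periods used in the study:
# a T-year event has annual non-exceedance probability 1 - 1/T.
return_periods = [2, 10, 25, 50, 75, 100]
quantiles = {T: stats.pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)
             for T in return_periods}
```

Each quantile would then seed a design hydrograph (here, by fitting a gamma-shaped curve scaled to that peak) before being routed through the 2D model.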
-
105. [Article] Systems Thinking in the Forest Service: a Framework to Guide Practical Application for Social-Ecological Management in the Enterprise Program
The U.S. Forest Service (USFS) Enterprise Program (EP), which provides fee-for-service consulting services to the USFS, is interested in integrating systems thinking into its service offerings. Despite ...
- Title:
- Systems Thinking in the Forest Service: a Framework to Guide Practical Application for Social-Ecological Management in the Enterprise Program
- Author:
- Kmon, Megan Kathleen
- Year:
- 2016
The U.S. Forest Service (USFS) Enterprise Program (EP), which provides fee-for-service consulting services to the USFS, is interested in integrating systems thinking into its service offerings. Despite there being several excellent sources on the range and diversity of systems thinking, no single framework exists that thoroughly yet concisely outlines what systems thinking is, along with its deep history, theoretical tenets, and soft and hard approaches. This thesis is an attempt to create such a framework, aimed specifically at practical application in a land management agency, through a literature synthesis combined with original analysis. The usefulness of the framework is then tested using three case studies within the EP and the agency as a whole. The framework highlights several important aspects of systems thinking, both generally and related specifically to social-ecological management. First, systems thinking is the transdisciplinary study of complex phenomena from a holistic, rather than reductionist, perspective. The world can be viewed as a massive set of embedded systems -- elements with relations that lead to nonlinear behavior -- making the role of the observer essential in identifying scales of interest and interactions amongst them. Second, the deep history of holistic thinking suggests that its modern scientific study could benefit from exploring the East's long-standing cultural and spiritual approaches to holism through cognitive unity and oneness with mankind and nature. Third, categorizations of systems approaches as "soft" versus "hard" are helpful but can distract us from the ultimate goal of systems thinking, which is to understand the various tools in the systems thinking toolbox so as to apply them critically and creatively to make a meaningful difference in the world.
Fourth, I see the soft systems approaches as having a distinct systems thinking orientation and the hard systems approaches as overlapping substantially with operations research, the close cousin of systems thinking. Fifth, I identify a spectrum of complexity, contending that systems thinking tends to be concerned with what I call subjectively and computationally complex systems, as well as complex adaptive systems, leaving simple systems for other approaches. Finally, I contend that it is the soft systems approaches and the two theoretical pillars of hierarchy theory and cooperation theory that will aid wicked social-ecological problem solving the most. The framework is applied to three case studies. Examination of the EP reorganization using a hard systems approach revealed two critical high-level functions that were absent in the current structure, paving the way for new designs that could take those functions into account. Analysis of an initiative to increase citizen recreation on USFS lands showed that a systems approach had been improperly applied, and how applying a soft approach at the outset could have systematically framed the problem and offered unique normative insights for giving voice to relevant non-agency stakeholders as well as to nature and future generations. And viewing the perennial problem of wildfire management through the lens of cooperation theory revealed how USFS leadership could take a more active role in promoting the long-term outlook, durable relationships, and reciprocal behaviors that are required for cooperative improvement to take place.
As environmental narratives worsen and the need for transitioning towards sustainable ways of living heightens, systems thinking offers ever-increasing value to resource managers for its ability to deal with the many perspectives and normative content that underlie wicked problems and to help to illuminate potential consequences of system interventions given the interplay of complex structural dynamics across space and time.
-
106. [Article] Novel Methods for Learning and Adaptation in Chemical Reaction Networks
State-of-the-art biochemical systems for medical applications and chemical computing are application-specific and cannot be re-programmed or trained once fabricated. The implementation of adaptive biochemical ...
- Title:
- Novel Methods for Learning and Adaptation in Chemical Reaction Networks
- Author:
- Banda, Peter
- Year:
- 2015
State-of-the-art biochemical systems for medical applications and chemical computing are application-specific and cannot be re-programmed or trained once fabricated. The implementation of adaptive biochemical systems that would offer flexibility through programmability and autonomous adaptation faces major challenges because of the large number of required chemical species as well as the timing-sensitive feedback loops required for learning. Currently, biochemistry lacks a systems vision of how the user-level programming interface and abstraction, with a subsequent translation to chemistry, should look. By developing adaptation in chemistry, we could replace multiple hard-wired systems with a single programmable template that can be (re)trained to match a desired input-output profile, benefiting smart drug delivery, pattern recognition, and chemical computing. I aimed to address these challenges by proposing several approaches to learning and adaptation in Chemical Reaction Networks (CRNs), a type of simulated chemistry in which species are unstructured, i.e., they are identified by symbols rather than molecular structure, and their dynamics, or concentration evolution, are driven by reactions and reaction rates that follow mass-action and Michaelis-Menten kinetics. Several CRN and experimental DNA-based models of neural networks exist. However, these models successfully implement only the forward pass, i.e., the input-weight integration part of a perceptron model. Learning is delegated to a non-chemical system that computes the weights before converting them to molecular concentrations. Autonomous learning, i.e., learning implemented fully inside chemistry, has been absent from both theoretical and experimental research. The research in this thesis offers the first constructive evidence that learning in CRNs is, in fact, possible.
I have introduced the original concept of a chemical binary perceptron that can learn all 14 linearly separable logic functions and is robust to the perturbation of rate constants. That shows learning is universal and substrate-free. To simplify the model, I later proposed and applied "asymmetric" chemical arithmetic, providing a compact solution for representing negative numbers in chemistry. To tackle more difficult tasks and to serve more complicated biochemical applications, I introduced several key modular building blocks, each addressing certain aspects of chemical information processing and learning. These parts organically combined into gradually more complex systems. First, instead of simple static Boolean functions, I tackled analog time-series learning and signal processing by modeling an analog chemical perceptron. To store past input concentrations as a sliding window, I implemented a chemical delay line, which feeds the values to the underlying chemical perceptron. That allows the system to learn, e.g., the linear moving average and, to some degree, predict a highly nonlinear NARMA benchmark series. Another important contribution to the area of chemical learning, which I have helped to shape, is the composability of perceptrons into larger multi-compartment networks. Each compartment hosts a single chemical perceptron, and compartments communicate with each other through a channel-mediated exchange of molecular species. Besides the feedforward pass, I implemented chemical error backpropagation analogous to that of feedforward neural networks. Also, after applying mass-action kinetics to the catalytic reactions, I succeeded in systematically analyzing the ODEs of my models and deriving closed exact and approximate formulas for both the input-weight integration and the weight update with learning rate annealing.
I proved mathematically that the formulas of certain chemical perceptrons equal those of the formal linear and sigmoid neurons, essentially bridging neural networks and adaptive CRNs. For all my models, the basic methodology was to first design species and reactions, and then set the rate constants either "empirically" by hand, automatically by a standard genetic algorithm (GA), or analytically if possible. I performed all simulations in my COEL framework, which is the first cloud-based chemistry modeling tool, accessible at http://coel-sim.org. I minimized the number of required molecular species and reactions to make wet chemical implementation possible. I applied an automated mapping technique, Soloveichik's CRN-to-DNA-strand-displacement transformation, to the chemical linear perceptron and the manual signalling delay line and obtained their fully DNA-strand-specified implementations. As an alternative DNA-based substrate, I also mapped these two models to deoxyribozyme-mediated cleavage reactions, reducing the size of the displacement variant to a third. Both DNA-based incarnations could directly serve as blueprints for wet biochemicals. Besides an actual synthesis of my models and conducting an experiment in a biochemical laboratory, the most promising future work is to employ so-called reservoir computing (RC), a novel machine learning method based on recurrent neural networks. The RC approach is relevant because, for time-series prediction, it is clearly superior to classical recurrent networks. It can also be implemented in various ways, such as in electrical circuits or in physical systems such as a colony of Escherichia coli or water. RC's loose structural assumptions therefore suggest that it could be expressed in a chemical form as well. This could further enhance the expressivity and capabilities of chemically embedded learning.
My chemical learning systems may have applications in the area of medical diagnosis and smart medication, e.g., concentration signal processing and monitoring, and the detection of harmful species, such as chemicals produced by cancer cells in a host (cancer miRNAs), or the detection of a severe event, defined as a linear or nonlinear temporal concentration pattern. My approach could replace hard-coded solutions and would allow one to specify, train, and reuse chemical systems without redesigning them. With time-series integration, biochemical computers could keep a record of changing biological systems and act as diagnostic aids and tools in preventative and highly personalized medicine.
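Under mass-action kinetics, each reaction contributes a rate proportional to the product of its reactant concentrations, so a CRN reduces to a system of ODEs. The toy network below (not one of the thesis's perceptron constructions) illustrates the idea: a catalytic production reaction balanced against decay settles at a steady state proportional to the product of the two "input" concentrations, the same mechanism that underlies chemical input-weight integration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A toy mass-action CRN (illustrative only):
#   X + W -> X + W + Y   (catalytic production, rate k1*[X]*[W])
#   Y -> (nothing)       (decay, rate k2*[Y])
k1, k2 = 1.0, 0.5

def crn(t, s):
    x, w, y = s
    prod = k1 * x * w              # mass-action: rate = product of reactant conc.
    return [0.0, 0.0, prod - k2 * y]  # X and W are catalysts, so unchanged

# Initial concentrations: [X] = 2.0 (an "input"), [W] = 0.3 (a "weight"), [Y] = 0
sol = solve_ivp(crn, (0.0, 50.0), [2.0, 0.3, 0.0], rtol=1e-8)

# Steady state of [Y] is k1*[X]*[W]/k2, i.e. the product x*w rescaled.
y_ss = sol.y[2, -1]
```

Reading [Y] at steady state thus yields a scaled multiplication of two concentrations, which is why sums of such motifs can encode a perceptron's weighted sum.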
-
107. [Article] Study of Prestige and Resource Control Using Fish Remains from Cathlapotle, a Plankhouse Village on the Lower Columbia River
Social inequality is a trademark of Northwest Coast native societies, and the relationship between social prestige and resource control, particularly resource ownership, is an important research issue ...
- Title:
- Study of Prestige and Resource Control Using Fish Remains from Cathlapotle, a Plankhouse Village on the Lower Columbia River
- Author:
- Rosenberg, J. Shoshana
- Year:
- 2015
Social inequality is a trademark of Northwest Coast native societies, and the relationship between social prestige and resource control, particularly resource ownership, is an important research issue on the Northwest Coast. Faunal remains are one potential but as yet underutilized path for examining this relationship. My thesis work takes on this approach through the analysis of fish remains from the Cathlapotle archaeological site (45CL1). Cathlapotle is a large Chinookan village site located on the Lower Columbia River that was extensively excavated in the 1990s. Previous work has established prestige distinctions between houses and house compartments, making it possible to examine the relationship between prestige and the spatial distribution of fish remains. In this study, I examine whether having high prestige afforded its bearers greater access to preferred fish, utilizing comparisons of fish remains at two different levels of social organization, between and within households, to determine which social mechanisms could account for potential differences in access to fish resources. Differential access to these resources within the village could have occurred through household-level ownership of harvesting sites or control over the post-harvesting distribution of food by certain individuals. Previous work in this region on the relationship between faunal remains and prestige has relied heavily on ethnohistoric sources to determine the relative value of taxa. These sources do not provide adequate data to make detailed comparisons between all of the taxa encountered at archaeological sites, so in this study I utilize optimal foraging theory as an alternative means of determining which fish taxa were preferred. Optimal foraging theory provides a universal, quantitative analytical rule for ranking fish that I was able to apply to all of the taxa encountered at Cathlapotle. 
Given these rankings, which are based primarily on size, I examine the degree to which relative prestige designations of two households (Houses 1 and 4) and compartments within one of those households (House 1) are reflected in the spatial distribution of fish remains. I also offer a new method for quantifying sturgeon that utilizes specimen weight to account for differential fragmentation rates while still allowing for sturgeon abundance to be compared to the abundances of other taxa that have been quantified by number of identified specimens (NISP). Based on remains recovered from 1/4" mesh screens, comparisons between compartments within House 1 indicate that the chief and possibly other elite members of House 1 likely had some control over the distribution of fish resources within their household, taking more of the preferred sturgeon and salmon, particularly more chinook salmon, for themselves. Comparisons between households provide little evidence to support household-based ownership of fishing sites. A greater abundance of chinook salmon in the higher prestige House 1 may indicate ownership of fishing platforms at major chinook fisheries such as Willamette Falls or Cascades Rapids, but other explanations for this difference between households are possible. Analyses of a limited number of bulk samples, which were included in the study in order to examine utilization of very small fishes, provided insufficient data to allow for meaningful intrasite comparisons. These data indicate that the inhabitants of Cathlapotle were exploiting a broad fish subsistence base that included large numbers of eulachon and stickleback in addition to the larger fishes. This study provides a promising approach for examining prestige on the Northwest Coast and expanding our understanding of the dynamics between social inequality and resource access and control.
-
108. [Article] Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest
Investment in Research and Development (R&D) is necessary for innovation, allowing an organization to maintain a competitive edge. The U.S. Federal Government invests billions of dollars, primarily in ...
- Title:
- Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest
- Author:
- Estep, Judith
- Year:
- 2017
Investment in Research and Development (R&D) is necessary for innovation, allowing an organization to maintain a competitive edge. The U.S. Federal Government invests billions of dollars, primarily in basic research technologies, to help fill the pipeline for other organizations to take the technology into commercialization. However, it is not just about investing in innovation; it is about converting that research into application. A cursory review of research proposal evaluation criteria suggests that little to no emphasis is placed on the transfer of research results. This effort is motivated by the need to move research into application. One segment that is facing technology challenges is the energy sector. Historically, the electric grid has been stable and predictable; therefore, there were no immediate drivers to innovate. However, an aging infrastructure, the integration of renewable energy, and aggressive energy efficiency targets are motivating the need for research and for putting promising results into application. Many technologies exist or are in development, but the rate at which they are being adopted is slow. The goal of this research is to develop a decision model that can be used to identify the technology transfer potential of a research proposal. An organization can use the model to select the proposals whose research outcomes are more likely to move into application. The model begins to close the chasm between research and application -- otherwise known as the "valley of death." A comprehensive literature review was conducted to understand when the idea of technology application or transfer should begin. Next, the attributes that are necessary for successful technology transfer were identified. Successful technology transfer hinges on a productive relationship between the researchers and the technology recipient.
A hierarchical decision model, along with desirability curves, was used to understand the complexities of the researcher-recipient relationship specific to technology transfer. In this research, the evaluation criteria of several research organizations were assessed to understand the extent to which the success attributes identified in the literature were considered when reviewing research proposals. While some of the organizations included a few of the success attributes, none considered all of them, and none quantified their value. The effectiveness of the model relies extensively on expert judgments to complete the model validation and quantification. Subject matter experts, ranging from senior executives with extensive experience in technology transfer to principal research investigators from national labs, universities, utilities, and non-profit research organizations, were used to ensure a comprehensive and cross-functional validation and quantification of the decision model. The quantified model was validated using a case study involving demand response (DR) technology proposals in the Pacific Northwest. The DR technologies were selected based on their potential to solve some of the region's most prevalent issues. In addition, several sensitivity scenarios were developed to test the model's response to extreme cases, the impact of perturbations in expert responses, and whether it can be applied to technologies other than demand response; in other words, whether the model is technology agnostic. The model's flexibility as a communication tool was also assessed: it can indicate which success attributes in a research proposal are deficient and need strengthening, and how improvements would increase the overall technology transfer score. The low-scoring success attributes in the case study proposals (e.g., project meetings) were clearly identified as the areas to improve for increasing the technology transfer score. As a communication tool, the model could help a research organization identify areas it could bolster to improve its overall technology transfer score. Similarly, the technology recipient could use the results to identify areas that need to be reinforced while the research is ongoing. The research objective is to develop a decision model resulting in a technology transfer score that can be used to assess the technology transfer potential of a research proposal. The technology transfer score can be used by an organization in the development of a research portfolio. An organization's growth, in a highly competitive global market, hinges on superior R&D performance and the ability to apply the results. The energy sector is no different. While there is sufficient research being done to address the issues facing the utility industry, the rate at which technologies are adopted is lagging. The technology transfer score has the potential to increase the success of crossing the chasm to application by helping an organization make informed and deliberate decisions about its research portfolio.
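The scoring mechanism described above (a weighted hierarchy of criteria, with desirability curves mapping raw attribute values onto a common scale) can be illustrated with a minimal sketch. The perspectives, attribute names, weights, and curve breakpoints below are illustrative assumptions, not the author's actual model, where weights would be elicited from subject matter experts.

```python
# Hypothetical sketch of a hierarchical decision model with desirability
# curves. All weights, attributes, and breakpoints are made-up placeholders;
# a real model would quantify them with expert judgments.

def desirability(value, low, high):
    """Linear desirability curve: map a raw attribute value into [0, 1]."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

# perspective -> (perspective weight, {attribute: (weight, low, high)})
MODEL = {
    "relationship": (0.5, {"project_meetings": (0.6, 0, 10),
                           "site_visits": (0.4, 0, 4)}),
    "market":       (0.5, {"utility_demand": (1.0, 0, 100)}),
}

def technology_transfer_score(proposal):
    """Aggregate weighted desirabilities into one score in [0, 1]."""
    score = 0.0
    for p_weight, attrs in MODEL.values():
        for name, (a_weight, low, high) in attrs.items():
            score += p_weight * a_weight * desirability(proposal[name], low, high)
    return score

proposal = {"project_meetings": 5, "site_visits": 4, "utility_demand": 80}
print(round(technology_transfer_score(proposal), 3))
```

A low desirability on an attribute such as `project_meetings` immediately shows where a proposal could be strengthened and by how much the overall score would rise, which is the communication use described above.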
-
109. [Article] Improving the Roadside Environment through Integrating Air Quality and Traffic-Related Data
Urban arterial corridors are landscapes that give rise to short and long-term exposures to transportation-related pollution. With high traffic volumes, congestion, and a wide mix of road users and land ...Citation Citation
- Title:
- Improving the Roadside Environment through Integrating Air Quality and Traffic-Related Data
- Author:
- Kendrick, Christine M.
- Year:
- 2016
Urban arterial corridors are landscapes that give rise to short and long-term exposures to transportation-related pollution. With high traffic volumes, congestion, and a wide mix of road users and land uses at the road edge, urban arterial environments are important targets for improved exposure assessment to traffic-related pollution. Applying transportation management strategies to reduce emissions along arterial corridors could be enhanced if the ability to quantify and evaluate such actions were improved. However, arterial roadsides are under-sampled in terms of air pollution measurements in the United States, and using observational data to assess such effects poses many challenges, such as a lack of control sites for comparison and temporal autocorrelation. The availability of traffic-related data is also typically limited in air monitoring and health studies. The work presented here uses unique long-term roadside air quality measurements collected at an urban arterial intersection in Portland, OR to characterize the roadside atmospheric environment. This air quality dataset is then integrated with traffic-related data to assess various methods for improving exposure assessment and the roadside environment. Roadside nitric oxide (NO), nitrogen dioxide (NO2), and particle number concentration (PNC) measurements all demonstrated a relationship with local traffic volumes. Seasonal and diurnal characterizations show that roadside PM2.5 (mass) measurements do not have a relationship with local traffic volumes, providing evidence that PM2.5 mass is more tied to regional sources and meteorological conditions. The relationship of roadside NO and NO2 with traffic volumes was assessed over short and long-term aggregations to evaluate the reliability of a commonly employed method of using traffic volumes as a proxy for traffic-related exposure. This method was shown to be insufficient at shorter time scales.
Comparisons with annual aggregations validate the use of traffic volumes to estimate annual exposure concentrations, demonstrating this method can capture chronic but not acute exposure. As epidemiology and exposure assessment aim to target health impacts and pollutant levels encountered by pedestrians, cyclists, and those waiting for transit, these results show when traffic volumes alone can be a reliable proxy for exposure and when this approach is not warranted. Next, it is demonstrated that a change in traffic flow and a change in emissions can be measured through roadside pollutant concentrations, suggesting roadside pollution can be affected by traffic signal timing. The effect of a reduced maximum traffic signal cycle length on measurements of degree of saturation (DS), NO, and NO2 was evaluated for the peak traffic periods in two case studies at the study intersection. In order to reduce bias from covariates and assess the effect due to the change in cycle length only, a matched sampling method based on propensity scores was used to compare treatment periods (reduced cycle length) with control periods (no change in cycle length). Significant increases in DS values of 2-8% were found, along with significant increases of 5-8 ppb in NO and 4-5 ppb in NO2, across three peak periods in both case studies. Without matched sampling to address the challenges of observational data, the small DS and NOx changes at the study intersection would have been masked; matched sampling is thus shown to be a helpful tool for future empirical urban air quality investigations. Dispersion modeling evaluations showed that the California Line Source Dispersion Model with Queuing and Hotspot Calculations (CAL3QHCR), an approved regulatory model for assessing the impacts of transportation projects on PM2.5, performed both poorly and well, depending on season, when predictions were compared with PM2.5 observations.
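The matched-sampling step above can be sketched as a greedy one-to-one nearest-neighbor match on propensity scores, followed by a treated-minus-control comparison of the outcome. The propensity scores and NO values below are fabricated toy data; the study's actual analysis would estimate propensities from covariates (e.g., meteorology and traffic volume), likely via logistic regression, and use a more careful matching and significance-testing procedure.

```python
# Illustrative sketch of propensity-score matched sampling for comparing
# treatment periods (reduced signal cycle length) with control periods.
# All numbers are toy placeholders, not data from the study.

def match_and_compare(treated, controls):
    """Greedy 1:1 nearest-neighbor match on propensity score; return the
    mean treated-minus-control difference in the outcome."""
    available = list(controls)
    diffs = []
    for t_score, t_outcome in treated:
        # pick the unmatched control period with the closest propensity score
        best = min(available, key=lambda c: abs(c[0] - t_score))
        available.remove(best)
        diffs.append(t_outcome - best[1])
    return sum(diffs) / len(diffs)

# (propensity_score, roadside NO in ppb) per monitoring period -- toy values
treated  = [(0.62, 24.0), (0.55, 21.0), (0.70, 27.0)]
controls = [(0.60, 18.0), (0.50, 16.0), (0.71, 20.0), (0.30, 10.0)]
print(match_and_compare(treated, controls))  # mean matched NO difference
```

The point of matching on propensity rather than comparing raw period means is that treatment and control periods with similar covariate profiles are paired, so a small signal-timing effect is not swamped by weather or traffic differences.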
Varying levels of detail in emissions, traffic signal, and traffic volume data for model inputs, assessed using three model scenarios, did not affect model performance for the study intersection. Model performance is instead heavily dependent on background concentrations and meteorology. It was also demonstrated that CAL3QHC can be used in combination with roadside PNC measurements to back-calculate PNC emission factors for a mixed fleet and major arterial roadway in the U.S. The integration of roadside air quality and traffic-related data made it possible to perform unique empirical evaluations of exposure assessment methods and dispersion modeling methods for roadside environments. This data integration was used to assess the relationship between roadside pollutants and a change in a traffic signal setting, a commonly employed method for transportation management and emissions mitigation, but one rarely evaluated outside of simulation and emissions modeling. Results and methods derived from this work are being used to implement a second roadside air quality station and to design a city-wide integrated network of air quality, meteorological, and traffic data, including additional spatially resolved measurements with feedback loops for improved data quality and usefulness. Results and methods are also being used to design future evaluations of transportation projects, such as freight priority signaling and improved transit signal priority, and to understand the air quality impacts of changes in fleet composition, such as an increase in electric vehicles.
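The back-calculation of emission factors mentioned above rests on the fact that line-source dispersion models are linear in the emission rate: running the model with a unit emission rate and scaling its output to match the observed above-background roadside increment recovers a fleet-average emission factor. The sketch below shows only that scaling step with made-up numbers; it is not the study's actual CAL3QHC workflow, and the unit-emission concentration would in practice come from a model run with site-specific traffic and meteorology.

```python
# Hedged sketch of back-calculating a fleet emission factor from roadside
# measurements, assuming the dispersion model is linear in emission rate.
# All values are illustrative placeholders, not results from the study.

def back_calculate_ef(roadside_conc, background_conc, unit_emission_conc):
    """Emission factor = observed above-background concentration divided by
    the modeled concentration per unit emission rate."""
    increment = roadside_conc - background_conc
    if increment <= 0:
        raise ValueError("roadside concentration must exceed background")
    return increment / unit_emission_conc

# Toy particle-number example: concentrations in particles/cm^3, and the
# unit-emission model output in (particles/cm^3) per (particles/veh-km).
ef = back_calculate_ef(roadside_conc=25000.0,
                       background_conc=10000.0,
                       unit_emission_conc=1.5e-10)
print(f"{ef:.2e}")  # fleet-average particles per vehicle-kilometer
```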