Search Results
-
161. [Article] Trauma and Betrayal Blindness in Charitable Donations
- Title:
- Trauma and Betrayal Blindness in Charitable Donations
- Author:
- Kaehler, Laura
- Year:
- 2014
Betrayal trauma theory (BTT; see Freyd, 1996) posits that betrayal events often require "betrayal blindness" in order to limit awareness or memory of information regarding the betrayal. This occurs in order to maintain a connection that is necessary for survival. BTT may be applied to events that generally would not be considered traumatic, such as adultery or discrimination. In order to maintain connections within relationships, institutions, and social systems upon which there is a dependency, people (acting as victims, perpetrators, and witnesses) may show betrayal blindness. This dissertation consists of two studies investigating betrayal blindness and betrayal trauma history as they relate to charitable behavior. Study 1 included 467 college students at the University of Oregon who completed self-report measures of trauma history and a behavioral measure requesting a hypothetical donation. Contributions were requested for three scenarios that varied in level of betrayal: natural disaster, external genocide, and internal genocide. Results indicated no significant main effects for trauma history or type of event. However, people were less willing to donate to the group of recipients and the genocide conditions at low levels of emotional arousal. Additionally, those who had experienced high-betrayal traumas were also less likely to donate at low emotional response values. Given the lack of significant findings in this experiment, a second study was conducted using a repeated measures design. Study 2 involved 634 undergraduate students at the University of Oregon. In addition to the measures from Study 1, participants also completed additional self-report measures assessing trait measures of prosocial tendencies, social desirability, personality, emotion regulation, and betrayal awareness. There were no main effects on charitable behavior for personality traits, prosociality, emotion regulation, social desirability, or betrayal awareness.
Significant order effects were observed when comparing the type of event and the betrayal level of the event. A between-subjects approach revealed that people donated less money to the higher-betrayal versions of both types of scenarios. Across both studies, increased affect, particularly guilt, was associated with more charitable behavior. Although these studies have several limitations, the findings represent an important first step in exploring prosocial behavior within a betrayal trauma framework.
-
162. [Article] Novel Methods for Learning and Adaptation in Chemical Reaction Networks
- Title:
- Novel Methods for Learning and Adaptation in Chemical Reaction Networks
- Author:
- Banda, Peter
- Year:
- 2015
State-of-the-art biochemical systems for medical applications and chemical computing are application-specific and cannot be re-programmed or trained once fabricated. The implementation of adaptive biochemical systems that would offer flexibility through programmability and autonomous adaptation faces major challenges because of the large number of required chemical species as well as the timing-sensitive feedback loops required for learning. Currently, biochemistry lacks a systems vision of how the user-level programming interface and abstraction, with a subsequent translation to chemistry, should look. By developing adaptation in chemistry, we could replace multiple hard-wired systems with a single programmable template that can be (re)trained to match a desired input-output profile, benefiting smart drug delivery, pattern recognition, and chemical computing. I aimed to address these challenges by proposing several approaches to learning and adaptation in Chemical Reaction Networks (CRNs), a type of simulated chemistry in which species are unstructured, i.e., they are identified by symbols rather than molecular structure, and their dynamics, or concentration evolution, are driven by reactions and reaction rates that follow mass-action and Michaelis-Menten kinetics. Several CRN and experimental DNA-based models of neural networks exist. However, these models successfully implement only the forward pass, i.e., the input-weight integration part of a perceptron model. Learning is delegated to a non-chemical system that computes the weights before converting them to molecular concentrations. Autonomous learning, i.e., learning implemented fully inside chemistry, has been absent from both theoretical and experimental research. The research in this thesis offers the first constructive evidence that learning in CRNs is, in fact, possible.
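The mass-action kinetics that drive these CRN dynamics can be illustrated with a minimal simulation. The reaction, rate constant, and initial concentrations below are hypothetical placeholders, not taken from the dissertation's models:

```python
# Hypothetical CRN: A + B -> C under mass-action kinetics, where the
# reaction rate is k[A][B]. Integrated with a simple explicit Euler step.
def simulate(a0, b0, c0, k=1.0, dt=1e-3, steps=5000):
    a, b, c = a0, b0, c0
    for _ in range(steps):
        flux = k * a * b          # mass-action rate of A + B -> C
        a -= flux * dt
        b -= flux * dt
        c += flux * dt
    return a, b, c

a, b, c = simulate(1.0, 0.5, 0.0)
# Mass conservation: a + c and b + c stay (numerically) constant.
print(round(a + c, 6), round(b + c, 6))
```

In a real CRN model each perceptron weight would be one such concentration, evolving under many coupled reactions of this form.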
I have introduced the original concept of a chemical binary perceptron that can learn all 14 linearly separable logic functions and is robust to the perturbation of rate constants. That shows learning is universal and substrate-free. To simplify the model, I later proposed and applied "asymmetric" chemical arithmetic, providing a compact solution for representing negative numbers in chemistry. To tackle more difficult tasks and to serve more complicated biochemical applications, I introduced several key modular building blocks, each addressing certain aspects of chemical information processing and learning. These parts organically combined into gradually more complex systems. First, instead of simple static Boolean functions, I tackled analog time-series learning and signal processing by modeling an analog chemical perceptron. To store past input concentrations as a sliding window, I implemented a chemical delay line, which feeds the values to the underlying chemical perceptron. That allows the system to learn, e.g., the linear moving average and, to some degree, to predict a highly nonlinear NARMA benchmark series. Another important contribution to the area of chemical learning, which I have helped to shape, is the composability of perceptrons into larger multi-compartment networks. Each compartment hosts a single chemical perceptron, and compartments communicate with each other through a channel-mediated exchange of molecular species. Besides the feedforward pass, I implemented chemical error backpropagation analogous to that of feedforward neural networks. Also, after applying mass-action kinetics to the catalytic reactions, I was able to systematically analyze the ODEs of my models and derive closed exact and approximate formulas for both the input-weight integration and the weight update with learning rate annealing.
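As a point of reference for what the chemical binary perceptron emulates, here is a sketch of the formal perceptron learning rule applied to AND, one of the 14 linearly separable two-input logic functions. This is only the abstract neural model; the chemistry encodes weights as species concentrations and performs the update through reactions:

```python
# Formal binary perceptron (the abstraction the chemical version emulates).
def train_perceptron(truth_table, epochs=20, lr=0.5):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in truth_table:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # classic perceptron learning rule
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# AND, one of the 14 linearly separable 2-input logic functions
and_table = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(and_table)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
print([predict(x0, x1) for (x0, x1), _ in and_table])  # → [0, 0, 0, 1]
```

The two non-separable functions (XOR, XNOR) cannot be learned by a single perceptron, chemical or formal, which is what motivates the multi-compartment networks described above.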
I proved mathematically that the formulas of certain chemical perceptrons equal those of the formal linear and sigmoid neurons, essentially bridging neural networks and adaptive CRNs. For all my models, the basic methodology was to first design species and reactions, and then set the rate constants either "empirically" by hand, automatically by a standard genetic algorithm (GA), or analytically if possible. I performed all simulations in my COEL framework, which is the first cloud-based chemistry modeling tool, accessible at http://coel-sim.org. I minimized the number of required molecular species and reactions to make wet chemical implementation possible. I applied an automated mapping technique, Soloveichik's CRN-to-DNA-strand-displacement transformation, to the chemical linear perceptron and the manual signaling delay line and obtained their fully DNA-strand-specified implementations. As an alternative DNA-based substrate, I also mapped these two models to deoxyribozyme-mediated cleavage reactions, reducing the size of the displacement variant to a third. Both DNA-based incarnations could directly serve as blueprints for wet biochemical implementations. Besides an actual synthesis of my models and conducting an experiment in a biochemical laboratory, the most promising future work is to employ so-called reservoir computing (RC), a novel machine learning method based on recurrent neural networks. The RC approach is relevant because, for time-series prediction, it is clearly superior to classical recurrent networks. It can also be implemented in various substrates, including electrical circuits, physical systems such as a colony of Escherichia coli, and even water. RC's loose structural assumptions therefore suggest that it could be expressed in a chemical form as well. This could further enhance the expressivity and capabilities of chemically embedded learning.
My chemical learning systems may have applications in the area of medical diagnosis and smart medication, e.g., concentration signal processing and monitoring, and the detection of harmful species, such as chemicals produced by cancer cells in a host (cancer miRNAs), or the detection of a severe event, defined as a linear or nonlinear temporal concentration pattern. My approach could replace hard-coded solutions and would make it possible to specify, train, and reuse chemical systems without redesigning them. With time-series integration, biochemical computers could keep a record of changing biological systems and act as diagnostic aids and tools in preventative and highly personalized medicine.
-
163. [Article] Study of Prestige and Resource Control Using Fish Remains from Cathlapotle, a Plankhouse Village on the Lower Columbia River
- Title:
- Study of Prestige and Resource Control Using Fish Remains from Cathlapotle, a Plankhouse Village on the Lower Columbia River
- Author:
- Rosenberg, J. Shoshana
- Year:
- 2015
Social inequality is a hallmark of Northwest Coast native societies, and the relationship between social prestige and resource control, particularly resource ownership, is an important research issue on the Northwest Coast. Faunal remains are one potential but as yet underutilized path for examining this relationship. My thesis takes this approach through the analysis of fish remains from the Cathlapotle archaeological site (45CL1). Cathlapotle is a large Chinookan village site located on the Lower Columbia River that was extensively excavated in the 1990s. Previous work has established prestige distinctions between houses and house compartments, making it possible to examine the relationship between prestige and the spatial distribution of fish remains. In this study, I examine whether having high prestige afforded its bearers greater access to preferred fish, utilizing comparisons of fish remains at two different levels of social organization, between and within households, to determine which social mechanisms could account for potential differences in access to fish resources. Differential access to these resources within the village could have occurred through household-level ownership of harvesting sites or control over the post-harvesting distribution of food by certain individuals. Previous work in this region on the relationship between faunal remains and prestige has relied heavily on ethnohistoric sources to determine the relative value of taxa. These sources do not provide adequate data to make detailed comparisons among all of the taxa encountered at archaeological sites, so in this study I utilize optimal foraging theory as an alternative means of determining which fish taxa were preferred. Optimal foraging theory provides a universal, quantitative analytical rule for ranking fish that I was able to apply to all of the taxa encountered at Cathlapotle.
Given these rankings, which are based primarily on size, I examine the degree to which relative prestige designations of two households (Houses 1 and 4) and compartments within one of those households (House 1) are reflected in the spatial distribution of fish remains. I also offer a new method for quantifying sturgeon that utilizes specimen weight to account for differential fragmentation rates while still allowing for sturgeon abundance to be compared to the abundances of other taxa that have been quantified by number of identified specimens (NISP). Based on remains recovered from 1/4" mesh screens, comparisons between compartments within House 1 indicate that the chief and possibly other elite members of House 1 likely had some control over the distribution of fish resources within their household, taking more of the preferred sturgeon and salmon, particularly more chinook salmon, for themselves. Comparisons between households provide little evidence to support household-based ownership of fishing sites. A greater abundance of chinook salmon in the higher prestige House 1 may indicate ownership of fishing platforms at major chinook fisheries such as Willamette Falls or Cascades Rapids, but other explanations for this difference between households are possible. Analyses of a limited number of bulk samples, which were included in the study in order to examine utilization of very small fishes, provided insufficient data to allow for meaningful intrasite comparisons. These data indicate that the inhabitants of Cathlapotle were exploiting a broad fish subsistence base that included large numbers of eulachon and stickleback in addition to the larger fishes. This study provides a promising approach for examining prestige on the Northwest Coast and expanding our understanding of the dynamics between social inequality and resource access and control.
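One plausible form of such a weight-based adjustment can be sketched as follows; the quantities and the assumed mean specimen weight are illustrative inventions, and the thesis's exact formula may differ:

```python
# Hypothetical weight-based abundance estimate for a heavily fragmented
# taxon (e.g., sturgeon), made comparable to NISP counts of other taxa.
# Values below are illustrative, not data from the Cathlapotle assemblage.
def weight_adjusted_count(total_weight_g, mean_specimen_weight_g):
    # Dividing total assemblage weight by an assumed mean whole-specimen
    # weight keeps fragmentation from inflating the count.
    return total_weight_g / mean_specimen_weight_g

sturgeon_equivalent = weight_adjusted_count(total_weight_g=480.0,
                                            mean_specimen_weight_g=1.6)
salmon_nisp = 250  # identified salmon specimens, counted directly
print(sturgeon_equivalent, salmon_nisp)  # both now on a count-like scale
```

The point of the adjustment is that a sturgeon element shattered into ten fragments contributes the same weight, but ten times the NISP, of an intact one.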
-
164. [Article] Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest
- Title:
- Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest
- Author:
- Estep, Judith
- Year:
- 2017
Investment in Research and Development (R&D) is necessary for innovation, allowing an organization to maintain a competitive edge. The U.S. Federal Government invests billions of dollars, primarily in basic research technologies, to help fill the pipeline for other organizations to take the technology into commercialization. However, it is not just about investing in innovation; it is about converting that research into application. A cursory review of research proposal evaluation criteria suggests that little to no emphasis is placed on the transfer of research results. This effort is motivated by a need to move research into application. One segment that is facing technology challenges is the energy sector. Historically, the electric grid has been stable and predictable; therefore, there were no immediate drivers to innovate. However, an aging infrastructure, the integration of renewable energy, and aggressive energy efficiency targets are motivating the need for research and for putting promising results into application. Many technologies exist or are in development, but the rate at which they are being adopted is slow. The goal of this research is to develop a decision model that can be used to identify the technology transfer potential of a research proposal. An organization can use the model to select the proposals whose research outcomes are more likely to move into application. The model begins to close the chasm between research and application -- otherwise known as the "valley of death." A comprehensive literature review was conducted to understand when the idea of technology application or transfer should begin. Next, the attributes that are necessary for successful technology transfer were identified. Successful technology transfer hinges on a productive relationship between the researchers and the technology recipient.
A hierarchical decision model, along with desirability curves, was used to capture the complexities of the researcher-recipient relationship specific to technology transfer. In this research, the evaluation criteria of several research organizations were assessed to understand the extent to which the success attributes identified in the literature were considered when reviewing research proposals. While some of the organizations included a few of the success attributes, none considered all of them, and none quantified their value. The effectiveness of the model relies extensively on expert judgment for its validation and quantification. Subject matter experts, ranging from senior executives with extensive experience in technology transfer to principal research investigators from national labs, universities, utilities, and non-profit research organizations, were engaged to ensure a comprehensive and cross-functional validation and quantification of the decision model. The quantified model was validated using a case study involving demand response (DR) technology proposals in the Pacific Northwest. The DR technologies were selected based on their potential to solve some of the region's most prevalent issues. In addition, several sensitivity scenarios were developed to test the model's response to extreme cases, the impact of perturbations in expert responses, and whether it can be applied to technologies other than demand response. In other words, is the model technology agnostic? The model's flexibility as a communication tool was also assessed: it can indicate which success attributes in a research proposal are deficient and need strengthening, and how improvements would increase the overall technology transfer score. The low-scoring success attributes in the case study proposals (e.g., project meetings)
were clearly identified as the areas to improve in order to increase the technology transfer score. As a communication tool, the model could help a research organization identify areas to bolster to improve its overall technology transfer score. Similarly, the technology recipient could use the results to identify areas that need to be reinforced while the research is ongoing. The research objective is to develop a decision model resulting in a technology transfer score that can be used to assess the technology transfer potential of a research proposal. The technology transfer score can be used by an organization in the development of a research portfolio. An organization's growth in a highly competitive global market hinges on superior R&D performance and the ability to apply the results. The energy sector is no different. While there is sufficient research being done to address the issues facing the utility industry, the rate at which technologies are adopted is lagging. The technology transfer score has the potential to increase the likelihood of crossing the chasm to successful application by helping an organization make informed and deliberate decisions about its research portfolio.
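The scoring mechanics of a hierarchical decision model with desirability curves can be sketched as follows. The attribute names, expert weights, and linear desirability curves are hypothetical placeholders, not the expert-quantified values from this research:

```python
# Sketch of a hierarchical decision model score with desirability curves.
import bisect

def desirability(curve, x):
    """Piecewise-linear desirability curve: curve = [(level, value), ...]."""
    xs = [p[0] for p in curve]
    ys = [p[1] for p in curve]
    if x <= xs[0]: return ys[0]
    if x >= xs[-1]: return ys[-1]
    i = bisect.bisect_right(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical success attributes with expert weights summing to 1,
# each mapped to desirability via a 0-5 rating scale.
model = {
    "researcher_recipient_relationship": (0.40, [(0, 0.0), (5, 1.0)]),
    "project_meetings":                  (0.25, [(0, 0.0), (5, 1.0)]),
    "recipient_readiness":               (0.35, [(0, 0.0), (5, 1.0)]),
}

def tech_transfer_score(proposal_levels):
    # Weighted sum of each attribute's desirability value.
    return sum(w * desirability(curve, proposal_levels[attr])
               for attr, (w, curve) in model.items())

score = tech_transfer_score({"researcher_recipient_relationship": 4,
                             "project_meetings": 1,
                             "recipient_readiness": 3})
print(round(score, 3))  # → 0.58
```

A score like this also doubles as the communication tool described above: the lowest weighted term (here, project meetings) points directly at the attribute to strengthen.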
-
165. [Article] Improving the Roadside Environment through Integrating Air Quality and Traffic-Related Data
- Title:
- Improving the Roadside Environment through Integrating Air Quality and Traffic-Related Data
- Author:
- Kendrick, Christine M.
- Year:
- 2016
Urban arterial corridors are landscapes that give rise to short- and long-term exposures to transportation-related pollution. With high traffic volumes, congestion, and a wide mix of road users and land uses at the road edge, urban arterial environments are important targets for improved assessment of exposure to traffic-related pollution. Applying transportation management strategies to reduce emissions along arterial corridors would be enhanced by a better ability to quantify and evaluate such actions. However, arterial roadsides are under-sampled in terms of air pollution measurements in the United States, and using observational data to assess such effects poses many challenges, such as the lack of control sites for comparison and temporal autocorrelation. The availability of traffic-related data is also typically limited in air monitoring and health studies. The work presented here uses a unique long-term roadside air quality dataset collected at the intersection of an urban arterial in Portland, OR to characterize the roadside atmospheric environment. This air quality dataset is then integrated with traffic-related data to assess various methods for improving exposure assessment and the roadside environment. Roadside nitric oxide (NO), nitrogen dioxide (NO2), and particle number concentration (PNC) measurements all demonstrated a relationship with local traffic volumes. Seasonal and diurnal characterizations show that roadside PM2.5 (mass) measurements do not have a relationship with local traffic volumes, providing evidence that PM2.5 mass is tied more to regional sources and meteorological conditions. The relationship of roadside NO and NO2 with traffic volumes was assessed over short- and long-term aggregations to evaluate the reliability of a commonly employed method of using traffic volumes as a proxy for traffic-related exposure. This method was shown to be insufficient for shorter time scales.
Comparisons with annual aggregations validate the use of traffic volumes to estimate annual exposure concentrations, demonstrating that this method can capture chronic but not acute exposure. As epidemiology and exposure assessment aim to target the health impacts and pollutant levels encountered by pedestrians, cyclists, and those waiting for transit, these results show when traffic volumes alone can be a reliable proxy for exposure and when this approach is not warranted. Next, it is demonstrated that a change in traffic flow and a change in emissions can be measured through roadside pollutant concentrations, suggesting that roadside pollution can be affected by traffic signal timing. The effect of a reduced maximum traffic signal cycle length on measurements of degree of saturation (DS), NO, and NO2 was evaluated for the peak traffic periods in two case studies at the study intersection. In order to reduce bias from covariates and assess the effect of the change in cycle length only, a matched sampling method based on propensity scores was used to compare treatment periods (reduced cycle length) with control periods (no change in cycle length). Significant increases in DS values of 2-8% were found, along with significant increases of 5-8 ppb NO and 4-5 ppb NO2, across three peak periods in both case studies. Without matched sampling to address the challenges of observational data, the small DS and NOx changes at the study intersection would have been masked; matched sampling is thus shown to be a helpful tool for future urban air quality empirical investigations. Dispersion modeling evaluations showed that the California Line Source Dispersion Model with Queuing and Hotspot Calculations (CAL3QHCR), an approved regulatory model for assessing the impacts of transportation projects on PM2.5, performed both poorly and well, depending on season, when predictions were compared with PM2.5 observations.
Varying the level of detail in emissions, traffic signal, and traffic volume data for model inputs, assessed using three model scenarios, did not affect model performance for the study intersection. Model performance is heavily dependent on background concentrations and meteorology. It was also demonstrated that CAL3QHC can be used in combination with roadside PNC measurements to back-calculate PNC emission factors for a mixed fleet and major arterial roadway in the U.S. The integration of roadside air quality and traffic-related data made it possible to perform unique empirical evaluations of exposure assessment methods and dispersion modeling methods for roadside environments. This data integration was used to assess the relationship between roadside pollutants and a change in a traffic signal setting, a commonly employed method for transportation management and emissions mitigation that is rarely evaluated outside of simulation and emissions modeling. Results and methods derived from this work are being used to implement a second roadside air quality station and to design a city-wide integrated network of air quality, meteorological, and traffic data, including additional spatially resolved measurements with feedback loops for improved data quality and usefulness. Results and methods are also being used to design future evaluations of transportation projects such as freight priority signaling and improved transit signal priority, and to understand the air quality impacts of changes in fleet composition such as an increase in electric vehicles.
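The matched-sampling idea used for the signal timing analysis can be sketched as follows, assuming propensity scores have already been estimated (e.g., by logistic regression on the covariates, which is omitted here). All scores and NO2 values below are illustrative, not the study's data:

```python
# Sketch of matched sampling on propensity scores: each treatment period
# (reduced signal cycle length) is paired with the control period whose
# propensity score is closest, and the outcome differences are averaged.
def matched_difference(treated, controls):
    """treated/controls: lists of (propensity_score, outcome) pairs."""
    diffs = []
    for score, outcome in treated:
        # nearest-neighbor match on the propensity score
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

treated  = [(0.62, 24.0), (0.55, 22.5), (0.71, 26.0)]          # (score, NO2 ppb)
controls = [(0.60, 19.5), (0.54, 18.0), (0.70, 21.0), (0.40, 15.0)]
print(round(matched_difference(treated, controls), 2))  # → 4.67
```

Comparing only score-similar periods is what strips out the covariate bias; a naive treated-minus-control mean over all periods would fold in the unmatched low-score control as well.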