Search Results
-
4401. [Article] Bicyclist Compliance at Signalized Intersections
- Title:
- Bicyclist Compliance at Signalized Intersections
- Author:
- Thompson, Samson Ray Riley
- Year:
- 2015
This project examined cyclist red light running behavior using two data sets. Previous studies of cyclist compliance have investigated the tendencies of cyclists to run red lights on the whole by generalizing different maneuvers to their end outcome, running a red light. This project differentiates between the different types of red light running and focuses on the most egregious case, gap acceptance, which is when a cyclist runs a red light by accepting a gap in opposing traffic. Using video data, a mathematical model of cyclist red light running was developed for gap acceptance. Similar to other studies, this analysis utilized only information about the cyclist, intersection, and scenario that can be outwardly observed. This analysis found that the number of cyclists already waiting at the signal, the presence of a vehicle in the adjacent lane, and female sex were deterrents to red light running. Conversely, certain types of signal phasing, witnessing a violation, and lack of a helmet increased the odds that a cyclist would run the red light. Interestingly, while women in general are less likely to run a red light, women who witnessed a violation were even more prone than men who had witnessed a violation to follow suit and run the red light themselves. It is likely that the differing socialization of women and men leads to different effects of witnessing a previous violator. The analysis also confirmed that a small subset of cyclists, similar to that found in the general population, is more prone to traffic violations. These cyclists are more willing to engage in multiple biking-related risk factors, including not wearing a helmet and running red lights. Although the model has definite explanatory power regarding decisions of cyclist compliance, much of the variance in the compliance choices of the sample is left unexplained. This points toward the influence of other, not outwardly observable variables on the decision to run a red light.
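The abstract describes a mathematical model of red light running built from outwardly observable predictors. Models of this kind are commonly binary logistic regressions; the sketch below fits one by stochastic gradient descent on synthetic data. The variable names and effect directions (waiting cyclists, an adjacent vehicle, and female sex deterring; witnessing a violation encouraging) are assumptions mirroring the stated findings, not the dissertation's actual data, model form, or coefficients.

```python
import math
import random

def fit_logistic(X, y, lr=0.05, epochs=300):
    """Fit a binary logistic regression by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(run red light)
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

# Hypothetical observations: [cyclists_waiting, vehicle_adjacent, female, witnessed_violation]
random.seed(0)
X, y = [], []
for _ in range(400):
    xi = [random.randint(0, 4), random.randint(0, 1),
          random.randint(0, 1), random.randint(0, 1)]
    # Assumed effect directions, mirroring the abstract's findings
    z = 0.5 - 0.8 * xi[0] - 1.0 * xi[1] - 0.7 * xi[2] + 1.2 * xi[3]
    X.append(xi)
    y.append(1 if random.random() < 1.0 / (1.0 + math.exp(-z)) else 0)

w, b = fit_logistic(X, y)
# Fitted signs should recover the assumed deterrent/encouraging effects
print([round(wi, 2) for wi in w])
```

The sign pattern of the fitted weights (negative for deterrents, positive for witnessing a violation) is what a study like this reports as odds-ratio directions.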
Analysis of survey data from cyclists further confirms that individual characteristics not visible to the observer interact with intersection, scenario, and visible cyclist characteristics to result in a decision to comply (or not) with a traffic signal. Furthermore, cyclist characteristics in general, and unobservable individual characteristics specifically, play a larger role in compliance decisions as the number of compliance-inducing intersection traits (e.g., conflicting traffic volume) decreases. One such unobservable trait is regard for the law, which becomes a more important determinant of compliance at simpler intersections. Cyclists were also shown to choose non-compliance if they questioned the validity of the red indication for them, as cyclists. The video and survey data have some comparable findings. For instance, the relationship of age to compliance was explored in both data analyses. Age was not found to be a significant predictor of non-compliance in the video data analysis, while it was negatively correlated with stated non-compliance for two of the survey intersections. Gender, while having significant effects on non-compliance in the video dataset, did not emerge as an important factor in the stated non-compliance of survey takers. Helmet use had a consistent relationship with compliance between the video and survey datasets: it was positively associated with compliance in the video data and negatively associated with stated non-compliance at two of the survey intersections. When coupled with the positive association between normlessness and stated willingness to run a red light, the relationship between helmet use and compliance solidifies the notion that a class of cyclists is more likely to consistently violate signals. It points toward a link between red light running and individuals who do not adhere to social norms and policies as strictly as others.
Variables representing cyclists and motorists waiting at the signal were positively related to signal compliance in the video data. While an increased number of cyclists may be a physical deterrent to red light running, part of the influence exerted by this variable and by the variable representing the presence of a vehicle may be due to the accountability of cyclists to other road users. This relationship, however, was not revealed in the stated non-compliance data from the survey. Efforts to increase cyclist compliance may not be worth a jurisdiction's resources, since nearly 90% of cyclists in the video data were already compliant. If a problem intersection does warrant intervention, different methods of ensuring bicyclist compliance are warranted depending on the intersection characteristics. An alternative solution is to consider the applicability of traffic laws (originally designed for cars) to bicyclists. Creating separation in how laws affect motorists and cyclists might be a better solution for overly simple types of intersections where cyclists have fewer conflicts, better visibility, etc. than motorists. Education or other messaging aimed at cyclists about compliance is another strategy to increase compliance. Since cyclists appear to feel more justified in running red lights at low-volume, simple-looking intersections, it would probably be prudent to target messaging at these types of intersections. Many cyclists are deterred by high-volume and/or complicated-looking intersections for safety reasons. Reminding cyclists of the potential dangers at other intersections may be a successful messaging strategy. Alternatively, reminding cyclists that it is still illegal to run a red light even if they feel safe doing so may be prudent.
Additionally, messaging about the purpose of infrastructure such as bicycle-specific signals or lights that indicate detection at a signal may convince cyclists that stopping at the signal is in their best interest and that the wait will be minimal and/or warranted.
-
4402. [Article] Novel Methods for Learning and Adaptation in Chemical Reaction Networks
- Title:
- Novel Methods for Learning and Adaptation in Chemical Reaction Networks
- Author:
- Banda, Peter
- Year:
- 2015
State-of-the-art biochemical systems for medical applications and chemical computing are application-specific and cannot be re-programmed or trained once fabricated. The implementation of adaptive biochemical systems that would offer flexibility through programmability and autonomous adaptation faces major challenges because of the large number of required chemical species as well as the timing-sensitive feedback loops required for learning. Currently, biochemistry lacks a systems-level vision of what the user-level programming interface and abstraction, with a subsequent translation to chemistry, should look like. By developing adaptation in chemistry, we could replace multiple hard-wired systems with a single programmable template that can be (re)trained to match a desired input-output profile, benefiting smart drug delivery, pattern recognition, and chemical computing. I aimed to address these challenges by proposing several approaches to learning and adaptation in Chemical Reaction Networks (CRNs), a type of simulated chemistry in which species are unstructured, i.e., identified by symbols rather than molecular structure, and their dynamics, or concentration evolution, are driven by reactions and reaction rates that follow mass-action and Michaelis-Menten kinetics. Several CRN and experimental DNA-based models of neural networks exist. However, these models successfully implement only the forward pass, i.e., the input-weight integration part of a perceptron model. Learning is delegated to a non-chemical system that computes the weights before converting them to molecular concentrations. Autonomous learning, i.e., learning implemented fully inside chemistry, has been absent from both theoretical and experimental research. The research in this thesis offers the first constructive evidence that learning in CRNs is, in fact, possible.
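Under mass-action kinetics, each reaction's rate is the product of its reactants' concentrations and a rate constant. As a minimal illustration (the species names and rate constant here are illustrative, not from the dissertation's models), the single reaction A + B -> C integrated by forward Euler:

```python
# Mass-action kinetics for A + B -> C with rate constant k:
# d[A]/dt = d[B]/dt = -k[A][B],  d[C]/dt = +k[A][B]
def simulate(a0, b0, k=1.0, dt=1e-3, steps=10000):
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        flux = k * a * b          # instantaneous mass-action rate
        a -= flux * dt
        b -= flux * dt
        c += flux * dt
    return a, b, c

a, b, c = simulate(1.0, 0.5)
# Stoichiometry conserves the totals A+C and B+C; with B limiting,
# C approaches 0.5 as the reaction runs to completion.
print(round(a + c, 6), round(b + c, 6), round(c, 3))
```

The CRNs in the dissertation couple many such reactions, so that the resulting ODE system performs input-weight integration and weight updates; this fragment only shows the kinetic law those systems obey.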
I have introduced the original concept of a chemical binary perceptron that can learn all 14 linearly separable logic functions and is robust to the perturbation of rate constants, showing that learning is universal and substrate-free. To simplify the model, I later proposed and applied an "asymmetric" chemical arithmetic that provides a compact representation of negative numbers in chemistry. To tackle more difficult tasks and to serve more complicated biochemical applications, I introduced several key modular building blocks, each addressing certain aspects of chemical information processing and learning. These parts combined organically into gradually more complex systems. First, instead of simple static Boolean functions, I tackled analog time-series learning and signal processing by modeling an analog chemical perceptron. To store past input concentrations as a sliding window, I implemented a chemical delay line, which feeds the values to the underlying chemical perceptron. This allows the system to learn, e.g., a linear moving average and, to some degree, predict the highly nonlinear NARMA benchmark series. Another important contribution to the area of chemical learning, which I have helped to shape, is the composability of perceptrons into larger multi-compartment networks. Each compartment hosts a single chemical perceptron, and compartments communicate with each other through a channel-mediated exchange of molecular species. Besides the feedforward pass, I implemented a chemical error backpropagation analogous to that of feedforward neural networks. Also, after applying mass-action kinetics to the catalytic reactions, I was able to systematically analyze the ODEs of my models and derive closed-form exact and approximate formulas for both the input-weight integration and the weight update with learning-rate annealing.
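For reference, the formal model the chemical binary perceptron emulates is the classic perceptron learning rule, which converges on any linearly separable function (AND is one of the 14 such two-input functions). This sketch is the abstract algorithm only; the chemical version stores weights as species concentrations and performs the update via reactions.

```python
# Formal perceptron learning the linearly separable AND function.
def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                    # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(preds)  # prints [0, 0, 0, 1]
```

XOR and XNOR, the two non-linearly-separable two-input functions, are exactly the cases a single perceptron (chemical or formal) cannot learn, which motivates the multi-compartment networks described next.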
I proved mathematically that the formulas of certain chemical perceptrons are equivalent to those of the formal linear and sigmoid neurons, essentially bridging neural networks and adaptive CRNs. For all my models, the basic methodology was to first design species and reactions, and then set the rate constants either "empirically" by hand, automatically by a standard genetic algorithm (GA), or analytically where possible. I performed all simulations in my COEL framework, which is the first cloud-based chemistry modeling tool, accessible at http://coel-sim.org. I minimized the number of required molecular species and reactions to make wet chemical implementation possible. I applied an automated mapping technique, Soloveichik's CRN-to-DNA-strand-displacement transformation, to the chemical linear perceptron and the manual signalling delay line and obtained their full DNA-strand-specified implementations. As an alternative DNA-based substrate, I also mapped these two models to deoxyribozyme-mediated cleavage reactions, reducing the size of the displacement variant to a third. Both DNA-based incarnations could directly serve as blueprints for wet biochemical implementations. Besides actual synthesis of my models and experimentation in a biochemical laboratory, the most promising future work is to employ so-called reservoir computing (RC), a novel machine learning method based on recurrent neural networks. The RC approach is relevant because, for time-series prediction, it is clearly superior to classical recurrent networks. It can also be implemented in various substrates, such as electrical circuits, physical systems (e.g., a colony of Escherichia coli), and water. RC's loose structural assumptions therefore suggest that it could be expressed in a chemical form as well. This could further enhance the expressivity and capabilities of chemically embedded learning.
My chemical learning systems may have applications in the areas of medical diagnosis and smart medication, e.g., concentration signal processing and monitoring, and the detection of harmful species, such as chemicals produced by cancer cells in a host (cancer miRNAs), or the detection of a severe event, defined as a linear or nonlinear temporal concentration pattern. My approach could replace hard-coded solutions and would make it possible to specify, train, and reuse chemical systems without redesigning them. With time-series integration, biochemical computers could keep a record of changing biological systems and act as diagnostic aids and tools in preventative and highly personalized medicine.
-
4403. [Article] Methods for Efficient Synthesis of Large Reversible Binary and Ternary Quantum Circuits and Applications of Linear Nearest Neighbor Model
- Title:
- Methods for Efficient Synthesis of Large Reversible Binary and Ternary Quantum Circuits and Applications of Linear Nearest Neighbor Model
- Author:
- Hawash, Maher Mofeid
- Year:
- 2013
This dissertation describes the development of automated synthesis algorithms that construct reversible quantum circuits for reversible functions with a large number of variables. Specifically, the research focuses on reversible, permutative, and fully specified binary and ternary specifications and on the applicability of the resulting circuits to the physical limitations of existing quantum technologies. Automated synthesis of arbitrary reversible specifications is an NP-hard, multiobjective optimization problem, where 1) the amount of time and computational resources required to synthesize the specification, 2) the number of primitive quantum gates in the resulting circuit (quantum cost), and 3) the number of ancillary qubits (variables added to hold intermediate calculations) are all minimized, while 4) the number of variables is maximized. Some of the existing algorithms in the literature ignored objective 2 by focusing on the synthesis of a single solution without the addition of any ancillary qubits, while others attempted to explore every possible solution in the search space in an effort to discover the optimal solution (i.e., they sacrificed objectives 1 and 4). Other algorithms resorted to adding a huge number of ancillary qubits (counter to objective 3) in an effort to minimize the number of primitive gates (objective 2). In this dissertation, I first introduce the MMDSN algorithm, which is capable of synthesizing binary specifications of up to 30 variables, adds no ancillary variables, produces better quantum cost (8-50% improvement) than algorithms that limit their search to a single solution, and does so in minimal time compared to algorithms that perform exhaustive search (seconds vs. hours). The MMDSN algorithm introduces an innovative method of using the Hasse diagram to construct candidate solutions that are guaranteed to be valid, and then selects the solution with the minimal quantum cost out of this subset.
I then introduce the Covered Set Partitions (CSP) algorithm, which expands the search space of valid candidate solutions and allows for exploring solutions outside the range of MMDSN. I show a method of subdividing the expansive search landscape into smaller partitions and demonstrate the benefit of focusing on partition sizes that are around half the number of variables (15% to 25% improvement over MMDSN for functions of fewer than 12 variables, and more than 1000% improvement for functions with 12 and 13 variables). For a function of n variables, the CSP algorithm theoretically requires n times more time to synthesize; however, by focusing on the middle k partitions, it discovers solutions outside those reachable by MMDSN, typically with lower quantum cost. I also show that using a Tabu search for selecting the next set of candidates from the CSP subset results in discovering solutions with even lower quantum costs (up to 10% improvement over CSP with random selection). In Chapters 9 and 10, I question the predominant methods of measuring quantum cost and their applicability to the physical implementation of quantum gates and circuits. I counter the prevailing literature by introducing a new standard for measuring the performance of quantum synthesis algorithms that enforces the Linear Nearest Neighbor Model (LNNM) constraint, which is imposed by today's leading implementations of quantum technology. In addition to enforcing physical constraints, the new LNNM quantum cost (LNNQC) allows a level comparison among all methods of synthesis, from methods that add a large number of ancillary variables to ones that add no additional variables. I show that, when LNNM is enforced, the quantum cost for methods that add a large number of ancillary qubits increases significantly (up to 1200%). I also extend the Hasse-based method to the ternary domain and demonstrate synthesis of specifications of up to 9 ternary variables (compared to the 3 ternary variables that existed in the literature).
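The intuition behind an LNN-constrained cost is that, on a linear chain of qubits, a two-qubit gate acting on non-adjacent qubits must first bring its operands next to each other with SWAP gates (and typically restore them afterward), and each SWAP adds cost. The sketch below is a generic illustration of that accounting under stated assumptions (a flat cost per SWAP and per gate, and a move-then-restore discipline); it is not the dissertation's exact LNNQC metric.

```python
# Illustrative LNN cost model: gates on a linear qubit chain.
def lnn_cost(gates, swap_cost=3, gate_cost=1):
    """gates: list of (control, target) qubit index pairs on a linear chain."""
    total = 0
    for control, target in gates:
        distance = abs(control - target)
        swaps = 2 * (distance - 1)   # move operand adjacent, then move it back
        total += gate_cost + swaps * swap_cost
    return total

adjacent = [(0, 1), (1, 2)]   # already nearest-neighbor: no SWAP overhead
distant = [(0, 3), (1, 4)]    # distance 3: 4 SWAPs each at cost 3
print(lnn_cost(adjacent), lnn_cost(distant))  # prints 2 26
```

This is why circuits with many ancillary qubits fare so much worse under LNNM: extra qubits stretch the chain, so typical gate distances, and hence SWAP overhead, grow with circuit width.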
I introduce the concept of ternary precedence order and its implications for the construction of the Hasse diagram and of valid candidate solutions. I also provide a case study comparing the performance of ternary logic synthesis of large functions using both a CUDA graphics processor with 1024 cores and an Intel i7 processor with 8 cores. In the process of exploring large ternary functions, I introduce to the literature eight families of ternary benchmark functions along with a multiple-valued file specification (the Extended Quantum Specification, XQS). I also introduce a new composite quantum gate, the multiple-valued Swivel gate, which swaps the information of qubits around a centrally located pivot point. In summary, my research objectives are as follows:
- Explore and create automated synthesis algorithms for reversible circuits, in both binary and ternary logic, for a large number of variables.
- Study the impact of enforcing the Linear Nearest Neighbor Model (LNNM) constraint for every interaction between qubits for reversible binary specifications.
- Advocate for a revised metric for measuring the cost of a quantum circuit in concordance with LNNM, where such a metric would, on the one hand, provide a way for balanced comparison between the various flavors of algorithms and, on the other hand, represent a realistic cost of a quantum circuit with respect to an ion-trap implementation.
- Establish an open source repository for sharing the results, software code, and publications with the scientific community.
With the dwindling expectations for a new lifeline for silicon-based technologies, quantum computation has the potential to become the future workhorse of computation. Similar to the automated CAD tools of classical logic, my work lays the foundation for creating automated tools for constructing quantum circuits from reversible specifications.
-
4404. [Article] Spectral Methods for Boolean and Multiple-Valued Input Logic Functions
- Title:
- Spectral Methods for Boolean and Multiple-Valued Input Logic Functions
- Author:
- Falkowski, Bogdan Jaroslaw
- Year:
- 1991
Spectral techniques in digital logic design have been known for more than thirty years. They have been used for Boolean function classification, disjoint decomposition, parallel and serial linear decomposition, spectral translation synthesis (extraction of linear pre- and post-filters), multiplexer synthesis, prime implicant extraction by spectral summation, threshold logic synthesis, estimation of logic complexity, testing, and state assignment. This dissertation resolves many important issues concerning the efficient application of spectral methods used in the computer-aided design of digital circuits. The main obstacles in these applications have been, until now, the memory requirements of computer systems and the lack of any way to calculate spectra directly from Boolean equations. By using the algorithms presented here, these obstacles have been overcome. Moreover, the methods presented in this dissertation can be regarded as representatives of a whole family of methods, and the approach presented can easily be adapted to other orthogonal transforms used in digital logic design. Algorithms are shown for the Adding, Arithmetic, and Reed-Muller transforms. However, the main focus of this dissertation is on the efficient computer calculation of Rademacher-Walsh spectra of Boolean functions, since this particular ordering of Walsh transforms is most frequently used in digital logic design. A theory has been developed to calculate the Rademacher-Walsh transform from a cube array specification of incompletely specified Boolean functions. The importance of representing Boolean functions as arrays of disjoint ON- and DC-cubes has been pointed out, and an efficient new algorithm to generate disjoint cubes from non-disjoint ones has been designed. The transform algorithm makes use of the properties of an array of disjoint cubes and allows the determination of the spectral coefficients in an independent way.
By such an approach, each spectral coefficient can be calculated separately, or all the coefficients can be calculated in parallel. These advantages are absent in the existing methods. The possibility of calculating only some coefficients is very important, since there are many spectral methods in digital logic design for which the values of only a few selected coefficients are needed. Most of the current methods used in the spectral domain deal only with completely specified Boolean functions. On the other hand, all of the algorithms introduced here are valid not only for completely specified Boolean functions but also for functions with don't cares. Don't-care minterms are simply represented in the form of disjoint cubes. The links between spectral and classical methods used for designing digital circuits are described. The real meaning of spectral coefficients from Walsh and other orthogonal spectra in classical logic terms is shown. The relations presented here can be used for the calculation of different transforms. The methods are based on direct manipulations of Karnaugh maps: the conversions start with Karnaugh maps and generate the spectral coefficients. The spectral representation of multiple-valued input binary functions is proposed here for the first time. Such a representation is composed of a vector of Walsh transforms, each defined for one pair of the input variables of the function. The new representation has the advantage of being real-valued, and thus having an easy interpretation. Since two types of codings of the values of binary functions are used, two different spectra are introduced. The meaning of each spectral coefficient in classical logic terms is discussed. The mathematical relationships between the number of true, false, and don't-care minterms and the spectral coefficients are stated. These relationships can be used to calculate the spectral coefficients directly from the graphical representations of binary functions.
Similarly to the spectral methods in classical logic design, the new spectral representation of binary functions can find applications in many problems of analysis, synthesis, and testing of circuits described by such functions. A new algorithm is shown that converts the disjoint cube representation of Boolean functions into fixed-polarity Generalized Reed-Muller Expansions (GRME). Since the known fast algorithm that generates the GRME, based on the factorization of the Reed-Muller transform matrix, always starts from the truth table (minterms) of a Boolean function, the described method has the advantage of smaller memory requirements. Moreover, for Boolean functions described by only a few disjoint cubes, the method is much more efficient than the fast algorithm. By investigating a family of elementary second-order matrices, new transforms of real vectors are introduced. When used for Boolean function transformations, these transforms are one-to-one mappings in a binary or ternary vector space. The concept of different polarities of the Arithmetic and Adding transforms has been introduced. New operations on matrices are introduced: horizontal, vertical, and vertical-horizontal joints (concatenations). All previously known transforms, and those introduced in this dissertation, can be characterized by two features: "ordering" and "polarity". When a transform exists for all possible polarities, it is said to be "generalized". For all of the transforms discussed, procedures are given for generalizing them and for defining different orderings. The meaning of each spectral coefficient for a given transform is also presented in terms of standard logic gates. There exist six commonly used orderings of Walsh transforms: Hadamard, Rademacher, Kaczmarz, Paley, Cal-Sal, and X. By investigating the ways in which these known orderings are generated, the author noticed that the same operations can be used to create some new orderings.
The generation of two new Walsh transforms in Gray code orderings from the straight binary code is shown. A recursive algorithm for the Gray-code-ordered Walsh transform is based on a new operator introduced in this dissertation under the name of the "bi-symmetrical pseudo Kronecker product". The recursive algorithm is the basis for the flow diagram of a constant-geometry fast Walsh transform in Gray code ordering. The algorithm is fast (N log2 N additions/subtractions), computer efficient, and is implemented
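The N log2 N addition/subtraction count comes from the butterfly structure shared by all fast Walsh transforms. The sketch below is the standard in-place fast Walsh-Hadamard transform in Hadamard ordering; the Rademacher-Walsh and Gray-code orderings discussed in the dissertation differ in how the output coefficients are ordered, not in this butterfly structure.

```python
# In-place fast Walsh-Hadamard transform (Hadamard ordering).
# For input length N = 2**n this uses exactly N*log2(N) additions/subtractions.
def fwht(a):
    """a: list of length 2**n; transformed in place and returned."""
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):          # one butterfly stage
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

# Spectrum of the truth vector [1, 1, 1, -1] (two-input AND in +/-1 coding)
print(fwht([1, 1, 1, -1]))  # prints [2, 2, 2, -2]
```

Because the transform is (up to a factor of N) its own inverse, applying `fwht` twice returns N times the original vector, a quick self-check on any implementation.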
-
4405. [Article] Employment of Crystallographic Image Processing Techniques to Scanning Probe Microscopy Images of Two-Dimensional Periodic Objects
- Title:
- Employment of Crystallographic Image Processing Techniques to Scanning Probe Microscopy Images of Two-Dimensional Periodic Objects
- Author:
- Moon, Bill
- Year:
- 2011
Thin film arrays of molecules or supramolecules are active subjects of investigation because of their potential value in electronics, chemical sensing, catalysis, and other areas. Scanning probe microscopes (SPMs), including scanning tunneling microscopes (STMs) and atomic force microscopes (AFMs) are commonly used for the characterization and metrology of thin film arrays. As opposed to transmission electron microscopy (TEM), SPMs have the advantage that they can often make observations of thin films in air or liquid, while TEM requires highly specialized techniques if the sample is to be in anything but vacuum. SPM is a surface imaging technique, while TEM typically images a 2D projection of a thin 3D sample. Additionally, variants of SPM can make observations of more than just topography; for instance, magnetic force microscopy measures nanoscale magnetic properties. Thin film arrays are typically two-dimensionally periodic. A perfect, infinite two-dimensionally periodic array is mathematically constrained to belong to one of only 17 possible 2D plane symmetry groups. Any real image is both finite and imperfect. Crystallographic Image Processing (CIP) is an algorithm that Fourier transforms a real image into a 2D array of complex numbers, the Fourier coefficients of the image intensity, and then uses the relationship between those coefficients to first ascertain the 2D plane symmetry group that the imperfect, finite image is most likely to possess, and then adjust those coefficients that are symmetry-related so as to perfect the symmetry. A Fourier synthesis of the symmetrized coefficients leads to a perfectly symmetric image in direct space (when accumulated rounding and calculation errors are ignored). The technique is, thus, an averaging technique over the direct space experimental data that were selected from the thin film array. The image must have periodicity in two dimensions in order for this technique to be applicable. 
CIP has been developed over the past 40 years by the electron crystallography community, which works with 2D projections of 3D samples. Any periodic sample, whether 2D or 3D, has an "ideal structure", which is the structure absent any crystal defects. The ideal structure can be considered one average unit cell, propagated by translation into the whole sample. The "real structure" is an actual sample containing vacancies, dislocations, and other defects. Typically, the goal of electron and other types of microscopy is examination of the real structure, as the ideal structure of a crystal is already known from X-ray crystallography. High-resolution transmission electron microscope image based electron crystallography, on the other hand, reveals the ideal crystal structure by crystallographic averaging. The ideal structure of a 2D thin film cannot easily be examined in a spatially selective fashion by grazing-incidence X-ray or low-energy electron diffraction based crystallography. SPMs straightforwardly observe thin films in direct space, but SPM accuracy is hampered by blunt or multiple tips and other unavoidable instrument errors. Especially since the film is often of a supramolecular system whose molecules are weakly bonded (via pi bonds, hydrogen bonds, etc.) both to the substrate and to each other, it is relatively easy for a molecule from the film to adhere to the scanning tip during the scan and become part of the tip during subsequent observation. If the thin film array has two-dimensional periodicity, CIP is a unique and effective tool both for image enhancement (determination of the ideal structure) and for the quantification of overall instrument error. In addition, if a sample of known 2D periodicity is scanned, CIP can return information about the contribution of the instrument itself to the image. In this thesis we show how the technique is applied to images of two-dimensionally periodic samples taken by SPMs.
To the best of our knowledge, this has never been done before. Since 2D periodic thin film arrays have an ideal structure that is mathematically constrained to belong to one of the 17 plane symmetry groups, we can use CIP to determine that group and use it for a particularly effective averaging algorithm. We demonstrate that the use of this averaging algorithm removes noise and random error from images more effectively than translational averaging, also known as "lattice averaging" or "Fourier filtering". We also demonstrate the ability to correct systematic errors caused by hysteresis in the scanning process. These results have the effect of obtaining the ideal structure of the sample, averaging out the defects crystallographically, by providing an average unit cell which, when translated, represents the ideal structure. In addition, if one has recorded a scanning probe image of a 2D periodic sample of known symmetry, we demonstrate that it is possible to use the Fourier coefficients of the image transform to solve the inverse problem and calculate the point spread function (PSF) of the instrument. Any real scanning probe instrument departs from the ideal PSF of a Dirac delta function, and CIP allows us to quantify this departure as far as point symmetries are concerned. The result is a deconvolution of the "effective tip", which includes any blunt or multiple tip effects, as well as the effects caused by adhesion of a sample molecule to the scanning tip, or scanning irregularities unrelated to the physical tip. We also demonstrate that the PSF, once known, can be used on a second image taken by the same instrument under approximately the same experimental conditions to remove errors introduced during that second imaging process. The preponderance of two-dimensionally periodic samples as subjects of SPM observation makes the application of CIP to SPM images a valuable technique to extract a maximum amount of information from these images. 
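The baseline that CIP improves upon, translational ("lattice") averaging, is easy to sketch: cut a periodic signal into unit cells, average them, and re-translate the averaged cell. The 1D toy below (synthetic motif and noise level are assumptions for illustration) shows only that translational step; CIP additionally symmetrizes the Fourier coefficients within the detected plane group, which is what makes it more effective than plain averaging.

```python
# Translational (lattice) averaging of a noisy periodic 1D signal.
import random

random.seed(1)
period = 8
motif = [0, 1, 4, 9, 4, 1, 0, 0]   # one "unit cell" of the ideal structure
# 20 repeats of the motif, each sample corrupted by Gaussian noise
signal = [m + random.gauss(0, 0.5) for _ in range(20) for m in motif]

# Cut into unit cells and average position-by-position
cells = [signal[i:i + period] for i in range(0, len(signal), period)]
avg_cell = [sum(col) / len(cells) for col in zip(*cells)]

# Deviation of the averaged cell from the ideal motif shrinks roughly
# as 1/sqrt(number of cells averaged)
err = max(abs(a - m) for a, m in zip(avg_cell, motif))
print(round(err, 3))
```

Translating `avg_cell` back across the field reproduces the "ideal structure"; real CIP does the analogous operation in 2D and, crucially, also enforces the plane-group symmetry relations among Fourier coefficients.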
The improved resolution of current SPMs creates images with more higher-order Fourier coefficients than earlier, "softer" images; these higher-order coefficients are especially amenable to CIP, which can then effectively magnify the resolution improvement delivered by better hardware. This improved resolution, combined with the current interest in supramolecular structures (which, although 3D, usually begin building on a 2D periodic surface), provides an opportunity for CIP to contribute significantly to SPM image processing.
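The inverse problem sketched in the abstract above has a simple Fourier-space skeleton: if the observed image is the ideal structure convolved with the instrument's PSF, the transfer function is the ratio of their Fourier coefficients, and it can then be divided out of a second image taken under the same conditions. The following is a hedged NumPy illustration (a plain inverse filter with made-up guard thresholds; the thesis's actual procedure may differ):

```python
import numpy as np

def estimate_transfer(observed, ideal, eps=1e-8):
    """Solve the inverse problem: with observed = ideal (*) psf,
    the transfer function is H = F(observed) / F(ideal), computed
    only where the ideal coefficients are safely nonzero."""
    Fo, Fi = np.fft.fft2(observed), np.fft.fft2(ideal)
    H = np.zeros_like(Fo)
    safe = np.abs(Fi) > eps
    H[safe] = Fo[safe] / Fi[safe]
    return H

def deconvolve(image, H, eps=1e-3):
    """Remove the instrument response H from a second image
    (simple inverse filter; a Wiener filter would be more robust
    to noise in practice)."""
    F = np.fft.fft2(image)
    out = np.zeros_like(F)
    ok = np.abs(H) > eps
    out[ok] = F[ok] / H[ok]
    return np.fft.ifft2(out).real
```

For a periodic sample, H is recovered only at the reciprocal-lattice points, which is exactly where a second image of a periodic sample carries its signal.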
-
4406. [Article] <i>Wolbachia-</i>Host Interactions and the Implications to Insect Conservation and Management
Parasitic reproductive endosymbionts are emerging as formidable threats to insect biodiversity. Wolbachia are prevalent maternally inherited intra-cellular bacteria found in >50% of arthropod species. ...Citation Citation
- Title:
- <i>Wolbachia-</i>Host Interactions and the Implications to Insect Conservation and Management
- Author:
- Truitt, Amy Michelle
- Year:
- 2017
Parasitic reproductive endosymbionts are emerging as formidable threats to insect biodiversity. Wolbachia are prevalent, maternally inherited intracellular bacteria found in >50% of arthropod species. These symbiotic bacteria interact with their hosts in diverse ways; most often they alter host reproduction, causing four conditions that all selectively favor infected females: feminization, male killing, parthenogenesis, and cytoplasmic incompatibility (CI). Furthermore, depending on strain type and host genetic background, Wolbachia are known to affect insect behavior, expand or shift host thermal tolerance ranges, and confer antiviral protection on their hosts. Because Wolbachia both reside in and are transmitted with host cell cytoplasm, mitochondria and other cytoplasmically inherited genetic elements become linked with the bacteria. Thus, by enhancing their own transmission, Wolbachia-induced phenotypes can lead to mitochondrial selective sweeps, which may have profound impacts on vulnerable and small insect populations. Elucidating the extent to which endosymbionts influence biological and ecological functions is pivotal to making management decisions regarding imperiled insect species. My dissertation investigates the biological and ecological impacts of host-endosymbiont interactions by examining Wolbachia infections in three different host systems. First, I used the federally threatened butterfly species Speyeria zerene hippolyta to determine whether the general reproductive success of local populations was affected by the introduction of CI-inducing Wolbachia-infected butterflies through implemented species recovery programs. Next, by characterizing the Wolbachia infections of parasitoids associated with the Eurema butterfly clade, I analyzed whether host-parasitoid interactions provide a path for interspecies horizontal transmission. 
Finally, I conducted a laboratory experiment using an isogenic Drosophila melanogaster line to determine whether Wolbachia influence host temperature preference. Together, my research examines how the individual-level effects of host-endosymbiont interactions can expand into populations, have broader impacts on insect communities, and potentially impede the conservation and management of insects in nature. In chapter one, I screened S. z. hippolyta samples from three extant populations for Wolbachia infection. To examine the impacts of Wolbachia on small populations, I analyzed and compared infected and uninfected S. z. hippolyta reproductive data and showed that, in a population composed of infected and uninfected S. z. hippolyta, uninfected butterflies had reduced reproductive success (GLMM z = -8.067, P < 0.0001). I then developed a single-population demographic theoretical model using these same reproductive data to simulate and analyze different potential dynamics of small populations resulting from population supplementation with uninfected, CI-Wolbachia-infected, or combined uninfected and infected butterflies. Analysis of model simulations revealed that supplementation with CI-inducing butterflies significantly suppressed host-population size (ANOVA F5,593 = 3349, P < 0.0001), most severely for releases of Wolbachia-infected individuals alone (Tukey's post-hoc test P < 0.0001). In addition, supplementation by multiple releases using a combination of 50 infected and 300 uninfected butterflies has a less severe suppression effect, reducing the population by 75.8%, but the reduction occurs 42.6% faster than with the single release of 50 Wolbachia-infected butterflies (Tukey's post-hoc test P < 0.0001). Parasitoid-host interactions have emerged as probable ecological relationships facilitating horizontal transmission of Wolbachia. In chapter two, I addressed horizontal transmission using Eurema butterflies and their associated parasitoids. 
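The mechanism by which CI-infected releases suppress a recipient population can be illustrated with a toy deterministic model. Everything below (complete CI, random mating, an arbitrary fecundity of 2 offspring per individual) is our illustrative assumption, not the dissertation's fitted demographic model:

```python
def ci_generation(infected, uninfected, fecundity=2.0):
    """One generation of a toy cytoplasmic-incompatibility model.
    Infected mothers transmit Wolbachia to all offspring; crosses
    of uninfected females with infected males yield no viable
    offspring (complete CI). Mating is random, so the chance that
    an uninfected female mates an infected male equals the
    infected frequency in the population."""
    total = infected + uninfected
    if total == 0:
        return 0.0, 0.0
    p_infected = infected / total
    new_infected = infected * fecundity
    new_uninfected = uninfected * fecundity * (1.0 - p_infected)
    return new_infected, new_uninfected

def simulate(infected, uninfected, generations=10, fecundity=2.0):
    """Iterate the toy model and return the final class sizes."""
    for _ in range(generations):
        infected, uninfected = ci_generation(infected, uninfected, fecundity)
    return infected, uninfected
```

In this caricature, adding 50 infected individuals to 300 uninfected ones immediately reduces viable offspring relative to an all-uninfected population of the same size, and the infection then sweeps toward fixation over subsequent generations, echoing the suppression and selective-sweep dynamics described above.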
From four locations in Northern Queensland, Australia, I collected a total of 404 Eurema hecabe butterfly larvae. Twenty-three parasitoids emerged from the larvae, of which 21 were Diptera and two were Hymenoptera. I amplified COI locus fragments from each parasitoid for BLAST query searches and found that 20 individual Diptera parasitoids matched to the genus Exorista and one to the genus Senometopia. One of the Hymenoptera parasitoids matched to the genus Microplitis and the other to the genus Cotesia. To characterize Wolbachia infections, I used Wolbachia Multilocus Sequence Typing (MLST) and discovered that all 20 Exorista parasitoids were infected with an identical Wolbachia strain (ST-41), which is the same strain infecting their Eurema hecabe butterfly hosts. Although further experiments are necessary to definitively determine that ST-41 Wolbachia are incorporated into the germline cells of the parasitoids, this is the first study to provide ecological evidence for inter-ordinal Wolbachia transmission between Lepidoptera and Diptera. Furthermore, this discovery exposes the risk of population augmentation programs that move insects, potentially facilitating the spread of Wolbachia between species within a community through the accidental introduction of new Wolbachia-infected parasitoids. Finally, both Wolbachia and their insect hosts are temperature-sensitive organisms. Wolbachia replication in hosts is positively temperature-dependent, while environmental variation can have profound effects on insects' immune function, fitness, and fecundity. In chapter three, I conducted a laboratory experiment using a thermal-gradient choice assay and an isogenic Drosophila melanogaster line with four different Wolbachia infection statuses (uninfected, wMel, wMelCS, and wMelPop) to assess whether a relationship exists between Wolbachia infection and host temperature preference. 
Results from my laboratory experiment revealed that Wolbachia-infected flies preferred cooler temperatures compared to uninfected flies. Moreover, D. melanogaster temperature preferences varied depending on the Wolbachia strain variant with which they were infected: flies infected with the wMel strain preferred temperatures 2°C cooler than uninfected flies, while flies infected with either the wMelCS or the wMelPop strain preferred temperatures 8°C cooler. Wolbachia-associated temperature preference variation within a species can lead to conspecifics occupying different microclimates, genetically adapting to different sets of specific environmental conditions, and may eventually result in ecological and reproductive isolation. While reproductive isolation is recognized as one of the first stages of speciation, in small populations of endangered and threatened species the inability of conspecifics to reproduce can drive species to extirpation or extinction. Collectively, the three chapters of my dissertation set a precedent for the future integration of host-endosymbiont research prior to implementing population supplementation or translocation programs for the conservation of imperiled insects.
-
Understanding the maintenance of sexual systems is of great interest to evolutionary and ecological biologists because plant systems are extremely varied. Plant sexual systems have evolved to include not ...
Citation Citation
- Title:
- Spatial Segregation of the Sexes in a Salt Marsh Grass Distichlis spicata (Poaceae)
- Author:
- Mercer, Charlene Ashley
- Year:
- 2010
Understanding the maintenance of sexual systems is of great interest to evolutionary and ecological biologists because plant systems are extremely varied. Plant sexual systems have evolved to include not only plants with both male and female reproduction occurring on one individual (i.e., monoecious and hermaphroditic) but also plants with male and female function on separate individuals (dioecious). The dioecious reproductive system can be used to test theories of niche differentiation, given that having separate sexes potentially allows the exploitation of a broader niche. This increase in the realized niche is due to the ability of the separate sexes to occupy different niches, which may lie in different physical habitats. Some dioecious plants have been shown to occur in areas biased to nearly 100% male or nearly 100% female, a pattern called spatial segregation of the sexes (SSS). Occupying a broader niche could increase fitness in some species if the separation allows one sex to gain access to resources that increase reproductive success and/or if the separation inhibits deleterious competition. These two mechanisms have previously been proposed for the evolution of SSS in dioecious plants. The first mechanism suggests that males and females have evolved to occupy different niches due to differences in reproduction (sexual specialization). The hypothesis for the sexual specialization mechanism is that females should have higher fitness in female-majority sites and males should have higher fitness in male-majority sites. The second mechanism states that males and females occupy different niches due to competition between the sexes (niche partitioning). The hypothesis for niche partitioning states that inter-sexual competition should decrease fitness more than intra-sexual competition. These mechanisms are not mutually exclusive. 
In our research we use the salt-marsh grass Distichlis spicata as our study species because this plant is dioecious and because molecular markers have been developed to determine the sex of juvenile plants. These molecular markers are important for testing the niche partitioning hypothesis for SSS in juveniles. Furthermore, previous work in California has shown that plants occur in areas nearly 100% female and nearly 100% male, i.e., spatial segregation of the sexes (SSS). The previous research also showed that female-majority sites were higher in soil phosphorus than male-majority sites. We conducted all research presented in the following chapters on Distichlis spicata in the Sand Lake estuary near Pacific City, Oregon and in the laboratory at Portland State University. In Chapter 1 we used field data to answer two questions: (1) Does Distichlis spicata exhibit SSS in Oregon, and (2) If SSS is occurring, do differences in plant form and function (sexual specialization) occur between reproductive female and male plants in female-majority and male-majority sites? We used a sex ratio survey and collected field data on reproductive males and females. Our results show that there are female-majority and male-majority areas and that SSS is occurring in the Sand Lake estuary. Results from our native plant data suggest that reproductive females perform better in female-majority sites than in male-majority sites, which could indicate that sexual specialization is occurring in females. We currently have a long-term field reciprocal transplant experiment in place to further address this hypothesis. In Chapter 2 we use field data to address the following questions: (1) Does site-specific soil nutrient content occur in August, when females have set seed? (2) Does sex-specific mycorrhizal colonization occur in reproductively mature plants? (3) Does sex-specific mycorrhizal colonization vary seasonally in natural populations? Inside the roots of D. spicata, a symbiotic relationship is formed between the plant and arbuscular mycorrhizal (AM) fungi. The AM-plant relationship has been shown to thrive in phosphorus-limited areas because the mycorrhizal fungus increases the plant's access to nutrients. We analyzed field soil nutrient content and mycorrhizal colonization in roots of native Distichlis spicata from male-majority and female-majority sites. The root colonization assay involved staining roots with trypan blue and viewing root sections under the microscope. Our results show that female-majority sites are higher in phosphorus and have higher AM colonization than male-majority sites in the field. In Chapter 3 we reciprocally transplanted D. spicata plants in the field to address the following questions: (1) Does niche partitioning occur in D. spicata, and (2) If niche partitioning is occurring, which plants are competing more? Our reciprocal transplant experiment used seeds grown in cones under intra-sexual, inter-sexual, or no competition, planted directly into the field and allowed to grow for 15 months. After the 15 months we measured survival, dry weight, and root/shoot ratio. The experiment was designed to determine the effects of competition (intra-sexual and inter-sexual) versus no competition (a single male or female plant) on survival, biomass, and root/shoot ratios. Our results show that niche partitioning is occurring and that plants in inter-sexual competition have significantly less biomass than intra-sexual competitors. In Chapter 4 we conduct a laboratory experiment to address the following questions: (1) Do plants show plasticity in their response to the root exudates of a competing plant with regard to the sexual phenotype of the competitor? (2) Do plants show plasticity in their response to the root exudates of a competing plant with respect to the relatedness of the competitor? We use sterile seeds grown in 24-well plates containing liquid media. 
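The sex-ratio survey used in Chapter 1 to detect SSS amounts to a goodness-of-fit test against a 1:1 sex ratio at each site. A minimal sketch, with invented counts, using the standard chi-square 5% critical value for one degree of freedom (3.841):

```python
def chi_square_1to1(males, females):
    """Chi-square goodness-of-fit statistic against an expected
    1:1 sex ratio at a site (one degree of freedom)."""
    expected = (males + females) / 2.0
    return ((males - expected) ** 2 + (females - expected) ** 2) / expected

def sex_ratio_biased(males, females, critical=3.841):
    """True if the observed ratio departs from 1:1 at alpha = 0.05."""
    return chi_square_1to1(males, females) > critical
```

With these invented counts, a site of 7 males and 38 females would register as female-majority, while 24 males and 26 females would not depart detectably from 1:1.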
For each pairing, we moved plants out of their own wells and into the competing plant's wells, so that each plant experienced only media in which the competing plant had grown; at no time did roots come into contact with one another. We measured primary root length, number of lateral roots, number of root hairs, root/shoot ratio, and total dry weight. We analyzed the study in two ways: by sexual competition type (inter-sexual, intra-sexual, none) and by plant relatedness (KIN, STRANGER, and OWN). The sexual-competition analysis found a greater effect of inter-sexual competition on root/shoot ratio and dry weight. The relatedness analysis found that kin-paired plants had a significantly greater number of lateral roots and a significantly longer primary root. The last chapter, Chapter 5, includes a summary of our conclusions. Our study found SSS occurring in the Sand Lake estuary in Oregon, with female-majority sites higher in phosphorus and higher in percent root colonization by arbuscular mycorrhizal fungi than male-majority sites. Under the sexual specialization hypothesis as a mechanism for SSS, we found that females had greater fitness in female-majority sites than in male-majority sites, suggesting that sexual specialization is occurring in reproductive females. We then tested the niche partitioning hypothesis for SSS, and we found consistent lab and field results suggesting that niche partitioning driven by inter-sexual competition explains why female and male D. spicata plants spatially segregate at the juvenile life-history stage. Furthermore, we found that plants sharing the same mother had a significantly greater number of lateral roots and a significantly longer primary root. These results suggest that KIN plants respond differently to one another than to plants not from the same mother (STRANGER) or when grown alone (OWN).