Search Results
27091. [Article] The effects of enhanced expression of the GluN2B (NR2B) subunit of the N-methyl-D-aspartate (NMDA) receptor on memory in aged animals
- Title:
- The effects of enhanced expression of the GluN2B (NR2B) subunit of the N-methyl-D-aspartate (NMDA) receptor on memory in aged animals
- Author:
- Brim, Brenna L.
As the aging population continues to grow worldwide, age-related complications are becoming more apparent. One of the first to emerge is age-associated memory impairment, which can make the elderly dependent on caregivers early on. The N-methyl-D-aspartate (NMDA) receptor is important to learning and memory and appears to be especially vulnerable to the process of aging. The density of NMDA receptors declines with age more than that of any other ionotropic glutamate receptor, as does the mRNA and protein expression of its subunits. In particular, the GluN2B subunit of the NMDA receptor shows the greatest age-related declines in expression across multiple brain regions, including the frontal lobe (encompassing the prefrontal and frontal cortices), caudate nucleus and hippocampus. These declines are strongly correlated with age-related declines in spatial memory. Specifically, age-related decreases in the protein expression of the GluN2B subunit within crude synaptosomes of the frontal cortex of C57BL/6 mice show a relationship to declines in performance in a long-term spatial memory task across age groups. However, within the population of aged mice there was a subpopulation in which higher expression of the GluN2B subunit within the synaptic membrane of the hippocampus was associated with poorer performance in the same task. Moreover, transgenic mice designed to express higher levels of the GluN2B subunit from birth possess superior memory, including spatial memory, from adulthood through middle age. Taken together, these data led to the hypothesis that increasing the expression of the GluN2B subunit within the aged brain could alleviate age-related declines in memory.
However, because higher expression of the GluN2B subunit within the hippocampus has been associated with poorer memory in aged animals, increasing its expression regionally was examined first. Since age-related decreases in the protein expression of the GluN2B subunit within the frontal cortex show a relationship to impaired memory function, the first study was designed to determine whether increasing GluN2B subunit expression in the frontal lobe would improve memory in aged mice. Mice received bilateral injections into the frontal lobe of either an adenoviral vector containing cDNA for the GluN2B subunit and enhanced Green Fluorescent Protein (eGFP) (GluN2B vector), an adenoviral vector containing only the cDNA for eGFP (control vector), or vehicle. Spatial memory, cognitive flexibility and associative memory were assessed using the Morris water maze. Aged mice with increased GluN2B subunit expression in the frontal lobe exhibited improved long-term spatial memory, comparable to young mice, on the second day of training. Moreover, a higher concentration of the specific GluN2B antagonist Ro 25-6981 was required to impair long-term spatial memory in aged mice with enhanced GluN2B subunit expression than in aged controls. The requirement for greater antagonism to block memory performance suggests that the number of GluN2B-containing receptors in the frontal lobe of these mice was enhanced and contributed to the improved memory. This study provides suggestive evidence that therapies enhancing GluN2B subunit expression within the aged brain could ameliorate age-related memory loss. Since higher expression of the GluN2B subunit within the hippocampus of aged mice is associated with poorer memory, the second study was designed to determine whether increasing GluN2B subunit expression in the hippocampus would improve or further impair memory in aged mice.
This would help to determine whether a therapy aimed at enhancing GluN2B subunit expression, or the function of GluN2B-containing receptors, throughout the aged brain could help ameliorate age-associated memory loss. Mice were injected bilaterally in the hippocampus with either the GluN2B vector, a control vector or vehicle. Spatial memory, cognitive flexibility and associative memory were assessed using the Morris water maze. Aged mice with increased GluN2B subunit expression in the hippocampus exhibited improved long-term spatial memory, comparable to young mice, early in training. However, there was a trend toward impaired memory later in the long-term spatial memory trials. Still, these data suggest that enhancing GluN2B subunit expression in the aged hippocampus could be more beneficial to memory than harmful. In addition, the results suggest that enhancing GluN2B subunit expression in different brain regions may improve memory at different phases of learning. Therefore, therapies that enhance GluN2B subunit expression throughout the aged brain could help ameliorate age-related memory loss. The first two studies demonstrated that enhancing the expression of the GluN2B subunit within either the frontal lobe or the hippocampus of the aged brain has the potential to reduce age-related memory declines. However, the increase was neither global nor specific to the synapse. Therefore, a third study was developed with the intent of producing a more global increase in GluN2B subunit expression that was localized to the synapse. Cyclin-dependent kinase 5 (Cdk5) enhances endocytosis of GluN2B subunit-containing NMDA receptors from the synapse. Previous research has shown that inhibiting Cdk5 increases the number of GluN2B subunits at the synapse and within the whole cell, and improves memory in young mice.
This study was designed to determine whether using antisense phosphorodiamidate morpholino oligomers (Morpholinos) to decrease the expression of Cdk5 protein within the brain would improve memory in aged mice. Morpholinos were conjugated to a cell-penetrating peptide, which enhances cellular uptake, and delivered bilaterally to the lateral ventricles of both young and aged mice via acute stereotaxic injection. Treatments consisted of equivalent volumes and concentrations of either vehicle, a control Morpholino or a Morpholino targeting the mRNA of Cdk5 (Cdk5 Morpholino). Memory was evaluated using the Morris water maze and a novel object recognition task. Aged mice treated with the Cdk5 Morpholino exhibited improved early acquisition and spatial bias in the long-term spatial memory trials, as well as improved performance overall, compared to control Morpholino-treated aged animals. However, aged mice treated with the Cdk5 Morpholino performed similarly to vehicle-treated aged animals. The presence of the peptide-conjugated Morpholinos within the brain may have worsened performance in the Morris water maze task, since control Morpholino-treated animals performed significantly worse than vehicle-treated animals. Consistent with this, there was significantly greater gliosis in peptide-conjugated Morpholino-treated brains than in vehicle-treated brains, suggesting the conjugate was neurotoxic at the concentration used. In contrast, young mice treated with the Cdk5 Morpholino showed impaired early acquisition and spatial bias but a trend toward improved later learning in the long-term spatial memory task compared to control Morpholino-treated animals. Treatment with the Cdk5 Morpholino had no significant effect on cognitive flexibility, associative memory or novel object recognition for young or aged animals.
Immunohistochemistry revealed increased GluN2B subunit expression within cells with characteristics of neurons and astroglia in regions of the frontal lobe, caudate nucleus and hippocampus of aged mice that received the Cdk5 Morpholino compared to control treatments. However, the increased GluN2B subunit expression appeared to be greater within the hippocampus. These results suggest that inhibiting the translation of Cdk5 using Morpholinos increased GluN2B subunit expression in both young and aged mice and may have contributed to the improved long-term spatial memory observed in aged mice, despite the Morpholino being administered at a presumably toxic concentration. An additional group of mice was used to determine a non-neurotoxic dosage of the peptide-conjugated Morpholino. However, future studies are needed to determine the efficacy of the Cdk5 Morpholino at this dosage. Taken together, the studies presented here suggest that increasing expression of the GluN2B subunit within the aged brain can ameliorate age-associated memory declines. In addition, cell-penetrating peptide-conjugated Morpholinos show promise as tools for genetic manipulation within the brain, and Cdk5 could prove to be a novel target for enhancing GluN2B subunit expression within the aged brain. Though future studies are needed, the studies presented here suggest that therapies that enhance GluN2B subunit expression within the aged brain have the potential to help ameliorate memory loss. However, since enhanced GluN2B subunit expression itself can increase the potential for excitotoxicity, an optimal dose of such a therapeutic would need to be determined.
27092. [Article] Implications of cougar prey selection and demography on population dynamics of elk in northeast Oregon
- Title:
- Implications of cougar prey selection and demography on population dynamics of elk in northeast Oregon
- Author:
- Clark, Darren A.
Mule deer (Odocoileus hemionus hemionus) and Rocky Mountain elk (Cervus canadensis nelsoni; hereafter elk) populations in northeast Oregon have declined in the past 10 to 20 years. Concurrent with these declines, cougar (Puma concolor) populations have apparently increased, leading to speculation that predation by cougars may be responsible for declining ungulate populations. However, empirical data on cougar diets, kill rates, and prey selection are lacking to support this speculation. Furthermore, the common assumption that cougar populations have increased in northeast Oregon may not be well founded because cougar populations in other areas within the Pacific Northwest region have declined in recent years. My primary research objectives were to (1) estimate kill rates and prey selection by cougars in northeast Oregon, (2) document causes of mortality and estimate survival rates for cougars, (3) estimate population growth rates of cougars in northeast Oregon and simulate the effects of hypothetical lethal control efforts on the cougar population, and (4) investigate the relative influence of top-down, bottom-up, and climatic factors for limiting population growth rates of elk in northeast Oregon. Results from my research will help guide cougar and elk management in northeast Oregon and provide a framework for assessing relative effects of top-down, bottom-up, and abiotic factors on population growth rates of ungulates in this and other areas. I implemented a 3-year study in northeast Oregon to investigate diets, kill rates, and prey selection of cougars in a multiple-prey system to better understand mechanisms by which cougars may influence ungulate populations. During my research, 25 adult cougars were captured and fitted with Global Positioning System (GPS) collars to identify kill sites. I monitored predation sequences of these cougars for 7,642 days and located the remains of 1,213 prey items killed by cougars. 
Cougars killed ungulates at an average rate of 1.03 per week (95% CI = 0.92 – 1.14); however, ungulate kill rates were variable and influenced by the season and demographic classification of cougars. Cougars killed ungulates 1.55 (95% CI = 1.47 – 1.66) times more frequently during summer (May-Oct) than during winter (Nov-Apr), but killed similar amounts of ungulate biomass (8.05 kg/day; 95% CI = 6.74 – 9.35) throughout the year. Cougars killed ungulates more frequently in summer because juvenile ungulates comprised most of the diet and were smaller on average than ungulate prey killed in winter. Female cougars with kittens killed more frequently (kills/day) than males or solitary females. After accounting for the additional biomass of kittens in cougar family groups, male cougars killed on average more biomass of ungulate prey per day than did females (R = 0.41, P < 0.001), and female cougars killed more biomass of prey per day as a function of the number and age of their kittens (R = 0.60, P < 0.001). Patterns of prey selection were influenced by season and demographic classification of cougars. Female cougars selected elk calves during summer and deer fawns during winter. In contrast, male cougars selected elk calves and yearling elk during summer and elk calves during winter. My results strongly supported the hypothesis that cougar predation is influenced by season, sex, and reproductive status of the cougar, and that these patterns in cougar predation may be generalizable among ecosystems. The observed selection for juvenile elk and deer suggested a possible mechanism by which cougars could negatively affect population growth rates of ungulates. I investigated survival and documented causes of mortality for radio-collared cougars at 3 study areas in Oregon during 1989 – 2011. Mortality due to hunter harvest was the most common cause of death for cougars in the Catherine Creek study area and the study area combining Wenaha, Sled Springs, and Mt.
Emily Wildlife Management Units (WSM study area) in northeast Oregon. In contrast, natural mortality was the most common cause of death for cougars in the Jackson Creek study area in southwest Oregon. Annual survival rates of adult males were lowest at Catherine Creek when it was legal to hunt cougars with dogs (Ŝ = 0.57), but increased following the prohibition of this hunting practice (Ŝ = 0.86). This latter survival rate was similar to those observed at Jackson Creek (Ŝ = 0.78) and WSM (Ŝ = 0.82). Regardless of whether hunting of cougars with dogs was permitted, annual survival rates of adult females were similar among study areas (Catherine Creek Ŝ = 0.86; WSM Ŝ = 0.85; Jackson Creek Ŝ = 0.85). I did not document an effect of age on cougar survival rates in the Catherine Creek study area, which I attributed to selective harvest of prime-aged, male cougars when it was legal to hunt cougars with dogs. In contrast, I observed an effect of age on annual survival in both the WSM and Jackson Creek study areas. These results indicate that sub-adult males had significantly lower survival rates than sub-adult females, but survival rates of males and females were similar by age 4 or 5 years. My results suggest that survival rates of cougars in areas where hunting cougars with dogs is illegal should be substantially higher than in areas where use of dogs is legal. I used estimates of cougar vital rates from empirical data collected in northeast Oregon to parameterize a Leslie projection matrix model to estimate deterministic and stochastic population growth rates of cougars in northeast Oregon when hunting cougars with dogs was legal (1989 - 1994) and illegal (2002 - 2011). A model cougar population in northeast Oregon that was hunted with dogs increased at a mean stochastic growth rate of 21% per year (λₛ = 1.21).
Similarly, I found that a model cougar population that was subjected to hunting without dogs increased at a rate of 17% per year (λₛ = 1.17). Given that hunting cougars with dogs typically results in increased harvest and reduced survival rates of cougars, it was unexpected that the cougar population subjected to hunting with dogs was increasing at a faster rate than one that was not hunted with dogs. However, cougar populations in Oregon were subjected to low harvest rates when hunting cougars with dogs was legal and harvest was male biased. This resulted in high survival rates of female cougars and correspondingly high population growth rates. The Oregon Cougar Management Plan allows the Oregon Department of Fish and Wildlife to administratively reduce cougar populations to benefit ungulate populations, reduce human-cougar conflicts, and limit livestock depredation. Consequently, I was interested in modeling the effects of a hypothetical lethal control effort on a local cougar population. Using empirically-derived vital rates and a deterministic Leslie matrix model, I found that the proportion of the cougar population that would need to be removed annually to achieve a 50% population reduction within 3 years was 28% assuming a closed population, and 48% assuming maximum immigration rates into the population. Using a stochastic Leslie matrix model, I also determined that the model cougar population would likely return to its pre-removal size in 6 years assuming a closed population, and 2 years assuming maximum immigration rates. These model results indicate that current management practices and harvest regulations, combined with short-term, intensive, and localized population reductions, are unlikely to negatively affect the short-term viability of cougar populations in northeast Oregon.
However, at this time, it is not known if intensive lethal control efforts funded by state agencies will be cost-effective (i.e., whether increased sales of tags to hunt deer and elk will offset the costs of control efforts). Further research is needed to investigate the cost-effectiveness of cougar control efforts in Oregon. I developed a Leslie matrix population model, parameterized with empirically-derived vital rates for elk in northeast Oregon, to investigate the relative influence on elk population growth rates of (1) survival and pregnancy, and (2) top-down, bottom-up, and climatic variables. I then estimated the effect of varying the strength of top-down factors on growth rates of elk populations. Growth rates of the model elk population were most sensitive to changes in adult female survival, but because of its inherent empirical variation, juvenile survival explained the overwhelming majority of the variation in model population growth rates (r² = 0.92). Harvest of female elk had a strong negative effect on model population growth rates of elk (r² = 0.63). An index of cougar density was inversely related to population growth rates of elk in my model (r² = 0.38). A delay in mean date of birth was associated with reduced juvenile survival, but this had a minimal effect on population growth rates in my model (r² = 0.06). Climatic variables, which were used as surrogates for nutritional condition of females, had minimal effects on population growth rates. Likewise, elk density had almost no effect on population growth rates (r² = 0.002). The results of my model provided a novel finding: cougars can be a strong limiting factor on elk populations. Wildlife managers should consider the potential top-down effects of cougars and other predators as a limiting factor on elk populations.
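Both dissertations above rely on Leslie (projection) matrix models. As a hedged illustration, the sketch below uses entirely hypothetical vital rates (not estimates from either study) to show how the asymptotic growth rate lambda emerges as the dominant eigenvalue of a projection matrix, and how perturbing each vital rate reveals the kind of sensitivity pattern described for elk.

```python
# Minimal Leslie-matrix sketch: compute the asymptotic growth rate (dominant
# eigenvalue) by power iteration, then perturb each vital rate to gauge
# sensitivity. All vital rates below are hypothetical placeholders.

def growth_rate(A, iters=500):
    """Dominant eigenvalue of a nonnegative projection matrix (power iteration)."""
    n = len(A)
    x, lam = [1.0] * n, 1.0
    for _ in range(iters):
        y = [sum(a * v for a, v in zip(row, x)) for row in A]
        lam = max(y)               # converges to the dominant eigenvalue
        x = [v / lam for v in y]   # renormalize the stage distribution
    return lam

def elk_matrix(f_adult, s_calf, s_yrl, s_adult):
    # Female-only, stage-structured (calf, yearling, adult) projection matrix;
    # adults both reproduce and persist in the adult stage.
    return [[0.0,    0.0,   f_adult],
            [s_calf, 0.0,   0.0],
            [0.0,    s_yrl, s_adult]]

base = {"f_adult": 0.35, "s_calf": 0.5, "s_yrl": 0.85, "s_adult": 0.90}
lam0 = growth_rate(elk_matrix(**base))
print(f"lambda = {lam0:.3f}")      # slightly above 1: a slowly growing population

# Finite-difference sensitivity of lambda to each vital rate.
eps = 0.01
sens = {}
for rate in base:
    bumped = dict(base, **{rate: base[rate] + eps})
    sens[rate] = (growth_rate(elk_matrix(**bumped)) - lam0) / eps
    print(f"d(lambda)/d({rate}) ~ {sens[rate]:.2f}")
```

With empirically estimated rates, the same machinery yields growth rates like the λₛ values reported above; stochastic versions typically redraw the vital rates each projected year from their estimated distributions.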
27093. [Article] Maximizing urban property values through open space conservation
- Title:
- Maximizing urban property values through open space conservation
- Author:
- Oakley, Winston
This study investigates the share of open space that maximizes total private property values in urban areas. Open space poses a number of trade-offs to city managers. On the one hand, previous studies have shown that certain kinds of open space can increase property values, which tends to increase tax revenues. On the other hand, open space typically requires substantial capital to establish and perpetual maintenance costs thereafter. This means that in order to keep the city budget balanced, financing open space requires either taking money away from other municipal services, which may be of greater value to residents than open space, or increasing the property tax rate. Both courses of action tend to reduce property values and, therefore, lower tax revenues. Open space also incurs an opportunity cost, in that land used for open space could be developed and taxed. While previous research has modeled these trade-offs, there is still more to be learned by empirically estimating the share of open space that maximizes property values in urban areas. According to the theoretical underpinnings of this study, one of the primary determinants of a city's value-maximizing, or "optimal", share of open space is the price elasticity of housing supply. Therefore, in order to estimate the optimal share of open space, this study estimates the price elasticity of housing supply for 349 U.S. Metropolitan Statistical Areas (MSAs). According to theory, the other factors that determine the optimal share of open space are the price elasticity of housing demand, the economies of scale in the provision of municipal services, the elasticity of property values with respect to municipal services, and the elasticity of housing demand with respect to open space. For these factors, an example value is established based on prior research and applied uniformly across all MSAs to estimate the optimal share.
Once the estimated and example values are determined, they are inserted into the equation that determines the optimal share of open space. The result provides an estimate of this optimal share of open space for 349 MSAs. On average, the model, combined with the estimated and assumed values, produces very low estimates for the optimal share of open space. The mean optimal share was 1.5%, and 95% of the estimates were 5% or less. For shares based on statistically significant supply elasticity estimates, optimal shares ranged from 0.2% to 27%. In order to gauge how far cities were from their estimated optimal share of open space, this study compared the estimated optimal share to observed shares of open space in 72 MSAs. When compared to observed shares of open space, the model (along with the estimated and assumed values) showed that 89% of the observed MSAs displayed "excesses" of open space, that is, an observed share of open space that exceeded their optimal share. The other 11% demonstrated a "shortage" of open space. The average deviation between optimal and actual share was an excess of 6.3 percentage points. Further analysis was conducted in order to account for the error inherent in the supply elasticity estimates (and, consequently, in the estimates of optimal share). Once this error was accounted for, only two cities still showed evidence of having open space shortages: Stockton, CA and Miami, FL. However, both cities were within a percentage point of their optimal share's confidence interval, making it possible that these cities are not experiencing meaningful shortages of open space. In contrast, 92% of the cities in the sample set showed statistically significant excesses of open space. Of these, five MSAs exceeded their confidence intervals by 15 or more percentage points: Austin, TX; Albuquerque, NM; Akron, OH; New Orleans, LA; and Anchorage, AK.
Because these cities' actual share of open space lies so far above their optimal share, it is very likely that decreasing open space area would increase property values. Two cities in the sample fell within their optimal share's confidence interval: Washington, DC and Virginia Beach, VA. Of all the cities in the sample set, these two are the most likely to be at their optimal share of open space, and therefore, are the most likely to decrease property values by making any changes to their share of open space. After this primary analysis, a sensitivity analysis was conducted in order to determine how assumptions regarding the variables impacted the estimated optimal share of open space. A reasonable range for each variable was established based on the literature, and this range was used to test each variable’s effect on the optimal share of open space. These tests revealed that the optimal share is not especially sensitive to the assumed values for the price elasticity of housing demand, nor to the economy of scale in the provision of municipal services. However, the elasticity of property values with respect to municipal services and the elasticity of housing demand with respect to open space both have large influences on the optimal share. The impact of all the other variables increased as supply elasticity decreased, and as the elasticity of housing demand with respect to open space increased. Because the elasticity of housing demand with respect to open space has such a disproportionate influence on the optimal share of open space, and because there is very little empirical evidence surrounding its value, further analysis was done to investigate this variable. By assuming that the 72 MSAs for which there is an observed share of open space are at their optimal share, in conjunction with the estimated and assumed values for the other variables, one can estimate the implied value for the elasticity of housing demand with respect to open space. 
Using this method, this study found that the average implied elasticity was 0.57. While this result could indicate that open space has a higher-than-assumed effect on housing demand, evidence from the literature suggests this value is too high. It is more likely that this result provides further evidence that the model is indicating actual shares of open space are higher than optimal. In the final portion of the sensitivity analysis, cities' observed shares of open space were again assumed to be at the optimal level, while the other variables were set to the limits of their reasonable ranges so as to make the implied elasticity as high or as low as possible. This analysis provided further evidence of discrepancies between actual and optimal shares of open space. While the evidence for open space shortages was fairly slim, the analysis reinforced evidence that some cities have excess open space. Those that presented the highest implied elasticities (and therefore show the strongest evidence of open space excess) are Austin, TX; Akron, OH; and Greensboro, NC. By comparing optimal shares of open space to observed shares of open space, the results show that the majority of cities could likely increase property values by decreasing their share of open space. This study also sheds new light on the relationship between housing demand and open space. By defining a reasonable value and range for the elasticity of housing demand with respect to open space, this study adds to the scarce information on this variable. While the results of this study indicate that many urban areas in the U.S. have larger shares of open space than would maximize property values, it is important to emphasize that the value-maximizing share of open space is not the socially optimal share. Open space provides a number of other social benefits that are not capitalized into property values and are therefore not considered in this study. Environmental benefits are one important example.
Further research is needed to determine the socially optimal amount of open space that maximizes social welfare. The appendices of this study list both the estimates of housing supply elasticity and the estimates of value-maximizing share for the 349 MSAs in the sample set. For the managers of cities that were included in the study, these figures can provide valuable information to help them better understand the implications of open space provision. The housing supply elasticities serve to better illuminate housing markets in their area. The optimal share estimates allow managers to better understand the relationship between property values and open space. For the managers of cities not included in this study, the methods presented here offer a relatively simple way to calculate their own housing supply elasticity and optimal share. With data on housing prices, housing construction, and population, interested parties can estimate the supply elasticity in their cities. Using this estimate in combination with the values this study assumed for the other variables, they can estimate their own value-maximizing share of open space. By comparing this figure to their present share of open space, city managers can gain a better understanding of the effect open space has on the property values within their city.
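The abstract closes by noting that, with data on housing prices, construction, and population, a city manager can estimate their own housing supply elasticity. As a hedged toy illustration (not the study's actual specification, and with invented data), the sketch below regresses log growth in the housing stock on log growth in prices, so the OLS slope has a direct elasticity interpretation.

```python
# Toy supply-elasticity estimate: OLS slope of log housing-stock growth on
# log price growth. Data below are invented for illustration only.
import math

def ols_slope(x, y):
    """Slope of the least-squares line through the points (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Hypothetical annual observations for one MSA: a house price index and a
# count of housing units, both indexed to the first year.
prices = [100, 104, 110, 118, 125, 131]
units  = [100, 101.5, 103.8, 106.9, 109.6, 111.9]

dlog_p = [math.log(b / a) for a, b in zip(prices, prices[1:])]
dlog_q = [math.log(b / a) for a, b in zip(units, units[1:])]

elasticity = ols_slope(dlog_p, dlog_q)
print(round(elasticity, 2))  # %-change in units per 1% change in price
```

The study's own estimates would come from a more careful econometric specification (e.g. with population controls); this toy only shows why a slope in log differences is an elasticity.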
27094. [Article] Cultural controls for suppressing Phytophthora cinnamomi root rot of blueberry
- Title:
- Cultural controls for suppressing Phytophthora cinnamomi root rot of blueberry
- Author:
- Yeo, John R.
Phytophthora cinnamomi is a soilborne pathogen that causes root rot disease of highbush blueberry (Vaccinium corymbosum L.). When new installations of susceptible blueberry cultivars are infected with P. cinnamomi, plants often fail to grow significant new tissue, greatly reducing yields over the life of the planting. Chemical fungicides, including mefenoxam and phosphonate compounds, are available for disease suppression. However, these tools are not available to organically certified growers. Initially, this research focused on non-chemical control strategies for blueberry root rot disease, including cultivar selection and soil amendments (gypsum and a variety of organic materials), evaluated via a series of greenhouse trials. Based on the findings of the greenhouse experiments, a two-year field trial was conducted to evaluate a combination of cultural factors. Pre-plant gypsum incorporation into soil was evaluated as a control strategy in the field experiment in combination with other cultural practices hypothesized to affect disease, including mulch type (geotextile weed mat or sawdust) and drip irrigation line placement. Eighteen highbush blueberry cultivars and advanced breeding selections were evaluated for susceptibility or resistance in three greenhouse experiments. Cultivars varied widely in susceptibility, with 'Duke,' 'Draper,' 'Bluetta,' 'Blue Ribbon,' 'Cargo,' 'Last Call,' 'Top Shelf,' and 'Ventura' exhibiting high levels of susceptibility to the disease. More resistant cultivars included 'Legacy,' 'Liberty,' 'Aurora,' 'Overtime,' 'Reka,' and 'Clockwork'. By selecting cultivars with superior resistance, growers may avoid yield losses associated with the disease, enhancing production profitability. Although choosing a disease-resistant cultivar is likely the most effective control strategy for blueberry, other cultural disease control practices are needed when susceptible cultivars are grown to fill market demands.
Fungicides can control disease in conventional plantings, but cannot be applied within certified organic production systems. Gypsum and organic amendments sometimes provide suppression of Phytophthora root rot in crops other than blueberry. Three greenhouse trials were conducted to evaluate organic soil amendments (peat, sawdust, dairy solids compost, yard debris compost, and composted municipal biosolids blended with Douglas-fir bark) incorporated at 20% (v/v) into soil, and gypsum (CaSO₄) incorporated at 5% (v/v), for disease suppression in factorial combination. Trials were conducted with a disease-susceptible cultivar ('Draper'), and soil moisture was maintained near saturation, favoring disease development. Organic amendments did not suppress disease in any of the experiments. Gypsum was effective in disease suppression in one of the three experiments. Gypsum provides a source of soluble calcium, and previously reported studies have demonstrated a mechanism for calcium-mediated suppression of P. cinnamomi. An additional greenhouse study was conducted to determine the relationship between gypsum application rate, soluble calcium in soil solution, and disease suppression. Soluble Ca in soil solution reached a plateau concentration of ~454 mg Ca L⁻¹ at gypsum application rates equal to or above 16 meq gypsum 100 g⁻¹ soil. Higher gypsum application rates did not increase soil solution Ca or provide additional disease suppression. Rates of gypsum required for disease suppression increase soil electrical conductivity (EC) beyond current recommended salinity guidelines for blueberries (> 2.0 mS/cm). The effects of salinity on plant growth were evaluated in a six-month greenhouse experiment, using gypsum (CaSO₄) or potassium sulfate (K₂SO₄) salts to increase EC. Treatment EC levels ranged from 0.3 to 2.6 mS/cm, in ten increments. The maximum EC value chosen for this experiment approximates the maximum value for gypsum solubility in soil solution.
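As a quick arithmetic companion to the gypsum threshold reported above (16 meq per 100 g soil), the sketch below converts that rate to mass units using standard molar masses. The conversion itself is standard chemistry; the framing is illustrative rather than taken from the dissertation.

```python
# Convert the reported gypsum threshold (16 meq per 100 g soil) to g/kg,
# using standard molar masses. Gypsum is CaSO4·2H2O; Ca2+ carries two
# charge equivalents per mole.

M_GYPSUM = 172.17     # g/mol, CaSO4·2H2O
M_CA = 40.08          # g/mol, calcium
EQ_PER_MOL = 2        # equivalents per mole of gypsum (divalent Ca)

eq_weight = M_GYPSUM / EQ_PER_MOL          # g gypsum per equivalent (mg per meq)
rate_meq = 16                              # meq gypsum per 100 g soil

g_per_kg = rate_meq * (eq_weight / 1000) * 10     # scale 100 g soil -> 1 kg
ca_g_per_kg = rate_meq * (M_CA / EQ_PER_MOL / 1000) * 10  # Ca delivered

print(f"{g_per_kg:.1f} g gypsum/kg soil")   # ≈ 13.8 g/kg
print(f"{ca_g_per_kg:.2f} g Ca/kg soil")    # ≈ 3.21 g/kg
```

So the disease-suppressive threshold corresponds to roughly 1.4% gypsum by soil mass, which helps explain why effective rates push EC toward the salinity guideline mentioned above.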
Aboveground plant biomass declined more rapidly when EC was supplied by potassium sulfate than when it was supplied by gypsum. Root biomass declined when EC was supplied by potassium sulfate, but was not affected by gypsum rate. Leaf cation concentrations responded strongly to increasing rates of potassium sulfate application: leaf K increased dramatically, accompanied by declines in leaf Ca and Mg concentrations. In contrast, leaf cation concentrations were much more stable when EC was adjusted by gypsum addition. Under the conditions of this greenhouse experiment, the increase in EC accompanying gypsum application had only minor effects on plant growth and nutrient uptake, suggesting that gypsum application is a viable option for trial in the field. A two-year field trial was conducted to evaluate cultural practices for efficacy in suppressing P. cinnamomi root rot disease. Pre-plant gypsum incorporation into soil was evaluated as a control strategy in combination with other cultural practices that were hypothesized to affect disease: mulch type (geotextile weed mat vs. sawdust) and drip irrigation line placement. Disease suppression over two growing seasons was evaluated with the highly susceptible cultivar 'Draper' grown on a site with clay loam soil. P. cinnamomi inoculum was mixed into soil before planting. The experimental design was a 2x2x2 factorial with 2 mulch types (geotextile weed mat or Douglas-fir sawdust), 2 drip line placements (narrow or wide placement relative to the plant row), and 2 gypsum rates (without and with gypsum). Gypsum was incorporated into planting beds in a 30-cm band at an application rate of 22,420 kg ha⁻¹ in-band, equivalent to 2,242 kg ha⁻¹ on a whole-field basis. Drip irrigation treatments consisted of two drip lines placed either adjacent to the plant crown or 20 cm on either side of the crown (wide placement). A fungicide treatment was included to provide an assessment of the efficacy of cultural disease control vs. a chemical alternative. 
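The 10:1 ratio between the in-band rate (22,420 kg ha⁻¹) and the whole-field-equivalent rate (2,242 kg ha⁻¹) follows directly from the fraction of field area the treated band occupies. A minimal sketch of that conversion; the 3.0-m row spacing used in the example is an assumption for illustration, not a value stated in the abstract:

```python
# Sketch: whole-field-equivalent rate for a banded soil amendment.
# The treated band's share of field area is band width / row spacing;
# the 10:1 in-band to whole-field ratio reported above implies the band
# occupied one tenth of the area (e.g. a 30-cm band on assumed 3.0-m
# row centers -- the row spacing is illustrative, not from the study).

def whole_field_rate(in_band_rate_kg_ha: float,
                     band_width_m: float,
                     row_spacing_m: float) -> float:
    """Whole-field-equivalent rate (kg/ha) for a banded application."""
    band_fraction = band_width_m / row_spacing_m
    return in_band_rate_kg_ha * band_fraction

rate = whole_field_rate(22_420, 0.30, 3.0)   # 2,242 kg/ha, as reported
```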
The fungicide treatment was mulched with sawdust, irrigated with drip lines adjacent to plants, and did not receive gypsum. Mulch type had no significant effect on plant biomass after two years, although plants grown under sawdust had slightly higher biomass and less root infection. Soluble Ca, as measured by mini-lysimeters placed in the rootzone, was increased by gypsum application, especially with wide placement of drip irrigation lines. The disease-suppressive effect of gypsum amendment depended on irrigation line placement. Soluble Ca movement in soil was associated with water movement away from drip emitters. When drip irrigation lines were placed adjacent to the plant crown, soluble Ca was moved away from the plant with the wetting front. With wide drip line placement, soluble Ca was moved toward the roots. Plants grown with wide drip line placement and gypsum addition had the lowest root infection incidence and the highest plant biomass, likely as the result of more soluble Ca in the rootzone. Despite significant increases in plant biomass with gypsum and widely placed irrigation lines, plants treated with conventional fungicide had approximately twice as much biomass after two growing seasons. An integrated control program is required for cultural suppression of blueberry root rot disease. Organic production using highly susceptible cultivars in the presence of P. cinnamomi is difficult and may not produce yields equivalent to conventional production. Improved plant performance in the presence of P. cinnamomi was observed in these trials using cultivar resistance, gypsum, and widely spaced drip irrigation lines. Other cultural practices, such as careful irrigation scheduling and appropriate rate and timing of N fertilizer application, are also important for disease suppression. In the future, greater P. cinnamomi disease suppression should be possible by using the disease-suppressive cultural practices identified in this research in combination with a disease-resistant cultivar.
-
- Title:
- Range-use estimation and encounter probability for juvenile Steller sea lions (Eumetopias jubatus) in the Prince William Sound-Kenai Fjords region of Alaska
- Author:
- Meck, Stephen R.
Range, areas of concentrated activity, and dispersal characteristics for juvenile Steller sea lions (Eumetopias jubatus) in the endangered western population (west of 144°W in the Gulf of Alaska) are poorly understood. This study quantified space use by analyzing post-release telemetric tracking data from satellite transmitters externally attached to n = 65 juvenile (12-25 months; 72.5 to 197.6 kg) Steller sea lions (SSLs) captured in Prince William Sound (60°38'N, 147°8'W) or Resurrection Bay (60°2'N, 149°22'W), Alaska, from 2003-2011. The analysis divided the sample population into 3 separate groupings to quantify differences in distribution and movement: sex, the season of capture, and release type (free-ranging animals, which were released immediately at the site of capture, and transient juveniles, which were kept in captivity for up to 12 weeks as part of a larger ongoing research program). Range use was first estimated using the minimum convex polygon (MCP) approach, followed by a probabilistic kernel density estimation (KDE) to evaluate both individual and group utilization distributions (UDs). The LCV method was chosen as the smoothing algorithm for the KDE analysis, as it provided biologically meaningful results pertaining to areas of concentrated activity (generally, haulout locations). The average distance traveled by study juveniles was 2,131 ± 424 km. The animals' mass at release (F[subscript 1, 63] = 1.17, p = 0.28) and age (F[subscript 1, 63] = 0.033, p = 0.86) were not significant predictors of travel distance. Initial MCP results indicated the total area encompassed by all study SSLs was 92,017 km², excluding land mass. This area was heavily influenced by the only individual that crossed the 144°W meridian, the dividing line between the two distinct population segments. Without this individual, the remainder of the population (n = 64) fell within an area of 58,898 km². 
The MCP area was highly variable, with a geometric average of 1,623.6 km². Only the groups differentiated by season displayed any significant difference in area size, with the Spring/Summer (SS) group's MCP area (Mdn = 869.7 km²) being significantly smaller than that of the Fall/Winter (FW) group (Mdn = 3,202.2 km²), U = 330, p = 0.012, r = -0.31. This result was not related to the length of time the tag transmitted (H(2) = 49.65, p = 0.527), nor to the number of location fixes (H(2) = 62.77, p = 0.449). The KDE UD was less variable, with 50% of the population within a range of 324-1,387 km² (mean = 690.6 km²). There were no significant differences in area use associated with sex or release type (seasonally adjusted U = 124, p = 0.205, r = -0.16 and U = 87, p = 0.285, r = -0.13, respectively). However, there were significant differences in seasonal area use: U = 328, p = 0.011, r = -0.31. There was no relationship between the UD area and the amount of time the tag remained deployed (H(2) = 45.30, p = 0.698). The kernel home range (defined as 95% of space use) represented about 52.1% of the MCP range use, with areas designated as "core" (areas where the sea lions spent fully 50% of their time) making up only about 6.27% of the entire MCP range and about 11.8% of the entire kernel home range. Area use was relatively limited: at the population level, there were a total of 6 core areas, which together comprised 479 km². Core areas spanned a distance of less than 200 km from the most western point at the Chiswell Islands (59°35'N, 149°36'W) to the most eastern point at Glacier Island (60°54'N, 147°6'W). The observed differences in area use between seasons suggest a disparity in how juvenile SSLs utilize space and distribute themselves over the course of the year. Given their age, this variation is less likely to reflect reproductive considerations and may instead reflect localized depletion of prey near preferred haulout sites and/or changes in predation risk. 
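The MCP estimate described above is simply the area of the convex hull of an animal's location fixes. A minimal, stdlib-only sketch (Andrew's monotone chain for the hull, shoelace formula for the area); it assumes projected planar coordinates (e.g. km east/north), not raw latitude/longitude, and the function names are ours:

```python
# Sketch: minimum convex polygon (MCP) range-use estimate.
# Assumes location fixes are already projected to planar coordinates.

def convex_hull(points):
    """Hull vertices in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):   # z of (a-o) x (b-o); >0 means left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Shoelace area of the convex hull of the location fixes."""
    hull = convex_hull(points)
    return 0.5 * abs(sum(x0*y1 - x1*y0
                         for (x0, y0), (x1, y1)
                         in zip(hull, hull[1:] + hull[:1])))

# Four fixes at the corners of a 10 km x 10 km square, one inside:
fixes = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
area = mcp_area(fixes)   # 100.0 km^2; the interior fix does not count
```

In practice the land-mass exclusion reported in the abstract would be a further clipping step (e.g. subtracting coastline polygons), which this sketch does not attempt.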
Currently, management of the endangered western and threatened eastern population segments of the Steller sea lion is largely based on population trends derived from aerial survey counts and terrestrial-based count data. The probability that individuals are detected during aerial surveys, and the resulting correction factors needed to calculate overall population size from counts of hauled-out animals, remain unknown. A kernel density estimation (KDE) analysis was performed to delineate boundaries around surveyed haulout locations within Prince William Sound-Kenai Fjords (PWS-KF). To closely approximate the time in which population abundance counts are conducted, only sea lions tracked during the spring/summer (SS) months (May 10-August 10) were chosen (n = 35). A multistate model was constructed treating the satellite location data, if it fell within a specified spatiotemporal context, as a re-encounter within a mark-recapture framework. Information to determine a dry state was obtained from the tag's time-at-depth (TAD) histograms. To generate an overall terrestrial detection probability, three conditions had to be met: 1) the animal must have been within a KDE-derived core area that coincided with a surveyed haulout site, 2) it must have been dry, and 3) it must have provided at least one position during the summer months, from roughly 11:00 AM-5:00 PM AKDT. A total of 10 transition states were selected from the data. Nine states corresponded to specific surveyed land locations, with the 10th, an "at-sea" location (> 3 km from land), included as a proxy for foraging behavior. An MLogit constraint was used to aid interpretation of the multi-modal likelihood surface, and a systematic model selection process was employed as outlined by Lebreton & Pradel (2002). At the individual level, for juveniles released in the spring/summer months (n = 35), 85.3% of the surveyed haulouts within PWS-KF encompassed KDE-derived core areas (defined as 50% of space use). 
There was no difference between sexes in the number of surveyed haulouts encompassed by core areas (F[subscript 1, 33] << 0.001, p = 0.98). Of the animals held captive for up to 12 weeks, 33.3% returned to the original capture site. The majority of encounter probabilities (p) fell between 0.42 and 0.78 for the selected haulouts within PWS, the exceptions being Grotto Island and Aialik Cape, which were lower (between 0.00 and 0.17). The at-sea (foraging) encounter probability was 0.66 (± 1 S.E. range 0.55-0.77). Most dry-state probabilities fell between 0.08 and 0.38, with Glacier Island higher at 0.52 (± 1 S.E. range 0.49-0.55). The combined detection probability for hauled-out animals (the product of the at-haulout and dry-state probabilities) fell mostly between 0.08 and 0.28, with a distinct group (which included Grotto Island, Aialik Cape, and Procession Rocks) having values that averaged 0.01, with a cumulative range of ≈ 0.00-0.02 (± 1 S.E.). Due to gaps present within the mark-recapture data, it was not possible to run a goodness-of-fit test to validate model fit; the reported standard errors therefore provide only an approximation of the uncertainties, and actual errors probably slightly exceed them. Overall, the combined detection probabilities represent an effort to combine satellite location and wet-dry state telemetry with a kernel density analysis to quantify the terrestrial detection probability of a marine mammal within a multistate modeling framework, with the ultimate goal of developing a correction factor to account for haulout behavior at each of the surveyed locations included in the study.
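The combined detection probability described above is a simple product of two component probabilities. A minimal sketch of that composition, assuming (as the abstract states) that being at the haulout and being dry are combined multiplicatively; the numeric inputs below are illustrative values, not estimates from the study:

```python
# Sketch: combined detection probability for a hauled-out animal,
# taken (per the abstract) as the product of the probability of being
# at the haulout and the probability of being dry. Values below are
# illustrative only, not estimates from the study.

def combined_detection(p_at_haulout: float, p_dry: float) -> float:
    """P(detectable during a survey) under the multiplicative model."""
    return p_at_haulout * p_dry

p = combined_detection(0.60, 0.30)   # 0.18
```

A hypothetical site with a 0.60 encounter probability and a 0.30 dry-state probability would thus have a combined detection probability of 0.18, inside the 0.08-0.28 band reported for most haulouts.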
-
- Title:
- Nutritional strategies to improve the reproductive performance of beef females
- Author:
- Cappellozza, Bruno Ieda
In the first set of studies, 2 experiments evaluated the influence of supplement composition on ruminal forage disappearance, performance, and physiological responses of Angus × Hereford cattle consuming a low-quality, cool-season forage (8.7 % CP and 57 % TDN). In Exp. 1, 6 rumen-fistulated steers housed in individual pens were assigned to an incomplete 3 x 2 Latin square design containing 2 periods of 11 d each and the following treatments: 1) supplementation with soybean meal (PROT), 2) supplementation with a mixture of cracked corn, soybean meal, and urea (68:22:10 ratio, DM basis; ENER), or 3) no supplementation (CON). Steers were offered meadow foxtail (Alopecurus pratensis L.) hay for ad libitum consumption. Treatments were provided daily at 0.50 and 0.54 % of shrunk BW/steer for PROT and ENER, respectively, to ensure that PROT and ENER intakes were isocaloric and isonitrogenous. No treatment effects were detected on rumen disappearance parameters of forage DM (P ≥ 0.33) and NDF (P ≥ 0.66). In Exp. 2, 35 pregnant heifers were ranked by initial BW on d -7 of the study, allocated into 12 feedlot pens (4 pens/treatment), and assigned to the same treatments and forage intake regimen as in Exp. 1 for 19 d. Treatments were fed once daily at 1.77 and 1.92 kg of DM/heifer for PROT and ENER, respectively, to achieve the same treatment intake as % of initial BW used in Exp. 1 (0.50 and 0.54 % for PROT and ENER, respectively). No treatment effects (P = 0.17) were detected on forage DMI. Total DMI was greater (P < 0.01) for PROT and ENER compared with CON, and similar between PROT and ENER (P = 0.36). Accordingly, ADG was greater (P = 0.01) for PROT compared with CON, tended to be greater for ENER compared with CON (P = 0.08), and was similar between ENER and PROT (P = 0.28). 
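The supplement allowances in Exp. 2 were scaled from the body-weight percentages used in Exp. 1, so the two treatments can be cross-checked arithmetically: allowance (kg DM) = BW (kg) × rate (% of BW) / 100. A minimal sketch of that check; the helper names are ours:

```python
# Sketch: supplement allowance as a percentage of body weight, and the
# initial BW implied by the Exp. 2 allowances (1.77 and 1.92 kg DM at
# 0.50 and 0.54 % of BW). Both treatments should imply the same BW.

def daily_allowance(bw_kg: float, pct_bw: float) -> float:
    """Daily supplement DM (kg) at a given % of body weight."""
    return bw_kg * pct_bw / 100.0

def implied_bw(allowance_kg: float, pct_bw: float) -> float:
    """Initial BW (kg) implied by an allowance fed at % of BW."""
    return allowance_kg * 100.0 / pct_bw

bw_prot = implied_bw(1.77, 0.50)   # 354 kg
bw_ener = implied_bw(1.92, 0.54)   # ~356 kg, consistent with bw_prot
```

The two back-solved values agree to within rounding, confirming that the PROT and ENER allowances were scaled from one common initial body weight.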
Heifers receiving PROT and ENER had greater mean concentrations of plasma glucose (P = 0.03), insulin (P ≤ 0.09), IGF-I (P ≤ 0.04), and progesterone (P₄; P = 0.01) compared to CON, whereas ENER and PROT had similar concentrations of these variables (P ≥ 0.15). A treatment × hour interaction was detected (P < 0.01) for plasma urea N (PUN), given that PUN concentrations increased after supplementation for ENER and PROT (time effect, P < 0.01), but did not change for CON (time effect; P = 0.62). In conclusion, beef cattle consuming low-quality cool-season forages had similar ruminal forage disappearance and intake, performance, and physiological status if offered supplements based on soybean meal or corn at approximately 0.5 % of BW (DM basis). The following experiment evaluated the influence of supplement composition on performance, reproductive, and metabolic responses of Angus × Hereford heifers consuming a low-quality cool-season forage (8.7 % CP and 57 % TDN). Sixty heifers (initial age = 226 ± 3 d) were allocated into 15 drylot pens (4 heifers/pen; 5 pens/treatment), and assigned to the same treatments as reported above. Heifers were offered meadow foxtail (Alopecurus pratensis L.) hay for ad libitum consumption during the experiment (d -10 to 160). Beginning on d 0, PROT and ENER were provided daily at a rate of 1.30 and 1.40 kg of DM/heifer to ensure that PROT and ENER intakes were isocaloric and isonitrogenous. Hay and total DMI were recorded for 5 consecutive days during each month of the experiment. Blood was collected every 10 d for analysis of plasma P₄ to evaluate puberty attainment. Blood samples collected on d -10, 60, 120, and 150 were also analyzed for PUN, glucose, insulin, IGF-I, NEFA, and leptin. Liver samples were collected on d 100 from 2 heifers/pen, and analyzed for mRNA expression of genes associated with nutritional metabolism. No treatment effect was detected (P = 0.33) on forage DMI. 
Total DMI, ADG, mean concentrations of glucose, insulin, and IGF-I, as well as hepatic mRNA expression of IGF-I and IGFBP-3, were greater (P ≤ 0.02) for PROT and ENER compared with CON, and similar between PROT and ENER (P ≥ 0.13). Mean PUN concentrations were also greater (P < 0.01) for PROT and ENER compared with CON, whereas PROT heifers had greater (P < 0.01) PUN compared with ENER. Plasma leptin concentrations were similar between ENER and PROT (P ≥ 0.19), and greater (P ≤ 0.03) for ENER and PROT compared with CON on d 120 and 150 (treatment × day interaction; P = 0.03). Hepatic mRNA expression of mitochondrial phosphoenolpyruvate carboxykinase was greater (P = 0.05) in PROT compared with CON and ENER, and similar between CON and ENER (P = 0.98). The proportion of heifers pubertal on d 160 was greater (P < 0.01) in ENER compared with PROT and CON, and similar between PROT and CON (P = 0.38). In conclusion, beef heifers consuming a low-quality cool-season forage had a similar increase in DMI, growth, and overall metabolic status if offered supplements based on soybean meal or corn at 0.5 % of BW. The last experiment was designed to determine if the frequency of protein supplementation impacts physiological responses associated with reproduction in beef cows. Fourteen non-pregnant, non-lactating beef cows were ranked by age and BW, and allocated to 3 groups. Groups were assigned to a 3 × 3 Latin square design containing 3 periods of 21 d and the following treatments: 1) soybean meal (SB) supplementation daily (D), 2) SB supplementation 3 times/wk (3WK), and 3) SB supplementation once/wk (1WK). Within each period, cows were assigned to an estrus synchronization protocol: 100 μg of GnRH + a controlled internal drug release (CIDR) insert containing 1.38 g of P₄ on d 1, 25 mg of PGF₂α on d 8, and CIDR removal + 100 μg of GnRH on d 11. Grass seed straw was offered for ad libitum consumption. Soybean meal was individually supplemented at a daily rate of 1 kg/cow (as-fed basis). 
Moreover, 3WK cows were supplemented on d 0, 2, 4, 7, 9, 11, 14, 16, and 18, whereas 1WK cows were supplemented on d 4, 11, and 18. Blood samples were collected from 0 (prior to supplementation) to 72 h after supplementation on d 11 and 18, and analyzed for PUN. Samples collected from 0 to 12 h were also analyzed for plasma glucose, insulin, and P₄ (d 18 only). Uterine flushing fluid was collected concurrently with blood sampling at 28 h for pH evaluation. Liver biopsies were performed concurrently with blood sampling at 0, 4, and 28 h, and analyzed for mRNA expression of carbamoyl phosphate synthetase I (CPS-I; h 28), and CYP2C19 and CYP3A4 (h 0 and 4 on d 18). Plasma urea-N concentrations were greater (P < 0.01) for 1WK vs. 3WK from 20 to 72 h, and greater (P < 0.01) for 1WK vs. D from 16 to 48 h and at 72 h after supplementation (treatment × hour interaction; P < 0.01). Moreover, PUN concentrations peaked at 28 h after supplementation for 3WK and 1WK (P < 0.01), and were greater (P < 0.01) at this time for 1WK vs. 3WK and D, and for 3WK vs. D. Expression of CPS-I was greater (P < 0.01) for 1WK vs. D and 3WK. Uterine flushing pH tended (P ≤ 0.10) to be greater for 1WK vs. 3WK and D. No treatment effects were detected (P ≥ 0.15) on expression of CYP2C19 and CYP3A4 or on plasma glucose and P₄ concentrations, whereas plasma insulin concentrations were greater (P ≤ 0.03) in D and 3WK vs. 1WK. Hence, decreasing the frequency of protein supplementation did not reduce uterine flushing pH or plasma P₄ concentrations, which are known to impact reproduction in beef cows. 
In summary, for all the experiments presented herein: (1) pregnant and developing replacement beef heifers consuming a low-quality, cool-season forage equally utilize and benefit, in terms of growth and metabolic parameters, from supplements based on protein or energy ingredients provided at approximately 0.5 % of heifer BW/d; (2) energy supplementation at approximately 0.5 % of BW/d did not impair forage disappearance parameters in rumen-fistulated steers; and (3) decreasing soybean meal supplementation frequency to once a week did not increase uterine pH, plasma P₄, or expression of hepatic enzymes associated with steroid catabolism in ruminants.
-
27097. [Article] Polyunsaturated fatty acid metabolism in broiler chickens : effects of maternal diet
- Title:
- Polyunsaturated fatty acid metabolism in broiler chickens : effects of maternal diet
- Author:
- Bautista Ortega, Jaime
Three experiments were conducted in broiler hens to study the influence of dietary n-3 polyunsaturated fatty acids (PUFA) on egg quality, antioxidant status in progeny, and eicosanoid production in tissue. The objective of experiment 1 was to determine the effect of hen age and dietary n-3 PUFA on egg quality and hatchability. Two hundred twenty breeder chicks (males and females; Cobb Breeders) were raised until 20 weeks of age following the company's guidelines. At this age, 3 groups of birds (24 breeder hens and 3 roosters) were randomly allocated to one of the following dietary treatments: 3.5% sunflower oil (Low n-3 diet), 1.75% sunflower oil + 1.75% fish oil (Medium n-3 diet), or 3.5% fish oil (High n-3 diet). Egg quality was evaluated at 29, 37, and 45 weeks of age by determining total egg weight, the weights of its components (albumen, yolk, and shell), and shell thickness. Total fat content in the yolk and its fatty acid profile were also determined. Egg production was recorded daily. Breeder hens fed the High n-3 diet laid lighter eggs with lighter yolks, albumens, and shells than those fed the Medium and Low n-3 diets (p<0.05). Eggs laid by hens fed the Medium n-3 diet had thicker shells than those laid by hens fed the Low n-3 diet (p<0.05). Egg weight, yolk weight, albumen weight, shell weight, and shell thickness increased significantly with hen age (p<0.05). Total fat content in the yolk was significantly higher in eggs laid by 37-week-old and 45-week-old hens than in those laid by 29-week-old hens. Hens fed the High n-3 diet laid eggs with significantly higher n-3 PUFA and lower n-6 PUFA content than hens from the other treatments (p<0.05). Hen age did not affect the n-3 or n-6 PUFA content. Fertility and hatchability were not affected by maternal diet. Overall, total egg weight, yolk weight, albumen weight, and shell weight were decreased by feeding n-3 PUFA to breeder hens. 
The decreased n6:n3 ratio brought about by maternal dietary n-3 PUFA was further investigated in connection with possible effects on antioxidant and eicosanoid status in newly hatched chicks. The objective of experiment 2 was to determine the effect of maternal diet (Low, Medium, and High n-3) on the antioxidant and eicosanoid status, tissue fatty acid profile, and lipid peroxidation in the newly hatched chick. Two hundred ninety-eight eggs were collected from the 29-week-old breeder hens described in experiment 1. After incubation, day-old chicks were randomly selected from a pool of eggs laid by hens fed the three experimental diets. Antioxidant status was established by measuring the activity of antioxidant enzymes (glutathione peroxidase, glutathione reductase, superoxide dismutase, and catalase) and the content of total glutathione. Hatchability and total fat content in the tissues were not affected by maternal diet. n-3 PUFA content increased and n-6 PUFA content decreased significantly (p<0.05) in the tissues of chicks hatched from hens fed the fish-oil-supplemented diets compared with those hatched from hens fed the Low n-3 diet. Total glutathione and antioxidant enzyme activity were not affected by maternal diet, except for catalase, whose activity was significantly lower in chicks hatched from hens fed the High n-3 and Medium n-3 diets than in those hatched from hens fed the Low n-3 diet (p<0.05). Malondialdehyde, a measure of lipid peroxidation, was significantly lower in the liver of chicks hatched from eggs laid by hens fed the High n-3 diet than in those hatched from hens fed the Medium n-3 diet. Maternal dietary n-3 PUFA was successfully transferred to the newly hatched chicks without compromising their antioxidant status. The decreased n-6/n-3 ratio observed in chicks hatched from hens fed the fish-oil-supplemented diets was further investigated relative to its downstream modulatory effects on fat metabolism. 
The objective of experiment 3 was to establish the effect of maternal diet on fatty acid accretion in heart tissue, and on the production of eicosanoids by heart tissue homogenates and peripheral blood mononuclear cells (PBMNC), in broilers fed a diet devoid of long-chain PUFA. Broilers were hatched from hens fed the Low, Medium, or High n-3 diet. One hundred forty-four 1-day-old chicks were weighed and randomly allocated to four pens housed in three rooms of similar dimensions. Temperature was controlled according to Cobb Breeders specifications during the 42 days of the experiment. A cardiac morphological study was conducted on a weekly basis starting at 14 days of age to assess heart weight relative to body weight. The ventricular weights index (right ventricular weight divided by the total ventricular weight) was also determined weekly from 14 days onward. Day-0 chicks hatched from hens fed the High n-3 diet were significantly lighter than those hatched from hens fed the Low n-3 diet. After accounting for age, chicks hatched from hens fed the Low n-3 diet were significantly heavier than those hatched from hens fed the High n-3 diet (p<0.05). Maternal diet did not affect heart weight after accounting for age. The heart percentage (heart weight relative to body weight) was significantly higher in chicks hatched from hens fed the High n-3 diet than in those hatched from hens fed the Low n-3 diet (p<0.05). The ventricular weights index was not affected by maternal diet. At 7 and 14 days of age, arachidonic acid (AA) content in heart tissue was significantly lower in chicks hatched from hens fed the High n-3 diet than in those hatched from hens fed the Low n-3 diet (p<0.05). At 7 days of age, AA content in the heart tissue of chicks hatched from hens fed the High n-3 diet was also significantly lower than in those hatched from hens fed the Medium n-3 diet (p<0.05). 
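The two cardiac indices defined above are simple ratios and can be sketched directly: heart percentage is heart weight relative to body weight, and the ventricular weights index is right ventricular weight divided by total ventricular weight. The gram values in the example are illustrative, not data from the experiment:

```python
# Sketch of the two cardiac indices defined in the abstract.
# The numeric inputs are illustrative placeholders, not study data.

def heart_percentage(heart_g: float, body_g: float) -> float:
    """Heart weight as a percentage of body weight."""
    return 100.0 * heart_g / body_g

def ventricular_weights_index(rv_g: float, total_ventricles_g: float) -> float:
    """Right ventricular weight divided by total ventricular weight."""
    return rv_g / total_ventricles_g

hp = heart_percentage(12.0, 2000.0)          # 0.6 (%)
vwi = ventricular_weights_index(2.5, 10.0)   # 0.25
```

Expressing both endpoints as ratios is what lets the study compare treatment groups after accounting for the body-weight differences noted above.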
At day 0, heart tissue production of prostaglandin E2 (PGE2) was significantly higher in chicks hatched from hens fed the Low n-3 diet than in those hatched from hens fed the Medium or High n-3 diets (p<0.05). At the same age, thromboxane A3 (TXA3) production was significantly lower in the heart tissue of chicks hatched from hens fed the Low n-3 diet than in that of chicks hatched from hens fed either the Medium or High n-3 diets (p<0.05). At day 7, PBMNCs isolated from chickens hatched from hens fed the Low and Medium n-3 diets produced significantly higher PGE2 and TXA2 concentrations than those isolated from birds hatched from eggs laid by hens fed the High n-3 diet (p<0.05). At day 31, PBMNCs isolated from chickens hatched from hens fed the Medium and High n-3 diets produced significantly higher PGE2 and TXA2 concentrations than those isolated from chicks hatched from eggs laid by hens fed the Low n-3 diet (p<0.05). Chicks hatched from hens fed the High n-3 diet had a higher heart-to-body weight ratio than those hatched from hens fed the Low n-3 diet. Thus, chicks hatched from hens fed the Low n-3 diet may be at higher risk of developing cardiovascular complications related to high AA concentrations in the heart and blood cells during the first week of age. Further research is encouraged in which the three subpopulations of broilers (i.e., hatched from hens fed the Low, Medium, and High n-3 diets) are raised under commercial conditions to investigate how the economic loss due to the reduction in body weight observed in the High n-3 chickens compares with the potential reduction in mortality during the first week of age. Finally, the mechanistic action of n-3 PUFA needs further investigation, especially molecular aspects related to modulatory effects on gene expression.
-
27098. [Article] The Role of Bone Marrow Adipose Tissue in Bone Loss During Spaceflight and Simulated Spaceflight in Rodents
The reduction in mechanical load imparted on the body during spaceflight presents unique physiological challenges. One detrimental and seemingly unavoidable response to microgravity is rapid bone loss. ...Citation Citation
- Title:
- The Role of Bone Marrow Adipose Tissue in Bone Loss During Spaceflight and Simulated Spaceflight in Rodents
- Author:
- Keune, Jessica Ann
The reduction in mechanical load imparted on the body during spaceflight presents unique physiological challenges. One detrimental and seemingly unavoidable response to microgravity is rapid bone loss. This adaptation is hazardous not only to astronaut health, but to the success of long-duration exploration-class missions. Skeletal adaptation to spaceflight in astronauts typically includes rapid, site-specific bone loss from unbalanced bone turnover, primarily in the femur, hip, and vertebrae. Studies of rats in space and biochemical markers from astronauts typically indicate that unbalanced bone turnover results from an increase in bone resorption and either no change or a decrease in bone formation. An increase in bone marrow adipose tissue (MAT) is observed concurrently with bone loss in ground-based conditions of disuse or reduced mobility, such as bed rest. While the precise role of bone marrow adiposity is unclear, it has also been associated with a decrease in bone formation. The invasive nature of bone marrow analysis and the high cost associated with long-term disuse studies make human-based research difficult to accomplish. Rodents are valuable research subjects owing to their size, cost, lifespan, and physiological systems comparable to humans, and were thus used in the studies described in this dissertation. The central hypothesis of this dissertation states that skeletal disuse results in bone loss, in part, from a reduced ability to form osteoblasts, and occurs concurrently with MAT infiltration. Furthermore, we hypothesize that increased MAT plays a causative role in bone loss. If this hypothesis is correct: 1) an inability to produce MAT should be protective against disuse-induced bone loss, and 2) an ability to produce high quantities of MAT should exacerbate disuse-induced bone loss. 
The role of bone marrow adiposity in disuse-induced bone loss was evaluated using archived bone specimens from rats flown in space for 14 days and using mice subjected to hindlimb unloading (HU), a ground-based model for spaceflight. The effects of the 14-day spaceflight on bone mass, density, and microarchitecture in weight-bearing (femur and humerus) and non-weight-bearing (2nd lumbar vertebra and calvarium) bones in female rats insufficient in ovarian hormones due to ovariectomy (OVX) are presented in Chapter 2. In the context of established ovarian hormone deficiency, a 14-day spaceflight resulted in bone- and bone-compartment-specific decrements in bone acquisition and a negative turnover balance, leading to deficits in bone mass and defective microarchitecture beyond those induced by OVX. The observed changes demonstrate the importance of evaluating multiple bones and bone compartments. The effects of a 14-day spaceflight on bone mass, bone resorption, bone formation, and MAT in lumbar vertebrae of the OVX rats were subsequently evaluated, and the results are described in Chapter 3. The increase in MAT observed during this short-duration spaceflight did not impair osteoblast activity, reduce the interval osteoblasts are present on bone surfaces, or decrease generation of new osteoblasts. These findings argue against the hypothesis that increased MAT produces factors that suppress bone formation. Although increased MAT did not impact osteoblast kinetics or bone formation, it is important to note that bone formation did not increase during spaceflight to compensate for the increase in bone resorption. This finding is consistent with the hypothesis that, in the context of ovarian hormone deficiency, osteoblast precursors are diverted to adipocytes instead of osteoblasts during spaceflight. These studies were the first large-scale investigation into the influence of microgravity on multi-site microarchitecture and MAT in slowly growing rats. 
The findings also indicated the importance of location in evaluation. A 14-day spaceflight did not result in loss of cortical bone in femur, humerus or calvaria. Cancellous bone loss was observed in femur and vertebra but not in humerus. The lumbar vertebra is not a primary weight-bearing site in rodents, yet it exhibited bone loss from increased bone resorption and no change in bone formation, as well as an increase in MAT. Importantly, our findings suggest that while MAT may increase during spaceflight, it does not impair ongoing bone formation. MAT-deficient Kit^(W/W-v) (MAT-) mice were used to determine if absence of MAT reduced bone loss in HU mice after 14 days. The results from this study are described in Chapter 4. MAT- mice had a greater reduction in bone volume fraction than WT mice. While both HU groups had greater osteoclast perimeter than controls, HU MAT- mice had greater osteoblast perimeter, mineral apposition rate and bone formation rate compared to other treatment groups. The increase in bone formation was not sufficient to balance the increase in bone resorption during disuse, ultimately resulting in bone loss that was of greater magnitude in MAT-deficient mice. Targeted gene profiling further suggested a differential response of WT and MAT- mice to HU. To verify that the differences were not due to c-kit deficiency, we reconstituted the hematopoietic system in the Kit^(W/W-v) mice with WT hematopoietic stem cells. Adoptive transfer of WT bone marrow-derived hematopoietic stem cells reconstituted c-kit but not MAT in Kit^(W/W-v) mice. The WT→Kit^(W/W-v) mice lost cancellous bone following 14 days of HU. Together, the results do not support the hypothesis that MAT potentiates disuse-induced bone loss in mice. MAT was not increased in WT mice following HU, and MAT deficiency was not protective against disuse-induced cancellous bone loss. 
Results from this study indicate MAT may actually have a protective role in limiting disuse-induced osteopenia, perhaps by limiting the magnitude of increased bone turnover. Chapter 5 describes results using mice with high MAT due to leptin deficiency (ob/ob) to determine if excess MAT exacerbated bone loss in HU mice after 14 days. ob/ob mice were pair-fed to WT mice to prevent the development of morbid obesity, but still maintained a greater body weight and abdominal adipose tissue than the WT mice. ob/ob mice had lower femoral bone mineral content and length, but no difference in cancellous bone volume fraction in the metaphysis and epiphysis. HU resulted in cancellous bone loss in metaphysis and epiphysis. While osteoblast perimeter was increased after HU, it was not sufficient to compensate for the increase in bone resorption leading to a reduction in cancellous bone. No significant interactions between genotype and treatment were detected for any of the endpoints measured. Together, the results do not support the hypothesis that high levels of MAT exacerbate disuse-induced bone loss in mice. The findings from this study indicate that having higher levels of MAT did not influence the bone response to disuse. While the central hypothesis that inadequate formation of osteoblasts contributes to bone loss and occurs concurrently with increased MAT infiltration was partially accepted in spaceflight, it was rejected in hindlimb unloading. Although we did observe an increase in MAT following spaceflight, it was not associated with altered osteoblast turnover or decreased bone formation. During HU, MAT levels did not change but bone formation increased. Importantly, bone resorption was increased during spaceflight and HU. Therefore, bone loss resulted from inadequate coupling of bone formation to compensate for the increase in bone resorption. 
These studies suggest that a simple relationship between MAT and bone mass does not exist and that targeting MAT to increase bone mass may not be an effective strategy.
-
27099. [Article] Effectiveness Monitoring Report for the Western Oregon Stream Restoration Program, 1999-2008 Report Number: OPSW-ODFW-2010-6
- Title:
- Effectiveness Monitoring Report for the Western Oregon Stream Restoration Program, 1999-2008 Report Number: OPSW-ODFW-2010-6
Abstract -- State and federal agencies have invested millions of dollars to restore streams and watersheds in the Pacific Northwest over the past two decades. In Oregon alone, over 500 million dollars has been spent on completed projects from 1995 to 2007 (Oregon Watershed Enhancement Board 2009). Restoration practitioners have distributed the investment among watershed-scale activities such as road repair, dam removal, and upland management, and stream-scale activities such as fish passage, instream complexity, and riparian plantings. The Western Oregon Stream Restoration Program (WOSRP) was established to work in cooperation with private and corporate landowners to restore stream habitat for juvenile and adult salmonids. In addition to the WOSRP, the Oregon Watershed Enhancement Board (OWEB) funds restoration projects with local watershed councils, who commonly partner with state and federal agencies. Eight WOSRP restoration biologists in Tillamook, Newport, Charleston, Gold Beach, Roseburg, Clackamas, and Salem select sites and implement projects consistent with the criteria described in Thom et al. (2001). A monitoring component is integrated in the program, with surveys coordinated and reported by a biologist in Corvallis. The goal of the monitoring program is to assess the long-term effectiveness of instream restoration projects implemented by WOSRP, and to evaluate progress towards salmon conservation and recovery goals in Oregon's coastal basins. The WOSRP restoration sites are distributed throughout the Willamette, Lower Columbia, and coastal drainages. Restoration treatments added large wood and/or boulders, improved fish passage, planted trees in riparian areas, or were a combination of the three. Large wood was placed in complex jams at intervals throughout the stream to increase stream roughness and complexity. 
Boulders were sometimes used in conjunction with wood jams to provide stability to the structures and prevent large wood from moving downstream and posing a hazard to culverts and bridges. Bedrock-dominated streams were often treated with boulders to collect gravel and cobble, intended to aggrade the streambed. In the future, large wood may be added to these streams. Fish passage projects opened previously inaccessible habitat to juvenile and/or adult salmonids, while riparian plantings and fencing were designed to improve riparian vegetation and bank structure. The project length varied from site to site. Fish passage sites were quite short, but provided access to kilometers of fish habitat, and large wood sites were up to several kilometers in length. Large wood and boulder placement projects have become commonplace in the Pacific Northwest to restore complex stream habitat for juvenile coho and other salmonids (Katz et al. 2007, Roni et al. 2008). Detailed assessments have been published for individual projects or experiments (e.g. Moore and Gregory 1988, Nickelson et al. 1992, Cederholm et al. 1997). More extensive evaluations have used a post-treatment design (Hicks et al. 1991, Roni and Quinn 2001), but none have used a pre- and post-treatment design. In this paper we evaluate habitat changes at 103 restoration projects in western Oregon from pre-treatment to one year post-treatment and up to 6 years following treatment. Projects commonly treated 0.5–1 km of stream, but some extended up to 6 km. The projects we evaluated in this paper were treated with large logs, usually arranged in jams, that were not cabled or driven into banks or bottom. As of 2008, the OWEB and WOSRP projects have treated approximately 750 km of stream with large wood (Figure 1), 120 km with boulders, and over 4,000 km of stream have been made accessible by replacing and/or removing culverts. Each year, OWEB receives 210 grant applications for restoration projects. 
These projects generally adhere to a similar selection process and design, so the results of this study can be expected to apply more broadly within the Pacific Northwest. Roni et al. (2008), in a synthesis paper, summarized many of the potential physical benefits of restoration; these include increased pool depth and frequency, habitat complexity, woody debris, and sediment retention and quality of spawning gravel. Some projects in deeply incised channels have reduced the incision and increased bed elevation. Evaluations of biological responses have been confounded by natural variability of populations, duration of study, or length of stream examined. For example, determination of success based on spawning ground counts is problematic because of variation in ocean survival. However, longer-duration and watershed-scale studies have shown positive responses of juvenile and adult salmon (Johnson et al. 2005). Burnett et al. (2008) conducted a systematic review of peer-reviewed articles to examine the effects of large wood placement on salmonid abundance, growth, or survival, or on overall stream habitat complexity. Few publications were both relevant and met the rigorous standards outlined in their review. Although the review supported short-term improvements in habitat complexity, the relationship to salmonid productivity was less definitive. Notable exceptions included Johnson et al. (2005) cited above, and Solazzi et al. (2000). An alternative approach to directly assessing biological response is to model potential changes in abundance or productivity. The Habitat Limiting Factors Model (Reeves et al. 1989, Nickelson et al. 1992a, Nickelson 1998) was developed to quantify the carrying capacity of coastal streams for juvenile coho during the summer and winter. Use of this model is appropriate because most of the instream restoration projects in western Oregon were intended to improve habitat for juvenile coho. 
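The general logic of a habitat-based carrying-capacity estimate of this kind can be illustrated with a minimal sketch: surveyed habitat units are multiplied by season-specific fish-density coefficients and summed. The habitat types, density values, and survey data below are purely illustrative placeholders, not values from the Habitat Limiting Factors Model itself.

```python
# Hypothetical sketch of a habitat-unit carrying-capacity calculation in the
# spirit of the Habitat Limiting Factors Model. The density coefficients are
# illustrative placeholders, NOT values from the published model.

# Assumed winter rearing densities (juvenile coho per m^2) by habitat type.
WINTER_DENSITY = {
    "pool": 0.5,
    "alcove": 1.2,
    "riffle": 0.05,
}

def winter_capacity(habitat_units):
    """Sum area x density over surveyed habitat units.

    habitat_units: list of (habitat_type, area_m2) tuples from a survey.
    Returns the estimated juvenile coho overwinter carrying capacity.
    """
    return sum(area * WINTER_DENSITY[htype] for htype, area in habitat_units)

# Pre- vs post-treatment comparison for one hypothetical site: converting
# riffle area to pool and alcove habitat raises the estimated capacity.
pre = [("riffle", 400.0), ("pool", 50.0)]
post = [("riffle", 300.0), ("pool", 120.0), ("alcove", 30.0)]
print(winter_capacity(pre), winter_capacity(post))  # 45.0 111.0
```

Because the model is linear in habitat area, the same survey data can be re-scored for summer simply by swapping in a summer density table.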
In this paper, we evaluated the physical response directly, and quantified the potential response of juvenile coho salmon by application of the Habitat Limiting Factors Model. Project effectiveness monitoring requires linking the restoration treatment to improved physical conditions for and biological response of salmon (Katz et al. 2007) and defining desired outcomes (Rumps et al. 2007). Because the WOSRP projects were designed to improve ecological and hydrologic stream function specifically for salmonids, we evaluated 1) retention of wood structures, 2) natural recruitment of additional wood, 3) increase in pool number, area, and depth, 4) retention of gravels and sorting of finer substrates, and 5) increase in channel complexity (secondary channels and off-channel habitats). Biological evaluation was based on estimates of the potential carrying capacity for juvenile coho during the overwinter life stage. The primary objectives of this evaluation are to test for these changes at one year and at six years following treatment. Secondarily, we evaluated the response of the projects by geographic location and position along the stream network. Previous WOSRP monitoring reports (e.g. Jacobsen and Jones 2003, Jacobsen et al. 2007) have focused on conditions one year following treatment, with relatively few sites assessed 2-3 years following restoration. Since 2003, the restoration projects have increased in complexity, with more and larger pieces and jams, and have treated more kilometers of stream per site. The WOSRP program has provided a unique opportunity to evaluate the effects of restoration projects over longer times and broader geographic scales than previously feasible. We have been surveying the restoration sites in both summer and winter to monitor changes in stream habitat and evaluate the success of treatments, such as the placement of wood and/or boulders and fish passage. 
Surveys are logistically easier to manage in the summer, but surveys conducted during the winter provide a more timely and accurate assessment of over-winter rearing potential for juvenile coho. Because we have paired surveys, we are able to assess the added value of revisits across seasons. We test the hypothesis that habitat characteristics at the restoration sites do not change from summer to winter. The findings permit us to modify the survey program if the information is duplicative, and use the resources in another fashion.
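The summer-versus-winter comparison described above amounts to a paired test on habitat metrics measured at the same sites in both seasons. A minimal sketch follows, using a paired t-statistic computed from hypothetical site data; the metric, values, and sample size are invented for illustration and are not from the WOSRP surveys.

```python
# Illustrative paired summer-vs-winter comparison: a paired t-statistic on a
# habitat metric (e.g., pool area per site) measured at the same sites in
# both seasons. All data values are made up for illustration.
import math
from statistics import mean, stdev

summer = [12.0, 8.5, 15.2, 9.8, 11.1, 13.4]  # metric at six sites, summer
winter = [14.1, 9.0, 17.8, 9.5, 12.6, 15.0]  # same six sites, winter

# Paired design: analyze the per-site differences, not the raw samples.
diffs = [w - s for w, s in zip(winter, summer)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Compare t_stat against the critical t value with df = n - 1 to decide
# whether to reject the null hypothesis of no seasonal change.
print(round(t_stat, 2))
```

If the seasonal difference is consistently near zero across sites, the winter revisits add little information and the survey effort could be redirected, which is exactly the trade-off the monitoring program weighs.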
-
27100. [Article] Hood River Bull Trout Abundance, Life History, and Habitat Connectivity, 2007 Progress Reports 2007
- Title:
- Hood River Bull Trout Abundance, Life History, and Habitat Connectivity, 2007 Progress Reports 2007
Abstract -- Hood River bull trout are thought to exist as two independent reproductive units (USFWS 2004), known as local populations (Rieman and McIntyre 1995). The Clear Branch local population is isolated above Clear Branch Dam, which provides limited downstream fish passage during infrequent and sporadic periods of spill and no upstream passage. Bull trout in this population inhabit Laurance Lake Reservoir and tributaries upstream of Clear Branch Dam. The Hood River local population occurs in the mainstem Hood River and Middle Fork Hood River downstream of the Clear Branch Dam, and a small number of adult bull trout migrate each year into the Hood River from the Columbia River (Figure 1). The status of both populations is extremely precarious. The Clear Branch population is at risk of a random extinction event due to low numbers, negative interactions with non-native smallmouth bass, isolation and limited spawning habitat (USFWS 1998). The Hood River population also appears to be small and is threatened by passage barriers, unscreened irrigation systems, impaired water quality and periodic siltation of spawning substrate by glacial outbursts. Clear Branch bull trout spawn in Clear Branch and Pinnacle Creek. After rearing in these two natal streams for an unknown time period, most are believed to migrate downstream to Laurance Lake Reservoir. Clear Branch bull trout have been documented passing over the dam spillway during high water events (Pribyl et al. 1996) and may provide a recruitment source for the Hood River local population. Adult bull trout tagged at Powerdale Dam have been observed at Coe Branch irrigation diversion and in a trap at the base of Clear Branch Dam. These fish may have been attempting to reach spawning areas located upstream of the dam. However, the success of bull trout migrating downstream via the spillway or the possibility of successfully navigating through the diversion network has never been determined. 
Depending on the water year, the Middle Fork Irrigation District (MFID) may not spill at all, or the timing of the spill may not coincide with the timing of downstream migration, which is currently unknown (East Fork Hood River and Middle Fork Hood River Watershed analysis). Smallmouth bass were discovered in Laurance Lake Reservoir in the 1990s. Creel surveys have shown that large adult bass are caught occasionally in the reservoir, and schools of bass fry have been seen by the district fish biologist (Rod French, ODFW, personal communication), suggesting that they are spawning successfully. This illegal introduction poses a potential threat to the Clear Branch bull trout population, but its magnitude is unknown because the bass population size and degree of interaction between the two species are unknown. Bull trout and smallmouth bass have significantly different temperature preferences and tolerances, with bull trout being one of the most sensitive coldwater species and bass being a warmwater species. Laurance Lake, a relatively high-altitude reservoir at 890 m (2,920 feet), does not provide ideal bass habitat, so these two species may have largely non-overlapping distributions or differing activity periods (Terry Shrader, ODFW warmwater fish biologist, personal communication). However, based on past reservoir temperature data (Berger et al. 2005), there are periods in the reservoir when there is potential for bull trout and bass interaction: periods when bull trout are susceptible to bass predation and when juvenile fish might compete for resources. Spawning activity of the Hood River local population has been observed in a few locations within the Middle Fork of Hood River (Figure 1). 
Although consistent and extensive spawning areas for this population are not known, some of the locations where juvenile rearing or potential bull trout redds have been observed include the Middle Fork Hood River and some of its tributaries: Bear Creek, Compass Creek and Coe Branch (USFWS 2004). However, Coe Branch, Compass Creek, and the Middle Fork are glacial streams with a high volume of sand and silt, which may compromise spawning success. No bull trout spawning or rearing has been observed on the East and West Forks of Hood River. The Middle Fork and mainstem Hood River provide foraging, migration and overwintering habitat. Hood River bull trout are also known to migrate into the Columbia River. Two bull trout tagged at Powerdale Dam (RK 7.2 of mainstem Hood River) were recovered near Drano Lake in Washington State, and one was captured 11 kilometers downstream of the confluence of the Hood and Columbia Rivers (USFWS 2004). Every year (usually between May and July), adult bull trout, presumably migrating upstream from the Columbia River, are captured and anchor tagged at Powerdale Dam. Although some of these tagged fish have been observed upstream (one in Coe Branch and three below Clear Branch Dam), the spawning destination of fluvial adults within the Hood River basin is largely unknown. Dispersing juvenile bull trout and migrating adults in this local population are threatened by flow diversions with inadequate screening and passage facilities. Several structures are suspected to impede upstream migration or entrain juvenile and adult bull trout into irrigation works (Pribyl et al. 1996, HRWG 1999). These structures include: the diversion at Clear Branch Dam (passage and screening), Coe Branch (passage and screening), and the Farmers Irrigation District diversion (screening) on the mainstem Hood River (HRWG 1999). However, little research has been conducted to assess the impacts of these structures on migrating bull trout. 
Beyond a general knowledge of the distribution of Hood River bull trout and the nature of anthropogenic factors that potentially restrict their life history and habitat connectivity, little is known about this recovery unit. Baseline information about adult abundance is lacking for both local populations, the potential of a source (Clear Branch) and sink (Hood River) relationship between the two local populations has not been explored, and the migratory life history of adult fish caught at Powerdale Dam is unknown. The degree to which irrigation and hydropower diversions hamper connectivity within the Hood River basin is also poorly understood. Migratory life histories have been viewed as key to species persistence (Rieman and McIntyre 1995; Dunham and Rieman 1999), and understanding movement patterns and associated habitat requirements is critical to maintaining those migratory forms (Muhlfeld and Morotz 2005; Hostettler 2005). Gaining this information is also critical to evaluating bull trout recovery in the Hood River Subbasin (Coccoli 2004). The Oregon Department of Fish and Wildlife (ODFW) initiated a study in 2006 to improve our understanding of the abundance, life history, and potential limiting factors of the bull trout in this recovery unit. This report describes findings for the first two years of the study (2006-2007). Specific study objectives for the first two years were: 1. Determine the migratory life history of Hood River bull trout and assess the potential impacts of flow diversions and two new falls on the Middle Fork Hood River (scoured by the November 2006 glacial outburst) on bull trout migrations. 2. Determine current distribution of bull trout reproduction and early rearing in historical and potential bull trout streams in the Hood River Subbasin. 3. 
Determine the juvenile and adult life history of the Clear Branch local population and develop a statistically reliable and cost-effective protocol for monitoring the abundance of adult Clear Branch bull trout. 4. Assess the potential impact of smallmouth bass on bull trout in Laurance Lake Reservoir.