It’s okay when you’re not okay: ASU study re-evaluates resilience in adults

August 14, 2018

Adversity is part of life: Loved ones die. Soldiers deploy to war. Patients receive terminal diagnoses.

Research on how adults deal with adversity has been dominated by studies claiming the most common response is uninterrupted and stable psychological functioning. In other words, this research suggests that most adults are essentially unfazed by major life events such as spousal loss or divorce. These provocative findings have also received widespread attention in the popular press and media.

Experiencing a decrease in psychological functioning after a trauma is a common response, say two ASU researchers. Photo by Jeremy Perkins on Unsplash

The idea that most adults are minimally affected by adversity worries Frank Infurna and Suniya Luthar, of the Arizona State University Department of Psychology, because it could negatively affect people living through adversity. Infurna and Luthar closely examined the research studies and found problems with how they were designed and how the data were analyzed. The pair summarize the problems and re-evaluate adult resilience research in a new paper in Clinical Psychology Review.

A dip and recovery

Infurna and Luthar are an ideal team to tackle the discrepancy between studies on adult responses to adversity that contradict 80 years of research in child development. Infurna, an associate professor of psychology, is an expert on using complex statistical models to study health and well-being in adulthood and old age. Luthar, a Foundation Professor of Psychology, is an international expert on resilience in children, with 30 years of experience and highly influential publications on the concept of resilience and how best to study it.

“As experts, the onus is on us to be careful about how research is conducted and communicated,” Luthar said. “There has been a message percolating in the popular press that most people are unaffected by major life events like bereavement or a deployment, but that is not the whole picture.”

The project started over two years ago when Infurna downloaded publicly available data for re-analysis. The data had been analyzed using “growth mixture modeling” — a statistical model that can classify how different people in a population respond to adversity. After classifying the study participants into groups based on their response to adversity, the model outputs the response patterns for each group.

The graph shows some possible response patterns in adults following adversity. The top line (hashed) shows the response that has been reported as the most common. This flat line indicates that living through an adverse event causes minimal or no disruption to psychological functioning. When the data are analyzed with growth mixture models that are set up using appropriate assumptions, the most common response pattern after adversity is shown by the bottom line. The most common response to adversity is a decrease in psychological functioning followed by a return to normal or near-normal after a period of time.
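The two patterns described in the graph can be sketched with a toy simulation. This is illustrative only; the baseline level, dip depth and recovery rate below are assumptions, not values from the study.

```python
import math

BASELINE = 7.0  # hypothetical pre-event level of psychological functioning


def flat_response(t):
    """'Stable' pattern: essentially no disruption after the event at t = 0."""
    return BASELINE


def dip_and_recover(t, depth=2.5, recovery_rate=0.8):
    """The most common pattern per the re-analysis: a drop in functioning
    at the adverse event, then a gradual return toward the baseline."""
    if t < 0:
        return BASELINE  # before the adverse event
    return BASELINE - depth * math.exp(-recovery_rate * t)


# Functioning over the five years following an event:
trajectory = [dip_and_recover(year) for year in range(6)]
```

Plotting `trajectory` against the constant `flat_response` line reproduces the contrast in the graph: one curve drops and climbs back, the other never moves.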

Infurna and Luthar noticed the results depended on how the model was set up in the software used for statistical analysis. Setting up a model for data analysis requires a researcher to define some assumptions or educated guesses about aspects of the model — like how the data are organized or how much error was included in the experimental measurements. The assumptions identified as problematic by Infurna and Luthar were that the variations in the data were the same for the entire participant group and that the psychological functioning of all participants changed at the same rate. These assumptions also corresponded to the default settings in several software programs commonly used for statistical analysis. When the default software settings were used to run the growth mixture model, the most common response pattern was a flat line, which indicated stable and largely uninterrupted psychological functioning after adversity.

When the growth mixture models were set up with more appropriate assumptions, the researchers found the most common response pattern was a temporary decrease followed by an increase. Such a response pattern indicates a decline in psychological functioning followed by a return to normal, which agrees with 80 years of resilience research in children. This response pattern also agrees with the conventional wisdom that in general, most people struggle to some degree after a major life event and recover after a period of time.

“The idea that ‘It is okay to not be okay’ following adversity is important,” Infurna said. “Sometimes it can take months or years to recover after a traumatic or upsetting event because resilience depends on the person and the resources they have available to them, their past experiences and the type of adverse event.”

Life is multidimensional and so is resilience

How adults respond to adversity has typically been measured with longitudinal assessments or surveys that are repeated over a time interval like once a year. Many research studies have tracked just one psychological outcome, such as life satisfaction, positive or negative emotions or general physical health.

“How do you define doing well?” Luthar asked. “An individual’s response to adversity is multidimensional so that success in one area can coexist with considerable trouble in others. Just because a person is effectively meeting deadlines at work does not mean she is not struggling at home, perhaps crying herself to sleep or estranged from her partner.”

Infurna and Luthar recently examined how resilience depended on measures such as life satisfaction, negative or positive emotions, general health or physical health. When just one measure was considered, the percentage of people classified as resilient was high, ranging from 19 to 66 percent, but when all measures were considered, only 8 percent of the adult participants were resilient.
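The gap between single-measure and multidimensional classification is easy to see in a small worked example. The participants and measures below are hypothetical, chosen only to show why requiring success on every measure shrinks the resilient fraction.

```python
# Each participant: True means "doing well" on that measure (hypothetical data).
participants = [
    {"life_satisfaction": True,  "positive_emotion": True,  "general_health": True},
    {"life_satisfaction": True,  "positive_emotion": False, "general_health": True},
    {"life_satisfaction": True,  "positive_emotion": True,  "general_health": False},
    {"life_satisfaction": False, "positive_emotion": True,  "general_health": True},
    {"life_satisfaction": True,  "positive_emotion": False, "general_health": False},
]


def pct_resilient(people, measures):
    """Percentage of people doing well on ALL of the given measures."""
    ok = [p for p in people if all(p[m] for m in measures)]
    return 100 * len(ok) / len(people)


single = pct_resilient(participants, ["life_satisfaction"])
combined = pct_resilient(
    participants, ["life_satisfaction", "positive_emotion", "general_health"]
)
```

Here 80 percent look resilient on life satisfaction alone, but only 20 percent are doing well on all three measures at once, mirroring the drop the researchers observed.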

Child development researchers have solved the problem of defining resilience by qualifying different types. For example, children who have lived through adversity and are functioning well in school are described as having “academic resilience.” Researchers are also careful not to generalize a child’s performance in school to how they might be functioning in other aspects of their lives.

The way forward for adult resilience research and its applications

In the Clinical Psychology Review paper, Infurna and Luthar's main goal was to prevent the misinterpretation of what a common response to adversity looks like. The common pattern is typically some decline in functioning followed by a recovery back toward normal.

Infurna and Luthar made several recommendations to improve adult resilience research in the future, in addition to changing the assumptions used with growth mixture modeling. They also encourage researchers to assess more than one measure of psychological functioning and to administer longitudinal surveys at more frequent time intervals.

“It is very important for the public and for policymakers to know what a normal or common response to adversity is,” Luthar said. “This knowledge can help people avoid self-blame when they are hurting or have a setback in the aftermath of a major loss or other traumatic events. And it can help clinicians and policymakers continue to provide support resources that are often critical in helping adults overcome major life adversities.”

Research Assistant Professor, Psychology Department


ASU astrophysicist helps discover that ultrahot planets have starlike atmospheres

August 9, 2018

Recent observations by NASA's Hubble and Spitzer space telescopes of ultrahot Jupiter-like planets have perplexed theorists. The spectra of these planets have suggested they have exotic — and improbable — compositions.

However, a new study just published by a research team that includes Arizona State University astrophysicist Michael Line, an assistant professor in ASU's School of Earth and Space Exploration, proposes an explanation: these gas-rich planets have compositions that are basically normal, going by what is known about planet formation. What's different about them is that the atmospheres on their daysides look more like the atmosphere of a star than a planet.

These simulated views of the ultrahot Jupiter WASP-121b show what the planet might look like to the human eye from five different vantage points, each illuminated to different degrees by its parent star. The images were made with a computer simulation being used to help scientists understand the atmospheres of these planets. Ultrahot Jupiters reflect almost no light, much like charcoal. However, their daysides have temperatures between 3,600 and 5,400 degrees Fahrenheit, so they produce their own glow like a hot ember. The orange color in this simulated image thus comes from the planet's own heat. Image credit: NASA/JPL-Caltech/Vivien Parmentier/Aix-Marseille University (AMU)

"Interpreting the spectra of the hottest of these Jupiter-like planets has posed a thorny puzzle for researchers for years," Line said.

The biggest puzzle is why water vapor appears to be missing from these worlds' atmospheres, when it is abundant in similar but slightly cooler planets.

According to the new study, ultrahot Jupiters do in fact possess the ingredients for water (hydrogen and oxygen atoms). But due to the strong radiation on the planet's daysides, temperatures there go high enough that water molecules are completely torn apart.

With ultrahot Jupiters orbiting extremely close to their stars, one side of the planet faces the star perpetually, while the nightside is gripped by endless darkness.

Dayside temperatures reach between 3,600 and 5,400 degrees Fahrenheit (2,000 to 3,000 degrees Celsius), ranking ultrahot Jupiters among the hottest exoplanets known. Nightside temperatures are around 1,800 degrees Fahrenheit cooler.

Star-planet hybrids

Among the growing catalogue of planets outside our solar system — known as exoplanets — ultrahot Jupiters have stood out as a distinct class for about a decade.

"The daysides of these worlds are furnaces that look more like a stellar atmosphere than a planetary atmosphere," said Vivien Parmentier, an astrophysicist at Aix Marseille University in France and lead author of the new study published in Astronomy and Astrophysics. "In this way, ultrahot Jupiters stretch out what we think planets should look like."

While telescopes like Spitzer and Hubble can gather some information about the daysides of ultrahot Jupiters, their nightsides are difficult for current instruments to probe.

Jupiter-like exoplanets are 99 percent molecular hydrogen and helium with smaller amounts of water and other molecules. But what their spectra show depends strongly on temperature. Warm-to-hot planets form clouds of minerals, while hotter planets make starlight-absorbing molecules of titanium oxide. Yet to understand ultrahot Jupiter spectra, the research team had to turn to processes more commonly found in stars. Image credit: Michael Line/ASU

The new paper proposes a model for what might be happening on both the illuminated and dark sides of these planets. The model is based largely on observations and analysis from three recently published studies, coauthored by Parmentier, Line, and others, that focus on three ultrahot Jupiters, WASP-103b, WASP-18b, and HAT-P-7b.

The new study suggests that fierce winds driven by heating may blow the torn-apart water molecules into the planets' cooler nightside hemispheres. There the atoms can recombine into molecules and condense into clouds, all before drifting back into the dayside to be ripped apart again.

Family resemblance?

Hot Jupiters were the first widely discovered kind of exoplanet, starting back in the mid-1990s. These are cooler cousins to ultrahot Jupiters, with dayside temperatures below 3,600 degrees Fahrenheit (2,000 Celsius).

Water has proven to be common in their atmospheres, and thus when ultrahot Jupiters began to be found, astronomers expected them to show water in their atmospheres as well. But water turned out to be missing on their easily observed daysides, which got theorists looking at alternative, even exotic, compositions.

One hypothesis for why water appeared absent in ultrahot Jupiters has been that these planets must have formed with very high levels of carbon instead of oxygen. Yet this idea could not explain the traces of water sometimes detected at the dayside-nightside boundary.

To break the logjam, the research team took a cue from well-established physical models of stellar atmospheres, as well as "failed stars," known as brown dwarfs, whose properties overlap somewhat with hot and ultrahot Jupiters.

"Unsatisfied with extreme compositions, we thought harder about the problem," Line said. "Then we realized that many earlier interpretations were missing some key physics and chemistry that happens at these ultrahot temperatures."

The team adapted a brown dwarf model developed by Mark Marley, one of the paper's co-authors and a research scientist at NASA's Ames Research Center in Silicon Valley, California, to the case of ultrahot Jupiters. Treating the atmospheres of ultrahot Jupiters more like blazing stars than conventionally colder planets offered a way to make sense of the Spitzer and Hubble observations.

"With these studies, we are bringing some of the century-old knowledge gained from studying the astrophysics of stars, to the new field of investigating exoplanetary atmospheres," Parmentier said.

"Our role in this research has been to take the observed spectra of these planets and model their physics carefully," Line said. "This showed us how to produce the observed spectra using gases that are more likely to be present under the extreme conditions. These planets don't need exotic compositions or unusual pathways to make them."

Robert Burnham

Science writer, School of Earth and Space Exploration



Crowdfunding success relies on friendly networks, ASU research finds

Success in social media fundraising drives depends on how connected network "friends" are, ASU paper finds.
August 3, 2018

Campaigns like 'Ice Bucket Challenge' closely tied to social media connections

Four years ago this summer, a phenomenon hit social media when millions of people participated in the "ALS Ice Bucket Challenge," raising more than $115 million for charity. The challenge involved people taking videos of themselves dumping a bucket of ice and water over their heads and posting them on social media to promote awareness of amyotrophic lateral sclerosis, or ALS, and drive donations to the ALS Association.

An Arizona State University professor has published a research paper looking at these kinds of social-media crowdsourcing phenomena and why they’re so successful.

Yili Hong, an associate professor in the Department of Information Systems at the W. P. Carey School of Business at ASU, and his co-authors researched huge data sets from Twitter and Facebook to examine how the social media networks affected the success of crowdsourcing campaigns on Kickstarter. His co-authors are Yuheng Hu, an assistant professor in the Department of Information and Decision Sciences at the College of Business Administration of the University of Illinois at Chicago, who received his PhD at ASU, and Gordon Burtch, an associate professor in the Information and Decision Sciences Department at the Carlson School of Management at the University of Minnesota. The paper, "Embeddedness, Pro-Sociality, and Social Influence: Evidence from Online Crowdfunding," will be published in the journal MIS Quarterly.

It has to do with “embeddedness,” or how connected people on the network are to each other.

“So what is a friends’ network? Is this a network that’s built among friends, people who have many connections with each other?” said Hong, who also is co-director of the Digital Society Initiative in the W. P. Carey School of Business. “You can think about how many friends there are in common as the embeddedness measure.

“There’s also a network in which I’m connected to you and you’re connected to someone else, who is connected to someone else. It’s not a close network and embeddedness is not high.”
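One simple way to make embeddedness concrete is to count the friends two people have in common. This is an illustrative sketch with made-up names; the paper's exact measure may be more elaborate.

```python
# An undirected friendship network as adjacency sets (hypothetical data).
friends = {
    "ana":  {"ben", "cal", "dee"},
    "ben":  {"ana", "cal", "dee"},
    "cal":  {"ana", "ben"},
    "dee":  {"ana", "ben"},
    "elle": {"fay"},
    "fay":  {"elle"},
}


def embeddedness(a, b):
    """Number of friends a and b have in common."""
    return len(friends[a] & friends[b])


# ana and ben share cal and dee: a highly embedded tie.
# elle and fay share no one: a weak, non-embedded tie.
```

On this measure, the ana-ben tie scores 2 while the elle-fay tie scores 0, the difference between a close-knit friends' network and a chain of loose acquaintances.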

Yili Hong is an associate professor in the W. P. Carey School of Business.

Essentially, the ALS Ice Bucket Challenge succeeded because it was a perfect storm of a campaign with a pro-social message combined with networks that were highly embedded.

“If I’m not doing it, it might make me look bad, but if I do it there is some reputational gain,” he said.

“This effect wouldn’t work in the weak network because I wouldn’t care with these kinds of loose acquaintances.”

To test the hypothesis, the researchers compared pro-social Kickstarter campaigns with ones that sought to raise money to launch new products, like technology gadgets or video games, and examined how embedded the backers' networks were. They found that pro-social campaigns raised roughly twice as much money as private-product campaigns in embedded networks — which worked out to about $6,000 more raised over a 30-day campaign.

“While social media campaigns seem ubiquitous and as if they have been around forever, there is almost no research of this kind,” Hong said. Their research looked at data from 2014 to 2016 and included more than 1,000 Kickstarter campaigns. The team also used “text mining” to determine whether a campaign was pro-social by analyzing the words in its description.
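Keyword matching is one elementary form of such text mining. The word list and rule below are assumptions for illustration, not the authors' actual method, which likely used more sophisticated features.

```python
import string

# Hypothetical pro-social vocabulary; the study's real features are not public here.
PRO_SOCIAL_TERMS = {
    "charity", "donate", "donation", "community",
    "awareness", "nonprofit", "cause",
}


def looks_pro_social(description):
    """Flag a campaign description that mentions any pro-social term."""
    cleaned = description.lower().translate(
        str.maketrans("", "", string.punctuation)
    )
    return bool(set(cleaned.split()) & PRO_SOCial_TERMS if False else set(cleaned.split()) & PRO_SOCIAL_TERMS)


looks_pro_social("Help us raise awareness and donate to a local charity")  # True
looks_pro_social("A sleek new smartwatch with a week of battery life")     # False
```

A real classifier would be trained on labeled campaigns rather than a hand-picked word list, but the idea is the same: the description's vocabulary signals whether a campaign serves a cause or a product.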

The research results have implications for marketers to most strategically focus their efforts, Hong said.

“Where do we put advertising budgets? Facebook is more dense, more friends based, and Twitter is more information based,” he said.

Hong said that researching the nuances of social media is increasingly important.

“It’s something very different from what it was before,” he said. “It is influencing a lot of things — peoples’ purchasing behaviors, donation behaviors and even political views.

“And it’s exciting because those data are free as long as you have a way to write a program to capture them.”

Top photo courtesy of Pixabay.

Mary Beth Faller

Reporter, ASU Now


'Fooducate' yourself on the merits of mustard

In honor of National Mustard Day, learn about its history — and health benefits.
August 3, 2018

ASU nutrition professor Carol Johnston hails condiment for its versatility, taste and low calories

Stand aside, ketchup. Saturday, Aug. 4, is National Mustard Day.

Besides being a popular and versatile condiment, mustard is one of the world’s most ancient flavors. It's made from the ground seeds of the mustard plant, water and vinegar.

It’s also full of nutrients, can lower blood pressure and has anti-inflammatory properties. Who knew?

Carol Johnston, a professor and associate director of the nutrition program in the College of Health Solutions at Arizona State University, certainly does. ASU Now spoke to Johnston about mustard in honor of its biggest day of the year.

Carol Johnston

Question: What are the origins of mustard, and when did America first see it cropping up as a condiment?

Answer: Mustard is one of the ancient spices used to flavor foods — or perhaps more accurately, to "mask" undesirable tastes of rotting food in a day when food was not preserved well. As with other herbs and spices, mustard was used medicinally for thousands of years. It was used as an antiseptic and to treat congestion, colds and flu.

Q: How is mustard made?

A: Mustard is made by mixing the ground seeds of the mustard plant with liquid. In the U.S., the mild yellow mustard is most popular: finely ground yellow mustard seeds and the coloring spice turmeric are mixed with vinegar and water. Yellow mustard seeds are the mildest, while brown and black seeds are much hotter and more pungent. Vinegar, water, wine and verjuice (a sour juice obtained from crab apples, unripe grapes or other fruit, used in cooking and formerly in medicine) are common liquids. Spices include turmeric, cinnamon, ginger and nutmeg. There are many recipes available on the web.

Q: Does mustard have nutritional value?

A: Mustard has possible medicinal value, which complements the nutritional aspects of a meal. Recent research suggests that mustard has hypoglycemic and thermogenic properties. Its hypoglycemic properties may benefit those with diabetes or prediabetes, since its use on breads and sandwiches appears to lower the blood glucose spike following a meal. This effect is likely due to the vinegar content in the mustard. The thermic effect suggests that mustard consumption increases energy expenditure following a meal, which may help offset the calories consumed at mealtime. This is a phenomenon noted for other spices, such as chili peppers.

Q: Is there one mustard that is healthier than another, say processed versus mustard with seeds?

A: The different seeds, the degree that the seeds are ground, the liquid of choice — vinegar or wine, etc. — all add different qualities to the mustard in terms of texture and flavor. Experimenting with mustards and their uses as a condiment can add to the enjoyment of eating. Since there are minimal calories in mustards, the use of mustards in place of sauces and ketchups will reduce the calories in the meal. 

Q: How do you use mustard?

A: I have used mustard liberally on many foods — it is a "freebie" with lots of taste and no calories.

Oldest-ever igneous meteorite contains clues to planet building blocks

August 2, 2018

Scientists believe the solar system was formed some 4.6 billion years ago when a cloud of gas and dust collapsed under gravity, possibly triggered by a cataclysmic explosion from a nearby massive star or supernova. As this cloud collapsed, it formed a spinning disk with the sun in the center.

Piece by piece, scientists have been working on establishing the formation of the solar system with clues from space. Now, new research has enabled scientists Meenakshi Wadhwa and Daniel Dunlap at Arizona State University’s Center for Meteorite Studies in the School of Earth and Space Exploration, as well as researchers from the University of New Mexico and NASA’s Johnson Space Center, to add another piece to that puzzle with the discovery of the oldest-ever dated igneous meteorite.

The meteorite “Northwest Africa (NWA) 11119” was found in a sand dune in Mauritania. The rock is lighter in color than most meteorites and is laced with green crystals. Photo courtesy: ASU Center for Meteorite Studies

“The meteorite we studied is unlike any other known meteorite,” co-author Dunlap said. “It has the highest abundance of silica and the most ancient age (4.565 billion years old) of any known igneous meteorite. Meteorites like this were the precursors to planet formation and represent a critical step in the evolution of rocky bodies in our solar system.”

The research on this meteorite, published today in Nature Communications, provides direct evidence that chemically evolved, silica-rich crustal rocks were forming on planetesimals within the solar system's first 10 million years, prior to the assembly of the terrestrial planets, and helps scientists further understand the complexities of planet formation.

A meteorite laced with green crystals

The research began at the University of New Mexico (UNM) with a yet-to-be studied meteorite, called “Northwest Africa (NWA) 11119,” that was found in a sand dune in Mauritania. The rock is lighter in color than most meteorites and is laced with green crystals, cavities and quench melt, a type of rock texture that suggests rapid cooling and is often found in volcanic rocks which cool rapidly or “quench” when brought to the surface quickly.

Using an electron microprobe and a computed tomography (CT) scan at UNM and NASA’s Johnson Space Center facilities, lead author Poorna Srinivasan started to examine the composition and mineralogy of the rock. Srinivasan noted the intricacies of NWA 11119 including its unusual light-green fusion crust.

“The mineralogy of this rock is very, very different from anything that we've worked on before,” Srinivasan said. “I examined the mineralogy to understand all of the phases that comprise the meteorite. One of the main things we saw first were the large silica crystals of tridymite, which is similar to the mineral quartz. When we conducted further image analyses to quantify the tridymite, we found that the amount present was a staggering 30 percent of the total meteorite — this amount is unheard of in meteorites and is only found at these levels in certain volcanic rocks from the Earth.”

Video by University of New Mexico

Determining the age and origin of the meteorite

At ASU’s Center for Meteorite Studies, scientists and co-authors Dunlap and Wadhwa used inductively coupled plasma mass spectrometry in their Isotope Cosmochemistry and Geochronology Laboratory, which helped determine the precise formation age of the meteorite. The research confirmed that NWA 11119 is the oldest-ever igneous meteorite recorded at 4.565 billion years old.
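The specific chronometer the team used isn't named in this article, but every radiometric age rests on the same decay arithmetic. As an illustration only, the sketch below uses the uranium-238 decay constant; the isotope system and the ratio are assumptions for demonstration, not the study's measurements.

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of uranium-238, per year


def age_from_ratio(daughter_per_parent):
    """Age t from the accumulated daughter/parent ratio: D/P = e^(lambda*t) - 1."""
    return math.log(1.0 + daughter_per_parent) / LAMBDA_U238


def ratio_from_age(t_years):
    """Inverse: the daughter/parent ratio a closed system builds up in t years."""
    return math.exp(LAMBDA_U238 * t_years) - 1.0


# In this system, a daughter/parent ratio of about 1.03 corresponds
# to roughly 4.565 billion years.
age_gyr = age_from_ratio(ratio_from_age(4.565e9)) / 1e9
```

Mass spectrometry supplies the isotope ratios with high precision; the dating itself is then the logarithm above, which is why small ratio uncertainties translate into tight age brackets.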

“The purpose of this research was to understand the origin and formation time of an unusually silica-rich igneous meteorite,” said Wadhwa, who is the director of ASU’s Center for Meteorite Studies. “Most other known igneous asteroidal meteorites have ‘basaltic’ compositions that have much lower abundances of silica — so we wanted to understand how and when this unique silica-rich meteorite formed in the crust of an asteroidal body in the early solar system.”

Daniel Dunlap, a graduate student at the School of Earth and Space Exploration and co-author of this study, performed the isotope analyses for dating NWA 11119 in the Isotope Cosmochemistry and Geochronology Laboratory at ASU. Photo by Laurence Garvie/ASU

In addition, the research involved trying to figure out through chemical and isotopic analyses what body the meteorite could be from. Utilizing oxygen isotopes done in collaboration with co-author Karen Ziegler of UNM’s Center for Stable Isotope lab, the team was able to determine that it was definitely extraterrestrial.

“Based on oxygen isotopes, we know it's from an extraterrestrial source somewhere in the solar system, but we can't actually pinpoint it to a known body that has been viewed with a telescope,” Srinivasan said. “However, through the measured isotopic values, we were able to possibly link it to two other unusual meteorites (Northwest Africa 7235 and Almahata Sitta) suggesting that they all are from the same parent body — perhaps a large, geologically complex body that formed in the early solar system.”

One possibility is that this parent body was disrupted through a collision with another asteroid or planetesimal and some of its ejected fragments eventually reached the Earth’s orbit, falling through the atmosphere and ending up as meteorites on the ground — in the case of NWA 11119, falling in Mauritania at a yet unknown time in the past.

“The oxygen isotopes of NWA 11119, NWA 7235 and Almahata Sitta are all identical, but this rock — NWA 11119 — stands out as something completely different from any of the over 40,000 meteorites that have been found on Earth,” Srinivasan said.

Asteroids are the remains from the formation of the solar system, some 4.6 billion years ago. Photo: Artist’s rendition — University of New Mexico

Building blocks of planet formation

Most meteorites are formed through the collision of asteroids orbiting the sun in a region called the asteroid belt. Asteroids are the remains from the formation of the solar system, some 4.6 billion years ago. 

The chemical composition ranges of ancient igneous meteorites, or achondrites, are key to understanding the diversity and geochemical evolution of planetary building blocks. Achondrite meteorites record the first episodes of volcanism and crust formation, the majority of which are basaltic in composition.

“This research is key to how the building blocks of planets formed early in the solar system,” said co-author Carl Agee, director of UNM’s Institute of Meteoritics. “When we look out of the solar system today, we see fully formed bodies, planets, asteroids, comets and so forth. Then, our curiosity always pushes us to ask the question, how did they form, how did the Earth form? This is basically a missing part of the puzzle that we've now found that tells us these igneous processes act like little blast furnaces that are melting rock and processing all of the solar system solids. Ultimately, this is how planets are forged.”

The next steps for the ASU team are to detail the chronology of this meteorite (and related meteorites) with new isotopic measurements. These new data will help even more precisely determine the age of this unique meteorite and the implications for the evolution of rocky bodies in the early solar system.

Karin Valentine

Media Relations & Marketing manager, School of Earth and Space Exploration


Dietary competition played a key role in the evolution of early primates

August 1, 2018

Since Darwin first laid out the basic principles of evolution by means of natural selection, the role of competition for food as a driving force in shaping and shifting a species’ biology to outcompete its adversaries has played center stage. So important is the notion of competition between species, that it is viewed as a key selective force resulting in the lineage leading to modern humans.

The earliest true primates, called “euprimates,” lived about 55 million years ago across what is now North America. Two major fossil euprimate groups existed at this time: the lemur-like adapids and the tarsier-like omomyids. Dietary competition between these similarly adapted mammals was presumably equally critical in the origin and diversification of these two groups. Though it has been hinted at, the exact role of dietary competition and overlapping food resources in early adapid and omomyid evolution has never been directly tested.

Three models of niche competition between euprimates and non-euprimate mammals. Non-euprimates thrived across North America prior to euprimate arrival about 55 million years ago (large tree, left). After euprimate arrival (center column), these two groups could have: occupied separate niches with no competition (top row, right); occupied the same niche with one group ultimately displacing the other to reduce competition (middle row, right); or coexisted with minimal competition (bottom row, right).

New research published online Tuesday in the Proceedings of the Royal Society B led by Laura K. Stroik, an alumna of ASU’s School of Human Evolution and Social Change (SHESC) and currently assistant professor of biomedical sciences at Grand Valley State University, and Gary T. Schwartz, associate professor with SHESC and research scientist at ASU’s Institute of Human Origins, confirms the critical role that dietary adaptations played in the survival and diversification of North American euprimates.

“Understanding how complex food webs are structured and the intensity of competition over shared food resources is difficult enough to probe in living communities, let alone for communities that shared the same landscape nearly 55 million years ago,” Stroik said.

The researchers utilized the latest in digital imaging and micro CT scanning on more than 350 fossil mammal teeth from geological deposits in North America. They sought to quantify the 3D surface anatomy of molars belonging to extinct representatives of rodents, marsupials and insectivores — all of which were found within the same geological deposits as the euprimates and were thus likely real competitors.

The high-resolution scans allowed them to capture and quantify details of how sharp, cresty or pointy the teeth were. In particular, they looked at molars, or teeth at the back of the mouth, useful in pulverizing and crushing food or prey. The relative degree of molar sharpness is directly linked to the broad menu of dietary items consumed by each species.

Examples of micro-CT scans of molars and the types of measurements the researchers examined.
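The article doesn't spell out which topographic measures were used; one simple, widely used dental-topography metric in this spirit is the relief index, the ratio of a tooth crown's 3D surface area to its 2D projected (occlusal) area, so that sharper, crestier teeth score higher. A minimal sketch on a toy mesh with hypothetical geometry:

```python
import math

def tri_area_3d(a, b, c):
    """Area of a triangle from three 3D vertices (via the cross product)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def relief_index(vertices, faces):
    """Relief index of a crown mesh: 3D surface area divided by the
    area of its projection onto the occlusal (xy) plane."""
    area3d = sum(tri_area_3d(*(vertices[i] for i in f)) for f in faces)
    area2d = sum(tri_area_3d(*((vertices[i][0], vertices[i][1], 0.0)
                               for i in f)) for f in faces)
    return area3d / area2d

# Toy mesh: a single pyramid-shaped "cusp" over a unit square base
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 1.0)]
faces = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
print(round(relief_index(verts, faces), 2))
```

Real analyses run on micro-CT-derived meshes with thousands of faces, and related sharpness metrics such as Dirichlet normal energy are often computed alongside.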

Stroik and Schwartz used these aspects of molar anatomy to compute patterns of dietary overlap across key fossil groups through time. These results were then weighed against predictions from three models, drawn from theoretical ecology, of how species compete with one another. The signal was clear: Lineages belonging to the adapids largely survived and diversified without facing competition for food. The second major group, the omomyids, had to sustain periods of intense competition with at least one contemporaneous mammal group. Because omomyids persisted into more recent geological deposits, it is clear that they evolved adaptive solutions that allowed them to compete successfully.

"The results showed adapids and omomyids faced different competitive scenarios when they originated in North America," Stroik said. 

“Part of what makes our story unique is that for the first time we compared these fossil euprimates to a range of potential competitors from across a diverse group of mammals living right alongside adapids and omomyids, not just to other euprimates,” Schwartz said. “Doing so allowed us to reconstruct a far greater swath of the ecological landscape for these important early primate relatives than has ever been attempted previously.”

The key advance of this new research is the demonstration that diet did in fact play a fundamental role in the establishment and continued success of euprimates within the North American mammalian paleocommunity. An exciting outcome is the development of a new quantitative tool kit for diagnosing patterns of dietary competition in past communities. This will now allow the researchers to explore the role that diet and competition played in how some of these fossil euprimates continued to evolve and diversify to give rise to living lemurs and all higher primates.

Julie Russ

Assistant director, Institute of Human Origins


ASU researchers study how parent-child bond can affect children’s development

August 1, 2018

A baby girl sits on the floor, crying. A man picks up the child and attempts to soothe her by patting her back and quietly singing in her ear. The baby sighs and stops crying.

This scenario is an example of a secure attachment between a child and her father, but what if her parent or caregiver had not been there to offer comfort? What might happen to the quality of the relationship between the child and caregiver? The answers to these questions are unknown, but research from the Arizona State University Department of Psychology suggests myriad problematic outcomes in the development of a child are possible when a caregiving relationship is insecure.

The researchers quantified how much the quality of attachment in a caregiving relationship predicted the self-regulation abilities of children. Self-regulation is the ability to adjust behavior depending on the situation and is extremely important because it predicts performance in school, social behaviors, mental health and more.

Nancy Eisenberg, Regents’ Professor of psychology, performed the work with Susanna Pallini of the University of Roma Tre and researchers at Sapienza University of Rome. The study was published in Psychological Bulletin in May.

The project started with a question: What relation does a caregiving relationship have to a child’s self-regulation ability? The researchers focused on self-regulation because it appears to affect many, if not all, aspects of a child’s development.

“Self-regulation is so important for children’s socioemotional adjustment and relates to nearly everything important in child development, including problem behaviors and mental health, academic achievement, prosocial behaviors and sympathy for people, among other things,” Eisenberg said. “If self-regulation is compromised, a host of other skills are likely to be compromised.”

The researchers conducted a meta-analysis, which combines published studies and examines their data together. The first step of a meta-analysis is to identify which studies should be included: the researchers identified 106 published studies of children 18 years or younger that measured both the self-regulation of the children and the type of attachment in their caregiving relationships. In total, these studies contained data from 20,350 children.

Nancy Eisenberg, Regents’ Professor of psychology

The researchers classified the adult-child caregiving relationships in the data set based on the type of attachment, which is defined by how the caregiver and child perceive each other and by how they interact.

For older children, attachment can be measured using questionnaires, while for younger children, attachment is often measured with a series of structured separations and reunions between the child and caregiver. A securely attached child will cry or show signs of distress when separated from his or her parent or caregiver but is easily comforted when reunited. A child in a caregiving relationship defined by an insecure attachment might not react at all when the parent or caregiver departs or might overreact by becoming so upset that the caregiver cannot alleviate the child’s distress. A child with disorganized attachment often seems immobilized and shows little attachment behavior.

The next step in a meta-analysis is to aggregate and analyze the findings from the included studies. The researchers compared the self-regulation of children across the different types of attachment. Children with secure caregiving relationships had stronger self-regulation compared with those with insecure attachments. Also, children with disorganized attachments were lower in self-regulation than the other groups of children combined. The association between the quality of the caregiving relationship and the child’s self-regulation ability was modest but robust.
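The article doesn't show the aggregation step itself; a standard approach for pooling correlations across studies is to Fisher z-transform each one and take an inverse-variance weighted average. A minimal sketch with hypothetical study data:

```python
import math

def pooled_correlation(results):
    """Fixed-effect inverse-variance pooling of correlations.
    results: list of (r, n) pairs, one per study. Each r is Fisher
    z-transformed; the weight n - 3 is the inverse of the
    z-transform's sampling variance."""
    num = den = 0.0
    for r, n in results:
        z = math.atanh(r)        # Fisher z-transform
        w = n - 3                # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform to a correlation

# Hypothetical studies: (correlation, sample size)
studies = [(0.20, 150), (0.15, 400), (0.30, 90)]
print(round(pooled_correlation(studies), 3))
```

Note how the largest study (n = 400) pulls the pooled estimate toward its own correlation; real meta-analyses typically also report heterogeneity and may use random-effects weights instead.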

“We found the children with secure attachments had stronger self-regulation abilities,” Eisenberg said. “Although differences in children’s self-regulation are partially due to biological factors, including genetics, the environment also plays a big role. Self-regulation can also be affected by the caregiving relationship, and the security of a caregiving relationship can be improved through interventions that train parents to be more responsive and sensitive to their children.”

This meta-analysis looked for correlations, or relations, between attachment and self-regulation in children. The researchers did not change attachment style to test whether the type of attachment in a caregiving relationship actually caused differences in the self-regulation of children. Because the researchers found children with insecure or disorganized attachments had less self-regulation ability, it is possible that if a caregiving relationship were to become insecure — from a traumatic event, prolonged separation or other interruption — the self-regulation abilities of the child would suffer.

“We reported only a correlation between attachment and self-regulation,” Eisenberg said. “But other research indicates that separation can affect attachment security. If attachment has a causal effect on self-regulation, then the children experiencing traumatic separations could have many developmental problems.”

Eisenberg and her collaborators have just finished an additional meta-analysis that investigates the relation between attachment style and attention problems in children. The findings are similar — securely attached children have fewer attention problems — and the pattern of findings emphasizes the importance of stable and secure caregiving relationships for children.

Research Assistant Professor, Psychology Department


Monsoon rains found to be beneficial to underground aquifers

August 1, 2018

The summer monsoon in the deserts of the southwestern U.S. is known for bringing torrents of water, often filling dry stream beds and flooding urban streets. A common misconception when observing the fast-moving water generated by monsoon storms is that most of it is swept away into large rivers, with very little percolating into underground aquifers.

New research dispels that myth.

Field research, Chihuahuan Desert, New Mexico

Using a combination of field instrumentation, unmanned aerial vehicles and a hydrologic model, a team of researchers from Arizona State University and the Jornada Long-Term Ecological Research Program of the National Science Foundation has been studying the fate of monsoon rainfall and its impact on groundwater recharge in the Chihuahuan Desert of New Mexico. 

Their findings, recently published in the journal Water Resources Research, show that a surprising share of monsoon rainfall, nearly 25 percent, is absorbed into small stream beds and percolates into the groundwater system. The researchers identified the factors affecting the percolation process using a numerical model that reproduced the long-term observations obtained at a highly instrumented research site.

“The results of this study show that monsoon storms serve an important role in recharging groundwater aquifers near the point of runoff generation,” said ASU hydrologist Enrique Vivoni of the School of Earth and Space Exploration and the School of Sustainable Engineering and the Built Environment. “This is an essential process that banks renewable surface water for future use as a groundwater resource in the arid Southwest and throughout the world.”  

Eight years of field work leads to novel insights

From 2010 to 2018, the team, which included several undergraduate and graduate students from ASU and collaborators from New Mexico State University and the U.S. Department of Agriculture, collected data from a watershed monitoring network established at the Jornada Experimental Range in New Mexico. They focused specifically on measuring hydrologic and ecologic conditions on piedmont slopes, locally known as “bajadas,” which connect mountain ranges to river valleys, but have often been ignored as sources of groundwater recharge.

Adam Schreiner-McGraw, currently a postdoctoral fellow at the University of California, Riverside and lead author of the study, was a graduate student at ASU’s School of Earth and Space Exploration when the published research was conducted. Schreiner-McGraw visited the watershed site every three weeks for over six years to collect hydrologic data, maintain the extensive instrument network and carry out the site sampling needed to set up and test the hydrologic model used to gain further understanding of the field conditions.

“In hydrology,” said Vivoni, “you have to wait for certain conditions to happen. In this study, we benefitted from having a sequence of wet summer monsoons deliver above-average rainfall.” 

During this time, the team collected high-resolution data on rainfall, streamflow, soil moisture and evapotranspiration using a variety of instruments operating in a coordinated manner. Using long-term data from these sensors, Schreiner-McGraw found that a large fraction of the incoming rainfall, especially during wet monsoons, was being lost neither to the atmosphere via evapotranspiration nor from the channel system as streamflow. Instead, runoff was percolating into the ground through small channels no more than two feet wide — an unexpected finding.
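The article doesn't report the team's actual totals; the bookkeeping behind such an inference is a simple water balance, in which percolation is the residual after measured losses are subtracted from rainfall. A sketch with hypothetical seasonal values chosen to illustrate the roughly 25 percent figure:

```python
# Hypothetical seasonal totals (mm) for a small desert watershed;
# these numbers are illustrative, not from the study
rainfall = 220.0             # monsoon-season precipitation
evapotranspiration = 150.0   # loss to the atmosphere
streamflow_out = 10.0        # discharge leaving the channel system
storage_change = 5.0         # change in soil moisture storage

# Closing the water balance: whatever is not accounted for by ET,
# outflow or storage is attributed to channel percolation
percolation = rainfall - evapotranspiration - streamflow_out - storage_change
print(f"{percolation:.0f} mm ({percolation / rainfall:.0%} of rainfall)")
# → 55 mm (25% of rainfall)
```

In practice each term carries measurement uncertainty, which is why the team also tested a hydrologic model against the same long-term records.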

Simulating where the water was going — an improved hydrological model

By tracking the fate of monsoon rainfall, the research team set out to explain how hillslopes and channels of the piedmont slope might lead to groundwater recharge.

“Soils on hillslopes are very different than those in the channels,” Schreiner-McGraw said. “They are compact and do not absorb water very quickly, and they also have calcium carbonate layers about 12 to 20 inches below the surface that limit infiltration. Channels, on the other hand, have coarse and permeable sediments that can absorb water much more quickly.”  

This information was used to modify a hydrological model of the instrumented watershed, originally developed during Vivoni’s graduate studies at the Massachusetts Institute of Technology. Based on their field work, the research team tested the model against a suite of long-term data, including evapotranspiration, soil moisture, streamflow and percolation.

“It is uncommon to have a hydrologic model tested so thoroughly,” Vivoni said. “By performing iterations of field observations and model developments, we demonstrated the value of long-term research.”

The research team then used the numerical model to isolate two important factors affecting the percolation process: the infiltration properties of hillslopes and of channel reaches. Simulations indicated diverging effects of these factors on the proportion of rainfall that recharges groundwater systems. These findings are applicable to arid piedmont slopes anywhere on Earth. “Understanding the groundwater recharge process in arid regions can help us sustainably manage groundwater use in these climate settings,” Schreiner-McGraw said.

With water becoming an increasingly precious resource, a better understanding of how groundwater is recharged could help communities across the globe.

“Groundwater is much like a bank account,” Vivoni said. “Underground aquifers can store water delivered from surface systems which can be subsequently extracted under periods of water scarcity.”  

Effects of vegetation change through the land-water connection

The Chihuahuan Desert, like many areas in the southwestern U.S., has experienced a transition in vegetation communities from grasslands to shrub lands.

“We have historically used large open areas of the western U.S. and northern Mexico for livestock grazing,” Vivoni said. “As a result, many grasslands have disappeared and been replaced by desert shrubs.”

In addition, drought and fire suppression have contributed to the conversion of grasses to shrub lands. 

An open question remains as to whether this transition has impacted the groundwater recharge process in piedmont slopes.

“We have examined how the instrumented watershed contributes to groundwater recharge under current conditions,” Schreiner-McGraw said. “The next step in the research is to determine how these contributions would be altered under different plant communities.”

Here, the hydrological model will be employed as a numerical laboratory to determine how vegetation changes alter groundwater recharge, for instance under a historical grassland scenario or a case with desertification and the absence of vegetation.

“The future of water resources for humans and wildlife is uncertain,” said John Schade, director of the National Science Foundation’s Long-Term Ecological Research program, which funded the research. “Studies like this are essential to proper water management in the face of rapid environmental change, especially in arid lands where water is scarce. This study is an example of the critical role long-term research plays in uncovering what controls the availability of freshwater. It advances our ability to forecast how freshwater availability will change in the years and decades to come.”

Karin Valentine

Media Relations & Marketing manager, School of Earth and Space Exploration


ASU research demonstrates silicon-based tandem photovoltaic modules can compete in solar market

Nature Energy features ASU study that charts the acceptable trade-off between improved solar technology costs and efficiency

July 30, 2018

New solar energy research from Arizona State University demonstrates that silicon-based tandem photovoltaic modules, which convert sunlight to electricity with higher efficiency than present modules, will become increasingly attractive in the U.S.

A paper that explores the costs vs. enhanced efficiency of this new solar technology appears in Nature Energy this week. The paper is authored by ASU Ira A. Fulton Schools of Engineering Assistant Research Professor Zhengshan J. Yu, graduate student Joe V. Carpenter and Assistant Professor Zachary Holman.

ASU Assistant Research Professor Zhengshan Yu addresses how current solar cell technologies are reaching the limits of efficiency. Photo courtesy of ASU Holman Lab

The Department of Energy’s SunShot Initiative was launched in 2011 with a goal of making solar cost-competitive with conventional energy sources by 2020. The program attained its goal of $0.06 per kilowatt-hour three years early, and a new target of $0.03 per kilowatt-hour by 2030 has been set. Increasing the efficiency of photovoltaic modules is one route to reducing the cost of solar electricity to this new target. If the target is reached, the amount of solar installed in the U.S. in 2030 is expected to triple compared with the business-as-usual scenario.

But according to Holman, “the dominant existing technology — silicon — is more than 90 percent of the way to its theoretical efficiency limit,” precipitating a need to explore new technologies. More efficient technologies will undoubtedly be more expensive, however, which prompted the paper co-authors to ask, “Does a doubling of module efficiency warrant a doubling of cost?”

Tandem modules stack two, complementary photovoltaic materials — for instance, a perovskite solar cell atop a silicon solar cell — to best use the full spectrum of colors emitted by the sun and exceed the efficiency of either constituent solar cell on its own. The study was designed to determine how much more expensive high-efficiency tandem photovoltaic modules can be and still compete in the evolving solar marketplace. 

ASU Assistant Professor Zachary Holman reflects on the efficiency of new solar technologies vs. the costs. Photo by Deanna Dent/ASU Now

Results indicate that in the expected 2020 U.S. residential solar market, anticipated 32-percent-efficient tandem modules can cost more than three times as much as projected 22-percent-efficient silicon modules and still produce electricity at the same cost. This premium, however, is a best-case scenario that assumes the energy yield, degradation rate, service life and financing terms of tandem modules are similar to those of silicon modules alone. The study also acknowledges that cost premium values will vary by region.
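The article reports the premium without the underlying arithmetic. A much-simplified sketch (all dollar figures hypothetical; the study itself also accounts for energy yield, degradation and financing) shows why the allowable premium can exceed the bare efficiency ratio of 32/22 ≈ 1.45: area-related balance-of-system costs are spread over more watts as efficiency rises.

```python
# Hypothetical area-normalized costs (USD per square meter)
bos_area_cost = 250.0    # area-scaling balance-of-system cost
si_module_cost = 70.0    # 22%-efficient silicon module
eff_si, eff_tandem = 0.22, 0.32

def cost_per_watt(module_cost, eff, insolation=1000.0):
    """System cost per watt under standard 1000 W/m^2 irradiance."""
    return (module_cost + bos_area_cost) / (eff * insolation)

# Largest tandem module cost that still matches the silicon system's
# cost per watt
target = cost_per_watt(si_module_cost, eff_si)
tandem_module_cost = target * eff_tandem * 1000.0 - bos_area_cost
premium = tandem_module_cost / si_module_cost
print(f"Allowable tandem module cost premium: {premium:.1f}x")
```

With these illustrative numbers the allowable premium comes out near 3x, roughly double the efficiency ratio, because the fixed per-area costs dilute the effect of the pricier module.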

“Our previous study defines the technological landscape of tandems; this study paints the economic landscape for these future solar technologies that are only now being created in labs,” Yu said. “It tells researchers how much money they’re allowed to spend in realizing the efficiency enhancements expected from tandems.”

Holman’s research group is a leader in silicon-based tandem photovoltaic technologies, having held the efficiency world record in collaboration with Stanford University for a perovskite/silicon tandem solar cell until last month. As the team strives to reclaim the record while sticking to inexpensive materials and simple processes, it now knows that its innovations will likely find their way to a U.S. rooftop.

Terry Grant

Media Relations Officer, Media Relations and Strategic Communications


Plants produce ‘green vaccine’ against norovirus in ASU study

July 27, 2018

Each year, close to 700 million people are stricken with a viral infection that causes vomiting, diarrhea and stomach pain. While the majority will recover in a few days, some 200,000 infected patients will die. The culprit is known as norovirus — often referred to as "the cruise ship illness." Currently, no recommended treatments or vaccines are available.

In a new study, Andrew G. Diamos and Hugh S. Mason of the Biodesign Center for Immunotherapy, Vaccines and Virotherapy describe a trial vaccine against norovirus. The innovative therapeutic is produced using a plant-based system, which offers many advantages over traditional routes of pharmaceutical production.

Leaves of the tobacco plant are treated with an engineered viral vector through the process of agroinfiltration, which induces the leaves to produce virus-like particles capable of provoking a strong protective immune response to norovirus infection. Graphic by Shireen Dooling/ASU Biodesign Institute

The study demonstrates that the new plant system for norovirus vaccine production is effective against the tenacious pathogen and that the versatile method could be used for the development of a broad range of novel vaccines. It is estimated that an effective vaccine against gastroenteritis could save billions of dollars in healthcare costs in the U.S. alone.

The researchers outline a technique to produce so-called virus-like particles (VLPs) — portions of the virus able to provoke a strong immune response, without the disease-causing properties of whole norovirus.

"The beauty of VLPs is that they are very simple, but also very effective vaccines. A VLP is comprised of only a single protein, repeated many times: 180 identical copies of this protein self-assemble inside the plants to make the entire structure of the virus,” Diamos said. “Because this structure looks just like the real virus to our immune systems, it makes an excellent vaccine. However, since the VLP is just an empty shell without any viral genes inside, it has no potential to cause infection. Coupled with the inherent safety of plants, this makes plant-made VLP vaccines among the safest known vaccines."

The new study describes the production of virus-like particles derived from norovirus at more than three times the level previously reported in plant-based systems, allowing the production of milligrams of pure, fully-assembled norovirus particles from a single tobacco leaf. Efforts to optimize the VLP expression system also significantly reduced cell death in treated plant leaves.

The group’s findings recently appeared in the journal Protein Expression and Purification.

Insidious invader

Norovirus is the most common cause of gastroenteritis. The highly contagious pathogen, commonly spread during winter months, is usually transmitted through contaminated food or water or through person-to-person contact. Symptoms, including severe diarrhea, stomach pain and vomiting, typically occur within 12 to 48 hours of infection. Dehydration is common.

The aggressive virus is sometimes referred to as “the perfect human pathogen.” It remains highly stable in the environment, can cause infection at very low doses and is shed in large quantity by those infected. Norovirus induces very limited immunity after natural infection, as the virus is rapidly evolving.

Norovirus spreads readily in densely populated, contained areas like ships, though it is a ubiquitous scourge in both developed and developing countries. Norovirus is known to spread through facilities like hospitals, schools and military bases with ferocious speed — often requiring lengthy and challenging decontamination measures.

As the authors note, plant-based vaccines offer a safe, convenient and cost-effective means of producing novel vaccines for a range of biological threats, including norovirus. They hold advantages over traditional vaccines produced in mammalian or insect cell systems in terms of cost, efficiency and safety.

Plants to the rescue

In earlier years, plant expression systems for vaccine development were overwhelmingly based on transgenic plants, with permanently altered or recombinant DNA residing in the cell nucleus or chloroplasts. However, transient plant alterations based on infection with viral vectors have now become the norm and have improved convenience and speed, allowing researchers to produce large quantities of protein used for the vaccine in just days. Protein expression in transgenic plants typically requires months.

The most commonly used viral vector has been the tobacco mosaic virus, though the authors demonstrate substantially better results with their system based on the bean yellow dwarf virus as a vector, which is less toxic and damaging to infected leaves.

The vector-based technique has been used to produce vaccines against swine flu, bird flu and many other leading infectious diseases. The vaccines can typically be produced in weeks rather than months, a critical advantage in the case of sudden disease outbreaks, where time is of the essence.

Building a vector

The strategy for producing virus-like particles for a norovirus vaccine occurs in several stages. The re-engineered bean yellow dwarf virus contains genes for producing the norovirus VP1 capsid protein in tobacco leaves. This vector is introduced into the bacterium Agrobacterium tumefaciens, typically by electroporation, a technique that uses electric fields to increase cell permeability.

Leaves of the target tobacco plant are then infiltrated with a solution containing the vector-bearing bacteria. When tobacco leaves are exposed to this viral vector system through the process of agroinfiltration, they respond by producing norovirus-derived virus-like particles, which self-assemble in the leaves. Essentially, tobacco leaves infected with the bean yellow dwarf virus are induced to act as a production factory for the virus-like particles used in the vaccine.

The virus-like particles produced have structural and antigenic characteristics that are recognized by the body as though they are complete norovirus virions, though disease-causing viral components are absent. The resulting vaccine is effective at stimulating both humoral and cellular immunity and priming the body against a subsequent infection by norovirus.

Versatile vaccines

Tobacco plants are hardy and reproduce rapidly, allowing vaccine production to be scaled up according to need, and plant-made vaccine candidates do not require the advanced sterilization and purification steps typical of mammalian and insect cell culture vaccines. Further, producing virus-like particles rather than whole viruses avoids the use of infectious agents in the production process, improving safety.

One challenge facing researchers producing plant-made vaccines is the damage caused to tobacco leaves by the virus used to infect them. The new technique incorporates a number of design improvements in the bean yellow dwarf virus vector, reducing cell death and boosting virus-like particle yield by 65 percent.

With careful optimization of extraction conditions, the group produced norovirus virus-like particles with 90 percent purity, with no losses in yield. The method enables the production of milligram quantities of virus-like particles from a single plant leaf, outpacing earlier efforts by two to three times. The use of bean yellow dwarf virus for the expression system is also desirable as this plant virus has a broad host range and can be used for protein expression in a large number of plant species.

Richard Harth

Science writer, Biodesign Institute at ASU