
ASU honors fallen Sun Devils with memorial wall

November 3, 2017

Howard Draper held his hand up to the wall of 134 names, next to his uncle’s name, which he shares.

“This is the kind of thing he would have really loved,” Draper said of his uncle, who was killed on the Aisne Plateau in northern France in 1918. He was 21.

Draper's uncle and 133 other Sun Devils who gave their lives for their country were honored permanently when the Arizona State University Memorial to Fallen Alumni was dedicated today.

They died in places like Caballo Island in the Pacific, Takoradi in British West Africa, Kon Tum in Vietnam, Fallujah in Iraq and Soltan Kheyl in Afghanistan. While they were killed on the far sides of the world, what they had in common with each other, and us, is that they walked over the lawns and under the palms of ASU.

 

Howard Draper and his son Jeff attended from Modesto, California. They were among eight families at the dedication who had a son, brother, uncle or father with a name on the wall.

Draper said his uncle would have been tickled “that they would be doing something like this.”

He speculated that Howard might have gone into politics, had he survived. He wrote patriotic essays and jumped at the chance to enlist. In Arizona, he was remembered well after the war, placed with Frank Luke in the Grand Canyon State’s pantheon of military heroes and cited in speeches.

“I just wonder what he could have done,” Draper said.

Jeff Draper, the fallen soldier's great-nephew, glanced at the wall, which is just outside the Pat Tillman Veterans Center in the Memorial Union. "I kind of wish his name wasn't up there," he said.

Patrick Kenney, dean of the College of Liberal Arts and Sciences, spoke at the dedication.

“The Memorial Union bears that name for a reason, and this memorial is a testament to that,” Kenney said.

Kenney's father was a tail gunner in a B-24 over the Pacific during World War II. Kenney grew up hearing stories about "The War," as it was always referred to by the Greatest Generation, the Baby Boomers and Generation X. He was also an altar boy in the 1960s, serving at Mass for three soldiers from his hometown who were killed in Vietnam.

"It was a very small parish, so everyone knew each other," Kenney said. "Those moments are seared into me."

Retired Navy Capt. Steven Borden, director of the Pat Tillman Veterans Center, pointed out that some of the people honored on the wall went through ASU’s ROTC program.

"It is a stark reminder of individuals who have left this institution and endeavored to make a difference," Borden said.

Support for the Memorial to Fallen Alumni was raised, in part, by the ASU Alumni Association Veterans Chapter and donors to PitchFunder, the ASU Foundation-led crowdfunding program designed to empower the ASU community to raise funds for projects, events and efforts that make a difference locally and across the globe.

Top photo: The Veterans Memorial Wall is located near the Pat Tillman Veterans Center in the Memorial Union, Friday, Nov. 3. The wall is dedicated to the 134 Sun Devils who made the greatest sacrifice for their country. Photo by Charlie Leight/ASU  

Scott Seckel

Reporter, ASU Now

480-727-4502

 

AI taught itself to beat us at our own game — what does it mean?

November 3, 2017

Q&A with ASU computer science professor provides a glimpse into the future

Smart just got beaten by smarter, and it taught itself.

Two weeks ago Google DeepMind announced that the artificial intelligence program AlphaGo Zero soundly beat all previous versions of AlphaGo in the ancient Chinese board game Go, teaching itself to become the best Go player ever, human or machine, in just 40 days. 

Previous versions of AlphaGo were trained on thousands of human amateur and professional games of Go to learn what humans required 3,000 years to master. AlphaGo Zero had only the rules of Go to work with, mastering the game without human assistance by playing itself.
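For readers curious what "learning purely from self-play" looks like in outline, the toy sketch below shows the general shape of such a loop: the current network plays games against itself, and the outcomes of those games become its only training signal. Everything here — the GoGame, PolicyValueNet and select_move placeholders, the toy termination and scoring rules — is an illustrative assumption for this article, not DeepMind's published system, which pairs a deep neural network with Monte Carlo tree search.

# Hypothetical sketch of a self-play training loop in the spirit of AlphaGo Zero.
# All components below are illustrative placeholders, not DeepMind's implementation.

import random

class GoGame:
    """Stand-in for the rules of Go: legal moves, state transitions, scoring."""
    def __init__(self):
        self.history, self.done = [], False
    def legal_moves(self):
        return list(range(361))               # 19x19 board positions (simplified)
    def play(self, move):
        self.history.append(move)
        self.done = len(self.history) >= 50   # toy termination condition
    def winner(self):
        return random.choice([+1, -1])        # stand-in for real scoring

class PolicyValueNet:
    """Placeholder network: returns move preferences and a value estimate."""
    def evaluate(self, game):
        moves = game.legal_moves()
        return {m: 1.0 / len(moves) for m in moves}, 0.0
    def train(self, examples):
        pass                                  # gradient updates would go here

def select_move(net, game):
    # AlphaGo Zero guides a tree search with the network;
    # here we simply sample from the network's move preferences.
    priors, _ = net.evaluate(game)
    return random.choices(list(priors), weights=priors.values())[0]

def self_play_iteration(net, n_games=10):
    """Generate games by playing the current network against itself,
    then train on the outcomes -- no human games involved."""
    examples = []
    for _ in range(n_games):
        game, states = GoGame(), []
        while not game.done:
            move = select_move(net, game)
            states.append((list(game.history), move))
            game.play(move)
        result = game.winner()
        examples.extend((state, move, result) for state, move in states)
    net.train(examples)

net = PolicyValueNet()
for iteration in range(3):
    self_play_iteration(net)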

Some experts believe the victory moves the needle on AI, ushering in a new AI-driven industrial revolution, while others worry about a robot uprising that will threaten people’s jobs and security.

Arizona State University's Subbarao Kambhampati, a professor of computer science in the Ira A. Fulton Schools of Engineering, works in artificial intelligence and focuses on planning and decision-making, especially in the context of human-machine collaboration. As president of the Association for the Advancement of Artificial Intelligence, Kambhampati provides a glimpse into the future in this Q&A with ASU Now.


Subbarao Kambhampati

Question: What does the ability of AlphaGo Zero to teach itself to play the game of Go at a superhuman level tell us about the state of artificial intelligence?

Answer: AlphaGo Zero is an impressive technical achievement, inasmuch as it learns the game of Go purely from self-play, without any human intervention. It is, however, still an example of narrow AI. While we romanticize ability in Go and chess as a sign of high intellect, the games don’t actually have that much in common with the real world. For example, there aren’t that many real-world scenarios or tasks for which unlimited self-play with a perfect simulator is feasible — something AlphaGo Zero depends on.

Because of this, human learning is forced to be a lot more parsimonious in terms of examples, depending instead on background knowledge accumulated over a lifetime to analyze single examples more closely.

Q: What do we as humans have to gain in creating intelligence smarter than us?

A: I think we have always worked on creating machines better than us in narrow spheres. We don’t compete with calculators in arithmetic, or with cranes in lifting weights. We found ways to augment our overall abilities with the help of these specialized superhuman machines. So it goes with intelligent systems specialized to narrow areas. For example, image-recognition systems that can read radiology images better than humans can be used to help improve the diagnostic capabilities of human doctors.

Q: Experts seem to disagree on whether AI is dangerous to the economy and to mankind. What is your perspective?

A: I find the “AI as a threat to humankind” arguments advanced by the likes of Elon Musk and Nick Bostrom rather far-fetched. These "Terminator" scenarios often distract attention from the more important discussions we need to have about the effects of increased autonomy and automation on our society.

It is increasingly clear that AI will have a big impact on many types of routine jobs — whether they are blue collar or white collar. We need to educate the public about this and provide retraining opportunities for those affected by the job displacement.

Q: Is there something we can do to develop AI technology that will work with us rather than replace or threaten us?

A: For much of its history, AI has focused on autonomy and surpassing humans in various tasks, rather than on the far more important, if less glamorous, goal of cooperation and collaboration with humans. I believe that AI research should focus a lot more on human-aware AI systems that are designed from the ground up to collaborate with us. After all, it is this ability to work together, rather than the ability to play a game of Go, that is the true hallmark of our intelligence (even if we tend to take it for granted).

To do this well, AI systems need to learn and use mental models of the humans they work with, and take aspects of social and emotional intelligence much more seriously. I joke that these aspects have been the Rodney Dangerfield of AI research — they weren’t given much respect. This is why human-aware AI is the main focus of our research group at ASU.

 

Top photo illustration courtesy of Pixabay.