Ask most Americans about the periodic table and you’re likely to get vague descriptions of rarely used furniture or perhaps faint memories from a high school classroom. Even here in Charlottesville, science is a murky mystery, the domain of white coat-wearing magicians. Yet over on the west side of town sits one of the most active, prestigious hubs of scientific research in the world. Conversations with five local science types reveal that scientists are people, too—no Coke bottle glasses or twitchy Mr. Hyde types here—and that the layperson can actually understand groundbreaking science, as long as a researcher explains it patiently.
In August of 1984, Dr. Joseph “Pepe” Humphrey and his 8-year-old daughter Fiona were wandering along a beach in Portugal when Fiona stopped and remarked, “It’s raining spiders!” He looked up and, sure enough, spiders were ballooning in from the sea, landing on the sand, bushes and rocks. Upon closer inspection, Humphrey and his daughter observed that when the spiders landed, they turned to face the breeze and lifted two of their eight legs. They then released a silk filament that would balloon up once more and, “Poof! They would take off again!”
“Ballooning has been known for thousands of years,” Humphrey explains today. “But that was our first observation of it… At that point I realized that I would not be happy until I had read everything there was to know about that topic.”
Twenty years later, the memory of that eureka moment still prompts enthusiasm from Humphrey. His life’s work, researching arthropod sensors, was shaped by the question he asked himself that day on the beach: How do those spiders know the breeze is blowing?
Dressed in a green polo shirt and pressed khakis with a cell phone clipped to his belt, 54-year-old Humphrey holds the Wade Professorship in UVA’s school of engineering and applied science and is chairman of the department of mechanical and aerospace engineering. An engineer with a biologist’s heart, he keeps a corner office that is simple, even Spartan, with cream-colored cinderblock walls and two windows covered by metal Venetian blinds. With a neatly trimmed gray beard and glasses, he sits holding a children’s picture book titled The Book of Spiders that he’s pulled off the nearest of his five large bookcases. The cover illustration is a close-up of a large, hairy tarantula. As he discusses the wonders of spiders, he excitedly points out different magnified sensory hairs on the arachnid.
“If a spider is sitting on a table and a cockroach walks across a table on the other side…every time the cockroach puts its leg on the table… the spider senses it,” Humphrey says. “All this information enters the nervous system of the animal and the animal behaves accordingly.”
The spider then knows whether to turn right, to turn left, to jump, to run and hide or to attack, leading, as Charles Darwin taught, to the survival of the species. How these sensors work together in a system that provides the arachnid all it needs to know about its environment is Humphrey’s engineering problem.
Hairs on a spider measure approximately 10 microns in diameter and 100 microns long (a micron is one millionth of a meter). The sensors themselves, located at the base of each hair, are 100 times smaller still.
Humphrey’s goal is to take the principles of these sensors and duplicate them in the lab. While he admits that it would be impossible to replicate them perfectly, he says, “I can pick up the fundamental principle.” By understanding how the spiders’ sensors work and by using “my albeit limited engineering technology,” Humphrey is currently working to produce replica sensors with colleagues at UVA and at the University of Connecticut.
Along with spiders, Humphrey researches the sensory systems of crayfish and moths. All three animals are arthropods, a group distinguished by segmented bodies and jointed legs. Apart from their sensors, these animals are uniquely suited to his research for two other reasons. First, arthropods have been around for hundreds of millions of years (400 million years in the case of spiders) and thus predate dinosaurs. This, marvels Humphrey, “means that they got it right very early on.”
The second reason is logistical. Arthropods aren’t like cats or dogs or monkeys: They are easy to raise in a lab and, as Humphrey puts it, “Who cares about cockroaches?” Adding after a slight pause, “Which happen to be extremely interesting animals.”
For the first 12 years of his life, Humphrey was raised in Cuba, fishing, collecting shells and swimming in the sun. He traces his interest in biology to this early exposure to nature. His mother, whose parents had moved from Maine to Cuba in the 1920s, was American, and his father, Spanish. In 1960, the family lost all their property and was “politely asked to leave” the country when Castro collectivized the land. From there, the family scattered. Humphrey moved to Barcelona where he spent the next 10 years. He later earned his undergraduate degree in Spain, his master’s degree in Canada and his Ph.D. in England. His nickname, “Pepe,” is “Joe” in Spanish.
Humphrey accepted a position at the University of California at Berkeley in the late 1970s and later spent time at the University of Arizona at Tucson and Bucknell. He arrived at UVA in 2000. Charlottesville, he notes quietly, has the most beautiful moths he’s ever seen: “Big ones, little ones, colored ones. Incredible.”
The sensors Humphrey hopes to build will monitor—as they do on spiders, moths and crayfish—the health of our environment. Extremely small, mass-produced, inexpensive and replaceable, the sensors could be strategically located all over a room, able to “beam their information to a central processing system that would inform you about the health of that system,” he explains.
Practically speaking, this means sensors in a building in, say, California, could detect an earthquake’s subtle tremors far underground, or similarly, they could inform people if “a terrorist dumped a harmful gas through one of the vent ducts…The animals are tremendous models,” concludes Humphrey. “But that’s the ultimate goal: To get this to work for society.”—Nell Boeschenstein
Dr. Angeline Lillard’s office in UVA’s Gilmer Hall is filled with props. The prominent psychologist enthusiastically uses a troll doll, fake sushi roll and an Altoid tin to help illustrate several concepts from her research in child psychology.
Lillard’s toy props are particularly apt: in addition to adeptly illuminating complex ideas, they reflect her central claim that pretense, or using make-believe objects and actions, is key to how children learn to communicate, to navigate the idiosyncrasies of the world, even to first grasp that people have interior intellectual lives.
“We’re the only species that pretends like we do,” Lillard says. “It seems to be at the root of all these fundamental activities that make us what we are.”
Lillard says children usually learn to pretend during infancy, with a “huge jump” in playacting occurring between 18 and 24 months of age. From age 2 until elementary school, children will spend 20 percent to 40 percent of their free time pretending, whether playing with plastic animals or acting like firefighters.
While a child’s make-believe games may seem silly, they may be techniques for sorting out emotional issues or, Lillard says, of running through the “scripts” of life’s activities—from eating dinner to raising kids.
Lillard thinks we continue our childhood pretending in adulthood by “engaging in fictional worlds” through novels, movies, music and art. In fact, much of life is marked by symbolism and representational reality. By viewing the world through pictures and words on the Internet or in newspapers, we often learn through interpretation, not through actual experiences.
The Early Social Cognition Laboratory at UVA, Lillard’s lab, is striving to learn how parents teach babies to pretend, and how well children understand pretend acts.
Using an Altoid box, Lillard explains one mysterious finding. If asked what’s inside the tin, a 3-year-old child will answer that it contains mints. However, when Lillard cracks open the little tin, it’s filled with rubber bands and a thumbtack. She says that even after seeing the tin’s contents, a child will often continue to assume that the closed tin contains mints. Lillard says this common response is particularly confounding because these same young children have usually learned to pretend, meaning that they can already substitute false representations of the world for real ones.
In another example of children’s puzzling mental development, Lillard holds a card displaying a picture of a horse. When she holds it out flat and parallel to the ground, the horse appears right side up to a child sitting across from her, while Lillard sees it upside down. Yet many young children think that Lillard will also see the picture as upright.
“They don’t seem to see that minds actively represent and interpret the world,” Lillard says. “They seem to think that we all share one view of the world.”
Yet these same children understand pretense, such as understanding that an adult speaking into a banana is not actually talking on the phone, but merely playacting. “That’s the crazy problem,” Lillard says.
The world is a confusing place for young children, who are constantly learning about their environment. Yet Lillard says parents begin to “bamboozle” their kids by playacting at an early age, even before a baby’s first birthday. Since 1999, Lillard’s lab has sought to learn how a child learns when Mom is actually doing something, and when she’s just playing around.
All of this research, in addition to providing basic insight into how the human mind develops, may help psychologists better understand children with developmental problems or with disabilities such as autism. Lillard says children with autism never spontaneously engage in make-believe play like normally developing children do.
Lillard’s team spends a great deal of energy recruiting local mothers and children for her research. During one experiment that required babies within a narrow age range, she says, “We just used them all up.” Lillard says that mothers seem to enjoy the experiments.
“They like the commitment to science and that their baby can help science,” she says, adding that parents benefit from learning about child development. (Interested parents with a child who is under 8 years old can call 243-5234 for more information on participating in research.)
Lillard’s interest in child psychology began while she was working as a technical writer in San Francisco in the early ’80s. An avid windsurfer and a fan of computers, she found the Bay Area ideal. During that time, she took a course on Montessori schooling, a nontraditional school model developed by Italy’s first woman doctor. Lillard, who attended a Montessori school herself, was drawn to Montessori’s differently structured teaching methods, which allow children to have more control over their environment.
Deciding that she wanted to better understand the science behind Montessori, Lillard later enrolled at Stanford University, earning her Ph.D. in psychology and studying with Dr. John Flavell, a famous psychologist who essentially brought Jean Piaget’s groundbreaking work on child psychology to the United States.
“I really had no clue what I was getting into,” Lillard says of her decision to study psychology.
But 20 years later, Lillard’s work has come back around to Montessori. She has written a book, Montessori: The Science Behind the Genius, which should hit bookstores in February.
Lillard says her research has been augmented by another personal experience besides Montessori schooling: that of raising two daughters. She says that in psychology, children are thought to suddenly make leaps in their understanding, often at a specific age.
From watching her daughters, Lillard says she’s learned that in reality, there are “waves of understanding that just come and go.”—Paul Fain
about small stuff
Rosalyn Berne travels to the frontlines of the revolutionary nanotechnology field
Nothing so small has ever kicked up such a fuss. In the case of nanotechnology, which is loosely defined as technology that works at the nanometer level—one billionth of a meter or the width of 10 hydrogen atoms in a row—both the minute scale and surrounding hype are indeed extraordinary.
A conversation with Dr. Rosalyn Berne, a nanotech ethicist at UVA, about the promise and risks of the growing juggernaut of nanotechnology can make your head spin.
Each of the novel nano-scale developments Berne describes, some still far-off concepts and others already found in consumer products such as sunscreen, yields a cascade of questions. Could carbon nanotubes—rolled from a layer of carbon only one atom thick—be toxic when they pass through human tissue? What could the military do with a camera the size of a pin? What would “robot-type instruments in the blood stream” mean for targeted cancer treatments?
Berne says that by precisely manipulating matter at the atomic level, one can fundamentally alter a material and its behavior.
“Once you get down to the level of the atom, you’re talking about rearranging nature,” Berne says. “This is a big deal.”
In 2003, Congress committed $3.7 billion to nanotechnology research over the next four years, and venture capital for nanotech jumped by 40 percent. UVA is getting in on the nanotech bonanza, big time. Local researchers are working on nanotech breakthroughs in medical technology, semiconductors and in designing new metals, prompting a senior UVA nanotech expert to predict that the school would be among the top 10 institutions working in the field within 10 years.
But despite all the interest in nanotechnology, Berne says the technology’s potential to massively alter society has yet to be considered.
“Where’s the moral leadership?” she asks rhetorically. “It doesn’t exist.”
To spark a conversation about nanotech’s impact on the way we all live, Berne, who is an assistant professor in the department of technology, culture and communications in UVA’s school of engineering, is going to the frontlines and speaking directly with scientists.
As part of a grant from the National Science Foundation, Berne is three years into a five-year study in which she meets twice a year with 35 top nanotech researchers. Though the scientists Berne interviews are actually developing these cutting-edge technologies, she says they rarely get the chance to publicly air their thoughts about nanotechnology and what their discoveries could, or should, mean for society.
“Their voices don’t usually come out as individuals,” Berne says.
In addition to having little free time to ruminate on the ethics surrounding their work, Berne says, nanotech scientists “need money to run their labs.” Because she says virtually all of the funding for nanotechnology “has special interests attached to it,” any quest for overarching ethical principles often gets lost in the grind of producing discoveries to keep the lab afloat.
Because the scientists she speaks with are in sensitive roles, Berne guarantees their anonymity in her interviews. Over the years, she has taken these interviews, added her own analysis and assembled them into a book that serves as an “interim reflection.” Berne says the manuscript is written and at the publisher.
Berne hasn’t always worked on the cusp of technology. She first arrived in Charlottesville as a UVA undergrad in the mid-’70s, and her diverse local career includes turns as Darden’s dean of admissions and as head of the Tandem Friends School. But Berne says she’s always been interested in the future, displaying a watch that she says is perpetually seven minutes fast.
Berne arrived at her ethical investigation of nanotechnology through her interest in the relationship between humans and machines. In the undergraduate classes she teaches, Berne weaves in discussions of artificial intelligence, robotics and the movie The Matrix. Like these futuristic conceits, nanotechnology can fundamentally alter how we interact with nature, raising the question of whether humans can “sculpt the future” or are predetermined by evolution to always be a few steps behind our technological advances.
“HAL is coming,” Berne says, referencing the demented self-aware computer in the sci-fi classic 2001: A Space Odyssey. “Do we want HAL? Do we have a choice?”
Berne describes how one nanotech scientist, whom she has interviewed, is developing a DNA mapping kit that could be used at home, as easily as a home pregnancy test. With a swipe of the tongue, a kit user could be screened for 100 genetic defects, learning instantly that he might be at risk for Alzheimer’s disease or prostate cancer.
“How much information is good information?” Berne asks of the magic DNA box. “And who has access to it?”
Berne, 47, is not a technophobe, however. While discussing nanotech at an outdoor café on the Downtown Mall, her daughter quietly sitting at the next table, Berne comes across as an optimist. Her work depends on the notion that scientists and lawmakers have the power to influence how nanotechnology will be used.
“I still want to believe that we can decide how we want to have a relationship with technology,” Berne says.—Paul Fain
How high is that mountain?
Ever been annoyed by someone talking to you when you’re trying to read? What if someone’s prattling on when you’re reading targeting data—tall man, spotless white robes, menacingly beatific face; chubby ex-paratrooper populist, freely elected leader of a South American oil power—while simultaneously piloting three Predator-type drones armed with Hellfire missiles?
Well, it can get under your skin.
The neurological phenomenon involved in performing distinct mental activities has far-reaching technological implications, says Dr. Dennis Proffitt, director of UVA’s undergraduate program for cognitive science and the Proffitt Perception Lab.
“We have different memory systems,” Proffitt says. “We have a spatial one and a verbal one. They don’t much draw on common resources, so you can do a spatial task and a verbal task at the same time. But you can’t do two verbal tasks at the same time.”
Proffitt’s lab, with funding from the Department of Defense, recently completed building a system that uses portable, wearable brain-activity sensors, which look like dental headgear without the unpleasant teeth-attachments, to coordinate information flow from a computer. Using the UVA Hospital’s magnetic resonance imaging scanner—a large machine into which subjects are inserted like pizzas into a brick oven—to first pinpoint neural activity associated with mental functions of interest, the wearable device is calibrated to detect the specific ways in which its wearer is preoccupied at any given moment, and to what extent.
The idea is to channel messages based on cognitive availability, or to choke off nonessential traffic altogether if the user is under great stress. So, in trials, subjects were asked to count silently at intervals and respond to verbal tasks presented to them on a computer screen. Performance was improved when the additional verbal tasks were presented only when the system detected the subjects were not counting.
Honeywell, Lockheed Martin and Boeing are interested in military applications for the technology, such as equipping a pilot with the ability to control several unmanned aerial drones simultaneously, as compared with the three-man crew currently required to remotely fly the Predator.
DaimlerChrysler is also investigating uses for Proffitt’s work. For example, Proffitt suggests that a Mercedes’ navigational computer could decide to stop spitting out directions when the car senses its put-upon driver is trying to avoid a collision.
Proffitt, who has a stark, sanguine pate, sharp blue eyes, and salt-and-pepper mustache and hair, earned his doctorate in psychology in 1976, arriving at UVA in 1979. Speaking in even, soft tones—perhaps owing to the shellshock of moving into a new lab and offices just before the academic year begins—he describes his lab’s mission as providing a place for students and staff to create knowledge, build careers and be useful.
“I do none of the work,” he says self-effacingly. “The work all gets done by graduate students, undergraduates, staff.” Reflecting a tenet of his life’s philosophy, Proffitt also includes the directive to “have fun” in the lab’s mission statement.
The next project for Proffitt’s portable brain-scanning equipment—one of the few such devices in the country—is aimed at helping patients stricken with Locked-In Syndrome, a condition in which brain trauma and other causes leave victims conscious but permanently and totally paralyzed, save for eye movements. Proffitt hopes his scanning equipment can be calibrated to detect simple motor commands, such as the neural activity associated with making a fist, which would then be translated by computer into a command to move a cursor, for instance. Computers would become a surrogate voice for a patient whose consciousness is severed from interface with the world.
Proffitt’s lab operates on several research tracks. Its theoretical work is largely based on investigating spatial perception, emphasizing the impact of anticipated physical effort and behavioral goals as opposed to simply the raw processing of optical information.
For example, in a 2003 experiment, one group of subjects walked on a treadmill while viewing a virtual reality sequence that pictured a stationary environment, while another group viewed an environment that moved in sync with their stride. When later blindfolded and asked to walk in place off the treadmill, the first group irresistibly drifted forward, and also tended to give larger visual estimates of distance.
“Our approach says you see the world in terms of your ability to act on it,” Proffitt says. “So hills look steeper when you’re tired, when you’re wearing a backpack, when you’re elderly, infirm. Similarly, distances look farther away if you’re wearing a backpack, but only if you’re anticipating walking. If you’re anticipating throwing a ball it’s not affected by wearing a backpack.”
Another major ongoing initiative at the lab is the “InfoCockpits” project, an effort to help computers cater to the physiology of human memory. Applying the concept that the brain naturally associates learned information with the place in which it was received, the group created a computer workstation that faces a large screen onto which unique, synthetic environments, such as images of the UVA Lawn, or the UVA Art Museum, are projected to accompanying ambient soundtracks. Because information is encoded in memory along with cues about where it was seen relative to the body, the InfoCockpit also uses multiple monitors.
Memory performance was dramatically better among subjects who used the InfoCockpit as compared with those who used an ordinary computer setup. MRI scans also showed increased activity in brain areas associated with “episodic memory retrieval, spatial processing and visual imagery.”
During his decades-spanning career in psychology, Proffitt says he has seen an evolution of the field from a focus on the “brain’s software” to an approach that encompasses the physiological “hardware” of the mind, but that the fullness of the relationship between neural activity and consciousness remains elusive.
“I don’t know what the connection is between the activation [of neurons] and the experience that I have,” he says, but explains that scientists can now say which regions of the brain will be activated by an experience.
“That seems to me to be an important step toward understanding things. I don’t expect it to be understood soon, but that seems to be a good way to start out trying to get there,” Proffitt says.—Harry Terris
On February 5, Mark Whittle had a new ditty to lay on his fellow musicians in The Charlottesville Classical Guitar Society—the score of creation.
As an amateur guitarist, Whittle plays classic compositions by Bach, Albeniz and Tarrega. In February, the UVA astronomer became the first person ever to play the sound of the birth of the universe—a high-pitched scream building to a deep roar and ending with a hiss, like a wave crashing into sizzling surf. The universe even has a musical quality, a majestic major chord that shifts, over a million years, to a melancholy minor third.
In what would be the first of many presentations, including interviews with CNN and The New York Times, Whittle described the sound to his fellow musicians.
“I wanted to make this extraordinary subject accessible to people who aren’t scientists,” says Whittle, clad in jeans, a flannel shirt and stocking feet, his British-accented voice brimming with enthusiasm. “I didn’t discover anything new here. I just took what was already known in specialist circles, and presented it in a new way.”
Whittle prepared the “big bang acoustics” using a picture of the early universe, which astronomers have slowly pieced together over the last 80 years. In 1929, Edwin Hubble made the groundbreaking discovery that the universe is expanding, indicating that at some point in the past the universe began almost as a single point.
In the mid-1960s, astronomers detected a faint microwave “glow,” visible across the entire sky, no matter which direction they looked. They concluded it was radiation from a massive burst of energy nearly 14 billion years ago—the Big Bang.
“When you look through a telescope, you’re looking back in time,” says Whittle. “If we had eyes that could see microwaves, we could walk around at night by the light of creation. It’s an astonishing concept. So, what do we see?”
For the past 40 years, scientists have been producing more detailed maps of the Cosmic Microwave Background (CMB), which radiates from a time when the universe was only 380,000 years old. (In human terms, that’s like observing a human fetus 12 hours after conception.)
The most current maps show that the CMB is not quite uniform throughout the sky, but slightly blotchy. A little brighter here, a little dimmer there, these blotches are sound waves moving through the early universe.
“It’s like looking at the ocean from above,” says Whittle, “with little waves on top of bigger waves on top of even bigger waves.”
Whittle calls the CMB the “holy grail” for astronomers, comparable to recent developments in the study of the human genome. Whittle used a computer program to express the CMB map as a sound.
Had we been alive at the time, we wouldn’t actually have been able to hear the sounds of creation. “Besides the fact that we would instantly suffocate and roast in the searing heat of that early fireball,” Whittle says, “the pitch was about 50 octaves too low for humans to hear. I had to pull it up to make it audible.”
On his desk, Whittle keeps a photograph of himself as a 9-year-old, socks pulled to his knees, peering through the lens of his first telescope at his boyhood home in Cambridge, England. Now, at 42, he’s a research professor in UVA’s astronomy department. He still radiates a youthful awe at the wonders of deep space, and it’s this visible enthusiasm that helps him explain complex astrophysics in a way that even newspaper writers can comprehend.
Whittle sees the universe with the equipment of science and the vision of a poet, looking not just for the facts but the narratives they sustain. The few seconds of sound are important because they give people a new way of conceiving one of the most important events in history—the birth and early growth of the universe.
“The history of the universe is sort of like an epic story,” says Whittle.
This tale begins with brilliant light and sound. The highest notes of the descending scream will become the first stars, and their furnaces will produce the first heavy atoms. The bass notes will slowly dissolve to become the tapestry of galaxies. “These are important steps in the ladder from creation to us,” Whittle says.
“We’re able to tell an accurate story about creation. You can tell a story about light and matter in a language that people can both understand and feel,” Whittle says.
“The point of the story is that we are all a part of nature.”—John Borgmeyer