SciTechStory – Tracking the impact of science and technology

Black Holes, Stephen Hawking, Oh My (Wed, 05 Feb 2014)

According to much of the media, Stephen Hawking says there’s no such thing as a black hole. This sounds really rad, but is the media reflecting celebrity and titillating terminology rather than substance and veracity? Of course, Hawking is a media magnet, as are (ahem) black holes. Besides, didn’t Hawking write the book on black holes? Yes, he wrote influential papers on the subject. However, if you look at what he was saying then and what he’s saying now…for one thing, he’s not saying there’s no such thing as a black hole.

The actual quote from his recent paper (taken from a talk he gave in 2013) is, “The absence of event horizons mean that there are no black holes – in the sense of regimes from which light can’t escape to infinity.”

Notice, there are two parts to the sentence. The media generally picked up the first and ignored the second. Here and elsewhere in the paper, Hawking is dealing with a long-held belief in cosmology – the definition of a black hole says it is a phenomenon of space-time with an event horizon, such that when the event horizon is crossed nothing (not even light) ever comes back out. In practice, we are (with current technology) limited to observing apparent horizons. The apparent horizon, according to Hawking’s current thinking, is a quantum phenomenon in which it is (theoretically) possible for energy and information to escape a black hole. So, not really a black hole in the classical sense, but still a kind of black hole – black holes do exist.

Actually, Hawking’s paper addresses a relatively new problem known as the black hole firewall paradox. The firewall hypothesis holds that, because of quantum mechanics at the atomic level, an event horizon is transformed into a highly energetic region, a ‘firewall,’ that incinerates all matter passing through it (a hapless astronaut, for example). This idea is close to an abomination for some astrophysicists, mainly because it does violence to Einstein’s theory of general relativity, which breaks down at such a firewall.

Hawking proposes another possibility – given current knowledge of quantum mechanics and general relativity, black holes do not have an event horizon to act like a firewall. Instead, there is no sharp boundary, only an apparent boundary (apparent horizon). In theory, the apparent horizon can dissolve, which would mean eventually energy trapped in a black hole could escape – no more, “…light can’t escape to infinity.”

Behind these ideas are worlds of math, and very little of it is settled. Hawking himself realizes that, “The correct treatment remains a mystery.” This is to say most of these ideas are thought experiments, backed by skeins of incredibly difficult math and perhaps questionable assumptions. Mainly, Hawking wishes to add to the arguments against the event horizon firewall paradox; he is not arguing against the existence of black holes.

So yes, most of the media is guilty (again) of running with celebrity and titillation rather than checking their facts and getting the story right. Should we be surprised? (Rhetorical question, of course.)

Flow batteries: For when the wind don’t blow and the sun don’t shine (Thu, 16 Jan 2014)
The Harvard prototype organic flow battery….Credit: SEAS

A team of scientists and engineers at Harvard tackled the problem of storing electricity from short-term or irregular energy sources, such as windmills or solar panels, by looking to improve on a type of battery technology known as a flow battery. As published in the journal Nature [08 January 2014, paywalled, A metal-free organic–inorganic aqueous flow battery], the researchers believe they may have found a way to make flow batteries commercially viable for mass energy storage (that is, for grid-level applications).

While we all know about batteries, including buzzwords such as ‘rechargeable’ and ‘lithium ion,’ people are generally not very familiar with the underlying technology (much less the physics), so types of batteries such as the flow battery are almost unknown. Most common batteries, such as those used in the home, keep the electrolyte (which conducts the charge) and the electrodes (which store the electric energy) in the same container. In a flow battery, two electrolytes storing oppositely charged electric energy are kept in separate containers. An outside source such as a windmill or solar panel charges each electrolyte. When electricity is needed, the electrolytes are pumped through a cell with a membrane that keeps them separated but allows an ion exchange that provides the electrical current.

This approach has some important advantages. Unlike a standard battery that, at best, can hold only a few hours’ charge, a flow battery’s capacity is limited mainly by the size of the containers for the electrolytes. With a reasonable amount of efficiency, a flow battery can store at least two days’ supply of electricity – enough to build up a charge from wind or solar sources and carry it through low wind or nighttime. Because the system exchanges ions through a membrane instead of storing energy in solid electrodes, it doesn’t suffer from the slow degradation of an electrode’s ability to take a charge. In a similar fashion, the large size of the electrolyte containers means that, for practical purposes, the electrolyte is almost infinitely rechargeable.

Because of these advantages, flow batteries have been in use for several decades and are important for research. Why then aren’t they ubiquitous? In a word, cost. Most flow battery designs use metallic-based electrolytes, typically vanadium or platinum, which are relatively expensive. Other factors, such as insulation, membrane replacement and charge control sensors contribute to making flow battery energy cost about US$700 per kilowatt-hour. To be more widely practical, the cost needs to be about US$100 per kilowatt-hour (U.S. Dept. of Energy).

To tackle the cost problem, the Harvard researchers decided to explore a new area of materials for electricity storage – organic molecules. Specifically, they chose quinones, which are known for their affinity for electrical charge (they’re electrophilic) and come in a vast number of configurations (Vitamin K, for example, is a quinone). The researchers tested hundreds of quinones and settled on AQDS (if you really want to know, that’s 9,10-anthraquinone-2,7-disulphonic acid). AQDS is found naturally in rhubarb and is easily extracted from crude oil. It mixes with water for storage in tanks. This makes it inexpensive, costing about US$27 per kilowatt-hour compared to about US$80 for metallic materials. In the current configuration, the Harvard flow battery uses the quinone solution on one side of the cell and a bromine mixture on the other. Unfortunately, bromine is both toxic and corrosive, so the researchers are hoping to develop a variation of quinone (or another organic molecule) to replace it.
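A back-of-envelope comparison using the per-kilowatt-hour figures above (the 10 MWh system size is a hypothetical illustration, not a figure from the study):

```python
# Electrolyte material cost for a grid-scale store, using the per-kWh
# chemical costs quoted above. The 10 MWh sizing is a made-up example.
STORAGE_KWH = 10_000               # hypothetical 10 MWh installation

quinone_cost = 27 * STORAGE_KWH    # ~US$27/kWh for AQDS
metallic_cost = 80 * STORAGE_KWH   # ~US$80/kWh for metallic electrolytes

print(f"quinone electrolyte:  ${quinone_cost:,}")   # $270,000
print(f"metallic electrolyte: ${metallic_cost:,}")  # $800,000
```

The electrolyte is only one component, of course – the US$700/kWh system figure cited above includes membranes, sensors and the rest – but it shows why the chemistry is the lever the Harvard team chose to pull.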

It’s important to understand that at this stage of research, the Harvard flow battery is a small prototype (as in the picture above). It has successfully been recharged 100 times, but needs to demonstrate thousands of recharges. It must, of course, be much bigger and demonstrate reliability over a period of years before utility companies will be seriously engaged. Nevertheless, this version of the flow battery seems to have an angle with organic molecules that leaves a lot of room for tuning and improvement (like replacing the bromine electrolyte). It already has a rapid recharge capacity and a good record for holding the charge. The Harvard team is already working with a company (Sustainable Innovations, LLC, of Connecticut) to find commercial applications, probably for storage of home solar energy.

If this approach to battery technology proves to be commercially scalable and reliable, it could solve one of the major problems of renewable energy sources – the storage of energy when the sources are not available. That could have enormous impact on the spread of wind and solar power use.

Peer reviewed climate change deniers (Sat, 11 Jan 2014)

Sometimes overwhelming numbers are a stand-in for credibility (or the lack of it). Consider these figures on published peer-reviewed papers compiled by James Lawrence Powell*:

November 2012 through December 2013:
9136 authors published 2258 peer-reviewed climate articles
1 author rejected man-made global warming (about 0.011 percent)

And the longer view:

For the years 1991-2012:
13,950 peer reviewed climate articles
24 articles rejected global warming (0.17 percent)
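Recomputing the shares from the raw counts (1 of 9,136 authors works out to roughly 0.011 percent):

```python
# Recompute the rejection shares from Powell's raw counts quoted above.
def pct(part, whole):
    """Percentage of `part` in `whole`."""
    return 100 * part / whole

authors_share = pct(1, 9136)        # Nov 2012 - Dec 2013, by author
articles_share = pct(24, 13_950)    # 1991 - 2012, by article

print(f"{authors_share:.4f}% of authors")    # ~0.0109%
print(f"{articles_share:.2f}% of articles")  # ~0.17%
```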

Of course, deniers will say “Peer review? Why that’s obviously a global conspiracy!” And you can say, “Since when did human beings, much less scientists, agree on anything so that only 1 out of 9136 people disagreed? That’s one hell of a conspiracy!”

*Currently Executive Director of the National Physical Science Consortium; Ph.D. in Geochemistry, M.I.T.; former president of Oberlin College, Franklin and Marshall College, Reed College, the Franklin Institute Science Museum, and the Los Angeles County Museum of Natural History; appointed by Presidents Reagan and G.H.W. Bush to the National Science Board (12 years).

What do we lose as large carnivores disappear? (Fri, 10 Jan 2014)
A world without top predators? Wolves, for example….Credit: Doug McLaughlin, Oregon State University

Globally, we are losing our large carnivores. That is the central conclusion of a large international study (with participants from the U.S., Sweden, Australia, and Italy). These are the animals at the top of the food chain, the ones people most readily recognize. There aren’t many species; scientists identify just 31 significant species, such as bear, wolf, lion, leopard, tiger, otter, cougar and lynx. Of these species, 17 now occupy half (or less) of their former ranges, and 75 percent of them are declining in population. Some species are already exterminated (the lobo wolf, for example) or driven completely out of large natural ranges (such as the Eastern U.S.). The big question addressed by the study, published in Science [10 January 2014, paywalled, Status and Ecological Effects of the World’s Largest Carnivores], is what does the decline of large predators mean for the environment?

The problem, both historically and currently, is that top predators nearly always compete with human beings for prey animals. They take deer, elk, reindeer and many other species that humans hunt for recreation or food. They also kill domesticated animals. Either way, the tendency has been to eradicate the predators or at a minimum drive them out of human inhabited land. Humans tend to be quite pleased with the results – more game animals, no loss of livestock. However, removing top predators changes the ecosystem.

Unfortunately, the ecological effects are often not immediately dramatic. In fact, the phrase used by ecologists suggests just how ‘technical’ the results can be – they call it a trophic cascade. The effects spread throughout the ecosystem. Remove a top predator, say the wolf from the Yellowstone region of the U.S., and over a few decades elk, the wolf’s natural prey, overpopulate. The overpopulation results in overgrazing, which reduces food for many other species, including rodents and birds. Eventually, the entire ecosystem is affected.

This is not easy to document. One of the recommendations of the study is that more predator-ecosystem research is needed. Only about 7 of the major predator species are well-studied. In particular, the long-term effects of a predator-less environment need to be examined. The scientists want to build a case for the preservation of predators and a restoration of their natural range and role in the environment. That case is hard to argue if they can’t provide evidence of the harm done by removing all large predators.

Sci-Fi Movie Review: Elysium (Mon, 06 Jan 2014)

[Elysium. Released August 9, 2013. Directed by Neill Blomkamp. Writer: Neill Blomkamp. DVD/Blu-Ray released. As usual, this “post-viewing review” contains spoilers.]

What to make of a shoot-em-up action science fiction movie with a MacGuffin of health care? I’m deliberately using Hitchcock’s term (MacGuffin: a plot device in the form of some goal, desired object, or other motivation, often with little or no narrative development as to why it’s important). Imagine, a plot driven by the need for medical attention in the age of Obamacare (in the U.S. anyway). Calling it a MacGuffin captures the notion that Elysium is going to upset some people with its obvious political sentiment and at times cack-handed plotting.

The story, as well as the direction, is by Neill Blomkamp, who made District 9, a well-reviewed and popular sci-fi movie about a form of apartheid for aliens (Blomkamp is South African). Elysium carries similar sensibilities in another direction, with a budget literally an order of magnitude bigger ($110 million) and marquee actors Matt Damon and Jodie Foster. The money, when spent on scenery and special effects, wasn’t wasted.

Elysium, besides the MacGuffin of health care, is about income inequality. Really. It is as graphic a representation as one is likely to see. The images are striking: Earth in 2154, as the introduction puts it, “…was diseased, polluted and vastly overpopulated.” The teeming masses – mostly roiling, toiling and filthy proletarians – live in massive cities of utter squalor, with crime and corruption the norm (and, BTW, lousy health care). We’ve seen these dystopian scenes before, but Blomkamp and his creative people have filled an unusually epic canvas with endless miles of slum.

In stark contrast, way up there in the sky is the gleaming wheel of the gigantic space habitat known as “Elysium.” That’s where the 1% (or maybe, like it really is today, the 0.01%) live in park-like, mansion-filled, lap-of-luxury splendor. It’s the ultimate gated community. They have a bad attitude about the rest of humanity and only go slumming (Earthside) when they must conduct some business among their corporate empires. The wretchedness of living on Earth compared to the sterile and healthy Elysium (lots of miraculous medical equipment the proles can’t access) couldn’t be more visually obvious. The proles on Earth look up to Elysium, literally. They would like to live there, but hate that they can’t.

You might think there would be a lot of rioting, or at least union organization, but no – Earth security (police, military) is almost entirely run by intelligent “droids” (androids, robots, drones), and one gets the impression they’re efficient (brutal) and perhaps even-handed. Of course, the droids get their orders from the human government on Elysium.

The hero of the story, ex-con but good-guy Max, played by Matt Damon of the shaved head, has a tenuous job in droid manufacturing. He’s shaky with his line boss, so shaky in fact, that the line boss deliberately gets him exposed to a lethal dose of radiation. Max thereafter has five days to live, and as it turns out, save the world, a very convenient plot condition.

Sick with radiation poisoning, Max seeks medical attention. (Medical care, such as it is, is gruesome, worse even than an army field hospital.) There he meets up again with his childhood sweetheart, Frey (played by Brazilian actor Alice Braga). She is, of course, a nurse. While she agrees to help Max, she has problems of her own – her adorable daughter is dying of leukemia. Of course, there’s a curing treatment on Elysium but….

I think you can see the good-guys side of the plot set-up. The bad-guys-and-gals side is less nuanced. Although Jodie Foster can play snooty-snide extremely well (viz., The Dangerous Lives of Altar Boys), in this movie the role of Defense Secretary Delacourt is a caricature. For about 99 percent of the movie, she’s the heavy – the callous, bigoted, conservative and self-serving head of security for Elysium who sets most of the “bad things that happen” into motion. This includes unleashing another caricature, Kruger (Sharlto Copley), the violent baddy of a secret-service field operative who goes bonkers. He eventually winds up sticking a shard of broken mirror in Delacourt’s neck, and she dies gurgling but sort of nobly (refusing medical care, if you can believe that).

There’s no need to outline more plot, especially since it mostly frames the set-piece battles. Director Blomkamp is good at CGI battles; most of the time you can tell what’s happening, although his fondness for explosively disintegrating human bodies may seem overworked. As you could easily predict, the hero and heroine wind up on Elysium, doing noisy combat with the evil Secretary of Defense and the degenerate Kruger. After much tribulation and random mayhem, they manage to get Frey’s little girl into a household medical device (like an MRI scanner) that cures her completely of acute lymphoblastic leukemia in about 35 seconds, including post-op hugs and kisses. The ultimate ending of the movie, which many will find perfunctory and sappy, is that decent (as in modern and miraculous) health care is dispatched from a presumably reforming Elysium down to Earth. What struck me was the symbolic imagery of the four medical space ships we see descend into the slums, as if the resources of Elysium could cure all the (literal) ills of an Earth with a population of, say, 50 billion people. It seems like an all too obvious band-aid for a mortally ill society, and a storyteller afraid or unable to put his finger on the real problems.

In two phrases, Elysium is gritty in look and story, while slick in production values. The two don’t quite cancel each other out. That makes it a worthy popcorn movie for a Friday night’s settee, especially if you like lots of well-meaning action and bloodshed.

Science Spoilers
The science in the movie is both extremely low-tech (on Earth) and scintillatingly high-tech (on Elysium). Since the movie takes place about 150 years into the future, it’s well beyond the limit of what I call “present day extrapolation,” which is a fancy way of saying who the hell knows where science and technology will be that far into the future? As befits a MacGuffin, the medical marvels we see on Elysium are in no way explained. They just are. Why they don’t exist on Earth (other than, perhaps, being too expensive) is also unexplained. (Leaving important things unexplained being, of course, the essence of MacGuffin-dom.)

By comparison, Earth seems almost totally devoid of anything we’d identify as high tech – despite the fact that a hundred and fifty years of turning technology into consumer items should have produced some low-cost wonders. In any case, the low-tech Earth means that guns and other weaponry are still of the blast and blow-to-smithereens variety, which makes for nice, noisy battle scenes.

On a more technical level, there has been some kvetching about details of the Elysium torus. This is especially about the apparently open-to-space interior of the wheel that somehow holds the atmosphere in place. Glad to see some interest in the science of spin gravity, since if man goes into space successfully, it will require spinning environments of many kinds. I’m inclined to believe that the torus (wheel) is far too small to generate enough gravity to hold an open atmosphere, but this design probably found its way into the movie because it made for much better camera angles and an appealing aesthetic sense of openness.
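As a rough sketch of the physics: spin “gravity” is just centripetal acceleration, a = ω²r. The radius and spin rate below are illustrative (roughly the scale of the classic Stanford torus design), not figures from the film:

```python
import math

def spin_gravity(radius_m, rpm):
    """Centripetal acceleration a = omega^2 * r for a rotating habitat."""
    omega = rpm * 2 * math.pi / 60   # revolutions per minute -> rad/s
    return omega ** 2 * radius_m

# A wheel of ~900 m radius spinning at 1 rpm yields close to 1 g at the rim.
print(f"{spin_gravity(900, 1.0):.2f} m/s^2")  # ~9.87 m/s^2
```

A smaller wheel needs a faster spin for the same g, which (among other things) makes the idea of holding an open atmosphere against the rim even less plausible.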

Sci-Fi Movie Review: Pacific Rim (Sun, 05 Jan 2014)

[Pacific Rim. Released July 1, 2013. Directed by Guillermo del Toro. Writers: Guillermo del Toro and Travis Beacham. DVD/Blu-Ray released. As usual, this “post-viewing review” contains many spoilers.]

Among the fanboys (and yes, they’re often boys and great fans of shoot ’em up games), Pacific Rim is the hotly argued new gold standard in action movies. If you want to be paternalistic about it, this does not disqualify Pacific Rim from being a good, if not great, movie. Like most genre movies, if you like the genre, then this is a very good movie. If you ask, “What genre is that?” Well, that’s a good question.

Nominally, Pacific Rim is science fiction. It takes place in the future, roughly the 2020s. It incorporates aliens, monsters and other elements familiar to sci-fi. The story is the creation of the director, Guillermo del Toro, who co-authored the script with Travis Beacham, and it reverberates like a pumped-up woofer with pieces from many other science fiction movies (the Star Trek mind-meld, Transformers-style giant mecha [robots], etc.). It’s also a member of two sub-genres of science fiction, monster movies and mecha movies, which are sometimes teamed into a hybrid genre like this one. Each genre has its own fans, including del Toro, who dedicated the movie to Ray Harryhausen and Ishiro Honda, who more than most established the monster movie genre.

The tradition of monster and mecha movies, which always involve game-like and cartoon-like elements, has its own suite of conventions – heroes and heroines, frequent pitched battles, massive city-smashing destruction, and good-guys-win endings. Pacific Rim does not deviate from these traditions and will remind people of cartoons (print and movie) intended for kids (if perhaps somewhat older kids), which is what director del Toro said he wanted. In fact, del Toro achieved much of what he said he was shooting for in this movie: focusing on “…big, beautiful, sophisticated visuals….” The idea was to make something slick enough to appeal to adults (like a love story built on respect instead of sexual attraction), but action-packed enough to “introduce a new generation of kids to monsters and mecha.”

If it sounds like del Toro, as director and storyteller, is crucial to this movie – you’ve got it. He is. This is not only his movie; it’s his vision, his love of the material. That’s what lifts Pacific Rim above the level of so many similar movies. There are details that only a strong director would dare to include. My favorite was the moment where Gipsy (the good guys’ mecha) punches a massive metal fist into a building, crushing walls and glass halfway into the interior offices, when in a magical moment it comes to a slow halt and just nudges one of those steel-ball chain-reaction toys (a Newton’s cradle) into movement, click, click, click.

I could analyze the symbolism of that moment of slowed-momentum, but what for? This is not the type of movie in which analysis reduces to anything more than, “Well it was fun, if you like that sort of thing.” Personally, I can see myself watching Pacific Rim again – can’t say that about any other movie of its (hybrid) genre. Yes, it’s derivative, predictable, violent, and noisy. Flip these another way and you get, comfortably familiar with interesting variations, anticipatory of dramatic satisfaction, moment to moment exciting, and pulsing with sound and fury. It has also a kind of human realistic sense to it, something I attribute to del Toro, who is a self-proclaimed humanist. Maybe that’s enough depth for you. Maybe not.

Science Spoilers
Sci-fi conventions in this movie substitute for science – mind melds, especially with aliens, temporal rifts in the Pacific Ocean floor, and even the mecha are not the stuff of currently plausible science. They’re more like science fantasy, or fantasy that’s dressed to look like science. In any case, with a movie full of such conventions, critiquing the science makes no sense.

Life on Mars: Curiosity finds a promising lake bed (Tue, 31 Dec 2013)

Curiosity rover and Yellowknife Bay, Mars

After about two decades of poking around Mars, it’s clear that scientists don’t expect to find life, certainly not on the surface [SciTechStory: Life on Mars: If it exists, is below the surface]. There are no Martian yetis; if there’s any life at all, it’s nothing bigger than a bacterium – probably living deep below the surface. So the excitement about the announcement that the U.S. Mars rover Curiosity has identified a “life friendly environment” in a former lake bed is not about today’s conditions, but the conditions that might have supported life billions of years ago.

Keep in mind that even former “life” has yet to be discovered on Mars, either direct (as in a fossil) or indirect (chemical traces). For now, scientists – mostly the people of the nascent field of exobiology – utilize suppositions about life that we’ve gleaned from studying life on Earth. It may turn out that life, if it ever existed on Mars, may have had different signatures than we expect – but that’s the importance of what’s happening now with the new data provided by Curiosity.

The Curiosity rover is traversing Gale Crater, a 150-kilometer (about 93 mile) wide impact basin with a mountain in the center. It’s looking at the geology as it goes, sampling the soils, digging a bit, sniffing the air and taking jillions of images. What it sees now, in a location named “Yellowknife Bay” (in remembrance of a place in Canada’s Northwest Territories), is an area that almost certainly was a lake roughly 3.6 billion years ago.

It’s interesting to note that the descriptive language NASA uses about Mars now takes the former existence of water for granted. It’s gone from the surface now, except in frozen form near the poles, but water signs are everywhere on Mars, and most scientists now believe that water – the foundation of life as we know it on Earth – was once abundant on Mars.

The evidence of water in Yellowknife Bay is in the form of mudstone, a sedimentary material typically laid down on the bed of a relatively calm, shallow, freshwater lake. The analysts looking at Curiosity’s data suggest that the lake could have existed for as long as 100,000 years – long enough to develop life if the conditions were favorable. Analysis with x-ray diffraction techniques reveals that the mudstone (smectite) contains elements crucial for life, including carbon, hydrogen, sulfur, nitrogen and phosphorus. The same materials show that the lake had a low salinity (not too much salt) and a neutral pH, neither too acidic nor too alkaline for life.

On the other hand, most life on Earth uses sunlight (directly or indirectly) as a source of energy. On Mars the sunlight is weaker – most scientists believe too weak to support life (at least as we know it). If life existed on Mars, it had to derive its energy from chemical reactions, which is why the presence of so many key elements matters: they are the raw ingredients for the right chemistry to happen.

Whether that chemistry ever happened and whether this or other Martian lakes existed long enough to produce life – that’s the target for Curiosity’s next round of investigation.

Synaptic transmission: Another step illuminated (Sun, 29 Dec 2013)
Illustration of neural exocytosis and endocytosis. Credit: U of Utah

Many people, including neuroscientists, refer to the patterns of neurons in the brain and elsewhere in the body as “wiring.” It’s a metaphor, which makes it seem almost axiomatic that our nervous system operates on electricity and is akin to the electrical systems of, say, a house or a computer. Actually, for all but a small percentage of neurons in the human body, ‘wiring’ is not a good metaphor. Wiring, in the usual understanding, implies a flow of electrons through a wire, usually metallic such as copper (or light through fiber-optic cable). “Electron flow” is hardly the best descriptor for the way axons (the long fibers of neurons) transmit signals – and the synapses (the gaps between neurons) are something else again.

In most neurons, the axons transmit signals via “action potentials,” which involve channeling ions of sodium and potassium in a complex “impulse” of ionic charges passing along the axon. This does not work like house wiring…it works more like a burning fuse. It’s also a lot slower. Electrical signals in copper wire can travel at a reasonable approximation of light speed. Action potentials in a neuron vary, but an average speed is about 10 meters per second – a lame tortoise’s view of the speed of light. Then the action potential inevitably comes to the end of the axon and encounters an actual physical gap – the synaptic cleft.
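To put that speed gap in numbers (the ~0.7c velocity factor for signals in copper cable is a typical engineering figure, my assumption rather than the article’s):

```python
# Order-of-magnitude comparison of signal speeds: copper cable vs. axon.
LIGHT_SPEED = 3.0e8                  # m/s
copper_signal = 0.7 * LIGHT_SPEED    # signals in cable travel at ~0.7c (typical velocity factor)
action_potential = 10.0              # m/s, the rough average cited above

print(f"copper is ~{copper_signal / action_potential:,.0f}x faster")  # ~21,000,000x
```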

At the synapses, in order to cross the gap between neurons, transmission is mainly by chemical reaction. Of course, it’s an electrochemical process and electrical charge is involved, but its relationship to electron flow or even ion exchange is distant. The principal actors are neurotransmitters, specialized chemicals such as norepinephrine and dynorphin. The method of transmission involving neurotransmitters is, not to put too fine a point on it, elaborate. Only recently have scientists begun to understand how it works.

When two neurons meet at a synapse, the one carrying the action potential (nerve impulse) generates the appropriate neurotransmitter chemicals (a variable and highly complex process). The neurotransmitters are packaged in ‘bubbles’ – vesicles – and sent to the neuron’s outer membrane at the synapse. There, the vesicles fuse with the membrane and dump their load of neurotransmitters into the synaptic cleft. The neurotransmitters quickly reach the outer membrane of the receiving neuron, which is loaded with neurotransmitter receptors – chemical docking points that attract specific neurotransmitter molecules. The pattern and strength of the electrochemical charge generated at the receptors activates a new action potential in the receiving neuron, and the impulse or ‘message’ continues.

Many decades of research went into building the above rough description of what happens at the synapses, and the work continues. This includes the 2013 Nobel Prize in Physiology or Medicine for work by Rothman, Schekman and Südhof on the machinery regulating vesicle traffic. As you probably noticed, vesicles are a key factor in the functioning of the nervous system. The questions of how, why and when vesicles are generated, filled with the appropriate neurotransmitters, transported to the terminal membrane and dumped into the synapse have generated some truly amazing biochemical explanations. Many questions remain. For example, it’s obvious that in some way the vesicles are recycled; the question is how that works and how it affects the speed of neuron transmission.

To find answers, two researchers, Erik Jorgensen and Shigeki Watanabe at the University of Utah (Salt Lake City, USA), and a team of neuroscience researchers at the Charité University of Medicine (Berlin, Germany) started digging into the process known as endocytosis, the recycling of vesicles at the nerve ends.

The current state of knowledge identifies three mechanisms for vesicle recycling, which Jorgensen illustrates with a machine gun analogy:

1. Clathrin mediated – the clathrin coating of the vesicles disintegrates after the vesicle deposits its neurotransmitters into the synaptic cleft, and the vesicle material is re-used from scratch to make new vesicles. This is like making rounds of new bullets to rapidly feed a machine gun.
2. Kiss and run – re-uses existing, sometimes partially filled vesicles. This is like refilling used shells one at a time.
3. Ultrafast – “grabs” (chemically) a batch of vesicles at one time and refills them, something like an endless conveyor belt of bullets for a machine gun.

To date, each of these mechanisms has its proponents and detractors. What Jorgensen and Watanabe wanted to do was develop evidence for which mechanism is actually at work. For this, they had to invent new investigative techniques – in this case, photographic ones.

They started by growing hundreds of brain cells (neurons) from the hippocampus of mice – an area associated with memory formation, where neurotransmission and synaptic integration are likely to be highly optimized. The researchers grew the neurons on sapphire disks one-quarter inch wide, placed in a petri dish with a growth medium.

As the neurons grew, the researchers inserted a gene taken from algae that forced the mouse brain cells to produce an ‘ion channel’ that would switch on the neurotransmitter process in response to a light signal (from a laser) rather than an electrical impulse. They did this because the next step in the technique involved super-freezing the cells, and an electrical wire could not be used for stimulation. The cooling was done in a high-pressure chamber – set at 310 degrees below zero Fahrenheit and 2,000 times Earth’s barometric pressure.
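For readers who think in metric units, the quoted freezing temperature converts as follows (simple arithmetic, not additional data from the study):

```python
# Convert the chamber temperature quoted in the text to Celsius and Kelvin.
f = -310.0                # 310 degrees below zero Fahrenheit
c = (f - 32) * 5 / 9      # approximately -190 degrees Celsius
k = c + 273.15            # approximately 83 kelvin
print(round(c, 1), round(k, 1))
```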

In this chamber, the researchers flashed a blue laser light, making the neurons “fire” neurotransmitter nerve signals. Each firing was frozen in place by injecting a blast of liquid nitrogen. This was repeated at various time intervals after the flash (15, 30 and 100 milliseconds; 1, 3 and 10 seconds). Watanabe called it the “flash and freeze” technique. The sapphire disks containing the frozen neurons were embedded in liquid epoxy, hardened and thin-sliced to be photographed under an electron microscope. Roughly 3,000 mouse neuron synapses were photographed this way. About 20% of them were firing at the moment they were frozen, which provided a basis for examining the behavior of the vesicles.

In the images, it was clear that large numbers of vesicles were in different stages of formation – a continuous cycle of refilling batches and sending them to the neuron’s terminal membrane for transmission. In short, they were looking at the ultrafast mechanism, which they believe is the most common (and efficient) method for recycling vesicles – and an explanation for some of the high speeds (milliseconds) observed in synaptic transmission.

The recycling of vesicles is but one step in a relatively long chain of steps involved in “synaptic transmission” (perhaps better named “synaptic integration,” because transmission isn’t always the result). It’s representative of the work scientists are doing to pick apart this absolutely crucial process (our brains – indeed, our entire nervous system – wouldn’t work without it). Almost every component, whether it’s the chemistry of neurotransmitters, the vesicle formation process, neurotransmitter receptors, or the role of astrocytes and other glia in neurotransmitter control (and more), is the subject of intense research.

From this new research, neurotransmission is turning out to be dauntingly complex. The big question is: why is it so complex? Thus far the question elicits only guesses and partial explanations. What is actually happening at the synapses (there may be multiple answers), and how does it affect the processes of the body? For example, some researchers believe that the complex nature of chemical neurotransmission plays an important role in memory. They just don’t know exactly how it works, or whether it applies to all neurons or just specialized neurons (or groups of neurons in special locations).

Part of the reason so many important questions remain is that neuroscience is still very much in the process of explaining “what is” – discovering and verifying an accurate description of what is involved in synaptic integration and how it works. Until there are some substantial and substantiated models, it’s premature to offer anything but educated guesses about the “why” question. Not that scientists won’t speculate, but most of them are reluctant to put much weight on the speculation. They know that neuroscience has a long way to go before it can not only describe the functioning of neurons (especially at the synapses) but also, from that description, explain how basic neurological processes such as memory and consciousness take place.

Confirmation: Element 115 Sat, 21 Dec 2013 11:22:41 +0000

The thing about new elements these days is that they don’t exist in nature. They’re a product of human research. A ‘new’ element, dubbed ununpentium with the symbol Uup, was first “discovered” (read: created) in 2003 by bombarding nuclei of americium-243 with calcium-48 ions. This produced an isotope with a mass number of 288. It doesn’t stick around, with a half-life of about 200 milliseconds.

Now it’s 2013 and, in the true fashion of science, two of the few research facilities capable of reproducing element 115 have finally chimed in to confirm it. The original creation was performed by the Joint Institute for Nuclear Research in Dubna (Russia) together with the Lawrence Livermore National Laboratory (California, USA). The recently announced confirmation came from a Lund University (Sweden) team working at the GSI research facility in Darmstadt (Germany). With this confirmation, ununpentium is ready for ‘official’ recognition by the IUPAC/IUPAP Joint Working Party and, eventually, a final name.

The superheavy artificial elements are exotic, to say the least. For the most part their isotopes are unstable – gone in the blink of an eye – so they will have no ‘practical’ application. However, physicists continue to learn about the behavior of elements and their particles by creating these new elements and their isotopes. Ununpentium may someday have a much longer-lived isotope, probably Uup-299, but at the moment the technology to reach 184 neutrons (the ‘magic’ closed-shell number) doesn’t exist.
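The nuclear bookkeeping behind these numbers is simple addition. The only figure below not stated in the text is the three neutrons evaporated in the fusion reaction, which accounts for the drop from 291 to the observed 288:

```python
# Check the proton and mass-number arithmetic for element 115.
Z_AM, A_AM = 95, 243          # americium-243 target
Z_CA, A_CA = 20, 48           # calcium-48 projectile

Z_new = Z_AM + Z_CA           # proton numbers add: element 115
A_compound = A_AM + A_CA      # 291 before neutron evaporation
A_observed = A_compound - 3   # three neutrons shed -> mass number 288

# The 'magic' closed neutron shell N = 184 predicts a longer-lived isotope:
A_magic = Z_new + 184         # Uup-299

print(Z_new, A_observed, A_magic)  # 115 288 299
```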

Frederick Sanger: Gone and should not be forgotten Wed, 20 Nov 2013 12:41:23 +0000

If you don’t know about Fred Sanger, it’s not surprising. He was a quiet man, far more interested in his work than in recognition. In today’s world of media hypertrophy, such a life generally gets a one-column, three-inch obituary, or about fifteen seconds of airtime.

Here’s a question. How many people have won more than one scientific Nobel Prize?

Three: Marie Curie (Physics and Chemistry), John Bardeen (Physics, twice), and Frederick Sanger (Chemistry, twice).

I mention this because it is so rare, and because it indicates a contribution to science that is arguably more important to humanity than the contributions of all but a few political, military, cultural and sports figures. In the case of Fred Sanger, his work earned him two sobriquets among his peers: “the father of genomics” and “the father of proteomics.” In other words, he did foundational work on genes and proteins – the foundations of life. He was a great biochemist.

You could say that Sanger’s work (and life) was about getting things in order. His genius (and it was genius) was to look at something so complicated that it seemed either impenetrable or chaotic and find order – not only find it, but prove the existence of the order he saw (or suspected) through painstaking laboratory techniques, many of which he developed himself.

His first effort addressed the chemical configuration of proteins, which until that time were thought so complex as to be almost random. Sanger believed that the amino acids making up proteins were probably not chemically random, but unraveling (and proving) their structure was then beyond biochemical technique. So he invented techniques. He developed the “Sanger reagent” (fluorodinitrobenzene) to label the terminal amino groups in the insulin protein of cows (the only pure protein commercially available at the time). He then isolated, through partial hydrolysis, smaller chains of amino acids (peptides). Using an ingenious “fingerprinting” technique (filter paper combined with electrophoresis and chromatography), he identified the amino acid composition of the peptides. Eventually, he was able to identify the entire sequence of amino acids in insulin. Among other things, this led to the synthesis of insulin (vital for diabetics, of course), but more fundamentally he showed that proteins do indeed have a distinct structure – the foundation for the further study of proteins in the field of proteomics. For this, he won his first Nobel Prize in Chemistry (1958).
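The logic of reconstructing a full sequence from overlapping fragments can be sketched in a few lines of code. This is purely illustrative – it models none of Sanger’s chemistry, and the greedy overlap function is an assumption for the example – but the peptides below (drawn from the start of the insulin A chain, in one-letter amino acid codes) show the idea:

```python
# Reassemble a sequence from overlapping fragments by repeatedly
# appending the fragment with the longest suffix/prefix overlap.
def assemble(fragments):
    """Greedily chain fragments by maximal suffix/prefix overlap."""
    frags = list(fragments)
    seq = frags.pop(0)
    while frags:
        best, best_olap = None, 0
        for f in frags:
            # Find the largest k where the end of seq matches the start of f.
            for k in range(min(len(seq), len(f)), 0, -1):
                if seq.endswith(f[:k]):
                    if k > best_olap:
                        best, best_olap = f, k
                    break
        if best is None:
            break          # no fragment overlaps; stop
        seq += best[best_olap:]
        frags.remove(best)
    return seq

peptides = ["GIVEQ", "EQCCT", "CTSIC"]
print(assemble(peptides))  # GIVEQCCTSIC
```

The real problem was vastly harder – the fragments had to be identified chemically before any overlapping could be done – but the overlap-and-merge reasoning is the same.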

He then performed an encore of equal, if not greater, importance by sequencing RNA and DNA. Sequencing the nucleotides in RNA involved techniques similar to those he had used for the amino acids of insulin. The circumstances were different in that obtaining a ‘pure’ RNA sample to sequence was difficult, and Sanger was in a kind of race for discovery with Robert Holley at Cornell. Holley sequenced the ribonucleotides of alanine tRNA in 1965. Sanger and colleagues followed with the sequence of 5S ribosomal RNA from E. coli in 1967. Sanger then pushed on to DNA.

These days, DNA sequencing machines decode whole genomes in only a few hours. When Sanger started, circa 1970, approaches to even looking at a single sequence (much less a genome) were primitive and took days or weeks. Sanger, along with Alan Coulson and other colleagues, decided to study DNA polymerase, an enzyme involved in DNA replication, and see whether it could be used to tease apart the sequence of bases in DNA. Their first research resulted in the so-called “Plus and Minus” technique, which broke DNA strands into short pieces (oligonucleotides). The short pieces were fractionated by electrophoresis and visualized through autoradiography. The technique worked, leading to the first fully sequenced DNA genome – that of the bacteriophage φX174 – in 1977.

While Plus and Minus was a major improvement over previous techniques, Sanger was not satisfied with the approach – too slow, too fragile to be useful for large genomes. He and his colleagues then worked to improve identification of the ends of DNA fragments. By 1977, this led to the chain-termination sequencing procedure now known as the “Sanger method.” It held sway for almost 25 years and is still in use, although most modern sequencing machines take another (‘next generation’) approach. The Sanger method earned him his second Nobel Prize in Chemistry (1980).
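The readout logic of chain-termination sequencing is easy to sketch: each reaction yields copies that stop at a known base, and sorting the resulting fragments by length recovers the sequence, just as reading a sequencing gel from bottom to top does. A minimal, purely illustrative sketch (the fragment data are invented):

```python
# Chain-termination readout: each fragment ends at a known base;
# ordering fragments by length spells out the sequence.
def read_gel(terminated_fragments):
    """terminated_fragments: (length, base) pairs, one per fragment.
    Returns the sequence read off the 'gel' in order of length."""
    return "".join(base for _, base in sorted(terminated_fragments))

# Fragments as they might appear pooled from the four base reactions:
fragments = [(3, "T"), (1, "A"), (4, "G"), (2, "C")]
print(read_gel(fragments))  # ACTG
```

The chemistry that produces those length-terminated fragments (dideoxynucleotide chain terminators) was the hard-won invention; the readout itself is this simple sort.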

Scarcely five years after developing his signature method – but on schedule, in 1983, at the age of 65 – Sanger retired to his home near Cambridge. In a way he was the Thomas Edison of biochemistry, an inspired tinkerer and inventor. As he put it, “I am just a chap who messed about in the lab.” He followed Edison’s dictum about “genius”: 1 percent inspiration, 99 percent perspiration. His insight into the biochemical nature of proteins and nucleic acids was backed by endless experimentation in the lab and by the development of new approaches and techniques that could make the insight “real.” His hands-on approach and devotion to the work led him to turn down an offer of knighthood (“too much distraction”). He was a perfect counterexample to the notion that nice guys finish last. He spent the final decades of his life tending his garden (pace Voltaire) and died in his sleep on November 19, 2013, at the age of 95.
