Courtesy KEI
Ok, well, there isn’t really such a thing as a nuclear earthquake. “Nuclear Earthquake” just sounds impressive. And I suppose “impressive” is one way to describe what’s happening in Japan right now.
I suppose you’re all aware of the magnitude 8.9 (or possibly 9.0) earthquake that hit Japan last week (if you aren’t, check out this post), and that the Fukushima nuclear power station there has been severely damaged.
While the country is still trying to put itself back together, officials are working to get the power plant under control. So what happened, what’s happening, and what’s (probably) going to happen?
Well, a nuclear plant like Fukushima basically operates by using radioactive uranium to boil water. The uranium is always decaying—its nuclei aren’t stable, so they throw off neutrons, releasing heat. When those neutrons hit other uranium atoms, they split them, which releases still more neutrons and even more heat. If there’s too much of this chain reaction, the uranium gets too hot, melts everything around it, and it’s a disaster. This is what happened at Chernobyl.
To prevent the neutron reaction from getting out of control, and to make sure the uranium produces just the right amount of heat, “control rods” are inserted alongside the uranium fuel. The control rods absorb some of the neutrons to keep the reaction in check. When the right amount of heat is being produced, the water around the fuel boils, turns to steam, and spins turbines that drive electric generators. It works out pretty well.
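As a toy illustration of why absorbing neutrons matters (this is a cartoon, not real reactor physics): suppose each "generation," every free neutron triggers on average k new neutrons. Control rods lower k. The function name and numbers below are invented for the sketch.

```python
# Toy chain-reaction model: neutron population multiplies by k each generation.
# k > 1 runs away; absorbing neutrons (control rods) pushes k below 1.
def neutrons_after(generations: int, k: float, start: float = 1000.0) -> float:
    """Free neutrons after some number of generations at multiplication factor k."""
    return start * k ** generations

print(neutrons_after(10, k=1.2))  # runaway: roughly 6x growth in 10 generations
print(neutrons_after(10, k=0.9))  # controlled: the population dies away
```

The entire job of the control rods, in this cartoon, is to hold k at or just below 1 so the heat output stays steady.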
At Fukushima, the control rods were inserted into the uranium as soon as the earthquake started, and they did work—the uranium reaction was shut down. But the decaying uranium had already produced other radioactive elements, which give off a lot of heat of their own as they decay. So there was still a lot of residual heat in the power station.
Normally, water would keep circulating around the hot core, carrying away the residual heat (and turning it into electricity). But between the earthquake and the ensuing tsunami, the power to the water pumps was shut off and the backup generators were disabled. The pumps had backup batteries, but eventually those ran out. That means there was still a lot of heat in the core, but no fresh water to carry it away. The water that was there would continue to heat up until the steam was vented or the vessel containing it simply burst.
Unfortunately, both of those things sort of happened. While workers were purposely venting some of the steam, an explosion occurred in the building surrounding one of the cores. According to this site, this was probably because some of the vented water vapor had separated into hydrogen and oxygen, which built up in the building and ignited. The vented steam was radioactive, but not that bad as these things go—the radioactive elements in it decayed and became stable in a very short time.
The next problem was that with all the venting, the water level around the cores was slowly falling. Without water to take away their heat, the cores could overheat, and eventually melt down. This started to happen, and some more dangerous radioactive products of the uranium started to mix with the remaining water and steam, so officials decided to start pumping seawater into the core. Seawater can get more radioactive than clean water, but it would keep the core cool and under control. And it did.
Today, however, there was a second explosion at the plant. I’m not totally sure what caused this one, but it looks like it was again the accumulation of hydrogen in one of the buildings.
With the uranium reaction under control and the cores under water, the residual heat should eventually dissipate. But the explosions have further damaged the cooling systems, and keeping the station’s multiple cores submerged in seawater has been a challenge. The longer the cores are exposed, the harder it is to contain the radioactive material they have already produced, and the greater the chance of a meltdown at the plant.
Approximately 200,000 people living in the region of the nuclear plant have been evacuated, and it’s still unclear what will happen there. Nothing good, certainly, but a meltdown isn’t a sure thing at this point. And even if a meltdown were to occur (again, a meltdown happens when there’s too much heat in the core, and everything around the radioactive fuel melts), the Fukushima plant was built to much higher safety standards than Chernobyl was, and it should contain the damage much more effectively. At Chernobyl, explosions sent radioactive material into the atmosphere and over the surrounding area. At Fukushima, as I understand it, the radioactive products of a meltdown would be contained inside extremely thick, tough containers, which, so far, have not been damaged by the earthquake or the explosions.
There’s more to be said about what will happen, and how this might affect the world’s attitude toward nuclear power, and whether that’s a good thing or not … but that will have to wait for another post.
Update: A Third Explosion at Fukushima
There's been another explosion at the Fukushima nuclear plant. Now three of the four reactors at Fukushima have experienced an explosion. The previous two explosions were probably caused by a buildup of hydrogen, but it isn't certain whether that was the cause of this explosion as well.
The vents that emergency workers had hoped to use to flood the reactor chamber with seawater were malfunctioning, meaning that the core was dry (and un-cooled) for several hours. The vents finally started working in the early morning, but the chamber wasn't filling with water the way they had hoped, perhaps because of a leak.
A meltdown is still possible, but while radiation levels in the area are considered "elevated," they are low enough that it's very unlikely that the vessels that contain the reactor cores have been breached.
3/15/11 Update: Fire at 4th reactor
Shortly after the explosion at reactor 2 (the third explosion), a fire started at reactor 4. Between the fire and the explosion, radiation levels at the site briefly spiked to about 167 times the average annual dose. Reactor 4 actually wasn't producing power when the tsunami hit, but it did contain a cooling pool for spent fuel assemblies.
Nuclear fuel that has decayed to the point where it's not useful for sustaining a nuclear reaction still produces a lot of heat, and so it's stored in a pool of water for years to deal with the heat and radiation. Reactor 4 at Fukushima has one of these pools, and—just like with the active reactors—it looks like the cooling system was malfunctioning, which allowed the water in the pool to boil away, exposing the spent fuel. The spent fuel likely heated up until it ignited, or until it started a fire in the building.
Authorities are now warning people living as far as 20 miles away from the plant to stay inside to avoid any radioactive fallout. As for the emergency workers at Fukushima, CNN's expert says, "Their situation is not great. It's pretty clear that they will be getting very high doses of radiation. There's certainly the potential for lethal doses of radiation. They know it, and I think you have to call these people heroes."
Update: 2 Reactor containment vessels probably cracked
Japanese officials think that the spike of radiation around the Fukushima plant last night might have been associated with a cracked containment vessel in one of the reactors. Today, they think a second container might be cracked as well, and leaking radioactive steam.
Courtesy SchuminWeb
Buckle up, because this is a long post. But it’s about your second favorite thing: food. If you’re the impatient type, skip to the end for the bullet points.
(The number one thing is Hollywood gossip, duh. Go on and act like it’s not.)
So … imagine you and six of your friends standing in a room together. I know some of you don’t have six friends (Facebook doesn’t count), but for the sake of science pretend that you do. And I don’t know why you all are just standing around in a room. Trying to prove a point, I guess.
Imagine you and six of your friends are standing in a room together. Now, imagine one hundred times that number of people. Now imagine one hundred times that number. And one hundred times that number. And a thousand times that number.
That’s seven billion people, all just sort of standing around a room, and that’s about the number of people we have on the planet today.
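If you want to check the arithmetic in that thought experiment, a quick sketch:

```python
# The thought experiment, step by step: a room of 7 people,
# scaled up by 100, then 100, then 100, then 1000.
room = 7  # you plus six friends

population = room * 100 * 100 * 100 * 1000
print(f"{population:,}")  # 7,000,000,000
```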
And the thing is, all seven billion of y’all eat like Garfield. (Garfield, for all of you foreign Buzzketeers, was the 20th president of the United States, and he loved lasagna.) Seven billion people, eating, eating, eating. That’s you.
Obviously y’all have to eat, so we put a lot of effort into producing food. Right now, humans have used up about 40% of the planet’s land surface, and the vast majority of that is dedicated to agriculture (i.e., food production). In fact, if you were to take all the crop-growing land in the world and lump it together, it would be the size of South America. And if you were to take all of the pastureland (land for raising animals) in the world and lump it together, it would be the size of Africa!
That is obviously a lot of land. The transformation of that land from its natural state into agricultural land may be responsible for about a third of all the carbon dioxide mankind has released into the atmosphere. And each year agriculture is responsible for more than 20% of all the new greenhouse gas emissions. And the whole process takes 3,500 cubic kilometers of water, and hundreds of millions of tons of non-renewable fertilizers, and lots of people don’t have enough food …
But we’re pretty much doing it. It’s not pretty, but we’re feeding the planet.
Here’s the punch: there are a lot more people coming soon, and not much more food. By 2050, there will very probably be about 9 billion people on the planet. How are we going to feed 2 billion more people than are alive today? While there is a lot of unused land out there, very little of it is arable. That means we’ve already used up almost all of the land that’s good for growing food.
What we need to do is produce more food with just the land we’re already using. Fortunately, scientists are working on ways to do this.
I’m going to get the first one out of the way right now, because you aren’t going to like it …
Eat less meat. Eat a lot less meat.
Don’t get me wrong—I agree with you that meat is delicious and manly (or womanly), but we eat a lot of meat, and raising meat animals is a really inefficient way to get food. To get lots of meat, and to get the animals to grow quickly, we feed them grains that we farm. But to get just one pound of beef (not one pound of cow; one pound of beef) we have to feed a cow about 30 pounds of grain. Say what you will about meat being calorically more dense, it doesn’t have 30 times the nutritional value of grain.
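To put that 30-to-1 ratio to work, here's a minimal sketch. The ratio comes from the post; the 100-pound annual beef figure is just a hypothetical for illustration.

```python
# Feed-conversion math from the post: roughly 30 lb of grain
# go into producing each 1 lb of beef.
GRAIN_PER_LB_BEEF = 30  # lb of grain per lb of beef (figure from the post)

def grain_needed(beef_lbs: float) -> float:
    """Pounds of grain consumed to produce the given pounds of beef."""
    return beef_lbs * GRAIN_PER_LB_BEEF

# Hypothetical example: someone eating 100 lb of beef per year
print(grain_needed(100))  # 3000 lb of grain
```

Eating the grain (or other crops) directly would feed far more people from the same farmland, which is the whole argument of this section.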
If you look at the maps that compare the volume of crops we grow to the volume of crops we actually eat, you find that places like North America and Europe actually use most of their crops for something besides directly eating—mostly because we’re feeding them to animals (and using them for biofuel feedstock).
Leaving alone the amount of water animals need, and the pollution they can cause, eating meat doesn’t make a lot of sense.
So there you go. I told you that you wouldn’t like it. If it makes you feel any better, you’re not the only one causing the problem—the rest of the world, as it gets wealthier, wants to eat as much meat as you, and so unsustainable meat production is on the rise for just about anyone who can afford it.
Ok, here’s the next idea:
Cut it all down, and turn the planet into one big ol’ farm.
Courtesy Jami Dwyer
We aren’t going to be growing crops in the Arctic any time soon, but there are still areas we could take advantage of. Like the tropical forests. We could bulldoze those suckers down and use the land for crops.
This, of course, is a horrible solution, and I snuck it in here just to bother you. Even if you don’t prioritize the biodiversity of the world’s tropical forests, or the ways of life of the people who live in them, tropical forests play a huge role in keeping the planet a livable place. So we should table that one for a while, unless you really, really want to bulldoze the rainforests.
And then there’s this idea:
Grow more food on the land we’re already using.
Of course! Why didn’t we think of this before?!
Well, we did think of this before, about 60 years ago. Back in the middle of the 20th century, populations in developing countries were exploding, much faster than food production was increasing. Trouble was on the horizon.
And then … Norman Borlaug came along. Of course, lots and lots of people helped deal with the food crisis, but Borlaug was at the center of what became known as the Green Revolution. He worked to build up irrigation infrastructure (to water crops), distribute synthetic fertilizers (mostly nitrogen chemically extracted from the atmosphere), and develop high-yield crop varieties that would produce much more food than traditional crops, when given enough fertilizer and water.
Courtesy University of Minnesota
Now, some folks point out that the Green Revolution had plenty of environmental and social drawbacks, but the fact remains that it also kept millions upon millions of people from starving. And Borlaug himself said that while it was “a change in the right direction, it has not transformed the world into a Utopia.”
The change in the right direction part is what scientists are working on now.
Researchers at organizations like the University of Minnesota’s Institute on the Environment (IonE) are figuring out how to implement the sorts of things Borlaug worked on more fully, and more efficiently.
By combining satellite data with what can be observed on the ground, IonE is determining exactly where crops are growing and how much each place is producing.
They can then compare this information with estimates of how much each place could grow, given the right conditions. The difference is called a “yield gap.” What it will take to close the yield gap, and get each place growing as much as possible, differs from place to place. But IonE is trying to figure that out too—some places need more water, and some need more nitrogen, phosphorus, or potassium fertilizers.
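To make the yield-gap bookkeeping concrete, here's a minimal sketch; the place names and per-hectare numbers are invented purely for illustration.

```python
# Yield gap = potential yield minus observed yield, per place.
# All numbers below are invented for illustration.
observed = {"place_a": 3.0, "place_b": 5.0}    # tons per hectare, measured
potential = {"place_a": 8.0, "place_b": 5.5}   # tons per hectare, given ideal inputs

yield_gap = {place: potential[place] - observed[place] for place in observed}
print(yield_gap)  # {'place_a': 5.0, 'place_b': 0.5}
```

A big gap (place_a) marks a spot where extra water or fertilizer would pay off handsomely; a small gap (place_b) means that land is already near its ceiling.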
Knowing how much of a particular resource a place needs, and what the food payoff will be when it receives those resources is a big step in working up to feeding nine billion people. It’s not the last step, not by a long shot, but it provides an excellent map of where future efforts would be best invested.
Aaaaannnnd … the bullet point version for you osos perezosos out there:
May I have your attention, please?
(…Will the real Slim Shady please stand up?)
Very funny. But seriously, I’ve got breaking news!
The Institute on the Environment’s Dialogue Earth program is bursting into the online community. With their first press release, Twitter account, Facebook page, YouTube channel, and blog, they’re drawing attention, and new supporters, every day. They've even been featured on The Line, SUNfiltered, The Daily Crowdsource, and Crowdsourcing.org.
Big things, folks. I’m telling ya: big things.
(Um, excuse me, KelsiDayle, but what is Dialogue Earth?)
Oh, gosh. I’m always getting ahead of myself. I’ll allow Dialogue Earth to explain for themselves:
“The Dialogue Earth™ team is working to increase public understanding on timely issues related to the environment by delivering engaging, trustworthy multimedia content to large, diverse audiences.”
Consider the three main ways people gather information about the environment: direct personal experience, conversations with other people, and the news media.
Dialogue Earth is developing ways to monitor the ‘chatter’ from each information source.
For example, weather and gas price data sets allow Dialogue Earth to monitor these environmentally-relevant personal experiences.
Twitter provides the Dialogue Earth team with an intriguing sample of peoples’ conversations that have some connection to the environment. Dialogue Earth has developed a method of analyzing Tweets for sentiment through crowdsourcing.
Emerging and social media, like blogs, are changing our understanding of what counts as news, but there are still ways to understand the content, frames, sentiment, and assertions of stories. Dialogue Earth is working on developing a responsive and scalable method for doing so.
Eventually, Dialogue Earth hopes to help people work through the hot topics of the day, but for now it is focusing on understanding what the big issues are and how people are communicating about them. Knowing these things first should help Dialogue Earth develop additional effective communication tools in the coming months. In fact, Dialogue Earth has already conducted its first experiment in crowdsourcing creative content via Tongal. Check out the winning science video on the topic of ocean acidification below:
Pretty great stuff, huh?
We've probably been debating the virtues of urban areas since humans gathered in the first cities thousands of years ago. But one question we probably haven't explored much is how we can prepare our cities for climate change.
Climate and sea level have changed slowly throughout humanity's history, and we've been able to adapt. Until quite recently, humans either didn't build settlements in risky areas, or the ones they built (say on floodplains or near a sea shore) were temporary and easily moved or abandoned.
Now that we face accelerating and more extreme changes in the next 100 years, we also have some very permanent structures (and infrastructures) in the riskiest of places. Over 100 million people live in areas likely to be underwater by 2100. And even landlubbers face the challenges of more frequent extreme weather events--heavier rainfalls, droughts, etc.
Courtesy John Polo
Luckily, engineers are already beginning to plan for these changes as they retrofit and build new buildings and infrastructure. Often, these engineers are ahead of city building codes and have trouble persuading property owners to invest in addressing threats that lie in the future. But isn't it better safe than sorry? Maybe we could build cities so strong that climate change barely bothers us.
And even luckier perhaps is that cities are hotbeds of innovation and creativity. We could see the efforts of these engineers as just another example of urban virtues. More people mean more ideas and more resources devoted to the cause. And in our rapidly changing world, we need that teamwork more than ever.
We finally made it to the lakes above Byrd Glacier, and what a beautiful sight and perfect weather! It was a balmy -18F, but very calm winds made for a perfect afternoon for installing our GPS units. The surface was very densely packed snow rather than the blue ice we had been accustomed to working with, which made the construction process go much more quickly! We did have to be quite careful of hidden crevasses, however. On blue ice, what you see is what you get—the crevasses are easily visible. On snowpack, they become much more difficult to spot, since windblown snow has filled in the surface features that would normally make crevasse detection relatively easy.
Upon our arrival at the deployment location, our mountaineers, Mike and Peter, had to probe around looking for these hidden dangers. Once an area had been determined clear and safe, we proceeded to unload the equipment. From there it was as simple as digging a hole for the equipment box, constructing the frame and mounting the solar panels, then completing the install with the electronic hook-ups and testing.
Before we left on our flight yesterday, we were able to witness the departure of the fuel tanker. It was quite interesting to see the Swedish icebreaker, Oden, tethered to the tanker to help pull it away from the dock and out into open water. By the time we arrived home, the container ship BBC EMS had taken the tanker's place at the Ice Pier. The BBC EMS is the annual resupply vessel, filled with containers carrying all sorts of items, from food to spare parts to new vehicles to chemicals and more.
Operations are conducted 24hrs-a-day for the offload and a lot of departments and work centers have reduced service hours during this time in order to support the effort. Meal times have changed, bars are closed, hiking trails are closed, and people are moving about station constantly.
Courtesy NASA
Thems is what we calls "scare quotes" in the title of this post. (And thems in the last sentence is regular quotes.) Grammar is everywhere and nowhere!
Anyway, a couple of Italian researchers claimed last week to have a working cold fusion device. By fusing hydrogen and nickel, they say their machine is producing copper and about 30 times the energy they put into it.
Fusion is the process of mashing two atoms together to get a heavier atom and lots of energy. This process happens all the time... in the extreme heat and pressure of the sun and other stars. Here on Earth, the amount of energy extracted from a fusion reaction wouldn't be worth the energy it'd take to replicate the conditions of a star (to get the fusion going), unless we can figure out how to start a fusion reaction at near-room temperatures. So this new discovery is awesome, right?!
Yeah, well, probably not. It would be awesome, except that lots of people have tried to produce cold fusion—some have even claimed that they did—and it has never worked out. And in this case, other scientists reviewing the research say it looks like junk, the Italian scientists can't explain why their reaction works, and while they have demonstrated their device, they won't do so in a closed-loop system (where all the inputs and outputs can be accounted for).
Nonetheless, they're saying that their fusion device will go into production by the end of this year. The world, I'm sure, will happily eat crow if these guys have solved humanity's energy problems, but until then we remain skeptical.
Courtesy Wikimedia Commons
A rare copy of naturalist and artist John James Audubon’s epic book, Birds of America, just sold at Sotheby’s auction for more than $10 million. That’s an enormous sum considering the book is essentially a work of natural history illustration. Also known as the Double Elephant Folio because of its large size, the massive tome opens to 4 feet across and contains hundreds of plates of exquisitely drawn, life-sized paintings of birds in their natural settings. It’s considered one of the greatest collections of natural history illustrations in the world, and I have to admit, after researching the story behind this stunning collection of work, and its creator, I understand why it's so valuable.
Courtesy Wikimedia Commons
In the early 19th century, the Haitian-born Audubon (1785–1851) traveled across the eastern and central United States (often alone, sometimes with an assistant) to gather images of over 500 known species of bird. He would often draw them from life, but sometimes killed his avian subjects and posed them with wires in order to capture them on paper. The latter technique guaranteed the birds wouldn’t fly off. He used all sorts of media considered unconventional at the time to create his masterpiece images. Backgrounds were sometimes created by the artist himself, but more often by several assistants.
Courtesy Wikimedia Commons
Audubon developed his deep interest in birds and natural history as a child growing up in France. At age 18 he arrived in the United States (as an illegal immigrant, mind you), where he honed his passion for ornithology in the woods surrounding the family property near Philadelphia.
Courtesy Wikimedia Commons
During his early days in America he worked at improving his drawing techniques, and became skilled at specimen preparation and taxidermy, even working for a time in that capacity at a museum in Cincinnati. On a return trip to France he met naturalist Charles-Marie D’Orbigny, who schooled him in scientific methods of research and offered tips to improve his taxidermy skills.
Courtesy Wikimedia Commons
The book Birds of America was a well-planned venture long before it finally came to fruition. Audubon had the title in mind when he set about in 1820 to paint every known bird in America. His goal was to eventually produce a body of work that would far surpass any other in existence. And he did exactly that. For nearly three years he roamed down the Mississippi River and across the American frontier searching out specimens to paint, sometimes purchasing them from local hunters.
Courtesy Wikipedia
At the time, Alexander Wilson was considered the leading ornithologist and painter of birds. He had cataloged most known birds in the country, but his renderings were somewhat stiff and lifeless. Audubon worked persistently to make the birds in his drawings come to life, placing them in their natural ecosystems, often in active and dramatic poses. A single illustration would sometimes portray several species of bird.
Natural history illustration was and remains to this day crucial in disseminating scientific knowledge about the natural world. Detailed illustrations, graphics, and photographs help convey what's being explained in the text. Sometimes all the facets come together perfectly. Such is the case with Birds of America; its high regard is based on both its level of visual artistry and scientific information.
Since American printers couldn’t accommodate the oversize plates he insisted upon using, Audubon traveled to Great Britain where his paintings (and he himself) became an overnight sensation. The Brits were eager to learn anything about the new American frontier, its people and environs. The book’s original edition was printed by engraver Robert Havell (and son) starting in 1826. The process of engraving and printing all 435 plates took a dozen years and cost Audubon $111,640, a huge sum for the time. He financed the initial printing mainly through advance subscriptions, exhibitions, and lectures (a teen-aged Charles Darwin attended one of these).
Courtesy Wikimedia Commons
Initially, four title pages were sent to subscribers (including King George IV, an admirer of Audubon). Prints were then issued in groups of five, with the idea that buyers, if they chose to do so, would bind them together at their own cost. Each illustration was printed in black and white using etching and aquatint techniques on large copper plates measuring 39 by 28 inches. Each print was then hand-painted by an army of colorists, a technique common in the 19th century. An accompanying volume of text titled Ornithological Biographies was later added for each of the four plate volumes. The biographies match the illustrations in their scope: Audubon (aided by ornithologist William MacGillivray) gives a detailed description of each bird’s features (including drawings of internal organs), their behaviors, and the environments in which they lived.
Courtesy Wikimedia Commons
Audubon originally published about 750 copies of Birds of America, of which only 219 are extant today. Of those, only 119 complete copies exist, most of them in museum and library collections. Eleven copies are in private hands, and this latest intact volume is one of only two to be auctioned in the last decade. Over the years, many of the original editions were broken up and sold as individual illustrations. But with so few intact editions available, their value has skyrocketed compared with what single prints can attract.
After his death, Audubon’s wife sold most of the original paintings reproduced in Birds of America to the New York Historical Society for $4,000! Luckily for us, the originals are occasionally put on display there, and that would be something to see. Audubon’s final project, titled Viviparous Quadrupeds of North America, was completed posthumously by his sons.
You'd be hard-pressed to name a work as monumental as Birds of America in terms of both art and science; it's considered by many to be one of the most important natural history books in existence. And Audubon was served well by it, both financially and in the worldwide acclaim it brought him. He was elected to the Royal Society of Edinburgh and the Linnaean Society, and was only the second American to be named a fellow by London's Royal Society (Ben Franklin was the first). Charles Darwin made three mentions of Audubon’s work in his own book On the Origin of Species. The ornithological organization the National Audubon Society is named in his honor. Not a bad legacy for a backwoods kid who just loved birds.
Courtesy B. Mayer
Who hasn’t heard about the very great scientific and social problems of global warming and ocean acidification? As microbiologist Louis Pasteur noted more than a century ago, “The very great is accomplished by the very small.” Part of the answer to these very great problems can be accomplished by understanding the very small: ocean microbes, living things that are less than a hundredth of the thickness of a human hair.
Our effort to understand the very small in the ocean has just taken a big step. C-MORE Hale (Hawaiian language for “house,” pronounced hah-lay) was officially dedicated in a ceremony that took place on October 25, 2010. C-MORE, or the Center for Microbial Oceanography: Research & Education, is all about studying ocean microbes. Scientists at C-MORE are looking into microorganisms from the genomic, DNA level all the way up to the biome level, where microbes recycle elements in ocean ecosystems.
Headquartered at the University of Hawai`i, C-MORE’s interdisciplinary team includes scientists, engineers and educators from the Massachusetts Institute of Technology, Monterey Bay Aquarium Research Institute, Oregon State University, University of California – Santa Cruz and Woods Hole Oceanographic Institution. As a National Science Foundation center, C-MORE is a dynamic “think tank” community of researchers, educators and students from a variety of cultural backgrounds, including Native Hawaiians and other Pacific Islanders.
Courtesy B. Mayer
C-MORE Hale will be equipped completely and ready for scientists to put on their lab coats and get to work in January 2011. For now, e komo mai! (welcome!) Imagine yourself walking along this sidewalk leading to C-MORE Hale. Stop for a moment to look at the round pavers; they depict ocean microbes first discovered by 19th century zoologists on the worldwide HMS Challenger expedition. Step past these unique designs and take a tour of the brand-new building!
Courtesy ZooFari
Here’s my impression of the future:
“Um, hey. How was lunch? Italian dunkers, eh? Nice. Gotta love the dunkers. Ate those right up, I see. Pretty good sauce too, huh? Got some extra sauce there, actually. Were you going to… can I have that sauce? Yeah? Oh, it’s SO good.”
Yeah, that’s the future for you. Man, is he hungry. Stuff you wouldn’t touch, the future will pound back like Captain Haddock with a bottle of Loch Lomond (before that fiasco in San Theodoros).
But the future is smart, because it realizes that Italian dunker sauce is in short supply, and it’ll take perfectly good extra sauce wherever it can get it.
Are you still following the metaphor? Were you thrown by Captain Haddock?
Here’s what I’m saying: in the next few decades, we’re going to be super hungry for energy, food, and water, because there will be about 9 billion of us on the planet. So, in addition to coming up with new ways to produce all of these things, we’re going to have to look for areas where they’re being wasted right now, like all those puddles of Italian dunker sauce being shoveled into the cafeteria trash bins.
Example: drinking potato chip water.
Potatoes, as it happens, are about 75% water. When we turn them into potato chips, we get rid of all that water—we bake it, dry it, and fry it away. Considering how much we love dried potato products, that’s a lot of water wasted.
But that doesn’t mean we should stop eating potato chips. (NEVER!) Instead, some factories have been installing equipment to reclaim water that would otherwise be vented out of potato processing facilities as steam. One of the factories where the technology is being tried may be able to recapture as much as 3,000 liters of water an hour (about 790 gallons an hour). This water, already clean and pure, can be reused in the factory, or even sent back into the municipal water system.
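As a sanity check on those numbers (assuming US gallons, since the post seems to be American):

```python
# Convert the reclaimed-water figure from liters to US gallons.
LITERS_PER_US_GALLON = 3.785411784  # definition of the US gallon in liters

liters_per_hour = 3000
gallons_per_hour = liters_per_hour / LITERS_PER_US_GALLON
print(round(gallons_per_hour))  # ~793, close to the post's "about 790"

# Running around the clock, that adds up fast:
print(liters_per_hour * 24)  # 72000 liters reclaimed per day
```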
Although the article doesn’t mention it, I’d be willing to bet that there’s another product being recaptured with the water: energy. Steam, after all, is just water with a whole bunch of heat energy in it. With the right equipment, heat can be extracted from steam, and reused for anything from cooking to powering heating and cooling equipment.
Do you see now? The future, with its peanut butter covered fingers and greasy South Park t-shirt isn’t quite the loser you think it is. It’s using all that Italian Dunker sauce, in ways that you never imagined possible.
Courtesy Leigha Horton
Ever been on a beach (and I’m talking a real beach that rests alongside an ocean, not some piddly lakeshore)…AHEM, as I was saying - ever been on a beach when someone nearby sighs aloud, “water, water everywhere, nor any drop to drink?”
I have, and I’ve always found the thought astounding. How is it that our world can have so much water and somehow not figure out how to make it drinkable by efficient means, while at the same time equipping a populace with something as advanced as the iPhone?
And just so you know, over 70% of the Earth’s surface is covered by water, and of that water, over 96% is salt water from our oceans. Salt water that is totally unsuitable for drinking. (Who’s thirsty? MEEEEE!)
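Taking the post's figures at face value, the back-of-envelope math shows just how little drinkable water that leaves:

```python
# Back-of-envelope using the figures from the post.
water_fraction = 0.70   # fraction of Earth's surface covered by water
salt_fraction = 0.96    # fraction of that water that is salt water

fresh_share_of_water = 1 - salt_fraction
fresh_share_of_surface = water_fraction * fresh_share_of_water
print(round(fresh_share_of_water * 100))       # 4   (% of all water that is fresh)
print(round(fresh_share_of_surface * 100, 1))  # 2.8 (% of Earth's surface)
```

And much of that fresh water is locked up in ice caps and glaciers anyway, which is why efficient desalination is such an appealing target.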
Now don’t get me wrong, desalination methods exist in the world – they’re just not very efficient yet, using boatloads of energy for very little final, useable product.
According to a recent Wall Street Journal article, High-Tech Cures for Water Shortages, NanoH20, Inc. is harnessing the power of reverse osmosis using nanoparticles. Turns out these nanoparticles “attract water and reject salts and other particles that can clog other membranes, reducing the energy needed to push water through the membrane.” That’s pretty awesome. California, with its entire west coast on the Pacific Ocean, could stop fighting with Wyoming, Colorado, Utah, New Mexico, Arizona, and Nevada over rights to the Colorado River water.
And since NanoH20 is based in southern California, which presently gets most of its drinking water piped in from the dwindling Colorado River, I trust that they’re taking this whole usable-water thing seriously.