Archive for August, 2011

Smart Appliances and Energy

2011/08/30

Smart Appliances and Energy

With more and more household appliances using microprocessors to add “smart” features, their energy consumption can become an issue.  In fact, even the dormant state of these appliances can consume significant amounts of electricity.

One of the worst offenders is the now ubiquitous cable TV box, of which my house has half a dozen.  These use power to retain state.  This is superficially reasonable, since the catalog of the next several days of TV shows is part of that state (as are all the personal settings).  Downloading this catalog takes several minutes, which would be annoying for most consumers, who want “instant” access to their TV show listings.  Because these boxes use Internet connections, which take time to establish, there is a tendency to keep that connection powered 24 hours per day.  Here some design work is required to minimize energy consumption while maintaining fast reaction times when the TV is turned on.

Now we move to refrigerators.  One can imagine quite a number of applications for a microprocessor in the refrigerator.  Recipes from the Internet could pop up when a food item’s bar code is scanned or entered into the computer, but probably a more useful feature would be an inventory of what is being stored (fresh or frozen).  While storing an item, swipe its bar code so that the system knows when you first stored it and hence how old it is later.  Enter its expiration date and location in the refrigerator.  Coming back to recipes, how about a list of recipes that can be made from ingredients in the refrigerator?  Nice!  Even nicer would be to access this inventory via cell phone when at the market.  “Do I have any fresh cilantro at home?”  My current refrigerator simply beeps at me when the door is left open, but why not send my cell phone a message?  Of course, an Internet connection also allows video-based cooking instructions to be downloaded and played.  Want to see that technique to whip up a great soufflé?

Smart stoves and ovens have a different play when hooked up to the Internet.  My favorite would be to warm up my oven as I’m heading home from the market.  Integrated temperature probes are popular in microwaves, but they should be integrated into ovens as well.  Sensors that turn off the stove when a pot has boiled dry would be a nice safety feature.  OK, all these features are nice, but what is the cost of the energy?  Again, some design work is needed to minimize energy consumption and still maintain fast power-ups.

Refrigerators have another option that was discussed briefly in an earlier post.  When it is cheaper to use electricity at night, a refrigerator can wait until nighttime to make the system extra cold so that it can more easily ride out usage during the day.  Note that if the building has solar power to augment the utility-provided electricity, the solar power is “free” during the day, while at night electricity must be bought from the utility.  In that case it makes sense to keep the system extra cold during the day and to almost power off at night.
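
To make the idea concrete, here is a minimal sketch in Python of the kind of setpoint scheduling such a refrigerator could use.  The hours, setpoints, and the assumption of a simple cheap/expensive split are all made up for illustration; this is not any real appliance’s firmware.

    CHEAP_HOURS_SOLAR = set(range(9, 17))   # assume solar is effectively "free" 9am-5pm
    CHEAP_HOURS_GRID = set(range(0, 6))     # assume off-peak grid rates after midnight

    def freezer_setpoint_c(hour, has_solar):
        """Pick a freezer setpoint (deg C) for the given hour of day."""
        cheap_hours = CHEAP_HOURS_SOLAR if has_solar else CHEAP_HOURS_GRID
        if hour in cheap_hours:
            return -24.0   # over-chill while energy is cheap or free
        return -16.0       # coast (still food-safe) while energy is expensive

    for hour in (3, 12, 20):
        print(hour, freezer_setpoint_c(hour, has_solar=True))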

My current microwave’s defrost features are terrible.  Look, I want to defrost a steak quickly without partially cooking it.  I also don’t want to wait an hour defrosting a big chicken.  The smarts to do this are SMOP (“simply a matter of programming”).  My microwave gives me a ridiculous choice of defrosting options.  How about getting the right option off the Internet by scanning the food’s bar code?  Also, a microwave should probably be hooked up to a digital scale to weigh things, though putting the scale into the microwave itself has its charm.  A lot of these features can be implemented without using too much excess energy.  In fact, considerable energy can be saved simply by not overcooking things!
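
As a back-of-the-envelope illustration of the “SMOP,” here is a rough defrost-time estimate based only on the food’s mass: the energy needed to warm ice from freezer temperature and melt it, divided by an assumed defrost power and coupling efficiency.  The wattage and efficiency are guesses chosen for illustration, not specifications of any actual microwave.

    C_ICE = 2100.0       # J/(kg*K), approximate specific heat of ice
    L_FUSION = 334000.0  # J/kg, latent heat of fusion of water

    def defrost_minutes(mass_kg, start_c=-18.0, watts=300.0, efficiency=0.5):
        """Estimate minutes to thaw a frozen item of the given mass."""
        energy_j = mass_kg * (C_ICE * (0.0 - start_c) + L_FUSION)
        return energy_j / (watts * efficiency) / 60.0

    print(round(defrost_minutes(0.3), 1), "minutes for a 300 g steak")    # ~12
    print(round(defrost_minutes(2.0), 1), "minutes for a 2 kg chicken")   # ~83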

Now, who is going to invent the “green” robot that does all my cooking for me?

Of course the moral here is that smart appliances are coming, and I hope that industry will embrace saving energy.

-gayn

Storing Energy as MgH2

2011/08/29

Storing Energy as MgH2

With wind and solar power generation, it is often desirable to store energy locally rather than to put it onto the grid.  For example,

  • Sometimes the rates paid by the electric company are not favorable, e.g., at non-peak times, or are even negative.
  • Usually the electric company wants a “base load” to be provided, i.e., a minimum amount of electricity that goes onto the grid 7×24.  Neither wind nor solar can do this without burning another fuel such as natural gas or hydrogen.
  • A capital cost reduction can be partially realized if the generation system is designed NOT to generate more than a certain maximum level of electricity, with the rest being stored to address the base load issue.  The reason for “partially” is that the capital cost of the storage system needs to be factored into the total cost.

One storage possibility is to store the energy as hydrogen gas using electrolysis to break down water into hydrogen and oxygen:

2H2O + energy → 2H2 + O2

With an 80% efficient electrolysis unit, it takes about 50 kWh of electricity to create 1 kg of hydrogen. The hydrogen could then be used in a fuel cell or converted gas engine to generate electricity.  This is great except for the “detail” of storing the hydrogen.  (What to do with the resultant oxygen is another detail.  It can be bottled and sold, or simply released into the air.)  Most people suggest compressing the hydrogen possibly to a liquid form, but that takes considerable energy, and the result is not economical for the production of electricity.  Not compressing it takes too much storage and pumping to move it around.
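
The 50 kWh figure is easy to check: divide hydrogen’s higher heating value (about 39.4 kWh/kg) by the assumed 80% electrolyzer efficiency.  A quick Python sanity check:

    HHV_H2 = 39.4                      # kWh per kg of hydrogen (higher heating value)
    electrolyzer_efficiency = 0.80

    kwh_per_kg = HHV_H2 / electrolyzer_efficiency
    print(round(kwh_per_kg, 1), "kWh of electricity per kg of H2")   # ~49.3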

Multiple methods for storing the generated hydrogen as hydrides, e.g., magnesium hydride, MgH2, have been developed over the years.  For MgH2, they involve grinding the magnesium in a hydrogen environment at pressure (15 Bars or so), at high temperature (in excess of 300°C), and in the presence of a catalyst. (Various catalysts have been proposed, including MgH2 itself.) This process gives an initial supply of MgH2 for use in an energy storage cycle.

The folks at Safe Hydrogen mix the resulting MgH2 into a “slurry” by adding light mineral oil and some dispersants. The dispersants aid in keeping the MgH2 particles from agglomerating and help to stabilize the slurry. The mineral oil helps to protect the MgH2 particles from inadvertent contact with moisture in the air and provides the liquid medium for the slurry.  The resulting slurry is then cooled to room temperature for stable storage.  This process is described in their patent US 2010/0252423 A1. The slurry looks a lot like thick paint.  Here’s a photo from one of the Safe Hydrogen papers, links to which can be found on their web site:

70 Percent MgH2 Slurry

At ambient temperatures, the slurry is stable, not highly flammable, and can be pumped, transported, and stored in standard tanks and trucks.  Its stored energy density is as high as 3.9 kWh/kg and 4.8 kWh/L [0].  Compare this to gasoline, which has about 1.8 kWh/kg of practical energy storage out of 12.8 kWh/kg of total theoretical energy storage [1].

The slurry can be converted back to hydrogen and recycled as follows:


The first technique is to add water to release the hydrogen:

MgH2 + 2H2O → Mg(OH)2 + 2H2

The resulting byproduct, magnesium hydroxide, Mg(OH)2, is, in its pure form, a white powder, and when mixed with water it forms a suspension that is essentially the same as “Milk of Magnesia.”  Recycling the Mg(OH)2 involves several steps:

  1. The separation of the oils from the byproduct slurry for reuse.
  2. The calcination of Mg(OH)2 to MgO by heating to around 300°C,

Mg(OH)2 → MgO + H2O,

  3. The electrolytic reduction of the resulting magnesium oxide using the solid-oxide oxygen-ion-conducting membrane (SOM) process developed at Boston University [2]:

2MgO → 2Mg + O2

  4. The hydriding of the Mg obtained in the preceding step with H2 (obtained by additional electrolysis of water) to form MgH2, done by mixing magnesium powder with magnesium hydride powder and hydrogen at about 300°C and 10 Bar.
  5. The production of new slurry from the MgH2 and the recovered oils.

In this process, slurry is pumped into a stream of hot water to mix the slurry with the water. The mixed stream then flows into a reaction chamber where the bulk of the reaction takes place, releasing hydrogen. Several injections take place in quick succession to take advantage of the heat from the previous reaction. The byproducts of this reaction collect in the bottom of the reactor. Periodically, the reaction chamber is flushed by filling it with water to recover the oils from the slurry. The water level is reset and the injections resume. Also periodically, the byproducts are moved from the bottom of the reactor to the byproduct separation chamber. In the byproduct separation chamber, the water is filtered from the solid byproducts and returned to the water storage container. The hydriding step (step 4 above) is described in detail in the above-cited patent.  The heat produced by the electrolysis process was estimated to be enough to satisfy the heating requirements of the system in continuous operation.
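
A quick mass balance on the hydrolysis reaction above shows where the slurry’s energy density comes from.  The molar masses are standard; the kWh conversion uses hydrogen’s heating value of roughly 33-39 kWh/kg.  This is my own illustrative arithmetic, not a Safe Hydrogen figure.

    M_MG, M_H = 24.305, 1.008                      # g/mol
    m_mgh2 = M_MG + 2 * M_H                        # ~26.3 g/mol

    mol_per_kg = 1000.0 / m_mgh2                   # moles of MgH2 per kg
    kg_h2 = mol_per_kg * 2 * (2 * M_H) / 1000.0    # 2 mol of H2 per mol of MgH2

    print(round(kg_h2, 3), "kg of H2 per kg of MgH2")             # ~0.153
    print(round(kg_h2 * 33.3, 1), "to", round(kg_h2 * 39.4, 1),
          "kWh per kg of MgH2 (LHV to HHV)")                      # ~5.1 to ~6.0

Note that half of that hydrogen comes from the water rather than from the hydride, and that a 70 percent MgH2 slurry at these numbers lands in the same neighborhood as the 3.9 kWh/kg quoted earlier.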

Work is still needed to condense magnesium into a powder from the vapor produced by the SOM process. To minimize the fire hazards associated with magnesium powder, the powder should be immediately hydrided, and the resulting magnesium hydride should proceed directly into a slurry production process.

According to Andrew McClaine, CTO at Safe Hydrogen, “We are currently scaling up from a laboratory scale of 2 kWth to a scale of 20-40 kWth.”  In other words, this technology needs a lot of development before it can scale to utility-sized operations, which would require tens to hundreds of MWh of energy stored.  Note that 10 MWh would take about 2,000 liters of MgH2 slurry.  Safe Hydrogen’s estimates on cost seem reasonable, but the process is complex with lots of variables.  It usually takes a decade for such technology to become cost effective and mass produced.
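
Using the slurry energy densities quoted earlier in this post, the utility-scale arithmetic is straightforward:

    target_kwh = 10000            # 10 MWh of stored energy
    kwh_per_liter, kwh_per_kg = 4.8, 3.9

    print(round(target_kwh / kwh_per_liter), "liters of slurry")   # ~2,083
    print(round(target_kwh / kwh_per_kg), "kg of slurry")          # ~2,564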

McClaine also points out that there is another technique to release hydrogen from the slurry: heat it to around 280°C.  This releases some of the hydrogen rather efficiently and allows the slurry to be replenished later by adding hydrogen from the electrolysis process.  Apparently this cycle can be repeated efficiently approximately 100 times before the complete first process needs to be performed.  Safe Hydrogen is working on optimizing this combined technique and on increasing the number of cycles from 100 to potentially 1,000 or so.

Refining all these techniques may well produce a commercially viable product, even at a scale of 20-40 kWth.  Safe Hydrogen seems to think so.  It appears that smoothing out the energy produced by wind farms may become a commercial application earlier than transporting the slurry and producing hydrogen elsewhere. [3]

-gayn

[0] Certain proposed lower-cost processes will produce a slurry with lower energy density.  Safe Hydrogen is experimenting with various processes.  Personal communication, Andrew McClaine, 8/28/2011.

[1] The Journal of Physical Chemistry Letters

[2] Solid-Oxide Oxygen-Ion-Conducting Membrane (SOM) Technology For Production Of Magnesium Metal By Direct Reduction Of Magnesium Oxide, D. E. Woolley and U. Pal, Department of Manufacturing Engineering, Boston University, Boston, MA 02215; G. B. Kenney, ElMEx, LLC, Medfield, MA 02052

[3]  Storing Wind Energy as Hydrogen, David Anthony and Ken Brown, reprinted in Green Economy Post, August 15, 2011.

Metal-air Batteries

2011/08/29

Metal-air Batteries

As discussed in a previous post, wind and solar generators need energy storage mechanisms for a variety of reasons.  This post discusses some promising new battery technology, rechargeable metal-air batteries.  Single use metal-air batteries have been around for years.  The new technology here is to design them so that they are efficiently rechargeable.

There are three sizes of applications to think about:  little button batteries for electrical gadgets (usually single use), batteries for electric vehicles, and batteries to smooth the energy generation of utility-sized wind and solar generators.

The current commercial state of the art, Li-ion batteries, which are used in today’s electric and hybrid vehicles, are really heavy and they cost a lot!  They have some other problems.  First, the range of today’s EVs is 50-200 miles, whereas we’re used to, say, 350 miles.  Second, the charging time is on the order of 7 hours, whereas we’re used to 3-4 minutes to refill a car’s gas tank.  These problems, of course, get worse when scaled to utility-sized units; hence, the industry is driven to a new generation of battery.

Metal-air batteries use oxygen directly from the air, which allows for higher total energy density because the cathode reactant is effectively unlimited.  This definitely will reduce the weight.  The possible metals investigated by the industry are Zn, Fe, Al, Mg, and Li, with zinc (Zn) and lithium (Li) being the most frequently discussed.  One model predicts that the overall theoretical energy density of a polymer electrolyte Li-air battery could be as high as 2790 Wh/kg and 2800 Wh/L, which is comparable to gasoline-air combustion engines [1].  The comparison figures here come from a table in Chemical and Energy News, 11/22/10.  Note that Li-air batteries have 10-11 times the energy storage potential that Li-ion batteries do (per weight).  Zn-air batteries have about four times the energy storage potential of Li-ion batteries.  The potential or theoretical numbers are computed from the energy released by the metal assuming total oxidation by the oxygen.  They are clearly much greater than the energy released by the corresponding metal-air batteries today, although this battery technology will improve over time.
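
For the curious, such theoretical numbers come straight from the electrochemistry: specific energy = n·F·E / (3600·M), where n is the number of electrons transferred, F is Faraday’s constant, E is the cell voltage, and M is the mass basis.  Here is a rough reproduction for Li-air and Zn-air, counting only the metal’s mass and using commonly cited cell voltages; treat the results as ballpark figures of my own, not values from the table cited above.

    F = 96485.0   # Faraday's constant, C/mol

    def wh_per_kg(n_electrons, cell_volts, kg_per_mol):
        return n_electrons * F * cell_volts / (3600.0 * kg_per_mol)

    # Li-air: 2Li + O2 -> Li2O2, ~2.96 V, 2 electrons, mass basis 2 mol of Li
    print(round(wh_per_kg(2, 2.96, 2 * 0.00694)), "Wh/kg for Li-air")   # ~11,400

    # Zn-air: Zn + 1/2 O2 -> ZnO, ~1.65 V, 2 electrons, mass basis 1 mol of Zn
    print(round(wh_per_kg(2, 1.65, 0.06538)), "Wh/kg for Zn-air")       # ~1,350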

This would address the weight and energy storage, but rechargeability is a problem.

Older lithium-air batteries do not have long-lasting bifunctional cathodes, where both the oxygen reduction and evolution take place.  Effective catalysts are needed to decompose the byproducts of discharge, such as Li2O2 and Li2O.  These are not soluble in current electrolytes and eventually clog the pores of the cathodes, seizing up the cell.  Membranes between the anode and cathode can also clog.  Similar problems occur for older zinc-air and other older metal-air batteries.  Thus, to make effective rechargeable metal-air batteries (i.e., ones with high rates of both oxygen reduction and evolution), new cathode materials, catalysts for both the reduction and evolution cycles, electrolytes, and membranes are needed.

Current research on Li-air has uncovered more problems.  Li-air batteries require a higher voltage to charge than they deliver on discharge.  The ratio of the energy delivered to the energy required to recharge is called “energy efficiency,” and most metal-air batteries today have an energy efficiency of around 60%.
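
For example, with an (illustrative) average discharge voltage of 2.6 V and an average charge voltage of 4.3 V at the same current:

    discharge_volts, charge_volts = 2.6, 4.3
    efficiency = discharge_volts / charge_volts
    print(round(efficiency * 100), "% energy efficiency")   # ~60%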

Another problem is the discharge rate.  The reaction between lithium and oxygen in today’s Li-air batteries proceeds too slowly to generate significant current.  It is a little impractical to gang them up to get adequate current.

Next, cost.  While current research tends to favor the performance of lithium-air batteries, zinc is a far more abundant metal than lithium, and hence the cost of zinc-air batteries could be significantly lower than that of lithium-air batteries.  At least two companies are headed in this direction with their zinc-air technologies.  One, ReVolt Technology, has a patented [2, 3] mixture of materials for the anode, cathode, catalyst, electrolyte, and membrane, and the other company, EOS Energy Storage, has a patent-pending mixture.  Both believe that their technologies can scale to utility-sized batteries, i.e., batteries with tens to hundreds of MWh of energy stored, and both believe that they can address the EV market (which would give them the scale to keep the price down at the utility level).  In other words, both believe they have solved the above-mentioned problems with today’s Li-air batteries.  The industry continues with considerable research being done by various organizations.  These organizations all hold a wide variety of related patents.  Lawyers will do well when the industry starts to shake out!

Of course, just as an array of wind or solar generators is needed to produce utility-sized amounts of energy, an array of batteries is needed with a capacity sufficient to smooth out the peaks and valleys of the generated electricity.  For example, the wind might not blow or the sun might not shine for several days in a row.  Allowing for these extremes could drive the cost of the batteries unacceptably high.  As a second level of backup, natural gas generators could also be part of the system, but one would expect very little natural gas to be burned over the course of the year with a reasonably sized battery array.

-gayn

[1] J. P. Zheng et al., J. Electrochem. Soc. 155(6), A432-A437 (2008)

[2] Patent US 2007/0166602 A1, “Bifunctional Air Electrode”

[3] Patent US 2008/0096061 A1, “Metal-Air Battery or Fuel Cell”

CPV Environmental Impact

2011/08/22

CPV Environmental Impact

Today I read the paper “An Assessment of the Environmental Impacts of Concentrator Photovoltaics and Modeling of Concentrator Photovoltaic Deployment Using the SWITCH Model” June 2011 by Dr. Daniel Kammen of the Renewable and Appropriate Energy Laboratory at UC Berkeley and his Ph.D. students James Nelson, Ana Mileva, and Josiah Johnston.  [cf. www.rael.berkeley.edu]  It’s not too long (25 pages) and is packed with information.  I recommend it.  The review/summary here contains my comments.

The paper discusses, in the context of other forms of energy generation, the comparative environmental impact of CPV.  The short answer, if you don’t want to read any more, is that CPV has comparatively minimal environmental impact, and in particular is slightly worse than PV due to the former’s tracking system (see below) and a tiny bit better than CSP due to the latter’s cooling and water use.  Of course, oil, coal, and even natural gas are bad environmentally for a variety of reasons, and thankfully the paper doesn’t harangue about this too much.

The report considers three Life Cycle Assessment (LCA) phases: (1) fabrication and deployment of the energy generation facility, (2) energy production and maintenance, and (3) recycling and disposal at end of life of the facility.  Discussed in this context are the environmental issues of energy, emissions, water use, and land use.  Also considered in this report is the Energy Pay-Back Time (EPBT), which is the time (in years) it takes for the net energy generated in Phase 2 (roughly, the energy generated minus the energy used) to equal the energy used in Phases 1 and 3, i.e., in creation and disposal of the generation facility.  Heretofore, this number has been estimated in the 3-15 year range; however, this report estimates it as less than 1 year for both PV and CPV. Caution:  one review of this paper that I read has challenged the logic and data used by the authors in their calculations of EPBT.  The authors point out that EPBT calculations are also sensitive to the geographic location of the generation facility, and hence have significant variance.
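
Written out, the EPBT definition is just one division.  Here is a tiny Python version with made-up numbers purely to show the shape of the calculation (these are not figures from the report):

    def epbt_years(embodied_kwh, annual_generated_kwh, annual_used_kwh):
        """Energy Pay-Back Time: Phase 1+3 energy divided by net annual Phase 2 energy."""
        net_annual_kwh = annual_generated_kwh - annual_used_kwh
        return embodied_kwh / net_annual_kwh

    # e.g. a hypothetical system costing 1,500 kWh to build and dispose of,
    # generating 2,000 kWh/year and consuming 200 kWh/year in operation:
    print(round(epbt_years(1500.0, 2000.0, 200.0), 2), "years")   # ~0.83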

Green House Gas (GHG) emissions for Phase 1 are considered.  They are highest for large tracking systems such as CSP and CPV that use a lot of GHG-intensive steel in their structures. Such steel is heavy and incurs greater shipping costs in Phase 1.  Not only could the designs be improved to reduce the amount of steel used, but such steel could also be manufactured where a large portion of the steel production energy comes from solar or at least renewable energy sources.

Water use during Phase 1 is difficult to estimate due to lack of data on recycling; CPV’s use is estimated at 2 times that of PV.  Water use during operations, Phase 2, is significant since the large sites tend to be in water-constrained regions.  In any case, for PV and CPV, water is only used for washing, as opposed to CSP, where it is used for wet cooling.  If dry cooling, i.e., air cooling, is used in a CSP plant, then water use is reduced 90% and is relegated to washing and steam production.

When mining, transportation, and disposal of non-renewable fuels are taken into consideration, the land use per GWh of renewable and non-renewable generation turns out to be comparable.  Hydroelectric and wind are several times higher, and rooftop PV systems, where the land is already used by the building, are several times lower. CPV has an advantage in land use over PV and CSP.  In addition, most large CPV and CSP systems are pole mounted, allowing potential reuse of most of the land under the arrays for plants and small animals.  Such use, the authors note, is not currently common, although CPV vendors seem to be leading here. Environmental impact studies should (by law) address the impact on flora and fauna in the region; however, such studies can get around this by stating that the impact is minimal in the large.

The authors also model CPV deployment in the southwest area covered by the Western Electricity Coordinating Council (WECC) by starting with around 1000 existing installations of various types and varying the mix of 10,000 additional renewable and conventional installations.  Their model runs thousands of wind and sun conditions on an hourly basis to meet the forecasted needs and to minimize the cost of generation, storage, and transmission.  The model uses NREL’s System Analysis Model (SAM) to get various costs of operation and maintenance.  At this point the paper is somewhat unclear as to just how the mix of the additional 10,000 generation types is determined.  It appears that CPV sites are preferentially added and CSP sites are not considered by this modeling.  That said, the conclusions are:

  • It would be economical to install between 12 and 43 GW of CPV by 2030 in the United States Desert Southwest
  • Including CPV allows for deeper CO2 reductions in the electric power system
  • CPV displaces natural gas generation on the margin
  • Strong carbon policy (e.g. charging $40/ton for CO2 generation) increases the deployment of CPV

Analyzing several of the graphs in this paper, it also appears that with the authors’ investment assumptions, the WECC area will need to increase its use of natural gas to cover peak usage.

In summary, I wish there were more studies like this.  Indeed, the authors’ model could be run many times over with differing investment assumptions to generate additional fascinating information.

-gayn

Concentrated Solar Power (CSP)

2011/08/19

Concentrated Solar Power (CSP)

The basic idea behind Concentrated Solar Power (CSP) is dead simple, especially if you liked to start fires with a magnifying glass when you were a kid:  Focus a lot of sunlight to gather heat, and use this heat to generate electricity.

There are several ways to do this.  A simple way, to get our mental juices flowing, would be to use the heat source to boil water to generate steam, and then use a steam engine to drive a generator to produce electricity.  While simple, this approach turns out to have a few problems, e.g., needing a water supply in the middle of a desert and the fact that steam is not a good energy storage medium.  Thus, sometimes this approach is used as a “back end” of a more complicated system, where one fluid is heated to a very high temperature, and this hot fluid is used to create steam for a steam engine.  This is discussed more below.

A vastly better approach is to focus the sun’s rays to create a very hot liquid, and to use it to heat a Stirling Engine to generate electricity.  The very hot liquid stays external to the Stirling Engine.  There is a nice article on Stirling Engines in Wikipedia here. Like steam engines, they are external combustion engines, as opposed to internal combustion engines such as gasoline and diesel engines.  Here’s a picture from the Wikipedia article.  Since WordPress doesn’t support motion in gif files, you’ll have to click on the picture to see the motion.

Stirling Engine

For this simple Stirling Engine, there is only one cylinder, hot at one end and cold at the other. A loose fitting displacer shunts the internal fluid between the hot and cold ends of the cylinder. A power piston at the end of the cylinder drives the flywheel, which in turn drives the generator.  In our case, our external hot fluid replaces the external “fire” in this picture.

Let’s break a CSP system down a little more.  There are several ways to “focus” the sunlight.  First there are three dimensional parabolic dishes that focus the light essentially onto the hot side of the Stirling Engine.  Here are some images taken from Google’s image collection (Click on them to see their original source.):

Above there is a single CSP with one Stirling Engine at the focal point, and below an array of such CSPs.

Note in this array there are multiple Stirling Engines. While more expensive, this is goodness, because the array keeps working in the presence of even multiple engine failures.

Another approach is for movable mirrors (heliostats) to reflect the sunlight to a “point” on a tower.  The mirrors are usually rather flat parabolic mirrors so that their focal point is quite far away – near the top of the tower.

In this tower/heliostat approach, the external fluid is heated at the top of the tower and piped down to a Stirling Engine or to a steam engine.  After it has been used for heating, the cooled external fluid is piped back to the tower to be reheated.

A second form of CSP uses a two dimensional parabolic trough:

Along the line of parabolic foci is a thermal collector pipe that not only serves as an “oven” for the heated fluid but also transports it to a central generating facility.  Below is such a facility in Abu Dhabi, at one of the world’s largest CSP plants.

A third way to focus sunlight is with a Fresnel lens or mirror.  Recall these are used by many CPV systems as well.  In the lens case the thermal collector pipe is below the lens.  For Fresnel mirrors, it is above the mirrors. Here’s a photo:

The company Areva has a fourth, combination-type setup.  Here are a couple of photos from their web site:

The blue trough is a parabolic trough focusing the light onto the overhead “compact linear Fresnel reflector” which in turn focuses the light onto a thermal collector pipe.  Here’s a close-up photo of that:

To maximize efficiency, all CSP systems need tracking so that they “point” towards the sun.  The trough and Fresnel systems need only single axis tracking, while dual axis tracking is needed for tower/heliostat CSPs and also for parabolic dish arrays.

One potential problem CSP systems have is glare.  A system under construction in Cloncurry, Australia was scrapped in November 2010 due to concerns about reflective glare in an urban environment.

The next part of a CSP system is the external heating fluid, which is typically some sort of molten salt, although other materials, including super-heated steam and graphite, have been used. The molten salt is commonly a mixture of 60 percent sodium nitrate and 40 percent potassium nitrate, commonly called saltpeter. New studies show that calcium nitrate could be included in the salt mixture to reduce costs and provide technical benefits. This salt melts at 220 °C (430 °F) and is kept liquid at 290 °C (550 °F) in an insulated storage tank.   The main feature of liquid salt is that it can be heated to very high temperatures, in excess of 500 °C, and at this heat it can be used to drive a Stirling Engine or generate steam for many hours after the sun goes down.  This time of course depends on the initial temperature, the quantity of liquid, the insulation, and the efficiency of the Stirling or steam engine used.  Typically, current systems continue to generate significant amounts of electricity for 7 to 8 hours after the CSP stops heating the salt.  This isn’t quite enough for 24-hour operation, so some utility companies set up “hybrid” systems where the load at night is met with a gas-fired generator.
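
A rough sizing sketch shows how the “7 to 8 hours” figure can arise.  Stored thermal energy is sensible heat, Q = m·cp·ΔT, and only a fraction of it becomes electricity in the steam cycle.  The salt mass, temperatures, and 40% cycle efficiency below are my own illustrative assumptions, not data from any particular plant.

    CP_SALT = 1500.0    # J/(kg*K), approximate specific heat of nitrate "solar salt"

    def hours_of_generation(salt_tonnes, t_hot_c, t_cold_c, plant_mw, cycle_eff=0.40):
        """Hours a plant of plant_mw can run on the heat stored in the salt."""
        q_joules = salt_tonnes * 1000.0 * CP_SALT * (t_hot_c - t_cold_c)
        electric_mwh = q_joules * cycle_eff / 3.6e9
        return electric_mwh / plant_mw

    # e.g. 3,300 tonnes of salt cycled between 565 C and 290 C feeding a 20 MW plant:
    print(round(hours_of_generation(3300, 565, 290, 20), 1), "hours")   # ~7.6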

These systems are improving, and the 19.9 MW Gemasolar CSP plant near Seville, Spain generated 24 hours of continuous electricity in July 2011 using a molten salt storage system.  This design should average 20 hours per day of electrical power generation, hitting 24 hours on particularly sunny summer days.  This plant has a field of 2,650 heliostats that focus sunlight on a central tower. Here’s a photo:

Future posts will compare CSP systems with CPV systems for efficiency and cost effectiveness.

-gayn

AWS GovCloud Announced

2011/08/16

The AWS GovCloud

Companies with data subject to compliance regulations such as the International Traffic in Arms Regulations (ITAR) are not allowed to manage and store defense-related data in an environment that could be accessed by anyone outside the U.S.  This rules out most Cloud implementations.  Amazon addresses this legal requirement with its recently announced [cf. businesswire.com 8/16/11] AWS GovCloud, which is an AWS Region that is physically and logically accessible only from the U.S. by U.S. persons.  Government contractors and government agencies can now manage more heavily regulated data in AWS while remaining compliant with strict federal requirements. GovCloud offers similar levels of AWS security as other AWS Regions but also supports controls and certifications such as FISMA, FIPS 140-2 compliant endpoints, SAS 70, ISO 27001, and PCI DSS Level 1.

The usual AWS resources can be deployed from the AWS GovCloud including Amazon’s Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Virtual Private Cloud (VPC).

NASA’s JPL, a long-time AWS user, will soon take advantage of the AWS GovCloud. Other government contractors and agencies (cf. http://aws.amazon.com/federal/) that use AWS in some Region now have the option of moving over to the GovCloud Region.  (Amazon charges for moving data from one Region to another.  I wonder if they will waive this charge for Government agencies.)

It will be interesting to see how quickly classified simulations can be moved to a Cloud like AWS GovCloud.  These simulations require tremendous hardware resources (ASIC simulations are typically run on Palladium emulators, for example) and would be perfect for the Cloud, since such resources are only needed during simulations.  Flight simulations, Red/Blue Force battlefield simulations, military GPS simulations, and some electronic warfare (EW) simulations could benefit.  Some simulations need various types of high-frequency RF fed to special purpose hardware in the simulation environment.  Today, these feeds are captured and stored on special purpose Digital RF Memory devices (DRFMs), which play back into the simulation environment directly.  Such huge storage devices would either have to be deployed directly and virtualized in the Cloud environment, or they would have to be simulated themselves with appropriate timing adjustments made.  Either the special purpose hardware would need to be simulated, or placed on site.

In the end, the cost savings for the Government and its contractors to use such a Cloud implementation will be substantial, and the business, legal, security, and implementation kinks will surely be worked out soon.

Finally, other Cloud vendors will undoubtedly offer competitive services to the AWS GovCloud.  Again, it’s only a matter of time.

-gayn

Google Buys Motorola

2011/08/16

Google Buys Motorola

Well, today’s headline knocked my socks off!   I previously noted that Google was poorly “patent positioned” in the mobile space and was going to be increasingly challenged by patent holders.  This would potentially have damaged the Android mobile phone sector.  Now, armed with Motorola’s 17,000+ patent portfolio, Google can definitely play the patent exchange game with much greater power.  Let me illustrate.  I recall when Apple came to Digital with a claim that Digital was infringing on Apple’s claim to have invented the mouse.  It was more specific than that, but never mind.  Digital countered by offering to trade a patent claiming that Digital had invented Ethernet.  The specifics aren’t important, but the issue went away with the trade of equally obtuse patents.

OK, that’s goodness for Android, but the Motorola purchase changes the dynamics of mobile computing tremendously.  Now, Android is no longer an operating system from a company that is hardware neutral, as Microsoft was in the formative PC days.  Now, Android is the product of a hardware vendor of cell phones and tablets.  Companies like Samsung, Nokia, and HTC have to rethink their strategy.  What was formerly a no-brainer, adopting Android, is now a question of whether to use technology from a competitor.  What if Google were to decide to keep some OS technology for itself, or if it were to decide to charge for Android?  It also isn’t clear that Google won’t prioritize the development of Android features to favor its own (formerly Motorola) products.  Should a cell phone or tablet company hedge its bets and also support one or even two other operating system competitors to Android?

The Android Open Source Project (AOSP) officially maintains Android, but I don’t understand how its costs are covered.  My assumption has always been that Google covers most of the cost and has most of the influence.  It is one thing when Google makes such immense profits that Google’s Android development costs are lost in the expense round-off errors, and another if it gets public scrutiny. Of course, phone and tablet vendors could decide to equally fund AOSP, much like the “Gang of Nine” PC clone makers decided to fund EISA after IBM bullied the PC market with their proprietary Microchannel bus.  It will be interesting to see how Google will address this problem.

Another possibility is to hedge bets with another operating system.  Some cell phone makers already support Microsoft, and HP has WebOS.  My guess is that Microsoft’s OS will benefit from Google’s little dilemma, but I can only speculate on WebOS, which gets some good reviews in spite of HP’s meager market share.

My advice to Google:  Call in all your Android AOSP partners and address this RIGHT NOW.  Openness is really important to instill confidence and to keep the Android market penetration climbing.

-gayn

Storing Energy from Solar Arrays

2011/08/11

Storing Energy from Solar Arrays

Even when there is heavy daytime use for solar power, e.g., air conditioning, there will always be a need for power when the sun can’t provide it.  Ideally, the solar generation system will be over-specified so that peak usage can be handled, in which case there is a need to store the excess energy for “after sun” hours.  In addition, peak energy consumption is in the afternoon, while solar arrays are also effective in the morning hours, which can generate excess power in the mornings.

Now the best “storage” device is usually the grid.  Dump your excess energy onto the grid so that someone else can use it, and then use the grid whenever (day or night) your solar power isn’t adequate.

The problem is that the grid has only minuscule renewable storage capacity – most of it behind dams that contain hydro-electric generators. Thus “storing” excess energy on the grid assumes that someone can use it at essentially that moment.  Utility companies try to shape demand by turning on and off not only hydro-electric generators but also (ugh) fossil fuel generators, and by adjusting the various components of energy pricing.  For the excess producer of solar or other renewable energy, this means that putting electricity onto the grid at non-peak times may provide low or even negative prices for that energy.

That said, what if you can’t (or don’t want to) dump your excess power onto the grid for some reason?  Here are some thoughts to be elaborated on in subsequent postings:

  • Store the energy as heat, e.g. as hot water.  This can be done in hot water tanks, or even in a swimming pool.  A more sophisticated method is to store the heat in a heat sink such as molten salt (very hot), which can be reused in various ways.  There are various salts besides NaCl – common table salt – but the idea is to heat the salt to over 500 degrees Celsius.  Well insulated, liquid salt can keep 90% of its heat for over 24 hours.
  • Storing heat in gravel has merit.  Isentropic’s Pumped Heat Electricity Storage (PHES) system is based on the first Ericsson cycle and uses a heat pump to store electricity in thermal form. The storage system uses two large (7 m high x 8 m in diameter) containers of gravel, one hot (500°C) and one cold (-150°C), with a heat pump machine between them. Electrical power is input to the machine, which compresses/expands air to 500°C on the hot side and -150°C on the cold side. The air is passed through the two piles of gravel, where it gives up its heat/cold to the gravel. In order to regenerate the electricity, the cycle is reversed, with a round trip efficiency of 70-80 percent. The temperature difference is used to run the system as a heat engine.
  • Compressed Air Energy Storage (CAES) is very interesting; however,  don’t think of a tank of compressed air, but rather think of an underground cavern filled with it.  In the middle of the night when the price of electricity is low, utilities can run compressors and pump air into a cavern at around 750 psi.  When the price of electricity goes up, the compressed air is then used to power a turbine generator. Often this portion of the design is supplemented by the use of natural gas either to heat the air or to mix with the pressurized air to burn.  Investigations by EPRI indicate that up to 80 percent of the U.S. has geology suitable for CAES. A single 300 megawatt CAES plant would require 22 million cubic feet of storage space — enough to store eight hours’ worth of electricity. Ridge Energy has a nice diagram here. Whether CAES systems can be commercially viable at a utility level remains to be seen.  One negative argument is here.
  • Store it as kinetic energy, e.g., in a flywheel.  Well, there are horror stories of large flywheels disintegrating with hunks flying around causing much damage.  Not a pretty picture.  Flywheels in general lose a lot of energy to friction and really are only good for transitioning use from one source of energy to another.  Lots of small flywheels are too expensive to maintain. The company Velkess (founded 2007) is developing new flywheel technology.  Keep an eye on them.
  • Pump water up hill.  This isn’t as silly as it sounds, especially if what is needed is water pressure.  Think of those old fashioned city water tanks, which are decidedly useful. On a large scale, filling a reservoir that sits above a hydro-generation facility would be a good idea; the TVA’s Raccoon Mountain pumped-storage facility, for example, has been operating for a number of years.  Some estimate that about 10% of hydro-electric dams are suitable for this.  (A rough sizing sketch appears after this list.)
  • Freeze water.  If you don’t have enough use for ice, e.g., for cooling, then run the freezer during the day and let it “coast” at night.  Some commercial refrigerators have logic for this, expecting both power and usage to be nil at night and to start up in the morning.  Even when commercial electricity costs less at night, this is a win for solar.  For wind, making ice at night and using it to cool a building during the day is done by CALMAC.
  • Use electrolysis to make hydrogen, then burn the hydrogen later to make electricity.  This approach is costly, and it takes considerable energy to compress the hydrogen into a liquid form so that it can be shipped or stored easily.
  • A more sophisticated version of the preceding is to use the hydrogen from electrolysis to make MgH2 – magnesium hydride.  Mixed with mineral oil into a slurry (as described in an earlier post), this is a relatively stable liquid, and the hydrogen can be released either by mixing the MgH2 with water or by heating it.  The released hydrogen can then be burned to make electricity.  The byproduct after adding water, Mg(OH)2, is essentially Milk of Magnesia, which can be recycled to recover the magnesium and start the process all over again. In addition to storing energy in the form of hydrogen, this “liquid energy” and its byproduct can be transported easily in trucks, boats, etc.
  • Of course, batteries are the standard answer, and battery technology is getting better every year.  Here, you get a little extra mileage by bypassing the inverter(s) and using DC appliances, but I’m not sure what the trade-offs are.  For example, using LEDs for DC lighting would be a good idea.  It is pretty easy to bypass the power supplies in most computer equipment.  DC fans would work.  But the cost of DC ovens, air conditioners, etc. is probably prohibitive.  At the scale of utilities, EOS Aurora and ReVolt Technology both claim to have rechargeable zinc-air batteries that will scale to hold 6 MWh each.  Zinc-air batteries aren’t as good as lithium-air, but they are vastly cheaper.  Battery researchers are lately hot on vanadium redox batteries, which you can read about here.
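
Here is the promised sizing sketch for the pump-water-uphill option.  Stored energy is roughly ρ·V·g·h times a round-trip efficiency; all the numbers below are illustrative assumptions, not figures for any real facility.

    RHO_WATER, G = 1000.0, 9.81    # kg/m^3 and m/s^2

    def reservoir_volume_m3(energy_mwh, head_m, round_trip_eff=0.8):
        """Water volume needed to recover energy_mwh from a reservoir head_m above the turbines."""
        energy_joules = energy_mwh * 3.6e9
        return energy_joules / (RHO_WATER * G * head_m * round_trip_eff)

    # e.g. recovering 10 MWh from a reservoir 100 m above the generators:
    print(round(reservoir_volume_m3(10, 100)), "cubic meters of water")   # ~46,000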

As I’ve thought about putting solar arrays on my house’s rooftop, some of the above ideas may work for me, e.g. heating water, and getting the freezer and refrigerator extra cold during the day.  On the other hand, I’ll also connect to the grid and take the pittance that the electric company gives me for my “donations.”

Finally, for you wind farm fans, most of these ideas apply – with the exception of storing liquid salt.  Not that a wind mill can’t make liquid salt, but solar arrays essentially start there.

-gayn

Amazon and Microsoft Cloud Outages in Europe

2011/08/08

Amazon and Microsoft Cloud Outages in Europe

Apparently both Amazon’s Elastic Compute Cloud (EC2) and Microsoft’s Business Productivity Online Suite (BPOS) house their cloud services near Dublin, Ireland.  I can personally attest that Dublin is a great place to be, especially if you don’t mind a lot of drizzle and rain.  If you are a business, you like Ireland because of the tax breaks, educated workforce, a cool climate to avoid excessive air conditioning of data centers, and good Internet connections to the mainland.

With the rain, at times, comes lightning, and apparently lightning knocked out both Amazon’s and Microsoft’s cloud facilities by knocking out a nearby power substation.  It appears that neither Amazon nor Microsoft had backup cloud facilities in Europe.  Thus their cloud customers, whose data must remain physically in Europe by law, were knocked out.

Amazon reported: “We understand at this point that a lightning strike hit a transformer from a utility provider to one of our Availability Zones in Dublin, sparking an explosion and fire. Normally, upon dropping the utility power provided by the transformer, electrical load would be seamlessly picked up by backup generators. The transient electric deviation caused by the explosion was large enough that it propagated to a portion of the phase control system that synchronizes the backup generator plant, disabling some of them. Power sources must be phase-synchronized before they can be brought online to load. Bringing these generators online required manual synchronization.”  According to Amazon, EC2 instances were being brought back within three hours of the lightning strike, and within about 12 hours 60 percent of instances had been restored.  However, “Due to the scale of the power disruption, a large number of EBS servers lost power and require manual operations before volumes can be restored,” said Amazon. “Restoring these volumes requires that we make an extra copy of all data, which has consumed most spare capacity and slowed our recovery process.”  This took several days to complete.

Microsoft reported that BPOS was also knocked offline for several hours by the lightning strike. According to Microsoft, European BPOS services were restored within four hours. In a statement, Microsoft said “a widespread power outage in Dublin caused connectivity issues.”

Well, in the case of Amazon, some EC2 customers weren’t knocked out at all.  Amazon had three “Availability Zones” (AZs) within the same facility.  An AZ is just another cloud, and a customer can design their cloud application to fail over from one AZ to another.  Amazon must have designed their AZs in Dublin reasonably well: Netflix reported that it was able to execute a failover, because only one of the Dublin AZs was knocked out by the storm.
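
Conceptually, the failover pattern is simple: run a standby copy of the application in a second AZ and switch to it when health checks on the primary fail.  Here is a minimal, AWS-agnostic sketch of that logic in Python; the endpoint names and health-check path are hypothetical, and real deployments would usually do this with load balancers or DNS rather than application code.

    import urllib.request

    ENDPOINTS = [
        "https://app.zone-a.example.com",   # primary AZ (hypothetical)
        "https://app.zone-b.example.com",   # standby AZ (hypothetical)
    ]

    def healthy(base_url, timeout=2):
        """Return True if the endpoint answers its health check."""
        try:
            return urllib.request.urlopen(base_url + "/health", timeout=timeout).getcode() == 200
        except Exception:
            return False

    def pick_active_endpoint():
        """Return the first healthy endpoint, preferring the primary AZ."""
        for url in ENDPOINTS:
            if healthy(url):
                return url
        raise RuntimeError("no healthy Availability Zone reachable")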

OK, so kudos to Amazon for having backup AZs there in Dublin that were resilient to minor storm damage.  (Amazon probably should have had better line conditioning so that the surge didn’t propagate to the other AZs.)  Small raspberries to Amazon for not explaining to their customers exactly how AZs worked in Dublin so that more customers could have taken advantage of them for “storm protection.”  On the other hand, big raspberries to Amazon for not having a geographically separate site, preferably far from Dublin.  Only with a second location would Amazon’s cloud (EC2) be protected from a devastating lightning strike, a major earthquake, or another disaster that would knock out the entire location.  I’ll note that a customer that takes advantage of, say, two AZs in Dublin could easily move their backup AZ to another location whenever Amazon gets its head out of a dark place.  (To be fair, Amazon probably needs more European customers to make a second Ireland site cost effective.)

These outages weren’t the first for either Amazon or Microsoft.  It is clear that Cloud Services in general are still maturing.

-gayn

Killing birds

2011/08/03

Today’s (8/3/11) LA Times (page AA1) reports that federal authorities are investigating the deaths of golden eagles at the LA Department of Water and Power’s Pine Tree Wind Project in the Tehachapi Mountains.  Environmentalists have long complained about the damage these huge wind generators cause to wildlife – from birds to burrowing animals.  Things do get dicey, however, when protected species such as golden eagles and bald eagles are getting (literally and figuratively) “whacked.”  These recent deaths are not the first, and apparently not much noise has been made about the 1,500+ birds killed each year at the Pine Tree facility.  These deaths are mostly “ordinary” migratory song birds, quail, and meadowlarks.

Now I’m not being an apologist, but it is important to understand that over 400,000 birds are killed each year nationwide by encounters with buildings, utility towers, cars, etc.  My own house kills a couple of small birds each year as they fly into our picture windows.

Two questions arise that call for answers: First, are the bird deaths due to wind generators “acceptable” as compared to the already acceptable deaths caused by other structures?  And second, if not, especially if not for endangered species, then what can we do technologically to mitigate these deaths?

It’s probably impractical to enclose a wind farm (and especially a single generator) in a huge chicken-wire fence, but ideas that come to my mind are whistles, sirens, recorded bird screeches, and strobe lights.   Dang if I know, but let’s do something!

-gayn