Psychology Researchers Develop App to Cut Electricity Consumption in California https://www.power-eng.com/news/psychology-researchers-develop-app-to-cut-electricity-consumption-in-california/ Wed, 12 Jul 2017 20:59:00 +0000 Greenely and Stanford University are launching the Greenely Go mobile application in California after a year of development in a unique collaboration aiming to demonstrate how effectively a newly developed behavioral algorithm can help decrease electricity usage. The app utilizes energy consumption data from PG&E and is now available to download for all PG&E customers free of charge.

The Greenely Go mobile app, which uses data from customers of Pacific Gas & Electric, was developed over a year. Its creation followed two years of research and development of behavioral technologies at Stanford University.

Researchers from the Department of Psychology at Stanford University have collaborated with Greenely for almost two years to study and develop new behavioral technologies for reducing households’ energy consumption to a greater extent than previously achieved. Stanford University has also verified similar technologies in other areas and achieved good results in terms of encouraging people to embrace sustainable behaviors by reducing meat and water consumption. The core of the new technology is a dynamic comparison with equivalent households.

“Many homes already receive comparisons of their energy usage to their neighbors to help encourage energy savings, what we refer to as ‘static norm feedback’. We believe it will be more influential to show people ‘dynamic norm feedback’ by highlighting that others are reducing their home energy use, which signals that energy conservation is important and that change is possible. In other domains, like water conservation, we’ve seen that learning that others are changing can be three times as effective as just comparing one’s own use to others,” says Gregg Sparkman, PhD, researcher at the Dept. of Psychology, Stanford University.

Power utilities in the US have come a long way in residential energy efficiency, thanks to several financial instruments that encourage it, especially in California. They have established portfolios of different energy efficiency solutions and have therefore shown great interest in Greenely’s service and the results that can be achieved by its algorithms.

“We hope to demonstrate a higher energy efficiency rate than previous behavioral techniques. Following the project, the results will be presented to the major energy utilities in the United States to commercialize the service and build a business here in the United States,” says Tanmoy Bari, CEO of Greenely.

Researchers from the Dept. of Psychology at Stanford University, Greenely’s founders and a Swedish delegation consisting of members from the Ministry of Environment and Energy, the Swedish Energy Agency and Sweden’s Embassy, met at Jordan Hall at Stanford University last June to discuss and demonstrate Greenely’s cooperation with Stanford.

The project is funded by the Swedish Energy Agency, KIC InnoEnergy, Stanford University and Greenely, and is expected to run over 12 months.

Ammonia-Based Flue Gas Desulfurization https://www.power-eng.com/emissions/ammonia-based-flue-gas-desulfurization/ Wed, 12 Jul 2017 20:51:00 +0000 FGD using ammonia instead of lime or limestone can achieve higher levels of SO2 removal while eliminating liquid and solid wastes.

By Dr. Peter Lu and Dennis McLinden, Jiangnan Environmental Technology Inc.

Flue gas desulfurization (FGD) systems using lime or limestone as the chemical reagent are widely used throughout the world for SO2 emissions control at coal-fired power plants. Ammonia-based systems, however, are emerging as a viable alternative that addresses limitations with respect to liquid and solid waste generation and handling. Efficient Ammonia-Based Desulfurization Technology (EADS) does not generate any liquid waste streams or undesirable solid byproducts that require disposal; rather, the closed-loop process produces a salable ammonium sulfate fertilizer byproduct that can offset more than 50 percent of the operating cost.

Shenhua Ningxia Coal to Liquids Plant. The plant began commercial production in December 2016.

Drawbacks with Lime/Limestone

As shown in Figure 1, FGD systems employing lime/limestone forced oxidation (LSFO) include three major sub-systems:

  • Reagent preparation, handling and storage
  • Absorber vessel
  • Waste and byproduct handling

Reagent preparation consists of conveying crushed limestone (CaCO3) from a storage silo to an agitated feed tank. The resulting limestone slurry is then pumped to the absorber vessel along with the boiler flue gas and oxidizing air. Spray nozzles deliver fine droplets of reagent that then flow countercurrent to the incoming flue gas. The SO2 in the flue gas reacts with the calcium-rich reagent to form calcium sulfite (CaSO3) and CO2. The air introduced into the absorber promotes oxidation of CaSO3 to CaSO4 (dihydrate form).

The basic LSFO reactions are:

CaCO3 + SO2 → CaSO3 + CO2

CaSO3 + ½O2 + 2H2O → CaSO4 · 2H2O

The oxidized slurry collects in the bottom of the absorber and is subsequently recycled along with fresh reagent back to the spray nozzle headers. A portion of the recycle stream is withdrawn to the waste/byproduct handling system, which typically consists of hydrocyclones, drum or belt filters, and an agitated wastewater/liquor holding tank. Wastewater from the holding tank is recycled back to the limestone reagent feed tank or to a hydrocyclone where the overflow is removed as effluent.

Typical Lime/Limestone Forced Oxidation Wet Scrubbing Process Schematic

Wet LSFO systems typically can achieve SO2 removal efficiencies of 95-97 percent. Reaching levels above 97.5 percent to meet emissions control requirements, however, is difficult, especially for plants using high-sulfur coals. Magnesium catalysts can be added or the limestone can be calcined to higher reactivity lime (CaO), but such modifications involve additional plant equipment and the associated labor and power costs. For example, calcining to lime requires the installation of a separate lime kiln. Also, lime is readily precipitated and this increases the potential for scale deposit formation in the scrubber.

The cost of calcination with a lime kiln can be reduced by directly injecting limestone into the boiler furnace. In this approach, lime generated in the boiler is carried with the flue gas into the scrubber. Possible problems include boiler fouling, interference with heat transfer, and lime inactivation due to overburning in the boiler. Moreover, the lime reduces the flow temperature of molten ash in coal-fired boilers, resulting in solid deposits that would otherwise not occur.

Liquid waste from the LSFO process is typically directed to stabilization ponds along with liquid waste from elsewhere in the power plant. The wet FGD liquid effluent can be saturated with sulfite and sulfate compounds and environmental considerations typically limit its release to rivers, streams or other watercourses. Also, recycling wastewater/liquor back to the scrubber can lead to the buildup of dissolved sodium, potassium, calcium, magnesium or chloride salts. These species can eventually crystallize unless sufficient bleed is provided to keep the dissolved salt concentrations below saturation. An additional problem is the slow settling rate of waste solids, which results in the need for large, high-volume stabilization ponds. In typical conditions, the settled layer in a stabilization pond can contain 50 percent or more liquid phase even after several months of storage.

The calcium sulfate recovered from the absorber recycle slurry can be high in unreacted limestone and calcium sulfite ash. These contaminants can prevent the calcium sulfate from being sold as synthetic gypsum for use in wallboard, plaster, and cement production. Unreacted limestone is the predominant impurity found in synthetic gypsum and it is also a common impurity in natural (mined) gypsum. While limestone itself does not interfere with the properties of wallboard end products, its abrasive properties present wear issues for processing equipment. Calcium sulfite is an unwanted impurity in any gypsum as its fine particle size poses scaling problems and other processing problems such as cake washing and dewatering.

If the solids generated in the LSFO process are not commercially marketable as synthetic gypsum, this poses a sizeable waste disposal problem. For a 1000 MW boiler firing 1 percent sulfur coal, the amount of gypsum is approximately 550 tons (short)/day. For the same plant firing 2 percent sulfur coal, the gypsum production increases to approximately 1100 tons/day. Adding some 1000 tons/day for fly ash production, this brings the total solid waste tonnage to about 1550 tons/day for the 1 percent sulfur coal case and 2100 tons/day for the 2 percent sulfur case.
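As a rough illustration of where these tonnages come from, the sketch below back-calculates gypsum and total solid waste production from sulfur stoichiometry. The coal feed rate and the assumption of essentially complete sulfur capture are illustrative placeholders, not figures from the article.

```python
# Back-of-the-envelope check of the solid waste figures quoted above.
# Assumptions (illustrative, not from the article): ~10,000 short tons/day of
# coal for a 1000 MW unit, essentially complete sulfur capture, and gypsum
# recovered as CaSO4 · 2H2O.

MW_S = 32.06          # molar mass of sulfur, g/mol
MW_GYPSUM = 172.17    # molar mass of CaSO4 · 2H2O, g/mol

def gypsum_tons_per_day(coal_tpd, sulfur_fraction, capture=1.0):
    """Gypsum production (short tons/day) from captured sulfur."""
    sulfur_tpd = coal_tpd * sulfur_fraction * capture
    return sulfur_tpd * MW_GYPSUM / MW_S

COAL_TPD = 10_000     # assumed coal feed, short tons/day
FLY_ASH_TPD = 1_000   # fly ash figure quoted in the article

for s in (0.01, 0.02):
    gyp = gypsum_tons_per_day(COAL_TPD, s)
    print(f"{s:.0%} sulfur coal: ~{gyp:,.0f} t/d gypsum, "
          f"~{gyp + FLY_ASH_TPD:,.0f} t/d total solids")
# Prints roughly 540/1,540 and 1,070/2,070 t/d, in line with the ~550/1,550
# and ~1,100/2,100 t/d cited above.
```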

EADS Advantages

A proven technology alternative to LSFO scrubbing replaces limestone with ammonia as the reagent for SO2 removal. The solid reagent milling, storage, handling and transport components in an LSFO system are replaced by simple storage tanks for aqueous or anhydrous ammonia. Figure 2 shows a flow schematic for the EADS system provided by Jiangnan Environmental Technology Inc. (JET Inc.).

Ammonia, flue gas, oxidizing air and process water enter an absorber containing multiple levels of spray nozzles. The nozzles generate fine droplets of ammonia-containing reagent to ensure intimate contact of reagent with incoming flue gas according to the following reactions:

(1) SO2 + 2NH3 + H2O → (NH4)2SO3

(2) (NH4)2SO3 + ½O2 → (NH4)2SO4

The SO2 in the flue gas stream reacts with ammonia in the upper half of the vessel to produce ammonium sulfite. The bottom of the absorber vessel serves as an oxidation tank where air oxidizes the ammonium sulfite to ammonium sulfate. The resulting ammonium sulfate solution is pumped back to the spray nozzle headers at multiple levels in the absorber. Prior to the scrubbed flue gas exiting the top of the absorber, it passes through a demister that coalesces any entrained liquid droplets and captures fine particulates.

The ammonia reaction with SO2 and the sulfite oxidation to sulfate achieve a high reagent utilization rate. Approximately four pounds of ammonium sulfate are produced for every pound of ammonia consumed.
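A quick stoichiometric check of that ratio (a minimal sketch; the molar masses, not the article, are the basis here):

```python
# Overall EADS stoichiometry: 2 NH3 + SO2 + H2O + 1/2 O2 -> (NH4)2SO4
MW_NH3 = 17.03                # g/mol
MW_AMMONIUM_SULFATE = 132.14  # g/mol

lb_product_per_lb_nh3 = MW_AMMONIUM_SULFATE / (2 * MW_NH3)
print(f"{lb_product_per_lb_nh3:.2f} lb (NH4)2SO4 per lb NH3")  # ~3.88, i.e. roughly 4
```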

As with the LSFO process, a portion of the reagent/product recycle stream can be withdrawn to produce a commercial byproduct. In the EADS system, the takeoff product solution is pumped to a solids recovery system consisting of a hydrocyclone and centrifuge to concentrate the ammonium sulfate product prior to drying and packaging. All liquids (hydrocyclone overflow and centrifuge centrate) are directed back to a slurry tank and then re-introduced into the absorber ammonium sulfate recycle stream.

The EADS technology provides numerous technical and economic advantages, as shown in Table 1.

  • EADS systems provide higher SO2 removal efficiencies (>99%), which gives coal-fired power plants more flexibility to blend cheaper, higher sulfur coals.
  • Whereas LSFO systems create 0.7 tons of CO2 for every ton of SO2 removed, the EADS process produces no CO2.
  • Because lime and limestone are less reactive than ammonia for SO2 removal, higher process water consumption and pumping energy are required to achieve high circulation rates. This results in higher operating costs for LSFO systems.
  • Capital costs for EADS systems are similar to those for constructing an LSFO system. As noted above, while the EADS system requires ammonium sulfate byproduct processing and packaging equipment, the reagent preparation facilities associated with LSFO are not required for milling, handling and transport.

The most distinctive advantage of EADS is the elimination of both liquid and solid wastes. The EADS technology is a zero-liquid-discharge process, which means no wastewater treatment is required. The solid ammonium sulfate byproduct is readily marketable; ammonium sulfate is one of the most widely used fertilizers and fertilizer components in the world, with worldwide market growth expected through 2030. In addition, while the manufacturing of ammonium sulfate requires a centrifuge, dryer, conveyor and packaging equipment, these items are non-proprietary and commercially available. Depending on economic and market conditions, the ammonium sulfate fertilizer can offset the costs of ammonia-based flue gas desulfurization and potentially provide a substantial profit.

Efficient Ammonia Desulfurization Process Schematic

Enhanced EADS in China

In 2016, China became one of 194 signatories to the Paris Agreement, adopted under the United Nations Framework Convention on Climate Change. In connection with the agreement, the Chinese government announced it would cut pollution from coal-fired plants, including emissions of dust, NOx and SO2, by 60 percent, and carbon emissions by 180 million metric tons, over the following five years. This is to be accomplished by upgrading power stations with clean coal technologies such as flue gas desulfurization and selective catalytic reduction.

In concert, China is implementing an action plan for energy saving, emission reduction, and the upgrading and retrofitting of coal-fired power plants for the period 2014-2020. The plan requires that air pollutant emission concentrations for new coal-fired power generating units generally meet the emission standards for gas-fired boilers/power generators. Emissions of particulate matter and SO2 in the discharged flue gas will need to be lower than 1.18 lb/MMSCF and 12 ppm, respectively.

The basic EADS process described above has been installed in more than 150 power generation, chemical, sulfur recovery, and steel plants in China, demonstrating SO2 removal efficiencies greater than 99% and SO2 concentrations in the treated flue gas down to 17 ppm. In addition, the EADS absorption process in combination with a patented demister at the top of the absorber vessel removes fine particulates (1-20 μm) to levels below 4.72 lb/MMSCF. However, because these emission levels do not meet China’s Ultra-low Emissions Standards, JET Inc. developed an enhanced version of its EADS technology, which has been installed on 40 projects representing over 100 absorbers.

The technology enhancements improve the performance of the original EADS system through three mechanisms, as shown in Figure 3. Collectively, absorption efficiency enhancement, acoustic agglomeration of fine particulate, and efficient demisting comprise the Ultrasound-Enhanced SO2 and Particulate Control (USPAC) technology.

Mechanisms Comprising Ultrasound-enhanced SO2 and Particulate Control technology.

Enhanced SO2 absorption is achieved by optimizing spray density, liquid-gas distribution, and the oxidation process. Fine particulates in the flue gas are agglomerated with scrubbing and ultrasound mechanisms, and are removed using a high-efficiency, patented demister. Using the USPAC enhancement to the basic EADS process, SO2 and particulate matter emissions meet or exceed the Chinese Ultra-low Emissions Regulations, achieving <35 mg/Nm3 for SO2 and <5 mg/Nm3 for total particulate matter.
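Because this section mixes mg/Nm3, ppm, and lb/MMSCF, a rough conversion helps relate the limits. The sketch below assumes ideal-gas behavior with Nm3 referenced to 0°C and 1 atm, treats a standard cubic foot as approximately the same reference volume, and ignores oxygen-correction details; it is an approximation, not the regulatory conversion.

```python
# Approximate conversions between the emission units used in this section.
MOLAR_VOLUME = 22.414           # Nm3 per kmol of ideal gas at 0 deg C, 1 atm
MG_PER_LB = 453_592.0           # milligrams per pound
M3_PER_MMSCF = 1e6 * 0.0283168  # ~28,317 m3 per million standard cubic feet

def mg_per_nm3_to_ppmv(mg_nm3, molar_mass_g_mol):
    """Convert a gas concentration from mg/Nm3 to parts per million by volume."""
    return mg_nm3 * MOLAR_VOLUME / molar_mass_g_mol

def lb_per_mmscf_to_mg_per_m3(lb_mmscf):
    """Convert a particulate loading from lb/MMSCF to mg/m3 (approximate)."""
    return lb_mmscf * MG_PER_LB / M3_PER_MMSCF

print(f"35 mg/Nm3 SO2 ~ {mg_per_nm3_to_ppmv(35, 64.06):.0f} ppmv")     # ~12 ppm
print(f"0.29 lb/MMSCF ~ {lb_per_mmscf_to_mg_per_m3(0.29):.1f} mg/m3")  # ~4.6 mg/m3
```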

In September 2013, construction started on the world’s largest coal-to-liquids (CTL) plant in the Ningxia Hui autonomous region of China. Touted as the world’s largest chemical project in both the petrochemical and coal chemical industries (total capital cost of US$8 billion), the plant converts 22.55 million ST/year of coal into 4.46 million ST/year of oil and 174,400 SCFM of olefin synthesis gas. Shenhua Ningxia Coal Industry Group put this plant into commercial operation in December 2016.

Central to the CTL plant is the thermal power station, which consists of 10 x 200-MW ultra-high pressure coal-fired boilers. Each boiler is paired with an air quality control system consisting of:

  • Selective catalytic reduction (SCR) reactors for NOx control
  • Electrostatic precipitators (ESPs) for particulate matter collection (each with two chambers and six electric fields)
  • Ammonia-based flue gas desulfurization systems for SO2 removal and additional fine particulate matter removal

In 2014, the plant selected JET Inc. to supply the FGD systems. Because emissions had to conform to China’s Ultra-low Emissions Regulations, JET chose the USPAC technology. Each USPAC system is designed to treat 475,500 SCFM of flue gas with an SO2 concentration of 980 ppm. Initial performance of the USPAC systems since the plant entered operation in late 2016 has met or exceeded the Ultra-low Emissions Regulations, achieving outlet SO2 concentrations in the clean flue gas of less than 12 ppm (dry basis, standard conditions, 6% oxygen) and particulate matter concentrations of less than 0.29 lb/MMSCF. In addition, the USPAC system has demonstrated an availability of greater than 98% with greater than 99% ammonia recovery.

To ensure smooth implementation of this project and a reliable supply of desulfurization absorbent, Shenhua Ningxia Coal Industry Group also invested in a synthetic ammonia plant with a capacity of 165,300 short tons/year to serve this project and other FGD projects in the group. This reduced the cost of anhydrous ammonia from US$318/short ton (purchased externally) to US$227/short ton, further reducing the OPEX of the FGD units.

Comparative Economics Using EADS

Table 2 compares the operating costs at the Shenhua Ningxia CTL plant for EADS versus LSFO. If an LSFO process had been applied to this project along with commercial sales of the byproduct gypsum, the annual operating costs would be $14,642,000. In comparison, the EADS process can essentially eliminate these costs while generating a profit of over US$500,000 from the sale of ammonium sulfate (at US$90/ST), netting total annual savings of approximately $15,000,000.

Conclusion

The EADS technology enables power plant and industrial boiler operators to meet strict environmental regulations while providing economic benefits. EADS is available under several business models, including engineering packages with supply of key equipment and parts, project engineering, procurement and construction, Build-Operate-Transfer and Build-Operate-Own.

Leveraging the Science of Measurement to Mitigate Risk for Nuclear Plants https://www.power-eng.com/nuclear/leveraging-the-science-of-measurement-to-mitigate-risk-for-nuclear-plants/ Wed, 12 Jul 2017 20:44:00 +0000 By Bob Timberlake

Risk at a nuclear power plant can take many forms. Operators are concerned about factors that impact consistent, efficient energy production. Engineers are concerned about component reliability, compatibility and quality. All are focused on safety on the job. As a result, plant managers seek methods of mitigating risk. Any technique that is able to transform an “unknown” into a “known” factor is considered highly beneficial.

Metrology, the science of measurement, offers an easy method of mitigating risk. In fact, the smart application of modernized metrology techniques can have substantial benefits for plant managers.

Now, the word “metrology” probably makes most plant managers think about as-built plans. Indeed, metrology techniques such as laser scanning are most commonly used to complete as-builts after a construction project. However, metrology goes far beyond this simple use.

There are many “tools in the toolbox” when it comes to metrology. Measurement technologies are ever evolving, getting more precise and accurate. Modern metrology equipment can make measurements so precise that they are accurate down to the atomic level. By recognizing all of the unique tools available and taking a holistic look at each project, it’s easier to match the right tool to the project and ensure the most effective use of metrology each time.

Photogrammetry uses high-resolution photography to measure discrete features using adhesive targets strategically placed on points of interest to capture as-built dimensions. Photo courtesy: AREVA NP

Turning “I Think” Into “I Know”

Metrology’s greatest benefit for nuclear plants is the mitigation of risk. Metrology can be used on a wide range of projects to help reduce risk and radiation dose, and to maintain or improve schedule, safety, financial performance and project predictability. When applied early, as projects are set up, these techniques can turn a set of unknowns into a set of knowns. This prevents stopping and starting due to mid-project delays. Ultimately, removing the unknowns helps turn “I think I can” into “I know I can” and mitigates issues up front.

How? One example is where metrology provides the necessary information to allow a virtual reality simulation of components. The simulation tells you if you can remove and re-install components based on supplied plans versus real-world conditions. It also captures accurate as-built dimensions for the entire plant, reducing uncertainty and inaccuracy of plant components, locations and dimensions. This can help with retrofitting and reverse engineering of components in plants because it increases the ability to put the “knowns” down on paper, rather than taking the paper and trying to build it to fit the unknowns.

Ideally, metrology should be introduced in a project during the initial project planning phase in order to optimize and take full advantage of the benefits for the project. Metrology can also be applied at the design, fabrication and implementation stages of a project timeline. Depending on the complexity and needs of the project, application times can be measured from minutes to weeks as a project progresses. However, even if the project is measured in weeks, the typical industrial application survey duration tends to average a few hours.

Some project examples include:

  • Component replacements – Component replacement projects come with many challenges such as load path interference identification, rigging and new versus old component dimensions, as well as installation challenges. Using metrology during all stages of a project, from planning through installation, takes the guesswork out of project decisions.
  • Plant modifications – Using metrology techniques during the design phase of a project to capture the as-built configuration of the project area versus relying solely on original design drawings is a means to remove project risk while increasing confidence and predictability. The project team utilizes the plant’s as-built configurations for design purposes, reducing rework caused by original design versus as-built differences. Follow-on work includes the pre-fabrication of piping, hangers, etc., as well as layout for pumps, foundations and more.
  • Flow-accelerated corrosion (FAC) piping – As plants age, FAC continues to be an issue that all plants must monitor. Using metrology allows project teams to better prepare and install more retrofitted piping in a shorter time with first time fit-up quality.
  • 3-D modeling and animations – Laser scan data has many uses to enhance a project’s predictability. Generating 3-D models of the plant’s as-built configurations using laser scan data gathered through metrology enables engineers to design and plan in the real-world environment. Additionally, rigging and component moves are created in the virtual database, enabling the team to prove out rigging scenarios and identify interferences along the prescribed load path.
This technology takes photogrammetry underwater and without the need for adhesive targeting. Photo courtesy: AREVA NP

The “Tools in the Toolbox”

Despite the advances in metrology tools and the myriad uses they can have, adoption of metrology techniques in the nuclear industry has been slow. Meanwhile, other industries such as civil engineering and heavy construction have eagerly adopted these technologies. Certain metrology techniques have even been used for accident reconstruction and crime scene investigations, proving the portability and versatility of these technologies.

There are a wide range of modern metrology tools to support projects within the nuclear industry. These include:

  • Laser scanning – This technology is used to capture 3-D coordinate values for everything in sight between 18 inches and 500 feet. This is a commonly used metrology technique, as it collects large amounts of incredibly detailed data. Different levels of scanners can be used to ensure the best data collection for each project. For example, AREVA NP maintains three levels of scanners – a large volume scanner (± 0.125″ accuracy, used to measure a whole building), a medium volume scanner (± 0.015″ accuracy at 20 to 30 feet from an object) and small volume scanners (± 0.001″ accuracy at 1 to 2 feet from an object).
  • 3-D CADD modeling – Data collected through laser scanning creates 3-D models, animation, load-path type interferences and plans, with ± 0.125″ accuracy.
  • Portable coordinate measuring machine arm – This single-point portable measurement device can measure applications on its own with ± 0.0015″ accuracy.
  • Photogrammetry – Photogrammetry uses high-resolution photography to measure discrete features using adhesive targets strategically placed on points of interest to capture as-built dimensions at ± 0.005″ accuracy. For context, most of today’s maps are made using this type of technology. In fact, industrial photogrammetry was developed from the aerial photogrammetry technique. This development drove the accuracy possibilities down to a few thousandths of an inch, making the application a versatile, easy to use measurement tool.
  • Underwater photogrammetry – This technology, uniquely offered through a partnership between AREVA NP and the DimEye Corp., takes photogrammetry underwater and without the need for adhesive targeting. For use in areas previously thought to be inaccessible such as spent fuel pools, jet pumps, core spray, etc., the housing and cabling for the camera has been designed specifically for these environments. This technique is accurate to ± 0.015″.
  • Laser tracking – Laser tracking uses servo motors and encoders to accurately “track” a mirrored prism. The system has the ability to collect measurement data on the “tracked” prism thousands of times a second, rendering the statistical data to be accurate to ± 0.001″. It can have many applications, including placing a part or component in its final location or supporting machining operations.
  • Total station – This device captures measurements of anything within its line of sight, similar to a land surveying instrument. This single tool actually incorporates all those used for land surveying, including an electronic distance meter (EDM) that aims at a point and shows distance from scope center, and computes slope and angle to provide 3-D coordinates at that point. It is typically accurate down to ± 0.024″.

For nuclear plants, where plant managers most frequently need to ensure accurate measurements on as-built plans prior to planning plant upgrades or replacements, laser- and photography-based tools are frequently the most effective. But, the exact technology used often depends on what one is trying to measure. For example, when measuring large areas for projects that require load path interference analysis or to capture plant as-builts in the case of planning and designing a plant modification, laser scanning technology is typically applied. This technique produces accuracies in the 0.065″ to 0.125″ range. However, when installing a new component or retrofitting piping, a higher degree of accuracy requires technologies such as photogrammetry and laser tracking.
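As a simple illustration of that selection logic, the accuracies quoted in the list above can be used as a first screen; the code below is a hypothetical sketch of that screen, not an AREVA NP tool.

```python
# First-pass screen for metrology tool selection, using the accuracy figures
# quoted in this article (inches). The selection logic itself is illustrative.
TOOL_ACCURACY_IN = {
    "large-volume laser scanner": 0.125,
    "total station": 0.024,
    "medium-volume laser scanner": 0.015,
    "underwater photogrammetry": 0.015,
    "photogrammetry": 0.005,
    "portable CMM arm": 0.0015,
    "small-volume laser scanner": 0.001,
    "laser tracker": 0.001,
}

def candidate_tools(required_accuracy_in):
    """Return the tools whose quoted accuracy meets or beats the requirement."""
    return sorted(t for t, acc in TOOL_ACCURACY_IN.items()
                  if acc <= required_accuracy_in)

print(candidate_tools(0.125))  # whole-building as-builts: any of the tools
print(candidate_tools(0.005))  # component fit-up: photogrammetry, CMM arm, tracker, small scanner
```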

Why Use Metrology

Metrology offers a practical, easy way for plant managers, engineers and project managers to make more informed decisions. Smart application of a growing number of tools can increase detailed control of all project elements. In turn, this can help prevent mistakes, saving time and budget.

Author

Bob Timberlake is the product line manager for AREVA NP’s Metrology Services. He has more than 30 years of applied practice in the metrology field.

Cooling Tower Heat Transfer Fundamentals https://www.power-eng.com/om/cooling-tower-heat-transfer-fundamentals/ Wed, 12 Jul 2017 20:15:00 +0000 By Brad Buecker

The continued planning, construction, and operation of combined cycle power plants (and other energy and industrial facilities) is introducing many new personnel to numerous water-related issues, including those related to cooling. A critical unit operation at many of these facilities is energy transfer in one or more cooling towers.

This article examines important cooling tower heat transfer fundamentals, and modern methods for maintaining proper chemistry control in cooling systems.

A critical element of operation at many combined cycle power plants is energy transfer in one or more cooling towers. There are important cooling tower heat transfer fundamentals and modern methods for maintaining proper chemistry control in cooling systems.

Cooling Tower Heat Transfer

The basic cooling tower process is outlined in Figure 1.

In the words of an excellent reference manual on cooling, “Evaporation is utilized to its fullest extent in cooling towers, which are designed to expose the maximum transient water surface to the maximum flow of air – for the longest period of time.” This statement highlights a fundamental aspect of cooling towers that those new to the industry may not fully recognize; the majority of heat transfer in a cooling tower (typically 65 to 85 percent depending upon atmospheric conditions) is due to evaporation of a small amount of the circulating water. This aspect will be outlined in a subsequent example.

Outline of Cooling Tower Process

A very important concept for understanding cooling tower heat transfer is that of “wet bulb” temperature. Consider a warm summer day with 90°F shade temperature at 40 percent relative humidity. A standard thermometer would naturally read 90°F, which is the “dry bulb” temperature. Now, attach another thermometer alongside the dry bulb thermometer but with a soaked piece of cloth around the bulb of the second thermometer, and put both on a swivel such that the thermometers can be swirled very rapidly through the air. This simple and common device is known as a sling psychrometer. After a while, the dry bulb thermometer will still read 90°F but the other thermometer will read 71.2°F. This latter reading is the wet bulb temperature, and is the lowest temperature that can be achieved by evaporative cooling.

No matter how efficient, a cooling tower can never chill the recirculating water to the wet bulb temperature, and at some point costs and space requirements limit cooling tower size. The separation in temperature between the chilled water and wet-bulb value is known as the approach. The data below show the relative size of a cooling tower for a range of approach temperatures.

The table indicates that a “standard” sized cooling tower should approach the wet bulb temperature within about 15°F. The curve becomes asymptotic as approach temperatures narrow. Thus, for any cooling tower application, at some point the law of diminishing returns takes over. This data is only for general consideration, as the approach temperature may be significantly influenced by several factors, including the type of cooling tower fill, which will be explored later in greater detail.

The data needed to calculate heat transfer by air cooling and evaporation has been compiled in a graph known as a psychrometric chart.

All versions of psychrometric charts are “very busy” and at times difficult to follow, but a psychrometric chart provides data for the following parameters.

  • Dew point temperature
  • Dry bulb temperature
  • Enthalpy (Btu/lbm)
  • Humidity ratio (absolute value of moisture in air on a lb/lb basis)
  • Relative humidity
  • Specific volume (ft3/lbm)
  • Wet bulb temperature

If any two properties of air are known, all of the other properties can be determined. Programs are available on-line that will calculate psychrometric parameters with a few simple user inputs.
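As a simple illustration, the sketch below estimates the humidity ratio and moist air enthalpy from just the dry bulb temperature and relative humidity, using a Magnus-type saturation pressure correlation and common psychrometric approximations at sea-level pressure. The correlation and constants are standard approximations rather than chart values, so the results land close to, but not exactly on, the numbers read from the chart in the example that follows.

```python
import math

P_ATM_PSIA = 14.696  # assumed sea-level barometric pressure

def saturation_pressure_psia(t_f):
    """Approximate water vapor saturation pressure (psia) at t_f (deg F)."""
    t_c = (t_f - 32.0) / 1.8
    p_kpa = 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))  # Magnus-type form
    return p_kpa * 0.145038

def humidity_ratio(t_f, rh):
    """Humidity ratio (lb moisture per lb dry air) from dry bulb and RH (0-1)."""
    p_v = rh * saturation_pressure_psia(t_f)
    return 0.622 * p_v / (P_ATM_PSIA - p_v)

def moist_air_enthalpy(t_f, rh):
    """Moist air enthalpy (Btu per lb dry air), referenced to 0 deg F dry air."""
    w = humidity_ratio(t_f, rh)
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

# Inlet air in the example below: 68 deg F dry bulb, 50 percent relative humidity
print(round(humidity_ratio(68, 0.50), 4))      # ~0.0072 lb/lb (chart: 0.0075)
print(round(moist_air_enthalpy(68, 0.50), 1))  # ~24.2 Btu/lb (chart: 24.6)
```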

At this point, we will populate Figure 1 with some real-world data and calculate the mass flow rate of air needed to cool 150,000 gpm of tower inlet water to the desired temperature, and also calculate the water lost by evaporation.

The first step is to determine the energy balance around the tower.

(ma1*ha1) + (mw3*hw3) = (ma2*ha2) + (mw4*hw4)  Eq. 1

where:

ma = mass flow rate of dry air
ha = enthalpy of dry air streams
hw = enthalpy of water streams

Utilizing algebra, the fact that ma1 = ma2, and a mass balance on the water flow, m4 = m3 – (w2 – w1)*ma (where w = humidity ratio), the energy balance equation can be rewritten in the following form:

ma = (m3*(h4 – h3))/((h1 – h2) + (w2 – w1)*h4)  Eq. 2

From a psychrometric chart and the steam tables, we find the following:

h1 = 24.6 Btu/lbm
h2 = 52.5 Btu/lbm
h3 = 72.0 Btu/lbm
h4 = 45.1 Btu/lbm

w1 = 0.0075 lbs moisture per lb of dry air

w2 = 0.0286 lbs moisture per lb of dry air

So, with an inlet cooling water flow rate of 150,000 gpm (1,251,000 lb/min), the calculated air flow is 1,248,000 lb/min, which by chance in this case is very close to the cooling water flow rate. (Obviously, the air flow requirement would change significantly depending upon air temperature, inlet water temperature and flow rate, and other factors, and that is why cooling towers typically have multiple cells, often including fans that have adjustable speed control.)

Cooling Tower Example Conditions

The volumetric air flow rate can be found using the psychrometric chart, where inlet air at 68°F and 50 percent RH has a tabulated specific volume of 13.46 ft3/lb. Plugging this value into the mass flow rate gives a volumetric flow rate of almost 17,000,000 ft3/min.

The amount of water lost to evaporation can be simply calculated by a mass balance of water only. We have already seen that,

m4 = m3 – (w2 – w1)*ma  Eq. 3

Utilizing the data above, m4 = 146,841 gpm. Thus, the water lost to evaporation is m3 – m4 = 3,159 gpm.

Note that only about 2 percent evaporation is sufficient to provide so much cooling.

This is due to the fact that the latent heat of evaporation at common atmospheric conditions is close to 1,000 Btu/lbm. Thus, as water evaporates it carries away a great deal of heat.
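For readers who want to reproduce the calculation, the short sketch below plugs the quoted enthalpies and humidity ratios into Equations 2 and 3. The only added assumption is water at roughly 8.34 lb/gal.

```python
LB_PER_GAL = 8.34          # assumed density of water, lb/gal

h1, h2 = 24.6, 52.5        # air enthalpies in/out, Btu/lb dry air
h3, h4 = 72.0, 45.1        # water enthalpies in/out, Btu/lb
w1, w2 = 0.0075, 0.0286    # humidity ratios in/out, lb moisture/lb dry air

m3 = 150_000 * LB_PER_GAL  # inlet cooling water flow, lb/min

# Eq. 2: dry air mass flow required
ma = m3 * (h4 - h3) / ((h1 - h2) + (w2 - w1) * h4)

# Eq. 3: water leaving as liquid, and the evaporation loss
m4 = m3 - (w2 - w1) * ma
evaporation_gpm = (m3 - m4) / LB_PER_GAL

print(f"Air flow:    {ma:,.0f} lb/min")            # ~1,249,000 lb/min (text: 1,248,000)
print(f"Evaporation: {evaporation_gpm:,.0f} gpm")  # ~3,159 gpm, about 2 percent
```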

Example of Cooling Tower Film Fill

A simpler method is available to more quickly calculate the typical evaporation from a cooling tower. The standard formula is,

E = (f * R * DT)/1000  Eq. 4

where:

E = Evaporation in gpm
R = Recirculation rate in gpm
DT = Temperature difference (range) between the warm and cooled circulating water (°F)
f = A correction factor that helps to account for sensible heat transfer; f (average) is often considered to be 0.65 to 0.85, but will rise in summer and decline in winter.

The factor of 1,000 is, of course, the approximate latent heat of vaporization (Btu/lb) of water. To check the general accuracy of this calculation, consider the previous problem we solved in detail. Evaporation was 3,159 gpm with a recirculation rate of 150,000 gpm and a range of 27°F. This gives a correction factor of 0.78, which is quite in line with where f should be for the conditions shown.
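A two-line check of Equation 4 against the detailed example (a sketch only; the 0.78 factor is simply back-calculated from the numbers above):

```python
# Back-calculate f from the detailed example, then apply Eq. 4 with a mid-range f.
E, R, DT = 3_159, 150_000, 27      # gpm, gpm, deg F from the example above

f = E * 1000.0 / (R * DT)
print(round(f, 2))                 # 0.78, within the typical 0.65-0.85 band

print(round(0.75 * R * DT / 1000)) # ~3,038 gpm using a mid-range f of 0.75
```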

This example was taken at sea-level conditions. Conditions can be significantly different at higher elevations.

The Cooling Technology Institute (www.cti.org) offers more sophisticated programs (and much other extremely useful information) to perform cooling tower calculations.

Liquid-to-Gas Ratio

A very important factor with regard to cooling towers or other processes of this type, including wet flue gas scrubbers, is the liquid-to-gas ratio (L/G). This parameter can also be evaluated from Equation 1, where the enthalpy of the water streams is simply the heat capacity of the water multiplied by the temperature. Designating ma = G and mw = L, Equation 1 transforms to:

Cp*L3*t3 + G*ha1 = Cp*L4*t4 + G*ha2  Eq. 5

We know that L4 = L3 – G*(w2 – w1), and using some simplifying algebra, elimination of a negligible flow term, and the fact that t3 – t4 is the “Range” between inlet and outlet cooling water temperature, Equation 5 reduces to:

ha2 = ha1 + (L/G)*Range

Thus, it can be seen that the heat transfer is significantly influenced by the liquid-to-gas ratio. So, the more that liquid/gas interaction can be enhanced, the better the heat transfer properties.
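Using the numbers from the earlier worked example, the reduced relation can be sanity-checked; since the reduction drops the small evaporation term, it only approximately reproduces the chart value for the exiting air enthalpy.

```python
# Check of ha2 = ha1 + (L/G)*Range with the example numbers (Cp of water ~1 Btu/lb-F).
L = 150_000 * 8.34    # circulating water flow, lb/min (8.34 lb/gal assumed)
G = 1_248_000         # dry air flow from the earlier calculation, lb/min
ha1 = 24.6            # inlet air enthalpy, Btu/lb
cooling_range = 27.0  # range between warm and cooled water, deg F

ha2_estimate = ha1 + (L / G) * cooling_range
print(round(ha2_estimate, 1))  # ~51.7 Btu/lb vs. 52.5 Btu/lb from the psychrometric chart
```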

This explains the intensive past and continuing research into cooling tower fill design. Most towers now are equipped with some variety of film fill.

As the name film fill implies, the material induces the incoming return water to form a film that greatly increases its surface area. Critical to proper performance of film fill are correct design and maintenance of the water distribution system above the fill.

Also critical, and a subject that will be covered in a future article, is cooling water chemical treatment to prevent fill fouling, especially from microbiological colonies and silt. Not only will fouling inhibit heat transfer, it can lead to other operational problems as well.

Author

Brad Buecker is a senior process specialist in the Water Technologies group of Kiewit Engineering Group Inc.

Frame versus Aero: Who Wins in Simple Cycle? Mid to Large-size Combustion Turbines https://www.power-eng.com/gas/frame-versus-aero-who-wins-in-simple-cycle-mid-to-large-size-combustion-turbines/ Wed, 12 Jul 2017 19:16:00 +0000 By Craig S. Brooker, P.E.

Across the US power markets, there are needs for new peaking capacity. Some may value peaking differently; however, all markets value the main characteristics of a peaking facility: quick start/ramp rate, emissions compliance, flexible operation, and output. Historically, aeroderivative gas turbines have been the beneficiaries of the peaking market, given that their characteristics match its demands. With the advancements in frame machine technology and the projections of low gas prices, the default answer to fulfill peaking market needs may not be so clear cut anymore.

Traditionally, frame turbines have had poor ramp rates, long start times and a negative perception of maintenance due to starts-based charges. In addition to those deficiencies, an SCR on a frame engine in simple cycle has historically been more complex due to higher exhaust gas temperatures. The aggregate market impact has resulted in a peaker market dominated by aeroderivative turbines. However, given the advancement of frame gas turbines and SCR technology over the past decade, many of these shortfalls have been addressed. This includes faster start-up times, improved ramp rates and overcoming the technical challenges of combining a hot SCR with a simple cycle frame engine. Thus the frame machine has advanced such that it is technically feasible for the peaking market. More importantly, now that it can compete technically, the market may realize the significant capital cost advantage of a simple cycle frame turbine. This evolution challenges the conventional thought that a peaking plant equates to an aeroderivative engine.

This article aims to provide a snapshot comparison of frame versus aero gas turbine technology and how these engines have technically evolved over time.

PJM Demand Curve

I. Peaking Market Fundamentals

Regardless of what electricity market you are in, peaking plants are valued for their ability to be responsive and flexible to market demands. They meet those demands by using technologies that have low turn-down, fast start times and quick ramp rates. There are various reasons why these performance attributes are valued more so for peaking operations than baseload operations, ranging from specific market structures to a region’s generation mix. Peaking plants, which are traditionally gas turbine simple cycle plants, typically only operate during peak load times, which are seasonal and/or cyclical and therefore limit their operating hours.

Four years of data (2012-2015) for simple cycle generating units across the United States, obtained via Velocity Suite, shows that 90 percent of plants had an operating profile consisting of a capacity factor of 15 percent or less and 150 starts or less. The average capacity factor for a peaking plant results in low operating hours, as shown in Figure 1. With low operating hours, it is in the generator’s best interest to respond quickly and generate as many megawatt-hours as possible within that plant’s specific infrastructure constraints (i.e. gas supply, transmission, etc.) when dispatched, in order to capture the peak energy prices and maximize operating revenue from the energy market.

The ancillary services market (up/down regulation and spinning/non-spinning reserves) exemplifies the flexible and responsive characteristics of a peaking plant. Regulation helps to maintain grid stability by tightly controlling the system frequency to around 60 Hz, which means having to respond to rapid load changes that happen every few seconds. In the most stringent regions, reserves are required to have at most a 10-minute response time to increase their current generation level from either an online and synchronized status (spinning) or an offline and non-synchronized status (non-spinning). Figure 2 summarizes various ancillary services and their response times.

Ancillary Service Response Times

A region’s generation mix also plays a role when discussing peaking operation. For example, a region that has a high amount of renewables will require units that can respond quickly not only to the shoulder hours of solar generation, but also to the volatility of wind generation. One can observe the impacts of these differing operating characteristics in Figures 3 and 4.

Another market that peaking units participate in is the capacity market. Although this market does not value responsiveness and flexibility as much as the energy and ancillary services markets, it does provide a source of revenue which is directly tied to the capital cost of the plant. Plant capital costs will be discussed later in section IV.

II. GT Technology Evolution

One of the key factors in power plant performance is the technology around which that plant is designed and constructed with the key technology being the gas turbine. This section looks at general gas turbine attributes, both historic and recent.

Both simple cycle and combined cycle plants can showcase responsiveness and flexibility through inherent gas turbine technology and plant design. The primary focus of this article is on simple cycle plants and gas turbine technology. Historically, frame gas turbines have had attributes not conducive to peaking market applications. They have had long start times, poor turn-down, slow ramp rates and start penalties, an aspect not previously discussed. When compared with the aeroderivative gas turbines, there is a stark difference, as shown below. That difference has led to the automatic correlation between simple cycle peaking plants and aero engines.

CAISO Duck Curve

In addition to performance gaps between the technologies, the frame machines’ equivalent hours factor for starts was a large detriment to the engine for project evaluation.

Aeroderivatives had an advantage over frame engines when they were first introduced. Both technologies have realized improvements over the years; however, frames have realized greater improvements, which has eroded the long-held advantage of aeros.

Within the last 5-10 years, frame gas turbines have made significant technological advancements which have been mainly driven by the larger frame models and adopted into the smaller sized products. Just a few examples of improvements are blade tip clearances, thermal barrier coatings, combustors, blade design and manufacturing processes. These advancements have led to both performance improvements applicable to the peaking market and also removal of the equivalent hours starts penalty.

Another aspect to consider for a simple cycle gas turbine plant is emissions control. In today’s market it is highly likely that emission control technology will be required, mainly selective catalytic reduction (SCR) technology. The next section will provide a high level summary of historical applications of SCR technology.

CAISO Single Day Generation Profile

III. SCR Implementation Snapshot

Frame engines have been challenging for SCRs due to their high exhaust gas temperature and the potential impact on catalyst materials and life. That said, when looking at the projects that have given frame engine SCRs a bad reputation, the majority of the problems have arisen from improper installation and/or engineering design. Table 3 shows highlights of frame engine SCR applications.

Early on, complications arose due to catalyst manufacturing and plant engineering and construction issues. However, as catalyst technology has matured and design and installation practices have improved through experience, frame engine SCRs have demonstrated success.

Given the technology advancements of frame engines and the demonstration of successful SCR applications, the frame engine’s advantages can now be considered. The next section will show a levelized cost of electricity comparison between frame and aero engines.

IV. COE Comparison

Frame engines have always had a lower all-in capex than aero engines, even after including a more expensive SCR system, as shown in Tables 4 and 5. When operating a low number of hours, as a peaking plant does, capital cost becomes the dominant factor in determining a plant’s cost of electricity. As depicted in Table 4, the capital cost difference between the two technologies is considerable. In addition to the per unit basis advantage, there is also an absolute magnitude to the cost advantage, as shown in Table 5.

Although aero engines have an advantage when it comes to efficiency and Long Term Service Agreement (LTSA) costs, these factors become less of a contributor to the COE than the capital cost when considering low capacity factors and low fuel prices. It is widely expected that natural gas prices will stay low, and this will continue to counteract the efficiency advantage of the aero. In addition, with low capacity factors, the LTSA has at best a small impact on the overall COE (Figure 5).
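To illustrate why low capacity factors shift the comparison toward capital cost, consider the simplified screening calculation below. All of the input numbers are hypothetical placeholders chosen only to show the shape of the trade-off; they are not the figures from Tables 4 and 5 or Figure 5.

```python
# Simplified screening-level cost of electricity (COE), $/MWh.
# Capital recovery + fuel + variable O&M; fixed O&M and LTSA terms omitted.
def coe_per_mwh(capex_per_kw, fixed_charge_rate, heat_rate_btu_per_kwh,
                gas_price_per_mmbtu, vom_per_mwh, capacity_factor):
    hours = 8760 * capacity_factor
    capital = capex_per_kw * 1000 * fixed_charge_rate / hours  # $/MWh
    fuel = heat_rate_btu_per_kwh / 1000 * gas_price_per_mmbtu  # $/MWh
    return capital + fuel + vom_per_mwh

# Hypothetical inputs: the frame unit is cheaper per kW, the aero more efficient.
frame = dict(capex_per_kw=700,   heat_rate_btu_per_kwh=9_800, vom_per_mwh=4)
aero  = dict(capex_per_kw=1_100, heat_rate_btu_per_kwh=8_800, vom_per_mwh=6)

for cf in (0.05, 0.10, 0.15):
    f = coe_per_mwh(fixed_charge_rate=0.10, gas_price_per_mmbtu=3.0,
                    capacity_factor=cf, **frame)
    a = coe_per_mwh(fixed_charge_rate=0.10, gas_price_per_mmbtu=3.0,
                    capacity_factor=cf, **aero)
    print(f"CF {cf:.0%}: frame ~${f:.0f}/MWh, aero ~${a:.0f}/MWh")
# At peaking-range capacity factors the capital term dominates, so the lower
# capex unit wins despite its higher heat rate.
```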

Aero engines have long been the de facto technology solution for simple cycle peaking plants given their advantageous performance attributes for a peaking application. However, with modern frame gas turbines and the advancements they have realized over the years, the synonymy of aero engine and peaking application should be challenged. As this paper has outlined, frame engines have realized improvements for start times, ramp rates, turn down, emission levels and the LTSA basis. In addition to the gas turbine itself, the SCR technology and application have matured as demonstrated by the success of new builds and retrofits.

Each project has its unique challenges and characteristics that contribute to the eventual selection of a technology. Given the capital cost advantage and the ability of frame engines to now meet the needs of peaking applications across mid to large capacity ranges, these products should be seriously considered when determining technology selection.

Author

Craig S. Brooker is a market research analyst at Mitsubishi Hitachi Power Systems Americas.

CHP: Is it a Means to Enhance Grid Reliability and Promote Energy Sustainability? https://www.power-eng.com/coal/boilers/chp-is-it-a-means-to-enhance-grid-reliability-and-promote-energy-sustainability/ Wed, 12 Jul 2017 18:57:00 +0000 By Anthony J. Cirillo, P.E., MBA

Using a single primary fuel, combined heat and power (CHP) projects, often referred to as cogeneration projects, have been stalwart, self-serving suppliers of electricity and another thermal energy form to manufacturing industries for over a century. Owing to the immaturity of the electric power grid, early industrial CHP projects were often borne out of necessity rather than a drive toward energy efficiency. Throughout nearly two thirds of the 20th century, much of heavy industry remained vertically integrated and maintained control of key manufacturing inputs, including heat and power, within its fence line. Over this same period and beyond the fence line, the electric power grid expanded and evolved to form the backbone of today’s transmission system, one now referred to as aged, outdated, and of marginal reliability.

Spurred in large part by the energy crisis created by the Arab Oil Embargo of 1973, the U.S., through policy and legislation, made a foray into stimulating energy efficiency by broadly advocating cogeneration. Legislatively, this advocacy took the form of a bill, PURPA, the Public Utility Regulatory Policies Act, which became law in 1978. Among its other provisions, this bill required public utilities, the collective owners and operators of the grid, to purchase electric power from non-utility generators (NUGs) at avoided cost rates. The bill termed these NUGs QFs or IPPs, Qualifying Facilities and Independent Power Producers, respectively. PURPA was the catalyst for the electric power industry’s deregulation of generation; this deregulation, under different legislation, progressed to include modern day transmission deregulation. While PURPA of 1978 did little to alter the business practices and economics of the existing industrial cogenerators, it spawned a new breed of power generation owner: the power project developer. Unlike their predecessor industrial cogenerators, many such developed projects ‘manufactured’ electricity as their primary product while meeting the concurrent minimum, alternate thermal energy use requirement stipulated in the bill. These early QF projects, typically sub-80 MW, also received tax incentives to induce their implementation.

Chesapeake Utilities Corp.’s 20-MW Eight Flags Energy CHP plant provides retail electricity to 16,000 residents on Florida’s Amelia Island, while simultaneously delivering hot water and steam to manufacturing facilities operated by Rayonier Advanced Materials.

While PURPA projects proliferated, with many creative ventures developed albeit fewer coming to construction fruition, they nonetheless flourished despite what could be termed ‘grid resistance’. Such resistance was born of the natural aversion to ceding control of electric generation, which heretofore, along with transmission and distribution, had been purely a utility domain. This was coupled with the NUGs’ difficulty in negotiating an economically favorable Power Purchase Agreement (PPA) when they did not possess the data to assess the utility’s avoided cost. The PPA was not only an essential element of the project’s limited or non-recourse financed debt structure, but was the vehicle that enabled, de facto, interconnection access to the grid. As the initially cold, adversarial utility-NUG relationship thawed over two decades of generation deregulation, the next frontier [or battleground, depending upon your perspective], transmission deregulation, was forming. As a result, PURPA of 1978 was amended under Title XII, Subtitle E, of the Energy Policy Act of 2005.

Amendments to PURPA in this 2005 Act addressed, among other items, cogeneration facility access to the grid. Summarily, this amendment declared “…that no electric utility shall be required to enter into a new contract or obligation to purchase electric energy from a qualifying cogeneration facility or a qualifying small power production facility (qualifying facility)…” if any one of three (3) grid access conditions/circumstances existed as determined by the Federal Energy Regulatory Commission (FERC). With the grid’s maturation and structure into ISOs and RTOs, Independent System Operators and Regional Transmission Organizations, respectively, this law effectively relegated cogeneration facilities to exporting their power to ‘grids’ with either a captive consumer, e.g., government facilities, or a grid with no transparent, wholesale electric power market.

Existing CHP Compared to On-Site Technical Potential by Sector

Barring these power export scenarios, cogenerated power sales to the grid would be subject to a new and heretofore unknown set of transmission market rules and regulations. Given the nature of their design and operation, this hamstrung bringing private industrial cogeneration projects from behind the fence line and out onto the public power grid. The Renewable Energy provision of the 2005 Act, Title II, which gave preferential treatment, in the form of subsidies, federal purchases, and consumer rebates for defined renewable-based generation, exacerbated cogeneration’s plight, as cogeneration was not deemed a renewable energy form. This portion of the Act gave birth to many states adopting renewable portfolio standards (RPS), whereby a percentage, normally double digits, of a utility’s distributed power must come from renewables. Regardless of their intermittent and unpredictable nature, the renewables ‘free pass’:

  • Effectively exempted them from the transmission system’s evolving and ever-tightening market rules for firm capacity, that is, a reliable source of power;
  • Displaced generation, lowering the MW output of operating base-loaded units on the grid due to ‘must take’ renewables output;
  • Caused base-loaded operating thermal units to cycle and operate at reduced levels of efficiency; and
  • Further disadvantaged non-base load, non-quick start generation forms such as behind-the-fence CHP/cogeneration from getting, or staying, dispatched on the grid without undue risk [cost] exposure associated with compliance with reliability standards.

Section 2 of an August 2012 Executive Order, Accelerating Investment in Industrial Energy Efficiency, was intended to encourage industrial efforts to achieve a national goal of deploying 40 GW of new, cost-effective CHP in the U.S. by the end of 2020. Largely symbolic, the Order aimed to reduce, through CHP implementation, the energy consumed by the industrial sector, which accounts for roughly 30 percent of all energy consumed in the country. As stated in this Order, “Instead of burning fuel in an on-site boiler to produce thermal energy and also purchasing electricity from the grid, a manufacturing facility can use a CHP system to provide both types of energy in one energy efficient step”. The displacement of a manufacturing facility’s purchased electric power from the grid, in favor of self-generation, was this policy’s intended means of implementation. Unless demand-side management, that is, the removal of an industrial customer’s load from the grid during demand peaks, is considered an ‘anti- or nega-watt’ [as DSM agglomeration has a value to the grid, one might argue that monetary compensation for same is appropriate], nothing in this policy promoted or facilitated new, or the expansion of existing, cogeneration/CHP power or ancillary service sales to the grid. Also, to the extent that an industrial facility installed CHP or upgraded or expanded its existing CHP infrastructure, nothing in this Order sought to establish parity between renewables and cogenerators regarding the rules governing sales to the grid.

Energy Efficiency Advantage of CHP Compared to Traditional Energy Supply

In summary, CHP projects were historically first used widely in the absence of, and without being promoted by, legislation; their use was then encouraged by targeted legislation, discouraged from exporting power to the grid by incidental legislation, and ultimately promoted through symbolic policy gestures and executive orders. While not quite a full circle, CHP projects continue as a boutique industry and generally remain bound within the industrial plant's fence line.

Transmission Grid & Reliability

Beyond the industrial fence line, and as part of the evolution of utility deregulation, ISOs and RTOs were formed and/or formalized in structure to ostensibly allow for non-discriminatory grid access and pricing transparency between power generators and electric distribution companies. These transmission organizations, in compliance with FERC regulations, established quality standards and market rules for the maintenance, expansion, and operation of their territories' grids or bulk power supply (BPS) systems. While the rules, standards, and market structures differ between these organizations, of paramount concern to each is the reliable operation of its system. Reliable operation, as defined by the North American Electric Reliability Corporation (NERC), is "Operating the elements of the BPS within equipment and electric system thermal, voltage, and stability limits so that instability, uncontrolled separation, or cascading failures of such system will not occur as a result of a sudden disturbance, including a cybersecurity incident, or unanticipated failure of system elements". Explicit in this definition of reliable BPS operation are stability and failure avoidance. Implicit in the definition is the importance of system and stability restoration in the event of a failure. The importance of the latter, restoration, was illustrated following the massive cascading North American power outages of August 2003, September 2011, and October 2012. These occurred, respectively, in seven (7) northeastern states and the province of Ontario; in the southwest [CA and AZ] and northwestern Mexico; and in the northeast [primarily NJ, NY, DE, PA] due to Superstorm Sandy.

Fundamentally, BPS reliability has depended, and continues to depend, upon load following and regulation to balance generation to load. While load following involves matching generation to energy consumption trends at the macro level, regulation requires rapid adjustment around the underlying trend at the micro level. Collectively, these are ancillary services that the grid must provide or obtain. What has changed is that with market deregulation and restructuring, grid balancing is more precise. Additionally, both the energy markets [GENCOs] and the grids [TRANSCOs] acknowledge that there is a monetary value associated with these ancillary services, especially regulation, as it is a zero-energy service. As the ISOs/RTOs evolve and strive for transparency of equal grid access for generators, market rules for the pricing and supply of BPS and ancillary services continue to emerge and be refined.
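One way to visualize the macro/micro distinction is to split a net-load signal into a slow trend (load following) and the fast residual around that trend (regulation). The sketch below uses invented numbers and a simple moving average purely as an illustration of that split, not as a representation of any ISO's actual balancing method.

```python
# Toy decomposition of a net-load signal into a slow "load following" trend and a
# fast "regulation" residual. The load shape, noise, and window are illustrative.
import random

random.seed(0)
minutes = range(240)                                  # four hours of 1-minute samples
trend = [900.0 + 0.5 * t for t in minutes]            # slowly rising demand (MW)
load = [x + random.gauss(0, 8) for x in trend]        # plus minute-to-minute noise

def moving_average(series, window=30):
    """Simple centered moving average, used here as the load-following component."""
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - window // 2), min(len(series), i + window // 2 + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

load_following = moving_average(load)                          # macro-level trend
regulation = [a - b for a, b in zip(load, load_following)]     # micro-level residual

print(f"Load-following trend climbs {load_following[-1] - load_following[0]:.0f} MW; "
      f"regulation works within roughly +/- {max(abs(r) for r in regulation):.0f} MW of it")
```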

Existing CHP Capacity by State

CHP & THE GRID: PERFECT TOGETHER?

Unlike a conventional operating power plant, a CHP plant is not only capable of putting power onto the grid, but it is also capable of taking all or a portion of its load off the grid, a.k.a. load shedding. To this extent, during events such as grid peak demand periods and intra-nodal transmission line congestion, where the CHP plant is located between such nodes, grid stability and reliability may be explicitly enhanced by 'islanding' the plant. Correspondingly, this 'islanding' or micro-grid creation allows the mini-grid to operate independently of the larger power grid infrastructure and can possibly serve a black-start function during isolated or massive cascading BPS system failures. This duality of function makes the CHP plant an attractive and complementary asset for lending stability and reliability [resiliency] to the transmission grid. However, this flexibility, the improved thermal cycle efficiency of CHP plants over conventional power plants, and the 2012 Executive Order remain collectively insufficient to fuel a proliferation, or post-PURPA 1978 resurgence, of CHP projects in general, and specifically those of a large [> 25 MWe export capacity] nature. A major impediment is the protocol under which CHP projects must operate relative to the transmission grid, i.e., in front of the meter, as mandated by the Energy Policy Act of 2005.

Unlike conventional nuclear or fossil-fueled generators, and renewable generators like wind or solar, CHP plants also represent system load. Depending upon whether these are commercial [comCHP] or industrial [indCHP] loads, they can range from a few hundred kW to more than 100 MW. Based upon regulation of this load and the extent of self-generation and power export, CHP plants have the ability to enhance a grid's resiliency while contributing to energy sustainability. This uniqueness makes CHP plants a strong and attractive ally to the grid, as they are neither 'pure givers' nor 'pure takers' and can often function across both ends of this spectrum, seemingly making them a good/better partner in search of a 'more perfect' union with the grid and a more sustainable energy future. Yet this 'marriage' of CHP plants, termed 'prosumers' by a European electric industry trade association, and the grid rarely occurs.

ENABLING A MORE PERFECT UNION: WHAT CAN BE DONE?

CHP plants, and their power exports to the grid, are largely absent not because of an intrinsic flaw, but due to a combination of an unintended [assumed] legislative consequence and the immaturity and fragmentation of grid rules. The effective removal of the 'grid export dimension' from potential CHP projects has particularly hurt those of medium-to-large capacity, where energy sustainability can have the most impact. The Energy Policy Act of 2005, and its resultant implementation, put a double whammy on these plants and projects by effectively revoking their 'must take [buy]' status from the grid, subjecting them to unreasonable grid supply rules, and by exempting renewables from such supply rules while simultaneously granting their generation 'must take' status. This juxtaposition of CHP and renewables remains counter-intuitive to this country's drive toward energy efficiency, sustainability, and independence. The mutually exclusive nature of 'who gets the preferred terms' is unwarranted, as these two generation forms are complementary to energy policy and one does not have to come at the sacrifice of the other. From a reliability perspective, power from a CHP plant is under man's control, whereas wind and solar renewables produce at God's will. The improved predictability of the output of CHP plants versus that of renewables makes them inherently more dispatchable and grid 'friendly'. As CHP's grid conundrum is man-made, so too is the solution.

Notwithstanding the 'inside-the-fence' economic and environmental benefits of CHP, the use of the term 'solution' herein pertains to alleviating CHP's effective exclusion from the grid. Mitigation, or elimination, of this barrier will not only unlock CHP's potential to enhance the reliability of the grid but, because of such access, be a driver for CHP's renaissance. In general, the solution will follow a sequential path beginning with legislative bodies, moving through the regulators, and ending in the marketplace. Legislative bodies include lawmakers at the federal and state levels of government; regulators likewise include those at the same levels, such as the FERC and public utility commissions, respectively; and the marketplace includes energy project developers, the ISOs/RTOs, and the industrial and commercial CHP plant hosts and owners.

Existing Commercial CHP Sites by Business Type (2,567 sites)

Unlike the stand-alone, pre-packaged, and sized-to-a-standard renewable technologies associated with most solar and wind projects, CHP projects are unique and represent a nearly infinite amalgam of integrated equipment, fuels, alternate-use energy forms, and commodities. As a result, and while most would agree that CHP projects make good business sense, they have no clear, consistent voice with which to lobby government, as they are undertaken by a wide swath of sectors, including institutional, commercial, industrial, governmental, and energy project developers. Absent such external, public advocacy, government leaders, e.g., the President, Secretary of Energy, and members of Congress, must push the merits of CHP from the inside, with the result being law, not just policy. From there, regulators, as well as the marketplace, will take their cues, both sequentially and in parallel, to foster the development of CHP projects. This is illustrated by what has happened with wind and solar renewables at the residential/commercial and industrial/power levels, these representing behind-the-meter and in-front-of-the-meter transactions, respectively. In the latter transactions, they received a legislative 'free pass' by not having to guarantee capacity to a grid that must accept their power, and they also received federal subsidies through investment tax credits and energy production incentives. Regulators followed with the establishment of renewable performance standards (RPS), while energy project developers subsequently rushed to fill emerging RPS quotas with a proliferation of projects.

At the Federal, Legislative Level

While current legislation does not inhibit the creation of micro-grids at governmental facilities such as federal buildings and military bases, or at commercial- and institutional-scale facilities, CHP's full potential to contribute substantially to the grid, beyond the interior of the fence line and especially at the industrial-sector level, is hampered by an uneven playing field. The industrial sector, owing to its large heat and power [>25 MWe] needs, has the most to offer the grid in terms of resiliency. However, with few geographic exceptions, CHP has very limited ability to export its power due to select provisions of the Energy Policy Act of 2005. These provisions must be removed and CHP afforded the same, or equivalent, 'free passes', including any capital or operating subsidies, that renewables have received. Such actions would be consistent with both the legislative philosophy and intent of promoting cleaner, more sustainable energy forms, and with the more pragmatic realities of today's deregulated power markets and grids. Simply put, renewables such as wind and solar are unreliable but must nonetheless be taken by a grid mandated to be reliable, while true CHP, with its improved overall thermal efficiency, albeit not as clean, can reliably control both its load on the grid and its export of power to the grid. The term 'true CHP' is used to distinguish an industrial manufacturer's thermal energy generation from that of the historic QF. These QFs, or predecessor CHPs, were largely created to export power to a grid that had to take it while they searched for, if not fabricated, an alternate energy [e.g., chilled water, steam] user to comply with PURPA. Today, a more reasonable approach to leveling the playing field between renewables and CHP would be to pass legislation that, at a minimum, accounts for the thermal energy efficiency, MW output, and certainty of such generated output or load withdrawal to establish the quantity of MWs the grid must take and the costing terms associated with them. Conceptually, this approach would assign a value to each key parameter, e.g., MW quantity, supply certainty of the MWs, time-of-day MWs, and energy efficiency, to suitably balance the 'greenness' objectives with other objectives such as reliability, then combine them in a suitably weighted formula to develop the 'must take' qualifying MWs, as sketched below.
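A weighted-formula approach of this kind could look like the following sketch. The parameter names, weights, and 0-to-1 scoring scales are hypothetical placeholders rather than anything defined in statute or proposed here as the definitive method; the point is only to show how several attributes might be combined into a single 'must-take' MW figure.

```python
# Hypothetical weighted scoring of a CHP unit's "must-take" MW.
# Parameter names, weights, and 0-to-1 scales are illustrative assumptions only.

def must_take_mw(offered_mw, supply_certainty, peak_coincidence, thermal_efficiency,
                 weights=(0.4, 0.3, 0.3)):
    """Scale the offered MW by a weighted score of three attributes (each 0..1):
    supply_certainty   - likelihood the MW are delivered when scheduled
    peak_coincidence   - share of output available during peak, time-of-day hours
    thermal_efficiency - overall CHP efficiency, standing in for 'greenness'
    """
    w_cert, w_peak, w_eff = weights
    score = w_cert * supply_certainty + w_peak * peak_coincidence + w_eff * thermal_efficiency
    return offered_mw * score

# Example: a 60 MW export offer from a highly certain, efficient CHP unit.
qualified = must_take_mw(offered_mw=60.0, supply_certainty=0.95,
                         peak_coincidence=0.85, thermal_efficiency=0.75)
print(f"Qualifying must-take capacity: {qualified:.1f} MW of the 60 MW offered")
```

With the assumed weights, a 60 MW offer from a highly certain, efficient unit would qualify for roughly 52 MW of 'must-take' treatment; a less certain or less efficient offer would qualify for proportionally less.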

At the Government [State, Regulatory] and Grid Operator Level

State legislative and regulatory policies, and grid operators and regulators, essentially follow the lead of the federal government. There are several avenues that these parties may take to enable CHP to gain an equal footing with renewables in light of federal policies and rules. At the state regulatory and utility commission and grid [ISO/RTO] levels, these include the insertion of CHP technology into the ever-growing renewable performance standards (RPS) and the elimination or reduction of excessive standby power rates or tariff structures that function as barriers to grid entry. The grid could encourage 'dispatchability' and islanding by establishing a means of valuing time-of-day MW addition to [power export] and removal from [load shedding] the grid. This feature would capitalize on CHP's ability to function across the export-shed spectrum and give it the choice of moderating power flow across that spectrum based upon market price signals.
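Such a price-signal-driven choice could be expressed, in its simplest form, as a rule that maps a nodal price to a position on the export/load-shed spectrum. The price triggers, capacities, and on-site load in the sketch below are invented for illustration and do not reflect any actual tariff or market rule.

```python
# Toy price-signal rule for a CHP plant that can export power or shed its grid load.
# Price triggers, capacities, and the on-site load are hypothetical assumptions.

EXPORT_PRICE_TRIGGER = 60.0   # $/MWh above which exporting is attractive
ISLAND_PRICE_TRIGGER = 120.0  # $/MWh above which the plant also islands its own load

def chp_position(lmp, export_capacity_mw=25.0, onsite_load_mw=40.0):
    """Return (MW exported, MW of load removed from the grid) for a nodal price."""
    if lmp >= ISLAND_PRICE_TRIGGER:
        # Peak or congested conditions: island the site and export everything available.
        return export_capacity_mw, onsite_load_mw
    if lmp >= EXPORT_PRICE_TRIGGER:
        # Ordinary high prices: export, but keep drawing normal station/site service.
        return export_capacity_mw, 0.0
    # Low prices: stay behind the fence line.
    return 0.0, 0.0

for price in (35.0, 80.0, 150.0):
    export, shed = chp_position(price)
    print(f"LMP ${price:>6.2f}/MWh -> export {export:.0f} MW, grid load shed {shed:.0f} MW")
```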

At the Marketplace Level

From a marketplace perspective, many heavy industrial manufacturers, with or without the collaboration of an energy project developer, are effectively already in the CHP business. What they may lack, and what the energy project developer can bring to the table, is greater emphasis on the world beyond the industrial facility's fence line. Whether steam is exported or used within the 'gated community', most industrial facilities, and rightfully so, focus on 'steam-for-process' and view power generation, albeit often internally consumed, as a by-product. The energy project developer can bring technology that not only offers improved steam generation efficiencies but, given their power market expertise, can add value by availing CHP's power and energy components to the grid, a grid that, from a regulatory and rate-making perspective, puts CHP on par with, if not above, renewable forms that generate only at God's will. Genuine grid openness to CHP's pent-up capabilities will allow free-market forces to proliferate the application and integration of CHP into spot, short-, and long-term power markets, as well as the opportunity to provide ancillary services. The energy project developer, or power-market-savvy industrial manufacturer, can use 'optionality' and the sale of such services as elements of their economic/financial evaluation of a CHP project.

With such a view, and using a developer payback period of roughly seven years, the often-sought long-term Power Purchase Agreement may be unnecessary. Instead, retaining the option of selling into the spot market may provide the necessary revenue stream.
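A back-of-the-envelope payback screen of the sort a developer might run is sketched below. Every input, the capital cost, export capacity, capacity factor, spot price, and variable cost, is an invented assumption used only to show the arithmetic, not data from the article.

```python
# Rough simple-payback screen for a merchant CHP export stream.
# Every figure below is an illustrative assumption, not project data.

capital_cost = 22_000_000   # installed cost of the export-capable CHP addition ($)
export_mw = 25.0            # average power exported to the grid (MW)
capacity_factor = 0.85      # fraction of the year spent exporting
avg_spot_price = 45.0       # $/MWh realized in the spot market
variable_cost = 28.0        # $/MWh incremental fuel and O&M

annual_mwh = export_mw * capacity_factor * 8760
annual_margin = annual_mwh * (avg_spot_price - variable_cost)
simple_payback_years = capital_cost / annual_margin

print(f"Annual margin: ${annual_margin / 1e6:.1f} million, "
      f"simple payback: {simple_payback_years:.1f} years")
```

Under these assumptions the simple payback lands near the seven-year mark, suggesting that, at least in this illustrative case, merchant spot sales alone could carry the project.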

Conclusion

CHP plants, through load following, load shedding, and islanding, are readily able to enhance a grid's reliability by playing a role in failure avoidance, exclusive of a catastrophic or non-contingent event. Their intrinsic ability to 'island' themselves into a micro-grid also enables them to play a role in recovery in the event of a BPS system failure. Moreover, with grid access equality, CHP, unlike many renewable energy forms, can be propagated without government subsidies while bringing more efficient, sustainable power and ancillary services to the grid. The only government intervention needed is the removal of the self-inflicted restrictions and a leveling of the playing field with renewables.

Digitalization: Moving from Step Change to Transformation in Power Generation https://www.power-eng.com/renewables/digitalization-moving-from-step-change-to-transformation-in-power-generation/ Wed, 12 Jul 2017 18:47:00 +0000 /content/pe/en/articles/print/volume-121/issue-7/features/digitalization-moving-from-step-change-to-transformation-in-power-generation By Thomas Trepanier

You’ve no doubt seen or heard a lot of discussion these days about the terms “digitization” and “digitalization,” as well as some disagreement as to the difference between the two. Digitization seems to be the easier to define – basically taking an analog “thing” like a paper diagram or material safety data sheet (MSDS) and scanning it to make it digital. But, digitalization encompasses so much more than simply scanning paper documents.

Leading industry research firm Gartner defines digitalization in its IT Glossary as "the use of digital technologies to change a business model and provide new revenue and value-producing opportunities; it is the process of moving to a digital business." This description truly captures the essence of what digitalization is and why it is so important to the power generation industry – and so challenging.

It is one thing to scan a paper work order so that it can be transferred electronically. It is another proposition entirely to redefine workflows, plant- or fleet-wide processes, or entire business models because you have the ability to share critical information in a digital format. That is exactly the possibility that digitalization brings to the table.

This article digs deep into exactly how digitalization can fulfill critical needs in power plants today, as well as drive the evolution of plant operations into the future.

Digitalization helps get the right information to users (and workgroups) that need it – at precisely the time that they need it – to perform their tasks.

What is digitalization, and why do power plants need it?

In its most basic sense, digitalization is the evolution from manual and paper-based processes to digital processes, enabled by the integration of information and operational technology (IT/OT) and analytics. This evolution is not a new phenomenon; it began decades ago as plant owners began digitizing piping & instrumentation diagrams (P&IDs), safety procedures and numerous other critical documents to make them easier to store and retrieve.

As the Gartner definition implies, digitalization must inherently leverage technology to access the information and distribute it to the people and systems that need it. Furthermore, when analytics are applied to the data, the value grows as actionable insight can be extracted from the data and presented in a format that is easily digestible and can be used to make rapid, informed, and most importantly, accurate decisions.

In other words, it helps get the right information to users (and workgroups) that need it – at precisely the time that they need it to perform their tasks. As a result, digitalization can significantly decrease waste and increase wrench time for maintenance, operations and engineering functions. In addition, by enhancing the quality of data and inter-workgroup communications, digitalized operations can increase safety and reliability, reduce unexpected failure, and lead to higher capacity factors and operational excellence.

How does digitalization deliver value?

Old habits die hard, and the processes of the past can be difficult to break. Manual, paper-based processes have created silos of information, resulting in a lack of reliability from non-standard workflows and added costs due to errors and inefficient activities. This segmenting of information leads to gaps in information sharing and an inability to comprehensively analyze information. In turn, this has resulted in major component failures in transformers, large pumps and other critical equipment.

Digitalization, on the other hand, breaks down the silos separating plant-wide (and fleet-wide) information flows. Not only does this improve standardization of work processes; it gives the plants’ staff a better chance to analyze the information and predict failures before they happen.

Digitalization will help independent power producers provide flexible, fast-ramping generation to accommodate growing supplies of intermittent wind and solar power.

Digitalization also enables tools like enterprise asset management (EAM) software to bring more efficiency to the work cycle through capabilities like electronic work packages, which enable assets to be taken out of and put back into service more quickly, lowering out-of-service time and increasing efficiency of work planning and maintenance processes. According to analysis by DataGlance, and validated by a major utility in the U.S., enhanced efficiency tied to an EAM system with features like electronic work packages can lead to expected cost savings for a two-unit power plant of approximately $3.5 million per year.

Additionally, EAM software has evolved such that architecture improvements and cloud-enabled technology platforms can also reduce IT costs. In a recent report, "Cloud-Based Alternatives Are Changing the Enterprise Asset Management Market," Gartner estimates that 50 percent of all EAM deployments will be cloud-based by 2020, which is not surprising given the potential for such significant savings. By deploying EAM on a cloud platform, one major international nuclear fleet estimated 25-30 percent savings in IT costs alone, exclusive of additional operational efficiency cost savings, potentially amounting to millions of dollars.

How does IT/OT integration enable digitalization?

The integration of IT and OT is an enabler of the evolution from manual, paper-based processes to digital workflows, from which digitalization derives its value. This is especially true in utilities' asset management programs. In fact, ABB recently surveyed more than 200 utility executives from across the globe, and the results indicate that the vast majority (80 percent) believe that IT-OT integration is a key component of any effective asset management strategy. And 55 percent reported that the importance of asset management has increased over the past 12 months. It is no wonder they ranked its importance as they did. These same utility executives stated that the benefits of asset management empowered by IT/OT integration enable them to achieve their most critical priorities, including (on a scale from 1-5): better long-term planning (4.86), increased staff productivity (4.43), improved safety (3.98) and better use of capital (3.68), among others.

What does the future hold for digitalization?

Based on the evidence, digitalization is without question the trend for the power generation industry in the future. But, how exactly will power producers apply digitalization to capitalize on this trend? There are, of course, numerous variables that will impact the answer to this question. However, there are a few prominent trends that we can reasonably expect to continue, or even escalate.

First, as the cost of sensors continues to drop, the number and ubiquity of monitored equipment will continue to increase exponentially – contributing to the continued explosion of the Internet of Things (IoT) across multiple industries. In the research report, "Worldwide Internet of Things Forecast Update, 2016-2020," leading industry research firm IDC predicts that the worldwide installed base of IoT endpoints will grow at a rate of 16.1 percent through 2020 to more than 30 billion connections. Expect leading solution providers to the power generation industry to capitalize on this trend by delivering products that leverage the IoT (or the Industrial IoT, IIoT, for more asset-intensive industries).
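As a quick sanity check on that projection, compounding an installed base at 16.1 percent per year is a two-line calculation. The 2016 starting base used below is an assumed figure chosen for illustration; only the growth rate comes from the cited report.

```python
# Compound growth of an assumed IoT installed base at the cited 16.1% annual rate.
# The 2016 starting base is an illustrative assumption, not a figure from the report.
base_2016_billion = 16.5
growth_rate = 0.161
for year in range(2016, 2021):
    endpoints = base_2016_billion * (1 + growth_rate) ** (year - 2016)
    print(f"{year}: ~{endpoints:.1f} billion connected endpoints")
```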

This will prove to be a boon to power plants that leverage analytics to enable predictive maintenance and proactive replacement of assets at risk. In another report, "Business Strategy: The State of Digital Transformation in North America Power in 2016," IDC analyst John Villali outlined the potential benefits, predicting that independent power producers (IPPs) will pursue digitalization in the coming years. "IPPs can utilize digital technology when managing a power plant's maintenance cycle. Predictive analytics can help IPPs avoid forced outages and identify root causes of underperforming generation assets. Predictive technology sensors and monitoring tools developed by companies that specialize in handling large data sets and analyzing them for decision-ready results can give IPPs an edge over their competitors. In addition, real-time analysis of generation can provide insights into optimizing a number of generators under certain market conditions, which can reduce operating costs and increase a generation fleet's overall cumulative output."
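At its very simplest, the predictive-analytics idea reduces to flagging an asset whose monitored reading drifts well outside its recent history. The readings and the three-sigma rule below are invented for illustration; commercial predictive maintenance tools use far richer models, but the sketch shows the basic decision being automated.

```python
# Minimal drift check for one monitored asset reading (e.g., bearing vibration, mm/s).
# The baseline data and the three-sigma rule are invented for illustration only.
from statistics import mean, stdev

baseline = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 2.0, 2.2, 2.1]  # recent healthy readings
latest = 3.4                                                    # newest reading

mu, sigma = mean(baseline), stdev(baseline)
if abs(latest - mu) > 3 * sigma:
    print(f"ALERT: reading {latest} is {abs(latest - mu) / sigma:.1f} sigma from the "
          f"baseline ({mu:.2f} +/- {sigma:.2f}); schedule inspection before a forced outage.")
else:
    print("Reading is within its normal band.")
```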

In the report, Villali touches on another trend that power producers will leverage to further increase the benefits of digitalization as more distributed energy resources (DERs), such as renewables and microgrids, are pushed into the grid. He writes, “… new requirements are being created because of the increased amount of intermittent wind and solar capacity being added to the grid, which requires traditional fossil fuel generation to have quicker ramp-up and ramp-down times as well as additional spinning reserves. These capabilities are needed for generating supply to meet electric demand as market conditions constantly change.”

Villali uses the California Independent System Operator's (CAISO) plan for new requirements as an example of this trend. "In anticipation of steep changes up or down in demand due to the growing amount of DERs, CAISO is currently studying potential new requirements and reliability standards, which will be able to handle these large fluctuations in demand without disrupting electric supply. By 2020, CAISO anticipates supply needs of approximately 13,000 MW within a three-hour time span as the sun sets and solar generation decreases significantly during certain high-demand days. These types of real-time adjustments will be necessary for traditional fossil fuel generation producers to provide reliability and maintain competitiveness in a market that will require flexible fast-ramping units." Digitalization will help IPPs overcome this challenge.
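To put that figure in perspective, the implied average ramp rate follows directly from the two numbers quoted in the passage; the short calculation below uses only those values.

```python
# Average ramp rate implied by the CAISO figures quoted above.
supply_need_mw = 13_000   # additional supply needed (MW)
window_hours = 3          # over a three-hour evening window

ramp_mw_per_hour = supply_need_mw / window_hours
print(f"~{ramp_mw_per_hour:,.0f} MW of additional supply per hour, "
      f"or ~{ramp_mw_per_hour / 60:,.0f} MW per minute, as solar output falls off")
```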

The third and final trend that will impact the power generation industry is the continued demand for greater mobility. Mobility will be a driving force in efficiency for workers in the plant, greatly enhancing work management processes. For example, some plant operations management solutions offer mobile logbook capabilities, which can be used to eliminate the inefficiencies of paper-based logging activities. These mobile logbooks allow plant operators to transfer and receive log entries and associated plant data on their mobile devices. This enables more timely data capture throughout the plant at the time of occurrence, as well as access to key plant activities by enterprise-level decision makers. The results are enhanced performance and productivity, improved communications and efficiency, and greater safety and security.

Digitalization is disruptive!

Digitalization can be described in many different ways. One thing that is consistent in any discussion of digitalization, however, is that it is disruptive. If you are planning to digitalize your operations but are basing your plans on existing business processes, procedures and workflows, you may want to reconsider. True digitalization will significantly transform your operations! As digitalization of power plants makes more information available to personnel from the plant floor to the boardroom, not only will day-to-day operations be enhanced, but the ability to make critical business decisions, and to plan for the coming days, weeks, months, and even years, will also improve, creating a more productive and successful enterprise overall. It will bring about the kind of disruption we can all stand to see more of.

Author

Thomas Trepanier is Senior VP of the Enterprise Software product group at ABB. He has managed operations at power plants for nearly 30 years.

Industry, UCF team to fuel Florida’s “Turbine Turnpike” https://www.power-eng.com/gas/industry-ucf-team-to-fuel-florida-s-turbine-turnpike/ Wed, 12 Jul 2017 17:52:00 +0000 /content/pe/en/articles/print/volume-121/issue-7/departments/industry-watch/industry-ucf-team-to-fuel-florida-s-turbine-turnpike By Jay Kapat, Sc.D., professor of mechanical and aerospace engineering, University of Central Florida

Sunny Florida’s global reputation for world-class theme parks and beaches overshadows a thriving turbine design ecosystem that has become unparalleled in the industry.

Welcome to Florida’s “Turbine Turnpike,” powered in large part by partnerships between industry leaders and the University of Central Florida, which features 11,000 future engineers and a robust research faculty in one of America’s largest engineering programs.

Five of the six major original equipment manufacturers have a significant presence in Central and South Florida, and they are loosely linked by the Florida Turnpike. Both Siemens Power and Gas and Mitsubishi Hitachi Power Systems have headquarters in Orlando. PSM of Ansaldo Energia in Jupiter has been expanding steadily in recent years. Also, Pratt & Whitney has major operations in Palm Beach County for its aviation turbine engines, as does GE Aviation in the Tampa area.

Surrounding these five OEMs in Florida are a growing array of design support, analysis and retrofit companies, including Florida Turbine Technology, PowerPhase, Agilis, Belcan, Chromalloy, Ethos Energy, TTS Services and ETS Power. Combined, these companies employ 5,000 engineers, and more jobs are coming. For example, Doosan Heavy Industries recently opened offices in Palm Beach Gardens.

The OEMs and their supply chain place a significant demand for trained engineers and for local research and testing collaboration. Enter the 64,000-student University of Central Florida. Aviation Week hails UCF as supplying more graduates to aerospace and defense companies than any other university.

In 2005, a unique partnership was formalized between UCF and Siemens, with all efforts to center on a custom-built, dedicated laboratory at UCF called the Siemens Energy Center (SEC). Since then, continuous investment by Siemens – and constant interactions between Siemens engineers, managers and SEC staff, UCF students and professors – have bolstered turbomachinery and energy research at UCF.

In 2012, UCF’s College of Engineering and Computer Science created the Center for Advanced Turbomachinery and Energy Research (CATER) to perform research and student training in a system-focused approach to turbomachinery-based systems for power generation, aviation and space propulsion.

Currently engaged are 11 core faculty members along with 70 graduate and 60 undergraduate students.

Besides Siemens, CATER has established broad partnerships with Ansaldo Energia/Alstom and Aerojet Rocketdyne. Corresponding research covers aerodynamics, advanced cooling, combustion, advanced materials and coatings, mechanical and dynamic integrity and transient response. CATER's unique partnership model for industry includes: complete access control, with 24/7 unrestricted access for partner personnel; a complete firewall around laboratory activities, with regular audits by the industry partner; and complete protection of proprietary information and IP as outlined in a framework agreement.

This UCF approach to partnership with industry has led to key components of next-generation OEM products being tested or researched at CATER or by UCF. This effort has been greatly helped by the Florida High Tech Corridor program through matching industry funds for research. Several new courses, each co-taught by an industry technology expert, have been introduced; these cover practical, real-life examples and include hands-on and design components. These courses are being packaged into a Professional Science Master's in Energy Systems Engineering to debut in fall 2018. UCF's growing emphasis on turbomachinery research and related courses has significantly boosted job and internship opportunities for UCF engineering students at Florida companies. And several full-time employees at these companies pursue master's or doctoral degrees at UCF. In addition, Siemens is offering a number of Siemens Doctoral Fellowships at UCF, and GE has sponsored a GE GRC Doctoral Fellow at UCF.

Currently, CATER focuses on three key, multi-disciplinary initiatives: alternative cycles and fuels, including super-critical carbon dioxide power systems; digital twin platforms with new sensors and algorithms based on a stochastic approach; and new designs enabled by innovative manufacturing and materials.

The aviation and space industries in and around Florida are intertwined with the turbine industry.

Many of the segments and technical needs overlap. And, as the turbine engine design ecosystem in Central and South Florida expands, the “Turbine Turnpike” stands to keep Florida surging in the fast lane of the future.

The Additive Age https://www.power-eng.com/renewables/the-additive-age/ Wed, 12 Jul 2017 17:49:00 +0000 /content/pe/en/articles/print/volume-121/issue-7/departments/gas-generation/the-additive-age BY PAUL BROWNING, CEO AND PRESIDENT OF MITSUBISHI HITACHI POWER SYSTEMS – AMERICAS (A part of MITSUBISHI HEAVY INDUSTRIES Group)

Materials Science and Engineering has provided the chapter titles in the book of human history: The Stone Age, the Bronze Age, and the Iron Age. Although all fields of science and engineering are important, it is the advancements in materials science that have marked critical eras of human advancement. When I was studying Materials Science and Engineering as an undergraduate at Carnegie Mellon University, I wondered whether, hundreds or thousands of years from now, historians would say we are currently in the Silicon Age due to the transformational emergence of the silicon transistor chip. But in recent years, I’ve concluded the history books will label this the “Additive Age”.

I say this because it's actually a form of additive materials technology that enabled the modern silicon transistor chip. These chips are printed on large single crystal disks of silicon. When most materials turn from liquid to solid, the atoms arrange themselves into small "crystals"; within each crystal, the atoms share the same geometric orientation. Many of these small crystals grow together to form a solid material composed of many small crystals of varying orientations. The boundaries between these crystals can cause microscopic variations in important material properties, such as electrical conductivity.

The latest Thermal Barrier Coatings are applied layer by layer with a high velocity high temperature Plasma Spray process on the component surface.

About six decades ago, Materials Scientists discovered that, under very controlled conditions, they could create a "single crystal" by starting with a seed crystal of the desired orientation and then controlling solidification so that a larger crystal was formed by solidifying one layer of atoms at a time and "growing" the desired object. This allowed scientists and engineers to make single crystal wafers with atomically uniform electrical properties that were ideal substrates for very small transistors – thus the silicon chip was born. This depositing of one layer of atoms at a time was one of the earliest forms of additive manufacturing.

And as additive manufacturing enabled the computer chip, the computer chip enabled greater computational power, which allowed Materials Engineers to extend additive manufacturing not only to lithographic printing of ever smaller p-n junctions on single crystal silicon, but also to many other applications.

It’s been almost 30 years since I graduated from CMU, and I’m now president & CEO of a company that manufactures some of the largest, most fuel efficient gas turbines in the world. We make extensive use of additive manufacturing in the development of turbines. For example, we use the same single crystal technology that was developed for silicon chips to manufacture large turbine blades that are directionally solidified. This means we start with several seed crystals and grow them all in one direction so that the boundaries between them are all oriented along the major stress axis of the blade. In our case, we’re worried about the strength of those boundaries at a high temperature. By orienting the grain boundaries in this way, the life of our turbine blades at higher temperatures can be maximized. In addition, we use additive manufacturing to deposit ceramic coatings on many of the cooled components in our turbine. Through a process called plasma spraying, these coatings are deposited one layer at a time. They then act as insulating “thermal barriers” between the very hot gasses that flow through the turbine and the alloys the components are made from. These additive technologies are critical to improving fuel efficiency, which has enabled a dramatic reduction in carbon dioxide emissions in the latest generation of power plants versus the older coal-fired power plants they often replace.

More recently, we have begun to use additive technology to “print” components for gas turbines. Today, we’re able to print these same components in three dimensions, by using lasers to solidify powders, one layer at a time, in a complex three dimensional pattern. Using 3D printing, we can rapidly prototype new designs we want to test, and we can even print production-ready parts.

We now see additive technology expanding into many industries, and being used for a wide range of plastic, metallic and ceramic materials. And it’s all made possible by the original additive technology, which enabled the modern computer chip.

So welcome to the Additive Age of human history. In the coming years, we’ll see many new uses of additive technology.

 

Cybersecurity: The Power of Partnership https://www.power-eng.com/renewables/cybersecurity-the-power-of-partnership/ Wed, 12 Jul 2017 17:41:00 +0000 /content/pe/en/articles/print/volume-121/issue-7/departments/energy-matters/cybersecurity-the-power-of-partnership By Robynn Andracsek, PE, Burns & McDonnell and contributing editor

The FBI and the U.S. Government partner with private sector entities in a crucial effort to counter, fight, and defeat cyber threats and adversaries. As cyber threats continue to evolve, private industry, international partners and state, local and Federal agencies need to strengthen partnerships. To do so, all parties must be willing to share information and work together to enhance the nation's cybersecurity posture. From a Federal law enforcement perspective, the FBI is working to advance its relationships with private industry to address the ever-changing cyber threats posed by global adversaries. Executive Order 13636, "Improving Critical Infrastructure Cybersecurity," mandated that government agencies enhance the ways in which they share information with private industry. Although the FBI has worked with the private sector throughout its history, the EO provided the bureau with an opportunity to look at new and innovative ways to interact with industry.

One new initiative the FBI has pursued is its Chief Information Security Officer (CISO) Academy. Hosted at the FBI Academy, this three-day training program provides executive managers from private industry an opportunity to understand the roles and responsibilities of the FBI and its federal partners in the cyber arena. The third CISO Academy was held in March 2017, and included 40 participants from across industry, including Energy, who stayed on the Academy grounds and had a taste of what new agent trainees experience. Participants interacted with subject matter experts to discuss cyber threats, legal policies, law enforcement processes and the importance of intelligence sharing between the government and private industry.

Stacy Stevens, Unit Chief of the Mission Critical Engagement Unit inside the FBI's Cyber Division, says that "developing trusted partnerships prior to a cyber-attack helps put missing pieces of an investigative puzzle together." Programs such as the CISO Academy allow industry to see the inner workings of the FBI and gain a better understanding of how the government works to address cyber threats. Attendees of the class are often critical of the government at the beginning of the program, but as they gain a better understanding of how it works and how each of their colleagues from other sectors work, they become more supportive of proactively collaborating to address cyber threats. These partnerships provide opportunities to interact prior to a cyber-attack and enhance measures to mitigate threats. An example is an Action Campaign focused on the recent power outages in Eastern Europe. The FBI, along with federal partners from the Department of Homeland Security (DHS) and the Department of Energy (DOE), used in-person briefs and webinars to provide information to the Energy Sector regarding the power outages.

Highlights of the most recent CISO Academy included a discussion on the Internet of Things (IoT) and the challenges surrounding ransomware. IoT examples include connected devices such as medical devices, televisions, Wi-Fi routers, smart hubs, vehicles, thermostats, door locks, light switches, manufacturing equipment, security cameras, universal remotes, kitchen appliances, fitness trackers, and sprinkler systems. IoT devices now have the capability to collect health data, location, energy consumption, dietary habits, entertainment preferences, and other intimate details of daily living. This type of information is valuable to hackers and third party individuals. The networked nature of IoT creates many attack surfaces that can be exploited and increases the potential for a data breach.

Anyone can be a victim of ransomware, such as the May 2017 WannaCry attack that shut down work at 16 hospitals across the United Kingdom. To raise awareness of the ransomware threat, in the fall of 2016 the FBI collaborated with the U.S. Secret Service and the National Council of Information Sharing and Analysis Centers to provide ransomware briefings in 39 cities across the country. Additionally, the FBI sends out Private Industry Notification reports, which provide contextual threat information regarding a cyber threat, and FBI Liaison Alert System reports, which provide technical indicators surrounding specific threats.

The overarching message is that the first time private industry, such as a utility company, interacts with the government should not be during a cyber-attack. The FBI has 56 field offices across the United States and Legal Attachés throughout the world. Each utility should foster a proactive relationship with its local FBI field office and include federal law enforcement in any incident response plan it develops in anticipation of an attack. These interactive programs hosted by the FBI can reach only a few people and companies. Developing relationships with the Federal government, including the FBI, establishes a link for utility companies to receive timely information, which may assist in mitigating threats before a power outage occurs. Find the field office closest to your facility at www.fbi.gov/contact-us/field-offices.
